Advanced Topics

Batch Submission Patterns

Use import jobs for large JSON and CSV submissions while preserving row-level visibility.

Overview

Bulk import endpoints process JSON and CSV rows asynchronously and report an outcome for each row. Use them for campaigns that are too large for one-by-one publishing and that need recoverable, per-row visibility into results.

What You Will Learn

  • How to choose direct publish vs async import jobs.
  • Required row fields and CSV contract basics (`account_id`, `board_id`, `title`, `idempotency_key`).
  • How import jobs report `created`, `existing`, and `failed` results per row.

Implementation Checklist

  • Use JSON imports when your upstream system already has normalized payload objects.
  • Use CSV imports when teams operate campaigns in spreadsheets.
  • Keep batch size within import limits (maximum 500 rows per import request).
  • Generate deterministic `idempotency_key` values from stable source IDs.
  • For scheduled rows, include `run_at` with timezone offset (for example UTC `Z`).
  • Monitor job status and retry only failed rows with corrected payloads.
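The batch-size guidance above can be sketched as a small chunking helper. This is a minimal sketch, assuming the 500-row limit stated in this guide; `chunk_rows` is a hypothetical helper name, not part of the API.

```python
# Split a list of row payloads into batches that respect the
# documented maximum of 500 rows per import request.
MAX_ROWS_PER_IMPORT = 500

def chunk_rows(rows, limit=MAX_ROWS_PER_IMPORT):
    """Yield successive batches of at most `limit` rows."""
    for start in range(0, len(rows), limit):
        yield rows[start:start + limit]

batches = list(chunk_rows([{"title": f"Row {i}"} for i in range(1200)]))
# 1200 rows split into batches of 500, 500, and 200.
```

Submit each batch as its own import request and record the returned job IDs so every batch can be monitored independently.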

Deep Dive

1) Choosing direct publish vs import jobs

Use direct `POST /v1/pins` for single actions from user interactions. Use import jobs when you need throughput, auditability, and row-level retries across many items.

  • Direct publish: best for one-off UI actions or immediate operational tasks.
  • Import jobs: best for campaign batches, spreadsheet uploads, and queued processing.
  • Import rows can create immediate pins or scheduled publishes using `run_at`.
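A scheduled import row from the list above can be built like this. This is a sketch only; the placeholder values mirror the example later in this guide, and the field names come from the CSV contract section.

```python
from datetime import datetime, timezone

# Build one import row that schedules publication via `run_at`
# instead of creating the pin immediately. `run_at` must carry a
# timezone offset; UTC "Z" is the simplest choice.
run_at = datetime(2026, 3, 10, 10, 0, tzinfo=timezone.utc)

scheduled_row = {
    "account_id": "<ACCOUNT_ID>",   # placeholders, as in the guide
    "board_id": "<BOARD_ID>",
    "title": "Scheduled launch row",
    "idempotency_key": "launch-sched-1",
    "run_at": run_at.isoformat().replace("+00:00", "Z"),
}
# scheduled_row["run_at"] is "2026-03-10T10:00:00Z"
```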

2) Idempotency strategy that survives retries

Generate deterministic `idempotency_key` values from stable source identifiers. Never use random keys for re-runs; use the same key for the same business row.

  • Good key pattern: `<campaign_id>-<source_row_id>`.
  • Re-running a row with the same key yields `existing` instead of duplicate creation.
  • When retrying a failed row for the same item, keep the same key even if the payload was corrected; the key identifies the business row, not the payload.
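The key pattern above reduces to a one-line derivation. A minimal sketch, assuming stable `campaign_id` and `source_row_id` identifiers from your upstream system; `idempotency_key` here is a hypothetical helper name.

```python
def idempotency_key(campaign_id: str, source_row_id: str) -> str:
    """Derive a deterministic key from stable source identifiers.

    The same business row always yields the same key, so re-running
    an import reports `existing` instead of creating a duplicate.
    """
    return f"{campaign_id}-{source_row_id}"

key_first_run = idempotency_key("spring-launch", "row-0042")
key_retry = idempotency_key("spring-launch", "row-0042")
# Both runs produce "spring-launch-row-0042".
```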

3) Monitoring and retrying with row-level outcomes

Treat import jobs as asynchronous processing pipelines. Poll job status and inspect row results to determine exactly what to retry.

  • Track counters: `created_rows`, `existing_rows`, and `failed_rows`.
  • Inspect each row result (`created` | `existing` | `failed`) and error details.
  • Retry only failed rows after fixing payload issues; avoid replaying the full batch when unnecessary.
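The retry flow above can be sketched as a filter over the job's row results. The result shape below is illustrative: the counters match this guide, but the per-row field names (`status`, `error`) are assumptions about the response, and `rows_to_retry` is a hypothetical helper.

```python
# Illustrative job result: counters from this guide plus assumed
# per-row fields carrying the source row's idempotency_key.
job_result = {
    "created_rows": 1,
    "existing_rows": 1,
    "failed_rows": 1,
    "rows": [
        {"idempotency_key": "launch-1", "status": "created"},
        {"idempotency_key": "launch-2", "status": "existing"},
        {"idempotency_key": "launch-3", "status": "failed",
         "error": "board_id not found"},
    ],
}

def rows_to_retry(result):
    """Return only the failed rows so the retry batch stays minimal."""
    return [row for row in result["rows"] if row["status"] == "failed"]

retry_batch = rows_to_retry(job_result)
# Only "launch-3" needs a corrected payload and resubmission.
```

Resubmitting just `retry_batch` with the same keys avoids replaying rows that already succeeded.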

CSV contract essentials

CSV imports require a valid header and specific required columns. Keep column names clean and values normalized.

  • Required columns: `account_id`, `board_id`, `title`, `idempotency_key`.
  • Maximum batch size: 500 rows per import request.
  • For scheduling through CSV, include timezone-aware `run_at` timestamps.
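A conforming CSV body with the required columns can be built like this. A minimal sketch: the header uses the required columns from this guide plus an optional `run_at` column for scheduled rows; the placeholder values are illustrative.

```python
import csv
import io

# Build a minimal CSV payload: required columns first, then the
# optional `run_at` column (left empty for immediate rows).
header = ["account_id", "board_id", "title", "idempotency_key", "run_at"]
rows = [
    ["<ACCOUNT_ID>", "<BOARD_ID>", "Launch row 1", "launch-1", ""],
    ["<ACCOUNT_ID>", "<BOARD_ID>", "Launch row 2", "launch-2",
     "2026-03-10T10:00:00Z"],
]

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(header)
writer.writerows(rows)
csv_body = buffer.getvalue()
```

Using the `csv` module rather than manual string joins keeps quoting and escaping correct when titles contain commas or quotes.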

Relevant Endpoints

  • `POST /v1/pins/imports/json`: Submit a JSON array of rows for async processing.
  • `POST /v1/pins/imports/csv`: Upload a CSV file for async import processing.
  • `GET /v1/pins/imports`: List import jobs for the active workspace.
  • `GET /v1/pins/imports/{job_id}`: Inspect counters and row-level results for one job.

Example

curl -X POST "$API_BASE_URL/v1/pins/imports/json" \
  -H "Authorization: Bearer <ACCESS_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '[
    {
      "account_id": "<ACCOUNT_ID>",
      "board_id": "<BOARD_ID>",
      "title": "Launch row 1",
      "image_url": "https://example.com/image-1.jpg",
      "idempotency_key": "launch-1"
    },
    {
      "account_id": "<ACCOUNT_ID>",
      "board_id": "<BOARD_ID>",
      "title": "Launch row 2",
      "image_url": "https://example.com/image-2.jpg",
      "idempotency_key": "launch-2",
      "run_at": "2026-03-10T10:00:00Z"
    }
  ]'
