Idempotency
Safely retry network blips without double-sending. Stripe-style Idempotency-Key with a pending lock for concurrent retries.
POSTs to mutating endpoints can be safely retried by passing an
Idempotency-Key header. Same key + same body returns the cached
response; same key + different body returns 422; concurrent retries
serialize on the first one's outcome.
curl -X POST https://api.letssign.now/v1/signing-requests \
-H "Authorization: Bearer $LSK_KEY" \
-H "Idempotency-Key: txn-7d4f3c-2026-04-30" \
-F "file=@contract.pdf" \
-F 'signers=[{"email":"…","role":"…"}]'

When to use it
Use it on every POST that creates or sends something — i.e.
POST /v1/signing-requests and the
manage endpoints (/remind, /withdraw). GETs are naturally
idempotent and don't need a key.
The classic deliverability story: your worker calls our API, the HTTP request hits a transient network error mid-response, the worker retries — and now the customer gets two signing emails. With an idempotency key, the retry replays our cached response instead.
How it works
The full lifecycle of a key:
First call
We compute hash(file_bytes + form_fields) for the body and INSERT a
pending row keyed by (workspaceId, key). Then we run the request.
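In TypeScript terms, the body fingerprint might be sketched like this (a sketch only — the function and parameter names are illustrative, not our internal code):

```typescript
import { createHash } from 'node:crypto'

// Sketch of a request fingerprint: hash the raw file bytes together with
// the canonicalised form fields. Sorting the keys means field order can
// never change the hash for a semantically identical request.
function requestFingerprint(
  fileBytes: Uint8Array,
  formFields: Record<string, string>,
): string {
  const h = createHash('sha256')
  h.update(fileBytes)
  for (const key of Object.keys(formFields).sort()) {
    h.update(`${key}=${formFields[key]}`)
  }
  return h.digest('hex')
}
```

The same fingerprint is what later backs the "same key + different body" check.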
Success → cache
We update the row to status='success' with the response status +
body. Subsequent retries with the same key + body replay the cached
response identically — same JSON, same status code, plus an
Idempotent-Replayed: true header.
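That header makes replays observable client-side. A minimal sketch of checking it (`wasReplayed` is a name of our own invention; the header is the real one):

```typescript
// True when the response was served from the idempotency cache rather
// than executed fresh — useful for logging how often retries fire.
function wasReplayed(res: Response): boolean {
  return res.headers.get('idempotent-replayed') === 'true'
}
```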
Concurrent retry while still in flight
A second call with the same key arrives before the first finishes. We
return 409 idempotency_in_progress with a Retry-After: 60
header. The caller waits, then retries; by then the first call has
finished and the cache replays its response.
Same key, different body
You sent two semantically different requests with the same key. We
return 422 idempotency_key_reuse so the misroute is loud
instead of silent — generate a fresh key per logical request.
Stale lock recovery
If the first call's worker crashed mid-flight and never finished, the pending row sits there. After 60 s the next retry takes over — clears the lock, runs the request, finalises the cache. No human intervention.
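The recovery rule reduces to a TTL check on the lock timestamp. A sketch, assuming the 60 s window above (`lockIsStale` is an illustrative name):

```typescript
const LOCK_TTL_MS = 60_000 // matches the 60 s Retry-After window

// A pending row whose lock is older than the TTL is treated as abandoned:
// the next retry clears it, runs the request, and finalises the cache.
function lockIsStale(lockedAt: Date, now: Date = new Date()): boolean {
  return now.getTime() - lockedAt.getTime() > LOCK_TTL_MS
}
```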
Key format
Up to 255 ASCII-printable characters. Anything goes: UUIDs, ULIDs, content hashes, or your system's own request IDs. Recommended:
- Stable across retries — generate ONCE per logical request, use the same value for every retry attempt.
- Unique per logical request — never reuse a key for a new send, even if it's the "same content".
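One pattern that satisfies both rules is to mint the key once when the job is enqueued and persist it alongside the job, so every retry attempt reads the same value. A sketch with hypothetical names (`QueuedSend` and `ensureKey` are not part of this API):

```typescript
import { randomUUID } from 'node:crypto'

// Hypothetical queue item in your system; the key is stored with the job
// so a crashed-and-restarted worker reuses it instead of minting a new one.
interface QueuedSend {
  id: string
  idempotencyKey?: string
}

function ensureKey(job: QueuedSend): string {
  // Generated ONCE per logical request, stable across every retry attempt.
  job.idempotencyKey ??= `send-${job.id}-${randomUUID()}`
  return job.idempotencyKey
}
```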
Lifetime
Cached rows live for 24 hours. After that they're pruned by the webhook-retries cron, and a retry past 24 h runs the request fresh. If you're building a long-running queue, refresh keys for items that may sit longer than a day.
What's stored
The cache stores: response status, response body, request hash, and a
locked_at timestamp. We do not store the request body itself —
just its hash, for the "same key + different body" check.
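As a sketch, the row described above might look like this in TypeScript (field names are illustrative, not our schema):

```typescript
interface IdempotencyRow {
  workspaceId: string            // half of the (workspaceId, key) unique index
  key: string
  status: 'pending' | 'success'
  requestHash: string            // hash(file_bytes + form_fields), never the body itself
  responseStatus?: number        // set once the first call succeeds
  responseBody?: string
  lockedAt: Date                 // stale after 60 s → next retry takes over
}

// Example pending row, as written on the first call (values hypothetical):
const row: IdempotencyRow = {
  workspaceId: 'ws_123',
  key: 'txn-7d4f3c-2026-04-30',
  status: 'pending',
  requestHash: 'deadbeef',
  lockedAt: new Date(),
}
```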
Retry-aware client
async function postWithRetry(opts: { url: string; body: FormData; key: string }) {
  for (let attempt = 1; attempt <= 5; attempt++) {
    const res = await fetch(opts.url, {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${process.env.LSK_KEY!}`,
        'Idempotency-Key': opts.key,
      },
      body: opts.body,
    })
    if (res.status === 409) {
      // idempotency_in_progress — backoff per Retry-After
      const wait = Number(res.headers.get('retry-after')) || 5
      await new Promise(r => setTimeout(r, wait * 1000))
      continue
    }
    if (res.status === 429) {
      // rate_limited — backoff per Retry-After
      const wait = Number(res.headers.get('retry-after')) || 60
      await new Promise(r => setTimeout(r, wait * 1000))
      continue
    }
    return res // success or non-retryable error
  }
  throw new Error('Exhausted idempotent retries')
}