Rate Limits & Quotas
API rate limits, quotas, and how to handle throttling gracefully.
Rate Limit Tiers
DUAL offers three subscription tiers with different rate limits and quotas:
| Tier | Requests/Minute | Requests/Day | Cost |
|---|---|---|---|
| Free | 100 | 1,000 | Free |
| Developer | 500 | 50,000 | $99/month |
| Enterprise | 5,000 | Unlimited | Custom |
These limits apply per organization. If you approach your limit, contact support to discuss upgrading or optimizing your usage.
Per-Endpoint Limits
Some endpoints have additional limits to prevent abuse and ensure fair resource allocation:
- Object mutations (POST /objects, PATCH /objects/:id) — 50 requests/minute
- Batch actions (POST /ebus/actions/batch) — 10 requests/minute
- File uploads (POST /files) — 20 requests/minute
- Template creation (POST /templates) — 10 requests/minute
- Read endpoints (GET /objects, /templates, /batches) — 1,000 requests/minute
Read endpoints have generous limits because they don't modify state. Plan write-heavy operations to stay within mutation limits.
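One way to stay within a mutation limit is to throttle on the client before the server does it for you. The sketch below is a minimal client-side token bucket, assuming the 50 requests/minute object-mutation limit from the list above; the class and its parameters are illustrative, not part of the DUAL SDK.

```typescript
// Minimal client-side token bucket. Capacity and refill rate are chosen
// to mirror the documented 50 req/min mutation limit (an assumption about
// how you want to pace clients, not an official SDK feature).
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(private capacity: number, private refillPerMs: number) {
    this.tokens = capacity;
    this.lastRefill = Date.now();
  }

  // Returns true and consumes a token if a request may be sent now.
  tryAcquire(): boolean {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + (now - this.lastRefill) * this.refillPerMs
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// 50 requests per minute ≈ 50 tokens per 60,000 ms.
const mutationLimiter = new TokenBucket(50, 50 / 60_000);
```

Gate each POST/PATCH call on `mutationLimiter.tryAcquire()` and queue or delay the request when it returns false.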
Rate Limit Headers
All responses include headers showing your current rate limit status:
```
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 42
X-RateLimit-Reset: 1678886400
```
- X-RateLimit-Limit — Your limit for this window (minute or day)
- X-RateLimit-Remaining — Requests remaining in this window
- X-RateLimit-Reset — Unix timestamp when the limit resets
Monitor X-RateLimit-Remaining in your client code. When it approaches 0, pause and wait for the reset.
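As a sketch of that monitoring, the helper below parses the three headers from a fetch Response and pauses until the reset when few requests remain. The header names come from the docs above; the RateLimitStatus shape and the threshold of 5 are illustrative choices.

```typescript
// Parse the documented rate-limit headers from a fetch Response.
interface RateLimitStatus {
  limit: number;      // X-RateLimit-Limit
  remaining: number;  // X-RateLimit-Remaining
  resetAt: Date;      // X-RateLimit-Reset (Unix seconds -> Date)
}

function parseRateLimit(headers: Headers): RateLimitStatus {
  return {
    limit: Number(headers.get('X-RateLimit-Limit') ?? 0),
    remaining: Number(headers.get('X-RateLimit-Remaining') ?? 0),
    resetAt: new Date(Number(headers.get('X-RateLimit-Reset') ?? 0) * 1000),
  };
}

// Pause until the window resets once fewer than `threshold` requests remain.
async function throttleIfNeeded(status: RateLimitStatus, threshold = 5): Promise<void> {
  if (status.remaining < threshold) {
    const waitMs = Math.max(0, status.resetAt.getTime() - Date.now());
    await new Promise(resolve => setTimeout(resolve, waitMs));
  }
}
```

Call `parseRateLimit(response.headers)` after each request and `throttleIfNeeded` before the next one.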
Handling 429 Responses
When you exceed the rate limit, the API responds with HTTP 429 (Too Many Requests) and includes a Retry-After header indicating how many seconds to wait before retrying.
Exponential Backoff with Jitter
Implement retry logic with exponential backoff to handle rate limits gracefully:
```typescript
async function requestWithRetry<T>(
  fn: () => Promise<T>,
  maxRetries: number = 5
): Promise<T> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error: any) {
      if (error.status === 429) {
        // Honor Retry-After if present, then back off exponentially with jitter.
        const retryAfter = parseInt(
          error.headers?.['retry-after'] || '60',
          10
        );
        const jitter = Math.random() * 1000;
        const delay = retryAfter * 1000 * Math.pow(2, attempt) + jitter;
        console.log(`Rate limited. Retrying after ${delay}ms...`);
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      throw error;
    }
  }
  throw new Error('Max retries exceeded');
}
```
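A hedged usage sketch for the retry helper: the base URL, the DUAL_API_KEY placeholder, and the thrown error shape below are illustrative assumptions; the only real requirement is that 429 failures carry a status and a headers map the retry logic can read.

```typescript
const DUAL_API_KEY = 'YOUR_API_KEY'; // placeholder; load from your secret store

// Illustrative wrapper around a GET endpoint; the URL is a stand-in.
async function getObject(id: string): Promise<any> {
  const res = await fetch(`https://api.example.com/objects/${id}`, {
    headers: { Authorization: `Bearer ${DUAL_API_KEY}` },
  });
  if (!res.ok) {
    // Attach status and headers so requestWithRetry can read Retry-After.
    throw Object.assign(new Error(`HTTP ${res.status}`), {
      status: res.status,
      headers: Object.fromEntries(res.headers.entries()),
    });
  }
  return res.json();
}

// const object = await requestWithRetry(() => getObject('obj_123'));
```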
Batch Operations
Use batch endpoints to reduce request count and improve throughput:
- POST /ebus/actions/batch — Execute up to 500 actions in a single request (counts as 1 request against limit, not 500)
- POST /objects/search — Query multiple objects with filters in one request
- POST /templates/{id}/variations — Create multiple variations at once
Batching is the most efficient way to handle bulk operations. A batch of 500 actions counts as 1 API call, not 500.
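When you have more than 500 actions, you still need multiple batch requests. This sketch splits an action list into chunks of at most 500 (the documented maximum) so each chunk can be sent to POST /ebus/actions/batch; the function name is illustrative.

```typescript
// Split a large action list into batches of at most 500 actions each,
// matching the documented maximum batch size.
function chunkActions<T>(actions: T[], batchSize = 500): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < actions.length; i += batchSize) {
    batches.push(actions.slice(i, i + batchSize));
  }
  return batches;
}
```

For example, 1,200 actions become three batches (500 + 500 + 200), so three API calls instead of 1,200.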
Sequencer Quotas
Beyond API rate limits, the Sequencer has its own quotas:
- Maximum batch size — 500 actions per batch
- Checkpoint frequency — Batches are anchored on-chain every 15 minutes
- Max object size — 1 MB per object (total property size)
- Max action history — Last 1,000 actions per object
These quotas are per-organization. If you need higher limits, contact support.
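A simple client-side guard against the 1 MB per-object limit can catch oversized payloads before they hit the API. The sketch below measures the UTF-8 byte length of the serialized properties; whether the server measures size exactly this way is an assumption.

```typescript
// Documented quota: 1 MB of total property size per object.
const MAX_OBJECT_BYTES = 1024 * 1024;

// Assumption: size is approximated as the UTF-8 byte length of the
// JSON-serialized properties. Validate locally before sending.
function withinObjectSizeLimit(properties: unknown): boolean {
  const bytes = new TextEncoder().encode(JSON.stringify(properties)).length;
  return bytes <= MAX_OBJECT_BYTES;
}
```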
Optimization Tips
Reduce API usage and stay within limits with these strategies:
- Cache template definitions. Templates rarely change; fetch once and cache locally.
- Use webhooks instead of polling. React to events via webhooks rather than continuously querying for changes.
- Paginate list requests. Fetch 100 objects at a time, not 10,000 in one request. Use the limit and offset parameters.
- Batch related operations. Instead of creating 100 objects individually, use batch mode to collapse them into a single request (batches hold up to 500 actions).
- Index frequently-queried fields. Work with support to add indexes on properties you search often.
- Monitor your usage. Check the dashboard daily to see request trends and plan capacity.
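The pagination tip above can be sketched as a loop over limit/offset pages. The page-fetching function is injected so the paging logic stays testable; the response shape ({ data, hasMore }) is an assumption about the list endpoints, not a documented contract.

```typescript
// Assumed page shape for list endpoints such as GET /objects.
interface Page<T> {
  data: T[];
  hasMore: boolean;
}

type FetchPage<T> = (limit: number, offset: number) => Promise<Page<T>>;

// Walk every page at a fixed page size (100 here, per the tip above)
// and accumulate the results.
async function fetchAll<T>(fetchPage: FetchPage<T>, limit = 100): Promise<T[]> {
  const all: T[] = [];
  let offset = 0;
  while (true) {
    const page = await fetchPage(limit, offset);
    all.push(...page.data);
    if (!page.hasMore) break;
    offset += limit;
  }
  return all;
}
```

Each call to fetchAll still costs one request per page, so combine it with the rate-limit header checks above for large collections.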