Rate Limits
OpenGraph.io limits concurrent requests based on your plan to ensure fair usage and optimal performance for all users.
Concurrent Request Limits
These limits apply to the number of requests you can have in-flight simultaneously:
| Plan | Concurrent Requests | Monthly Credits |
|---|---|---|
| Free | 1 | 100 |
| Developer | 5 | 50,000 |
| Production | 25 | 250,000 |
| Enterprise | 100 | 1,000,000 |
Note: These are concurrent limits, not per-second limits. Once a request completes, you can immediately start another.
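Staying under the concurrent cap is easiest with a small client-side limiter. The sketch below is plain Node.js with no library assumed; `createLimiter` is an illustrative helper, not part of any OpenGraph.io SDK. It queues tasks and never lets more than `maxConcurrent` run at once:

```javascript
// Minimal concurrency limiter: caps the number of in-flight requests.
// Pass your plan's concurrent request limit as maxConcurrent.
function createLimiter(maxConcurrent) {
  let active = 0;
  const queue = [];

  const next = () => {
    if (active >= maxConcurrent || queue.length === 0) return;
    active++;
    const { task, resolve, reject } = queue.shift();
    task()
      .then(resolve, reject)
      .finally(() => {
        active--;
        next(); // a slot freed up; start the next queued task
      });
  };

  return (task) =>
    new Promise((resolve, reject) => {
      queue.push({ task, resolve, reject });
      next();
    });
}
```

Usage on the Developer plan might look like `const limit = createLimiter(5); await Promise.all(urls.map(u => limit(() => fetch(u))));` — every request starts as soon as a slot opens, matching the "once a request completes, you can immediately start another" behavior.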
Credit System
Each API request consumes credits. A cached response always costs 1 credit, while requests that require JavaScript rendering or proxies cost more. Here's the breakdown:
| Feature | Cost | Description |
|---|---|---|
| Base request | 1 credit | A single API call to any endpoint |
| Cache hit | 1 credit | Returning cached data (no additional cost) |
| full_render | 10 credits | Renders the page in a headless browser with JavaScript execution |
| use_proxy | 10 credits | Routes request through datacenter proxies for basic bot protection |
| use_premium | 20 credits | Routes request through residential/mobile proxy pools |
| screenshot | 20 credits | Capturing a screenshot via the Screenshot API |
| use_superior | 30 credits | Advanced premium proxy for heavily protected sites (LinkedIn, Amazon, etc.) |
| Auto Proxy | varies | Automatically uses appropriate proxy when needed; billed for the proxy tier required |
Auto Proxy: This feature is enabled by default for qualifying plans and automatically uses a proxy for domains that require one. Set auto_proxy=false to disable. If a request triggers Auto Proxy, you will be billed for the proxy tier that was required.
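To reason about spend before sending a request, the feature costs above can be tallied client-side. This is a minimal sketch assuming the feature costs stack on top of the base credit; `estimateCredits` is a hypothetical helper, not an API function, so confirm actual billing against your dashboard:

```javascript
// Illustrative helper (not part of any OpenGraph.io SDK): estimates the
// credit cost of a request from the feature flags in the table above,
// assuming feature costs are added to the 1-credit base request.
function estimateCredits({ fullRender = false, useProxy = false,
                           usePremium = false, useSuperior = false,
                           screenshot = false } = {}) {
  let credits = 1;                  // base request (cache hits also cost 1)
  if (fullRender) credits += 10;    // headless browser rendering
  if (useProxy) credits += 10;      // datacenter proxy
  if (usePremium) credits += 20;    // residential/mobile proxy
  if (useSuperior) credits += 30;   // advanced premium proxy
  if (screenshot) credits += 20;    // Screenshot API capture
  return credits;
}
```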
Handling Rate Limit Errors
When you exceed your concurrent request limit, the API returns a 429 status code. Implement exponential backoff to handle this gracefully:
```javascript
async function fetchWithRetry(url, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch(url);
    if (response.status === 429) {
      // Wait with exponential backoff: 1s, 2s, 4s, ...
      const waitTime = Math.pow(2, i) * 1000;
      await new Promise(r => setTimeout(r, waitTime));
      continue;
    }
    return response.json();
  }
  throw new Error('Max retries exceeded');
}
```
Best Practices
- Use caching – Let the API cache results to reduce redundant requests
- Queue requests – Process URLs sequentially rather than all at once
- Monitor usage – Track your credit usage in the dashboard
- Upgrade when needed – If you consistently hit limits, consider upgrading
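The queueing advice above can be sketched as a simple sequential loop; `processSequentially` and `handler` are illustrative names, not API functions, and `handler` stands in for whatever call you make per URL (e.g. `fetchWithRetry`):

```javascript
// Process URLs one at a time instead of firing them all at once,
// staying well under any plan's concurrent request limit.
async function processSequentially(urls, handler) {
  const results = [];
  for (const url of urls) {
    // Awaiting inside the loop guarantees only one request is in flight.
    results.push(await handler(url));
  }
  return results;
}
```

Sequential processing trades throughput for safety; on higher plans you can batch URLs up to your concurrent limit instead.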
Monitoring Usage
Track your credit usage and API calls in the OpenGraph.io dashboard. You can set up alerts to notify you when approaching your limits.