Rate Limits
Understanding rate limits, monitoring usage, and implementing strategies to stay within limits while maximizing API usage.
Overview
Rate limits control how many API requests you can make within a specific time period. Understanding and managing rate limits is crucial for building reliable applications that use the Twitter API effectively.
Why Rate Limits Exist
- Fair Usage: Ensure fair access for all users
- System Stability: Prevent API overload
- Resource Management: Manage server resources efficiently
- Cost Control: Control infrastructure costs
Rate Limit Tiers
Rate limits vary by your subscription plan and platform:
Direct Gateway
The Direct Gateway offers the highest rate limits:
| Plan | Requests per Hour | Requests per Second |
|---|---|---|
| Basic | 50 | 1 |
| Pro | 500 | 5 |
| Enterprise | 5,000 | 300 |
Note: Enterprise plan on Direct Gateway supports up to 300 requests per second, making it ideal for high-volume applications.
RapidAPI / JoJoAPI
Marketplace platforms have platform-specific rate limits:
| Plan | Requests per Hour | Requests per Second |
|---|---|---|
| Basic | 50 | 1 |
| Pro | 500 | 5 |
| Enterprise | 5,000 | 50 |
Understanding the Limits
- Requests per Hour: Maximum requests in any 60-minute window
- Requests per Second: Maximum requests per second (burst limit)
- Rolling Window: The hourly limit is a rolling window, not reset at the top of each hour
Important: Direct Gateway Enterprise plan offers significantly higher per-second limits (300 req/s) compared to marketplace platforms (50 req/s).
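A rolling hourly window can be approximated client-side by tracking request timestamps. This is a minimal sketch for pre-flight checks only; the server's rate-limit headers remain the source of truth:

```javascript
// Client-side rolling-window tracker (illustrative only).
class RollingWindow {
  constructor(limit, windowMs = 60 * 60 * 1000) {
    this.limit = limit;       // e.g. 500 requests
    this.windowMs = windowMs; // 60-minute rolling window
    this.timestamps = [];
  }

  // Returns true if a request may be sent now, recording it if so.
  tryConsume(now = Date.now()) {
    // Drop timestamps that have aged out of the rolling window
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Because the window rolls, capacity frees up continuously as old requests age out, rather than all at once at the top of the hour.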
Rate Limit Headers
Every API response includes rate limit information in headers:
X-RateLimit-Limit: 500
X-RateLimit-Remaining: 487
X-RateLimit-Reset: 1698765432
Header Fields
| Header | Description | Example |
|---|---|---|
| X-RateLimit-Limit | Maximum requests allowed in the time window | 500 |
| X-RateLimit-Remaining | Requests remaining in current window | 487 |
| X-RateLimit-Reset | Unix timestamp when the limit resets | 1698765432 |
Reading Headers
function getRateLimitInfo(response) {
return {
limit: parseInt(response.headers.get('X-RateLimit-Limit') || '0'),
remaining: parseInt(response.headers.get('X-RateLimit-Remaining') || '0'),
reset: parseInt(response.headers.get('X-RateLimit-Reset') || '0'),
resetDate: new Date(
parseInt(response.headers.get('X-RateLimit-Reset') || '0') * 1000
),
percentageUsed: function() {
return ((this.limit - this.remaining) / this.limit) * 100;
}
};
}
// Usage
const response = await fetch(url, { headers: { 'X-API-KEY': API_KEY } });
const rateLimit = getRateLimitInfo(response);
console.log(`Used: ${rateLimit.percentageUsed().toFixed(1)}%`);
console.log(`Resets at: ${rateLimit.resetDate.toLocaleString()}`);
Rate Limit Exceeded (429)
When you exceed rate limits, you'll receive a 429 status code:
{
"error": {
"code": 429,
"message": "Rate limit exceeded",
"retry_after": 60
}
}
Handling 429 Errors
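The retry_after field in the error body (in seconds) can also drive the wait. A small helper, assuming the body shape shown above:

```javascript
// Extract a wait time in milliseconds from a 429 error body.
// Falls back to 60 seconds if the field is missing or malformed.
function retryDelayMs(errorBody, fallbackSeconds = 60) {
  const seconds = errorBody && errorBody.error && Number(errorBody.error.retry_after);
  return (Number.isFinite(seconds) && seconds > 0 ? seconds : fallbackSeconds) * 1000;
}
```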
Exponential Backoff:
async function fetchWithBackoff(url, options, maxRetries = 5) {
for (let i = 0; i < maxRetries; i++) {
const response = await fetch(url, options);
if (response.ok) {
return await response.json();
}
if (response.status === 429) {
// Prefer the server's Retry-After header (seconds); default to 60s.
// Note: X-RateLimit-Reset is a Unix timestamp, not a delay, so it
// cannot be used directly as a wait duration here.
const retryAfter = parseInt(response.headers.get('Retry-After') || '60');
const waitTime = Math.max(
Math.pow(2, i) * 1000, // Exponential backoff
retryAfter * 1000 // ...but wait at least as long as Retry-After
);
console.log(`Rate limited. Waiting ${waitTime}ms...`);
await new Promise(resolve => setTimeout(resolve, waitTime));
continue;
}
throw new Error(`HTTP ${response.status}`);
}
throw new Error('Max retries exceeded');
}
Rate Limit Strategies
1. Request Queuing
Implement a queue to manage requests and stay within limits:
class RateLimitedQueue {
constructor(requestsPerSecond = 5) {
this.queue = [];
this.processing = false;
this.interval = 1000 / requestsPerSecond;
}
async add(requestFn) {
return new Promise((resolve, reject) => {
this.queue.push({ requestFn, resolve, reject });
this.process();
});
}
async process() {
if (this.processing || this.queue.length === 0) return;
this.processing = true;
while (this.queue.length > 0) {
const { requestFn, resolve, reject } = this.queue.shift();
try {
const result = await requestFn();
resolve(result);
} catch (error) {
reject(error);
}
await new Promise(resolve => setTimeout(resolve, this.interval));
}
this.processing = false;
}
}
// Usage
const queue = new RateLimitedQueue(5); // 5 requests per second
for (let i = 0; i < 100; i++) {
queue.add(() => fetch(`/tweet/${i}`, {
headers: { 'X-API-KEY': API_KEY }
}));
}
2. Token Bucket Algorithm
Implement token bucket for more sophisticated rate limiting:
class TokenBucket {
constructor(capacity, refillRate) {
this.capacity = capacity;
this.tokens = capacity;
this.refillRate = refillRate; // tokens per second
this.lastRefill = Date.now();
}
async consume(tokens = 1) {
this.refill();
if (this.tokens >= tokens) {
this.tokens -= tokens;
return true;
}
const waitTime = ((tokens - this.tokens) / this.refillRate) * 1000;
await new Promise(resolve => setTimeout(resolve, waitTime));
return this.consume(tokens);
}
refill() {
const now = Date.now();
const elapsed = (now - this.lastRefill) / 1000;
this.tokens = Math.min(
this.capacity,
this.tokens + (elapsed * this.refillRate)
);
this.lastRefill = now;
}
}
// Usage
const bucket = new TokenBucket(500, 500 / 3600); // 500 per hour
async function makeRequest(url) {
await bucket.consume(1);
return fetch(url, { headers: { 'X-API-KEY': API_KEY } });
}
3. Caching
Cache responses to reduce API calls:
const cache = new Map();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes
async function getCached(url) {
const cached = cache.get(url);
if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
return cached.data;
}
const response = await fetch(url, {
headers: { 'X-API-KEY': API_KEY }
});
const data = await response.json();
cache.set(url, { data, timestamp: Date.now() });
return data;
}
4. Batch Operations
Use bulk endpoints when available:
// ❌ Bad: 100 individual requests
for (const userId of userIds) {
await fetch(`/user/${userId}`, { headers: { 'X-API-KEY': API_KEY } });
}
// ✅ Good: 1 bulk request
await fetch(`/users/bulk?ids=${userIds.join(',')}`, {
headers: { 'X-API-KEY': API_KEY }
});
Monitoring Rate Limits
Real-Time Monitoring
class RateLimitMonitor {
constructor() {
this.usage = {
current: 0,
limit: 0,
resetTime: null
};
}
update(response) {
this.usage = {
current: parseInt(response.headers.get('X-RateLimit-Limit') || '0') -
parseInt(response.headers.get('X-RateLimit-Remaining') || '0'),
limit: parseInt(response.headers.get('X-RateLimit-Limit') || '0'),
resetTime: new Date(
parseInt(response.headers.get('X-RateLimit-Reset') || '0') * 1000
)
};
this.checkThresholds();
}
checkThresholds() {
if (!this.usage.limit) return; // skip before the first update (avoids NaN)
const percentage = (this.usage.current / this.usage.limit) * 100;
if (percentage > 90) {
console.warn('⚠️ Rate limit at 90%!', this.usage);
} else if (percentage > 75) {
console.warn('⚠️ Rate limit at 75%', this.usage);
}
}
getStatus() {
return {
...this.usage,
percentage: (this.usage.current / this.usage.limit) * 100,
remaining: this.usage.limit - this.usage.current
};
}
}
// Usage
const monitor = new RateLimitMonitor();
const response = await fetch(url, { headers: { 'X-API-KEY': API_KEY } });
monitor.update(response);
console.log('Rate limit status:', monitor.getStatus());
Best Practices
1. Monitor Headers
Always check rate limit headers in responses:
async function makeRequest(url) {
const response = await fetch(url, {
headers: { 'X-API-KEY': API_KEY }
});
const remaining = parseInt(response.headers.get('X-RateLimit-Remaining') || '0');
if (remaining < 10) {
console.warn('Low rate limit remaining:', remaining);
}
return response;
}
2. Implement Backoff
Use exponential backoff for 429 errors:
async function fetchWithBackoff(url, options) {
let retries = 0;
const maxRetries = 5;
while (retries < maxRetries) {
const response = await fetch(url, options);
if (response.ok) return response;
if (response.status === 429) {
const delay = Math.pow(2, retries) * 1000;
await new Promise(resolve => setTimeout(resolve, delay));
retries++;
continue;
}
throw new Error(`HTTP ${response.status}`);
}
throw new Error('Max retries exceeded');
}
3. Cache Aggressively
Cache responses to minimize API calls:
const cache = new Map();
async function getCached(url, ttl = 5 * 60 * 1000) {
const cached = cache.get(url);
if (cached && Date.now() - cached.timestamp < ttl) {
return cached.data;
}
const response = await fetch(url, {
headers: { 'X-API-KEY': API_KEY }
});
const data = await response.json();
cache.set(url, { data, timestamp: Date.now() });
return data;
}
4. Use Bulk Endpoints
Prefer bulk endpoints over individual requests:
// ✅ Good
const response = await fetch(`/users/bulk?ids=${ids.join(',')}`, {
headers: { 'X-API-KEY': API_KEY }
});
// ❌ Bad
for (const id of ids) {
await fetch(`/user/${id}`, { headers: { 'X-API-KEY': API_KEY } });
});
5. Plan Request Distribution
Distribute requests evenly over time:
async function distributeRequests(requests, duration) {
const interval = duration / requests.length;
for (const request of requests) {
await request();
await new Promise(resolve => setTimeout(resolve, interval));
}
}
Upgrading Your Plan
If you consistently hit rate limits, consider upgrading:
When to Upgrade
- Consistent 429 Errors: Regularly hitting rate limits
- Business Growth: Need for higher capacity
- Time-Sensitive Operations: Can't wait for rate limit resets
- High-Volume Applications: Processing large datasets
Plan Comparison
| Feature | Basic | Pro | Enterprise |
|---|---|---|---|
| Requests/Hour | 50 | 500 | 5,000 |
| Requests/Second | 1 | 5 | 50 (300 on Direct Gateway) |
| Best For | Testing, Low Volume | Small Apps | Production, High Volume |
Troubleshooting
Issue: Frequent 429 Errors
Symptoms: Getting rate limited frequently
Solutions:
- Implement caching
- Use bulk endpoints
- Add request queuing
- Consider upgrading plan
Issue: Not Using Full Capacity
Symptoms: Rate limit resets but not using all requests
Solutions:
- Implement parallel requests (within per-second limits)
- Remove unnecessary caching delays
- Optimize request patterns
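One way to use the full per-second allowance is to fire requests in parallel batches, one batch per second. A sketch, where perSecond should match your plan's burst limit:

```javascript
// Run request functions in parallel batches of `perSecond`,
// pausing one second between batches to respect the burst limit.
async function runInBatches(requestFns, perSecond = 5) {
  const results = [];
  for (let i = 0; i < requestFns.length; i += perSecond) {
    const batch = requestFns.slice(i, i + perSecond);
    // All requests in a batch run concurrently
    results.push(...await Promise.all(batch.map(fn => fn())));
    if (i + perSecond < requestFns.length) {
      await new Promise(resolve => setTimeout(resolve, 1000));
    }
  }
  return results;
}
```

Compared to the fully sequential queue above, this keeps throughput at the per-second limit instead of one request at a time.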
Related Documentation
- Error Handling Guide - Handling 429 errors
- Best Practices - API usage best practices
- Authentication Guide - Authentication details
