# Rate Limiting & Response Headers
Understanding rate limits and response headers is crucial for building reliable integrations with the OilPriceAPI.
## Rate Limit Headers
Every API response includes headers that help you monitor your usage:
### Standard Rate Limit Headers

| Header | Description | Example |
|---|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed in the current window | `100` |
| `X-RateLimit-Remaining` | Remaining requests in the current window | `95` |
| `X-RateLimit-Reset` | Unix timestamp when the rate limit resets | `1627849260` |
| `X-RateLimit-Reset-After` | Seconds until the rate limit resets | `3600` |
| `X-RateLimit-Bucket` | Rate limit bucket identifier | `api:v1:prices` |
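A client can read these headers off every response to decide whether it is safe to send the next request. The sketch below parses a plain headers mapping into integers; only the header names come from the table above, and the helper name is ours, not part of the API.

```python
def parse_rate_limit_headers(headers):
    """Extract rate-limit state from a response-headers mapping."""
    return {
        "limit": int(headers.get("X-RateLimit-Limit", 0)),
        "remaining": int(headers.get("X-RateLimit-Remaining", 0)),
        "reset": int(headers.get("X-RateLimit-Reset", 0)),
    }

# Works with any dict-like headers object, e.g. requests' response.headers
state = parse_rate_limit_headers({
    "X-RateLimit-Limit": "100",
    "X-RateLimit-Remaining": "95",
    "X-RateLimit-Reset": "1627849260",
})
```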
### Usage Tracking Headers

| Header | Description | Example |
|---|---|---|
| `X-Monthly-Quota` | Total monthly request quota | `10000` |
| `X-Monthly-Usage` | Requests used this month | `2543` |
| `X-Monthly-Remaining` | Requests remaining this month | `7457` |
| `X-Monthly-Reset` | Date when the monthly quota resets | `2025-02-01T00:00:00Z` |
## Pagination Headers
When retrieving paginated data (historical prices, etc.):
| Header | Description | Example |
|---|---|---|
| `X-Total` | Total number of records available | `2016` |
| `X-Total-Pages` | Total number of pages | `21` |
| `X-Page` | Current page number | `1` |
| `X-Per-Page` | Number of records per page | `100` |
| `Link` | RFC 5988 pagination links | See below |
### Link Header Format

```
Link: <https://api.oilpriceapi.com/v1/prices/past_week?page=1>; rel="first",
      <https://api.oilpriceapi.com/v1/prices/past_week?page=2>; rel="next",
      <https://api.oilpriceapi.com/v1/prices/past_week?page=21>; rel="last"
```
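If you use Python's `requests` library, pagination links are already parsed for you as `response.links`. To follow the format by hand, a minimal parser (our own illustrative helper, assuming the URLs themselves contain no commas) might look like:

```python
def parse_link_header(value):
    """Split an RFC 5988 Link header into a {rel: url} mapping."""
    links = {}
    for part in value.split(","):
        url_part, _, rel_part = part.partition(";")
        url = url_part.strip().lstrip("<").rstrip(">")
        rel = rel_part.strip().removeprefix('rel="').rstrip('"')
        links[rel] = url
    return links

links = parse_link_header(
    '<https://api.oilpriceapi.com/v1/prices/past_week?page=1>; rel="first", '
    '<https://api.oilpriceapi.com/v1/prices/past_week?page=2>; rel="next"'
)
```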
### Response Time Headers

| Header | Description | Example |
|---|---|---|
| `X-Response-Time` | Server processing time in milliseconds | `145ms` |
| `X-Request-Id` | Unique request identifier for support | `req_abc123def456` |
| `X-Cache` | Cache status (HIT/MISS) | `HIT` |
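The `X-Request-Id` value is worth capturing in your logs: quoting it when contacting support lets a specific request be traced. The formatting helper below is our own sketch; only the header names come from the table above.

```python
def describe_response(headers, status):
    """Build a one-line log message that includes the request id."""
    request_id = headers.get("X-Request-Id", "unknown")
    cache = headers.get("X-Cache", "MISS")
    return f"status={status} cache={cache} request_id={request_id}"

line = describe_response(
    {"X-Request-Id": "req_abc123def456", "X-Cache": "HIT"}, 200
)
```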
## Handling Rate Limits

### Check Headers Before Making Requests
```python
import requests
import time

class RateLimitedClient:
    def __init__(self, api_key):
        self.api_key = api_key
        self.remaining = None
        self.reset_time = None

    def make_request(self, endpoint, params=None):
        # Check if we need to wait
        if self.remaining == 0 and self.reset_time:
            wait_time = self.reset_time - time.time()
            if wait_time > 0:
                print(f"Rate limited. Waiting {wait_time:.0f} seconds...")
                time.sleep(wait_time)

        response = requests.get(
            f'https://api.oilpriceapi.com/v1{endpoint}',
            headers={'Authorization': f'Token {self.api_key}'},
            params=params
        )

        # Update rate limit info
        self.remaining = int(response.headers.get('X-RateLimit-Remaining', 0))
        self.reset_time = int(response.headers.get('X-RateLimit-Reset', 0))

        return response
```
### Implement Exponential Backoff
```javascript
async function fetchWithRetry(url, options, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch(url, options);

    // Check rate limit headers
    const remaining = response.headers.get('X-RateLimit-Remaining');
    const resetAfter = response.headers.get('X-RateLimit-Reset-After');
    console.log(`Remaining requests: ${remaining}`);

    if (response.status === 429) {
      // Rate limited - wait and retry
      const waitTime = parseInt(resetAfter || '60') * 1000;
      console.log(`Rate limited. Waiting ${waitTime}ms...`);
      await new Promise(resolve => setTimeout(resolve, waitTime));
      continue;
    }

    if (response.ok) {
      return response;
    }

    // Exponential backoff for other errors
    const delay = Math.min(1000 * Math.pow(2, i), 10000);
    await new Promise(resolve => setTimeout(resolve, delay));
  }
  throw new Error('Max retries exceeded');
}
```
### Batch Requests Efficiently
```python
import requests

def fetch_multiple_commodities_efficiently():
    """
    Instead of making separate requests for each commodity,
    use the all-prices endpoint or comma-separated codes.
    """
    # ❌ Inefficient - multiple requests
    # for code in ['WTI_USD', 'BRENT_CRUDE_USD', 'NATURAL_GAS_USD']:
    #     response = requests.get(f'/v1/prices/latest?by_code={code}')

    # ✅ Efficient - single request
    response = requests.get(
        'https://api.oilpriceapi.com/v1/prices/latest',
        headers={'Authorization': 'Token YOUR_API_KEY'},
        params={'by_code': 'WTI_USD,BRENT_CRUDE_USD,NATURAL_GAS_USD'}
    )
    return response.json()
```
## Rate Limit Error Responses

### 429 Too Many Requests

When you exceed rate limits:
```json
{
  "status": "error",
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Rate limit exceeded. Please retry after 60 seconds.",
    "details": {
      "limit": 100,
      "remaining": 0,
      "reset_at": "2025-01-23T15:30:00Z",
      "reset_after": 60
    }
  }
}
```
### Response Headers on 429
```
HTTP/2 429
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1627849260
X-RateLimit-Reset-After: 60
Retry-After: 60
```
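A well-behaved client should honour `Retry-After` (falling back to `X-RateLimit-Reset-After`) before retrying. A minimal sketch, assuming both headers carry a delay in seconds as in the example above; the helper name is ours:

```python
def retry_delay(headers, default=60):
    """Return how many seconds to wait, preferring Retry-After."""
    for name in ("Retry-After", "X-RateLimit-Reset-After"):
        value = headers.get(name)
        if value is not None:
            try:
                return max(0, int(value))
            except ValueError:
                continue  # e.g. an HTTP-date Retry-After; try the next header
    return default
```

The `ValueError` fallback matters because the HTTP spec also permits `Retry-After` to be an HTTP-date rather than a number of seconds.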
## Best Practices

### 1. Cache Responses
```javascript
class CachedAPIClient {
  constructor(apiKey, cacheTTL = 300000) { // 5 minutes default
    this.apiKey = apiKey;
    this.cache = new Map();
    this.cacheTTL = cacheTTL;
  }

  async fetch(endpoint, params = {}) {
    const cacheKey = `${endpoint}:${JSON.stringify(params)}`;
    const cached = this.cache.get(cacheKey);

    if (cached && Date.now() - cached.timestamp < this.cacheTTL) {
      console.log('Cache hit');
      return cached.data;
    }

    const response = await fetch(
      `https://api.oilpriceapi.com/v1${endpoint}?${new URLSearchParams(params)}`,
      { headers: { 'Authorization': `Token ${this.apiKey}` } }
    );
    const data = await response.json();

    this.cache.set(cacheKey, {
      data,
      timestamp: Date.now()
    });

    return data;
  }
}
```
### 2. Use Webhooks for Real-time Updates

Instead of polling, use webhooks (Reservoir Mastery tier):
```python
# Instead of polling every minute
# ❌ while True:
#        data = fetch_prices()
#        time.sleep(60)

# ✅ Set up a webhook endpoint
from flask import Flask, request

app = Flask(__name__)

@app.route('/webhook/prices', methods=['POST'])
def handle_price_update():
    data = request.json
    # Process the real-time price update
    return '', 200
```
### 3. Implement Request Queuing
```javascript
class RequestQueue {
  constructor(apiKey, requestsPerMinute = 100) {
    this.apiKey = apiKey;
    this.queue = [];
    this.interval = 60000 / requestsPerMinute;
    this.processing = false;
  }

  async add(endpoint, params) {
    return new Promise((resolve, reject) => {
      this.queue.push({ endpoint, params, resolve, reject });
      if (!this.processing) {
        this.process();
      }
    });
  }

  async process() {
    this.processing = true;
    while (this.queue.length > 0) {
      const { endpoint, params, resolve, reject } = this.queue.shift();
      try {
        // fetch() has no `params` option, so encode them into the query string
        const query = new URLSearchParams(params || {}).toString();
        const url = `https://api.oilpriceapi.com/v1${endpoint}${query ? `?${query}` : ''}`;
        const response = await fetch(url, {
          headers: { 'Authorization': `Token ${this.apiKey}` }
        });
        resolve(await response.json());
      } catch (error) {
        reject(error);
      }
      // Wait before next request
      await new Promise(r => setTimeout(r, this.interval));
    }
    this.processing = false;
  }
}
```
## Monitoring Your Usage

### Track Usage in Your Application
```python
import logging

class UsageMonitor:
    def __init__(self):
        self.requests_made = 0
        self.monthly_remaining = None
        self.rate_limit_remaining = None

    def log_response(self, response):
        # Extract headers
        self.monthly_remaining = response.headers.get('X-Monthly-Remaining')
        self.rate_limit_remaining = response.headers.get('X-RateLimit-Remaining')
        self.requests_made += 1

        # Warn when approaching limits
        if self.monthly_remaining and int(self.monthly_remaining) < 100:
            logging.warning(f"Only {self.monthly_remaining} monthly requests remaining!")
        if self.rate_limit_remaining and int(self.rate_limit_remaining) < 10:
            logging.warning(f"Only {self.rate_limit_remaining} requests remaining in rate limit window!")
```
### Dashboard Monitoring
Monitor your usage at oilpriceapi.com/dashboard:
- Real-time request count
- Usage by endpoint
- Error rates
- Response times
- Geographic distribution
## Upgrading Your Plan
If you consistently hit rate limits, consider upgrading:
- Monitor your usage patterns
- Calculate required limits
- Visit oilpriceapi.com/pricing
- Upgrade instantly without downtime