Rate Limiting & Response Headers

Understanding rate limits and response headers is crucial for building reliable integrations with the OilPriceAPI.


Rate Limits by Plan

| Plan | Monthly Requests | Requests/Second | Burst Limit |
| --- | --- | --- | --- |
| Free | 100 (lifetime) | 1 | 5 |
| Hobby | 10,000 | 5 | 20 |
| Starter | 50,000 | 10 | 50 |
| Professional | 100,000 | 20 | 100 |
| Business | 200,000 | 50 | 200 |
| Enterprise | 500,000+ | 100+ | Custom |

Note: Burst limits allow short-term spikes above the per-second rate. Monthly limits reset on the 1st of each month at 00:00 UTC.
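Because the reset rule is fixed, you can compute the next reset time client-side. A minimal sketch in plain Python (no API call involved), derived only from the rule above:

from datetime import datetime, timezone

def next_monthly_reset(now=None):
    """Return the next quota reset: 00:00 UTC on the 1st of the following month."""
    now = now or datetime.now(timezone.utc)
    if now.month == 12:
        return datetime(now.year + 1, 1, 1, tzinfo=timezone.utc)
    return datetime(now.year, now.month + 1, 1, tzinfo=timezone.utc)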


Rate Limit Headers

Every API response includes headers that help you monitor your usage:

Standard Rate Limit Headers

HeaderDescriptionExample
X-RateLimit-LimitMaximum requests allowed in the current window100
X-RateLimit-RemainingRemaining requests in the current window95
X-RateLimit-ResetUnix timestamp when the rate limit resets1627849260
X-RateLimit-Reset-AfterSeconds until the rate limit resets3600
X-RateLimit-BucketRate limit bucket identifierapi:v1:prices
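A minimal sketch of reading these headers with the requests library (the API key is a placeholder; header values arrive as strings, so cast before doing arithmetic):

import requests

response = requests.get(
    'https://api.oilpriceapi.com/v1/prices/latest',
    headers={'Authorization': 'Token YOUR_API_KEY'}
)

# Cast header strings to integers before comparing or subtracting
limit = int(response.headers.get('X-RateLimit-Limit', 0))
remaining = int(response.headers.get('X-RateLimit-Remaining', 0))
reset_after = int(response.headers.get('X-RateLimit-Reset-After', 0))

print(f"{remaining}/{limit} requests left; window resets in {reset_after}s")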

Usage Tracking Headers

| Header | Description | Example |
| --- | --- | --- |
| X-Monthly-Quota | Total monthly request quota | 10000 |
| X-Monthly-Usage | Requests used this month | 2543 |
| X-Monthly-Remaining | Requests remaining this month | 7457 |
| X-Monthly-Reset | Date when monthly quota resets | 2025-02-01T00:00:00Z |
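These headers make it easy to warn before the monthly quota runs out. A rough sketch (the 90% threshold is an arbitrary choice, not an API requirement):

def check_monthly_usage(response, warn_at=0.9):
    """Warn when monthly usage crosses a chosen threshold."""
    quota = int(response.headers.get('X-Monthly-Quota', 0))
    usage = int(response.headers.get('X-Monthly-Usage', 0))
    if quota and usage / quota >= warn_at:
        reset = response.headers.get('X-Monthly-Reset')
        print(f"Warning: {usage}/{quota} monthly requests used; quota resets at {reset}")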

Pagination Headers

When retrieving paginated data (historical prices, etc.):

| Header | Description | Example |
| --- | --- | --- |
| X-Total | Total number of records available | 2016 |
| X-Total-Pages | Total number of pages | 21 |
| X-Page | Current page number | 1 |
| X-Per-Page | Number of records per page | 100 |
| Link | RFC 5988 pagination links | See below |

Link Header Format

Link: <https://api.oilpriceapi.com/v1/prices/past_week?page=1>; rel="first",
      <https://api.oilpriceapi.com/v1/prices/past_week?page=2>; rel="next",
      <https://api.oilpriceapi.com/v1/prices/past_week?page=21>; rel="last"
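The requests library parses the Link header into response.links, so walking every page can be as simple as following rel="next" until it disappears. A sketch using the past_week endpoint shown above (the 'data' key in the payload is an assumption; adjust to the actual response schema):

import requests

def fetch_all_pages(api_key):
    """Follow rel="next" Link headers until the last page is reached."""
    url = 'https://api.oilpriceapi.com/v1/prices/past_week'
    headers = {'Authorization': f'Token {api_key}'}
    records = []

    while url:
        response = requests.get(url, headers=headers)
        # Assumes records are wrapped in a 'data' key
        records.extend(response.json().get('data', []))
        # response.links is requests' parsed view of the RFC 5988 Link header
        url = response.links.get('next', {}).get('url')

    return records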

Response Time Headers

| Header | Description | Example |
| --- | --- | --- |
| X-Response-Time | Server processing time in milliseconds | 145ms |
| X-Request-Id | Unique request identifier for support | req_abc123def456 |
| X-Cache | Cache status (HIT/MISS) | HIT |
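When a request misbehaves, the X-Request-Id value is what support will ask for, so it is worth logging on failures. A minimal sketch:

import logging
import requests

def log_failed_request(response):
    """Record the request id and timing for any non-2xx response."""
    if not response.ok:
        logging.error(
            "Request %s failed with status %s (server time: %s, cache: %s)",
            response.headers.get('X-Request-Id'),
            response.status_code,
            response.headers.get('X-Response-Time'),
            response.headers.get('X-Cache'),
        )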

Handling Rate Limits

Check Headers Before Making Requests

import requests
import time

class RateLimitedClient:
    def __init__(self, api_key):
        self.api_key = api_key
        self.remaining = None
        self.reset_time = None

    def make_request(self, endpoint, params=None):
        # Check if we need to wait
        if self.remaining == 0 and self.reset_time:
            wait_time = self.reset_time - time.time()
            if wait_time > 0:
                print(f"Rate limited. Waiting {wait_time:.0f} seconds...")
                time.sleep(wait_time)

        response = requests.get(
            f'https://api.oilpriceapi.com/v1{endpoint}',
            headers={'Authorization': f'Token {self.api_key}'},
            params=params
        )

        # Update rate limit info
        self.remaining = int(response.headers.get('X-RateLimit-Remaining', 0))
        self.reset_time = int(response.headers.get('X-RateLimit-Reset', 0))

        return response

Implement Exponential Backoff

async function fetchWithRetry(url, options, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch(url, options);

    // Check rate limit headers
    const remaining = response.headers.get('X-RateLimit-Remaining');
    const resetAfter = response.headers.get('X-RateLimit-Reset-After');

    console.log(`Remaining requests: ${remaining}`);

    if (response.status === 429) {
      // Rate limited - wait and retry
      const waitTime = parseInt(resetAfter || '60') * 1000;
      console.log(`Rate limited. Waiting ${waitTime}ms...`);
      await new Promise(resolve => setTimeout(resolve, waitTime));
      continue;
    }

    if (response.ok) {
      return response;
    }

    // Exponential backoff for other errors
    const delay = Math.min(1000 * Math.pow(2, i), 10000);
    await new Promise(resolve => setTimeout(resolve, delay));
  }

  throw new Error('Max retries exceeded');
}

Batch Requests Efficiently

def fetch_multiple_commodities_efficiently():
    """
    Instead of making separate requests for each commodity,
    use the all-prices endpoint or comma-separated codes
    """

    # ❌ Inefficient - Multiple requests
    # for code in ['WTI_USD', 'BRENT_CRUDE_USD', 'NATURAL_GAS_USD']:
    #     response = requests.get(f'/v1/prices/latest?by_code={code}')

    # ✅ Efficient - Single request
    response = requests.get(
        'https://api.oilpriceapi.com/v1/prices/latest',
        headers={'Authorization': 'Token YOUR_API_KEY'},
        params={'by_code': 'WTI_USD,BRENT_CRUDE_USD,NATURAL_GAS_USD'}
    )

    return response.json()

Rate Limit Error Responses

429 Too Many Requests

When you exceed rate limits:

{
  "status": "error",
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Rate limit exceeded. Please retry after 60 seconds.",
    "details": {
      "limit": 100,
      "remaining": 0,
      "reset_at": "2025-01-23T15:30:00Z",
      "reset_after": 60
    }
  }
}

Response Headers on 429

HTTP/2 429
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1627849260
X-RateLimit-Reset-After: 60
Retry-After: 60
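The JavaScript example above keys off X-RateLimit-Reset-After; the same pattern works in Python using the standard Retry-After header shown here. A minimal sketch:

import time
import requests

def get_with_retry(url, headers, max_retries=3):
    """Retry a GET after a 429, sleeping for the server-suggested interval."""
    for _ in range(max_retries):
        response = requests.get(url, headers=headers)
        if response.status_code != 429:
            return response
        # Fall back to 60 seconds if the header is missing
        wait = int(response.headers.get('Retry-After', 60))
        time.sleep(wait)
    raise RuntimeError('Max retries exceeded')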

Best Practices

1. Cache Responses

class CachedAPIClient {
  constructor(apiKey, cacheTTL = 300000) { // 5 minutes default
    this.apiKey = apiKey;
    this.cache = new Map();
    this.cacheTTL = cacheTTL;
  }

  async fetch(endpoint, params = {}) {
    const cacheKey = `${endpoint}:${JSON.stringify(params)}`;
    const cached = this.cache.get(cacheKey);

    if (cached && Date.now() - cached.timestamp < this.cacheTTL) {
      console.log('Cache hit');
      return cached.data;
    }

    const response = await fetch(
      `https://api.oilpriceapi.com/v1${endpoint}?${new URLSearchParams(params)}`,
      { headers: { 'Authorization': `Token ${this.apiKey}` } }
    );

    const data = await response.json();

    this.cache.set(cacheKey, {
      data,
      timestamp: Date.now()
    });

    return data;
  }
}

2. Use Webhooks for Real-time Updates

Instead of polling, use webhooks (Reservoir Mastery tier):

# Instead of polling every minute
# ❌ while True:
#     data = fetch_prices()
#     time.sleep(60)

# ✅ Set up a webhook endpoint
from flask import Flask, request

app = Flask(__name__)

@app.route('/webhook/prices', methods=['POST'])
def handle_price_update():
    data = request.json
    # Process real-time price update
    return '', 200

3. Implement Request Queuing

class RequestQueue {
  constructor(apiKey, requestsPerMinute = 100) {
    this.apiKey = apiKey;
    this.queue = [];
    this.interval = 60000 / requestsPerMinute;
    this.processing = false;
  }

  async add(endpoint, params) {
    return new Promise((resolve, reject) => {
      this.queue.push({ endpoint, params, resolve, reject });
      if (!this.processing) {
        this.process();
      }
    });
  }

  async process() {
    this.processing = true;

    while (this.queue.length > 0) {
      const { endpoint, params, resolve, reject } = this.queue.shift();

      try {
        // fetch() has no `params` option, so encode them into the query string
        const query = new URLSearchParams(params || {}).toString();
        const url = `https://api.oilpriceapi.com/v1${endpoint}` + (query ? `?${query}` : '');
        const response = await fetch(url, {
          headers: { 'Authorization': `Token ${this.apiKey}` }
        });
        resolve(await response.json());
      } catch (error) {
        reject(error);
      }

      // Wait before next request
      await new Promise(r => setTimeout(r, this.interval));
    }

    this.processing = false;
  }
}

Monitoring Your Usage

Track Usage in Your Application

import logging
from datetime import datetime

class UsageMonitor:
    def __init__(self):
        self.requests_made = 0
        self.monthly_remaining = None
        self.rate_limit_remaining = None

    def log_response(self, response):
        # Extract headers
        self.monthly_remaining = response.headers.get('X-Monthly-Remaining')
        self.rate_limit_remaining = response.headers.get('X-RateLimit-Remaining')
        self.requests_made += 1

        # Log if approaching limits
        if self.monthly_remaining and int(self.monthly_remaining) < 100:
            logging.warning(f"Only {self.monthly_remaining} monthly requests remaining!")

        if self.rate_limit_remaining and int(self.rate_limit_remaining) < 10:
            logging.warning(f"Only {self.rate_limit_remaining} requests remaining in rate limit window!")

Dashboard Monitoring

Monitor your usage at oilpriceapi.com/dashboard:

  • Real-time request count
  • Usage by endpoint
  • Error rates
  • Response times
  • Geographic distribution

Upgrading Your Plan

If you consistently hit rate limits, consider upgrading:

  1. Monitor your usage patterns
  2. Calculate required limits (see the sketch after this list)
  3. Visit oilpriceapi.com/pricing
  4. Upgrade instantly without downtime
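One way to do step 2 is to project the month's total from the usage headers. A rough sketch, assuming usage grows roughly linearly over the month:

from datetime import datetime, timezone

def project_monthly_usage(response):
    """Project end-of-month usage from X-Monthly-Usage, assuming a linear pace."""
    usage = int(response.headers.get('X-Monthly-Usage', 0))
    quota = int(response.headers.get('X-Monthly-Quota', 0))
    days_elapsed = max(datetime.now(timezone.utc).day, 1)
    # Approximate month length; exact day counts barely change the estimate
    projected = usage / days_elapsed * 30
    if quota and projected > quota:
        print(f"Projected ~{projected:.0f} requests this month vs a quota of {quota}: consider upgrading")
    return projected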

Related Documentation

  • Authentication
  • Error Handling
  • Webhooks
  • Production Checklist