Rate Limits
Bundleport applies rate limits to ensure fair usage and system stability. This guide explains how rate limiting works and how to handle rate limit responses.
Rate Limit Overview
Rate limits are applied per service account (API key) and are configured in multiple time windows to ensure fair usage:
| Window | Default Limit | Description |
|---|---|---|
| Per Minute | 600 requests | Short-term burst protection |
| Per Hour | 15,000 requests | Medium-term usage control |
| Per Day | 250,000 requests | Daily quota management |
These are the default limits for new service accounts. Rate limits are configurable per service account and can be adjusted based on your plan and usage requirements.
Rate limits are synchronized from the Core API to Kong Gateway. Check your specific rate limits in app.bundleport.com or via the Core API.
How Rate Limits Work
- Per-Consumer Enforcement: Each service account has its own rate limit counter
- Multiple Windows: All three windows (minute, hour, day) are enforced simultaneously
- Fault Tolerant: Rate limiting continues to work even if Kong's database is temporarily unavailable
- Gateway-Level: Rate limits are enforced at the Kong Gateway layer before requests reach backend services
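Because all three windows are enforced simultaneously, a request is admitted only if every window still has capacity. A minimal sketch of that rule, using the default limits from the table above (the function name and counter shape are illustrative, not part of the API):

```javascript
// A request is allowed only when every window (minute, hour, day)
// still has remaining quota — exceeding any one of them blocks it.
const DEFAULT_LIMITS = { minute: 600, hour: 15000, day: 250000 };

function isRequestAllowed(counters, limits = DEFAULT_LIMITS) {
  // counters: requests already made in each window,
  // e.g. { minute: 3, hour: 120, day: 900 }
  return Object.keys(limits).every(window => (counters[window] || 0) < limits[window]);
}
```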
Rate Limit Headers
Every API response includes rate limit information in headers for each time window:
```http
X-RateLimit-Limit-Minute: 600
X-RateLimit-Remaining-Minute: 595
X-RateLimit-Reset-Minute: 1640995260
X-RateLimit-Limit-Hour: 15000
X-RateLimit-Remaining-Hour: 14995
X-RateLimit-Reset-Hour: 1640998800
X-RateLimit-Limit-Day: 250000
X-RateLimit-Remaining-Day: 249995
X-RateLimit-Reset-Day: 1641081600
```
- X-RateLimit-Limit-{Window}: Maximum requests allowed in the window (Minute, Hour, Day)
- X-RateLimit-Remaining-{Window}: Requests remaining in the current window
- X-RateLimit-Reset-{Window}: Unix timestamp when the window resets
Kong Gateway may also provide standard X-RateLimit-Limit headers. Always check for the most restrictive window when implementing retry logic.
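One way to find the most restrictive window is to compare the per-window Remaining headers and take the smallest — that window is the one that will throttle you first. A sketch, assuming the per-window headers shown above (the helper name is illustrative):

```javascript
// Given response headers (anything with a .get method, e.g. fetch's
// Headers), return the window with the fewest remaining requests.
function mostRestrictiveWindow(headers) {
  const windows = ['Minute', 'Hour', 'Day'];
  let tightest = null;
  for (const w of windows) {
    const remaining = parseInt(headers.get(`X-RateLimit-Remaining-${w}`) || '', 10);
    if (Number.isNaN(remaining)) continue; // header absent — skip this window
    if (tightest === null || remaining < tightest.remaining) {
      tightest = {
        window: w,
        remaining,
        reset: parseInt(headers.get(`X-RateLimit-Reset-${w}`) || '0', 10),
      };
    }
  }
  return tightest; // null if no per-window headers were present
}
```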
Rate Limit Response
When you exceed the rate limit, you'll receive a 429 Too Many Requests response:
```json
{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Rate limit exceeded. Retry after 60 seconds."
  }
}
```
The response includes a Retry-After header indicating when you can retry:
```http
HTTP/1.1 429 Too Many Requests
Retry-After: 60
Content-Type: application/json
```
Handling Rate Limits
1. Monitor Rate Limit Headers
Always check rate limit headers to prevent hitting the limit:
JavaScript:

```javascript
class RateLimitError extends Error {
  constructor(message, retryAfter) {
    super(message);
    this.retryAfter = retryAfter;
  }
}

async function makeRequest(url, options) {
  const response = await fetch(url, options);

  // Check rate limit headers
  const limit = parseInt(response.headers.get('X-RateLimit-Limit') || '0', 10);
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining') || '0', 10);
  const reset = parseInt(response.headers.get('X-RateLimit-Reset') || '0', 10);

  // Log a warning when approaching the limit
  if (remaining < limit * 0.1) {
    console.warn(`Rate limit warning: ${remaining} requests remaining`);
  }

  // Handle rate limit exceeded
  if (response.status === 429) {
    const retryAfter = parseInt(response.headers.get('Retry-After') || '60', 10);
    throw new RateLimitError(`Rate limit exceeded. Retry after ${retryAfter} seconds.`, retryAfter);
  }

  return response;
}
```
Python:

```python
import requests

class RateLimitError(Exception):
    def __init__(self, message, retry_after):
        super().__init__(message)
        self.retry_after = retry_after

def make_request(method, url, **kwargs):
    response = requests.request(method, url, **kwargs)

    # Check rate limit headers
    limit = int(response.headers.get('X-RateLimit-Limit', 0))
    remaining = int(response.headers.get('X-RateLimit-Remaining', 0))
    reset = int(response.headers.get('X-RateLimit-Reset', 0))

    # Log a warning when approaching the limit
    if remaining < limit * 0.1:
        print(f'Rate limit warning: {remaining} requests remaining')

    # Handle rate limit exceeded
    if response.status_code == 429:
        retry_after = int(response.headers.get('Retry-After', 60))
        raise RateLimitError(f'Rate limit exceeded. Retry after {retry_after} seconds.', retry_after)

    return response
```
2. Implement Exponential Backoff
When rate limited, wait before retrying:
```javascript
async function retryAfterRateLimit(error, retryFn) {
  if (error instanceof RateLimitError) {
    console.log(`Rate limited. Waiting ${error.retryAfter} seconds...`);
    await sleep(error.retryAfter * 1000);
    return retryFn();
  }
  throw error;
}

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
```
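The helper above waits exactly the server-indicated Retry-After interval. For transient failures that carry no Retry-After hint, a common complementary pattern (a general sketch, not Bundleport-specific) is exponential backoff with jitter — the delay doubles on each attempt, with a random component to avoid synchronized retries:

```javascript
// Retry a failing async function with exponentially growing delays.
// baseMs doubles per attempt, capped at maxMs, plus up to 20% jitter.
async function withBackoff(fn, { retries = 5, baseMs = 1000, maxMs = 60000 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err; // out of attempts — rethrow
      const delay = Math.min(baseMs * 2 ** attempt, maxMs);
      const jitter = Math.random() * delay * 0.2;
      await new Promise(resolve => setTimeout(resolve, delay + jitter));
    }
  }
}
```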
3. Use Request Queuing
For high-volume applications, implement request queuing:
```javascript
class RequestQueue {
  constructor(maxConcurrent = 10, requestsPerMinute = 600) {
    this.queue = [];
    this.processing = 0;
    this.maxConcurrent = maxConcurrent;
    this.requestsPerMinute = requestsPerMinute;
    this.requestTimestamps = [];
  }

  async add(requestFn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ requestFn, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing >= this.maxConcurrent || this.queue.length === 0) {
      return;
    }

    // Check the sliding one-minute rate limit window
    const now = Date.now();
    const oneMinuteAgo = now - 60000;
    this.requestTimestamps = this.requestTimestamps.filter(t => t > oneMinuteAgo);

    if (this.requestTimestamps.length >= this.requestsPerMinute) {
      // Wait until the oldest request falls outside the window
      const oldest = Math.min(...this.requestTimestamps);
      const waitTime = 60000 - (now - oldest) + 100; // Add 100ms buffer
      setTimeout(() => this.process(), waitTime);
      return;
    }

    this.processing++;
    const { requestFn, resolve, reject } = this.queue.shift();
    this.requestTimestamps.push(now);

    requestFn()
      .then(resolve)
      .catch(reject)
      .finally(() => {
        this.processing--;
        this.process();
      });
  }
}
```
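The queue's throttling step can be isolated as a small pure helper — given the timestamps of recent requests, it prunes anything older than a minute and reports whether another request fits the window (a sketch of the same sliding-window logic the class uses; the function name is illustrative):

```javascript
// Sliding one-minute window: keep only timestamps from the last 60s,
// then compare the count against the per-minute limit.
function canSendNow(timestamps, limit, now = Date.now()) {
  const recent = timestamps.filter(t => t > now - 60000);
  return { allowed: recent.length < limit, recent };
}
```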
Best Practices
1. Cache Responses When Possible
Cache search results and content data to reduce API calls:
```javascript
const cache = new Map();

async function searchHotels(criteria) {
  const cacheKey = JSON.stringify(criteria);

  if (cache.has(cacheKey)) {
    const cached = cache.get(cacheKey);
    if (Date.now() - cached.timestamp < 60000) { // 1-minute cache
      return cached.data;
    }
  }

  const response = await makeRequest('/hotels/v1/search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(criteria),
  });
  const data = await response.json();
  cache.set(cacheKey, { data, timestamp: Date.now() });
  return data;
}
```
2. Batch Requests When Possible
Some endpoints support batching multiple requests:
```javascript
// Instead of multiple requests
const hotel1 = await getHotel('12345');
const hotel2 = await getHotel('67890');
const hotel3 = await getHotel('11111');

// Use the batch endpoint if available
const hotels = await getHotels(['12345', '67890', '11111']);
```
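When a batch endpoint caps how many IDs it accepts per call, a small chunking helper keeps request counts predictable (a sketch; the batch size would depend on the endpoint's limit):

```javascript
// Split a list of IDs into batches of at most `size` items each,
// so each batch can be sent as a single batched request.
function chunk(ids, size) {
  const batches = [];
  for (let i = 0; i < ids.length; i += size) {
    batches.push(ids.slice(i, i + size));
  }
  return batches;
}
```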
3. Use Webhooks Instead of Polling
For real-time updates, use webhooks instead of polling:
```javascript
// ❌ Don't poll every few seconds
setInterval(async () => {
  const bookings = await getBookings();
  // Process updates
}, 5000);

// ✅ Use webhooks for real-time notifications
app.post('/webhooks/bookings', async (req, res) => {
  const { event, data } = req.body;
  // Process the event immediately
  res.status(200).send('OK');
});
```
4. Monitor Your Usage
Track your API usage to stay within limits:
```javascript
class RateLimitMonitor {
  constructor() {
    this.requests = [];
  }

  recordRequest() {
    this.requests.push(Date.now());
    // Keep only the last hour of timestamps
    const oneHourAgo = Date.now() - 3600000;
    this.requests = this.requests.filter(t => t > oneHourAgo);
  }

  getRequestsPerMinute() {
    const oneMinuteAgo = Date.now() - 60000;
    return this.requests.filter(t => t > oneMinuteAgo).length;
  }

  getEstimatedTimeToLimit(limit = 600) {
    const current = this.getRequestsPerMinute();
    if (current >= limit) return 0;
    // Rough estimate assuming the current per-minute rate continues
    return Math.ceil((limit - current) / (current || 1)) * 60; // seconds
  }
}
```
Configuring Rate Limits
Rate limits are managed through the Core API and automatically synchronized to Kong Gateway:
Default Limits
- Per minute: 600 requests
- Per hour: 15,000 requests
- Per day: 250,000 requests
Updating Rate Limits
Rate limits can be updated per service account via the Core API updateServiceAccountControls mutation:
```graphql
mutation UpdateRateLimits {
  updateServiceAccountControls(
    id: "service-account-id"
    limits: {
      minute: 1000
      hour: 30000
      day: 500000
    }
  ) {
    id
    metadata {
      limits {
        minute
        hour
        day
      }
    }
  }
}
```
Changes are automatically synchronized to Kong Gateway within seconds.
Increasing Rate Limits
If you need higher rate limits:
- Contact Support: For enterprise needs, contact support@bundleport.com
Next Steps
- Error Handling Guide - Learn how to handle API errors
- Webhooks Guide - Set up real-time notifications
- API Reference - Browse all endpoints