Overview

To ensure fair usage and maintain service quality, the Dzaleka Online Services API implements rate limiting based on IP address.
Current rate limit: 60 requests per minute per IP address

Rate Limit Configuration

The rate limiting system is implemented in src/utils/api-utils.ts:12-77 with the following parameters:
const RATE_LIMIT_WINDOW = 60 * 1000; // 1 minute
const MAX_REQUESTS_PER_WINDOW = 60; // 60 requests per minute

How It Works

  1. Each IP address gets a 60-request budget per minute
  2. The window starts with your first request
  3. The counter resets after 60 seconds
  4. Requests beyond the limit receive a 429 Too Many Requests response
Rate limits are tracked in-memory and reset automatically. The system periodically cleans up expired entries.
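The steps above can be sketched as a fixed-window counter. This is a simplified illustration of the behavior described, not the actual src/utils/api-utils.ts implementation; the names below are illustrative:

```javascript
// Simplified fixed-window limiter mirroring the documented behavior.
const RATE_LIMIT_WINDOW = 60 * 1000; // 1 minute
const MAX_REQUESTS_PER_WINDOW = 60;  // 60 requests per minute

const buckets = new Map(); // ip -> { count, windowStart }

function checkRateLimit(ip, now = Date.now()) {
  const bucket = buckets.get(ip);

  // First request from this IP, or the previous window expired:
  // start a fresh 60-second window.
  if (!bucket || now - bucket.windowStart >= RATE_LIMIT_WINDOW) {
    buckets.set(ip, { count: 1, windowStart: now });
    return { allowed: true, remaining: MAX_REQUESTS_PER_WINDOW - 1 };
  }

  if (bucket.count >= MAX_REQUESTS_PER_WINDOW) {
    // Over budget: the caller would respond with 429 + Retry-After.
    const retryAfter = Math.ceil(
      (bucket.windowStart + RATE_LIMIT_WINDOW - now) / 1000
    );
    return { allowed: false, remaining: 0, retryAfter };
  }

  bucket.count += 1;
  return { allowed: true, remaining: MAX_REQUESTS_PER_WINDOW - bucket.count };
}
```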

Rate Limit Headers

When you exceed the rate limit, the API returns these headers:
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1709982600000
Retry-After: 45

Header Descriptions

  • X-RateLimit-Limit (string): Maximum requests allowed per window (60)
  • X-RateLimit-Remaining (string): Number of requests remaining in the current window (0 when rate limited)
  • X-RateLimit-Reset (string): Unix timestamp in milliseconds when the rate limit resets
  • Retry-After (string): Number of seconds to wait before making another request
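For programmatic handling, these headers can be turned directly into a wait time. A minimal sketch (the helper name msUntilRetry is illustrative, not part of the API):

```javascript
// Compute how long to wait (in ms) from a 429 response's headers.
// Prefers Retry-After (seconds), falling back to X-RateLimit-Reset
// (a millisecond Unix timestamp, as documented above).
function msUntilRetry(headers, now = Date.now()) {
  const retryAfter = headers.get('Retry-After');
  if (retryAfter !== null) return Number(retryAfter) * 1000;

  const reset = headers.get('X-RateLimit-Reset');
  if (reset !== null) return Math.max(0, Number(reset) - now);

  return 60_000; // no hint: default to one full window
}
```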

Rate Limit Response

When you exceed the rate limit, you’ll receive this response:
{
  "status": "error",
  "message": "Rate limit exceeded. Please try again later.",
  "retryAfter": 45
}

Example Rate Limit Error

curl -i https://services.dzaleka.com/api/services
Response when rate limited:
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
Access-Control-Allow-Origin: *
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1709982645000
Retry-After: 45

{
  "status": "error",
  "message": "Rate limit exceeded. Please try again later.",
  "retryAfter": 45
}

IP Address Detection

Rate limits are applied per IP address. The system detects your IP from these headers (in order of priority):
  1. x-forwarded-for (first IP in the list)
  2. x-real-ip
  3. cf-connecting-ip (Cloudflare)
  4. Direct connection IP
From src/utils/api-utils.ts:25-29:
const clientIP =
  request.headers.get('x-forwarded-for')?.split(',')[0] ||
  request.headers.get('x-real-ip') ||
  request.headers.get('cf-connecting-ip') ||
  'unknown';
If multiple users share the same public IP (e.g., corporate network, shared hosting), they share the same rate limit.

Handling Rate Limits

Best Practices

When you receive a 429 response, wait before retrying:
async function fetchWithRetry(url, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch(url);
    
    if (response.status !== 429) {
      return response;
    }
    
    // Get retry-after header (in seconds)
    const retryAfter = parseInt(response.headers.get('Retry-After') || '60');
    
    // Wait with exponential backoff
    const delay = retryAfter * 1000 * Math.pow(2, i);
    console.log(`Rate limited. Retrying in ${delay}ms...`);
    
    await new Promise(resolve => setTimeout(resolve, delay));
  }
  
  throw new Error('Max retries exceeded');
}
Reduce API calls by caching responses:
const cache = new Map();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes

async function fetchWithCache(url) {
  const cached = cache.get(url);
  
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }
  
  const response = await fetch(url);
  const data = await response.json();
  
  cache.set(url, {
    data,
    timestamp: Date.now()
  });
  
  return data;
}
Use POST requests with filters instead of multiple GET requests:
// Bad: Multiple requests
const services = await fetch('/api/services');
const events = await fetch('/api/events');
const news = await fetch('/api/news');

// Better: Use search endpoint
const results = await fetch(
  '/api/search?q=education&collections=services,events,news'
);
Track your API usage to avoid hitting limits:
class RateLimitTracker {
  constructor() {
    this.requests = [];
    this.window = 60 * 1000; // 1 minute
  }
  
  canMakeRequest() {
    const now = Date.now();
    // Remove requests outside current window
    this.requests = this.requests.filter(
      time => now - time < this.window
    );
    
    return this.requests.length < 60;
  }
  
  recordRequest() {
    this.requests.push(Date.now());
  }
  
  getRemainingRequests() {
    const now = Date.now();
    this.requests = this.requests.filter(
      time => now - time < this.window
    );
    return 60 - this.requests.length;
  }
}

Code Examples

JavaScript with Rate Limit Handling

class DzalekaAPIClient {
  constructor(baseURL = 'https://services.dzaleka.com/api') {
    this.baseURL = baseURL;
    this.requestQueue = [];
    this.processing = false;
  }
  
  async fetch(endpoint, options = {}) {
    return new Promise((resolve, reject) => {
      this.requestQueue.push({ endpoint, options, resolve, reject });
      this.processQueue();
    });
  }
  
  async processQueue() {
    if (this.processing || this.requestQueue.length === 0) {
      return;
    }
    
    this.processing = true;
    const { endpoint, options, resolve, reject } = this.requestQueue.shift();
    
    try {
      const response = await fetch(`${this.baseURL}${endpoint}`, options);
      
      if (response.status === 429) {
        const retryAfter = parseInt(
          response.headers.get('Retry-After') || '60'
        );
        
        console.log(`Rate limited. Waiting ${retryAfter}s...`);
        
        // Re-queue the request
        this.requestQueue.unshift({ endpoint, options, resolve, reject });
        
        // Wait before processing next request
        setTimeout(() => {
          this.processing = false;
          this.processQueue();
        }, retryAfter * 1000);
        
        return;
      }
      
      const data = await response.json();
      resolve({ status: response.status, data });
      
    } catch (error) {
      reject(error);
    }
    
    // Small delay between requests to avoid hitting limit
    setTimeout(() => {
      this.processing = false;
      this.processQueue();
    }, 1000); // 1 second between requests
  }
}

// Usage
const api = new DzalekaAPIClient();

const services = await api.fetch('/services');
const events = await api.fetch('/events');
const news = await api.fetch('/news');

console.log('Services:', services.data);

Python with Rate Limit Handling

import time
import requests
from typing import Optional, Dict, Any

class DzalekaAPIClient:
    def __init__(self, base_url: str = 'https://services.dzaleka.com/api'):
        self.base_url = base_url
        self.session = requests.Session()
    
    def fetch(self, endpoint: str, method: str = 'GET', 
              data: Optional[Dict] = None, max_retries: int = 3) -> Dict[Any, Any]:
        url = f"{self.base_url}{endpoint}"
        
        for attempt in range(max_retries):
            try:
                if method == 'GET':
                    response = self.session.get(url)
                else:
                    response = self.session.post(url, json=data)
                
                if response.status_code == 429:
                    retry_after = int(response.headers.get('Retry-After', 60))
                    print(f"Rate limited. Waiting {retry_after}s...")
                    time.sleep(retry_after)
                    continue
                
                response.raise_for_status()
                return response.json()
                
            except requests.exceptions.RequestException as e:
                if attempt == max_retries - 1:
                    raise
                print(f"Request failed: {e}. Retrying...")
                time.sleep(2 ** attempt)  # Exponential backoff
        
        raise Exception("Max retries exceeded")

# Usage
api = DzalekaAPIClient()

try:
    services = api.fetch('/services')
    print(f"Found {services['count']} services")
    
    events = api.fetch('/events')
    print(f"Found {events['count']} events")
    
except Exception as e:
    print(f"Error: {e}")

cURL with Manual Retry

#!/bin/bash

MAX_RETRIES=3
URL="https://services.dzaleka.com/api/services"

for i in $(seq 1 $MAX_RETRIES); do
  echo "Attempt $i..."
  
  RESPONSE=$(curl -s -w "\n%{http_code}" "$URL")
  HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
  BODY=$(echo "$RESPONSE" | sed '$d')
  
  if [ "$HTTP_CODE" = "200" ]; then
    echo "Success!"
    echo "$BODY" | jq .
    exit 0
  elif [ "$HTTP_CODE" = "429" ]; then
    RETRY_AFTER=$(echo "$BODY" | jq -r '.retryAfter // 60')
    echo "Rate limited. Waiting ${RETRY_AFTER}s..."
    sleep "$RETRY_AFTER"
  else
    echo "Error: HTTP $HTTP_CODE"
    echo "$BODY" | jq .
    exit 1
  fi
done

echo "Max retries exceeded"
exit 1

Rate Limit Strategies

Strategy Comparison

  • Request Queue: prevents rate limit errors, but slower overall. Best for background jobs.
  • Exponential Backoff: automatic recovery, but adds complexity. Best for production apps.
  • Client-side Caching: reduces API calls, but stale data is possible. Best for read-heavy apps.
  • Request Throttling: smooths traffic, but may not use the full quota. Best for high-volume apps.
Combine multiple strategies:
  1. Cache responses (5-10 minute TTL)
  2. Throttle requests (max 1 per second)
  3. Implement retry logic with exponential backoff
  4. Monitor usage and adjust throttling
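Steps 1 and 2 above can be combined in a single wrapper. A sketch under stated assumptions: cachedThrottledFetch is an illustrative name, and fetchJson stands in for your real request function:

```javascript
// Combine client-side caching (5-minute TTL) with throttling
// (at most 1 request per second), per the checklist above.
const cache = new Map();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes
const MIN_INTERVAL = 1000;       // 1 request per second
let lastRequest = 0;

async function cachedThrottledFetch(url, fetchJson) {
  // 1. Serve from cache when fresh.
  const hit = cache.get(url);
  if (hit && Date.now() - hit.at < CACHE_TTL) return hit.data;

  // 2. Throttle: wait until at least MIN_INTERVAL since the last request.
  const wait = lastRequest + MIN_INTERVAL - Date.now();
  if (wait > 0) await new Promise(resolve => setTimeout(resolve, wait));

  lastRequest = Date.now();
  const data = await fetchJson(url);
  cache.set(url, { data, at: Date.now() });
  return data;
}
```

Retry logic with exponential backoff (step 3) can then wrap the fetchJson argument, as shown in the fetchWithRetry example earlier.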

Troubleshooting

If you receive 429 responses sooner than expected, possible causes include:
  • Multiple users sharing the same IP
  • Previous requests still counting toward the limit
  • Aggressive polling or request loops
Solution: track your request rate client-side and add a delay between requests.
If your users share a public IP:
  • Consider server-side API calls instead of client-side
  • Implement request queuing on your server
  • Cache aggressively to reduce API calls
Rate limits reset after 60 seconds from the start of the window:
  • Check X-RateLimit-Reset header for exact reset time
  • Ensure you’re waiting for the full Retry-After duration

Future Enhancements

The following features are under consideration for future releases:
  • Higher rate limits for authenticated users
  • Per-user rate limiting (instead of IP-based)
  • Rate limit headers on all responses (not just 429s)
  • Burst allowance for occasional spikes

Next Steps