X-Pages Pagination
X-Pages pagination uses page numbers to navigate through datasets, with pages starting at 1. This is a traditional pagination approach where you request specific page numbers to browse through the data.
How pagination works
- Request a page: Use the `page` request parameter to specify which page you want to retrieve.
- Check total pages: The `X-Pages` response header tells you how many pages are available in total.
- Navigate through pages: Request pages sequentially (1, 2, 3, etc.) until you've retrieved all available data.
Request parameters
`page`: The page number to retrieve. Pages start at 1.
Response headers
`X-Pages`: The total number of pages available in the dataset.
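Reading the header in code is straightforward. A minimal sketch (the helper name `total_pages` is illustrative):

```python
def total_pages(headers):
    # The X-Pages header carries the total page count as a string;
    # default to 1 when the header is absent (a single-page response).
    return int(headers.get("X-Pages", 1))

# e.g. total_pages({"X-Pages": "12"}) returns 12
```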
Caching considerations
An important caveat with X-Pages pagination is caching behavior. If the cache expires between fetching two different pages, you may see duplicated items in your results.
This happens because:
- Page 1 might be cached with data from time T.
- By the time you fetch page 2, the cache for page 1 has expired.
- The server generates a fresh dataset to serve your page 2 request, and records you already retrieved from page 1 may have shifted onto the new page 2, so they appear twice in your combined results.
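If a fetch does straddle a cache refresh, one mitigation (not part of the X-Pages scheme itself) is to deduplicate the merged results by a unique record field. A sketch, assuming each record carries a unique `id` key — a hypothetical field name; substitute whatever uniquely identifies records in your dataset:

```python
def merge_unique(pages, key="id"):
    # Collapse records fetched across pages, dropping duplicates that can
    # appear when the server's cache expires mid-fetch. `key` must be a
    # field that uniquely identifies a record (hypothetical here).
    seen = set()
    merged = []
    for page in pages:
        for record in page:
            if record[key] not in seen:
                seen.add(record[key])
                merged.append(record)
    return merged
```

Note that deduplication only repairs duplicates; records that shifted off a page you already fetched are still missed, which is why waiting for a fresh cache (below in the original sense of refetching) is the more robust approach.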
Solution: don't fetch close to expiry
Some implementations solve this by checking how close page 1 is to the cache expiration time. If page 1 is within a few seconds of expiring, they first wait for the cache to refresh. Only then do they fetch the entire set of pages.
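A sketch of that check, assuming the server reports its cache expiry via a standard `Expires` response header (the helper name and the 5-second margin are illustrative choices, not documented values):

```python
import time
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime


def wait_if_near_expiry(response, margin_seconds=5):
    # If the cached page is about to expire (per its Expires header),
    # sleep until just past expiry so that all subsequent pages are
    # served from a single, fresh cache generation.
    expires = response.headers.get("Expires")
    if expires is None:
        return
    remaining = (parsedate_to_datetime(expires)
                 - datetime.now(timezone.utc)).total_seconds()
    if 0 < remaining <= margin_seconds:
        time.sleep(remaining + 1)
```

You would call this on the page 1 response before fetching pages 2..N; if it sleeps, refetch page 1 and start over against the refreshed cache.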
Example
```python
import requests
import time


def fetch_page(url, headers, page=1):
    params = {
        'page': page,
    }
    response = requests.get(url, params=params, headers=headers)
    response.raise_for_status()
    return response.json(), response.headers


def collect_current_data(url, headers):
    all_records = []

    # Fetch page 1 to get total pages.
    records, page1_headers = fetch_page(url, headers, page=1)
    total_pages = int(page1_headers.get('X-Pages', 1))
    all_records.extend(records)

    # Fetch remaining pages.
    for page in range(2, total_pages + 1):
        print(f"Fetching page {page}/{total_pages}...")
        records, _ = fetch_page(url, headers, page=page)
        all_records.extend(records)
        print(f"  Found {len(records)} records; total {len(all_records)} records")
        time.sleep(0.1)  # Throttle requests to avoid rate limiting.

    print(f"Total: {len(all_records)} records across {total_pages} pages")
    return all_records


if __name__ == "__main__":
    url = "https://esi.evetech.net/..."  # ... (replace with actual route)
    headers = {
        "User-Agent": "ESI-Example/1.0",
        "X-Compatibility-Date": "2025-09-30",
    }
    all_data = collect_current_data(url, headers)
    # ... (do something with all_data)
```