OpenSERP Cloud uses standard HTTP status codes. Every error response is a JSON object:

```json
{
  "error": "short_machine_readable_code",
  "code": 400,
  "message": "Human-readable explanation",
  "reason": "OPTIONAL_DETAIL_TAG"
}
```

The `error` field is stable - switch on it in code. The `message` field is for humans and may be reworded across releases.
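A client can dispatch on the stable `error` code rather than the wording of `message`. A minimal sketch, using the envelope shown above - the exception class and the hint strings are illustrative, not part of the API:

```python
# Map the stable machine-readable error codes to a client-side exception.
# The class and hint text here are illustrative, not part of the OpenSERP API.

class OpenSERPError(Exception):
    def __init__(self, payload):
        self.code = payload.get("code")
        self.error = payload.get("error")
        self.reason = payload.get("reason")
        super().__init__(payload.get("message", "unknown error"))

def raise_for_error(payload):
    """Switch on the stable `error` field, never on `message`."""
    hints = {
        "bad_request": "fix the request before resending",
        "unauthorized": "re-check the API key",
        "insufficient_credits": "top up, then retry",
        "rate_limited": "wait for Retry-After, then retry",
    }
    err = OpenSERPError(payload)
    err.hint = hints.get(payload.get("error"), "see the status code reference")
    raise err

payload = {"error": "rate_limited", "code": 429, "message": "Rate limit exceeded."}
try:
    raise_for_error(payload)
except OpenSERPError as e:
    print(e.error, "->", e.hint)  # rate_limited -> wait for Retry-After, then retry
```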
Status code reference

| Status | `error` | When it happens | Charged? | Retry? |
|---|---|---|---|---|
| 200 | - | Success | Yes (see Pricing) | - |
| 400 | `bad_request` | Missing or invalid query params | No | No - fix the request |
| 401 | `unauthorized` | Missing / wrong / revoked API key | No | No - re-check the key |
| 402 | `insufficient_credits` | Balance hit zero | No | After top-up |
| 404 | `not_found` | Unknown engine name or path | No | No |
| 408 | `timeout` | Upstream engine took too long | No | Yes, with backoff |
| 422 | `unprocessable` | Param values are valid types but unusable together | No | No - fix the request |
| 429 | `rate_limited` | You hit a per-account or per-key rate limit | No | Yes - see Retry-After |
| 500 | `internal_error` | Bug on our side | No | Yes, with backoff |
| 502 / 503 | `service_unavailable` | Upstream search engines or fallback providers failed | No | Yes, with backoff |
Any/Fast endpoints return 502 when all requested (or default) engines fail; those all-engine failures are not charged.
Common 400 reasons
| `reason` | Cause |
|---|---|
| `EMPTY_QUERY` | No text was provided |
| `INVALID_LIMIT` | `limit` outside 1..100 |
| `INVALID_DATE_RANGE` | `date` not in YYYYMMDD..YYYYMMDD format, or end < start |
| `UNKNOWN_ENGINE` | `engines=` includes an engine we do not operate |
| `UNKNOWN_LANG` / `UNKNOWN_COUNTRY` | Unsupported code |
Validate and surface these to the caller. A 400 always means the request as sent will never succeed without changes.
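Since a 400 can never succeed on retry, it pays to catch these before sending. A pre-flight validation sketch mirroring the reason table - the query parameter name (`text` here) and the engine set are placeholders; check the Endpoints reference for the real names:

```python
import re

# Pre-flight checks mirroring the documented 400 reasons, so obviously bad
# requests never leave the client. The reason tags match the table above;
# the query param name and the supported-engine set are placeholders.

DATE_RANGE = re.compile(r"^\d{8}\.\.\d{8}$")

def validate_params(params, known_engines=frozenset({"google", "bing"})):
    """Return a list of (reason, detail) problems; empty means OK to send."""
    problems = []
    if not params.get("text", "").strip():
        problems.append(("EMPTY_QUERY", "No text was provided"))
    limit = params.get("limit")
    if limit is not None and not (1 <= limit <= 100):
        problems.append(("INVALID_LIMIT", f"limit {limit} outside 1..100"))
    date = params.get("date")
    if date is not None:
        if not DATE_RANGE.match(date):
            problems.append(("INVALID_DATE_RANGE", "expected YYYYMMDD..YYYYMMDD"))
        else:
            start, end = date.split("..")
            if end < start:
                problems.append(("INVALID_DATE_RANGE", "end < start"))
    for engine in params.get("engines", []):
        if engine not in known_engines:
            problems.append(("UNKNOWN_ENGINE", engine))
    return problems

print(validate_params({"text": "", "limit": 500, "date": "20250101..20240101"}))
```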
Rate limits
Cloud uses a token-bucket per API key. Limits are deliberately loose - you should only hit them with concurrent traffic spikes, not steady-state load.
When you exceed the bucket, we return 429 with a Retry-After header (seconds):

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 4
Content-Type: application/json

{ "error": "rate_limited", "code": 429, "message": "Rate limit exceeded." }
```
Always honor `Retry-After`. Hammering after a 429 will keep you at the limit longer.
If you have a workload that needs sustained high throughput, contact us at [email protected] before launch - we can raise limits per key.
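Since the server-side limit is a token bucket, a client-side bucket is a natural way to smooth bursts before they ever trigger a 429. A minimal sketch - the per-key capacity and refill rate are not published, so the numbers below are placeholders to tune against your own observed limits:

```python
import time

# Client-side token bucket to smooth traffic spikes before they reach the API.
# Capacity and refill rate are placeholders, NOT the server's real limits.

class TokenBucket:
    def __init__(self, capacity=10, refill_per_sec=5.0):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def acquire(self):
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.refill_per_sec)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return
            time.sleep((1.0 - self.tokens) / self.refill_per_sec)

bucket = TokenBucket(capacity=2, refill_per_sec=100.0)
for _ in range(3):
    bucket.acquire()  # the third call waits briefly for a refill
```

Call `bucket.acquire()` before each request; this throttles proactively, while the retry logic below still handles any 429 that slips through.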
Retry strategy
Retry only on 408, 429, 500, 502, 503. Use exponential backoff with jitter and cap the number of attempts. Never retry on 400, 401, 402, 404, 422 - they will not succeed on retry.
JavaScript

```javascript
async function searchWithRetry(url, key, maxAttempts = 4) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const res = await fetch(url, { headers: { Authorization: `Bearer ${key}` } });
    if (res.ok) return res.json();
    const retryable = [408, 429, 500, 502, 503].includes(res.status);
    if (!retryable || attempt === maxAttempts) {
      throw new Error(`OpenSERP ${res.status}: ${await res.text()}`);
    }
    // Prefer the server's Retry-After; otherwise exponential backoff capped at 8 s.
    const headerWait = Number(res.headers.get("Retry-After")) || 0;
    const backoff = headerWait * 1000 || Math.min(2 ** attempt * 250, 8000);
    const jitter = Math.random() * 250;
    await new Promise((r) => setTimeout(r, backoff + jitter));
  }
}
```
Python

```python
import os, random, time, requests

RETRYABLE = {408, 429, 500, 502, 503}

def search_with_retry(url, params, max_attempts=4):
    headers = {"Authorization": f"Bearer {os.environ['OPENSERP_KEY']}"}
    for attempt in range(1, max_attempts + 1):
        r = requests.get(url, params=params, headers=headers, timeout=30)
        if r.ok:
            return r.json()
        if r.status_code not in RETRYABLE or attempt == max_attempts:
            r.raise_for_status()
        # Prefer the server's Retry-After; otherwise exponential backoff capped at 8 s.
        wait = float(r.headers.get("Retry-After", 0)) or min(2 ** attempt * 0.25, 8.0)
        time.sleep(wait + random.random() * 0.25)
```
Go

```go
var retryable = map[int]bool{408: true, 429: true, 500: true, 502: true, 503: true}

func searchWithRetry(client *http.Client, req *http.Request, maxAttempts int) (*http.Response, error) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := client.Do(req)
		if err == nil && resp.StatusCode < 400 {
			return resp, nil
		}
		if resp != nil && (!retryable[resp.StatusCode] || attempt == maxAttempts) {
			return resp, nil
		}
		// Prefer the server's Retry-After; otherwise exponential backoff.
		wait := time.Duration(1<<attempt) * 250 * time.Millisecond
		if resp != nil {
			if h := resp.Header.Get("Retry-After"); h != "" {
				if s, perr := strconv.Atoi(h); perr == nil {
					wait = time.Duration(s) * time.Second
				}
			}
			resp.Body.Close() // release the failed response before retrying
		}
		time.Sleep(wait + time.Duration(rand.Intn(250))*time.Millisecond)
	}
	return nil, fmt.Errorf("max attempts exceeded")
}
```
Partial failures
Megasearch does not fail the entire request when a single engine errors: it returns results from the engines that succeeded, and final billing follows the `X-Credits-Used` response header.

Any/Fast endpoints behave differently: they try engines until one succeeds. If an engine succeeds, the response is 200 and includes `meta.engine_used` and `meta.engines_tried`. If every engine fails, the endpoint returns 502 and does not debit credits.
```json
{
  "meta": {
    "request_id": "019dc6c1-da45-706e-a57c-d671fa2862ee",
    "took_ms": 1180,
    "engine_used": "bing",
    "engines_tried": ["google", "bing"]
  },
  "results": []
}
```
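A small helper can pull the fields that matter out of such a response. The `X-Credits-Used` header and the `meta` fields come from this page; the helper itself and its return shape are illustrative:

```python
# Summarize a Megasearch/Any response. X-Credits-Used and the meta fields are
# documented above; nothing else about the wire format is assumed here.

def summarize_response(headers, body):
    """Return (results, credits_charged, engine_used, engines_tried)."""
    meta = body.get("meta", {})
    return (
        body.get("results", []),
        int(headers.get("X-Credits-Used", 0)),
        meta.get("engine_used"),
        meta.get("engines_tried", []),
    )

headers = {"X-Credits-Used": "1"}
body = {
    "meta": {"engine_used": "bing", "engines_tried": ["google", "bing"]},
    "results": [],
}
results, charged, used, tried = summarize_response(headers, body)
print(charged, used, tried)  # 1 bing ['google', 'bing']
```

Comparing `engines_tried` against `charged` in your logs makes partial-failure billing auditable after the fact.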
Debugging tips

- Echo `request_id` - every success envelope includes `meta.request_id`; send that to support to look up your call.
- Use the Search Playground - paste the parameters that failed; it shows the exact request URL and response.
- Inspect `X-Engine-Used` - for Any/Fast requests, this tells you which engine actually answered.
Next
- Endpoints reference - every parameter for every endpoint.
- Pricing & credits - what each call costs and how to read the credit headers.