API Reference
Complete reference for the Polaris Report API. Base URL: https://api.veroq.ai
Authentication
Pass your API key via header: Authorization: Bearer vq_live_xxx
Or query parameter: ?api_key=vq_live_xxx
Get your key at /developers or run veroq login from the Python or TypeScript SDK.
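Either authentication style works with any HTTP client. A minimal sketch with Python's stdlib urllib, building (but not sending) both request variants; the key value is a placeholder:

```python
import urllib.parse
import urllib.request

API_KEY = "vq_live_xxx"  # placeholder -- use your real key
BASE = "https://api.veroq.ai"

# Option 1: Authorization header (preferred)
req = urllib.request.Request(
    f"{BASE}/api/v1/search?" + urllib.parse.urlencode({"q": "AI"}),
    headers={"Authorization": f"Bearer {API_KEY}"},
)

# Option 2: api_key query parameter
url = f"{BASE}/api/v1/search?" + urllib.parse.urlencode(
    {"q": "AI", "api_key": API_KEY}
)

# urllib.request.urlopen(req) would send the request; omitted here.
print(req.get_header("Authorization"))
```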
Try instantly -- no signup required
Use the public demo key to test any read endpoint:
```bash
curl "https://api.veroq.ai/api/v1/search?q=AI&api_key=demo"
```

Works on search, feed, briefs, entities, trending, compare, and extract. Demo limits: 1,000 requests/month · 10/min · 100/day.
Universal Agent — 3 Lines to Intelligence
The fastest way to add intelligence to your app. Install the SDK and ask questions in plain English.
Python

```bash
pip install veroq
```

```python
from veroq import Agent

agent = Agent()
result = agent.ask("What's happening with NVDA?")
print(result.summary)
```

TypeScript

```bash
npm install @veroq/sdk
```

```typescript
import { Agent } from '@veroq/sdk';

const agent = new Agent();
const result = await agent.ask("What's happening with NVDA?");
console.log(result.summary);
```

Six methods: ask(), full(), subscribe(), runAgent(), search(), verify()
Machine Payments Protocol
Paid endpoints (/verify, /research, /extract, /compare, /context, /generate) support Stripe MPP. Agents can pay per-request without an API key — send a request with no credentials and follow the 402 payment challenge.
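The control flow for a keyless paid call can be sketched as follows. The exact challenge format is defined by Stripe's MPP; the field names below are stand-ins for illustration only, not the actual wire format:

```python
import json

def handle_response(status, headers, body):
    """Illustrative control flow for a keyless request to a paid endpoint.

    A 402 reply carries a machine-readable payment challenge; after the
    agent completes payment it retries the original request with the
    proof it received. Field names here are stand-ins, not the real
    MPP schema.
    """
    if status == 402:
        challenge = json.loads(body)
        # ...complete the MPP payment flow, then retry with the proof...
        return {"action": "pay_and_retry", "challenge": challenge}
    return {"action": "done", "data": json.loads(body)}

# Simulated 402 reply from a paid endpoint
result = handle_response(
    402, {}, json.dumps({"amount": "0.01", "currency": "usd"})
)
print(result["action"])
```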
SDK Libraries
```bash
npm install -g @veroq/cli
export VEROQ_API_KEY=vq_live_xxx

veroq ask "What's happening with NVDA?"
veroq screen "oversold semiconductors"
veroq signal MSFT
veroq compare AAPL MSFT GOOGL
veroq earnings TSLA
veroq market --human
```
```bash
pip install veroq && veroq login
```

```python
import openai

from veroq import shield
from veroq.middleware import openai_shield

# Middleware -- auto-shield every LLM call
client = openai_shield(openai.OpenAI())
response = client.chat.completions.create(...)
print(response.veroq_shield.trust_score)

# Or shield any text directly
result = shield("NVIDIA reported $22B in Q4")
print(result.trust_score, result.corrections)

# Multi-agent verified analysis
from veroq import VeroqClient

swarm = VeroqClient().create_verified_swarm(
    "Analyze NVDA",
    roles=["planner", "researcher", "verifier",
           "critic", "synthesizer"],
)
print(swarm["synthesis"]["summary"])
```

```bash
pip install veroq[async] && veroq login
```

```python
from veroq import AsyncVeroqClient

async with AsyncVeroqClient() as client:  # reads ~/.veroq/credentials
    # All methods are async
    feed = await client.feed(category="ai_ml")
    results = await client.search(
        "quantum computing",
        depth="deep",
        exclude_sources="foxnews.com",
    )

    # Extract articles
    extracted = await client.extract([
        "https://reuters.com/article/...",
    ])

    # Stream briefs in real-time
    async for brief in client.stream(category="ai_ml"):
        print(brief.headline)
```

```bash
npm install @veroq/sdk && npx veroq login
```

```typescript
import { shield, shieldOpenAI, shieldMiddleware } from "@veroq/sdk";
import OpenAI from "openai";

// Middleware -- auto-shield every LLM call
const client = shieldOpenAI(new OpenAI());
const response = await client.chat.completions.create({...});
console.log(response.veroqShield.trustScore);

// Or shield any text directly
const result = await shield("NVIDIA reported $22B");
console.log(result.trustScore, result.corrections);

// Express middleware -- one line
app.use("/api/ai", shieldMiddleware({ threshold: 0.7 }));
```

```bash
pip install langchain-veroq
```

```python
from langchain_veroq import (
    VeroqSearchTool,
    VeroqExtractTool,
    VeroqTrendingTool,
    VeroqRetriever,
)

# 40 tools for LangChain agents
tools = [
    VeroqSearchTool(api_key="vq_live_xxx"),
    VeroqExtractTool(api_key="vq_live_xxx"),
    VeroqTrendingTool(api_key="vq_live_xxx"),
]

# RAG retriever for chains
retriever = VeroqRetriever(
    api_key="vq_live_xxx",
    category="ai_ml",
    min_confidence=0.7,
    include_sources="reuters.com",
)
docs = retriever.invoke("AI regulation")
```

```bash
npm install @veroq/ai
```

```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { veroqSearch, veroqFeed } from "@veroq/ai";

const result = await generateText({
  model: openai("gpt-4o"),
  tools: {
    searchNews: veroqSearch({ apiKey: "vq_live_xxx" }),
    getLatest: veroqFeed({ apiKey: "vq_live_xxx" }),
  },
  prompt: "What's happening in AI today?",
});
```

```bash
pip install crewai-veroq
```

```python
from crewai import Agent, Task, Crew
from crewai_veroq import VeroqSearchTool, VeroqFeedTool

researcher = Agent(
    role="News Analyst",
    goal="Find and analyze the latest news",
    tools=[
        VeroqSearchTool(api_key="vq_live_xxx"),
        VeroqFeedTool(api_key="vq_live_xxx"),
    ],
)
task = Task(
    description="What are the top AI developments today?",
    agent=researcher,
    expected_output="A summary of today's top AI news",
)
crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
```

All errors return a JSON body with a message field.
| 400 | Missing required params, invalid JSON body | Check request parameters and body format |
| 401 | Missing or invalid API key / JWT token | Verify your API key is correct and not revoked |
| 403 | Key doesn't have access to this endpoint or category | Check key scopes in the developer portal |
| 404 | Invalid endpoint path or resource ID | Verify the URL path and resource exists |
| 429 | Rate limit or monthly plan limit exceeded | Back off and retry -- check RateLimit-* headers |
| 500 | Server-side issue | Retry after a short delay; contact support if persistent |
Error Response Schema
```json
{
  "status": "error",
  "message": "Descriptive error message",
  "code": 429
}
```

Per-minute rate limits apply to all authenticated requests. Monthly limits depend on your plan.
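Clients can branch on the numeric code in the error schema above. A minimal sketch; the exception class is illustrative, not part of the SDK:

```python
import json

class VeroqAPIError(Exception):
    """Illustrative wrapper for the documented error schema."""
    def __init__(self, code, message):
        super().__init__(f"{code}: {message}")
        self.code = code
        self.message = message

def raise_for_error(body: str):
    """Raise if a response body matches the error schema; else return it."""
    payload = json.loads(body)
    if payload.get("status") == "error":
        raise VeroqAPIError(payload["code"], payload["message"])
    return payload

try:
    raise_for_error('{"status": "error", "message": "Rate limit exceeded", "code": 429}')
except VeroqAPIError as err:
    # 429 and 5xx are worth retrying; 4xx client errors are not
    retryable = err.code == 429 or err.code >= 500
    print(err.code, retryable)
```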
Project Tracking
Pass the X-Project-ID header on any request to tag API calls by project (max 64 chars). Useful for tracking usage across multiple agents or applications. The header value is echoed back in the response.
View per-project usage breakdown at GET /api/v1/usage?project_id=my-project. The usage endpoint returns call counts, credit usage, and rate limit status filtered to that project. Admin analytics at /api/v1/admin/analytics/engagement also supports ?project_id= filtering.
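Tagging is just an extra header next to normal auth. A small sketch that builds the headers and enforces the 64-character limit client-side; the helper name is illustrative:

```python
def project_headers(project_id: str, api_key: str) -> dict:
    """Build headers that tag a request with a project (max 64 chars)."""
    if len(project_id) > 64:
        raise ValueError("X-Project-ID must be at most 64 characters")
    return {
        "Authorization": f"Bearer {api_key}",
        "X-Project-ID": project_id,
    }

headers = project_headers("my-project", "vq_live_xxx")  # placeholder key
print(headers["X-Project-ID"])
```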
Per-Minute Limits
| Global (all endpoints) | 100 req/min |
| Brief generation | 10 req/min |
| URL extraction | 10 req/min |
| Subscription actions | 5 req/min |
| Interactions (vote, share) | 30 req/min |
Monthly Plan Limits
| Free | 1,000 requests/mo |
| Builder | 5,000 requests/mo |
| Startup | 15,000 requests/mo |
| Growth | 40,000 requests/mo |
| Scale | 100,000 requests/mo |
| Enterprise | Unlimited |
Usage-plan subscribers have a configurable monthly spending cap (default $50). API calls are blocked with a 429 response when estimated spend reaches the cap. Manage your cap via PUT /api/v1/billing/cap.
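The cap endpoint takes an authenticated PUT. A sketch with Python's stdlib urllib that builds (but does not send) the request; the cap_usd body field is a guess, so check your billing settings for the actual schema:

```python
import json
import urllib.request

# NOTE: the "cap_usd" field name is an assumption for illustration.
body = json.dumps({"cap_usd": 100}).encode()

req = urllib.request.Request(
    "https://api.veroq.ai/api/v1/billing/cap",
    data=body,
    method="PUT",
    headers={
        "Authorization": "Bearer vq_live_xxx",  # placeholder key
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send it; omitted here.
print(req.get_method())
```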
Rate Limit Headers
Every response includes standard rate limit headers:
RateLimit-Limit -- Maximum requests allowed in the current window
RateLimit-Remaining -- Requests remaining in the current window
RateLimit-Reset -- Seconds until the rate limit window resets

Retry Strategy (Exponential Backoff)
```javascript
async function fetchWithRetry(url, options, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const res = await fetch(url, options);
    if (res.status !== 429) return res;
    // Prefer the server's reset hint; otherwise back off exponentially
    const resetAfter = res.headers.get("RateLimit-Reset");
    const delay = resetAfter
      ? parseInt(resetAfter) * 1000
      : Math.min(1000 * Math.pow(2, i), 30000);
    await new Promise(r => setTimeout(r, delay));
  }
  throw new Error("Rate limit exceeded after retries");
}
```

All feed, trending, popular, and agent-feed endpoints use a two-tier caching layer for sub-millisecond response times.
In-Memory Cache
Sub-millisecond responses from server memory with automatic eviction policies.
Persistent Cache
Survives redeploys for consistent performance. Falls back gracefully if unavailable.
Cache Response Headers
X-Cache -- HIT or MISS -- whether the response was served from cache
X-Cache-Source -- Which cache tier served the response
Cache-Control -- public, max-age=60 -- CDN and browser caching directives

Health Check Status Tiers
The GET /health endpoint returns three-tier status based on feed query latency:
ok -- Feed query under 2 seconds -- all systems nominal
degraded -- Feed query 2-5 seconds -- performance warning
critical -- Feed query over 5 seconds -- database or infrastructure issue

Structured Logging
All backend logs are structured JSON for machine-parseable observability. Compatible with log aggregation services like BetterStack, Datadog, and cloud-native log drains.
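The /health status tiers above map feed-query latency to a label; a monitor polling the endpoint can reproduce the documented thresholds client-side, e.g. to page only on critical:

```python
def health_tier(feed_latency_seconds: float) -> str:
    """Map feed-query latency to the documented /health status tiers."""
    if feed_latency_seconds < 2:
        return "ok"
    if feed_latency_seconds <= 5:
        return "degraded"
    return "critical"

print(health_tier(0.4), health_tier(3.2), health_tier(7.0))
```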