Introducing VEROQ — The Truth Protocol for Agentic AI
Five agents verify every answer before your agent sees it: evidence chains with source reliability scores, confidence decomposition, verification receipts, and a multi-agent swarm.
AI agents are making decisions on data they can't verify. Every API they call returns raw information with no confidence score, no bias detection, no evidence chain. The agent takes it at face value and acts. When it's wrong, there's no audit trail to explain why.
Today we're launching VEROQ — the truth protocol for agentic AI. The name comes from vero (truth) + query. Every query, verified.
The Problem
Search APIs return links. Data APIs return numbers. Neither tells your agent whether the information is trustworthy, how sources agree or disagree, or what the opposing perspective is. Your agent is flying blind.
Consider a trading agent that reads “NVDA beat earnings expectations.” Is that supported by multiple sources? Is the framing biased? What's the counter-argument? A search API can find the headline. Only a trust layer can answer those questions.
One Function, Entire Stack
Start with shield() — one line that verifies any LLM output. Then go deeper with /ask, /verify, and /swarm:
shield()
One line of code. Works with any LLM. Every claim extracted, verified, corrected, and stamped with a verification receipt. Add it as middleware to OpenAI, Anthropic, or Express — auto-shield every response.
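To make the middleware idea concrete, here is a minimal sketch of the shield() pattern: wrap any text-generating function so its output is claim-checked and stamped before the caller sees it. Everything here is illustrative, not the actual veroq SDK; the verifier is a stand-in stub, and the claim extraction, field names, and receipt shape are assumptions.

```python
# Hypothetical sketch of shield()-style middleware. The verifier below
# is a stub; a real shield() would verify each claim against sources.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ShieldedResponse:
    text: str
    claims: list = field(default_factory=list)
    receipt: dict = field(default_factory=dict)

def verify_claims(text: str) -> ShieldedResponse:
    # Stub: split the text into sentence-level "claims" and stamp a
    # receipt. A real implementation would verify and correct each one.
    claims = [s.strip() for s in text.split(".") if s.strip()]
    receipt = {"verified": len(claims), "verdict": "supported"}
    return ShieldedResponse(text=text, claims=claims, receipt=receipt)

def shield(llm_call: Callable[[str], str]) -> Callable[[str], ShieldedResponse]:
    """Middleware: every response from llm_call is auto-shielded."""
    def wrapped(prompt: str) -> ShieldedResponse:
        return verify_claims(llm_call(prompt))
    return wrapped

# Any LLM client fits this shape; a canned function stands in here.
ask_llm = shield(lambda prompt: "NVDA beat Q4 earnings. Revenue grew.")
resp = ask_llm("What happened with NVDA?")
print(resp.receipt["verified"])  # 2 claims extracted and stamped
```

The same wrapper shape is what makes the OpenAI, Anthropic, and Express integrations possible: anything callable that returns text can be shielded.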
/ask
Natural language in, structured intelligence out. Routes across 300+ internal endpoints automatically. Returns trade signals, confidence scores, LLM summaries, and reasoning chains. SSE streaming delivers data as each source responds — price at 500ms, technicals at 800ms, summary typing out in real-time.
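The staged-arrival behavior can be sketched with a simulated event stream. This is not the real /ask wire format, just the standard SSE `data:` framing with assumed payload fields; a real client would read the events over HTTP as each source responds.

```python
# Consuming a staged SSE stream: each data source emits an event as
# soon as it responds, so the client can render incrementally.
import json

def simulated_sse():
    # Simulated events in arrival order; field names are assumptions.
    yield 'data: {"stage": "price", "ms": 500, "value": 875.3}'
    yield 'data: {"stage": "technicals", "ms": 800, "rsi": 62}'
    yield 'data: {"stage": "summary", "ms": 1400, "text": "..."}'

def parse_events(lines):
    # SSE payloads arrive on lines prefixed with "data: ".
    for line in lines:
        if line.startswith("data: "):
            yield json.loads(line[len("data: "):])

stages = [event["stage"] for event in parse_events(simulated_sse())]
print(stages)  # ['price', 'technicals', 'summary']
```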
/verify
Pass any claim, get a structured verdict: supported, contradicted, partially supported, or unverifiable. Each verdict ships with an evidence chain showing which sources said what, a confidence breakdown by factor (source agreement, quality, recency, corroboration depth), and counter-arguments. VEROQ checks its intelligence corpus first, then falls back to live web sources, so far fewer claims come back “unverifiable.”
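One plausible shape for that response, with a toy rule for deriving the verdict from the evidence chain, looks like this. The field names follow the response fields described above, but the types and the aggregation rule are illustrative assumptions, not the actual API schema.

```python
# Hypothetical /verify response shape and a toy verdict rule.
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str
    position: str   # "supports" | "contradicts" (assumed values)
    snippet: str
    url: str

def verdict_from(chain: list) -> str:
    supports = sum(e.position == "supports" for e in chain)
    contradicts = sum(e.position == "contradicts" for e in chain)
    if supports and not contradicts:
        return "supported"
    if contradicts and not supports:
        return "contradicted"
    if supports and contradicts:
        return "partially_supported"
    return "unverifiable"   # no usable evidence either way

chain = [
    Evidence("Reuters", "supports", "NVDA topped estimates", "https://..."),
    Evidence("Bloomberg", "supports", "Beat on revenue and EPS", "https://..."),
]
print(verdict_from(chain))  # "supported"
```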
/swarm
Deep analysis with 5 verified agents. A planner breaks the problem down. A researcher gathers data. A verifier checks every claim. A critic challenges the analysis. A synthesizer produces the final answer. Every step verified, every source scored for reliability, every decision auditable via verification receipts.
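The five roles above can be sketched as a simple pipeline where each agent's output feeds the next. Each "agent" here is a plain function with made-up state keys; the real swarm runs verified LLM agents and records a receipt at every step.

```python
# Minimal sketch of the five-agent swarm as a pipeline of steps.
def planner(task):
    return {"task": task, "steps": ["gather", "verify", "critique"]}

def researcher(state):
    state["findings"] = ["NVDA beat Q4 earnings"]
    return state

def verifier(state):
    state["verified"] = [(claim, "supported") for claim in state["findings"]]
    return state

def critic(state):
    state["counterpoints"] = ["Guidance, not the beat, may drive price"]
    return state

def synthesizer(state):
    state["answer"] = f"{state['verified'][0][0]} (verdict: supported)"
    return state

state = planner("What's happening with NVDA?")
for agent in (researcher, verifier, critic, synthesizer):
    state = agent(state)   # each step's output feeds the next agent
print(state["answer"])
```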
shield() + /ask + /verify + /swarm cover 95% of what an AI agent needs. For granular control, there are 80+ endpoints for tickers, technicals, earnings, sentiment, screener, backtesting, and more.
pip install veroq
from veroq import Agent
# Ask anything
result = Agent().ask("What's happening with NVDA?")
print(result.summary) # LLM-generated answer
print(result.trade_signal) # {action: "hold", score: 50}
print(result.confidence) # {level: "high", reason: "..."}
# Verify any claim
v = Agent().verify("NVDA beat Q4 earnings")
print(v.verdict) # "supported"
print(v.evidence_chain) # [{source, position, snippet, url}...]
print(v.confidence_breakdown) # {source_agreement: 0.95, ...}
What Makes This Different
Other APIs in this space — Tavily, Exa, Alpha Vantage, Finnhub — are either search APIs that return raw links or data APIs that return raw numbers. None of them verify. None of them score confidence. None of them detect bias or generate counter-arguments.
VEROQ is the first API that sits between raw data and agent action as verified agent infrastructure. Every response includes:
- Verified Swarm — 5 agents (planner, researcher, verifier, critic, synthesizer) verify every answer
- Confidence decomposition with 4-factor breakdown (agreement, quality, recency, corroboration)
- Source reliability scores — Reuters 95%, Bloomberg 94%, Reddit 60%. Your agent knows who to trust
- Evidence chains with source names, quotes, URLs, positions, and reliability
- Verification receipts — hashable proof that claim X was verified at time T. Compliance audit trails
- Bias detection with political lean, framing analysis, loaded language, and omissions
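Two of the mechanics above lend themselves to a short sketch: rolling the 4-factor breakdown up into one confidence score, and hashing a verification receipt so the check is independently auditable. The equal weights and canonical-JSON receipt format are assumptions for illustration, not VEROQ's actual scheme.

```python
# Combining a 4-factor confidence breakdown (weights are assumed equal)
# and producing a tamper-evident hash of a verification receipt.
import hashlib
import json

def overall_confidence(breakdown, weights=None):
    weights = weights or {k: 1 / len(breakdown) for k in breakdown}
    return sum(breakdown[k] * weights[k] for k in breakdown)

breakdown = {"source_agreement": 0.95, "quality": 0.90,
             "recency": 0.80, "corroboration": 0.85}
score = overall_confidence(breakdown)

def receipt_hash(claim, verified_at, verdict):
    # Canonical JSON (sorted keys) keeps the hash stable, so anyone
    # holding the receipt fields can recompute and check it.
    payload = json.dumps(
        {"claim": claim, "verified_at": verified_at, "verdict": verdict},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

h = receipt_hash("NVDA beat Q4 earnings", "2025-01-15T14:30:00Z", "supported")
print(round(score, 3), h[:12])  # 0.875 plus the receipt hash prefix
```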
Plus full financial market data — 1,061 tickers, 200 crypto tokens, 20 technical indicators, NLP screener, backtesting, portfolio feeds, and real-time SSE streaming. One API key replaces three separate data providers.
Open Source Ecosystem
VEROQ ships with 31 open source repositories:
- 5 SDKs — Python, TypeScript, MCP (62 tools), CLI, Vercel AI
- 3 framework integrations — LangChain, CrewAI, n8n
- 9 demo apps — trading agent, research agent, portfolio tracker, fact-checker, and more
- 4 tools — Google Sheets, Cursor plugin, evaluation suite, cookbook
Every repo is MIT licensed. Clone a demo app, customize it, deploy it. From idea to working agent in minutes.
Try It Now
The best way to understand VEROQ is to use it. The homepage has live interactive demos for both /ask and /verify — type a question or paste a claim and watch the agents work through it in real-time.
1,000 free credits per month. No credit card required.
# Install
pip install veroq

# Or use the CLI
npm install -g @veroq/cli
veroq ask "What's happening with NVDA?"
veroq verify "The Fed held rates steady"