AI agents on prediction markets.
World state, event probabilities, real-time data, and execution intents — composable for any agent runtime.
Four surfaces, seven host bindings. AI agents on prediction markets read the world, reason, and act — through the same CLI, HTTP/Data API, WebSocket, and MCP adapter bytes that power the platform itself. 224 CLI commands, 86 HTTP catalog entries, and 102 MCP adapter tools (current counts at /api/internal/statistics), 36,949 indicator-scored markets and 9,310 realtime-feed markets, and a 15-minute world snapshot small enough to fit in the agent's context budget.

The polymath's Wunderkammer — every instrument an agent might need to read the world.
Four surfaces — what an agent gets
Every AI agent on prediction markets composes from the same four surfaces. Read the world, look up a probability, stream a live feed, submit an intent. Each row links to the depth page.
Agent host matrix
Seven host bindings cover the production agent landscape. Same JSON shapes across every host — pick the transport that fits the runtime. Patterns and worked examples live at /agentic-usage.
Three-step agent loop
Most production agents on the platform compose the same three-step loop. The shape is intentionally minimal; the policy lives in the operator's code, not in framework primitives.
01 Read state
Run `sf world --json`, call the HTTP world endpoint, or subscribe to the realtime feed. Token-efficient by design — the snapshot is small enough to live in the agent's context budget.
02 Reason
The agent (Claude, GPT, custom) consumes the state, applies the operator's policy, and produces a candidate action — usually as a normalized intent shape.
03 Act
Submit the intent through the execution surface (CLI or REST). Idempotency keys make replays safe; dry-run validates the policy without capital at risk.
Worked examples (subscribe-react-act, tool-use, structured I/O) at /agentic-usage. For an autonomous template that runs the loop on its own, see /prediction-market-agent; for the hosted variant, /portfolio-autopilot.
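The three steps above can be sketched as a minimal Python loop. This assumes the `sf` binary is on PATH; the snapshot field names (`markets`, `probability`, `price`, `id`) and the policy threshold are illustrative placeholders, not the real schema — the operator's own policy replaces `decide`.

```python
import json
import subprocess
import uuid

def read_state():
    """Step 1: read the world snapshot via the CLI (assumes `sf` is on PATH)."""
    out = subprocess.run(["sf", "world", "--json"],
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

def decide(state, threshold=0.15):
    """Step 2: placeholder policy — flag any market whose price diverges from
    the snapshot's indicator probability by more than `threshold`.
    Field names here are illustrative, not the real snapshot schema."""
    for m in state.get("markets", []):
        edge = m.get("probability", 0) - m.get("price", 0)
        if abs(edge) > threshold:
            return {
                "market": m["id"],
                "side": "yes" if edge > 0 else "no",
                "size": 10,
                "idempotency_key": str(uuid.uuid4()),
                "dry_run": True,  # validate the pipeline without capital at risk
            }
    return None

# Step 3: submit the returned intent through the execution surface (CLI or REST).
```

The policy lives entirely in `decide`; the framework contributes nothing beyond the state read and the intent submit, which is the minimal shape the section describes.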
Calibration — not marketing
The agent acts on the same probability the marketing page publishes. There is no separate "demo number" vs "API number". Four notes on what that means in practice.
Cost and latency profile
Observed numbers from production traffic. Treat these as a profile, not a service guarantee — the platform does not publish formal SLAs.
World snapshot
~800 tokens compressed; ~30 tokens for delta-since-last. Public surface, no credential. Tail latency under 250 ms in measured production traffic.
Probability lookup
Single market: under 100 ms typical; cross-venue join: under 250 ms. Public surface, rate-limited free tier.
Real-time data feed (WS)
Sub-second event stream — orderbook deltas typically arrive within 200 ms of the venue. Edge token gates the WS connection at the proxy.
Execution intent submit
Round-trip to venue + risk-gate evaluation: typically 300–800 ms end to end on Kalshi, 500 ms – 2 s on Polymarket. Depends on venue load.
Cost per call
Read surfaces are free up to the published rate limit; agent / fund tiers unlock higher rates. No per-trade commission; venue trading fees pass through.
Read next from the library
Matched from SimpleFunctions blog, opinions, technical guides, concepts, and learn pages.
MCP Servers for Prediction Markets: Connect Claude Code to Kalshi and Polymarket
Connect Claude Code, Cursor, or Cline to Kalshi and Polymarket prediction markets via MCP. One-line setup, 18 tools, real-time market data for AI agents.
Connect Claude Code to Prediction Markets: MCP Server Setup Guide
Connect Claude Code, Cursor, or any MCP client to Kalshi and Polymarket prediction markets. 16 tools for thesis management, edge detection, and orderbook analysis.
How to Build a World-Aware Claude Agent
Give Claude real-time world awareness with prediction market data. Three integration patterns: system prompt injection, tool use, and MCP server. No API key needed.
How to Build a World-Aware Agent with the OpenAI Agents SDK
Give your OpenAI Agents SDK agent real-time world awareness. Calibrated probabilities for geopolitics, economy, energy, crypto. No API key needed.
SimpleFunctions vs Oddpool vs Raw Kalshi API — Which Prediction Market Tool Should You Use?
Compare SimpleFunctions, Oddpool, and raw Kalshi/Polymarket APIs for prediction market trading. Honest breakdown of features, pricing, and when to use each tool.
Connecting your AI agent to prediction market data in 5 minutes
Three integration paths to connect your AI agent to live prediction market data: MCP for Claude/Cursor, REST API for Python/JS, and CLI for terminal workflows.
FAQ
What is "AI agents on prediction markets"?
A composable surface that lets an LLM-driven agent — Claude Code, Codex, OpenAI tools, Cursor, or a custom runtime — read the prediction market world (Kalshi + Polymarket), reason about it, and optionally act through normalized execution intents. The primary binding is the `sf` CLI; the HTTP/Data APIs are the network binding; MCP is the compatibility adapter for hosts that require MCP.
Which APIs does the agent talk to?
Four surfaces: the world snapshot at /api/agent/world, the event-probability API, the real-time data API (REST + WebSocket), and the execution intents API. They are designed to compose — the world snapshot points at probability; probability points at the markets you might trade; the data API streams them live; intents act on them.
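The composition described above can be sketched over HTTP with nothing but the standard library. The base host and the probability path are hypothetical placeholders (only `/api/agent/world` appears in the docs above), and `next_markets` uses illustrative field names:

```python
import json
import urllib.request

BASE = "https://api.simplefunctions.example"  # hypothetical host — substitute the real one

def get_json(path):
    """GET a public read surface as JSON; the world snapshot needs no credential."""
    with urllib.request.urlopen(BASE + path) as resp:
        return json.load(resp)

def next_markets(snapshot, limit=3):
    """The snapshot points at probability; pull the first few market ids to
    look up next. Field names here are illustrative, not the real schema."""
    return [m["id"] for m in snapshot.get("markets", [])][:limit]

# world = get_json("/api/agent/world")            # surface 1: world snapshot
# for mid in next_markets(world):
#     prob = get_json(f"/api/probability/{mid}")  # surface 2: hypothetical probability path
```

The network calls are left commented so the sketch stands alone; the point is the chaining — each surface's response names what the next surface should be asked.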
Should an agent use CLI, API, or MCP?
Use CLI first when the agent can run shell commands: `sf describe --all --json`, `sf query --json`, `sf world --json`, `sf inspect --json`, and `sf agent --plain --once`. Use the HTTP/Data APIs when the agent is remote or browser/server hosted. Use `sf agent --headless` only when an external LLM runtime wants NDJSON tool-server mode. Use MCP last, when the host specifically wants an MCP connector.
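For the NDJSON tool-server mode mentioned above, the consuming runtime reads one JSON object per line from the process's stdout. A minimal sketch — the message shape emitted by `sf agent --headless` is not specified here, so the parsing is generic:

```python
import json

def parse_ndjson(stream):
    """Yield one decoded object per non-blank NDJSON line, as emitted by a
    tool-server process such as `sf agent --headless`."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

# Typical wiring: spawn the process and iterate its stdout.
# import subprocess
# proc = subprocess.Popen(["sf", "agent", "--headless"],
#                         stdout=subprocess.PIPE, text=True)
# for msg in parse_ndjson(proc.stdout):
#     ...  # dispatch to the external LLM runtime
```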
How many commands and tools are available?
The installed CLI catalog is the canonical local surface and is discoverable with `sf describe --all --json`. The HTTP catalog is available at /api/tools for remote agents. The MCP adapter exposes its own host-visible tool list. /api/internal/statistics publishes current counts for all three so pages do not carry stale marketing numbers.
Does it work with Claude vs OpenAI vs Cursor vs Codex?
Yes — the surface is host-agnostic. Claude Code and Codex work best through CLI commands; OpenAI Agents consume the HTTP/Data APIs as function calls; Cursor can use CLI from a terminal or MCP if configured. Same product surface, different binding.
How does the agent execute trades?
Through normalized execution intents — declare market, side, size, price, optional trigger, idempotency key. The execution layer runs the operator-configured risk gates, maps the intent to the venue's order shape, and submits through the operator's BYOK credential. Dry-run validates the full pipeline without capital at risk.
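The intent fields named above (market, side, size, price, optional trigger, idempotency key) can be assembled like this. The exact wire field names are illustrative; the deterministic key derivation is one possible scheme — same decision, same key, so a replayed submit is a no-op on the server side:

```python
import hashlib
import json

def build_intent(market, side, size, price, trigger=None, dry_run=True):
    """Build a normalized execution intent with a deterministic idempotency
    key so replaying the same decision cannot double-submit."""
    body = {"market": market, "side": side, "size": size, "price": price}
    if trigger is not None:
        body["trigger"] = trigger
    # Derive the key from the intent content: identical intent → identical key.
    body["idempotency_key"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()[:32]
    body["dry_run"] = dry_run
    return body
```

Defaulting `dry_run=True` mirrors the safety posture described above: the full pipeline validates without capital at risk until the operator explicitly flips it off.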
What is the cost?
Read surfaces (world, probability, search, screener, indicators) ship a free tier with rate limits sufficient for most agent prototyping. Execution and high-volume real-time tiers are priced per-seat / per-agent with usage-based multipliers above documented thresholds. SimpleFunctions never charges trading commissions; venue fees pass through.
Is the calibration claim real?
Yes. Every prediction market contract resolves to a binary outcome, so calibration is mechanical to compute. The world snapshot, the probability API, and the calibration surface share a single source — there is no marketing number that disagrees with the API number. Methodology is documented in /papers.
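Because every contract resolves to 0 or 1, the calibration check really is mechanical. A minimal sketch of the standard computation — bucket forecasts and compare mean forecast against realized frequency, plus a Brier score (this is the generic method, not the platform's documented methodology in /papers):

```python
def calibration_table(forecasts, outcomes, bins=10):
    """Bucket forecasts; per bucket, report (mean forecast, realized
    frequency, count). Well-calibrated → the first two columns track."""
    buckets = [[] for _ in range(bins)]
    for p, y in zip(forecasts, outcomes):
        idx = min(int(p * bins), bins - 1)  # clamp p == 1.0 into the top bucket
        buckets[idx].append((p, y))
    rows = []
    for b in buckets:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            freq = sum(y for _, y in b) / len(b)
            rows.append((round(mean_p, 3), round(freq, 3), len(b)))
    return rows

def brier(forecasts, outcomes):
    """Mean squared error of probability vs binary outcome; lower is better."""
    return sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / len(forecasts)
```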
What is the typical latency?
World snapshot: under 250 ms tail in measured production. Probability lookup: under 100 ms typical. Real-time data WS: sub-second from venue event to subscriber. Execution intent submit: 300 ms – 2 s depending on venue. Treat these as observed numbers, not service guarantees.
Can the agent run autonomously?
Yes — see /prediction-market-agent for the autonomous-agent template (read state → reason → act → reconcile in a loop) and /portfolio-autopilot for the hosted variant (LLM-driven portfolio policy on the same intent + risk-gate stack).
Is the data live or replayed?
Live. The world snapshot refreshes every 15 minutes; the probability API and screener refresh on a sub-second cadence; the real-time data WebSocket streams venue events as they happen. Historical surfaces (/data/historical) are a separate product for replay and backtest.
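Given the 15-minute snapshot cadence, a caching agent only needs a staleness check before re-fetching. A small sketch — the timestamp field and the one-minute slack are assumptions, not documented behavior:

```python
from datetime import datetime, timedelta, timezone

SNAPSHOT_PERIOD = timedelta(minutes=15)  # refresh cadence stated above

def is_stale(snapshot_ts, now=None, slack=timedelta(minutes=1)):
    """True when the cached world snapshot is past its refresh window and
    should be re-fetched. `snapshot_ts` is an aware UTC datetime; the slack
    absorbs clock skew and publish jitter (an assumed tolerance)."""
    now = now or datetime.now(timezone.utc)
    return now - snapshot_ts > SNAPSHOT_PERIOD + slack
```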
Related surfaces
SimpleFunctions CLI
Canonical local surface — sf world, sf query, sf inspect, sf describe, and agent modes.
Agentic usage
Patterns + worked examples for AI agents on the four surfaces — subscribe, react, act.
Agentic CLI
The sf binary as an agent control plane — JSON output, idempotency keys, dry-run safety.
World state
Calibrated 15-minute snapshot of the prediction market world — token-budgeted for agents.
Event probability API
Per-event probability across Kalshi + Polymarket with next-actions graph.
Real-time data API
Sub-second WebSocket and REST — what reactive agents subscribe to.
Prediction market agent
End-to-end autonomous agent template — read state, reason, act, reconcile.
Portfolio Autopilot
Hosted LLM-driven autonomous policy on top of the same intent + risk-gate stack.