SimpleFunctions

AI agents on prediction markets.

World state, event probabilities, real-time data, and execution intents — composable for any agent runtime.

Four surfaces, six host bindings. AI agents on prediction markets read the world, reason, and act — through the same CLI, HTTP/Data API, WebSocket, and MCP adapter bytes that power the platform itself. 224 CLI commands, 86 HTTP catalog entries, and 102 MCP adapter tools (counts at /api/internal/statistics); 36,949 indicator-scored markets and 9,310 realtime-feed markets; and a 15-minute world snapshot small enough to fit in the agent's context budget.

The polymath's Wunderkammer — every instrument an agent might need to read the world.

Four surfaces — what an agent gets

Every AI agent on prediction markets composes from the same four surfaces. Read the world, look up a probability, stream a live feed, submit an intent. Each row links to the depth page.

| Surface | What the agent does with it |
| --- | --- |
| World state | A 15-minute calibrated snapshot of the prediction market world — ~800 tokens, the agent's context budget for the next call. Read in one fetch. |
| Event probabilities | Per-event probability with cross-venue normalization, gov + econ overlays, history, and next-action graph attached. The agent's "what is true right now" surface. |
| Real-time data | Sub-second WebSocket plus REST for markets, orderbooks, trades, candles, indicators. The agent's "what is changing" surface — what triggers actually watch. |
| Execution intents | Idempotent normalized intents with triggers, status state machine, audit log. The agent's "act on the world" surface — across Kalshi and Polymarket. |

Agent host matrix

Six host bindings cover the production agent landscape. Same JSON shapes across every host — pick the transport that fits the runtime. Patterns and worked examples live at /agentic-usage.

| Host | How it binds | Surface depth |
| --- | --- | --- |
| Agentic CLI (`sf`) | Primary control plane. Agents run the same binary as operators: stable JSON output, local auth, idempotency keys, dry-run safety, trace/replay. | Full surface — see /agentic-cli |
| Claude Code / Codex | Call `sf ... --json` from the shell. Let the coding agent own repo context while SimpleFunctions owns market data, thesis state, and execution commands. | Full surface via CLI |
| OpenAI Agents / GPT | Use REST function calls generated from the public HTTP tool manifest. Same server shapes as the CLI wraps, without requiring a local binary. | Full HTTP surface — read + execute |
| Custom API agents | Call REST endpoints directly for world state, probability lookup, screening, realtime data, intents, fills, and audit export. | HTTP + WebSocket |
| Custom WebSocket agents | Subscribe to `ticker:*` / `orderbook:*` / `featured` / `trade:*` / `candle:*` topics. Pair the stream with CLI or REST for execution. | Read-stream |
| Claude Desktop / Cursor MCP | Use the MCP adapter when the host expects MCP. It is the compatibility layer over the same product surface, not the canonical entry point. | Adapter surface |
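The custom WebSocket binding's topic filters can be sketched client-side. The topic patterns (`ticker:*`, `orderbook:*`) come from the matrix above; the glob-style matching and the `route` helper are illustrative assumptions about how a custom client might dispatch incoming events, not the platform's wire protocol.

```python
# Sketch: route incoming stream events against glob-style subscriptions.
# Topic names come from the host matrix; the matching logic is assumed.
from fnmatch import fnmatch

subscriptions = ["ticker:*", "orderbook:DEMO-MKT"]

def route(topic: str) -> bool:
    # An event is delivered if any active subscription pattern matches it.
    return any(fnmatch(topic, pattern) for pattern in subscriptions)

print(route("ticker:DEMO-MKT"))   # matched by ticker:*
print(route("orderbook:OTHER"))   # no matching subscription
```

Pairing this read-stream with CLI or REST for execution keeps the trigger logic in the operator's process rather than on the server.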

Three-step agent loop

Most production agents on the platform compose the same three-step loop. The shape is intentionally minimal; the policy lives in the operator's code, not in framework primitives.

01

Read state

Run `sf world --json`, call the HTTP world endpoint, or subscribe to the realtime feed. Token-efficient by design — the snapshot is small enough to live in the agent's context budget.

02

Reason

The agent (Claude, GPT, custom) consumes the state, applies the operator's policy, and produces a candidate action — usually as a normalized intent shape.

03

Act

Submit the intent through the execution surface (CLI or REST). Idempotency keys make replays safe; dry-run validates the policy without capital at risk.

Worked examples (subscribe-react-act, tool-use, structured I/O) at /agentic-usage. For an autonomous template that runs the loop on its own, see /prediction-market-agent; for the hosted variant, /portfolio-autopilot.

Calibration — not marketing

The agent acts on the same probability the marketing page publishes. Four notes on what that means in practice.

01
World snapshot publishes the same probability the agent acts on. There is no separate "marketing number" vs "API number".
02
Calibration is live: every market resolves to a binary outcome, so a Brier-style track record is mechanical. The /calibration surface and the /world endpoint share the same source.
03
When confidence is low (sparse history, illiquid market), the snapshot says so explicitly. Agents are expected to read confidence as a first-class field.
04
See /papers and the Wisdom-Aggregating Brier work for the methodology behind cross-venue normalization and confidence assignment.
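Because every contract resolves to a binary outcome, the Brier-style track record mentioned in note 02 is a one-line computation: mean squared error between the published probability and the 0/1 resolution. The forecasts below are made-up illustrative data, not platform output.

```python
# Brier score over resolved binary markets: lower is better-calibrated.
# Each pair is (published probability, resolved outcome). Data is illustrative.
forecasts = [(0.80, 1), (0.30, 0), (0.65, 1), (0.10, 0)]

brier = sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)
print(round(brier, 4))
```

A perfectly calibrated, perfectly confident forecaster scores 0; always answering 0.5 scores 0.25, which is the usual baseline to beat.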

Cost and latency profile

Observed numbers from production traffic. Treat these as a profile, not a service guarantee — the platform does not publish formal SLAs.

World snapshot

~800 tokens compressed; ~30 tokens for delta-since-last. Public surface, no credential. Tail latency under 250 ms in measured production traffic.

Probability lookup

Single market: under 100 ms typical; cross-venue join: under 250 ms. Public surface, rate-limited free tier.

Real-time data feed (WS)

Sub-second event stream — orderbook deltas typically arrive within 200 ms of the venue. Edge token gates the WS connection at the proxy.

Execution intent submit

Round-trip to venue + risk-gate evaluation: typically 300–800 ms end to end on Kalshi, 500 ms – 2 s on Polymarket. Depends on venue load.

Cost per call

Read surfaces are free up to the published rate limit; agent / fund tiers unlock higher rates. No per-trade commission; venue trading fees pass through.
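The ~30-token delta-since-last figure falls out of shipping only changed fields. A minimal sketch, assuming a flat market-to-probability snapshot shape — the actual /api/agent/world schema is richer than this:

```python
# Why a delta is tiny relative to the full snapshot: only changed keys ship.
# The flat {market_id: probability} shape here is an illustrative assumption.
def delta(prev: dict, curr: dict) -> dict:
    return {k: v for k, v in curr.items() if prev.get(k) != v}

prev = {"DEMO-A": 0.62, "DEMO-B": 0.41, "DEMO-C": 0.90}
curr = {"DEMO-A": 0.62, "DEMO-B": 0.44, "DEMO-C": 0.90}

print(delta(prev, curr))
```

Across a 15-minute refresh window most probabilities are unchanged, so the delta payload stays small no matter how many markets the full snapshot covers.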

Read next from the library

Matched from SimpleFunctions blog, opinions, technical guides, concepts, and learn pages.

Browse library

FAQ

What is "AI agents on prediction markets"?

A composable surface that lets an LLM-driven agent — Claude Code, Codex, OpenAI tools, Cursor, or a custom runtime — read the prediction market world (Kalshi + Polymarket), reason about it, and optionally act through normalized execution intents. The primary binding is the `sf` CLI; the HTTP/Data APIs are the network binding; MCP is the compatibility adapter for hosts that require MCP.

Which APIs does the agent talk to?

Four surfaces: the world snapshot at /api/agent/world, the event-probability API, the real-time data API (REST + WebSocket), and the execution intents API. They are designed to compose — the world snapshot points at probability; probability points at the markets you might trade; the data API streams them live; intents act on them.

Should an agent use CLI, API, or MCP?

Use CLI first when the agent can run shell commands: `sf describe --all --json`, `sf query --json`, `sf world --json`, `sf inspect --json`, and `sf agent --plain --once`. Use the HTTP/Data APIs when the agent is remote or browser/server hosted. Use `sf agent --headless` only when an external LLM runtime wants NDJSON tool-server mode. Use MCP last, when the host specifically wants an MCP connector.

How many commands and tools are available?

The installed CLI catalog is the canonical local surface and is discoverable with `sf describe --all --json`. The HTTP catalog is available at /api/tools for remote agents. The MCP adapter exposes its own host-visible tool list. /api/internal/statistics publishes current counts for all three so pages do not carry stale marketing numbers.

Does it work with Claude vs OpenAI vs Cursor vs Codex?

Yes — the surface is host-agnostic. Claude Code and Codex work best through CLI commands; OpenAI Agents consume the HTTP/Data APIs as function calls; Cursor can use CLI from a terminal or MCP if configured. Same product surface, different binding.

How does the agent execute trades?

Through normalized execution intents — declare market, side, size, price, optional trigger, idempotency key. The execution layer runs the operator-configured risk gates, maps the intent to the venue's order shape, and submits through the operator's BYOK credential. Dry-run validates the full pipeline without capital at risk.

What is the cost?

Read surfaces (world, probability, search, screener, indicators) ship a free tier with rate limits sufficient for most agent prototyping. Execution and high-volume real-time tiers are priced per-seat / per-agent with usage-based multipliers above documented thresholds. SimpleFunctions never charges trading commissions; venue fees pass through.

Is the calibration claim real?

Yes. Every prediction market contract resolves to a binary outcome, so calibration is mechanical to compute. The world snapshot, the probability API, and the calibration surface share a single source — there is no marketing number that disagrees with the API number. Methodology is documented in /papers.

What is the typical latency?

World snapshot: under 250 ms tail in measured production. Probability lookup: under 100 ms typical. Real-time data WS: sub-second from venue event to subscriber. Execution intent submit: 300 ms – 2 s depending on venue. Treat these as observed numbers, not service guarantees.

Can the agent run autonomously?

Yes — see /prediction-market-agent for the autonomous-agent template (read state → reason → act → reconcile in a loop) and /portfolio-autopilot for the hosted variant (LLM-driven portfolio policy on the same intent + risk-gate stack).

Is the data live or replayed?

Live. The world snapshot refreshes every 15 minutes; the probability API and screener refresh on a sub-second cadence; the real-time data WebSocket streams venue events as they happen. Historical surfaces (/data/historical) are a separate product for replay and backtest.

Related surfaces