Alternative · Analytics aggregator
Hashdive vs
SimpleFunctions.
Same upstream venues. Hashdive surfaces analytics through a proprietary Smart Score system and web dashboard designed for human traders. SimpleFunctions ships the agent layer above it: causal-tree thesis system, autonomous trading, calibrated world model, computed indicators across 48K contracts, and a 56-tool MCP server.
Verified 2026-04 · public sources only · live SF data from /calibration
Verdict
Pick the one that fits how
you actually work.
Choose SimpleFunctions if
You are building agents, autonomous trading systems, or research that needs more than ranked market scores — calibrated probabilities with public Brier scores, causal-tree thesis modelling with auto-evaluation cycles, regime classification across the full 48K-contract universe, computed indicators (implied yield, cliff risk, liquidity availability score), and a 56-tool MCP server that drops into Claude Code or Cursor in one line.
Choose Hashdive if
You want a curated, point-and-click analytics dashboard built around proprietary Smart Score rankings — screening markets visually without querying any API or writing code. Hashdive has focused its product on exactly that use case and built its UX around traders who prefer a ranked, browser-based workflow.
Same upstream venues. Hashdive surfaces Smart Score analytics via a web dashboard for human traders. SimpleFunctions ships the agent layer above it: theses, indicators, autopilot, MCP.
At a glance
Three things that
actually differ.
Everything Hashdive gives you — normalised prices and analytics across Kalshi and Polymarket — SimpleFunctions also gives you, on the same underlying feeds.
On top of that, SF ships a causal-tree thesis system, an autonomous trading agent (Portfolio Autopilot, 1M-context LLM, 7-gate risk cascade), and 56 MCP tools that no current PM analytics product exposes.
SF also publishes live Brier scores for itself at /api/calibration — Kalshi 0.20, Polymarket 0.12 on T-24h price, past 90 days. Most competitors claim accuracy; we let you check ours.
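Checking that claim is a single unauthenticated GET. A minimal Python sketch, assuming the https://simplefunctions.dev host used elsewhere on this page; the calibration response's field names are not documented here, so the raw JSON is returned for inspection:

```python
import json
from urllib.request import urlopen

BASE = "https://simplefunctions.dev"  # host assumed from the MCP URL on this page

def calibration_url(base: str = BASE) -> str:
    # /api/calibration is a public, unauthenticated read per the claim above
    return f"{base}/api/calibration"

def fetch_calibration(base: str = BASE) -> dict:
    """Return the live calibration JSON; inspect the raw payload rather
    than assuming key names, which are not documented on this page."""
    with urlopen(calibration_url(base)) as resp:
        return json.load(resp)
```

Printing `fetch_calibration()` should surface the venue-segmented Brier scores the page cites.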
Side by side
9 dimensions · verified 2026-04

Market coverage
SimpleFunctions: Kalshi + Polymarket normalised, 48K+ active contracts indexed and queryable via REST.
Hashdive: Kalshi + Polymarket analytics surfaced through a web dashboard.
Computed indicators
SimpleFunctions: Six pre-computed indicators per contract — IY, CRI, LAS, EE, τ-days, regime label — queryable via API at /screen.
Hashdive: Proprietary Smart Score ranking system surfaced inside the dashboard; sub-signals not documented publicly.
Orderbook depth
SimpleFunctions: GET /api/public/market/{ticker}?depth=true returns bid/ask ladder, spread, and slippage estimate.
Hashdive: Not documented as a public feature.
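The depth read above is one unauthenticated GET; a sketch of building and issuing the request, assuming the https://simplefunctions.dev host (the response fields — bid/ask ladder, spread, slippage estimate — come from the row above):

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

BASE = "https://simplefunctions.dev"  # assumed host

def market_depth_url(ticker: str, base: str = BASE) -> str:
    # GET /api/public/market/{ticker}?depth=true, per the row above
    return f"{base}/api/public/market/{quote(ticker, safe='')}?depth=true"

def fetch_depth(ticker: str, base: str = BASE) -> dict:
    """Return the bid/ask ladder, spread, and slippage estimate for one contract."""
    with urlopen(market_depth_url(ticker, base)) as resp:
        return json.load(resp)
```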
API access
SimpleFunctions: Full public REST API with openapi.json, llms.txt, 56-tool MCP server, and MIT-licensed CLI — no auth required for reads.
Hashdive: No public REST API documented; the product surface is a web dashboard.
Published calibration
SimpleFunctions: Live Brier scores at /api/calibration, segmented by venue, category, and price bucket — publicly auditable.
Hashdive: Not published.
Thesis system
SimpleFunctions: POST /api/thesis/create decomposes any sentence into a causal tree, scans for edges, runs an evaluation heartbeat, and accepts signal injection at /api/thesis/{id}/signal.
Hashdive: Not in scope.
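The two thesis endpoints can be exercised with plain urllib. A sketch under stated assumptions: the payload key "sentence" is illustrative (check openapi.json for the real schema), and thesis execution is pay-per-token, so a real call likely needs an auth header that is omitted here:

```python
import json
from urllib.request import Request, urlopen

BASE = "https://simplefunctions.dev"  # assumed host

def thesis_signal_url(thesis_id: str, base: str = BASE) -> str:
    # POST target for injecting new evidence into an existing thesis
    return f"{base}/api/thesis/{thesis_id}/signal"

def post_json(url: str, payload: dict) -> dict:
    """POST a JSON body and decode the JSON response."""
    req = Request(url, data=json.dumps(payload).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)

def create_thesis(sentence: str, base: str = BASE) -> dict:
    # "sentence" as the payload key is an assumption, not documented here
    return post_json(f"{base}/api/thesis/create", {"sentence": sentence})
```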
Autonomous trading
SimpleFunctions: Portfolio Autopilot runs a 1M-context LLM across 13 data sources and passes every candidate trade through a 7-gate risk cascade before execution.
Hashdive: Not in scope.
MCP server
SimpleFunctions: 56 tools via claude mcp add simplefunctions --url https://simplefunctions.dev/api/mcp/mcp; works in Claude Code, Cursor, any MCP client.
Hashdive: No MCP server published.
Pricing and auth
SimpleFunctions: Public REST, MCP, and CLI reads require no auth; pay-per-token only on thesis and intent execution, free up to 15M tokens.
Hashdive: Not publicly documented.
Methodology
Verified 2026-04 from public sources only — Hashdive's documentation, public website, and publicly observable behaviour. We never claim non-public information about Hashdive's internals. SimpleFunctions claims on this page are computed live from /api/calibration, /api/public/cross-venue/pairs, and /api/public/markets — you can re-verify them yourself with curl.
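That re-verification amounts to three unauthenticated GETs. A minimal sketch listing the endpoints exactly as cited above, with the https://simplefunctions.dev host assumed; issuing the requests is left to curl or urlopen:

```python
BASE = "https://simplefunctions.dev"  # assumed host

# Endpoint paths copied verbatim from the methodology note above.
ENDPOINTS = [
    "/api/calibration",               # live Brier scores
    "/api/public/cross-venue/pairs",  # cross-venue pair counts
    "/api/public/markets",            # market counts
]

def verify_urls(base: str = BASE) -> list:
    """Full URLs for re-checking this page's SimpleFunctions claims;
    each should answer an unauthenticated GET."""
    return [base + path for path in ENDPOINTS]
```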
Use cases
Same data, different
best fit per scenario.
Scenario 01
Building an AI agent that needs to query Kalshi and Polymarket programmatically.
SimpleFunctions · best fit
SF's REST API, 56-tool MCP server, and CLI expose every data point — prices, orderbook depth, cross-venue pairs, computed indicators — with no auth required for reads. The MCP server installs in one line and works inside Claude Code, Cursor, or any MCP-compatible client.
Hashdive
Hashdive is a web dashboard product without a documented public REST API. Programmatic access for agents or automated pipelines is not its design target.
Scenario 02
Screening prediction markets visually using a ranked signal feed without writing any code.
SimpleFunctions
SF's /screen page surfaces pre-computed indicators across 48K+ contracts, but the primary interaction mode is API-first. If the workflow is purely visual browsing rather than programmatic querying, the dashboard is a secondary interface.
Hashdive · best fit
Hashdive is built precisely for this: a web dashboard with proprietary Smart Scores ranking markets for human traders. If point-and-click exploration is the primary workflow, Hashdive is the more direct tool.
Scenario 03
Decomposing a complex political or economic thesis into tradeable sub-claims across venues.
SimpleFunctions · best fit
SF's POST /api/thesis/create decomposes any natural-language sentence into a causal tree, propagates probabilities, scans Kalshi and Polymarket for edges, and runs an evaluation heartbeat. You can inject new signals at any time via /api/thesis/{id}/signal and fork public theses.
Hashdive
Hashdive's Smart Scores operate at the individual market level. Causal decomposition of multi-part theses spanning multiple contracts is not a documented feature.
Scenario 04
Running autonomous overnight trading with risk gating and a large-context world model.
SimpleFunctions · best fit
Portfolio Autopilot uses a 1M-context LLM reading 13 data sources and runs every candidate trade through a 7-gate risk cascade — kill switch, position limits, drawdown gate, regime check — before execution. It is designed for 24/7 autonomous operation.
Hashdive
Hashdive is a dashboard product for human traders reviewing markets manually. Autonomous execution is not a documented feature.
Live data
The SimpleFunctions claims on this page are not marketing copy. Brier scores, market counts, and cross-venue pair counts are computed live from /calibration, /screen, and /api/public/cross-venue/pairs. All public, all free, all CC-BY-4.0.
FAQ
What is SimpleFunctions' thesis system and how does it differ from Hashdive's Smart Scores?
SF's thesis system (POST /api/thesis/create) takes any natural-language sentence, decomposes it into a causal tree of testable sub-claims, propagates probabilities across the tree, and scans Kalshi and Polymarket for tradeable edges. An evaluation heartbeat runs continuously — news scan, price refresh, milestone check, LLM evaluation, confidence update. Smart Scores are a proprietary ranking signal surfaced in a dashboard. SF's thesis system is a causal decomposition engine that you can inject new evidence into via /api/thesis/{id}/signal and fork publicly. No current PM analytics product exposes this.
Does SimpleFunctions have a web dashboard like Hashdive?
SF has a screening interface at /screen that surfaces pre-computed indicators (implied yield, cliff risk, liquidity availability score, event overround, τ-days, regime label) across 48K+ active contracts. Hashdive's dashboard is built for point-and-click market exploration with Smart Score rankings. If your primary workflow is browsing a ranked feed of markets in a browser without querying any API, Hashdive's dashboard may be the more direct starting point.
How do I connect SimpleFunctions to Claude Code or Cursor?
Run: claude mcp add simplefunctions --url https://simplefunctions.dev/api/mcp/mcp. That single command exposes 56 tools — market queries, thesis creation, signal injection, cross-venue pair lookup, autopilot controls — inside any MCP-compatible client. No API key is required for read operations. Hashdive does not publish an MCP server.
What is Portfolio Autopilot and how does it work?
Portfolio Autopilot is SF's autonomous trading agent. It uses a 1M-context LLM reading 13 data sources and passes every candidate trade through a 7-gate risk cascade — including a kill switch, position limits, drawdown gate, and regime check — before execution. The architecture is designed for AI agents running 24/7, not for humans reviewing a dashboard. Hashdive's product is built around the manual, dashboard-based workflow.
Does SimpleFunctions publish its own prediction accuracy?
Yes. /api/calibration returns SF's live Brier scores segmented by venue, category, and price bucket — Kalshi 0.20, Polymarket 0.12 on T-24h price, past 90 days. Most analytics products claim accuracy or imply edge; SF makes its calibration auditable with a public API endpoint you can verify yourself with curl. Hashdive does not publish comparable calibration data.
What computed indicators does SimpleFunctions expose?
SF pre-computes six indicators across 48K+ active contracts: IY (implied yield), CRI (cliff risk index), LAS (liquidity availability score), EE (event overround), τ-days (time to settlement), and a regime label (adverse-selection classification). These are available at /screen and via the REST API. Hashdive's primary signal is its proprietary Smart Score; the underlying sub-signals are not documented publicly.
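Consuming those indicators programmatically might look like the following client-side filter. Both the field names ("iy", "cri") and the assumption that the screening data arrives as a JSON list are illustrative, not documented here — consult openapi.json for the real schema:

```python
def screen_contracts(contracts, min_iy=0.10, max_cri=0.5):
    """Keep contracts with implied yield at or above min_iy and cliff
    risk index at or below max_cri. "iy" and "cri" are assumed field
    names for the IY and CRI indicators described above."""
    return [
        c for c in contracts
        if c.get("iy", 0.0) >= min_iy and c.get("cri", 1.0) <= max_cri
    ]
```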
Is there a migration path from Hashdive to SimpleFunctions?
Hashdive is a web dashboard product without a documented public REST API, so there is no API migration path in the traditional sense. SimpleFunctions exposes a full REST API (openapi.json, llms.txt, MIT CLI) with no auth required for reads. The workflow shift is from browser-based market exploration to API-first or MCP-first interaction. The MCP server installs in one line for Claude Code or Cursor users.
Start for free.
Public endpoints are free for normal usage and rate-limited for reliability. Authenticated endpoints are free up to 15M tokens, then pay per token. No credit card to start.