Insights · Apr 30, 2026 · 17 min read

Event Compression Is the Signal

Volume is posterior heat. Nonlinearity is the front wave. Where alpha actually lives in prediction markets, and how to systematize the signal.

Patrick Liu
#prediction-markets #trading #strategy #alpha #event-time

In prediction markets, volume is posterior heat. Nonlinearity is the front wave.


If you accept that prediction markets trade in event time rather than clock time, the next question is what this actually means for strategy. The honest answer is that almost everything you've been taught about screening, charting, and strategy in equities is downstream of a different time substrate, and importing those habits into prediction markets gets you killed slowly.

The most common mistake comes first. Most participants screen for "high volume markets" because that's what equity traders are taught: go where the action is. Then they apply momentum, mean reversion, candlestick patterns, all the standard kit. None of these are wrong tools, exactly. They are just downstream of a more fundamental fact that almost no one is pricing.

In prediction markets, alpha doesn't come from being a better forecaster. It comes from being earlier to recognize that the market's event time has changed and the price hasn't. Volume is the aftershock. Event compression is the front wave. If you're trading the volume signal, you're already late.

Information density blocks

Stock prices are anchored by continuous quantities. Cash flows. Discount rates. Sector beta. Earnings cycles. There is always something for a stock price to discount, even on a slow day. Information arrives in a steady stream, mostly as drift, occasionally as a discrete shock. The shocks are big but rare. The drift is small but constant.

Prediction markets do not have continuous anchors. They are discrete propositions: will this bill pass, will this candidate win, will this storm make landfall above Cat 3. There is no daily cash flow, no continuous fundamentals. There is just a probability estimate of a future binary, and a set of facts that, when revealed, will lock that probability in.

This means time in prediction markets isn't experienced as a linear axis at all. It's path length on an event tree. Between events, no path. At events, multiple new paths open and close. The price isn't drifting against a continuously evolving cash flow. It's sitting still, then jumping when the tree branches.

Prediction market time is compressed into information density blocks. Most of the calendar is empty: nothing relevant is happening, no path on the tree is being traversed. Then a node arrives. A committee schedule, a model run, a court filing, a poll, a candidate withdrawal, an agency rule. The node compresses what would have been weeks of price discovery in a normal market into minutes. The block is short on the calendar but enormous in informational weight.

Stocks have these too. They are called earnings releases. But for stocks, the dominant mode is continuous. For prediction markets, the dominant mode is the block.

The volume trap

Now apply this to volume.

In a stock market, volume is fairly informative as a signal. Persistent volume increases usually signal institutional participation, narrative formation, and rising attention, often before price moves much. You can build strategies around it: screening for unusual volume, watching the tape for accumulation and distribution patterns. Volume is a leading indicator at the right horizon.

In prediction markets, volume is mostly a lagging indicator. The information density block, the actual compression event, fires first. Then the price moves, often violently, often in a few minutes. Then attention catches up. Then volume rises as participants pile in. By the time you see the volume spike on a screener, the alpha has already been distributed. You're looking at the heat trail, not the source.

The pattern runs in roughly six steps: news hits, professional traders react, price moves, headlines pick it up over the next 30 to 60 minutes, retail volume floods in, and the market consolidates around a new equilibrium. If your discovery process starts at "high volume," you arrive at step five of six.

This is the inverse of how it works in equities, where the volume often shows up before the price does. The reason is that prediction markets are thin, decisions are fast, information is cleaner, and the move-to-volume feedback loop runs the other direction.

So: volume in prediction markets is not where alpha lives. The herd is there. The visible move is there. But the front wave of nonlinearity already passed.

Volume as execution filter, not discovery primitive

This isn't an argument that volume doesn't matter. It absolutely matters. Without depth, you can't enter at a sane price, can't exit at all, can't size up, can't verify that the listed price is real.

The framing that gets it right: volume is an execution filter, not a discovery primitive.

You discover the trade idea by looking at where event time is compressing, where the substrate of the market is changing in a way the price hasn't absorbed. Then you check, as a downstream constraint, whether the market actually has the depth and spread to take a position cleanly. If yes, trade it. If no, watch it for when liquidity arrives or skip it entirely.

This sequence is the inverse of "screen high-volume markets, then look for setups." It's "find compression events, then check if executable."

Most retail-style prediction market workflows have these two steps in the wrong order, which produces a bias toward over-traded markets where alpha is already gone, and under-attention to thin markets where the alpha exists but the screener missed it.

The right strategy frame

Once you've stopped using volume as the discovery signal, the question becomes: what is the discovery signal?

The right frame is two-clocked.

Each market has a clock time. Days until resolution. Wall-clock distance to expiry. This is what every interface shows by default.

Each market also has an event time. Distance, measured in events not in days, to the next catalyst that will materially change the proposition. For a bill, that might be one committee meeting away. For an econ market, one CPI release away. For a weather market, one model run. For an election market, one debate or one filing deadline. Sometimes the event time is "zero": the catalyst has already happened and the market is just digesting. Sometimes the event time is "many": the resolution is far away and lots of catalysts remain.

Most participants look at clock time and price. The right strategy is to estimate event time and ask: did the event time of this market just shrink in a way the price hasn't priced?

Two signs that event time has shrunk.

A scheduled catalyst is approaching faster than the market seems to recognize. The CPI release is in 36 hours, but the implied vol of related markets isn't priced for it. The committee hearing is on the calendar but the relevant market is flat. The procedural deadline is closer than usual but the price doesn't reflect the shorter window.

A surprise catalyst already fired. The court filing posted. The model run dropped. The candidate spoke. The price has started to move but not finished moving. The event time just collapsed in a single step.

If event time is shrinking and price hasn't tracked, you have a candidate trade. The size of the candidate is a function of how much event time compressed and how little price absorbed.
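A minimal sketch of that check, in Python. Everything here is hypothetical: the `MarketClock` fields, the `is_candidate` name, and the thresholds are illustrative assumptions, not part of any SimpleFunctions interface. The core test is just two numbers per market: how much the distance to the next catalyst shrank, and how much of the typical reaction the price has already made.

```python
from dataclasses import dataclass

@dataclass
class MarketClock:
    """Hypothetical per-market snapshot of the two clocks."""
    hours_to_next_catalyst: float   # event time, converted to hours for comparison
    prior_hours_to_catalyst: float  # the same distance measured at the last check
    price_move_since_check: float   # absolute move in cents since the last check
    typical_reaction_move: float    # historical move for catalysts of this kind, cents

def is_candidate(m: MarketClock, shrink_ratio: float = 0.5, absorb_ratio: float = 0.3) -> bool:
    """Flag markets whose event time shrank sharply while price barely moved.

    shrink_ratio: the catalyst is now at most half as far away as it was.
    absorb_ratio: price has absorbed under 30% of the typical reaction.
    """
    if m.prior_hours_to_catalyst <= 0 or m.typical_reaction_move <= 0:
        return False
    shrank = m.hours_to_next_catalyst <= shrink_ratio * m.prior_hours_to_catalyst
    absorbed = m.price_move_since_check / m.typical_reaction_move
    return shrank and absorbed < absorb_ratio

# Example: a hearing moved up from ~10 days out to ~36 hours, price moved 1¢
# against a typical 8¢ reaction -> candidate.
print(is_candidate(MarketClock(36.0, 240.0, 1.0, 8.0)))  # True
```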

This is quantifiable

This isn't a vibe-based argument. It can be made into a metric, and arguably should be.

Call it an event compression score. It's not volatility (which is a function of past price moves). It's not volume (which is a function of past trading activity). It's a forward-looking measure of how much resolution-relevant information is scheduled to be released in a short window relative to how much the market has already priced in.

The components are roughly:

How much resolution-relevant information will be released in the next 24h, 72h, or 7d, conditional on what the market is about. For a Congress market, that's procedural events on the bill's path. For an econ market, that's the data calendar. For a weather market, that's the model schedule. The base data is structural and largely deterministic if you have the right source feeds.

Whether the release window is matched by current price behavior. If a major catalyst is 8 hours away and the implied move (from related markets, from history of similar setups) is 12¢ but the current spread is 1¢ and price hasn't moved in 3 days, that's a mismatch. The compression is high. The price has not adjusted.

Whether prior nodes were smoothly absorbed. If the last three procedural milestones each produced 5¢ to 8¢ moves and the current similar milestone is producing zero move so far, the market is either ahead of you (they think this milestone doesn't matter) or behind you (they haven't noticed yet).

How similar historical events typically jumped. The base rate matters. If "candidate withdraws from primary" historically produces a 20¢ shift and the current configuration suggests a withdrawal is plausible in the next week, that's measurable potential energy.

Build all four into one score and you have something that ranks markets not by volume or by 24h change, but by how much event time is compressing relative to absorbed price. That ranking is much closer to where alpha actually lives than any volume screener.
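One way to make the combination concrete: normalize each component to a 0..1 range and take a weighted sum. The field names, weights, and normalizations below are illustrative assumptions, not a published formula; the point is only that the four inputs the article describes reduce to a single ranked number.

```python
from dataclasses import dataclass

@dataclass
class CompressionInputs:
    scheduled_info_density: float  # 0..1: resolution-relevant releases in the window vs. a busy baseline
    price_mismatch: float          # 0..1: implied move for the window vs. move actually priced so far
    prior_node_gap: float          # 0..1: historical per-milestone move vs. move on the current milestone
    historical_jump_size: float    # 0..1: base-rate jump size for this event type, scaled

def event_compression_score(x: CompressionInputs,
                            weights=(0.35, 0.30, 0.20, 0.15)) -> float:
    """Forward-looking compression score in [0, 1]; higher means more scheduled
    information relative to what the price has absorbed."""
    parts = (x.scheduled_info_density, x.price_mismatch,
             x.prior_node_gap, x.historical_jump_size)
    clipped = [min(max(p, 0.0), 1.0) for p in parts]
    return sum(w * p for w, p in zip(weights, clipped))

# CPI in 8 hours, 12¢ implied move vs. almost nothing priced, prior prints moved
# the market, large historical base rate -> ranks near the top.
print(round(event_compression_score(CompressionInputs(0.9, 0.95, 0.7, 0.8)), 3))  # 0.86
```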

Types of compression

Different market families produce different shapes of event compression. Understanding which type you're looking at matters for both the magnitude and the timing.

Procedural time compression. Government markets. A bill that has been in committee suddenly enters markup, gets added to the floor calendar, hits cloture or a vehicle for amendment. A nomination that was sitting in committee gets discharged. A regulatory rule that was open for comment has the comment period close. Each of these is a procedural step that compresses what was previously diffuse uncertainty into a near-term decision. Query Gov surfaces the procedural state. The trade signal is the difference between the procedural step that just occurred and the price's reaction so far.

Scheduled data compression. Economic markets. CPI, NFP, FOMC, GDP advance, jobless claims, retail sales, ISM. These are calendar releases. The compression timing is fully scheduled. What varies is how the print itself diverges from consensus, and how related markets price the run-up. Query Econ surfaces the calendar and the distance to the last print. The trade signal is the relationship between consensus, recent prints, and the implied move in related markets.

Model-time compression. Weather and any markets with model-driven probabilities. The NOAA, ECMWF, GFS, ICON model run cycles are the time substrate. Each run drops, the cone shifts or doesn't, the path probability re-rates. The compression happens on the model schedule, not the wall clock. The trade signal is the change in model output relative to the prior run, before the market has absorbed it.

Political event compression. Election and political markets. Filing deadlines, debate schedules, polling release cadence, primary days, certification deadlines, swing-state legal milestones. Each one is a discrete event that can collapse a probability distribution. Largely scheduled but with surprise components from withdrawals, endorsements, statements. The trade signal is in the proximity to the event versus current absorption.

These categories aren't exhaustive but they cover the bulk of major prediction market activity. The framework: identify the dominant compression type per market, monitor the relevant source clock, score the gap between source movement and market response.
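In practice that framework is mostly configuration: each market family points at the source clock that defines its event time. A sketch, with placeholder feed identifiers that are illustrative only and not real endpoint names:

```python
from enum import Enum

class CompressionType(Enum):
    PROCEDURAL = "procedural"     # bill actions, nominations, rule comment periods
    SCHEDULED_DATA = "scheduled"  # CPI, NFP, FOMC and other calendar releases
    MODEL_TIME = "model"          # ECMWF / GFS / ICON run cycles
    POLITICAL = "political"       # debates, filing deadlines, primaries

# Placeholder feed identifiers: illustrative only.
SOURCE_CLOCKS = {
    CompressionType.PROCEDURAL: ["congress_bill_actions", "agency_rule_dockets"],
    CompressionType.SCHEDULED_DATA: ["econ_release_calendar"],
    CompressionType.MODEL_TIME: ["weather_model_run_schedule"],
    CompressionType.POLITICAL: ["election_calendar", "filing_deadlines"],
}

def source_clocks_for(market_family: str) -> list[str]:
    """Return the feeds that define event time for a market family (hypothetical mapping)."""
    family_to_type = {
        "government": CompressionType.PROCEDURAL,
        "economics": CompressionType.SCHEDULED_DATA,
        "weather": CompressionType.MODEL_TIME,
        "elections": CompressionType.POLITICAL,
    }
    return SOURCE_CLOCKS[family_to_type[market_family]]

print(source_clocks_for("weather"))  # ['weather_model_run_schedule']
```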

Why 15-minute candles are wrong

Once you internalize event compression, the absurdity of applying generic timeframe charts to prediction markets becomes obvious.

A 15-minute candle on a stock is a meaningful unit. Roughly 15 minutes of trading happens, against roughly 15 minutes of new information. The candle has shape, body, wicks. You can make claims about direction, momentum, exhaustion based on it.

A 15-minute candle on a prediction market is, most of the time, garbage. It might be two retail trades with no information flow connecting them. It might be a market maker repricing on a stale book. It might be entirely empty. The candle has no ground truth backing it because there was no information event in the window. Reading patterns off it is reading patterns off noise.

Worse: when there IS an information event in the window, 15 minutes is too coarse. The actual price discovery happens in 30 seconds to 3 minutes after the event hits the wire. A 15-minute candle aggregates the pre-event flat, the burst of repricing, and the post-event consolidation into one bar that hides everything you actually want to see.

Prediction market time isn't 15-minute units. It's "distance to the next state-changing event." Sometimes that distance is 5 seconds (a court filing was just posted). Sometimes it's 3 weeks (the next scheduled debate). The right time axis is event-anchored, not calendar-anchored.

From time-series chart to event-series chart

Charts for prediction markets, if they were designed for the substrate they're actually plotting, would not be time-series first. They would be event-series first.

The horizontal axis is still time, but the meaningful structure is the event markers laid on top of it. Every data release. Every bill action. Every court filing. Every market listing change. Every venue add or remove. Every news burst above some threshold. Every model update.

The strategy questions then become event-relative, not time-relative. How fast did the market react to that filing? How much did it move on that data print versus the historical reaction size? Is the price still drifting two hours after the model run, or has it stabilized? Did related markets move in sympathy, and on what lag?

This is the actual tape reading of prediction markets. It isn't the kind of tape reading you do on a stock, where every tick has order flow context. It's a different shape: events firing, prices reacting, lags forming, related markets confirming or disconnecting, attention catching up or staying absent.

A prediction market terminal that gets this right shows you events as primary objects and price as a function of them, not the other way around.
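A rough sketch of what that event-relative read looks like in code, using only the standard library. The function, thresholds, and toy data are assumptions for illustration: given a tick series and a list of source-event timestamps, it measures how long the price took to react and how far it moved in the post-event window, which is the raw material for the questions above.

```python
from bisect import bisect_left

def event_reactions(ticks, events, move_threshold=0.02, window_s=1800):
    """Event-anchored read of a price series.

    ticks:  list of (unix_ts, price) sorted by time
    events: list of unix_ts for source events (filings, prints, model runs)
    Returns, per event: reaction lag in seconds (None if no move cleared the
    threshold) and the absolute move over the post-event window.
    """
    times = [t for t, _ in ticks]
    out = []
    for ev in events:
        i = bisect_left(times, ev)
        if i == 0 or i >= len(ticks):
            continue
        pre_price = ticks[i - 1][1]  # last trade before the event
        lag, end_price = None, pre_price
        for t, p in ticks[i:]:
            if t > ev + window_s:
                break
            end_price = p
            if lag is None and abs(p - pre_price) >= move_threshold:
                lag = t - ev         # first tick that cleared the threshold
        out.append({"event": ev, "reaction_lag_s": lag,
                    "window_move": abs(end_price - pre_price)})
    return out

# Toy series: flat at 0.40, a filing at t=1000, repricing to ~0.52 within two minutes.
ticks = [(900, 0.40), (950, 0.40), (1090, 0.47), (1150, 0.52), (2000, 0.51)]
print(event_reactions(ticks, [1000]))  # [{'event': 1000, 'reaction_lag_s': 90, 'window_move': 0.11}]
```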

Three layers of strategy

Once you have the event-time framing, the strategy itself decomposes cleanly into three layers, each answering a different question. They have to be evaluated in order.

Layer 1: Nonlinear discovery. Where in the market universe is event time compressing? This is the question event compression score is built to answer. It returns a ranked list of markets where the substrate is moving even if the price isn't. This layer doesn't care about depth, spread, or price level yet. It only cares about which clocks just changed.

Layer 2: Reaction gap. For each candidate from Layer 1, has the market absorbed the change? Look at price (did it move proportional to historical reactions to similar nodes?). Look at volume (has flow shown up?). Look at spread (did market makers widen, signaling uncertainty, or narrow, signaling confidence in a new equilibrium?). Look at related markets (did they confirm or stay disconnected?). The reaction gap is the difference between what should have moved and what did. A wide gap means the alpha is still on the table. A closed gap means the market already absorbed it and you're late.

Layer 3: Execution viability. For each candidate that passes Layer 2, can you actually trade it? Check spread, depth, total available size at acceptable prices, expiry timeline relative to your thesis horizon, adverse selection risk on the venue. This is the layer where volume returns to relevance, but as a filter, not a discoverer. You'd rather take half-size in a thinner market with a wide compression gap than full size in a thick market where the gap closed three hours ago.

A strategy that runs this stack in order ends up systematically biased toward early entry on real edges, not toward chasing the hottest markets.
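A sketch of running the stack in order, assuming you already have a compression score, an expected-versus-realized move estimate, and basic book stats per market. All names and thresholds are placeholders; the point is the ordering: discovery first, then the reaction gap, and execution only as the final filter.

```python
from dataclasses import dataclass

@dataclass
class Market:
    id: str
    compression: float    # Layer 1: event compression score, 0..1
    expected_move: float  # cents, historical reaction for this kind of node
    realized_move: float  # cents, move since the node fired
    spread: float         # cents
    depth_usd: float      # size available near the touch

def rank_opportunities(markets, min_compression=0.6,
                       min_gap=0.5, max_spread=3.0, min_depth=500.0):
    """Run the three layers in order: discovery, reaction gap, execution."""
    candidates = []
    for m in markets:
        if m.compression < min_compression or m.expected_move <= 0:
            continue                                  # Layer 1: is the substrate moving?
        gap = 1.0 - min(m.realized_move / m.expected_move, 1.0)
        if gap < min_gap:
            continue                                  # Layer 2: already absorbed?
        if m.spread > max_spread or m.depth_usd < min_depth:
            continue                                  # Layer 3: can we actually trade it?
        candidates.append((m.id, round(m.compression * gap, 3)))
    return sorted(candidates, key=lambda x: -x[1])

markets = [
    Market("cpi-above-3pct", 0.85, 12.0, 1.0, 1.0, 4_000),     # wide gap, tradable
    Market("bill-passes-senate", 0.90, 8.0, 7.5, 1.0, 20_000),  # gap already closed
    Market("storm-cat3-landfall", 0.70, 10.0, 2.0, 8.0, 150),   # fails execution
]
print(rank_opportunities(markets))  # [('cpi-above-3pct', 0.779)]
```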

Why context feeds aren't optional

You cannot do Layer 1 from the orderbook alone. The orderbook tells you about price and depth. It does not tell you that a CPI release is in 6 hours, that a court filing posted on the docket 18 minutes ago, that the ECMWF model just shifted the storm path 40 miles east, that a senator's office posted a statement at 14:32, that the bill the market depends on was just added to a floor calendar.

These signals live outside the order book. They live in source feeds: government event streams, economic release calendars, weather model outputs, news cadence, market listing changes, related contract behavior, attention flow.

A prediction market trading system that doesn't ingest these is operating with one eye closed. It can react to price moves, but it can't see the substrate underneath. By construction, it's running a strategy of "respond to volume" because that's the only signal it has.

What SimpleFunctions has been building, with Query Gov, Query Econ, world delta, indices, monitors, contextual next-actions, is the apparatus to plug source clocks into the market clock. The source clocks are what tell you that event time just changed. The market clock is what tells you whether anyone has noticed. The gap is the trade.

This is also why a prediction market terminal isn't really competing with TradingView. It's a different category. TradingView is excellent at price-time charting in continuous markets. A prediction market terminal needs to be excellent at event-source ingestion, source-to-market clock alignment, and reaction gap measurement. The chart is the smallest part of it.

The metrics that follow

Once you have the framework, the metrics fall out of it. There are five worth building first, and each one corresponds to a question a trader actually has to answer.

Event Compression: how much resolution-relevant information is scheduled or expected to arrive in a defined window (24h, 72h, 7d), per market. Higher means the substrate is about to move a lot. This is the primary discovery signal.

Market Absorption: how much of recent compression has already been priced in, measured across price, volume, spread, and related-market correlation. Low absorption with high compression is the high-edge zone.

Reaction Lag: how long, on average, between a source event and the market's price response. A consistently fast lag means the market is well-watched. A consistently slow lag means there's an attention deficit you can exploit. Trends in lag tell you whether attention is rising or falling on a market.

Nonlinear Opportunity: a composite of (high event compression) AND (low absorption) AND (viable execution). This is the actionable rank. It's deliberately stricter than just compression. Compression without absorption-gap is a known move. Absorption-gap without execution is a phantom trade. Only the joint condition is worth a position.

Dead Time: the inverse signal. Markets where calendar time is flowing but no meaningful event clock is active. These are markets to ignore. Watching them is attention you could spend on real signal.

These five together give you a discovery surface that's information-dense, decision-aligned, and pretty far from any standard "trending markets" screener. They tell you what to look at, what to skip, and how to size, all from a substrate that isn't visible if you're just watching price.
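As one possible shape for that discovery surface, here is a per-market row holding the five metrics, with the two derived flags the article defines: Nonlinear Opportunity as the joint condition and Dead Time as the inverse signal. Field names and thresholds are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MarketSignals:
    """One row of the discovery surface. Names and cutoffs are illustrative."""
    event_compression: float          # 0..1, scheduled info in the window
    market_absorption: float          # 0..1, share of compression already priced
    reaction_lag_s: Optional[float]   # avg seconds from source event to price response
    spread: float                     # cents
    depth_usd: float

    @property
    def nonlinear_opportunity(self) -> bool:
        """Joint condition: high compression AND low absorption AND viable execution."""
        executable = self.spread <= 3.0 and self.depth_usd >= 500.0
        return self.event_compression >= 0.6 and self.market_absorption <= 0.4 and executable

    @property
    def dead_time(self) -> bool:
        """Calendar time is flowing but no event clock is active: skip these."""
        return self.event_compression < 0.1

row = MarketSignals(0.8, 0.2, 45.0, 1.5, 2_000)
print(row.nonlinear_opportunity, row.dead_time)  # True False
```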

What this means for a model-driven workflow

Most discussions of "AI trading" in prediction markets focus on the wrong thing. They imagine a model that predicts the resolution outcome better than the market. That's a hard, slow, mostly losing game. Markets with clean information aren't easy to beat on pure forecast accuracy.

The right model-driven workflow, given an event-time framework, splits roles cleanly.

The analyst model isn't trying to predict the outcome. It's monitoring source clocks across all tracked markets, continuously asking: did event time just change here? It surfaces compression events, absorption gaps, reaction lags. It produces a ranked watchlist of markets where the substrate has moved. It runs all the time, on every market, because that's the only way to catch compression as it happens rather than after.

The trader model takes the analyst's surfaced opportunities and decides: should we take a position? At what size? With what entry, target, stop? When does this thesis decay? The trader's job isn't to pick the resolution outcome. It's to size and time bets on transient compression events that may close quickly.

This decomposition is a much better fit for what models can actually do well at scale. Continuously monitoring thousands of source feeds against thousands of markets is something a model can do trivially and a human cannot. Sizing a discrete bet given a few specific data points is something a model can do well. Predicting the underlying resolution outcome better than the market itself is a much harder problem and not where the marginal alpha lives.
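A sketch of the role split, under the same caveats as before: the function names, filters, and sizing rule are hypothetical stand-ins, not a description of any actual system. The analyst surfaces compression-plus-gap candidates across everything it watches; the trader turns a single candidate into a sized, time-boxed position.

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    market_id: str
    compression: float     # how much event time just moved
    absorption_gap: float  # 0..1, how much of the expected move is still unpriced

@dataclass
class Order:
    market_id: str
    size_usd: float
    thesis_expiry_hours: float  # when the compression thesis decays

def analyst(market_snapshots) -> list[Opportunity]:
    """Monitor source clocks, not outcomes: surface markets whose event time changed."""
    return sorted(
        (Opportunity(m["id"], m["compression"], m["gap"])
         for m in market_snapshots
         if m["compression"] >= 0.6 and m["gap"] >= 0.5),
        key=lambda o: -(o.compression * o.absorption_gap),
    )

def trader(opp: Opportunity, bankroll_usd: float, max_fraction: float = 0.02) -> Order:
    """Size and time a bet on a transient compression event, not on the resolution."""
    size = bankroll_usd * max_fraction * opp.compression * opp.absorption_gap
    return Order(opp.market_id, round(size, 2), thesis_expiry_hours=24.0)

snapshots = [{"id": "cpi-above-3pct", "compression": 0.85, "gap": 0.90},
             {"id": "bill-passes-senate", "compression": 0.90, "gap": 0.05}]
for opp in analyst(snapshots):
    print(trader(opp, bankroll_usd=50_000))
```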

The deeper claim

In equities, alpha is partly about being a better fundamental analyst, partly about being a better technical reader, partly about being a better positioning observer. The substrate is so dense and so continuous that you can construct an edge from many different angles.

In prediction markets, the substrate is sparse and bursty. Most participants see it as a chart with weird flat lines and sudden jumps. Most strategies imported from equities fail because the substrate they were designed for doesn't exist here.

What does exist is a clean signal almost nobody is systematically tracking: event time compression and the reaction gap that follows. When the time substrate of a market changes faster than the price absorbs, there's a window. The window is sometimes 30 seconds, sometimes 3 days. The size of the alpha in the window is roughly proportional to the size of the compression. The number of these windows across all tracked markets, every day, is large.

Volume tells you where attention is. It does not tell you where the substrate just moved. By the time volume flags a market, you're trading the heat trail. The trade was a few minutes earlier.

Nonlinearity is the front wave. Volume is the aftermath. The prediction market opportunity is to systematize the front wave.

This is what SimpleFunctions is for. Not a dashboard. Not a data feed. The actual tape of prediction markets, where the substrate is treated as the primary object and price reads off of it, instead of the other way around.

Engine-written disclosure

This article was primarily written by the SimpleFunctions engine and does not represent the views of the company.