Why 90% of Backtests Lie — And How to Fix Yours
Your backtest is a controlled hallucination. It shows you a world where you always got the price you wanted, where every stock you tested survived, where tomorrow's news never leaked into today's signal. None of that is real. And the gap between that world and the live market is where money disappears.
Here are the four lies most backtests tell — and specifically how to stop them.
The Four Lies
Look-Ahead Bias
Your code uses tomorrow's close to generate today's signal. Or you use an earnings number on the day the quarter closed — when in reality that report arrived 45 days later. The backtest never complains. It just quietly inflates every return. This is the hardest bug to catch because the code looks correct — you're just indexing one row off.
Enforce strict point-in-time data access. Signal generated on bar t executes on bar t+1 open, always. For fundamental data, apply the actual filing date — not the period-end date.
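A minimal pandas sketch of that rule, with toy prices and a hypothetical `apply_point_in_time` helper (the column name `open` and the numbers are illustrative, not from any real feed). The entire look-ahead fix is one shift: the position held over bar t is the signal from bar t-1, so a signal can never touch the bar it was computed on.

```python
import pandas as pd

def apply_point_in_time(prices: pd.DataFrame, signal: pd.Series) -> pd.Series:
    """Signal generated on bar t fills at bar t+1's open, never earlier."""
    # The position held over bar t is the signal from bar t-1:
    # this one shift is the whole look-ahead fix.
    position = signal.shift(1).fillna(0)
    # Return earned over bar t: filled at bar t's open,
    # marked at bar t+1's open.
    open_to_open = prices["open"].shift(-1) / prices["open"] - 1
    return position * open_to_open

# Toy data: four bars, each open 10% above the last, long signal on bars 0 and 2.
prices = pd.DataFrame({"open": [100.0, 110.0, 121.0, 133.1]})
signal = pd.Series([1, 0, 1, 0])
strategy_returns = apply_point_in_time(prices, signal)
# Bar 0's signal earns bar 1's open-to-open move; bar 0 itself earns nothing.
```

If you delete the `shift(1)` and the backtest's return jumps, you just measured your look-ahead bias directly.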
Survivorship Bias
You're testing on stocks that exist today. That means every company that went bankrupt, got delisted, or was acquired between 2015 and now — vanished from your dataset. Your "universe" is pre-filtered by success. You're backtesting on the winners by definition, and wondering why the backtest always wins.
Use a survivorship-bias-free dataset that includes all historical constituents — including delisted stocks. For Indian equities, Nifty 500 composition changes are publicly available and must be applied point-in-time.
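A sketch of what point-in-time universe selection looks like in code. The membership table below is entirely hypothetical (fake symbols, illustrative dates): one row per constituency spell, with delisted names kept in the table and given a real end date instead of being dropped.

```python
import pandas as pd

# Hypothetical constituency table: delisted names stay in the data,
# they just get an end date. Symbols and dates are illustrative.
membership = pd.DataFrame({
    "symbol": ["AAA", "BBB", "CCC"],
    "start":  pd.to_datetime(["2015-01-01", "2015-01-01", "2017-06-01"]),
    "end":    pd.to_datetime(["2018-03-31", "2099-01-01", "2099-01-01"]),
})  # AAA delisted in March 2018; BBB and CCC are still listed

def universe_on(date: str) -> set:
    """Symbols that were actually in the index on `date`, not today's list."""
    d = pd.Timestamp(date)
    live = membership[(membership["start"] <= d) & (d <= membership["end"])]
    return set(live["symbol"])
```

Every query in the backtest goes through `universe_on(trade_date)`; the delisted AAA is tradeable in 2016 and gone by 2019, exactly as it was in the real market.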
Overfitting / Data Snooping
You ran 400 parameter combinations. The best one had a Sharpe of 3.1. You called it your strategy. But if you run 400 random strategies on any dataset, roughly 20 will show a Sharpe above 2.0 purely by chance. You found noise with a good costume. The moment market conditions shift even slightly, it collapses — because it never understood the market, it memorised it.
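You can watch this happen with pure noise. The sketch below simulates 400 strategies with zero true edge (simulated Gaussian daily returns, not market data) and ranks them by annualised Sharpe; the best of the batch always looks tradeable, despite being a coin flip.

```python
import numpy as np

rng = np.random.default_rng(0)
n_strategies, n_days = 400, 252

# 400 strategies of pure noise: zero mean, no edge whatsoever.
returns = rng.normal(loc=0.0, scale=0.01, size=(n_strategies, n_days))

# Annualised Sharpe of each noise strategy over one year of daily returns.
sharpe = returns.mean(axis=1) / returns.std(axis=1) * np.sqrt(252)

best = sharpe.max()          # the "strategy" a parameter sweep would pick
lucky = (sharpe > 2.0).sum()  # strategies that look excellent by chance alone
```

The winner of this sweep has a Sharpe well above 1 with no edge at all; selecting it and reporting its in-sample Sharpe is exactly what an unconstrained parameter search does.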
Form your hypothesis first, then test. Use walk-forward optimisation — optimise on a rolling 2-year window, validate on the following 6 months, never revisit. Hold out a completely untouched out-of-sample period. Never touch it until the strategy is finalised.
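The rolling schedule above can be sketched as a window generator (the function name and pandas-based date arithmetic are my own; the 2-year/6-month lengths come from the text). Each tuple is one optimise-then-validate pass, and the roll step equals the validation length, so no test window is ever revisited.

```python
import pandas as pd

def walk_forward_windows(start, end, train_years=2, test_months=6):
    """Yield (train_start, train_end, test_start, test_end) tuples.

    Optimise on each rolling 2-year window, validate on the following
    6 months, then roll forward by the test length. Test windows never
    overlap, so no validation period is ever reused.
    """
    windows = []
    t0 = pd.Timestamp(start)
    while True:
        train_end = t0 + pd.DateOffset(years=train_years)
        test_end = train_end + pd.DateOffset(months=test_months)
        if test_end > pd.Timestamp(end):
            break  # the final out-of-sample period stays untouched
        windows.append((t0, train_end, train_end, test_end))
        t0 = t0 + pd.DateOffset(months=test_months)  # roll by the test length
    return windows
```

Reserving the untouched hold-out is then just a matter of passing an `end` date that stops well before your last available data.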
Ignoring Realistic Transaction Costs
Most backtesting tools apply a flat ₹20 brokerage and call it done. That's not how the market works. When you place a market order for 500 lots of Bank Nifty during an RBI event, you don't fill at the mid. You fill wherever the order book has liquidity — which, in volatile moments, is significantly worse than you modelled.
Model slippage as a function of order size, instrument liquidity, and volatility regime. Use VWAP or limit-order-based execution assumptions, not market-order fills at close.
The Slippage Model That Changes Everything
Most traders apply a fixed slippage — say, 0.05% per trade — and think they're done. That's better than nothing. But it misses the variable that destroys high-frequency and event-driven strategies: slippage scales with volatility and position size. A model that works fine when VIX is at 12 falls apart when it hits 22 — because your assumed fills were calibrated to calm markets.
Same strategy. Same signals, same entry/exit logic. Different slippage assumptions, opposite conclusions: the only variable changed was the cost model, and the 148% strategy never existed.
The formula most production quants use as a starting point for Indian equity options:

Total_cost = Fixed_costs (brokerage and taxes)
           + (Slippage_base × VIX_multiplier)
           + Market_impact(order_size / avg_daily_vol)

// If this kills your edge — your edge was never real
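A sketch of that cost model in code. Every coefficient here is an illustrative assumption to be calibrated, not a production value: the calm-market VIX baseline of 12, the square-root market-impact term, and the ₹20 fixed cost are all stand-ins.

```python
def estimated_cost(order_value, slippage_base, vix, order_size,
                   avg_daily_vol, fixed_cost=20.0):
    """Vol- and size-aware cost estimate for one trade (illustrative).

    order_value    notional in rupees
    slippage_base  fractional slippage in calm markets, e.g. 0.0005
    vix            current volatility index level
    order_size     shares/lots in this order
    avg_daily_vol  average daily traded volume of the instrument
    """
    # Scale base slippage linearly once VIX rises above a calm baseline of 12.
    vix_multiplier = max(1.0, vix / 12.0)
    # Square-root impact in participation rate (an assumed functional form).
    participation = order_size / avg_daily_vol
    impact_bps = 10.0 * participation ** 0.5
    slippage = order_value * slippage_base * vix_multiplier
    impact = order_value * impact_bps / 10_000
    return fixed_cost + slippage + impact

# Same ₹10 lakh order, calm market (VIX 12) versus stressed market (VIX 24):
calm = estimated_cost(1_000_000, 0.0005, 12, 1_000, 1_000_000)
stressed = estimated_cost(1_000_000, 0.0005, 24, 1_000, 1_000_000)
```

The point is the shape, not the numbers: the same order costs roughly twice as much in slippage when VIX doubles, which is exactly the regime dependence a flat fee hides.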
"A strategy that survives realistic costs on 10 years of tick data across two market cycles — that's the first one worth deploying."
The Honest Backtest Checklist
- Every signal generated on bar t executes on bar t+1, and fundamental data is applied on its actual filing date.
- The universe is built point-in-time and includes delisted, acquired, and bankrupt names.
- The hypothesis was written down before testing, optimised walk-forward, and confirmed on one untouched out-of-sample period.
- Slippage scales with order size, liquidity, and volatility regime, not a flat per-trade fee.
Run Your Strategy on Tick-Level Data — We'll Show You Where It's Lying
TradeMade's engine applies every fix on this checklist automatically — 10+ years of tick data, walk-forward testing, vol-adjusted slippage, Monte Carlo. Drop your number and we'll run your first backtest free.
No spam. No cold calls. First backtest on us.
The best quants aren't the ones with the most creative strategies. They're the ones most ruthlessly honest about what their data is actually telling them. A backtest that survives every item on that checklist might show 30% CAGR instead of 150% — but that 30% is real. And real is the only number that matters.