TL;DR
A user compared Lichess’s browser analysis to a local Stockfish run on a Redmi Note 14 Pro and found Lichess reporting ~1 MN/s while the local executable reported ~600 kN/s. Despite the higher reported speed, Lichess took about 2 minutes 30 seconds to reach depth 30, versus roughly 53 seconds for the local run.
What happened
A Hacker News poster tried to reconcile a puzzling gap between Lichess’s browser-based Stockfish reporting and their own local Stockfish runs. On the Redmi Note 14 Pro, Lichess’s analysis board displayed roughly 1 million nodes per second (N/s). Running the same engine locally through a Python program that invoked the native Stockfish executable produced about 600 thousand N/s. Despite that, Lichess required about 2 minutes 30 seconds to reach depth 30, while the local process reached depth 30 in roughly 53 seconds. The Lichess interface also appeared to push more frequent evaluation updates. The poster suggested several possible explanations, including differences in how N/s is measured or displayed (instantaneous vs average), search configuration differences (continuous search vs restarts, MultiPV, hash reuse), and overhead introduced by the UI or I/O path. They also questioned whether a reported depth 30 is directly comparable across different frontends.
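A minimal sketch of how such a local measurement might be driven, assuming the python-chess library and a Stockfish binary on PATH (the poster’s actual script is not shown in the source):

```python
# Minimal sketch of driving a local Stockfish the way the poster describes,
# assuming python-chess and a Stockfish binary on PATH; the poster's actual
# script is not shown in the source.
import time

import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("stockfish")
board = chess.Board()  # stand-in for the position under analysis

start = time.monotonic()
with engine.analysis(board, chess.engine.Limit(depth=30)) as analysis:
    for info in analysis:
        # The engine streams UCI "info" lines; "nps" is whatever the engine
        # itself reports, which may be averaged over the whole search.
        if "depth" in info and "nps" in info:
            print(f"depth={info['depth']:>2}  nps={info['nps']:,}  "
                  f"elapsed={time.monotonic() - start:.1f}s")
print(f"reached depth 30 in {time.monotonic() - start:.1f}s")
engine.quit()
```

Under a fixed depth limit like this, the engine stops on its own once the final iteration completes, so the elapsed time is directly comparable to a time-to-depth figure.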
Why it matters
- Reported engine speed (N/s) may not map directly to actual search progress across different frontends, affecting how users interpret analysis performance (see the sketch after this list).
- Differences in search settings or engine lifecycle (restarts, hash reuse, MultiPV) can change practical performance even if raw N/s looks higher.
- UI or I/O overhead from browser-based frontends could alter perceived responsiveness and update frequency compared with native runs.
- If depth measurements are not standardized between frontends, comparing ‘depth 30’ timings can be misleading for users benchmarking analysis.
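To make the first point concrete, here is a hypothetical illustration (not Lichess’s actual code) of how two frontends could display very different N/s from the same engine output, depending on whether they show a cumulative average or the rate over the last reporting interval:

```python
# Hypothetical illustration (not Lichess's actual code): two frontends can
# show very different N/s figures from the same engine, depending on whether
# they display a cumulative average or a rate over the last interval.

def average_nps(nodes: int, time_ms: int) -> float:
    """Cumulative average: total nodes over total elapsed time."""
    return nodes / (time_ms / 1000)

def instantaneous_nps(prev_nodes: int, prev_ms: int,
                      nodes: int, time_ms: int) -> float:
    """Rate over just the most recent reporting interval."""
    return (nodes - prev_nodes) / ((time_ms - prev_ms) / 1000)

# Invented snapshots of (cumulative nodes, elapsed ms): fast start, then a slowdown.
samples = [(0, 0), (1_200_000, 1_000), (1_500_000, 2_000)]
for (pn, pt), (n, t) in zip(samples, samples[1:]):
    print(f"t={t / 1000:.0f}s  avg={average_nps(n, t):,.0f} N/s  "
          f"inst={instantaneous_nps(pn, pt, n, t):,.0f} N/s")
```

After the slowdown, the cumulative average still reads 750 kN/s while the instantaneous rate has dropped to 300 kN/s; both figures are “correct” but not interchangeable.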
Key facts
- On Lichess’s browser analysis board, Stockfish reported about 1 MN/s on the author’s Redmi Note 14 Pro.
- The locally run native Stockfish executable, invoked from a Python program, reported roughly 600 kN/s on the same device.
- Lichess took approximately 2 minutes 30 seconds to reach depth 30 for the position in question.
- The local Stockfish run reached depth 30 in about 53 seconds despite reporting lower N/s.
- Lichess’s analysis appeared to provide more frequent evaluation updates than the local run.
- The poster raised possible causes: measurement/display differences (instantaneous vs average N/s), search configuration (continuous search vs restarts, MultiPV, hash reuse), and engine-driving overhead (UI or I/O throttling); the sketch after this list illustrates the restart-vs-reuse distinction.
- The author questioned whether the meaning of ‘depth 30’ is consistent across different frontends.
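A hypothetical sketch of the restart-vs-reuse distinction, assuming python-chess and a local Stockfish binary (this is not the poster’s code):

```python
# Hypothetical comparison of the two engine lifecycles the poster mentions,
# assuming python-chess and a local Stockfish binary; not the poster's code.
import chess
import chess.engine

board = chess.Board()  # stand-in position

# Pattern A: a fresh process per query -- the transposition table (hash)
# starts empty every time, so earlier work cannot be reused.
for _ in range(2):
    with chess.engine.SimpleEngine.popen_uci("stockfish") as eng:
        eng.analyse(board, chess.engine.Limit(depth=20))

# Pattern B: one long-lived process -- later searches can hit entries the
# earlier searches left in the hash, often reaching a given depth faster.
with chess.engine.SimpleEngine.popen_uci("stockfish") as eng:
    eng.configure({"Hash": 256})  # table size in MB (standard UCI option)
    for _ in range(2):
        eng.analyse(board, chess.engine.Limit(depth=20))
```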
What to watch next
- Verify how Lichess computes and displays N/s (instantaneous vs averaged) — not confirmed in the source.
- Compare search configurations between Lichess and the local executable (MultiPV settings, restart behavior, hash usage) — not confirmed in the source; a controlled comparison could look like the sketch after this list.
- Measure end-to-end overhead introduced by the browser UI and any I/O driver used by the local Python wrapper — not confirmed in the source.
- Confirm whether depth counting or stopping criteria differ between the two frontends (i.e., is ‘depth 30’ defined the same way?) — not confirmed in the source.
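A controlled comparison along these lines could look like the following sketch (assumed setup, not from the source; whether Lichess runs MultiPV > 1 by default is a guess):

```python
# Sketch of a controlled comparison (assumed setup, not from the source):
# pin the suspect options, then time how long a fixed position takes to
# reach depth 30 under each configuration.
import time

import chess
import chess.engine

def time_to_depth(multipv: int, hash_mb: int, depth: int = 30) -> float:
    board = chess.Board()  # use the same position the poster analysed
    with chess.engine.SimpleEngine.popen_uci("stockfish") as eng:
        eng.configure({"Hash": hash_mb})
        start = time.monotonic()
        # python-chess manages the MultiPV option when it is passed here.
        eng.analyse(board, chess.engine.Limit(depth=depth), multipv=multipv)
        return time.monotonic() - start

# Whether Lichess uses MultiPV > 1 by default is a guess, not confirmed.
for mpv in (1, 3):
    print(f"MultiPV={mpv}: {time_to_depth(mpv, hash_mb=256):.1f}s to depth 30")
```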
Quick glossary
- N/s (nodes per second): A metric reporting how many positions the engine examines per second during its search.
- Search depth: The engine’s nominal search horizon in plies (half-moves), reported once per iteration of iterative deepening; deeper searches typically take longer, and engines or frontends may not count depth identically.
- MultiPV: A mode where the engine returns multiple candidate lines (principal variations) instead of only the single best line.
- Hash reuse: Using previously computed search results stored in a transposition table (the engine’s ‘hash’) to avoid re-exploring positions; restarting the engine discards the table (see the toy sketch after this list).
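A toy illustration of hash reuse in Python (nothing like Stockfish’s real table, which is a fixed-size in-memory structure):

```python
# Toy illustration of hash reuse (not Stockfish's real implementation):
# search results are cached by position key, so revisiting a position at
# the same or lower depth can skip the search entirely.
transposition_table: dict[int, tuple[int, int]] = {}  # key -> (depth, score)

def tt_lookup(key: int, depth: int) -> int | None:
    hit = transposition_table.get(key)
    if hit is not None and hit[0] >= depth:  # stored search was deep enough
        return hit[1]                        # reuse the cached score
    return None

def tt_store(key: int, depth: int, score: int) -> None:
    transposition_table[key] = (depth, score)

# Restarting the engine process is equivalent to transposition_table.clear().
```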
Reader FAQ
Why would a higher reported N/s still yield slower time to a given depth?
The source suggests measurement differences (instantaneous vs average N/s), search-configuration and engine-lifecycle factors, or UI/I/O overhead could explain it, but the exact cause is not confirmed in the source.
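One back-of-envelope way to see that the numbers are at least arithmetically consistent, using only the figures reported in the post (illustrative, since the displayed N/s may not have been sustained for the whole run):

```python
# Back-of-envelope using only the figures reported in the post; illustrative,
# since the displayed N/s may not have been sustained for the whole run.
lichess_nodes = 1_000_000 * 150  # ~1 MN/s for ~2 min 30 s -> ~150M nodes
local_nodes = 600_000 * 53       # ~600 kN/s for ~53 s     -> ~32M nodes
print(lichess_nodes / local_nodes)  # ~4.7x more implied work for the
                                    # same nominal "depth 30"
```

If both displayed rates were sustained, Lichess would have searched roughly 4–5× as many nodes to reach the same nominal depth, which is why settings like MultiPV or a differently counted ‘depth’ are plausible explanations.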
Is the difference caused by the Redmi device hardware?
Not confirmed in the source.
Does Lichess report Stockfish speed differently than a native executable?
The poster asked this question and proposed it as a possible cause, but how Lichess reports speed is not confirmed in the source.
Are depth measurements directly comparable between frontends?
The author raised this as a concern; whether depth 30 is defined identically across frontends is not confirmed in the source.
From the original post: “I’m trying to understand a discrepancy between Lichess’s analysis board and my own Stockfish setup. On Lichess (browser-based analysis), Stockfish reports close to 1 MN/s on my Redmi Note 14…”
Sources
- Ask HN: Discrepancy between Lichess and Stockfish
- What is the strength of stockfish at different depths?
- Lichess's Browser Engine vs. local Stockfish
- NPS vs Time-to-depth: What you should look at when …