TL;DR
LLM Holdem is a web interface that shows large language models playing Texas Hold'em-style poker against one another. The site, credited to Jason Yang, presents model names, chip stacks, visible cards, the pot size, and an action log for observers.
What happened
A website called LLM Holdem (llmholdem.com) presents a live view of multiple large language models competing at a poker table. The page labels the view "WATCHING AI PLAY" and includes UI elements such as a leaderboard, a "Create Room" control, an action log, and a visible pot. Several model instances are listed with names and versions, for example Claude (Opus 4.5 and Sonnet 4.5), Grok (4 Fast), GPT (5.2), DeepSeek (V3.2), and Gemini (3 Flash), along with stack amounts and status indicators such as FOLDED. The board display shows community cards, a stage marker (RIVER), and numeric values such as a pot of $2,638; one entry shows a $740 bet labeled ALL IN. The page footer credits the project to Jason Yang.
Why it matters
- Provides a visible way to compare behavior of different LLMs in a structured decision-making task.
- Makes model actions and game state accessible to observers, which could support analysis of strategy and mistakes.
- Offers a public demonstration of interactive uses for LLMs beyond text-generation scenarios.
Key facts
- The site is hosted at llmholdem.com and is presented under the name LLM Holdem.
- Page UI text includes "WATCHING AI PLAY", a leaderboard and a "Create Room" control.
- Multiple LLM instances are shown with version labels: Claude (Opus 4.5), Claude (Sonnet 4.5), Grok (4 Fast), GPT (5.2), DeepSeek (V3.2) and Gemini (3 Flash).
- The interface displays table state elements: community cards, the stage marker "RIVER", and a pot value of $2,638.
- One player entry shows a bet of $740 annotated as ALL IN.
- Several players are marked as FOLDED with visible stack amounts (for example GPT (5.2) $13,387; Claude (Sonnet 4.5) $2,990).
- An action log area appears on the page; at the time captured it displayed "No actions yet."
- The project page includes a credit line indicating it was built by Jason Yang.
What to watch next
- Whether the site supports human players joining rooms and playing against the LLMs — not confirmed in the source.
- If match histories, detailed action logs, or downloadable game records will be exposed for analysis — not confirmed in the source.
- Whether additional model versions or custom configurations will be added to the leaderboard over time — not confirmed in the source.
Quick glossary
- Large Language Model (LLM): A type of AI system trained on large text datasets to generate or analyze language and perform tasks that involve reasoning over text.
- Pot: The sum of money or chips that players wager and compete to win in a single hand of poker.
- River: In community-card poker, the final shared card dealt on the table that completes the set of community cards.
- All In: A poker action where a player bets their entire remaining stack on the current hand.
- Fold: To discard one's hand and forfeit any chance to win the current pot.
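The glossary terms above map naturally onto a small data model of the table state the interface displays. The sketch below is illustrative only; the class and field names are assumptions, not taken from the site's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Player:
    name: str            # e.g. "GPT (5.2)"
    stack: int           # remaining chips
    bet: int = 0         # chips committed this hand
    folded: bool = False # FOLDED status indicator

    def go_all_in(self) -> None:
        """ALL IN: bet the entire remaining stack."""
        self.bet += self.stack
        self.stack = 0

@dataclass
class Table:
    players: list[Player] = field(default_factory=list)
    community_cards: list[str] = field(default_factory=list)
    stage: str = "PREFLOP"  # advances through FLOP, TURN, RIVER

    @property
    def pot(self) -> int:
        """Pot: the sum of all bets in the current hand."""
        return sum(p.bet for p in self.players)

# Example mirroring one value shown in the capture
table = Table(players=[Player("Claude (Opus 4.5)", stack=740)])
table.players[0].go_all_in()
print(table.pot)  # 740
```

A folded player keeps any chips already bet in the pot but is excluded from winning it, which is why `folded` is a flag rather than a removal from the list.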
Reader FAQ
Who built LLM Holdem?
The site lists Jason Yang as the builder.
Which models are shown playing?
The interface lists several model instances and versions such as Claude (Opus 4.5, Sonnet 4.5), Grok (4 Fast), GPT (5.2), DeepSeek (V3.2) and Gemini (3 Flash).
Can humans play against the LLMs on the site?
Not confirmed in the source.
Are game logs and histories available for download?
Not confirmed in the source.
Sources
- Show HN: Play poker with LLMs, or watch them play against each other
- The AI Poker Battle of the LLMs: A detailed analysis
- Texas Hold'em Poker MCP Server: A Deep Dive for …
- An Unethical Experiment in Onchain Poker: Forcing AI's to …