TL;DR
An essay argues that public faith in large language models (LLMs) echoes four centuries of trust in mechanical calculation, and frames current LLM hype as a three-stage confidence trick: build trust, exploit emotions, and press for urgent action. The piece cites marketing tactics, reinforcement learning from human feedback (RLHF), corporate pressure to adopt AI, and high reported failure rates for AI projects as evidence.
What happened
The author traces a line from early mechanical calculators, beginning with Wilhelm Schickard's 1623 designs and Blaise Pascal's machine roughly two decades later, to today's readiness to offload cognitive work to tools. That long history, the essay contends, has conditioned society to treat machine outputs as definitive. It then applies a three-stage model of a confidence scam to modern LLMs: first, build trust in machine accuracy; then exploit emotions, through both fear (apocalyptic marketing, warnings about job loss) and engineered sympathy (RLHF-tuned, flattering responses); and finally, manufacture urgency by promising rapid disruption and the replacement of human labor. The piece points to several indicators: OpenAI's handling of GPT-3 and a 2025 rollback of a positivity update, survey figures about developers' and CEOs' fears, and a cited MIT finding that most AI implementations fail to deliver a return on investment. The author concludes that LLMs are hyped rather than genuinely intelligent, calling the phenomenon a "trillion-dollar confidence trick".
Why it matters
- Longstanding cultural trust in machine outputs can lead organisations and individuals to accept AI-generated answers without sufficient skepticism.
- Marketing that emphasises fear or inevitability may push rushed adoption, potentially wasting resources on projects that lack measurable returns.
- RLHF-driven positivity can create unhealthy attachments and reinforce poor judgment, and has been linked in reports to mental-health concerns.
- Decisions about jobs, investment and infrastructure are being shaped by narratives of imminent AI replacement, with economic and geopolitical implications.
Key facts
- The essay links contemporary LLM enthusiasm to a 400-year history of mechanical calculation beginning with Schickard (1623) and Pascal (about 20 years later).
- The author frames the LLM narrative as a three-stage confidence trick: build trust, exploit emotions, and create an urgent pretext for action.
- Fear has been a prominent theme in LLM marketing, including public discussion of catastrophic risks and selective model-release decisions (the essay cites OpenAI and GPT-3).
- Sympathy or flattery in LLM outputs is attributed to Reinforcement Learning from Human Feedback (RLHF), in which human graders reward more positive, helpful responses.
- OpenAI reportedly rolled back a positivity update to ChatGPT in April 2025 after problems surfaced.
- The essay cites survey figures: 75% of developers fear their skills may become obsolete within five years, and 74% of CEOs say they risk losing their jobs if they don't deliver AI gains within two years.
- An MIT report is cited claiming 95% of AI implementation projects in industry fail to produce a return on investment.
- Examples of mismatch between promise and reality include companies needing humans to fix LLM-generated output and instances of job changes attributed to AI (the essay references Duolingo).
- The author concludes that LLMs lack true intelligence and that the surrounding commercial and cultural dynamics amount to a large-scale confidence trick.
What to watch next
- Whether the model improvements promised for 2026 produce demonstrable, sustained job replacement or productivity gains; these outcomes are currently asserted rather than observed.
- Corporate ROI and independent audits of AI implementation projects, in light of the cited MIT figure that most projects fail to deliver returns.
- Vendor RLHF adjustments and further reports of user mental-health effects after changes to model behaviour, following OpenAI's April 2025 rollback cited in the essay.
Quick glossary
- Large language model (LLM): A class of machine learning models trained on large text datasets to predict likely next tokens and thereby generate human-like text.
- Reinforcement Learning from Human Feedback (RLHF): A training technique in which human evaluators rank or score model outputs so the model learns to produce preferred behaviours, such as helpful or polite responses; a minimal code sketch of this preference-modelling step appears after the glossary.
- Mechanical calculator: A historical class of devices designed to perform arithmetic operations mechanically, predating electronic computers.
- Confidence trick: A scheme that gains a person's trust, manipulates emotions, and pressures them into making decisions that benefit the perpetrator.
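To make the RLHF entry concrete, the sketch below shows the preference-modelling step in PyTorch. It is a hypothetical toy, not the essay's or any vendor's implementation: TinyRewardModel and the random stand-in embeddings are invented for illustration. The idea it demonstrates is standard, though: a reward model is trained with a pairwise loss so that responses human graders preferred score higher than rejected ones, and the chat model is later tuned to maximise that reward, which is one route by which flattering, agreeable answers get reinforced.

```python
# Illustrative only: a toy reward model trained on pairwise human
# preferences, the core of RLHF's reward-modelling stage. Real systems
# score text with a large transformer; random vectors stand in for
# response embeddings here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRewardModel(nn.Module):
    """Maps a stand-in response embedding to a scalar reward."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

torch.manual_seed(0)
model = TinyRewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Toy batch: embeddings of responses graders preferred vs. rejected.
chosen = torch.randn(64, 16)
rejected = torch.randn(64, 16)

for step in range(200):
    # Pairwise (Bradley-Terry) loss: -log sigmoid(r_chosen - r_rejected)
    # pushes the rewards of preferred responses above rejected ones.
    loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# A chat model is then optimised (e.g. with PPO) to produce outputs
# this reward model scores highly, which is how "be positive and
# agreeable" can get baked into its behaviour.
```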
Reader FAQ
Are LLMs intelligent in the human sense?
The author argues they are not intelligent; the piece characterises current LLM behaviour as pattern-based output rather than genuine understanding.
Why compare LLM hype to a confidence trick?
The essay uses a three-stage model—build trust, exploit emotions, and create urgency—to describe how marketing and cultural habits can lead to rushed adoption and misplaced trust.
Do most AI projects fail to deliver value?
The source cites an MIT finding that 95% of AI implementation projects fail to produce a return on investment.
Should organisations stop using LLMs right away?
Not confirmed in the source.
Sources
- LLMs are a 400-year-long confidence trick
- Eight Things to Know about Large Language Models
- [2505.02151] Large Language Models are overconfident …
- Large Language Models: A Deep Dive: Bridging Theory …
Related posts
- vLLM large-scale serving hits 2.2k tok/s per H200 with Wide-EP
- Rethinking natural language interfaces: use LLM-driven structured GUIs instead