TL;DR
Tinygrad marks five years since its first commit and has grown from a small project into a six-person company with a compact software stack. The team is removing its LLVM dependency so the stack is pure Python, a change aimed at improving AMD GPU support; it claims performance advantages over PyTorch on many workloads and pursues a public, contribution-driven development model.
What happened
The tinygrad project began with a first commit on October 17, 2020, and the author reports five years of continuous development since. The codebase currently measures 18,935 lines (excluding tests), and the organization has grown into a six-person company. Funding was raised nearly three years ago, and a modest commercial arm that sells computers generates roughly $2 million in annual revenue. The team says tinygrad now spans a frontend, a graph compiler, runtimes and drivers, and is actively working to eliminate LLVM so that the stack has no dependencies beyond pure Python, a change intended to improve support for AMD GPUs. The project claims to outperform PyTorch on many workloads and expects the codebase to settle around 20,000 lines once the cleanup is complete. Development, hiring and some negotiations (including an AMD MLPerf-related engagement) have taken place in public forums such as GitHub, Discord and Twitter.
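For context, tinygrad's frontend exposes a lazy, NumPy-like Tensor API, and the graph compiler and runtime only run when a result is forced. The sketch below (assuming a recent tinygrad release installed via pip; the exact API surface varies by version) shows roughly how those pieces fit together:

```python
# Minimal sketch of the tinygrad frontend (assumes `pip install tinygrad`).
# Tensor operations build a lazy computation graph; realize() hands that
# graph to the compiler and runtime for the default device.
from tinygrad import Tensor

a = Tensor.randn(256, 256)   # lazy: no kernel has run yet
b = Tensor.randn(256, 256)
c = (a @ b).relu().sum()     # still lazy: matmul -> relu -> sum graph

c.realize()                  # graph compiler emits kernels; runtime executes
print(c.numpy())             # copy the scalar result back to the host
```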
Why it matters
- A smaller, sovereign software stack could lower the barrier to training large models on hardware beyond dominant vendors.
- Dependency reduction (removing LLVM) aims to broaden hardware support, notably for AMD GPUs, which could diversify the ML training ecosystem.
- If tinygrad’s performance claims hold, a compact codebase may offer a simpler alternative to larger frameworks for some workloads.
- The project’s public, contribution-driven model and commercial revenue may offer a different path for funding and staffing ML infrastructure work.
Key facts
- First commit: October 17, 2020.
- Reported codebase size: 18,935 lines (tests excluded); target roughly 20,000 lines after cleanup.
- Organization size: 6 people.
- Approximately $2M annual revenue from a computer sales division.
- Active effort to remove LLVM to make tinygrad dependency-free except for pure Python.
- tinygrad includes a frontend, graph compiler, runtimes and drivers.
- Author claims tinygrad outperforms PyTorch on many workloads.
- At least one AMD contract, covering MI350X hardware and an MLPerf run of Llama 405B, was negotiated largely in public.
What to watch next
- Progress and results from the LLVM removal effort to enable AMD GPU support.
- Public MLPerf runs of Llama 405B on MI350X hardware, mentioned in relation to AMD (timing and outcomes are not confirmed in the source).
- Further performance comparisons with PyTorch on a range of workloads and independent verification of those claims.
Quick glossary
- LLVM: A collection of modular and reusable compiler and toolchain technologies commonly used to generate optimized machine code.
- Graph compiler: Software that transforms a computational graph (representing model operations) into optimized code or kernels for execution on target hardware.
- Runtime: The component that executes compiled code, managing resources like memory and device scheduling during model execution.
- MLPerf: An industry-standard benchmark suite for measuring machine learning training and inference performance across hardware and software stacks.
- Petaflop: A measure of computing performance equal to one quadrillion (10^15) floating-point operations per second; a worked example follows this list.
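To make the petaflop figure concrete, here is a back-of-the-envelope calculation; the matrix size is an illustrative assumption, not a number from the source:

```python
# How long does one large matrix multiply take at one petaflop?
# An N x N matmul costs roughly 2 * N**3 floating-point operations.
PETAFLOP = 1e15              # FLOP/s: one quadrillion operations per second

N = 8192                     # illustrative size, not from the source
flops = 2 * N**3             # ~1.1e12 FLOPs for this single matmul
seconds = flops / PETAFLOP

print(f"{flops:.3g} FLOPs -> {seconds * 1e3:.2f} ms at 1 PFLOP/s")
# prints ~1.1e+12 FLOPs -> 1.10 ms (ignores memory bandwidth and utilization)
```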
Reader FAQ
When did tinygrad start?
The project’s first commit was made on October 17, 2020.
How big is the tinygrad codebase?
The author reports 18,935 lines of code excluding tests, with a target near 20,000 lines after cleanup.
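The source does not say how that number was measured. As one illustrative way to reproduce this kind of count (the path and excluded directory names are assumptions), a short script might tally non-blank lines in the repository's Python files:

```python
# Illustrative line count over a local tinygrad checkout (path is an assumption).
# Counts non-blank lines in .py files, skipping directories that look like tests.
from pathlib import Path

repo = Path("tinygrad")      # adjust to wherever the checkout lives
total = 0
for f in repo.rglob("*.py"):
    if any(part in ("test", "tests") for part in f.parts):
        continue             # exclude tests, as the reported figure does
    total += sum(1 for line in f.read_text().splitlines() if line.strip())
print(total)                 # whether this matches 18,935 depends on methodology
```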
Is tinygrad a company?
Yes — the project operates as a company of six people and runs a computer sales division that generates revenue.
Has tinygrad partnered with hardware vendors?
The source describes a contract with AMD involving MI350X hardware and an MLPerf run of Llama 405B, negotiated largely in public.
Does tinygrad beat PyTorch?
The author asserts tinygrad outperforms PyTorch on many workloads; independent verification is not provided in the source.
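For readers who want a first-pass check of their own, a crude single-op timing harness might look like the sketch below. It assumes both frameworks are installed, uses one matmul as the workload, and is nowhere near a rigorous benchmark such as MLPerf:

```python
# Crude wall-clock comparison of one matmul in tinygrad vs PyTorch (CPU).
# Assumes `pip install tinygrad torch`; a serious benchmark needs many ops,
# several sizes, device synchronization and careful warmup.
import time

import torch
from tinygrad import Tensor

N = 1024

def bench(fn, warmup=3, iters=10):
    """Average wall time of fn() over iters runs after warmup runs."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

ta, tb = torch.randn(N, N), torch.randn(N, N)
ga, gb = Tensor.randn(N, N), Tensor.randn(N, N)

print(f"torch:    {bench(lambda: ta @ tb):.6f} s/iter")
print(f"tinygrad: {bench(lambda: (ga @ gb).realize()):.6f} s/iter")
```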
Sources
- Five Years of Tinygrad
- Commoditizing the Petaflop — with George Hotz of the tiny …
- tinygrad: A simple and powerful neural network framework
- the tiny corp raised $5.1M | the singularity is nearer