TL;DR

NERD (No Effort Required, Done) is an experimental language designed to be written primarily by large language models rather than humans. It uses dense English-like tokens to reduce token counts and compiles to native code via LLVM with no runtime, prioritizing auditability over human editability.

What happened

A new experimental language called NERD proposes a shift in source-code design for an era in which LLMs generate a growing share of code. The language swaps conventional punctuation- and symbol-heavy syntax for compact, English-token-oriented constructs; the author argues that LLMs tokenize words such as "plus", "if", and "minus" more efficiently than symbols such as braces or equality signs. NERD aims to be terse and machine-optimized rather than human-friendly, and claims substantial token reductions: the source reports roughly 50–70% fewer tokens overall, with one example showing about 67% fewer tokens than the same logic in TypeScript. Its toolchain is a bootstrap compiler written in C that emits LLVM IR and produces native binaries with no runtime dependency. The proposed workflow treats humans as observers and requirement setters while LLMs author and adjust the NERD source; human reviewers audit the code and prompt changes rather than editing it line by line.
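The tokenization argument can be illustrated with a rough sketch. Real LLM tokenizers use learned byte-pair-encoding vocabularies, so the counts below are only indicative; the naive splitter and the "word-based" snippet are invented stand-ins for this illustration, not NERD's actual grammar or a real tokenizer:

```python
import re

def naive_token_count(source: str) -> int:
    # Words and digit runs count as single tokens; every other non-space
    # character counts separately, loosely mimicking how BPE tokenizers
    # often split unusual symbol runs character by character.
    return len(re.findall(r"[A-Za-z_]\w*|\d+|\S", source))

# Equivalent logic in a symbol-heavy style vs. a hypothetical word-based style.
typescript_like = "if (x !== y) { total += x; }"
word_based = "if x not-equals y then add x to total"

print(naive_token_count(typescript_like))  # 15
print(naive_token_count(word_based))       # 11
```

Even under this crude count, the symbol-heavy form costs more tokens because each punctuation character is its own unit, which is the intuition behind the language's claimed savings.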

Why it matters

  • If AI becomes the primary author of code, languages and toolchains optimized for machine generation could reduce costs tied to token usage and inference.
  • A design focused on auditability rather than human editability reframes how teams would inspect and govern production systems.
  • Compiling directly to native code with no runtime could shift deployment and performance considerations compared with managed-language stacks.
  • The concept challenges long-standing assumptions about source readability as a primary language design goal in software engineering.

Key facts

  • The source cites an estimate that roughly 40% of code is now produced by LLMs.
  • NERD stands for "No Effort Required, Done."
  • Design favors dense English tokens over punctuation-heavy syntax to reduce tokenizer overhead for LLMs.
  • The author reports token savings in the range of 50–70% versus conventional languages; one example claims about 67% fewer tokens compared with TypeScript for equivalent logic.
  • NERD's toolchain is a bootstrap compiler implemented in C that emits LLVM IR and compiles to native binaries with no runtime dependency.
  • The intended workflow has LLMs authoring NERD code while humans issue high-level prompts, review outputs, and request changes rather than editing code directly.
  • The project characterizes NERD as auditable and verifiable even if not intended for direct human authorship.
  • The creator frames the effort as an experiment and acknowledges it could be wrong, noting adoption and long-term outcomes are uncertain.

What to watch next

  • Whether the author's prediction that most production code will be AI-authored within five years materializes (not confirmed in the source).
  • Development of tooling that translates NERD into human-friendly audit views and evidence of compliance workflows (not confirmed in the source).
  • Adoption by production teams and integration with existing CI/CD pipelines (not confirmed in the source).

Quick glossary

  • LLM: Large language model — a neural network trained on large text corpora to generate or analyze natural language and related token sequences.
  • Tokenization: The process of breaking text into discrete units (tokens) that models use as inputs; different characters and words map to different token counts.
  • LLVM: A collection of modular and reusable compiler and toolchain technologies used to generate machine code from intermediate representations.
  • Native compilation: Compiling source code down to machine code for a target platform, producing executables that run without a separate language runtime.
  • Auditable: Capable of being inspected and verified for correctness, security, or compliance by humans or automated processes.
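For a concrete sense of what "emits LLVM IR and compiles to native binaries" means, here is a minimal, generic LLVM IR module (illustrative only; the source does not show actual NERD compiler output):

```llvm
; A trivial LLVM IR module: a main function that returns the constant 42.
; Any frontend that emits IR like this delegates machine-code generation
; to LLVM's backends for the target platform.
define i32 @main() {
entry:
  ret i32 42
}
```

An LLVM-based driver such as `clang module.ll -o module` can lower a file like this to a self-contained executable, which is what "no runtime dependency" amounts to in practice: the binary runs without a separate interpreter or managed runtime.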

Reader FAQ

What is NERD?
An experimental programming language designed to be authored by LLMs. It uses dense English-like tokens and compiles to native code via LLVM, aiming for auditable, machine-optimized source rather than code meant for direct human editing.

Is NERD meant to be readable or edited by humans?
The project describes NERD as not human-friendly for direct editing but human-observable for auditing; humans are positioned as reviewers and requirement setters rather than line-by-line authors.

Does NERD require a runtime environment?
According to the source, NERD compiles to native binaries through LLVM and is intended to have no runtime dependency.

Will NERD replace TypeScript or other human-oriented languages?
Not confirmed in the source.

How does debugging work with AI-authored NERD code?
The author argues debugging shifts to the abstraction layer where humans interact — for example, by asking the LLM why a particular feature fails — rather than stepping through low-level code; this approach is presented as analogous to not debugging virtual machine internals directly.
