
@georgemandis
Last active June 25, 2025 21:54
Summary of Andrej Karpathy’s Talk on Software and the Era of AI

Video for reference: https://www.youtube.com/watch?v=LCEmiRjPEtQ

Key points:

  1. Software is Changing Rapidly: Karpathy observes that after decades of relative stability (software 1.0, written as explicit code), software development has fundamentally changed—twice in the last few years—ushering in a “new normal.”

  2. Software 1.0 vs. 2.0 vs. LLM Programming:

    • Software 1.0: Traditional hand-written code using programming languages.
    • Software 2.0: Programs where the “code” consists of neural network weights learned from data. Developers curate datasets and train models, rather than write rules directly.
    • Software 3.0 (LLMs): The newest paradigm, in which the “program” is a natural-language prompt given to a large language model (e.g., GPT), fundamentally changing the way we instruct computers.
  3. Programming in Natural Language: The surprising new trend is that with LLMs, humans “program” not just with code but with English-language prompts, dramatically expanding who can program computers (a short code sketch after this list makes the contrast concrete).

  4. Stack Evolution Illustrated via Autopilot at Tesla: At Tesla, the transition from code-driven logic to neural-network-driven logic was evident in Autopilot: more and more functionality was subsumed by neural nets (software 2.0), with hand-written code being “eaten” over time.

  5. Industry Impact and Open Tools: The software ecosystem now includes open-source codebases, datasets, model hubs, and tools (like GitHub for code and “Model Atlas” for AI models), enabling new forms of collaboration and innovation.

  6. Economic and Infrastructure Analogy (LLMs as Utilities): Karpathy uses the analogy of utility infrastructure (electricity) to explain LLMs as foundational AI “utilities.” Major players (OpenAI, Google, Anthropic) invest vast capital to train models, then serve these capabilities to users via metered APIs, offering intelligence as a utility with multiple providers, high reliability, and fast switching between them (like the power grid).

  7. LLMs as Operating Systems, Not Just Utilities: Unlike commoditized resources like water/power, LLMs resemble operating systems: a few core providers host complex, evolving platforms atop which other software is built. These systems are modular, layered, and form the “substrate” for future applications.

  8. Implications and Future Prospects: This new paradigm brings enormous opportunities and challenges: code is more malleable and accessible, but the underlying infrastructure is capital-intensive and centralized. The industry is still in its early stages, but the impact could be as profound as the shifts driven by previous computing paradigms.
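
To make the contrast in items 2–3 concrete, and to show what “intelligence served as a utility” (item 6) looks like from a developer’s seat, here is a minimal sketch, not from the talk: the same sentiment-classification task written once as Software 1.0 (explicit rules) and once as a natural-language prompt sent to a hosted LLM. It assumes the OpenAI Python client (openai >= 1.0) and an OPENAI_API_KEY in the environment; the model name and function names are illustrative.

```python
# Toy illustration of the paradigm shift in items 2-3 and 6: the same task
# written as Software 1.0 (explicit rules) and as Software 3.0 (an English
# prompt sent to a hosted LLM served as a metered utility behind an API).
#
# Assumptions (not from the talk): the OpenAI Python client (openai >= 1.0),
# an OPENAI_API_KEY in the environment, and an illustrative model name.
from openai import OpenAI


def sentiment_v1(text: str) -> str:
    """Software 1.0: the program is hand-written logic."""
    positive = {"great", "love", "excellent", "good"}
    negative = {"bad", "terrible", "hate", "awful"}
    words = set(text.lower().split())
    score = len(words & positive) - len(words & negative)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"


def sentiment_v3(text: str) -> str:
    """Software 3.0: the program is an English prompt; the LLM executes it."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any hosted chat model works
        messages=[{
            "role": "user",
            "content": (
                "Classify the sentiment of this review as positive, negative, "
                f"or neutral. Reply with one word.\n\n{text}"
            ),
        }],
    )
    return response.choices[0].message.content.strip().lower()


if __name__ == "__main__":
    review = "I love how simple this setup is, but the battery life is awful."
    print("1.0:", sentiment_v1(review))  # brittle keyword matching
    print("3.0:", sentiment_v3(review))  # natural-language instruction
```

The 1.0 version silently misses anything outside its keyword lists, while the 3.0 version trades that brittleness for dependence on a paid, centralized API, which is exactly the capital-intensive, utility-like infrastructure trade-off described in items 6 and 8.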

In essence: Karpathy’s talk charts the rapid evolution of software—from explicit, hand-coded logic (software 1.0), to machine-learned neural programs (software 2.0), to programming large models via natural language. He explains how this shift is revolutionizing not just how we make software, but who makes it, what tools matter, and how economic power and software infrastructure are being redefined—moving toward a world where advanced AI acts as a core “operating system” and utility for digital life.
