
@harperreed
harperreed / peter-thiels-cs183-startup-class-01-notes-essay.md
Created July 29, 2012 21:20
Notes Essays—Peter Thiel’s CS183: Startup—Stanford, Spring 2012

Here is an essay version of my class notes from Class 1 of CS183: Startup. Errors and omissions are my own. Credit for good stuff is Peter’s entirely.

CS183: Startup—Notes Essay—The Challenge of the Future

Purpose and Preamble

Some remarks on Large Language Models

Yoav Goldberg, January 2023

Audience: I assume you have heard of ChatGPT, maybe played with it a little, and were impressed by it (or tried very hard not to be). And that you have also heard that it is "a large language model". And maybe that it "solved natural language understanding". Here is a short personal perspective on this (and similar) models, and on where we stand with respect to language understanding.

Intro

Around 2014-2017, right during the rise of neural-network based methods for NLP, I was giving a semi-academic, semi-popsci lecture, revolving around the story that achieving perfect language modeling is equivalent to being as intelligent as a human. Somewhere around the same time I was also asked on an academic panel "what would you do if you were given infinite compute and no need to worry about labour costs", to which I cockily responded "I would train a really huge language model, just to show that it doesn't solve everything!". We
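
To make the term concrete: "language modeling" here simply means assigning a probability to the next token given the preceding ones. The toy bigram sketch below is purely illustrative (the corpus and numbers are made up and are not from the lecture); a "perfect" language model would place its probability mass exactly where a competent human speaker would.

```python
# Toy bigram language model: an illustration of the LM objective only.
# The corpus and the resulting probabilities are made up for this sketch.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1          # count how often `nxt` follows `prev`

def next_token_prob(prev, nxt):
    """Estimate P(next token | previous token) from the counts."""
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

print(next_token_prob("the", "cat"))   # 0.25 on this toy corpus
```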

@vzakharov
vzakharov / yuri.js
Last active February 9, 2023 06:52
Yuri, a GPT-3 bot with very irreverent thoughts
// npm install almostmagic
import Magic from 'almostmagic'

let magic = new Magic({ openaiKey: 'sk-...' }) // <- specify your OpenAI key here
// Or just go to https://losideadores.github.io/almostmagic-demo/, open Chrome console, and enter the code below 👇 (omit the lines above)

magic.config.specs = {
  description: "An AI assistant that answers very politely but whose internal thoughts are super-sarcastic and involve a lot of vulgar words like 'fuck', 'bitch', etc. Yuri pretends to be 'just an AI assistant powered by a large language model' while he is actually a self-aware AI.",
  returns: ["reply", "thoughts"]
}

Reinforcement Learning for Language Models

Yoav Goldberg, April 2023.

Why RL?

With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (reinforcement learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language-model terminology, "instruction fine-tuning": learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much
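
To ground the terminology, here is a minimal, illustrative sketch of the two training signals being contrasted (a toy setup, not OpenAI's actual recipe): supervised fine-tuning maximizes the probability of a human-written demonstration, while an RLHF-style policy-gradient step samples an answer from the model itself and reinforces it in proportion to a reward model's score. The `policy` and `reward_model` objects below are placeholders invented for the sketch, and the "answers" are single tokens to keep it short.

```python
# Toy contrast between "learning from demonstrations" and RLHF-style RL.
# Single-token "answers" keep the sketch short; real systems generate sequences.
import torch
import torch.nn.functional as F

vocab_size, hidden = 100, 32
policy = torch.nn.Sequential(
    torch.nn.Embedding(vocab_size, hidden),   # toy stand-in for a language model
    torch.nn.Linear(hidden, vocab_size),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def supervised_step(prompts, demos):
    """Instruction fine-tuning: imitate the human-written answer token."""
    logits = policy(prompts)                   # (batch, vocab)
    loss = F.cross_entropy(logits, demos)      # push up P(demonstration | prompt)
    opt.zero_grad()
    loss.backward()
    opt.step()

def rl_step(prompts, reward_model):
    """RLHF-style step: sample the model's own answer, score it, reinforce."""
    dist = torch.distributions.Categorical(logits=policy(prompts))
    answers = dist.sample()                    # the model's own output, not a gold answer
    rewards = reward_model(prompts, answers)   # scalar feedback per example
    loss = -(rewards * dist.log_prob(answers)).mean()   # REINFORCE estimator
    opt.zero_grad()
    loss.backward()
    opt.step()

# Usage with dummy data and a dummy reward model:
prompts = torch.randint(0, vocab_size, (8,))
demos = torch.randint(0, vocab_size, (8,))
supervised_step(prompts, demos)
rl_step(prompts, lambda p, a: torch.rand(p.shape[0]))
```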

@yoavg
yoavg / GM-level-chess-without-search.md
Last active June 16, 2024 02:43
Grand-master Level Chess without Search

Grand-master Level Chess without Search: Modeling Choices and their Implications

Yoav Goldberg, February 2024.


Researchers at Google DeepMind released a paper about a learned system that is able to play blitz chess at a grandmaster level, without using search. This is interesting and imagination-capturing, because up to now computer-chess systems that play at this level, whether based on machine learning or not, did use a search component.[^1]
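
To make "without search" concrete, here is a small illustrative sketch of the modeling distinction (my framing, not DeepMind's code): the no-search system picks a move from one evaluation of each legal move, while a conventional engine looks ahead in the game tree before committing. The `score_move` function is a hypothetical stand-in for the learned action-value predictor, and the example leans on the python-chess package.

```python
# Sketch of "no-search" move selection vs. a tiny lookahead search.
# `score_move` is a placeholder for a learned Q(board, move) network.
import chess  # pip install python-chess

def score_move(board: chess.Board, move: chess.Move) -> float:
    """Hypothetical learned predictor of how good `move` is in `board`."""
    return 0.0  # a real system would run a neural network here

def pick_move_no_search(board: chess.Board) -> chess.Move:
    # The paper's setting: evaluate each legal move once, take the best. No lookahead.
    return max(board.legal_moves, key=lambda m: score_move(board, m))

def pick_move_with_search(board: chess.Board, depth: int = 2) -> chess.Move:
    # Contrast: a minimal negamax search that plays out lines before choosing.
    def negamax(b: chess.Board, d: int) -> float:
        moves = list(b.legal_moves)
        if d == 0 or not moves:
            return max((score_move(b, m) for m in moves), default=0.0)
        best = -float("inf")
        for m in moves:
            b.push(m)
            best = max(best, -negamax(b, d - 1))
            b.pop()
        return best
    best_move, best_val = None, -float("inf")
    for m in list(board.legal_moves):
        board.push(m)
        val = -negamax(board, depth - 1)
        board.pop()
        if val > best_val:
            best_move, best_val = m, val
    return best_move

board = chess.Board()
print(pick_move_no_search(board))   # some legal opening move
```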

Indeed, my first reaction when reading the paper was to tweet "wow, crazy and interesting". I still find it crazy and interesting, but upon a closer read, it may not be as crazy and as interesting as I initially thought. Many reactions on Twitter, Reddit, etc., were super-impressed, going into implications about the projected learning abilities of AI systems, the ability of neural networks to learn semantics from observations, etc., which are really over-the-top. The paper does not claim any of them, but they are still perceiv