Yoav Goldberg, April 2023.
With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much