Notes on this tweet.
-
The screenshots were taken in separate sessions.
-
The screenshots include the entire sessions.
-
I lost the original prompts, so I had to reconstruct them, and still managed to reproduce the result.
-
The "compressed" version is actually longer! Emojis and abbreviations use more tokens than common words.
-
In practice, plain, well-designed summaries should remain the better way to fit larger documents into the context window.
-
This concept has potential, though: building lookup tables seems to outperform long-text summarization.
-
No idea why this is still going, you're too impressionable! But glad to entertain you I guess.
¯\_(ツ)_/¯
Compress the following text in a way that fits a Tweet, and such that you (GPT-4) can reconstruct it as close as possible to the original. This is for yourself. Do not make it human readable. Abuse of language mixing, abbreviations, symbols (unicode and emojis) to aggressively compress it, while still keeping ALL the information to fully reconstruct it.
## Text to compress:
Lambda calculus may be untyped or typed. In typed lambda calculus, functions can be applied only if they are capable of accepting the given input's "type" of data. Typed lambda calculi are weaker than the untyped lambda calculus, which is the primary subject of this article, in the sense that typed lambda calculi can express less than the untyped calculus can. On the other hand, typed lambda calculi allow more things to be proven. For example, in the simply typed lambda calculus it is a theorem that every evaluation strategy terminates for every simply typed lambda-term, whereas evaluation of untyped lambda-terms need not terminate. One reason there are many different typed lambda calculi has been the desire to do more (of what the untyped calculus can do) without giving up on being able to prove strong theorems about the calculus.
I asked you to compress a long text using your own abbreviations. You replied with:
LmbC_untyped/typed.TypLmbC:funcs_appl✅if
input"type"👌.Typ<untyp(🎯)=exprss🔽.TypLmbC:prov➕.SmplTypLmbC:⏹️eval_strat_term
.💡untypLmb-term:may_not_term.🔍variousTypLmbC:do➕(untyp),💪thrm_provs.
Reconstruct the original text.
Model: GPT-4
Temperature: 0.7
Top P: 1
Frequency penalty: 0
Presence penalty: 0
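For anyone who wants to reproduce this, here is a sketch of the API call with the parameters above. The original run used the playground, so the client code is my assumption; it assumes the current OpenAI Python client (pip install openai) and an OPENAI_API_KEY in the environment:

```python
from openai import OpenAI

client = OpenAI()

text = "Lambda calculus may be untyped or typed."  # swap in the full paragraph above

prompt = (
    "Compress the following text in a way that fits a Tweet, and such that "
    "you (GPT-4) can reconstruct it as close as possible to the original. "
    "This is for yourself. Do not make it human readable. Abuse of language "
    "mixing, abbreviations, symbols (unicode and emojis) to aggressively "
    "compress it, while still keeping ALL the information to fully "
    "reconstruct it.\n\n## Text to compress:\n" + text
)

response = client.chat.completions.create(
    model="gpt-4",          # parameters match the run above
    temperature=0.7,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```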
Seriously everyone, this prompt is silly, stop hyping it. This is just GPT trying to make abbreviations; there is no alien language or AI ghost trying to escape here. If you want to play with this, try improving the prompt: remove the Tweet limit, ask it to abbreviate all words, give it some examples of how you want it done. One way this kind of thing can be useful is to pre-define 1-token synonyms (lookup tables) when building interactive prompts, as in the sketch below. That way you can save a lot of space on recursive interactions, which is actually desirable given that these have quadratic costs.
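A minimal sketch of the lookup-table idea; the alias table and phrases are made up for illustration (in practice you'd verify each alias really is a single token for your tokenizer):

```python
# Pre-defined short aliases for phrases that recur across turns.
ALIASES = {
    "simply typed lambda calculus": "@S",
    "untyped lambda calculus": "@U",
    "evaluation strategy": "@E",
}

# Legend sent once at the start of the conversation.
LEGEND = "Abbreviations: " + "; ".join(
    f"{alias} = {phrase}" for phrase, alias in ALIASES.items()
)

def compress(prompt: str) -> str:
    """Replace each known phrase with its short alias."""
    for phrase, alias in ALIASES.items():
        prompt = prompt.replace(phrase, alias)
    return prompt

# Every later message gets compressed. Because each turn re-sends the
# whole history, shrinking repeated phrases pays off quadratically over
# a long session.
print(LEGEND)
print(compress("In the simply typed lambda calculus every evaluation strategy terminates."))
```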
Two things:
First, what happens if you ask it to encode step by step? That could shed some light on how it's done.
Second, isn't it possible to use this kind of feat to compile code? I'm sure part of the future of computer science is to have compilers made by AIs that optimize the s*** out of code. The next step is to have CPU gates optimized by AIs to take advantage of this kind of language and compiled code.
What do you think?