
Some remarks on Large Language Models

Yoav Goldberg, January 2023

Audience: I assume you have heard of chatGPT, maybe played with it a little, and were impressed by it (or tried very hard not to be). And that you have also heard that it is "a large language model". And maybe that it "solved natural language understanding". Here is a short personal perspective on my thoughts about this (and similar) models, and where we stand with respect to language understanding.

Intro

Around 2014-2017, right within the rise of neural-network-based methods for NLP, I was giving a semi-academic-semi-popsci lecture, revolving around the story that achieving perfect language modeling is equivalent to being as intelligent as a human. Somewhere around the same time I was also asked in an academic panel "what would you do if you were given infinite compute and no need to worry about labour costs", to which I cockily responded "I would train a really huge language model, just to show that it doesn't solve everything!". Well, this response aged badly! Or did it? And how does it co-exist with the perfect-language-modeling-as-intelligence story I was telling at the same time?

Perfect Language Modeling is AI-Complete

My pop-sci-meets-intro-NLP talk ("teaching computers to understand language") was centered around Claude Shannon's "guessing game" and the idea of language modeling. It started with AI-for-games, then quickly switched to "a different kind of game" invented by Shannon in 1951: the game of "guessing the next letter". The game operator chooses some text and a cutting point within the text, and hides the end. The players need to guess the first hidden letter in the smallest number of guesses.
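(To make the game concrete, here is a minimal sketch of it in Python. The text, the cut point, and the naive frequency-based guesser are my own illustrative choices, not Shannon's exact protocol; the point is simply that a better language model needs fewer guesses.)

```python
from collections import Counter
import string

def play_guessing_game(text: str, cut: int, rank_letters) -> int:
    """Return how many guesses it takes to find the letter at position `cut`,
    given only the prefix text[:cut] and a ranking strategy."""
    prefix, hidden = text[:cut], text[cut]
    for n_guesses, guess in enumerate(rank_letters(prefix), start=1):
        if guess == hidden:
            return n_guesses
    raise ValueError("letter not in the guesser's alphabet")

def unigram_ranker(prefix: str):
    """A deliberately weak strategy: guess letters by overall frequency in the
    prefix, ignoring context. A stronger language model would need fewer guesses."""
    alphabet = string.ascii_lowercase + " "
    counts = Counter(c for c in prefix.lower() if c in alphabet)
    seen = [c for c, _ in counts.most_common()]
    return seen + [c for c in alphabet if c not in seen]

text = "the players need to guess the next letter"
print(play_guessing_game(text, cut=20, rank_letters=unigram_ranker))
```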

I gave a few examples of this game that demonstrate the various kinds of linguistic knowledge needed to perform well at it, at different levels of linguistic understanding (from morphology through various levels of syntax, semantics, pragmatics and sociolinguistics). And then I said that humans are great at this game without even practicing, and that it is hard for them to get better at it, which is why they find it to be a not-great game.

I then said that computers kinda suck at this game compared to humans, but that by teaching them to play it, we gain a lot of implicit knowledge of language. And that there was a long way to go, but that there was some steady progress: this is how machine translation works today!

I also said that computers are still not very good at it, and that this is understandable: the game is "AI-complete": really playing the game "at human level" would mean solving every other problem of AI and exhibiting human-like intelligence. To see why this is true, consider that the game entails completing any text prefix, including very long ones, including dialogs, including every possible conversation prefix, including every description of experience that can be expressed in human language, including every answer to every question that can be asked on any topic or situation, including advanced mathematics, including philosophy, and so on. In short, to play it well, you need to understand the text, understand the situation described in the text, imagine yourself in the situation, and then respond. It really mimics the human experience and thought. (Yes, there could be several objections to this argument, for example that humans may also need to ask questions about images or scenes or other perceptual inputs that the model cannot see. But I think you get the point.)

So that was the story I told about Shannon's guessing game (aka "language modeling") and how playing it at human level entails human-level intelligence.

Building a large language model won't solve everything / anything

Now, if obtaining the ability of perfect language modeling entails intelligence ("AI-complete"), why did I maintain that building the largest possible language model won't "solve everything"? And was I wrong?

The answer is that I didn't think that building a very large language model based on the then-existing tech (which was just shifting from RNNs/LSTMs to the Transformer) would get us anywhere close to having "perfect language modeling".

Was I wrong? Sort of. I was definitely surprised by the abilities demonstrated by large language models. There turned out to be a phase shift somewhere between 60B parameters and 175B parameters that made language models super impressive. They do a lot more than what I thought a language model trained on text and based on RNNs/LSTMs/Transformers could ever do. They certainly do all the things I had in mind when I cockily said they would "not solve everything".

Yes, current-day language models (the first release of chatGPT) did "solve" all of the things in the set of language understanding problems I was implicitly considering back then. So in that sense, I was wrong. But in another sense, no, they did not solve everything. Not yet, at least. Also, the performance of current-day language models is not obtained only by language modeling in the sense I had in mind then. I think this is important, and I will elaborate on it a bit soon.

In what follows I will briefly discuss the difference I see between current-day LMs and what was then perceived to be an LM, and then briefly go through some of the things I think are not yet "solved" by the large LMs. I will also mention some arguments that I find to be correct but irrelevant / uninteresting.

Natural vs. Curated Language Modeling

What do I mean by "the performance of current-day language models is not obtained only by language modeling"? The first demonstration of large language models (let's say at the 170B-parameter level, à la GPT-3) was (to the best of our knowledge) trained on naturally occurring text data: text found in books, crawled from the internet, found in social networks, etc. Later replications (BLOOM, OPT) also used similar data. This is very close to Shannon's game, and also what most people in the past few decades thought of as "language modeling". These models already brought remarkable performance. But chatGPT is different.

What's different in chatGPT? There are three conceptual steps between GPT-3 and chatGPT: instructions, code, RLHF. The last one gets the most attention but is, I think, the least interesting of the three; still, all are interesting. Here's my hand-wavy explanation. Maybe some day I will turn it into a more formal argument. I hope you get an intuition out of it, though.

Training on "text alone" like a "traditional language model" does have some clear theoretical limitations. Most notably, it doesn't have a connection to anything "external to the text", and hence cannot have access to "meaning" or to "communicative intent". Another way to say it is that the model is "not grounded". The symbols the model operates on are just symbols, and while they can stand in relation to one another, they do not "ground" to any real-world item. So we can know the symbol "blue", but there is no real-world concept behind it.

In instruction tuning, the model trainers stopped training on just "found" data, and started training also on specific human-created data (this is known in machine-learning circles as "supervised learning", i.e. learning from annotated examples), in addition to the found data. For example, the human annotators would write something like "please summarize this text", followed by some text they got, followed by a summary they produced of this text. Or, they may write "translate this text into a formal language", followed by some text, followed by its formal-language counterpart. They would create many instructions of this kind (many summaries, many translations, etc.), for many different "tasks". These are then added to the model's training data. Why is this significant? At the core the model is still doing language modeling, right? Learning to predict the next word, based on text alone? Sure, but here the human annotators inject some level of grounding into the text. Some symbols ("summarize", "translate", "formal") are used in a consistent way together with the concept/task they denote. And they always appear at the beginning of the text. This makes these symbols (or the "instructions") in some loose sense external to the rest of the data, making the act of producing a summary grounded to the human concept of "summary". Or in other words, this helps the model learn the communicative intent of a user who asks for a "summary" in their "instruction". An objection here would be that such cases likely occur naturally in large text collections already, and the model already learned from them, so what is new here? I argue that it might be much easier to learn from direct instructions like these than it is to learn from non-instruction data (think of a direct statement like "this is a dog" vs. needing to infer from over-hearing people talk about dogs). And that shifting the distribution of the training data towards these annotated cases substantially alters how the model acts, and the amount of "grounding" it has. And that maybe, with explicit instruction data, we can use much less training text compared to what was needed without it. (I promised you hand waving, didn't I?)
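(As a toy illustration, instruction-tuning examples might look roughly like the following. The exact formats and templates used by the model trainers are not public; this sketch only shows the general shape, with the instruction token appearing consistently at the start of each example.)

```python
# A hypothetical, simplified view of instruction-tuning examples. The point is
# only that the instruction word ("summarize", "translate") appears
# consistently at the start, paired with the task it names.
instruction_examples = [
    {
        "prompt": "Please summarize the following text:\n<some long article>",
        "completion": "<a human-written summary of the article>",
    },
    {
        "prompt": "Translate this sentence into French:\nThe cat sat on the mat.",
        "completion": "Le chat s'est assis sur le tapis.",
    },
]

# During training these are concatenated and fed to the same next-token
# objective as the "found" web text:
training_texts = [ex["prompt"] + "\n" + ex["completion"] for ex in instruction_examples]
print(training_texts[0])
```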

Additionally, the latest wave of models is also trained on programming language code data, and specifically data that contains both natural language instructions or descriptions (in the form of code comments) and the corresponding programming language code. Why is this significant? This produces another very direct form of grounding. Here, we have two separate systems in the stream of text: one of them is the human language, and the other is the programming language. And we observe the direct interaction between these two systems: the human language describes concepts (or intents), which are then realized in the form of the corresponding programs. This is now a quite explicit "form to meaning pairing". We can certainly learn more from it than what we could learn "from form alone". (Also, I hypothesize that the latest models are also trained on execution: pairs of programs and their outputs. This is an even stronger form of grounding: denotations.) This is now very far from "just" language modeling.
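(Here is a minimal sketch of the kind of description/program/output triple I have in mind. Whether and exactly how the actual training data pairs programs with their outputs is, as said, my speculation.)

```python
# A hypothetical training snippet pairing natural-language intent, code, and
# the code's denotation (its output). The docstring states the intent,
# the function realizes it, and the printed result grounds both.

def mean(xs):
    """Compute the average of a list of numbers."""
    return sum(xs) / len(xs)

# Executing the program yields its denotation:
print(mean([2, 4, 6]))  # prints 4.0
```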

Finally, RLHF, or "Reinforcement Learning from Human Feedback". This is a fancy way of saying that the model now observes two humans in a conversation, one playing the role of a user and another playing the role of "the AI", demonstrating how the AI should respond in different situations. This clearly helps the model learn how dialogs work, and how to keep track of information across dialog states (something that is very hard to learn from just "found" data). And the instructions given to these humans are also the source of all the "It is not appropriate to..." and other formulaic / templatic responses we observe from the model. It is a way to train the model to "behave nicely" by demonstration.

ChatGPT has all three of these, if not more. This is why I find it to be very different from "traditional" language models, why it may not "obey" some of the limitations we (or I) expect from language models, and why it performs so much better on many tasks: it is a supervised model, with access to an external modality, which is also trained explicitly by demonstration to follow a large set of instructions given in dialog form.

What's still missing?

Common-yet-boring arguments

There are a bunch of commonly occurring arguments about language models, which I find to be true but uninspiring / irrelevant to my discussion here:

  • They are wasteful: training them is very expensive, and using them is very expensive.

    • Yes, this is certainly true today. But things get cheaper over time. Also, let's put things in perspective: yes, it is environmentally costly, but we aren't training that many of them, and the total cost is minuscule compared to all the other energy consumption we humans are responsible for. And I am also not sure what the environmental argument has to do with the questions of "are these things interesting", "are these things useful", etc. It's an economic question.
  • The models encode many biases and stereotypes.

    • Well, sure they do. They model observed human language, and we humans are terrible beings: we are biased and are constantly stereotyping. This means we need to be careful when applying these models to real-world tasks, but it doesn't make them less valid, useful or interesting from a scientific perspective.
  • The models don't really understand language.

    • Sure. They don't. So what? Let's focus on what they do manage to do, and maybe try to improve where they don't?
  • These models will never really understand language.

    • Again, so what? There are parts they clearly cover very well. Let's look at those? Or don't look at them if you don't care about these aspects. Those who want to really understand language may indeed prefer to look elsewhere. I am happy with approximate understanding.
  • The models do not understand language like humans do.

    • Duh? They are not humans? Of course they differ in some of their mechanisms. They still can tell us a lot about language structure. And for what they don't tell us, we can look elsewhere.
  • You cannot learn anything meaningful based only on form:

    • But it is not trained only on form, see section above.
  • It only connects pieces it's seen before according to some statistics.

    • ...And isn't it magnificent that "statistics" can get you so far? The large models connect things in a very powerful way. Also, consider how many terribly wrong ways there are to connect words and phrases from the corpus according to statistics. And how many such ways the models manage to avoid, and somehow choose "meaningful" ones. I find this utterly remarkable.
  • We do not know the effects these things may have on society:

    • This is true about any new tech / new discovery. Let's find out. We can try and be careful about it. But that doesn't make the thing less interesting / less effective / less worthy of study. It just adds one additional aspect worth studying.
  • The models don't cite their sources:

    • Indeed they don't. But... so what? I can see why you would like that in certain kinds of applications, and you certainly want the models to not bullshit you, and maybe you want to be able to verify that they don't bullshit you, but these are all not really related to the core of what language models are / this is not the right question to ask, in my opinion. After all, humans don't really "cite their sources" in the real sense: we rarely attribute our knowledge to a specific single source, and if we do, we very often do it as a rationalization, or in a very deliberate process of first finding a source and then citing it. This can be replicated. From an application perspective (say, if we want to develop a search system, or a paper-writing system, or a general-purpose question answering system), people can certainly work on linking utterances to sources, either through the generation process or in a post-processing step, or on setups where you first retrieve and then generate. And many people do. But this is not really related to language understanding. What is interesting, though, and what I find to be a more constructive thing to ask for, is (a) how do we separate "core" knowledge about language and reasoning from specific factual knowledge about "things"; and (b) how do we enable knowledge of knowledge (see below).

So what's missing / what are some real limitations?

Here is an informal and incomplete list of things that I think are currently challenging for "large language models", including the latest chatGPT, and which hinder them from "fully understanding" language in some real sense. These are a bunch of things the models still cannot do, or are at least very ill-equipped to do.

  • Relating multiple texts to each other. In their training, the models consume text either as one large stream or as independent pieces of information. They may pick up patterns of commonalities in the text, but they have no notion of how the texts relate to "events" in the real world. In particular, if the model is trained on multiple news stories about the same event, it has no way of knowing that these texts all describe the same thing, and it cannot differentiate them from several texts describing similar but unrelated events. In this sense, the models cannot really form (or are really not equipped to form) a coherent and complete world view from all the text they "read".

  • A notion of time. Similarly, the models don't have a notion of which events follow other events in their training stream. They don't really have a notion of time at all, besides maybe explicit mentions of time. So a model may learn the local meaning of expressions like "Obama became president in 2009", and "reason" about other explicitly dated things that happened before or after this. But it cannot understand the flow of time in the sense of knowing, if it reads in one text that "Obama is the current president of the United States" and in another text that "Obama is no longer the president", which of these statements follows the other and which is true now. It can concurrently "believe" that "Obama is the current president of the US", "Trump is the current president of the US" and "Biden is the current president of the US" are all valid statements. Similarly, it really has no practical way of interpreting statements like "X is the latest album by Y" and how they stand in relation to each other.

  • Knowledge of Knowledge. The models don't really "know what they know". They don't even know what "knowing" is. All they do is guess the next token in a stream, and this next-token guess may be based either on well-founded acquired knowledge or on a complete guess. The models' training and training data have no explicit mechanism for distinguishing these two cases, and certainly no explicit mechanism to act differently according to them. This is manifested in the well-documented tendency to "confidently make stuff up". The learning-from-demonstration (RLHF) made the models "aware" that some answers should be treated with caution, and maybe the models even learned to associate this level of caution with the extent to which some fact, entity or topic was covered in their training data, or the extent to which the data is reflected in their internal weights. So in that sense they exhibit some knowledge of knowledge. But once they get past this initial stage of refusing to answer and go into "text generation mode", they "lose" all such knowledge of knowledge and very quickly transition into "making stuff up" mode, also on things that they clearly stated (at a different stage) they have no knowledge of.

  • Numbers and math. The models are really ill-equipped to perform math. Their basic building blocks are "word pieces", which don't really correspond to numbers in any convenient base. They also don't have any appropriate way of learning the relations between different numbers (such as the +1 or "greater than" relations) in any meaningful and consistent way. LLMs manage to perform semi-adequately on some questions involving numbers, but really there are so many better ways to go about representing numbers and math than the mechanisms we gave LLMs that it is surprising they can do anything at all. But I suspect they will not get very far without some more explicit modeling. (A toy illustration of the word-piece issue appears in a sketch after this list.)

  • Rare events, high-recall setups, high-coverage setups: By their nature, the models focus on the common and probable cases. This makes me immediately suspicious about their ability to learn from rare events in the data, or to recall rare occurrences, or to recall all occurrences. Here I am less certain than in the other points: they might be able to do it. But I am currently skeptical.

  • Data hunger. This is perhaps the biggest technical issue I see with current large language models: they are extremely data hungry. To achieve their impressive performance, they were trained on trillions of words. The obvious "...and humans learn from a tiny fraction of this" is of course true, but not very interesting to me on its own: so what? We don't have to mimic humans to be useful. There are other implications, though, that I find to be very disturbing: most human languages don't have so much data, certainly not data available in digital form.

    Why is this significant? Because it means that we will be hard-pressed to replicate the incredible English understanding results that we have now for other languages, such as my native language Hebrew, or even for more common ones like German, French or Arabic, or even Chinese or Hindi (and I don't even consider so-called "low-resource" languages like the many African and Philippine ones).

    We can get a lot of data in these languages, but not as much data. Yes, with "instruct training" we may need less data. But then the instruction data needs to be created: this is a huge undertaking for each new language we want to add. Additionally, if we believe (and I do) that training on code + language is significant, this is another huge barrier to achieving similar models for languages other than English.

    Can't this be solved by translation? After all, we have great progress also in machine translation. We can translate to English, run the model there, and then translate back. Well, yes, we could. But this will work only at a very superficial level. Different languages come from different geographical regions, and these regions have their local cultures, norms, stories, events, and so on. These differ from the cultures, norms, stories and events of English-speaking geographies in various ways. Even a simple concept such as "a city" differs across communities and geographies, not to mention concepts such as "politeness" or "violence". Or "just" "factual" knowledge about certain people, historic events, significant places, plants, customs, etc. These will not be reflected in the English training data, and cannot be covered by translation.

    So, data hunger is a real problem, if we consider that we may want to have language understanding and "AI" technologies also outside of English.

    For those of us who want to worry about social implications, this combination of data-hunger and English/US-centrality is definitely a huge issue to consider.

  • Modularity. At the end of the "common yet boring arguments" section above, I asked "how do we separate "core" knowledge about language and reasoning from specific factual knowledge about "things"". I think this is a major question to ask, and that solving it will go a long way towards making progress on (if not "solving") many of the other issues. If we can modularize and separate the "core language understanding and reasoning" component from the "knowledge" component, we may be able to do much better w.r.t. the data-hunger problem and the resulting cultural knowledge gaps, we may be able to better deal with and control biases and stereotypes, and we may get knowledge-of-knowledge almost "for free". (Many people are working on "retrieval augmented language models". This may or may not be the right way to approach this problem. I tend to suspect there is a more fundamental approach to be found. But history has proven I don't have great intuitions about such things.)
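(To make the retrieve-then-generate direction mentioned in the modularity point concrete, here is a minimal sketch of the idea. The toy retriever, the knowledge store, and the `generate` callable are hypothetical placeholders, not any particular system's API.)

```python
# A hedged sketch of a retrieval-augmented setup: the factual "knowledge"
# lives in an external store, and the language model is only asked to reason
# over what is retrieved. Every function here is a hypothetical placeholder.

def retrieve(query: str, store: dict[str, str], k: int = 2) -> list[str]:
    """Toy retriever: rank stored passages by naive word overlap with the query."""
    def overlap(text: str) -> int:
        return len(set(query.lower().split()) & set(text.lower().split()))
    return sorted(store.values(), key=overlap, reverse=True)[:k]

def answer(query: str, store: dict[str, str], generate) -> str:
    """Keep 'knowledge' (the store) separate from 'language and reasoning' (generate)."""
    passages = retrieve(query, store)
    prompt = "Context:\n" + "\n".join(passages) + f"\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)  # `generate` stands in for some language-model call

# Usage, with a stub standing in for a real model:
store = {"doc1": "Hebrew is spoken mainly in Israel.",
         "doc2": "French is spoken in France and parts of Canada."}
print(answer("Where is Hebrew spoken?", store,
             generate=lambda prompt: "<model completion conditioned on the context>"))
```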
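(And going back to the numbers-and-math point above, here is a toy illustration of why word-piece-style tokenization is an awkward substrate for arithmetic. The vocabulary and the greedy tokenizer below are made up; real tokenizers differ in detail, but the inconsistency across nearby numbers is the point.)

```python
# A made-up "subword" vocabulary. Real vocabularies are learned from data,
# but they share the property that numbers get split into arbitrary chunks.
vocab = ["1234", "123", "12", "99", "9", "5", "0"]

def greedy_tokenize(s: str, vocab: list[str]) -> list[str]:
    """Greedily match the longest vocabulary piece at each position."""
    pieces = []
    while s:
        match = next(p for p in sorted(vocab, key=len, reverse=True) if s.startswith(p))
        pieces.append(match)
        s = s[len(match):]
    return pieces

print(greedy_tokenize("1234", vocab))  # ['1234']
print(greedy_tokenize("1235", vocab))  # ['123', '5']
print(greedy_tokenize("1299", vocab))  # ['12', '99']
# Numbers that are adjacent in value get unrelated representations, so
# relations like "+1" or "greater than" are not directly visible to the model.
```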

Conclusion

Large language models are amazing. Language modeling is not enough, but "current language models" are actually more than language models, and they can do much more than we expected. This is still "not enough", though, if we care about "inclusive" language understanding, and also if we don't.

@sherjilozair

I'm surprised you didn't mention grounding in the limitations. Do you consider supervised learning and RLHF sufficient to fully ground these models?

@viveksck

viveksck commented Jan 3, 2023

Very useful overview and agree with most! Is it fair to assume that the limitations characterized as boring may not be boring per se (since that is quite subjective), but may be weak arguments and should not be used to dismiss amazing progress over the last decade? I suppose that research in these directions, e.g. methods to increase training efficiency, could still be fruitful and useful.

@jb747

jb747 commented Jan 3, 2023

The ability to gather the sources responsible for an AI work and assess how much they influenced it might be required to address copyright infringement claims. The broader question of whether artists should be compensated for providing training data may be a societal issue. Maybe Shakespeare's estate should get more royalties than Stephen King or Somerset Maugham for an AI-generated creative work... or maybe none of them do if their influence is judged to be within acceptable limits of copyright law :).

@abesamma

abesamma commented Jan 3, 2023

I think LLMs stand to benefit from being grounded and integrated with public knowledge graphs. They're definitely extremely useful as knowledge tools, in which case, this is of paramount importance. They should not be allowed to force themselves into hallucinating some plausible statement out of thin air. A simple IDK is more useful to me as a response than a bunch of plausible text that I cannot be sure of.

Addendum: @kegl I second that sentiment. If we don't constrain the model's knowledge creation behaviour, we might end up creating a world that relies on a somewhat better form of Galactica for its knowledge. We might as well call it a justified true belief generator with tons of Gettier problem/lottery problem traps embedded in the text.

@nukeop

nukeop commented Jan 3, 2023

Goldberg?

@iolalla

iolalla commented Jan 3, 2023

Hello Yoav,

I really like your ideas and arguments, but I don't agree on the source citing... it's very relevant, in fact it's crucial.

My biggest concern related to ML as we have it today in creative models is that it's very complicated to include new disruptive changes in knowledge when the trained systems work for some time and then start drifting or missing parts, so this pruning work can be more than challenging... How will a GAN forget knowledge when it gets obsolete?

I recommend you review the typos in the text; you have a fork where I've done that.

@hannahbast

Thanks for a very interesting read. There are typos in one of the section headers: "Natural vs Currated Lanagueg Modeling" should be "Natural vs Curated Language Modeling". Feel free to delete this comment after you fixed it.

@carsonkahn-external

Great as always Yoav. I wonder if "solving" knowledge of knowledge might also "solve" relating multiple texts to each other, basically through comparative "reasoning".

@kyleschmolze

Lovely essay! I'd love to offer a differing perspective on this point:

Data hunger This is perhaps the biggest technical issue I see with current large language models: they are extremely data hungry. To achieve their impressive performance, they were trained on trillions of words. The obvious "...and humans learn from a tiny fraction of this" is of course true, but not very interesting to me on its own...

Chomsky's basic argument about linguistics actually makes the opposite case: He says that a human child learns a language so preposterously fast that it's clear that our brains must structurally contain information about language. He also notes the many similarities, or universal patterns, among all languages that exist. Aka. humans are not blank slates - we have brains evolved for language and which come with pre-built patterns for language usage! If you accept that argument, you'd actually say that humans have learned language through evolutionary forces applying to the millions of ancestors and probably trillions of words going back 100,000+ years - probably a much larger data set than chatGPT!

@cedrickchee

I forked and fixed all typos and grammar so it's easier to read.
https://gist.github.com/cedrickchee/054956f6277430ae5a973c61e4a93073

@dcorney

dcorney commented Jan 4, 2023

Nice essay! I'm now (at least half-) convinced that LLMs can ground symbols such as "summarise", "translate", "paraphrase" etc. - symbols about textual manipulation. Presumably, it could learn to ground symbols such as "font" or "colour" with suitable representation. And that's a big step forward in symbol grounding. But is it a step towards general grounding? Can supervised learning help an LLM to ground the symbol "floor" or "chair" or "dog"?

@joehonton

Thanks @cedrickchee for fixing the atrocious spelling.

@yoavg It's not that I can't forgive an honest mistake here and there, but any author that doesn't care about spelling is doing his potential readers a disservice.

OK, I get it that LLMs have to figure out what this guffaw means: "Obame is the current president of the united state".

But really, all these too: barrirer vailable occurances numebrs refuding demonstraiton distingushing acuired statments bulshit scientiic enviromentally wasetful templatic obsere trainign occuring Lanagueg mathmatics smantics gueses imressed?

(At least we can be sure that post wasn't written by chatGPT.)

@ericftmkr

Nice and thought-provoking write up, Thanks @yoavg

@ryancallihan

This was very well thought out and quite measured. Thanks for taking the time to do this.

I think one point I might push back on and potentially discuss is considering "They are wasetful, training them is very expensive, using them is very expensive." as boring. So much focus when this argument is brought up is around training. Yes, you only need to train the base model once, but that is not factoring in 1) (and less interesting) the several rounds of development before then and 2) the environmental cost of companies running these after they are trained.

Yes, things get cheaper to run over time, but there is a lot of evidence / studies which demonstrate that increases in the efficiency of hardware correlate with higher demand. So, from a research point of view for LLMs, the environmental factor is minimal, but finding a more efficient solution which actually lowers emissions is quite interesting and valuable to discuss.

@evariste

evariste commented Jan 5, 2023

Thanks for this. It will be interesting to see how the representations LLMs generate internally develop and how they might be linked to explicit systems of representing knowledge/reasoning, e.g., ontological models, theorem provers etc..

@MiladInk

MiladInk commented Jan 7, 2023

Thanks for the great essay.

Relating multiple texts to each other.

I have seen multiple people translating javascript code to python code with ChatGPT. Doesn't that show these models can relate multiple texts? As I don't think such data has been in the training data much?

A notion of time.

Can't this be fixed by always feeding the model the time at which a text was written?

@jjdonson

jjdonson commented Jan 7, 2023

Can ChatGPT be used to clarify its own evolving abilities and limitations?

An emphasis on HOW and WHY would be fundamentally instructive to all humans.

How might we know when it is lying to us?

How is plagiarism systematically detected?

Hint: https://gptzero.me/

Where can we learn more about symbolism that is "grounded"?
Do those rules not change with CONTEXT?
Is comprehension not about CONTEXT?

@nukeop

nukeop commented Jan 7, 2023

Not gonna click your retarded website, you know nothing about the topic

@dirkhovy

Very nice piece, @yoavgo! I would argue that some of the issues you outline in the last section are related to/follow from the boring "not really understanding language (like humans)" arguments, especially the cultural differences, but also the notion of time and relatedness of texts. Much of understanding language like humans is based in the (social) grounding of it. And the argument for languages other than English is actually also one for addressing the biases (not all biases have to do with racism, hate speech, or stereotypes).

Relatedly, I would also argue that the code production is not really grounding, since programming languages are also a (albeit simpler) language, whereas the grounding you justifiably ask for is external (non-linguistic). Some of that you might get if these LLMs are used in conjunction with sensors or robots, though that would probably raise some other issues (combining two incomplete complex systems does not make one complete complex system).

I fully agree that it will be exciting to see where "simple statistics" can get us, though, and which of these things get addressed.

@pisabell2

What about the difference between "being possible" and "being true"? Language modelling is usually about text that is well-formed, text that makes sense, and not about text that expresses truth. Sure, huge language models like chatGPT do very often produce true statements. But they also produce lots of fanciful statements that happen to be plainly false. For example, if you ask "Were there any good restaurants in Drummondville back in 2020?", the answer will list a number of restaurants that never existed in Drummondville. That answer looks really really good, but it is plainly false.

The LM doesn't know the difference between a well-formed statement and a true statement. And it doesn't know that it doesn't know that. Clearly, language modelling is about modelling language, not truth. While being intelligent does not mean knowing every truth, it does mean being able to make the difference between statements that one believes to be true, statements that one believes to be false and statements that one doesn't know whether they are true or false.

@yoavg
Author

yoavg commented Jan 13, 2023

Thanks for all the comments. A few remarks here:

  • Huge thanks for creating a fork edited for typo fixing! Do you know if there is a simple way to just accept the changes? If not, I will just go over and copy them, but an "accept" or "merge" option would be so much nicer!

  • Many of the comments are around "I disagree that X is boring, it is actually interesting!". I agree with these: the boring aspects are not universally boring. I realize that some people find some of them interesting; I was just listing what was boring to me. But even more than that, some of these topics are also interesting to me, just not in the context of the LLM debate here. Yes, if we want to use LLMs as knowledge sources, they should be able to cite. Yes, there are situations where we certainly want to remove biases or at least be very aware of them. Yes, it would be great to have more efficient models. These are all great areas to explore, but they don't serve, for me, as interesting arguments about LLMs and their ability to model / understand language. They are arguments about other things, such as the LLMs' utility for particular tasks, the ethics of deploying LLMs, etc.

  • Relatedly, "'Not understanding like humans' is not boring, it will solve many of the unresolved issues": I agree it is one way of solving the issues, but not the only way. And sure, if we can incorporate human-like understanding, that will be great. But it is again not central to how I view the capacity / abilities of LLMs.

  • "What about grounding", "aren't there limitations to what can be grounded": I didn't mention grounding in the limitations part for two reasons: (a) I don't know how much grounding these models do or don't have, and I don't know the extent to which grounding is possible. I believe it is partial (can ground some things but not others), but have no way of assessing the size of the two groups. But more importantly: (b) I personally don't find grounding to be all that interesting. People could argue that grounding is essential to achieving AGI/Strong-AI. This may or may not be true (I don't want to get into this debate), but since my intention and interests were never around achieving AGI/Strong-AI, I really don't find this problem to be all that interesting. We can obviously do very many things without grounding. It may be sufficient.

  • "Pre-training is like evolution, the brain was also developed over millions of years and many language utterances": maybe, but the kinds of linguistic abilities that formed in our brains are very different from what is learned in training of large LLMs. In particular, we are not born with factual / cultural knowledge, not born with a complete vocabulary, not born with distributional evidence of all possible verbs, etc.

  • "Knowledge of Knowledge is a major limitation": I agree and discuss this.

  • "Can't X be solved simply by doing Y": it might be. Go ahead and do it, we'll discuss after its done.

@pisabell2

pisabell2 commented Jan 13, 2023

@yoavg What about my comment regarding the difference between (syntactic and semantic) well-formedness and truth? LM's are about the former only and this is one reason why they can be so deceptive. They talk as if well-formedness was enough.

@yoavg
Author

yoavg commented Jan 14, 2023

I think I touch upon this in "Knowledge of Knowledge" and in "Modularity".

@nirbenz

nirbenz commented Jan 15, 2023

The point about data hunger and the problem of training similar models for other languages is fascinating. I wonder if at least some of the "structural understanding" of those models can be adapted to other languages. I guess we'll know at some point, because otherwise it'll simply be impossible to get such models for anything other than English.

@pisabell2

I think I touch upon this in "Knowledge of Knowledge" and in "Modularity".

Perhaps you hinted at it. But I would say that overall you are greatly understating the complete absence of the notions of TRUTH and BELIEF in LLM's. I find it astonishing that you conclude that the most important thing missing from LLM's is "inclusivity" rather than "truth". Let's go for inclusive AI systems that just don't care about truth!

@SivilTaram

Really cool presentation, thanks! Couldn't agree more with the below statement:

I hypothesize that latest models are also trained on execution: pairs of programs and their outputs

In fact, I have one paper, Reasoning Like Program Executors (released in Jan 2022), which does exactly the same thing! I would be very excited if the LLM has done that!

@albertbn

albertbn commented Feb 2, 2023

Maybe natural language is the small grounding layer learned over the huge genetically-rich model a human being is born with? If so, language models are trained on a communication layer rather than on the deeper payload information.
The brain potential a baby has can be conveniently and probably wrongly associated with some evolutionary based model that has plenty of compressed and encrypted power waiting to be revealed.
If this assumption is partially true then the language (or multiple languages), vocabulary and other skills a child acquires can be referred to as instruction/grounding learning, decrypting and revealing parts of the base genetic model and harnessing their power.
It's a fact that many people can live their lives by reading no more than ten books (if not less). And still survive, some of them better than the book-worm types.
Then the resemblance of a LLM with ~170B neurons being tuned over instructions of around 50K is quite strong when compared to the human example.
A quick check with chatGPT suggests that the average human brain has around 100B neurons.

@borisgin

borisgin commented Feb 4, 2023

"How do we separate "core" knowledge about language and reasoning, from specific factual knowledge about "things"". I think this is a major question to ask, and that solving it will go a long way towards making progress (if not "solving") many of the other issues. "
What are the alternatives to an LM integrated with retrieval from some external "knowledge base", which can be made modular?
A large number of small "expert" neural models?

@lukaemon

lukaemon commented Mar 5, 2023

After reading I understand why this gist article could be cited by a paper, Augmented Language Models: a Survey. Thanks for sharing.

@hlzhang109

hlzhang109 commented Mar 25, 2023

Interesting read. Some of the problems in Numbers and Math are actively being solved such as Tool-use models (LLMs+Wolfram). Would like to see more discussions on alignment and compositional generalization.
