lepoint.fr
Yuval Noah Harari (Sapiens) versus Yann LeCun (Meta) on artificial intelligence
Guillaume Grallet, Héloïse Pons
This is the topic of the moment, one that is on the minds of both Xi Jinping and Joe Biden. Vladimir Putin himself said in 2017 that the country that would lead the field of artificial intelligence would be the one that would dominate the world. History has shown how bellicose the intentions of the master of the Kremlin were, to say the least, and it is now impossible to ignore the advances of artificial intelligence.
AI can facilitate space observation and the development of new materials; it can render protein folding visually and drive other medical advances; and it can generate imagery of current events more detailed than what the human eye can perceive. What should we make of this rise in AI's power, probably the most spectacular ascent since the term "artificial intelligence" appeared at the Dartmouth Conference in 1956?
On paper, Yann LeCun and Yuval Noah Harari are complete opposites. One is a researcher, the other a historian. The former sees no reason to panic about the rise of this discipline, while the latter fears it will lead to the collapse of our civilization. Yuval Noah Harari signed the open letter launched by the Future of Life Institute, alongside nearly 30,000 researchers, calling for a six-month pause in the development of tools more powerful than GPT-4, the language model behind ChatGPT's record-fast adoption. Yann LeCun saw in this appeal nothing but an outcry from doomsayers.
The meeting proposed by Le Point was held via video conference between New York and Jerusalem. It allowed Meta's director of AI research, a Turing Award laureate (the equivalent of the Nobel Prize in computer science) and author of Quand la machine apprend: La révolution des neurones artificiels et de l'apprentissage profond (Odile Jacob), and the author of the best-seller Sapiens to compare their views on the promises, dangers and future of AI. Spoiler: we couldn't get them to agree.
Le Point: First of all, what is your definition of intelligence?
Yann LeCun: Intelligence means being able to perceive a situation, then plan a response, and then act in ways to achieve a goal; in other words, being able to assess a situation and plan a sequence of actions.
Yuval Noah Harari: Intelligence is the ability to solve problems. From single-celled organisms looking for food to humans discovering how to fly to the moon, that's intelligence. It's not the same as consciousness. In humans, the two are mixed together. Consciousness is the ability to feel things: pain, pleasure, love, hate. We humans sometimes use consciousness to solve problems, but it's not an essential ingredient. Lots of organisms solve problems without any consciousness, like plants and microorganisms. Machines, too, can be intelligent and solve problems without having any feelings.
Yann LeCun: Not yet, but it will happen.
When? Within five or ten years?
Yann LeCun: It's very hard to predict how long it's going to take. But there's no question in my mind that we'll have machines at least as intelligent as humans. And if they have the ability to plan and set goals, they'll also have the equivalent of feelings, because very often emotions are just an anticipation of outcomes. To plan, you need to be able to anticipate what's going to happen, whether an outcome is going to be good or bad, and that anticipation is a major cause of emotion. As humans, if we anticipate that a situation is likely to be dangerous, we feel fear, which motivates us to explore different options to escape the dangerous situation. If machines can do this, they will have emotions.
It seems that we are still very far from the possibility of a machine gaining consciousness…
Yuval Noah Harari: It's possible, but not inevitable. I define consciousness simply as the ability to have feelings. Once you can feel pleasure, you have consciousness. Self-consciousness is something else: it's the ability to reflect on the fact that you feel emotion, and we are self-conscious only a very small part of the time. Based on this definition, I think it's entirely possible that machines will gain consciousness or have feelings. But it is not inevitable. Machines might be progressing along a different evolutionary path. In the evolution of humans, as well as mammals and birds, consciousness and intelligence have gone hand in hand: we solve problems by having feelings. But there could be other routes for the evolution of intelligence, producing a different type of intelligence that could be superior to human intelligence and still not involve any feeling at all. It's already the case in limited areas like chess or Go. I don't think anybody really believes that AlphaGo is happy when it wins a game, and yet it's more intelligent than humans in this very narrow field. Even with general artificial intelligence, it could be that it will far surpass us in intelligence and still not feel any kind of emotion.
Yann LeCun: There are certainly going to be a lot of systems that we qualify as intelligent. They already exist, whether it's a Go player or a system that drives your car. They don't have feelings. But ultimately, if you want systems to have some level of autonomy and to work by attempting to satisfy a goal, then those systems will probably have the equivalent of emotions, because they will have to be able to predict ahead of time what the outcome of a particular sequence of actions is going to be.
Would you say that with its appropriation of language, ChatGPT is dangerous for our democracy and beliefs?
Yann LeCun: I don't think it's dangerous at the moment, but the reason it could become dangerous is that it's closed, so people cannot understand it and researchers cannot study it. In my opinion, the way to make progress with large language models is to make them open, and you see a number of efforts around the world today to produce models that are open source. This is a good idea from an economic point of view, but also from a safety point of view, and for the progress of research into exactly how to make those things work properly and how to steer and control them. Then, on the deployment side, you need regulation to make sure the products deployed in society have a positive effect. This is the same for any technology: before it goes on the market, it has to pass through some certification process, and there are regulations around it. Now, imagine a future where everyone has an intelligent agent under their control. It's like everybody having a staff of people working for them who are smarter than they are. As a former industry manager and as a university professor, I try to work only with people who are smarter than me; that's a good path to success. So people shouldn't feel threatened by this, they should feel empowered. The same way your car is more powerful than you, imagine a future where everybody is empowered by a staff of intelligent machines that makes them more efficient, productive or creative. That's why I think it will bring a new Renaissance, basically a new Enlightenment.
Yuval Noah Harari: The question is: in whose hands is it? It could potentially be wonderful, but in the wrong hands it could destroy democracy. Democracy is a conversation between people, and what happens if you have agents that can hold a conversation better than any human being, deployed in the service of irresponsible or bad actors? If the conversation between people is hacked or hijacked, which AI is now fully capable of doing, that would destroy the foundation of the democratic system. Imagine you're talking with someone online and you don't know whether it's a human being or an AI bot. Just as social media gave us an arms race for attention ten years ago, we can now see an arms race for intimacy, and intimacy is far more powerful than attention. If somebody comes to trust their AI, then for any question they have, they don't go to Google to search (this is why Google feels threatened), they just ask the AI. You don't need newspapers, because why read the newspaper when you can just ask your AI to tell you what's new? And let's say that 95% of the time it really serves your best interests, so you learn to trust it and you develop a very deep connection with this entity. And then every now and then it turns your political views in a particular direction, because this is the direction it was given by the people who built and operate it. We need to guard against these dangers, because what we've seen previously with recommendation algorithms is nothing compared to the power of such AIs to change people's views on everything, from which products to buy to which politician to vote for.
Yann LeCun: Another question is: what is the proper way to handle this? In the future, all people will interact with the digital world mostly through intelligent agents. The movie "Her" is a pretty good picture of what might happen. If the design of those AI systems is flawed, or the people who control them are ill-intentioned, very bad things could happen. But I think the right analogy for that world is Wikipedia. People trust Wikipedia. A lot of editorial work goes into trying to keep its content in check. So there is a way to do this ethically, and what we end up with is something like Wikipedia, which was built by millions of people. What that means is that the AI systems of the future will basically have to be crowdsourced by billions of people working together. That's the only way to do it, which is why it has to be open. So I don't think the AI systems of the future will be closed and owned by a single company, as we've seen so far. They will have to be vetted by a lot of people, because there are a lot of ways the system can go wrong.
Is it more dangerous to have an intelligent assistant that is 95% correct than one that hallucinates half the time?
Yann LeCun: The current systems are indeed very unreliable. They're very good as writing assistants, but not as reliable sources of information. What's going to happen over the next few years is that the basic design of these systems is going to change, to create systems that are more directly steerable, controllable and factual when required; sometimes you don't want them to be factual, you want them to tell you a story or write a poem, right? There is going to be a lot of progress over the next two to five years. Eventually, those systems will be more reliable than any other method of seeking information. It's going to be a bit like autonomous driving and driving assistance. Currently, you can have driving-assistance systems in your car. They're somewhat reliable on the highway, but not everywhere, and you're supposed to keep your hands on the wheel at all times. Likewise, with autoregressive large language models, you're supposed to keep your hands on the keyboard at all times, because they can say stupid things. They don't have the type of common sense we expect from people, even though they seem to have accumulated an enormous amount of factual knowledge that they can regurgitate. But it's going to be like cars: people will accept completely autonomous cars when those cars are significantly more reliable than humans.
So there is a risk that human beings abandon all their decision-making power to machines. Are you afraid of that?
Yann LeCun: We live in a world where most decisions are already made by machines, but they are not very important decisions. When you do a search on a search engine, the order in which the information is shown to you is determined by algorithms. When you talk to friends who speak a different language, it's translated; it's not perfect, but it can be useful. Every tool can be used for good things and bad things, and you use it for things that are useful. We can use a knife to cook or to kill. In the end, it is always the users who determine what the algorithms do, because these algorithms automatically adapt themselves to the users' desires.
Yuval Noah Harari: It's not a knife: a knife can't decide whether to kill someone or to save a life in surgery. The decision is always in human hands. AI is the first tool that can potentially replace us in decision-making. And the danger here is substantial, because it's likely we will make mistakes, and these could be mistakes on a scale that leaves us no time to learn from them. In the 20th century, humans developed new industrial technologies. We didn't know how to build a good industrial society, so we made some terrible mistakes, terrible failed experiments in how to build industrial societies. Nazism and Communism were failed experiments. The two world wars were terrible mistakes. After destroying the lives of hundreds of millions of people, we learned from these mistakes and eventually learned how to build better industrial societies. But with the new technology of the 21st century, if we make such mistakes again, for instance building a totalitarian regime based on AI, or starting World War III, maybe there will be no way back. The only reason we survived the 20th century is that the technology was not powerful enough to destroy us. The technology of the 21st century is far more powerful. If we make the same kind of mistakes with it, there is a high chance we will not survive to learn from them.
Yann LeCun: The same could be said of all the technological revolutions in history. The printing press destroyed society in Europe in the early 16th century with religious divisions, but it also brought the Enlightenment, rationalism, science, philosophy, and democracy.
Yuval Noah Harari: The printing press fueled the Enlightenment, but also European imperialism and the wars of religion. It wasn't powerful enough, though, to totally destroy humanity. AI is powerful enough. If there is another round of religious wars, like the one between Protestants and Catholics in 16th-century France, but with AI and nuclear bombs, I don't think humanity will survive. The danger is that AI could destroy the foundations of democracy and lead to the creation of new digital dictatorships even more extreme than the Soviet Union.
With deepfakes, is democracy in danger? And does that justify a pause?
Yann LeCun: The amplification of human intelligence by machines will enable a new Renaissance, a new age of Enlightenment propelled by an acceleration of scientific, technical, medical, and social progress thanks to AI. I don't think a slowdown or a pause in research is useful, because I don't think it would pause any danger. To believe it would, you have to make assumptions that are just false, like the idea that the minute you turn on a superintelligent system, it's going to take over the world. That's ridiculous. To deploy large-scale AI systems in the open world, you need to be very careful: you need to do it slowly and under the scrutiny of governments and the public. I think the failures we have seen over the last decade are partly due to the fact that governments react slowly. Several years ago, Facebook went to various governments of liberal democracies across the world and asked them to define what is acceptable content on social networks, because we don't think we have the legitimacy to decide for society what constitutes acceptable content. Should we take down neo-Nazi propaganda? Should we take down Holocaust denialism, which we are required to do in Europe because it's illegal there? And the answer was crickets. The Trump administration said, "We have the First Amendment here, so you're on your own." The only government in the world to respond was the French government under Macron, which sat down and discussed the issue. No law came out of it, but there were discussions about the proper way to define what constitutes appropriate content, and it's a very difficult thing to do, because you have to strike the right tradeoff between free speech and preserving democracy. I think it's the democratic systems that have to decide.
Yuval Noah Harari: What you just described shows that society needs time. Democratic governments failed to respond to your request because they didn't understand its importance or the impact of social media. Now that they understand, the reaction is different. It's normal for human societies, especially democracies, to take decades to understand changes and make up their minds. This is why we need more time. We are now releasing these new AI tools, and it will take years for society to understand their impact on politics, psychology, the economy… We need to slow down so society can identify the dangers and decide what to do about them.
And with new tools like Midjourney, which creates images that seem very real, and Runway, software that helps you create videos that look real, are we facing a new definition of truth? Is it going to become more and more difficult to know what is real?
Yuval Noah Harari: Even before the current AI revolution, we already had this problem. The democratic conversation is collapsing because people, even mainstream parties, are no longer able to reach a consensus. There is a paradox in today's Western democracies. Countries like the United States have the most powerful information technology in history, yet Americans are unable to agree on who won the 2020 election, or on whether vaccines help us or endanger us; some even question whether climate change is real. How is it that a society with such powerful information technology seems unable to agree on the most basic matters?
Yann LeCun: The question is whether this is a new phenomenon, or whether communication technology just makes it more obvious. Some of my colleagues at NYU did a study years ago on who clicks on revolting clickbait. The answer was mostly people aged 65 and over. If you are a little older, you may have more difficulty following the evolution of the modern world, and you can be fooled by it. But the younger generation isn't: the generation that grew up with the Internet takes everything with a huge grain of salt and has a more systematic way of tracing the origin of a piece of information. A lot of the predictions being made ignore the fact that people adapt the way they think.
Some researchers see the rise of AI as a big danger to humanity. Are we already losing control of it?
Yuval Noah Harari: For now, we are still in control, but we might not be in the very near future. From a historical perspective, AI is the first tool in history that can make decisions by itself. A lot of people compare it to previous revolutions, like nuclear energy. But it's not an atom bomb: the atom bomb could not make decisions about how to use the atom bomb. AI can make decisions about how to use AI. It has already been doing so, with Facebook for example: it is increasingly the algorithms that decide which content to show people. And there is the famous alignment problem: you give the AI the goal of maximizing traffic on the platform, and you get the unintended result of maximizing hate within society, because the AI discovered by itself that hate increases engagement. I don't think that social media or its managers intentionally instructed it to increase hate, but the AI discovered that the way to increase user engagement is to push content that spreads hate, anger and conspiracy theories. And this has a huge impact, one that is not easily reversed even if we now change the algorithms. Trump was elected in part because of these algorithms, and you couldn't un-elect him, or undo all the changes he made to American society and to the world. So the question is: what chance do you have of reversing something after it has made a major mistake? And now there is another danger. Previously, the social media algorithms only spread content that was produced by humans. Now AI can also produce the content by itself, and it is the first entity ever that can create new ideas. Printing presses could not write books, radios could not compose music; AI can. So we are dealing with a completely new and very powerful phenomenon in human history, one that can take power away from us.
Yann LeCun: First of all, the Facebook you describe would have been accurate around 2015. That is not what's happening today. Indeed, when you deploy something, you measure the effect, and if the effects are bad, you correct them. Engagement was a logical thing to maximize, and people who were using Facebook were not happy with the result, so the ranking algorithm was completely changed. There have been major redesigns of the feed algorithms to do things like prevent information bubbles and avoid creating economic incentives for clickbait, for example. What is interesting in the context of today's social networks and computation systems is that AI is actually not the problem, it's the solution. If you want a system that accurately takes down hate speech and attempts corrective actions on social networks, AI is the solution.
So we might need the same regulation for AI that we have for nuclear power?
Yann LeCun: AI systems currently require huge resources to create, but in the end they can be used by anyone. So it's not going to be like nuclear weapons. The other big difference is that AI is designed to make people smarter, to augment human intelligence, whereas nuclear weapons are designed to kill people.
Yuval Noah Harari: That's a good question about AI: do we design AI to make people smarter, or do we design AI to control people? It depends on the country.
Yann LeCun: Some countries are probably designing it to control people. But those countries will inevitably fall behind in terms of technology, as the Ottoman Empire did in the 17th century by prohibiting the use of the printing press. In the Middle Ages, the Muslim world dominated science; by the 17th century it was way behind, and it fell further behind afterwards. Something like this may happen in China, which is isolating itself from the AI ecosystem. And if it doesn't give its people access to more powerful artificial intelligence because it is worried about controlling information, it will also fall behind.
Yuval Noah Harari: I hope you're right, but I think it's wishful thinking. The Nazis were far more advanced than the West in, for instance, rockets and jet airplanes. In 1940, Germany defeated France very decisively, even though the economic balance of power favored the Franco-British alliance. So I wouldn't be so quick to bet that democracies will always prevail. We need to take worst-case scenarios into account.
AI can help us create new materials and medicines and create jobs, but it also has the potential to destroy jobs. A study from the University of Pennsylvania found that the vulnerable jobs would most likely be those of mathematicians or accountants rather than cooks or dancers. It's the first time white-collar jobs are more at risk than other professions. Do you see AI as potentially a more destructive force than a creative one?
Yann LeCun: Economists like Erik Brynjolfsson at Stanford say that new technologies displace jobs but also make people more productive, although it takes 15 to 20 years for that productivity gain to become measurable. Technology displaces jobs, but we create it so that it makes people more productive: the amount of wealth produced per hour worked increases overall. The reason it takes so long is that the speed at which technology impacts the economy is limited by how fast people can learn to use it, and so the displacement of jobs is limited by that same speed. It's going to take 10 or 15 years. There are going to be important shifts in certain areas of the economy, and some people are going to have to retrain. There will have to be social plans in countries that know how to do this, which doesn't include the U.S. but does include Western Europe. No question, there's going to be some disruption. But in the end, people are going to be making more money for a certain amount of work, or working less for the same amount of money.
So you see everyone having a job and no rising inequality twenty years from now?
Yann LeCun: New jobs are going to be created and old jobs are going to disappear, the same way industrialization reduced the proportion of people working in food production from 60% to 2%. New professions will appear and replace the old ones. Who would have thought 20 years ago that you could make a good living by producing YouTube videos, building mobile apps or designing websites? Then there's the question of how this wealth is distributed among people. That's a political question; it depends on whom you choose to elect.
Yuval Noah Harari: Old jobs will disappear and new jobs will be created, but I'm worried about the transition. In history, transitions are dangerous. Even if in the long run there will be enough jobs for everybody, lots of people may suffer greatly in the interim. To return to the usual example from Germany: after the 1929 crisis, there were just four years of high unemployment in Germany, and we got Hitler. So time is crucial. The other element is space. Developed countries that lead the AI revolution will benefit enormously and will be able to cushion and retrain their workforces. The big question is what happens in the rest of the world. Take a country like Bangladesh, which relies overwhelmingly on the textile industry: what happens when it's cheaper to produce shirts automatically in Europe than in Bangladesh? Where would you get the money to retrain millions of 50-year-old Bangladeshi textile workers to be AI engineers? So even though I don't think AI will create an absolute lack of jobs, I'm afraid the disruption will be extremely dangerous politically. It's not impossible to solve: you could tax high-tech corporations in the United States and use the money to retrain Bangladeshi textile workers, but I don't see any American administration doing something like that…
Yann LeCun: It's a big problem, but I don't think it's a problem that comes with AI. Technological progress in the past has had pretty much the same effects.
Yuval Noah Harari: AI is different, because it can destroy the need for cheap manual labor, which has been the main economic asset of many developing countries.
Do you think AI can help humans achieve immortality (or, on the contrary, shorten human lives)? What do you think of Elon Musk's proposal to implant chips in human brains to enable us to be as strong as AI?
Yuval Noah Harari: The question is: what do we think about the ability to upgrade humans? I think it would be very dangerous to use AI or brain-computer interfaces to "upgrade" humans and give us longer lives or stronger memories. We humans have always suffered from a big gap between our power to manipulate systems and the wisdom needed to understand those systems deeply. Unfortunately, it is much easier to manipulate than to understand: it is easier to build a dam over a river than to understand the impact it will have on the ecosystem. So we humans start manipulating things long before we understand the consequences of our actions. In the past, we learned to manipulate the world outside us. We learned how to control the rivers, the animals, the forests. But because we didn't understand the complexity of the ecological system, we misused our power; we unbalanced the ecological system, and we now face ecological collapse. In the 21st century we might learn to manipulate not just the world outside us, but also the world inside us. AI and other technologies like genetic engineering might enable us to redesign our bodies and minds, and to manipulate our emotions, thoughts and sensations. But because we don't understand the complexity of our internal mental system, we might misuse that power, unbalance our bodies and minds, and face an internal human breakdown paralleling the external ecological crisis. In particular, governments, corporations and armies are likely to use new technologies to enhance the skills they need, like intelligence and discipline, while having far less interest in developing other skills, like compassion, artistic sensitivity or spirituality. The result might be very intelligent and disciplined humans who lack compassion, artistic sensitivity and spiritual depth. We could thereby lose a large part of our human potential without even realizing we had it. Indeed, we have no idea what the full human potential is, because we know so little about the human mind. And yet we invest hardly anything in exploring the human mind, focusing instead on increasing the power of algorithms. I hope that for every dollar and every minute we spend on developing AI, we spend another dollar and minute on exploring and developing our own mind.
Yann LeCun: It is certain that AI will accelerate progress in medicine; we are already seeing it in medical imaging. AI helps us understand the biochemical mechanisms of life, for example by predicting the conformation of proteins and their interactions with other proteins. I don't believe in the favorite science-fiction scenario in which AI dominates and eliminates humanity. The AIs of the future may be smarter than us, but they will serve us and will have no desire to dominate humanity. As for brain implants, research has been going on for many years. They are very useful in the treatment of certain sight and hearing disorders, and very promising for certain forms of paralysis. In the future, we can imagine treating memory defects by adding an artificial hippocampus. But I doubt that many people will agree to be implanted just to interact with a virtual assistant, at least not in the near future.
What should we be teaching our children to help them survive and prosper in this new world?
Yann LeCun: Knowledge has a long shelf life. I think it's better to learn creative professions, whether artistic or scientific, that require deep skills, and anything that has to do with intrinsic human experience and communication, because that is not replaceable by machines. Imagine you're a good jazz improviser: you are communicating a state of mind. We can make mechanical or electronic jazz improvisations, but what do they communicate? It's the same with art: you can make good-looking art, but what does it communicate? These tools are going to amplify human creativity, not make it obsolete. They will simply make it easier for a larger number of people to be creative and exercise their creativity.
Yuval Noah Harari: It's dangerous to bet on a specific skill. It's best to focus on learning how to learn, and on the psychological tools necessary to keep changing throughout your life. The biggest problem will be psychological: how do you keep changing, keep reinventing yourself, as the job market becomes more and more volatile? AI comes in, maybe a new profession appears, but that new profession also disappears or changes dramatically within ten or twenty years, and you have to reinvent yourself again. So we should teach people how to learn, and also help them build the kind of personality that is able to embrace change, because change will be the only constant in the 21st century. Good luck, Yann, with what you do. I think the future of democracy depends to a large extent on you and other people in the industry. So please do a good, responsible job.
Yann LeCun: This has always been my guiding principle.