A conversation with OpenAI's CEO Sam Altman - Hosted by Station F (Paris) - Verbatim


Sam Altman at Station F - 26 May 2023 - Image source: the author

I was privileged to attend a fireside chat with Sam Altman, OpenAI’s CEO, on May 26 at Station F in Paris. The conversation was moderated by Roxanne Varza, the Director of Station F. For those unable to attend, I've compiled a selection of verbatim questions and answers from the session in an easy-to-navigate format.

While Sam Altman’s European tour is well covered in the press, I think it is worth presenting the 'raw' conversations on these topics, which are clearly of public interest. Some headlines may give the impression that Altman's positions are inconsistent. But actually listening to Altman, whether or not you agree with all his views (I don't), it is clear he’s not just spouting nonsense wherever he goes, and his positions are a little more nuanced than that.

For transcribing this conversation, I used Whisper, a tool provided by OpenAI. So, all the transcription errors can be attributed to them! I decided to preserve the occasional grammar, syntax, and transcription errors in the questions for authenticity. If you notice any glaring mistakes, do let me know. I’ve removed personal information and introductions from the audience's questions, except when the occupation added relevance.
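
For anyone curious about that step, Whisper can run locally in a few lines of Python. Here is a minimal sketch using the open-source openai-whisper package; the model size and audio filename are illustrative assumptions, not the exact settings used for this transcript.

```python
# Minimal sketch with the open-source openai-whisper package
# (pip install openai-whisper; requires ffmpeg on the system).
# Model size and filename are illustrative, not the settings
# actually used for this transcript.
import whisper

# Load a pretrained checkpoint; larger models ("medium", "large")
# transcribe more accurately but run slower.
model = whisper.load_model("medium")

# Whisper detects the spoken language automatically and returns
# the transcript along with per-segment timestamps.
result = model.transcribe("fireside_chat_station_f.mp3")

print(result["text"])
```

Larger checkpoints trade speed for accuracy; for a noisy conference recording, "medium" or "large" is usually worth the wait.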

For clarity and easier navigation, I've organized the questions into categories. Although some questions could fit into multiple categories, I chose the one that seemed the most fitting, but it’s not an exact science.

Thanks to Sam Altman for answering our questions, Station F for organizing the conversation, and OpenAI for providing valuable tools like Whisper and ChatGPT. I used these tools for formatting this gist, and it's only fair to acknowledge when they boost productivity.

The French AI Landscape

On meeting Emmanuel Macron

Roxanne Varza: You met the French president a few days ago, tell us about it.

Sam Altman: It was great, we talked about, as the minister said, we talked about how to get the balance right between protections on the new technology and letting it flourish and have all the positive impacts, and how France can make sure that France is sort of a critical part of the success here.

The place of France in the AI landscape

Roxanne Varza: You've been going from country to country for the last 2 weeks [...], how does France measure up in what you've seen so far?

Sam Altman: The reason for doing this trip is to get out of this sort of Bay Area tech bubble and talk to people about how they're using the technology, what they want, how they're thinking about AI, what their concerns are.
And France has been a super interesting case study in a country that is far more advanced in thinking about this technology and adopting it than most other places.
And also really, really trying to get this balance right. Most other countries, I would say, have been more on the like, either 'this is totally amazing, all of it' or 'we're really scared about this'; this is an interesting balance. And the AI talent and the engineering talent of course here is super, super impressive.

Impact of AI: Society, Education, Work

AI's Impact on Higher Education and Future Careers

Audience: Hi, I just finished high school. We had ChatGPT the last year. We had some teachers who were like, 'this tool is great, use it. You don't need your textbooks anymore, ChatGPT will teach it to you'. Some teachers were like, 'Don't ever use this, you're not going to learn anything'. I'm about to enter college and start my career. Do you have any advice for how AI is going to change college for me and how it's going to affect my career in the future?

Sam Altman: The most confident thing I can say is that the rate of change in the world is just going to be much higher, and so, you know, we've had a reasonably static, like, your educational experience is probably reasonably static and then in six months, it changed a lot, and I think that's a great example of what you should be prepared to have in your career a lot.

You know, in the example of your education, I don't think it works for teachers to just say 'Don't use ChatGPT'. I think that would be like math teachers, for a while they were saying 'Don't use calculators'. The answer is use calculators and hold students to a higher level, teach them higher-level concepts, start doing other things.

And I think anyone who wants to be competitive in the world and on the sort of leading edge of human capability will find that they've got to use ChatGPT, but then they're able to do things that were unimaginable. You know, in ten years, the stuff people do will be amazing by today's standards, and again, that's the course of better tools. So I think the answer is to embrace it and figure out how to up-level everything. I would go do that in college, I would try not to pay too much attention to people who tell you you shouldn't use ChatGPT. That's happened with every technological revolution. I think it's the wrong bet.

Addressing Bias in AI

Audience: I would like to talk about biases, because I think it's something that comes up a lot, especially in the AI world and especially in the education system, where actually children are involved in using this tool to educate themselves. What are you going to do to cope with biases in general while training the algorithm?

Sam Altman: So I was very worried about this for a long time. We used to talk a lot about how we were ever going to get an unbiased training dataset and what that would mean. Because I think it's very hard for any two people to agree that a system is unbiased. One of the positive surprises for us has been how well RLHF, our post-training stuff, works for bias in particular. There were some papers maybe two or three weeks ago that came out looking at the bias in GPT-4 and finding that on implicit bias tests, it's less biased than humans. And I think that's going to be something we see more and more of. When we can explain to these models what it is to be unbiased, they will not have the psychological flaws or history that a human does. And we'll find out that they are less biased and a force for reducing bias in the world. And then an individual will be able to control a model to a high degree.

Addressing Copyright and Authenticity in AI-Generated Art

Audience: I'm working now on ChatGPT and DALL-E for my art creation project. So, my question is, how does OpenAI deal with copyright issues and authenticity of the generated work? What measures have been put in place to prevent copyright infringement and guarantee the original creation policy?

Sam Altman: Okay, a few thoughts there. One, I think these models are meant—we want them to be a reasoning engine, not a database. So, I think there's some confusion about this because they're capable of both, but we don't want to make models that are, you know, used to regurgitate copyrighted material.

I think the right thing to do, even though there's like good fair use law in the US that we understand, is find a way to help content owners benefit, so that maybe we say we train this model that learns to be good at reasoning. It can be trained on content that we can cleanly license, and also on synthetic content, which I think will be a huge part of the future. And then it can call out to copyrighted content and access it if it's necessary for a particular question.

And there are many ways that you could see compensating copyright holders, or of course, they could choose not to have it. But again, if the goal is that we make the reasoning engine, and it can point out to content under a number of frameworks, or it can compensate you if you want, say, the model to be—if you're an artist and you want the model to be able to generate something in your style—I think that's all reasonable.

But I do think it's really important to understand this point, that we don't want to be a database. We don't want to store a ton of content. We want to build a system that can think and access other content.

Using AI to address the climate issue

Audience: I wanted to ask, how can you use AI and OpenAI to address the climate issue?

Sam Altman: I think there are things to do here, but I think mostly we should accelerate progress towards fusion, in particular, and also other sustainable forms of energy. And we don't need AI for that. It's nice to throw AI at every problem. And there are some things, you know, you can use it to optimize like load draw and cooling. But I think we have, I think we are close to major technological solutions.

AI's Impact on Journalism (Le Monde CEO's Question)

Louis Dreyfus (CEO of Le Monde): Hi, I'm Louis. I'm CEO of Le Monde, a French publisher. My business model is to pay talented and expert journalists to produce exclusive content and to have people pay for reading it, right? When I discussed today with other publishers in France or abroad, they see AI as a technology that can produce content without any human intervention and make it available for free to everybody. So, can you help me not to be frightened by what's going on here, and what can be my business model in the very near future?

Sam Altman: I think you have one of the great newspapers in the world, and I don't think it's going to be replicated by AI writing the stories anytime soon. But I bet your journalists that use this to help them in their creative process will find that they can be more effective, do better investigations, come up with better ideas, and spend more time doing the things that make it such a great source of information and a thing to read in the world.

So what I would say is, what if each of your journalists had a team of 100 people working for them in different areas? I think you would say, alright, it'll be better, it'll be a better news org overall. And that's what I think is going to happen. But there's something deep about human taste and humans knowing what other humans want and asking questions, and also the sort of people like other people, like people want to know what a particular journalist that they think highly of thinks on an issue.

Even if there could be a great novel written by an AI, I think there's something that people want about that backstory of the human and the kind of connection with the person who created it, in whatever way that works. It's just a different category of thing. So I think you have to adapt, but I don't think the fundamental concept of journalism and humans doing journalism and deciding what to write about, doing the story, doing the work is going anywhere. I think it'll get way better.

Great question.

Student Plan for ChatGPT

Audience: I'm a third-year engineering student. [...] And I have a few quick questions for you: Why not make ChatGPT free for the students?

Sam Altman: Well, if we have, when we have more compute, I'm definitely interested in doing some sort of student plan. Right now we cannot nearly serve the people who want it, and we have a bunch of other problems there. But yeah, I'd expect us to have a student plan over time. [...] We will at some point do a student plan, yes.

AI and Development in Emerging Markets

Audience: I have a question regarding mainly the emerging markets. As someone from the emerging markets, I have seen how highly disruptive technologies allow us to skip a lot of very costly steps to develop our economies. In the way that mobile eliminated the development of expensive landline infrastructure, what do you think will be the biggest step that we could skip altogether with this kind of technology? Thank you.

Sam Altman: Yeah, I think that this is a technology that is going to help the developing world even more than the developed world. I mean, it's going to lift everybody up, but there will be a hugely dramatic impact on less developed economies. The price of cognitive labor, the access to it—richer countries already have more—and as that gets equalized, it will have a huge impact, and it will allow people to sort of skip a lot of the institution building you have to do gradually and just say, 'All right, now everybody can access very high quality, very high-end cognitive services.'

And even what I've noticed so far on this trip is developing economies are just embracing this and getting big benefits from it very fast.

Safety Concerns and Risk Management

What does a world look like where it goes very wrong?

Roxanne Varza: People are getting really negative and freaked out about AI and what it can do. We've had people calling for a six-month break, and I don't know if that's even possible. And you even mentioned that if this technology goes wrong, it can go very wrong. What does a world look like where it goes very wrong?

Sam Altman: I mean, pick your favorite sci-fi book. I think none of them are right, but they at least illustrate the creativity of humanity in the ways this could go wrong. You know, there's a lot of stuff I think we can imagine in the short term—disinformation campaigns and persuasion, influence on elections, computers being hacked at sort of mass scale. Again, I'm happy to talk about this all day.

I have a feeling more people here are interested in some of the upsides we can do now. I think it's super important, but I think everyone can kind of see the downsides. And one thing I've been surprised about is how in sync people around the world are and what we should do to mitigate those. So, I think there's intense shared desire, but also pretty good cohesion on what we should do about it. I think we'll be able to mitigate those. Pretty optimistic.

Unforeseen Safety Challenges and Mitigation Strategies

Audience: With millions of users now using GPT-4 and doing everything they can to jailbreak it, have you discovered new safety challenges that you hadn't anticipated? And what are you doing about it?

Sam Altman: So, one of the reasons that we deploy these systems at all, aside from the tremendous benefits and good use we see people doing with them, is to discover what the actual safety issues are in practice. And we do find new jailbreaks, and also very subtle things that are not quite like a jailbreak but still have some negative effect, or things that we thought we really needed to prevent which turn out not to be a real challenge.

So, we are continuing to learn about the shape of the technology, the risks, where we need to address new jailbreaks, where we also need to allow more flexibility. And without deploying, without contact with the real world, I think you would never get this right.

Giving examples of balancing profit and AI alignment

Audience: Some of my questions are going to be a bit tough. So, OpenAI has said it often that its main goal is to prevent misalignment of AI. It is a capped-profit company, as far as I understand. I think both the goal of making revenue and profit and preventing misalignment of AI, they overlap sometimes, but they don't always overlap. And I want to ask you, can you give me examples where you sacrifice profit or revenue to improve the safety of AI?

Sam Altman: Sure, a lot. First of all, hard questions are great. I would love more of them. Second of all, our mission is not just alignment. Our mission is broad distribution of benefits. Part of that is to get alignment, but if all we were really trying to do was build the safest AI, maybe we wouldn't build anything, maybe we wouldn't release it. A big part of this is to get the benefits into society and to do that we have to confront the risks and the challenges first.

One example of this, if we were trying to maximize profits, we would allow a lot of things that we don't. But one is adult content. So this is some significant fraction of what people are using other language models for that we don't allow. Of course, we have no problem with consenting adults generating adult content, but we can't reliably say we can be confident that we will stop any child sexual material from being generated while allowing adult content. We could get pretty close. We could get close enough that I think most companies would do it. But from a minimize harm perspective first, we just don't allow that at all. And it would be a massive amount of usage if we allowed it.

What are the three main dangers in AI according to you?

Audience: So my question is, we are talking a lot about the benefits of AI, but we can see apps like Replika to make your virtual AI friends. We are also talking about killer robots, and that's kind of very scary for a lot of people. So, what are the three main dangers in AI according to you? Thank you.

Sam Altman: Yeah, it's... look, I think it would be strange not to have some fear here. I think there's something deeply human about that with any new technology and certain mechanisms that we do. I will give you the honest answer, which is I cannot precisely articulate what the three biggest dangers are going to be. I think anyone who tries to do that or anyone who says that with certainty right now is naive at best.

This technology and society will co-evolve. People will use it in different ways, different reasons. We'll find things that didn't seem that scary that turn out to be scary. And we'll find things that we were really worried about that turned out not to be problems. And I think this... this, you know, so we can say things like, 'Alright, medical advice, that's super high risk. So let's put that in a high-risk category and, you know, we'll be very careful there.' But then we find that there's actually tremendous benefit and you can actually save a ton of lives if you allow certain medical access. And that the downsides weren't as bad as we think.

You could say something that sounds harmless like Replika, although honestly, I personally feel misgivings about a world where we all have more AI friends than human friends. I don't personally want that. If other people want that, I understand it. I do think we have to study the impacts of that and be thoughtful. But that sounds low risk, or at least some people think that's low risk. Maybe we find that AI's biggest danger is that it can persuade people and it can really kind of make us all mad at each other. And that wasn't obvious at the time but turned out to be a big risk.

So, I can point to you the obvious risks, computer security and bioterror, the ones that people use a lot. But I would say as we jump into all the benefits here, and I do think the benefits will be tremendous, keeping an eye out for the non-obvious risks and having a very tight feedback loop is how we're going to have to successfully navigate this. And if we can do that, I think we'll be great. I think we will get to incredible benefit in the world. Many of you are building it right now. It's super inspiring to see. But that's the difficult thing we have to balance. We cannot predict the most serious risks right now and be confident we're right.

ChatGPT and misinformation

Audience: Hi, I'm a journalist, and I completely understand what you said to the gentleman from Le Monde. And I think it can help journalists to improve their content, but at the same time, we see tools like this are sources of misinformation. And I'm working on giving access to everyone in a democratic and non-democratic society to trustful content. So far, what's happening is that we in the civil society run after organizations like you and criticize and try to reduce risk and damage. Only there is a lot of talking about you, but barely with you. And I'm wondering if we can think of regulation and safeguards from the other end. So, safety by design, something that is built in before launching or right after launching in order to help us. Because I don't understand. I have to learn before I can put anything in place.

Sam Altman: After we finished training GPT-4, we spent eight months figuring out how to make it safe, figuring out how to align it, internal and external red teaming, external audits, developing safety standards to test on. And while we were doing this, it kind of leaked out that we had it because you've got to have contact with people. And there were a lot of people pushing on us, saying, 'Release it now, we deserve this.' And we said, 'We understand why you want it. We're trying to get it out as fast as we can. But we want to make it as safe as we can figure out before we release it.' And that's important with new technologies. Future models may take us even longer to do.

We invented a lot of good new technology in the course of that. We trained the model to refuse things we don't want. We built monitoring after the fact and a whole bunch of other things in the stack. And the thing I would ask for with people is patience and support as we do that. As the technology gets more powerful, we expect to have a higher and higher bar before we release it. Other companies may have different approaches, but that's what we're going to do.

And I think this is important. I also think it's important to deploy, to learn about it. There are many ways that disinformation could have gone, that these could be used. And we need contact with reality. We need to see what's actually happening, how people use it. And society's got to decide where they want to set the risk and benefit trade-off on that, which we have done with other technologies.

But I believe that by deploying, by giving people time to gradually update, think about this, to learn, and to adapt, we can have a feedback loop where civil society, the government, the public as a whole decides what the limits of this should be. So that's kind of our approach.

OpenAI Strategy, Management, Teams

How Sam Altman and OpenAI use ChatGPT

Audience: I would really love to know how do you use such tools as ChatGPT in your everyday life, especially with maybe product development. Do you have routines and habits already established? I would be very interested to know.

Sam Altman: So, I sadly don't get to actually do the product development work at all anymore. But definitely the most important thing that OpenAI uses it for is people that are writing code. It's like an integrated part of their workflow.

Summarization is a big thing for me. On this trip, the last two weeks, the translation has been super great. I actually had not used it much for that, and I'm amazed how good it is at translation.

When I get stuck writing something, I had to write a blog post last week and I just didn't have a lot of time and I couldn't get into the right headspace to start. And so I asked it to help draft the first sentence for me, the first paragraph maybe. But sort of like lots of other things too, I leave it up on the computer screen and try it for stuff.

How Sam Altman is able to create such a productive team?

Audience: You created an awesome team as well. I think you are around 400 people, something like that, and you managed to create something so big. What are the secrets behind actually this awesome team, and how much advice would you give us actually to be able to create such a productive team?

Sam Altman: Thank you for saying that. Actually, for me, the most enjoyable part of the job is working with such an incredible group of people. I would say we created something great not in spite of the fact that we were small or are small, but because of it. You know, other AI labs have hugely more people than us, and I would not trade places. I think talent density really matters. I also think the best people want to be around other great people, and medium-talent people in an org are like neutron absorbers. They slow the whole thing down more than you want.

So what we've tried to really be the best in the world at is highest talent density, the most clarity and focus on what we work on, and high conviction bets. And that is not how most research labs work. Most research labs are kind of you hire anyone that seems reasonably good, you try many things with a little bit of effort, and you don't take big, risky bets with the whole org.

So I think we had a very strong vision. We were willing to be misunderstood for years. Like now, everybody talks about AGI, but when we started, you know, eminent AI scientists would say, 'Well, it's irresponsible to talk about AGI. That's decades away. These are, you know, kids who don't know what they're talking about.' But we just stuck to the courage of our convictions and said, 'We don't know how we're going to solve this, can't give you an answer on what our plan is yet, but we're going to work super hard to figure it out.'

And then the last thing I would say is I think we have an unusual culture of sweating every detail. And for AI development in particular, and for doing this kind of scale research, that's very important. It means that we get every little component right of a system. We don't have little bugs that usually, you know, people say, 'Well, I could do that too. It's just a scale-up,' but it turns out to actually do it is tough. And it means we're willing to think about the entire stack. So we take research very seriously, but also how we're going to make a product out of this, how we're going to talk to people about it, how we're going to handle trust and safety, how we're going to think about what comes next. And that sort of taking every aspect of the system seriously is an important part of it.

But most of all, I would say you just have to do it. I think this is one of the really great things about Silicon Valley: because failure is tolerated, and ideas that are ambitious to the point of ridicule are also tolerated, it means you have people shoot for very aggressive things. And if you don't have a culture that really supports both of those things, then there's like a cap on the ambition level that you can get a group of people to go for.

Competitions between AI Labs: Driven by ego or desire to help humanity?

Audience: Quick question, so FAIR just released their MMS as a competitor to Whisper. What do you think about the competition between big AI labs? And do you think it's mainly driven by ego, or genuine desire to help humanity? Or at least, what part of it is driven by ego?

Sam Altman: I mean, again, I think people competing with each other to make better and better models that everybody gets to benefit from is awesome. I think that's part of how progress happens. There's some ego caught up in there for sure. There's like a famous quote, 'Science wouldn't get done without any ego' or without a lot of ego, and you know, sure, that's fine. But, like, as long as we're not competing on safety, or as long as we're not sort of competing in a way that puts safety at risk, people competing to make better models while raising the bar on safety, I think that's great. And if ego is part of that for some people, fine.

On Open Source models

Audience: A few days ago, an internal note from Google leaked, revealing that the AI giants are worried by the new open-source models, which are slowly bridging the gap with the proprietary models. What's your feeling about it? And will OpenAI release a small open-source model as it used to?

Sam Altman: Yeah, again, our mission is to maximize the benefits of AI in the world, AGI in particular, and also figure out safety. So we cheer on open-source models. There are versions of that we'd be concerned about, but I think we're very far away from that power threshold. We have open-source models. You know, we did CLIP and Whisper. I expect we'll do more in the future. We've got some things to think through there to be able to do that in a safe and legal way. But we're not against it, certainly, and we're happy to see the innovation in the industry.

On the Partnership with Snap

Audience: I'm working at the AR studio at Snap, and your technology has been integrated with Snap for a few months, so I just want to know - what have you learned about this partnership?

Sam Altman: I'll say something nice about Snap that I've learned, which is, it's inspiring to see a company as big as Snap able to move as quickly as you all are. It gives me hope about the future of OpenAI. I think Snap has done an unbelievable job of nimbleness at scale. So, I didn't learn anything about AI there, but learned something important about companies.

AI Regulation

AI Regulation in Europe

Roxanne Varza: And so now we're in Europe, regulation is our favorite topic. How do you feel it's going?

Sam Altman: I think it's going to get to a good place. I think it's important that we do this. I think regulatory clarity will be a good thing. I think protecting people from new technology like this, or with new technology, is important. You know, there's... The conversations have been very productive. We very much intend to work and get any input we can and comply with the regulation itself. I suspect it'll still evolve a lot. And even once it's written, given the way this technology... the rate at which this technology is changing, I expect it'll evolve again.

When work on the Act started, generative AI was not a thing at all, just as an example of how fast this world moves. And whether we think of... The way we think about where risks are, what the right framework is, will depend on how the technology evolves. And that is still an open scientific question to some degree.

Plans for Opening OpenAI Offices in the World and EU Regulation

Audience: I just wanted to know, you're doing a world tour right now. Do you have firm plans to open other offices in the world? What would you be looking for in terms of regulation? Can you specify what you said about EU regulation, and whether it could lead OpenAI to cease operations in the EU?

Sam Altman: We like to be working together in person. We will open other offices around the world, but probably not very many, and probably slowly. Definitely on this trip, we're thinking about where it makes sense. And we want to do that, but we're still a small company. And it's horribly unfashionable, but we still believe in in-person work. And I think for the kind of research we do, that's helpful.

So, you know, the model that a lot of companies our size do of three people here, five people there, that's probably not what works best for this kind of research. There are things we do do around the world. We invest in companies around the world, for example. And we will open offices as we get bigger, but we'll be thoughtful and probably slower. Because we're a different kind of company, research is different than other kinds of companies I've seen before.

In terms of the EU, like, yeah, we plan to comply. We really like Europe, we really want to offer services in Europe. We just want to make sure we're technically able to. And, again, the conversations need to be productive.

The Challenges of Regulation of AI on a global scale

Audience: My question is about regulation. So, on the kind of pessimistic end of the spectrum, the kind of doomsayer alarmist take is that AI is fundamentally driven to eventually become autonomous. And if there are going to be a wealth of models out there in the world, and more and more of them, do you think that it is genuinely, authentically possible to regulate on a global scale? Because it seems that, I was assuming that it's not, sooner or later, if there's the potentiality for an AI model to actually become quite autonomous, and it seems like that could happen anytime, anywhere with any of these models. So, how fearful are you that things could become complicated as there's this kind of global proliferation?

Sam Altman: So, I think it's impossible to stop the proliferation of smaller and weaker models, and I think that's fine. I mean, there's going to be challenges that come with it, but I think the benefits will be orders of magnitude greater. I think people are, basically, mostly good. People do incredible things with incredible tools. There are some harms, for sure, and we'll address those as we go.

I think the question is, can we avoid the existential risks with a global regulatory framework on the forefront? And, I think I definitely believe that the most powerful models are capable of the most harms, and if we can work really hard at getting those right, studying the problems, it gives society some time to adapt to, you know, the giant pack that will come behind the frontier. Definitely, we're going to face challenges, and definitely there are going to be instabilities in the world because of this huge proliferation that's going to happen, but if we watch the forefront, if we watch the true things capable of superintelligence closely, then I hope we can avoid the most grievous harms.

Regulations that might hinder ChatGPT

Audience: One question that I have for you is around regulations. I know we continue to talk about this. I'm just curious, what are the top two or three regulations, either being proposed right now or that you foresee you might have to deal with in the future, that you're dreading because you think they might hinder the overall advancement of ChatGPT? I know that there are some regulations that you might be dreading, or you think that you might not even be able to adhere to.

Sam Altman: Actually, I think most of the regulations being proposed, or at least maybe we have a biased sample because people talk to us about what we're known for, what we think about. But the things we're mostly hearing about, licensing frameworks and safety standards, I think make total sense. And I'm very happy about it. The kind of things that say, that maybe don't have an understanding of how generative AI in its current state works, and say, you've got to be able to meet this guarantee 100% of the time, those we say, well, here's the limitations of the technology and what we have to do to be able to meet that. Or if someone says, you need to make a model that can never be jailbroken under any circumstances, you can say, we honestly don't know how to do that yet, but here's what we can do. But yeah, it's mostly been quite productive.

@gilbaogit

Hello, thank you for this work !
I was wondering if you have also saved the part where he talked about the integration of ChatGPT into Snapchat? I would love to read his response again as I didn't pay full attention (I asked the question). Thanks!

@adnanelhb (Author)

Sure! Added it to the 'OpenAI Strategy, Management, Teams' section.

@gilbaogit

Awesome! thanks

@Mika0303

Awesome work. Thanks a lot for sharing this as I didn't get a chance to attend :))
