Who Is Mira Murati, OpenAI’s New Interim CEO?

Until the dramatic departure of OpenAI’s cofounder and CEO Sam Altman Friday, Mira Murati was its chief technology officer—but you could also call her its minister of truth. In addition to heading the teams that develop tools such as ChatGPT and Dall-E, it’s been her job to make sure those products don’t mislead people, show bias, or snuff out humanity altogether.

This interview was conducted in July 2023 for WIRED’s cover story on OpenAI. It is being published today after Sam Altman’s sudden departure to provide a glimpse at the thinking of the powerful AI company’s new boss.

Steven Levy: How did you come to join OpenAI?

Mira Murati: My background is in engineering, and I worked in aerospace, automotive, VR, and AR. Both in my time at Tesla [where she shepherded the Model X] and at a VR company [Leap Motion], I was doing applications of AI in the real world. I very quickly believed that AGI would be the last and most important major technology that we built, and I wanted to be at the heart of it. OpenAI was the only organization at the time that was incentivized to work on the capabilities of AI technology and also make sure that it goes well. When I joined in 2018, I began working on our supercomputing strategy and managing a couple of research teams.

What moments stand out to you as key milestones during your tenure here?

There are so many big-deal moments, it’s hard to remember. We live in the future, and we see crazy things every day. But I do remember GPT-3 being able to translate. I speak Italian, Albanian, and English. I remember just creating paired prompts of English and Italian. And all of a sudden, even though we never trained it to translate into Italian, it could do it fairly well.

You were at OpenAI early enough to be there when it changed from a pure non-profit to reorganizing so that a for-profit entity lived inside the structure. How did you feel about that?

It was not something that was done lightly. To really understand how to make our models better and safer, you need to deploy them at scale. That costs a lot of money. It requires you to have a business plan because your generous nonprofit donors aren’t going to give billions like investors would. As far as I know, there’s no other structure like this. The key thing was protecting the mission of the nonprofit.

That might be tricky since you partner so deeply with a big tech company. Do you feel your mission is aligned with Microsoft’s?

In the sense that they believe that this is our mission.

But that’s not their mission.

No, that’s not their mission. But, it was important for the investor to actually believe that it’s our mission.

When you joined in 2018, OpenAI was mainly a research lab. While you still do research, you’re now very much a product company. Has that changed the culture?

It has definitely changed the company a lot. I feel like almost every year, there’s some sort of paradigm shift where we have to reconsider how we’re doing things. It is kind of like an evolution. What’s more obvious now to everyone is this need for continuous adaptation in society, helping bring this technology to the world in a responsible way, and helping society adapt to this change. That wasn’t necessarily obvious five years ago, when we were just doing stuff in our lab. But putting GPT-3 in an API, in working with customers and developers, helped us build this muscle of understanding the potential that the technology has to change things in the real world, often in ways that are different than what we predict.

You were involved in Dall-E. Because it outputs imagery, you had to consider different things than a text model, including who owns the images that the model draws upon. What were your fears, and how successful do you think you were?

Obviously, we did a ton of red-teaming. I remember it being a source of joy, levity, and fun. People came up with all these creative, crazy prompts. We decided to make it available in labs, as an easy way for people to interact with the technology and learn about it. And also to think about policy implications and about how Dall-E can affect products and social media or other things out there. We also worked a lot with creatives, to get their input along the way, because we see it internally as a tool that really enhances creativity, as opposed to replacing it. Initially there was speculation that AI would first automate a bunch of jobs, and creativity was the area where we humans had a monopoly. But we’ve seen that these AI models actually have a potential to really be creative. When you see artists play with Dall-E, the outputs are really magnificent.

Since OpenAI has released its products, there have been questions about their immediate impact on things like copyright, plagiarism, and jobs. By putting things like GPT-4 in the wild, it’s almost like you’re forcing the public to deal with those issues. Was that intentional?

Definitely. It’s actually very important to figure out how to bring it out there in a way that’s safe and responsible, and helps people integrate it into their workflow. It’s going to change entire industries; people have compared it to electricity, or the printing press. And so it’s very important to start actually integrating it in every layer of society and think about things like copyright laws, privacy, governance and regulation. We have to make sure that people really experience for themselves what this technology is capable of, versus reading about it in some press release, especially as the technological progress continues to be so rapid. It’s futile to resist it. I think it’s important to embrace it and figure out how it’s going to go well.

Are you convinced that that’s the optimal way to move us towards AGI?

I haven’t come up with a better way than iterative deployment to figure out how you get this continuous adaptation and feedback from the real world feeding back into the technology to make it more robust to these use cases. It’s very important to do this now, while the stakes are still low. As we get closer to AGI, it’s probably going to evolve again, and our deployment strategy will change along with it.


Steven Levy
