The excitement around the London arrival of OpenAI CEO Sam Altman was palpable from the queue that snaked its way around the University College London building ahead of his speech on Wednesday afternoon. Hundreds of eager-faced students and admirers of OpenAI’s chatbot ChatGPT had come to watch the UK leg of Altman’s world tour, which he expects will take him to around 17 cities. This week he has already visited Paris and Warsaw; last week he was in Lagos. Next, he’s on to Munich.
But the queue was soundtracked by a small group of people who had traveled to loudly express their anxiety that AI is advancing too fast. “Sam Altman is willing to bet humanity on the hope of some sort of transhumanist utopia,” one protester shouted into a megaphone. Ben, another protester, who declined to share his surname in case it affects his job prospects, was also worried. “We’re particularly concerned about the development of future AI models which might be existentially dangerous for the human race.”
Speaking to a packed auditorium of close to 1,000 people, Altman seemed unfazed. Wearing a sharp blue suit with green patterned socks, he talked in clipped answers, always to the point. And his tone was optimistic as he explained how he thinks AI could reinvigorate the economy. “I’m excited that this technology can bring the missing productivity gains of the last few decades back,” he said. But while he didn’t mention the protests outside, he did admit to concerns over how generative AI could be used to spread disinformation.
“Humans are already good at making disinformation, and maybe the GPT models make it easier. But that’s not the thing I’m afraid of,” he said. “I think one thing that will be different [with AI] is the interactive, personalized, persuasive ability of these systems.”
Although OpenAI plans to build safeguards that make ChatGPT refuse to spread disinformation, and plans to create monitoring systems, he said, those measures will be harder to enforce once the company releases open-source models to the public—as it announced several weeks ago that it intends to do. “The OpenAI techniques of what we can do on our own systems won’t work the same.”
Despite that warning, Altman said it’s important that artificial intelligence not be overregulated while the technology is still emerging. The European Parliament is currently debating legislation called the AI Act, new rules that would shape the way companies can develop such models and might create an AI office to oversee compliance. The UK, however, has decided to spread responsibility for AI between different regulators, including those covering human rights, health and safety, and competition, instead of creating a dedicated oversight body.
“I think it’s important to get the balance right here,” Altman said, alluding to debates now taking place among policymakers around the world about how to build rules for AI that protect societies from potential harm without curbing innovation. “The right answer is probably something between the traditional European-UK approach and the traditional US approach,” Altman said. “I hope we can all get it right together this time.”
He also spoke briefly about OpenAI’s commercial strategy of selling access to its API, a type of software interface, to other businesses. The company wants to offer intelligence as a service, he said. “What we’d like is that a lot of people integrate our API. And then as we make the underlying model better, it lifts the whole world of products and services up. It’s a very simple strategy.” Listening to what people want from that API has been a big part of his world trip, he said.
Altman also talked about his vision for AI-assisted humans, where people are enhanced and not replaced by technology. “I think there will be way more jobs on the other side of this technological revolution,” he said. “I’m not a believer that this is the end of work at all.”
He added: “I think we now see a path where we build these tools that get more and more powerful. And there will be trillions of copies being used in the world, helping individual people be more effective, capable of doing way more.”
Before the trip, Altman said on Twitter that the purpose of his world tour was to meet with OpenAI users and people interested in AI in general. But in London, it looked like the company was also trying to cement its leader’s reputation as the person who would usher the world into the AI age. Audience members asked him about his vision for AI, but also about the best way to educate their children and even how to build life on Mars. In an onstage discussion with UCL professors, one panelist said she was there to represent humanity. Altman uncharacteristically jumped in to stress that his company was not working against it. “I represent humanity too,” he said.