From ChatGPT to Gemini: how AI is rewriting the internet

Big players, including Microsoft with Copilot, Google with Gemini, and OpenAI with GPT-4o, are bringing AI chatbot technology once restricted to test labs to the general public.

How do these large language model (LLM) programs work? OpenAI’s GPT-3 told us that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.” 

Or, in the words of James Vincent, a human person: “These AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of ‘facts’ to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth since whether a given sentence sounds plausible does not guarantee its factuality.”
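The "vast autocomplete" idea is easy to see at toy scale. Here's a minimal sketch (not any production model, just an illustration of the statistical principle Vincent describes): a bigram predictor that guesses the next word purely from how often each word followed another in its training text.

```python
from collections import Counter, defaultdict

# Toy training text; real LLMs train on trillions of words and use
# neural networks, but the core idea -- predict the next token from
# statistics of the text seen so far -- is the same.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- it follows "the" most often above
```

Note the model has no notion of whether "the cat sat" is true; it only knows the sequence is statistically plausible, which is exactly why these systems can state falsehoods fluently.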

But there are many more pieces of the AI landscape coming into play (and plenty of name changes: remember when we were talking about Bing and Bard, before those tools were rebranded?), and you can be sure to see it all unfold here on The Verge.


  • Side by side photos of Google CEO Sundar Pichai and OpenAI CEO Sam Altman.

    Google CEO Sundar Pichai and OpenAI CEO Sam Altman.
    Getty Images / The Verge

    “Google will do the Googling for you.” 

    Out of everything said onstage at Google I/O this year, I’ve been thinking the most about that line from Search executive Liz Reid. It summarizes not only how Google is fundamentally changing Search but also how the company is increasingly on a collision course with OpenAI.

    Read Article >

  • Vox Media’s 2023 Code Conference - Day 2

    Mike Krieger has a long history in tech and AI.
    Photo by Jerod Harris / Getty Images for Vox Media

    As Anthropic tries to take on the AI giants, it has a new big-name executive on board: the company announced this morning that Mike Krieger is its new chief product officer. Krieger, of course, was one of the co-founders of Instagram and spent the last few years working on Artifact, an AI news-reading app that was recently acquired by Yahoo.

    Krieger will oversee all of Anthropic’s product efforts going forward. It’s an important moment for the company to push hard on product, too: it recently released the Claude app for iOS, long after a bunch of its competitors were available on mobile, and just announced support for use in Spanish, French, Italian, and German. Anthropic, which was founded by ex-OpenAI employees, has been seemingly primarily focused on building out its core technology for the last few years but seems to understand that it needs to turn all that tech into products — and there’s no time to waste.

    Read Article >

  • Vector collage showing different aspects of using AI tools.

    Image: The Verge

    Google I/O introduced an AI assistant that can see and hear the world, while OpenAI put its version of a Her-like chatbot into an iPhone. Next week, Microsoft will be hosting Build, where it’s sure to have some version of Copilot or Cortana that understands pivot tables. Then, a few weeks after that, Apple will host its own developer conference, and if the buzz is anything to go by, it’ll be talking about artificial intelligence, too. (Unclear if Siri will be mentioned.)

AI is here! It’s no longer conceptual. It’s taking jobs, making a few new ones, and helping millions of students avoid doing their homework. According to most of the major tech companies investing in AI, we appear to be at the start of one of those rare, monumental shifts in technology. Think the Industrial Revolution or the creation of the internet or the personal computer. All of Silicon Valley — of Big Tech — is focused on taking large language models and other forms of artificial intelligence and moving them from the laptops of researchers into the phones and computers of average people. Ideally, they will make a lot of money in the process.

    Read Article >

  • CTO of OpenAI Mira Murati gives a talk onstage.

    Screenshot: OpenAI

    OpenAI is launching GPT-4o, an iteration of the GPT-4 model that powers its hallmark product, ChatGPT. The updated model “is much faster” and improves “capabilities across text, vision, and audio,” OpenAI CTO Mira Murati said in a livestream announcement on Monday. It’ll be free for all users, and paid users will continue to “have up to five times the capacity limits” of free users, Murati added.

    In a blog post from the company, OpenAI says GPT-4o’s capabilities “will be rolled out iteratively,” but its text and image capabilities will start to roll out today in ChatGPT.

    Read Article >

  • Apple and OpenAI are apparently close to an iOS chatbot deal.

    With less than a month to go before Apple details its AI plans at WWDC 2024, the company is “finalizing terms” to let ChatGPT use iOS 18 features, according to Bloomberg.

    That would be Apple’s first such deal if it closes before the company sets up similar agreements with Google or Anthropic.

  • The true promise of AI: Siri that doesn’t suck.

    Apple’s big focus on AI next month is not a secret. But rather than building a bot with a personality, it seems more interested in just Siri doing Siri things. Only, you know, better:

    Apple has focused on making Siri better at handling tasks that it already does, including setting timers, creating calendar appointments and adding items to a grocery list. It also would be able to summarize text messages.

  • OpenAI keeps vaguely teasing GPT-5.

    COO Brad Lightcap is speaking at Bloomberg’s Tech conference and was just asked when the next model is arriving. His answer hints that ChatGPT will evolve to act like an agent on your behalf or, at the very least, take on more of a persona.

    “Will there be such a thing as a prompt engineer in 2026?” he says. “You don’t prompt engineer your friend.”

  • Leaked OpenAI slide deck reveals how it’s wooing publishers.

    According to Adweek, OpenAI’s incentives for publishers include financial compensation as well as:

    …priority placement and “richer brand expression” in chat conversations, and their content benefits from more prominent link treatments.

    In exchange, OpenAI gets training data and a license to display info with attribution and links. OpenAI has struck deals with publishers like Axel Springer, The Financial Times, and most recently, People magazine publisher Dotdash Meredith. A comment from OpenAI said Adweek’s report “contains a number of mischaracterizations and outdated information.”

  • Vector art of the TikTok logo.

    Image: The Verge

TikTok already automatically applies an “AI-generated” label to content made with its own AI tools, and that label will now extend to content created on other platforms. TikTok will detect when uploaded images or videos contain metadata tags indicating AI-generated content, and it says it’s the first social media platform to support the new Content Credentials.

    Support for the Adobe-developed tagging system (which has been added to tools like Photoshop and Firefly) comes as TikTok partners with Adobe’s Content Authenticity Initiative (CAI) as well as the Coalition for Content Provenance and Authenticity (C2PA).
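The detection TikTok describes boils down to inspecting an asset's metadata for a Content Credentials (C2PA) manifest. Here's a hypothetical sketch of that decision; the dictionary shape and field names are illustrative assumptions, not TikTok's pipeline or the exact C2PA schema (though `trainedAlgorithmicMedia` is the IPTC digital-source-type term C2PA uses for generative AI output).

```python
def should_label_ai(metadata: dict) -> bool:
    """Decide whether to apply an 'AI-generated' label, given parsed
    asset metadata. Assumes a hypothetical pre-parsed dict layout."""
    manifest = metadata.get("c2pa_manifest")
    if not manifest:
        # No Content Credentials attached: nothing to act on.
        return False
    # A manifest's assertions can record that a generative-AI tool
    # produced the asset.
    return any(a.get("digitalSourceType") == "trainedAlgorithmicMedia"
               for a in manifest.get("assertions", []))

ai_image = {"c2pa_manifest": {
    "assertions": [{"digitalSourceType": "trainedAlgorithmicMedia"}]}}
print(should_label_ai(ai_image))  # True
print(should_label_ai({}))        # False -- no manifest, no label
```

Note the obvious limitation: the check only fires if the creating tool embedded the metadata in the first place, and metadata can be stripped.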

    Read Article >

  • A rendition of OpenAI’s logo, which looks like a stylized whirlpool.

    Illustration: The Verge

AI tools behaving badly — like Microsoft’s Bing AI losing track of which year it is — has become a subgenre of AI reporting. But it’s often hard to tell the difference between a bug and a flaw in how the underlying model was built to analyze incoming data and predict an acceptable response, as when Google’s Gemini image generator drew diverse Nazis because of a filter setting.

    Now, OpenAI is releasing the first draft of a proposed framework, called Model Spec, that would shape how AI tools like its own GPT-4 model respond in the future. The OpenAI approach proposes three general principles — that AI models should assist the developer and end-user with helpful responses that follow instructions, benefit humanity with consideration of potential benefits and harms, and reflect well on OpenAI with respect to social norms and laws.

    Read Article >

  • Photo illustration of the shape of a brain on a circuit board.

    Illustration: Cath Virginia / The Verge | Photos: Getty Images

Stack Overflow’s new deal giving OpenAI API access to its data has rankled users who posted their coding questions and answers in conversations with other humans. Users say that when they attempt to alter their posts in protest, the site is retaliating by reversing the alterations and suspending the users who made them.

    A programmer named Ben posted a screenshot yesterday of the change history for a post seeking programming advice, which they’d updated to say that they had removed the question to protest the OpenAI deal. “The move steals the labour of everyone who contributed to Stack Overflow with no way to opt-out,” read the updated post.

    Read Article >

  • OpenAI is entering the search game.

    OpenAI is developing a search engine for ChatGPT, giving users the ability to crawl the web for answers to their questions, Bloomberg reports. Sources also tell The Verge that OpenAI has been aggressively trying to poach Google employees for a team that is working hard to ship the product soon.

  • We’re desi, so I guess we wear turbans?

Meta AI is generating turbans in an overwhelming number of its images of Indian men, TechCrunch has found. It’s a stereotype Hollywood has long applied to South Asians and even Arabs, leading some people in the Western world to assume the background and religion of anyone who wears a turban.

    three AI images of Indian men wearing turbans

    Meta AI: If Indian man, then turban.
    Image: TechCrunch
  • Randy Travis singing at Cheyenne Frontier Days

    Randy Travis in 1987.
    Photo: Mark Junge / Getty Images

    For the first time since a 2013 stroke left country singer Randy Travis unable to speak or sing properly, he has released a new song. He didn’t sing it, though; instead, the vocals were created with AI software and a surrogate singer.

    The song, called “Where That Came From,” is every bit the kind of folksy, sentimental tune I came to love as a kid when Travis was at the height of his fame. The producers created it by training an unnamed AI model, starting with 42 of his vocal-isolated recordings. Then, under the supervision of Travis and his career-long producer Kyle Lehning, fellow country singer James DuPre laid down the vocals to be transformed into Travis’ by AI.

    Read Article >

  • YouTube tests out using AI to skip to the good part.

Some YouTube Premium subscribers can now jump to the most-watched part of a video in the YouTube app by double-tapping the right side of the screen (which normally skips ahead 10 seconds) and then tapping the “Jump ahead” button that appears, according to 9to5Google.

    To see if you have the feature and enable it, go to Settings > Try experimental new features.

  • “You’re holding a taco!”

    If you’ve already read our review of the Rabbit R1 but haven’t gotten around to watching the video version of it, what better time than now?

  • Vector illustration of the Microsoft Copilot logo.

    Microsoft’s latest Windows Insider blog posts say that when it comes to testing new Copilot features in Windows 11, “We have decided to pause the rollouts of these experiences to further refine them based on user feedback.” For people who already have the feature, “Copilot in Windows will continue to work as expected while we continue to evolve new ideas with Windows Insiders.”

Microsoft is holding an AI event on May 20th, which would be a good time to show more of what’s next. After setting up 2024 as “the year of the AI PC” with a new Copilot key on Windows keyboards, the company has a lot to live up to.

    Read Article >

  • OpenAI makes ChatGPT’s chat history feature available to everyone — no strings attached.

    OpenAI says free and Plus subscribers can now use the feature without giving over their chats to train its models.

    With chat history on, users can pick up previous chats where they left off, and the chatbot will reply as though they never stopped. The company also says users can start one-off chats that aren’t saved in the history.

  • A rendition of OpenAI’s logo, which looks like a stylized whirlpool.

    Illustration: The Verge

OpenAI announced the Memory feature, which allows ChatGPT to store queries, prompts, and other customizations more permanently, in February. At the time, it was only available to a “small portion” of users, but now it’s available to paying ChatGPT Plus subscribers outside of Europe and Korea.

ChatGPT’s Memory works in two ways to make the chatbot’s responses more personalized. The first is letting you tell ChatGPT to remember certain details; the second is learning from your conversations, much as recommendation algorithms in other apps do. Memory brings ChatGPT closer to being a true AI assistant: once it remembers your preferences, it can apply them without needing a reminder.

    Read Article >

  • A 2022 Apple iPad Pro in a Magic Keyboard case on a wooden desk.

    Image: Dan Seifert / The Verge

Apple is preparing for its big AI coming-out party at this year’s Worldwide Developers Conference; that much you can count on. But apparently, the company is going to start the party a little early with the OLED iPad Pro it’s expected to unveil on May 7th. According to Bloomberg’s Mark Gurman, there’s “a strong possibility” the tablet will launch with an M4 chip and its accompanying neural engine, making it Apple’s “first truly AI-powered device.”

    Writing in his Power On newsletter today, Gurman said the company could use its May event to explain “its AI chip strategy without distraction,” freeing it to focus on exactly how the iPad Pro and its other M4 devices will use the company’s AI offerings in iPadOS 18. Those could include on-device Apple-developed features and deeply-integrated chatbots from one or more other companies like Google or OpenAI.

    Read Article >

  • What will Instagram’s chatbot creator look like?

    At the moment, it seems Meta’s “AI studio” will let people make private and public bots, tuned for duties like personal shopping, trip-planning, meme generation, and helping users “never miss a romantic connection.” (I assume that last one is designed to trawl Craigslist Missed Connections for you.)

    Alessandro Paluzzi posted these screenshots in a thread where he’s been tracking the feature since January.

    Two screenshots from the forthcoming AI creator for Instagram.

    Meta’s chatbot maker can apparently help you “nerd out on your passions.”
    Image: Alessandro Paluzzi
  • An image of the Meta logo.

    Illustration by Alex Castro / The Verge

    The generative AI gold rush is underway — just don’t expect it to create profits anytime soon.

    That was the message from Meta CEO Mark Zuckerberg to investors during Wednesday’s call for the company’s first-quarter earnings report. Having just put its ChatGPT competitor in a bunch of places across Instagram, Facebook, and WhatsApp, much of the call focused on exactly how generative AI will become a money-making endeavor for Meta.

    Read Article >

  • Illustration of the Microsoft wordmark on a green background

    Illustration: The Verge

    Microsoft launched the next version of its lightweight AI model Phi-3 Mini, the first of three small models the company plans to release. 

Phi-3 Mini measures 3.8 billion parameters and is trained on a data set that is small relative to those of large language models like GPT-4. It is now available on Azure, Hugging Face, and Ollama. Microsoft also plans to release Phi-3 Small (7B parameters) and Phi-3 Medium (14B parameters). Parameters are the numerical weights a model learns during training; in general, more parameters mean a more capable, but also more resource-hungry, model.
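Parameter counts translate directly into hardware requirements, which is the point of "small" models. A back-of-the-envelope sketch for the three Phi-3 sizes, assuming 16-bit (2-byte) weights, a common inference precision but not a figure Microsoft has published:

```python
def fp16_gb(params_billions: float) -> float:
    """Approximate weight-storage footprint in GB at 2 bytes/parameter."""
    bytes_total = params_billions * 1e9 * 2  # 2 bytes per fp16 weight
    return bytes_total / 1e9                 # convert bytes to gigabytes

for name, size in [("Phi-3 Mini", 3.8), ("Phi-3 Small", 7), ("Phi-3 Medium", 14)]:
    print(f"{name}: ~{fp16_gb(size):.1f} GB")
# Phi-3 Mini: ~7.6 GB, Phi-3 Small: ~14.0 GB, Phi-3 Medium: ~28.0 GB
```

At roughly 7.6 GB, Phi-3 Mini's weights can fit on a single consumer GPU or a laptop (and smaller still with quantization), whereas models in the GPT-4 class need server hardware.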

    Read Article >

  • No one’s going to misuse this, right?

Microsoft’s new AI model, VASA-1, transforms a single still image and an audio clip into an animated video, which is impressive, if a little creepy.

    The benefits – such as enhancing educational equity, improving accessibility for individuals with communication challenges, offering companionship or therapeutic support to those in need, among many others – underscore the importance of our research and other related explorations.

    Microsoft says it won’t release a demo, API, or product with VASA-1 “until we are certain that the technology will be used responsibly.”

Alex Heath
