As the artificial intelligence frenzy builds, a sudden consensus has formed. We should regulate it!
While there’s a very real question whether this is like closing the barn door after the robotic horses have fled, not only government types but also people who build AI systems are suggesting that some new laws might be helpful in stopping the technology from going bad. The idea is to keep the algorithms in the loyal-partner-to-humanity lane, with no access to the I-am-your-overlord lane.
Though many in the technology world have suggested legal guardrails since the dawn of ChatGPT, the most emphatic plea came from AI’s most influential avatar of the moment, OpenAI CEO Sam Altman. “I think if this technology goes wrong, it can go quite wrong,” he said in a much anticipated appearance before a US Senate Judiciary subcommittee earlier this month. “We want to work with the government to prevent that from happening.”
That is certainly welcome news to the government, which has been pressing the idea for a while. Only days before his testimony, Altman was among a group of tech leaders summoned to the White House to hear Vice President Kamala Harris warn of AI’s dangers and urge the industry to help find solutions.
Choosing and implementing those solutions won’t be easy. It’s a giant challenge to strike the right balance between industry innovation and protecting citizens and their rights. Clamping limits on such a nascent technology, even one whose baby steps are shaking the earth, courts the danger of hobbling great advances before they’re developed. Plus, even if the US, Europe, and India embrace those limits, will China respect them?
The White House has been unusually active in trying to outline what AI regulation might look like. In October 2022—just a month before the seismic release of ChatGPT—the administration issued a paper called the Blueprint for an AI Bill of Rights. It was the result of a year of preparation, public comments, and all the wisdom that technocrats could muster. In case readers mistake the word blueprint for mandate, the paper is explicit on its limits: “The Blueprint for an AI Bill of Rights is non-binding,” it reads, “and does not constitute US government policy.” This AI bill of rights is neither as controversial nor as binding as the one in the US Constitution, with all that thorny stuff about guns, free speech, and due process. Instead it’s kind of a fantasy wish list designed to blunt one edge of the double-sided sword of progress. So easy to do when you don’t provide the details! Since the Blueprint nicely summarizes the goals of possible legislation, let me present the key points here.
- You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
- You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.
- You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
- You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
I agree with every single one of those points, which can potentially guide us on the actual boundaries we might consider to mitigate the dark side of AI. Things like sharing what goes into training large language models like those behind ChatGPT, and allowing opt-outs for those who don’t want their content to be part of what LLMs present to users. Rules against built-in bias. Antitrust laws that prevent a few giant companies from creating an artificial intelligence cabal that homogenizes (and monetizes) pretty much all the information we receive. And protection of your personal information as used by those know-it-all AI products.
But reading that list also highlights the difficulty of turning uplifting suggestions into actual binding law. When you look closely at the points from the White House blueprint, it’s clear that they don’t just apply to AI, but pretty much everything in tech. Each one seems to embody a user right that has been violated since forever. Big tech wasn’t waiting around for generative AI to develop inequitable algorithms, opaque systems, abusive data practices, and a lack of opt-outs. That’s table stakes, buddy, and the fact that these problems are being brought up in a discussion of a new technology only highlights the failure to protect citizens against the ill effects of our current technology.
During that Senate hearing where Altman spoke, senator after senator sang the same refrain: We blew it when it came to regulating social media, so let’s not mess up with AI. But there’s no statute of limitations on making laws to curb previous abuses. The last time I looked, billions of people, including just about everyone in the US who has the wherewithal to poke a smartphone display, are still on social media, being bullied, having their privacy compromised, and being exposed to horrors. Nothing prevents Congress from getting tougher on those companies and, above all, passing privacy legislation.
The fact that Congress hasn’t done this casts severe doubt on the prospects for an AI bill. No wonder that at least one regulator, FTC chair Lina Khan, isn’t waiting around for new laws. She’s claiming that current law provides her agency plenty of jurisdiction to take on the issues of bias, anticompetitive behavior, and invasion of privacy that new AI products present.
Meanwhile, the difficulty of actually coming up with new laws—and the enormity of the work that remains to be done—was highlighted this week when the White House issued an update on that AI Bill of Rights. It explained that the Biden administration is breaking a big-time sweat on coming up with a national AI strategy. But apparently the “national priorities” in that strategy are still not nailed down.
Now the White House wants tech companies and other AI stakeholders—along with the general public—to submit answers to 29 questions about the benefits and risks of AI. Just as the Senate subcommittee asked Altman and his fellow panelists to suggest a path forward, the administration is asking corporations and the public for ideas. In its request for information, the White House promises to “consider each comment, whether it contains a personal narrative, experiences with AI systems, or technical legal, research, policy, or scientific materials, or other content.” (I breathed a sigh of relief to see that comments from large language models are not being solicited, though I’m willing to bet that GPT-4 will be a big contributor despite this omission.)
Anyway, humans, you have until 5:00 pm ET on July 7, 2023, to submit your documents, lab reports, and personal narratives to shape an AI policy that’s still just a blueprint, even as millions play with Bard, Sydney, and ChatGPT, and employers make plans for slimmer workforces. Maybe then we’ll get down to enshrining those lovely principles in law. Hey, it worked with social media! Uhhhhhh …
Sam Altman’s appearance in Congress was a lot different from the visits Mark Zuckerberg has endured. The CEO of Facebook, as it used to be called, did not get solicited for advice on how to solve problems. Instead, he got roasted, as I described in my account of a 2019 hearing in the House of Representatives.
You are Mark Zuckerberg. It is 1:45 pm in Room 2128 of the Rayburn Office Building. You have been testifying for almost four hours, enduring the questions of the House Financial Services Committee, five minutes per representative, some of them very angry at you. You have to pee.
Chair Maxine Waters (D-California) listens to your request for a break and consults with a staffer. There is a floor vote coming up and she wants one more member to ask you questions. So before your break, she instructs, you will take questions from Representative Katie Porter (D-California). Porter begins by asking you about a contention that Facebook’s lawyers made in court earlier this year that Facebook users have no expectation of privacy. You might have heard this—it got press coverage at the time—but you say you can’t comment without the whole context. You’re not a lawyer!
She turns to the plight of the thousands of content moderators Facebook employed as contractors who look at disturbing images all day for low wages. You explain that they get more than minimum wage to police your service, at least $15 an hour and, in high-cost regions, $20 an hour. Porter isn’t impressed. She asks if you would vow to spend one hour a day for the next year doing that work. This is something you clearly don’t want to commit to. You squirm—is it nature’s call or the questioning?—and sputter that it isn’t the best use of your time. She triumphantly takes that as a no. Waters grants the recess and you run a gauntlet of photographers for some relief.
Philip asks, “I was wondering if AI could write my biography using data collected on me, since that process kicked into overdrive post 9/11?”
Thanks for asking, Philip, though I suspect you are more interested in deploring the government collection of personal data than dreaming of an algorithmic Boswell. But you bring up an interesting question: Can an AI model craft a biography based simply on raw data about your life? I seriously doubt that even an extensive dossier kept on you would provide the fodder needed for the driest account of your life. All the hotels you checked into, the bank loans and mortgage payments, those dumb tweets you’ve made over the years … will they really allow GPT-4 to get a sense of who you are? Probably the AI model’s most interesting material will be its hallucinations.
If you’re a public figure, though, who has strewn personal writings far and wide and been featured in numerous interviews, maybe some generative AI model could whip up something of value. If you were the one directing the project, you’d have the advantage of looking it over and prompting the chatbot to “be nicer”—or of cutting to the chase and saying, “Make this a hagiography.” But don’t wait around for the Pulitzer Prize. Even if you choose to let the AI biographer be as incisive and critical as it wants to be, we’re far, far away from a biological LLM like Robert Caro.
You can submit questions to email@example.com. Write ASK LEVY in the subject line.
Here’s a challenge for regulations: getting rid of AI-powered “digital colonialism.”
Not illegal but should be: AI-generated podcasts. They’re boring!
Who will bid $1 trillion for all the world’s seagrass? It’s a bargain!
New York isn’t the only city that’s sinking. But it’s sinking! Welcome to the Great Wet Way.