Speaker 1: No longer fantasies of science fiction, they are real and present: the promises of curing cancer, of developing new understandings of physics and biology, or of modeling climate and weather. All very encouraging and hopeful. But we also know the potential harms, and we've seen them already: weaponized disinformation, housing discrimination, harassment of women, impersonation fraud, [00:00:30] voice cloning, deep fakes. These are the potential harms, despite the other rewards. And for me, perhaps the biggest nightmare is the looming new industrial revolution: the displacement of millions of workers, the loss of huge numbers of jobs, the need to prepare for this new industrial revolution in the skill training [00:01:00] and relocation that may be required. And already, industry leaders are calling attention to those challenges.
Speaker 1: To quote ChatGPT, this is not necessarily the future that we want. We need to maximize the good over the bad. Congress has a choice now. We had the same choice when [00:01:30] we faced social media. We failed to seize that moment. The result is predators on the internet, toxic content exploiting children, creating dangers for them. And Senator Blackburn and I, and others like Senator Durbin on the Judiciary Committee, are trying to deal with it through the Kids Online Safety Act. But Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before [00:02:00] the threats and the risks become real. Sensible safeguards are not in opposition to innovation. Accountability is not a burden, far from it. They are the foundation of how we can move ahead while protecting public trust.
Speaker 2: And I think my question is, what kind of an innovation is it going to be? Is it gonna be like the printing press, which diffused knowledge and power and learning widely across the landscape, that empowered ordinary, everyday [00:03:00] individuals, that led to greater flourishing, that led above all to greater liberty? Or is it gonna be more like the atom bomb: a huge technological breakthrough, but with consequences, severe and terrible, that continue to haunt us to this day?
Speaker 3: Before we released GPT-4, our latest model, we spent over six months conducting extensive evaluations, external red teaming, and dangerous capability testing. We are proud of the progress that we made. [00:03:30] GPT-4 is more likely to respond helpfully and truthfully, and to refuse harmful requests, than any other widely deployed model of similar capability. However, we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models. For example, the US government might consider a combination of licensing and testing requirements for the development and release of AI models above a threshold of capabilities. There are several other areas I [00:04:00] mentioned in my written testimony where I believe that companies like ours can partner with governments, including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes to develop and update safety measures, and examining opportunities for global coordination. And as you mentioned, I think it's important that companies have their own responsibility here, no matter what Congress does.
Speaker 4: To that end, IBM urges Congress to adopt a precision regulation approach to AI. This means [00:04:30] establishing rules to govern the deployment of AI in specific use cases, not regulating the technology itself. Such an approach would involve four things. First, different rules for different risks: the strongest regulation should be applied to use cases with the greatest risks to people and society. Second, clearly defining risks: there must be clear guidance on AI uses, or categories of AI-supported activity, that are inherently [00:05:00] high risk. This common definition is key to enabling a clear understanding of what regulatory requirements will apply in different use cases and contexts. Third, be transparent: AI shouldn't be hidden. Consumers should know when they're interacting with an AI system, and that they have recourse to engage with a real person should they so desire. No person anywhere should be tricked into interacting with an AI system. And finally, showing the impact: for higher-risk use cases, companies should be required to conduct [00:05:30] impact assessments that show how their systems perform against tests for bias and other ways that they could potentially impact the public, and to attest that they've done so.
Speaker 1: You may have had in mind the effect on jobs, which is really my biggest nightmare in the long term. Let me ask you what your biggest nightmare is and whether you share that concern.
Speaker 3: Like with [00:06:00] all technological revolutions, I expect there to be significant impact on jobs, but exactly what that impact looks like is very difficult to predict. If we went back to the other side of a previous technological revolution and talked about the jobs that exist on this side, you know, you can go back and read books about what people said at the time. It's difficult. I believe that there will be far greater jobs on the other side of this, and that the jobs of today will get better. First of all, I think [00:06:30] it's important to understand and think about GPT-4 as a tool, not a creature, which is easy to get confused about. And it's a tool that people have a great deal of control over in how they use it. And second, GPT-4 and other systems like it are good at doing tasks, not jobs. And so you see already people using GPT-4 to do their jobs much more efficiently by helping them with tasks.
Speaker 2: Should we be concerned about models, large language models [00:07:00] that can predict survey opinion and then help organizations and entities fine-tune strategies to elicit behaviors from voters? Should we be worried about this for our elections?
Speaker 3: Yeah. Thank you, Senator Hawley, for the question. It's one of my areas of greatest concern: the more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation. I think that's a broader version of what you're talking about. But given that we're gonna face an election [00:07:30] next year, and these models are getting better, I think this is a significant area of concern. There are a lot of policies that companies can voluntarily adopt, and I'm happy to talk about what we do there. I do think some regulation would be quite wise on this topic. Someone mentioned earlier, and it's something we really agree with: people need to know if they're talking to an AI, if content that they're looking at might be generated or might not. I think making that clear is a great thing to do. I think we also [00:08:00] will need rules and guidelines about what's expected in terms of disclosure from a company providing a model that could have these sorts of abilities that you talk about. So I'm nervous about it.
Speaker 2: Should we be concerned about that for its corporate applications, for the monetary applications, for the manipulation that could come from that, Mr. Altman?
Speaker 3: Yes, we should be concerned about that. To [00:08:30] be clear, OpenAI does not, you know, we don't have an ad-based business model. So we're not trying to build up these profiles of our users, and we're not trying to get them to use it more. Actually, we'd love it if they used it less, because we don't have enough GPUs. But I think other companies are already, and certainly will in the future, use AI models to create, you know, very good ad predictions of what a user will like.
Speaker 5: My view is that we probably need a cabinet-level organization [00:09:00] within the United States in order to address this. And my reasoning for that is that the number of risks is large, and the amount of information to keep up on is so much. I think we need a lot of technical expertise, and I think we need a lot of coordination of these efforts. So there is one model here where we stick to only existing law, try to shape all of what we need to do with it, and each agency does its own thing. But I think that AI is gonna [00:09:30] be such a large part of our future, and is so complicated and moving so fast. And this does not fully solve your problem about a dynamic world, but it's a step in that direction to have an agency whose full-time job is to do this. I personally have suggested, in fact, that we should want to do this in a global way.
Speaker 6: We’ve lived through Napster.
Speaker 3: Yes, but
Speaker 6: We’re, that was something that really cost a lot of artists, a lot of money.
Speaker 3: Oh, I understand. Yeah, [00:10:00] for sure.
Speaker 6: In the digital distribution era.
Speaker 3: I don't know the numbers on Jukebox off the top of my head. As a research release, I can follow up with your office, but Jukebox is not something that gets much attention or usage. It was put out to show that something's possible.
Speaker 6: Well, Senator Durbin just said, and I think it's a fair warning to you all: if we're not involved in this from the get-go, and you all already are a long way down the path on this, but if we don't step in, then [00:10:30] this gets away from you. So are you working with the Copyright Office? Are you considering protections for content generators and creators in generative AI?
Speaker 3: Yes, we are absolutely engaged on that. Again, to reiterate my earlier point, we think that content creators and content owners need to benefit from this technology. Exactly what the economic model is, we're still talking to artists and content owners about what they want. I think there are a lot of ways this can happen, but very clearly, no matter what [00:11:00] the law is, the right thing to do is to make sure people get significant upside benefit from this new technology.
Speaker 7: With an election upon us, with primary elections upon us, we're gonna have all kinds of misinformation, and I just wanna know what you're planning on doing about it. I know we're gonna have to do something soon, not just for the images of the candidates, but also for misinformation about the actual polling places and election rules.
Speaker 3: Thank you, Senator. [00:11:30] We talked about this a little bit earlier. We are quite concerned about the impact this can have on elections. I think this is an area where hopefully the entire industry and the government can work together quickly. There are many approaches, and I'll talk about some of the things we do. But before that, I think it's tempting to use the frame of social media. But this is not social media; this is different, and so the response that we need is different. You know, this is a tool that a user is using to [00:12:00] help generate content more efficiently than before. They can change it, they can test the accuracy of it, and if they don't like it, they can get another version. But it still then spreads through social media or other ways. Like, ChatGPT is a, you know, single-player experience where you're just using this. And so as we think about what to do, that's important to understand. There's a lot that we can do, and do, there. There are [00:12:30] things that the model refuses to generate. We have policies. We also, importantly, have monitoring: at scale, we can detect someone generating a lot of those tweets, even if generating one tweet is okay.
Speaker 8: Do you agree with me that the simplest way and the most effective way is to have an agency that is more nimble and smarter than Congress, which should be easy to create, overlooking what you do?
Speaker 3: Yes, we'd be enthusiastic about that.
Speaker 8: You agree with that, Mr. Marcus?
Speaker 5: Absolutely.
Speaker 8: You agree with that, Ms. Montgomery?
Speaker 4: I would have some nuances. I think we [00:13:00] need to build on what we have in place already today.
Speaker 8: We don't have an agency.
Speaker 4: Regulators...
Speaker 8: Uh, wait a minute. Nope, nope, nope. We don't have an agency that regulates the technology.
Speaker 8: So should we have one?
Speaker 4: But a lot of the issues... I don't think so. A lot of the issues, okay...
Speaker 8: Wait a minute. Wait a minute. So IBM says we don’t need an agency. Uh, interesting. Should we have a license required for these tools?
Speaker 4: So what we believe is that we need to regulate...
Speaker 8: That’s a simple question. Should you get a license [00:13:30] to produce one of these tools?
Speaker 4: I think it comes back to, for some of them, potentially yes. So what I said at the outset is that we need to clearly define risks.
Speaker 8: Do you claim Section 230 applies in this area at all?
Speaker 4: We are not a platform company, and we've, again, long advocated for a reasonable care standard in Section 230.
Speaker 8: I just don't understand how you could say that you don't need an agency to deal with the most transformative technology maybe ever.
Speaker 4: Well, [00:14:00] I think we have existing...
Speaker 8: Is this a transformative technology that can absolutely disrupt life as we know it, good and bad?
Speaker 4: I think it's a transformative technology, certainly. And the conversations that we're having here today have really been bringing to light the fact that, yeah, these are the domains and the issues.
Speaker 8: This one with you has been very enlightening to me. Military application: how can AI change warfare? And you've got one minute.
Speaker 3: [00:14:30]
Speaker 8: I’ll give you one example. A drone can, a drone you pro you can plug into a drone the coordinates and it can fly out and it goes over this target and it drops a missile on this car moving down the road and somebody’s watching it. Could AI create a situation where a drone can select the target itself?
Speaker 3: I think we shouldn’t allow that.
Speaker 8: Well, can it be [00:15:00] done? Sure. Thanks.