Anne Neuberger, a Top White House Cyber Official, Sees the ‘Promise and Peril’ in AI

When Anne Neuberger stepped into the newly created role of deputy national security adviser for cyber and emerging technology on the White House’s National Security Council at the start of the Biden administration, she was already one of the government’s most experienced cyber veterans.

Neuberger spent a decade at the National Security Agency, serving as its first chief risk officer, then as assistant deputy director of operations, and later leading the newly created cybersecurity directorate. Just weeks after she started at the White House, the May 2021 Colonial Pipeline ransomware incident forever realigned the US government’s focus on online criminal actors. In the nearly three years since, her office at the National Security Council has helped drive both the Biden administration’s major executive order on cybersecurity and its recent executive order on artificial intelligence.

Ahead of her trip last week to the Munich Security Conference, Neuberger spoke with WIRED about the emerging technology issues that are top of mind in her office today, from the broadband needs of John Deere tractors, to how Hamas’ attack in Israel revealed the national security threat posed by traffic cameras, to security concerns about software patches for autonomous vehicles, advances in AI-driven threats, the push for quantum-resistant cryptography, and next steps in the fight against ransomware attacks. This interview has been lightly edited for length and clarity.

WIRED: People in your type of work often get asked, “What keeps you awake at night?” I want to ask the opposite: Tell me one thing that is making you sleep better these days? What’s a problem or threat that you previously were more worried about that today you’re less worried about? What’s getting better?

Anne Neuberger: Let me give you three because thankfully there’s more than one.

First, we’ve long been concerned about disruptive cyberattacks against critical infrastructure. Historically, we’ve relied on voluntary information-sharing partnerships between government and companies because pipelines, power plants, and banks are private-sector-owned and operated. But these partnerships have not achieved outcomes—cybersecurity improvements—on the scale and speed we need. The Colonial Pipeline cyberattack, which resulted in the US’s major East Coast pipeline being offline for six days, showed how much change was needed. With the president’s support, we took a new approach: interpreting existing safety rules to apply to cybersecurity, which gives regulators the ability to act. Today, that is driving major improvements to cybersecurity across the nation’s pipelines, airports, airlines, railroads, and energy systems.

Two years ago, we started with pipelines, inviting CEOs from across the country to sensitive intelligence briefings at the White House, followed by a call to action. Fast forward to today: I have a chart, provided by DHS and TSA [the US Department of Homeland Security and the Transportation Security Administration], that tracks 97 critical pipelines in the country and where each of them is in completing cybersecurity assessments and making TSA-mandated improvements. We see that this model—and the close-up work it drives with companies—achieves on-the-ground change. We’re working with regulators and companies to scale that for every critical infrastructure sector.

That process is at various stages of maturity for different sectors. It builds upon what DHS/CISA [the US Cybersecurity and Infrastructure Security Agency] is doing in communicating about vulnerabilities and cybersecurity mitigations, by involving regulators so they can ensure we’re getting on-the-ground impacts. The cybersecurity requirements are adapted for a particular sector—water systems are different from railway signaling systems—but some of the basic practices are the same. You shouldn’t have an operational-scale system connected to the internet with a default password.

It’s a shift in our entire approach, and we got it done using existing regulators. We’re not treating cyber as a stand-alone thing; we’re leveraging systems expertise and matching it with enforcement action. It’s a work in progress. I want it to go faster and better. There’s education we need to do—particularly on the Hill, so that overseers understand you also have to fund the enforcement part in order to actually achieve the outcomes you want.

So that’s one thing where the shift has been good and the progress marked. The second area that’s getting better is “following the money” to tackle money laundering via cryptocurrencies, which fuels cyberattacks by countries, notably the DPRK [North Korea], and criminals, notably ransomware. Post-9/11, the intelligence community and Treasury Department realized there was an entire movement of funds in the hawala system that was bypassing global anti-money-laundering controls. Over time, we built controls in, to the extent we could. We are doing the same for cryptocurrency, working with countries and financial institutions around the world.

That’s included establishing a cryptocurrency targeting cell at the FBI with representatives from across the US government, equipping them with the tools and analytic capabilities they need; it’s included building close relationships with VASPs [virtual asset service providers], blockchain analytic firms, and others, as well as new international partnerships, notably with South Korea and Japan. It’s personal for them, given that the DPRK has stolen billions of dollars in cryptocurrency via hacking and used that to fuel their missile program. The DPRK launched more missiles, and more sophisticated ones, in 2023 than in any year prior, and crypto hacks are financing that. It’s still a work in progress, but everyone is energized, with new rolodexes, tools, and urgency. When recent hacks happened, there were FBI agents working overnight getting tips, the intelligence community had the relationships to pursue them, and we knew if we needed help from a foreign government, we could pick up the phone. In terms of KYC and AML [know-your-customer and anti-money-laundering] implementation for cryptocurrency, Treasury offers training, helping countries put that in place as well.

So that’s the second one. The final one I would point to in the international space is the Counter Ransomware Initiative, which has just totally changed international cooperation on cybersecurity: Fifty-six countries working together since the White House launched it in October 2021. We’ve just launched a platform where they can collaborate on sharing TTPs [tactics, techniques, and procedures] and request support during a ransomware attack. The CRI issued its first-ever policy statement announcing that governments won’t pay ransoms. It’s an operational and policy partnership in a way that we’ve never experienced in international cyber cooperation. We focus it very intentionally on cybercrime, because if we say, “We’re countering Chinese activity,” “We’re countering Iranian activity,” “We’re countering Russian activity,” some countries balk. Instead, we focus in on cybercrime. Everybody knows quietly where many of those criminals are harbored. We’ve built the operational policy partnerships and coordinated capacity-building that we’ve desperately needed in the international space.

You had the October 31 announcement of the 40 or so countries that said, “We’re not going to pay ransom.” Are you seeing any effect from that in terms of attacks or threat operators?

It’s always hard to tie X-leads-to-Y. It’s been inspiring to watch my team working country by country to get sign-off, because it showed us the gaps between cyber-rich and cyber-poor countries, and how helpful we can be to other countries in fighting cybercrime that’s impacting companies and people around the world. We did multiple rounds of blockchain analysis classes. When the countries came in for the two-day Counter Ransomware Initiative meeting, on the third day, we did a voluntary half-day workshop on blockchain analysis and then a half-day workshop on digital forensics. The countries you’d expect stayed—pretty much everybody from Latin America, Africa, big parts of Asia, and even some European countries. It’s been deeply fulfilling to see the impact we can have hands-on like that.

Between a recent warning about Iran targeting US water systems and the director of national intelligence’s worldwide Annual Threat Assessment specifically mentioning China targeting US oil and gas infrastructure, the administration seems to have made a choice to be more forthright about critical infrastructure penetrations. What’s the strategy behind the decision to make those comments more publicly than they’ve been made in the past?

Our strategy has two goals. One, those statements are always accompanied by practical cybersecurity advice: here are the things you need to do to be safe. The other is to combine that with a call to action for critical infrastructure companies and cybersecurity firms. Given the scale of critical services in the US, that’s a good way to get the message out. I want to note the deep partnership between CISA, FBI, NSA, and several foreign intelligence services that has produced these products.

These announcements help us build our international partnerships on cyber as well, because China targets a number of our key allies and partners in the Indo-Pacific region. As part of getting that information out, we also do private briefings through our very close bilateral partnerships with Japan, Taiwan, South Korea. This advice works for them as well, and we share that in that way.

In parallel, as part of a broader approach, we’ve communicated to the Chinese—the president’s communicated, [national security advisor] Jake [Sullivan] has communicated, Secretary [Antony] Blinken has communicated—that we would view any disruptive cyber activity against critical infrastructure as escalatory. Contrary to what might be expected—that we would refrain from engaging in areas that we believe are national priorities—we would instead treat that activity as escalatory and respond accordingly. That’s a part of the message as well—to say, “We know about this. Yes, we’re engaging our defensive community to say this is a call to action to lock your digital doors, but also as part of our conveying to China, we take this seriously.”

US forces have engaged with Iran-backed Houthi rebels in the Middle East following attacks on shipping in the Red Sea, the drone attack in Jordan that killed three US personnel, and the retaliatory US airstrikes against Islamic Revolutionary Guard Corps targets. Are you seeing new activity from Iran that you are concerned about?

The Iranian targeting of water systems stopped after we went public and said, “This is the Iranian government, not ‘hacktivists’” and also issued the guidelines to say, “Folks, you’ve got to make it harder. Seriously? Default passwords of 1111 on systems protecting Americans’ water?” We have not seen further Iranian cyber activity against the US. We’ve certainly seen it against other countries in the region, but not against the US. But we know that can change at any moment.

In the wake of the October 7 Hamas attack in Israel and the unfolding conflict in the Middle East, how has any of that activity in the kinetic physical space changed your thinking around cyber?

Both Russia’s invasion of Ukraine and Hamas’ attacks against Israel, as well as the Iranian cyber activity, have shown that cyber is a part of every conflict today. That includes creative forms of espionage: Russian targeting of Ukrainian street cameras as part of their drone targeting, and Hezbollah targeting road cameras to monitor logistics movements. There are so many connected devices in our homes, streets, and cities that it underscores how far we’ve come from the cybersecurity we used to think about. We didn’t necessarily think of road traffic cameras as national security infrastructure. The reality is, if you have traffic cameras near your borders—or near your defense logistics points—they are a collection opportunity. Securing them needs to be a national security priority.

I would note that’s where the Cyber Trust Mark program comes in, as we try to secure the growing number of internet-of-things devices in a manageable way. That program is really one of our most important efforts, especially as we roll out agreements with other countries, as we just did with the EU, to make it as large a market as possible.

One more thing we’ve seen is the trade-off between espionage and effects—where intelligence services that we expected to launch intensive offensive cyber activity didn’t. As we looked at it, we said, “Well, the espionage off those same platforms may have been so valuable that it wasn’t worth bringing them down.” It was more important to collect on them.

Let’s switch gears and talk about the bigger picture and some emerging challenges. Your office had a major role in shaping the Biden administration’s executive order on AI. What should people be paying attention to there?

So first, big picture: AI is a classic case of where we see promise and peril at the same time. Our job is to figure out how to really leverage AI for societal good, for national security purposes, while also really addressing the risks from the start so that Americans feel safe with what we’re doing.

Let me talk about three examples. If you look at the education space, universities are super worried about students using ChatGPT to write their essays and not learning. On the other hand, you think about classrooms with 30 kids, each learning differently—I look at my own two kids, and each has a different learning style—so the opportunity for teachers to use AI to customize learning based on learning styles is so tantalizing. We have to figure out how to do both—so that we can use it and also protect kids in that space.

In the area of voice cloning—another good example on the promise side—my husband and I are involved in a charity that does “voice banking” for individuals afflicted with ALS so that when they lose their ability to speak and they’re just communicating via blinking or banging their head, their families can have those actions translated into voice. It gives those families the opportunity to feel like this is a living person, that their family member is still there.

On the other hand, you saw the President Biden voice-fake in New Hampshire. We saw the Taylor Swift AI fakes flood social media—that’s a different kind of deepfake. We actually have another effort that we’re looking at about nonconsensual intimate imagery and potentially asking for some voluntary commitments, but I’ll leave that aside for right now.

We hosted a voice-cloning gathering here last week, bringing in key telecom companies and key academic researchers in the space—like Hany Farid from Berkeley—as well as the FCC and the FTC. The FCC has been engaged on this. The FTC launched a challenge—they got 85 responses to talk about what might be possible in the space. There were some really cool ideas that came up, building on the work the telecom industry has done to target spam calls.

And then third, in the area of cybersecurity, there’s the potential to use AI models to help individuals write more secure code. I think we’ve seen that something like 46 percent of the code on GitHub is now AI-generated. Now, if that’s been trained on existing code, that’s a mixed bag. If it’s trained particularly on code that we know is more secure, as well as on some malware so it can identify that, I think that could potentially be a jump change in generating more secure code that’s underpinning our economy. On the flip side, you have models that are trained to find vulnerabilities. There are pros and cons to them.

We’re already seeing malign actors use AI to more rapidly build malware. That’s why things like our Darpa AI Cyber Challenge to jump-start AI-driven defenses, or the CASTLE effort at Darpa to jump-start AI autonomous agents, are so needed, to try to stay a step ahead.

What do you think is the most important part of the AI executive order that people are not paying attention to, in terms of your work?

I would say the risk assessments we have for critical infrastructure regulators, regarding what’s the delta risk in critical infrastructure from deployment of AI models. So for example, in terms of optimizing train traffic or in terms of optimizing chlorination of water systems—there might be models that get trained to determine what’s optimal following massive rainfall, what’s optimal in different seasons where people have colds or whatever it is—and ensuring that before such models can be deployed, there are standards for what data they’re trained on, standards for red-teaming, etc. That I think is a super important way to apply the lessons we’ve learned in cybersecurity.

The fact is that in 2023 we’re rolling out mandated minimum cybersecurity practices in critical infrastructure for the first time—we’re one of the last countries to do that.

Building in the red-teaming, the testing, the human-in-the-loop before those models are deployed is a core lesson learned from cybersecurity that we want to apply in the AI space.

In the AI executive order, regulators were tasked to determine where their existing regulations—let’s say for safety—already account for the risks around AI, and where there are deltas. Those first risk assessments have come in, and we’re going to use those both to inform the Hill’s work and also to think about how we roll those into the same minimum cybersecurity practices we just talked about that regulators are putting in place.

Where are you starting to see threat actors actually use AI in attacks on the US? Are there places where you’re seeing this technology already being deployed by threat actors?

We mentioned voice cloning and deepfakes. We can say we’re seeing some criminal actors—or some countries—experimenting. You saw FraudGPT, which ostensibly advances criminal use cases. That’s about all we can release right now.

You have been more engaged recently on autonomous vehicles. What’s drawn your interest there?

There’s a whole host of risks that we have to look at: the data that’s collected, and patching. With bulk patches, should we have checks to ensure they’re safe before millions of cars get a software patch? The administration is working on an effort that will probably include both some requests for input as well as assessing the need for new standards. Then we’re looking very likely in the near term to come up with a plan to test those standards, ideally in partnership with our European allies. This is something we both care about, and it’s another example of “Let’s get ahead of it.”

You already see with AVs large amounts of data being collected. We’ve seen a few states, for example, that have given approval for Chinese car models to drive around and collect data. We’re taking a look at that and thinking, “Hold on a second, maybe before we allow this kind of data collection, which can potentially happen around military bases and sensitive sites, we want to really take a look at that more carefully.” We’re interested both in what data is being collected and what we’re comfortable having collected, as well as in what new standards are needed to ensure American cars and foreign-made cars are built safely. Cars used to be hardware, and they’ve shifted to including a great deal of software, and we need to reboot how we think about security and long-term safety.

You’ve also been working a lot on spectrum—you had a big gathering about 6G standards last year. Where do you see that work going, and what are the next steps?

First, I would say there’s a domestic and an international part. It comes from a foundational belief that wireless telecommunications is core to our economic growth—it’s manufacturing robotics in a smart factory, and it’s agriculture. I just went to CES, and John Deere was showing their smart tractors, where they use connectivity to adjust irrigation based on the weather. On the CES floor, a representative noted that integrating AI in agriculture requires changes to US policies on spectrum. I said, “I don’t understand; America’s broadband plan deploys to rural sites.” He said, “Yeah, you’re deploying to the farm, but there are acres and acres of fields that have no connectivity. How are we going to do this stuff?” I hadn’t expected to get pinged on spectrum there, on the floor talking about tractors. But it shows how core it is to what we want to do. This huge promise of drones monitoring electricity infrastructure after storms and determining which lines are down, making maintenance far more efficient: all of that needs connectivity.

Our spectrum is congested and contested. Most of the most usable spectrum is used. The biggest user is the US government, particularly the Department of Defense. And that’s historically because we have so many platforms—think ballistic missile defense, think training, the future of satellite connectivity as well. So the National Spectrum Strategy was saying we need a strategic approach to spectrum, both because the national security of the military community really relies on innovation, and because we need companies to be able to innovate in the space, from Starbucks Wi-Fi to massive spectrum auctions.

We need a strategic approach. It’s complicated, and the scale of spectrum use is fascinating. You remember a year ago at Christmas there was this fight between the airline industry and the telecom industry regarding 5G. One part you may have seen with the National Spectrum Strategy was a presidential memorandum about how we will resolve those disputes going forward—because nobody wants an American getting on a plane to feel unsafe. The White House now will be directly managing those disputes. We spend a great deal of our time convening agencies to say that these are genuinely hard, very technical issues, and we need to resolve them together, because this is core not only to our innovation but to our global leadership.

It was not coincidental that the National Spectrum Strategy rolled out directly before the UN’s World Radio Conference. It was intentional, because we were going into this once-in-four-years international conference with our strategy, so countries can rally around it—because at the end of the day, the biggest markets are the ones that set the approach. When we and China go in, countries that are aligned with us say, “Yeah, we’re in on which band we’re going to have our companies build to.”

The international piece is, the Trump administration really began this pushback on 5G. The big challenge we had with our strategy was we didn’t have an alternative [to China’s 5G standards]. We could go in and say, “Look, Huawei—real national security risks, theft of your company’s IP, theft of your nation’s secrets.” And many countries looked at us and said, “OK, but they’re 30 percent cheaper—what are you going to do?” We’ve made a big push in our partnership with India because India, China, and the US are the three largest telecom markets, and India has massive telecommunications innovation. They’ve done big 5G rollouts. We said, if the US and India are aligned on a strategic approach, we will drive the global market. We can drive global innovation. That’s why we launched two task forces—one is really government-to-government with our telecoms, and one is focused on 6G, because we wanted to directly jump in on the standards work and do that together. We’re jointly innovating in that space.

Under the CHIPS Act, the Commerce Department announced a $1.5 billion open innovation fund. One of the reasons we’re making a big push on open-standards-based 5G is that it allows more diverse players. Commerce just made a significant award for large-scale, transparent interoperability testing, allowing telecoms from all around the world to come test with different kinds of gear. That really brings American companies in on the ground, and it brings innovation to telecom, which has historically been a sector that has lacked it. Digital infrastructure is core to our allies’ economic strength.

Lastly, you’ve also been working on quantum issues. What are you thinking about there?

We’re thinking a lot about the US rollout of post-quantum cryptography. Cryptography upgrades take a decade to happen. NIST will be issuing the final version of its set of standards this spring. The president’s NSM [National Security Memorandum] Number 10 announced that the US government is starting with our cryptographic inventories; we’re starting that move, that push. And we’ve already begun the transition for weapon systems and national security information. Now we’re working closely with allies around the world, essentially innovating in quantum and also saying, we have to begin this migration, because at the end of the day, cryptography underpins all of cybersecurity. We plan to convene key companies in the White House in the coming weeks to discuss our national transition plan.

What’s the next step on that critical infrastructure standards work?

You will shortly be seeing an executive order coming out giving the Coast Guard the authority to set cybersecurity rules for ports, in addition to the physical rules for ports. It’s essentially already been signed, but we wanted to issue it with the notice of proposed rule-making, so it’s all really ready to go. We’ll see that in the next two to three weeks. HHS [the US Department of Health and Human Services]—Medicare and Medicaid—have what’s called “conditions of participation” for hospitals. Essentially, it sets the rules to say if blood spills on the floor, how quickly does it have to be cleaned up? How quickly does a nurse have to respond to a call? They’ve never had any rules for cybersecurity. They’re now going to be releasing the first “conditions of participation” cybersecurity rules for hospitals. It’s really been two years of work to get there.

These repeated large-scale ransomware attacks against hospitals, I think, have really made the sector realize it has a problem. We’ve been doing a lot of outreach with the American Hospital Association, with HHS, to use the authorities that we have, but in a way that is game-changing. We have to do it with care and thoughtfulness. And then there is a set of sectors—essentially CISA sectors—where we lack authorities to mandate, and we need to go to the Hill to get those authorities. That’s where you’re going to see this work coming. For example, 911. Believe it or not, that’s someplace where the ability to mandate is very much needed.

Garrett M. Graff
