Defense leads in race to secure AI, but threats to models and data could soon change that
The good news in the cybersecurity world is that wider deployment of artificial intelligence has not yet opened a massive security hole in the world’s computer systems. The bad news: Flaws and vulnerabilities are beginning to appear that could foreshadow major problems ahead.
That was the prevailing sentiment expressed by numerous research analysts and industry executives at the Black Hat USA cybersecurity conference in Las Vegas this week. AI may be relatively new to many threat actors, but the technology industry’s familiarity with AI has provided it with a certain measure of protection… so far.
“My view right now, when you think about AI versus AI, is that the defense is winning,” Chris Krebs, a SentinelOne Inc. executive and former director of the U.S. Cybersecurity and Infrastructure Security Agency, said during an appearance at Black Hat’s inaugural AI Summit on Tuesday. “We are seeing some leading-edge capabilities that are actually working. In the state actor space… they are still messing around with what works. The defense is still outpacing the offense.”
Upleveling analysts and describing signals
Some of those leading-edge capabilities involve added support for security analysts. AI can take the pressure off weary personnel in security operations centers who suffer from alert overload. It can also provide operational insight much more rapidly, according to Alex Stamos, chief information security officer at SentinelOne and former chief security officer for Facebook and Yahoo.
“The best defensive use of generative AI is analyst efficiency,” Stamos told SiliconANGLE. “’Show me all of my laptops that have talked to a Russian IP address in the last 24 hours.’ It’s taking normal smart humans and upleveling them into super analysts.”
This notion of leveraging AI to provide critical source data has been taken a step further by Dataminr Inc. The 15-year-old company, reported to be preparing for an initial public offering, began using AI in 2018 to integrate descriptions of what its global network of sensors was detecting.
In 2022, Datminr was selected by the Defense Information Systems Agency to provide social media monitoring information to the White House. The firm’s generative AI capabilities received a demonstration in July when Datminr began to pick up signals from social media posts about issues with Microsoft Windows. The service provided early alerts that summarized the narrative in what emerged as the CrowdStrike global outage.
“Generative AI has the capability to be tied into predictive AI systems to automatically describe the signals in front of you,” Ted Bailey, Dataminr’s founder and chief executive, said in a presentation during Black Hat’s AI Summit.
Security flaws in AI repositories
The looming problem is that current spending on AI far outpaces investment in security to protect it. Dave Dewalt, chief executive of NightDragon and former CEO of FireEye and McAfee, noted during the AI Summit that there had been $67 billion in AI investments over just the past 12 months.
“Take a guess of how much security investment has gone into this in the same amount of time,” Dewalt said. “About $300 million. We have to catch up security to AI. We can’t let that gap be there.”
Gaps are already being exposed by security practitioners in how generative AI models are protected. Researchers from Wiz Inc. presented a report at Black Hat on Wednesday detailing how they breached model repositories in AI as-a-service providers Hugging Face, Replicate and SAP. They reported their exploits to all three companies, which have since corrected the vulnerabilities disclosed.
“We were able to get access to millions of public and private AI models… and we had the ability to interfere with all this data,” said Hillai Ben Sasson, a security researcher at Wiz. “This is confidential data we should not have been able to access.”
Security researchers are finding that because the AI attack surface is relatively new, it will take some time to pinpoint where the protection is needed. Nvidia Corp. has been testing weaknesses in large language model structures and found that retrieval-augmented generation or RAG plugins can be especially vulnerable.
“You can specifically target models for poisoning people on their results if you have access to a specific RAG store,” said Rich Harang, principal security architect at Nvidia. “Unfortunately, this is just how RAG works. Limit the data that you have that RAG applications have access to.”
Data as a blind spot
While the security community is focusing on AI models, there is also growing concern around the data that fuels them. Jennifer Gold, head of threat intelligence for the FBI-driven public/private collaboration New York Metro InfraGard, noted that ChatGPT and Copilot data had been found on platforms used in the dark web.
One Singapore-based cybersecurity firm reported that more than 225,000 logs containing compromised ChatGPT credentials were available for sale in the underground. That could potentially open new pathways of threats as malicious actors are able to tap into data stores that were not readily available before.
“Lots of companies are focused, rightfully so, on threats to models,” Steve Stone, head of Zero Labs at Rubrik Inc., told SiliconANGLE. “I’m concerned about the data. If organizations are already struggling with their data today… what happens when they have seven times that data? I am deeply concerned that you have a bunch of threat actors that are able to find a much deeper data surface.”
History is not necessarily on the security community’s side for getting ahead of the curve when a technology wave comes ashore. The world’s embrace of the internet spawned a whole new class of threats and disruptions that is still playing out. Mobile platforms have become prime targets in recent years, with reported deepfake attacks on banking apps and other smartphone exploits.
“Are we handling AI functionally different than any new technology?” Stone said. “I don’t think we are. We probably have already had a really nasty AI intrusion that we don’t even know about.”
This possibility is drawing renewed attention from regulators. Representatives from several government agencies were part of the speaker lineup at Black Hat this week and several expressed concern that, in the urgency to pile onto the AI bandwagon, businesses are not thoroughly testing deployments.
“I am concerned about ways that people are rushing to get AI products to market without safety and security testing,” said Lisa Einstein, newly appointed chief AI officer for CISA. “We see people not being really clear about the ways that security can be brought in.”
Governments are becoming increasingly more motivated to develop a regulatory framework around AI. IDC has reported that 60% of governments worldwide will adopt a risk management approach to framing generative AI policies by 2028.
“It is going to be a really rich enforcement ecosystem,” said former CISA head Krebs.
Much as AI has accelerated the pace of many applications, it has also contributed to a sense within the security community that the threat landscape is shifting rapidly as well. Previous technology waves of adoption have allowed security researchers time to analyze the data and craft ways to combat escalating attacks. Yet, as Black Hat founder Jeff Moss told the gathering in his conference keynote on Wednesday, there is an uneasy feeling that this rapidly advancing wave is going to be very different.
“We’ve got this giant bucket of other problems that’s making it feel like things are speeding up,” Moss said. “It just feels like it’s different and it feels like it’s getting different faster.”
Photo: Mark Albertson/SiliconANGLE
A message from John Furrier, co-founder of SiliconANGLE:
Your vote of support is important to us and it helps us keep the content FREE.
One click below supports our mission to provide free, deep, and relevant content.
Join our community on YouTube
Join the community that includes more than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger, and many more luminaries and experts.
THANK YOU