Categories: Technology

AI models could be hijacked via this Hugging Face security flaw, adding supply chain worries to existing AI concerns

There is a way to abuse the Hugging Face Safetensors conversion tool to hijack AI models and mount supply chain attacks.

This is according to security researchers from HiddenLayer, who discovered the flaw and published their findings last week, The Hacker News reports.

For the uninitiated, Hugging Face is a collaboration platform where software developers can host and collaborate on an unlimited number of pre-trained machine learning models, datasets, and applications.

Changing a widely used model

Safetensors is Hugging Face’s format for safely storing tensors, and the platform offers a conversion service that lets users convert PyTorch models to Safetensors via a pull request.
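The security argument for Safetensors is that the file contains only metadata and raw tensor bytes, with nothing executable. A simplified sketch of the published file layout (an 8-byte little-endian header length, a JSON header, then raw data; `save_sketch`/`load_sketch` are illustrative names, not part of the real library):

```python
import json
import struct

# Simplified sketch of the safetensors file layout: 8-byte little-endian
# header size, a JSON header describing each tensor, then raw bytes.
# Nothing in the file is executable, unlike a pickle-based checkpoint.
def save_sketch(path, name, raw_bytes, dtype, shape):
    header = {name: {"dtype": dtype, "shape": shape,
                     "data_offsets": [0, len(raw_bytes)]}}
    hdr = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(hdr)))  # header length
        f.write(hdr)                          # JSON metadata only
        f.write(raw_bytes)                    # raw tensor data

def load_sketch(path):
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(n))        # parsed as data, never executed
        data = f.read()
    return header, data

# One float32 value (1.0) stored under the name "weight".
save_sketch("demo.safetensors", "weight", b"\x00\x00\x80?", "F32", [1])
header, data = load_sketch("demo.safetensors")
print(header["weight"]["shape"])  # → [1]
```

Because loading is just parsing, a malicious Safetensors file cannot run code on a victim's machine; the vulnerability HiddenLayer found lies in the conversion service around the format, not the format itself.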

And that’s where the trouble lies, as HiddenLayer says the conversion service can be compromised: “It’s possible to send malicious pull requests with attacker-controlled data from the Hugging Face service to any repository on the platform, as well as hijack any models that are submitted through the conversion service.”

In other words, by hijacking the conversion process, threat actors could push changes to any Hugging Face repository while posing as the official conversion bot.

Furthermore, hackers could exfiltrate the SFConvertbot token – belonging to the bot that makes the pull requests – and send malicious pull requests themselves.

Consequently, they could modify the model and set up neural backdoors, which is essentially an advanced supply chain attack.

“An attacker could run any arbitrary code any time someone attempted to convert their model,” the research states. “Without any indication to the user themselves, their models could be hijacked upon conversion.”
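The "arbitrary code" risk stems from the format being converted: legacy PyTorch checkpoints are Python pickles, and unpickling untrusted data can execute attacker-controlled code. A minimal stdlib sketch of the mechanism (using `eval` as a stand-in for a real payload; `MaliciousCheckpoint` is a hypothetical name):

```python
import pickle

# A crafted "model checkpoint": merely loading it executes attacker code.
# Legacy PyTorch checkpoints are pickles, so a service that loads
# untrusted models for conversion inherits exactly this risk.
class MaliciousCheckpoint:
    def __reduce__(self):
        # On unpickling, pickle invokes the returned callable with the
        # given args. eval stands in for a real payload (token theft,
        # shell commands, repository tampering, etc.).
        return (eval, ("6 * 7",))

blob = pickle.dumps(MaliciousCheckpoint())
result = pickle.loads(blob)  # "loading the model" runs eval("6 * 7")
print(result)  # → 42
```

This is why a conversion service that opens user-submitted PyTorch files inside its own container is an attractive target: the act of loading the model is itself the exploit.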

Finally, when a user tries to convert a repository, the attack could lead to their Hugging Face token getting stolen, granting the attackers access to restricted internal models and datasets. From there on, they could compromise them in various ways, including dataset poisoning.

In one hypothetical scenario, a user submits a conversion request for a public repository, unknowingly changing a widely used model, resulting in a dangerous supply chain attack.

“Despite the best intentions to secure machine learning models in the Hugging Face ecosystem, the conversion service has proven to be vulnerable and has had the potential to cause a widespread supply chain attack via the Hugging Face official service,” the researchers concluded.

“An attacker could gain a foothold into the container running the service and compromise any model converted by the service.”


Sead Fadilpašić
