Exposing Malicious ML Models on Hugging Face: How the Broken Pickle Format Helps Them Evade Detection

Cybersecurity researchers have recently identified two malicious machine learning models on Hugging Face that use "broken" pickle files to evade detection. According to Karlo Zanki, a researcher at ReversingLabs, the malicious Python payload sits at the beginning of the pickle file extracted from these PyTorch archives. Both models contained a reverse shell that connects to a hard-coded IP address.
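To make the mechanics concrete, here is a minimal, harmless sketch of how a callable can be smuggled into a pickle: an object's __reduce__ method tells the pickler "recreate me by calling this function with these arguments", and the deserializer performs that call at load time. A print call stands in for the reverse-shell command, and the class name is invented purely for illustration.

    import pickle

    class ModelStub:
        # Hypothetical class, for illustration only.
        def __reduce__(self):
            # pickle records "call this function with these arguments";
            # a harmless print stands in for the reverse-shell command.
            return (print, ("payload ran during unpickling",))

    blob = pickle.dumps(ModelStub())
    pickle.loads(blob)   # prints immediately: loading alone triggers the call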

The technique has been dubbed "nullifAI", a nod to its attempt to circumvent the safeguards designed to catch harmful models. The Hugging Face repositories involved are glockr1/ballr7 and who-r-u0000/0000000000000000000000000000000000000, and they appear to be proof-of-concept demonstrations rather than an active supply chain attack.

The pickle serialization format is widely used for distributing ML models but poses notable security risks, since loading a pickle file can execute arbitrary code. The models in question were stored in PyTorch format but, unusually, compressed with 7z rather than the standard ZIP container. That deviation allowed them to slip past Picklescan, the tool Hugging Face uses to flag suspicious pickle files.
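As a rough pre-flight check (an illustrative sketch, not Hugging Face's own tooling), one can at least verify that a supposed PyTorch checkpoint is packaged as the standard ZIP container and list the pickle entries inside it before anything attempts to load it; a 7z-compressed file would fail this check immediately.

    import zipfile

    def list_pickle_entries(path: str) -> list[str]:
        """Return the .pkl entries inside a standard (ZIP-based) PyTorch file."""
        if not zipfile.is_zipfile(path):
            raise ValueError(f"{path} is not a ZIP container: non-standard packaging")
        with zipfile.ZipFile(path) as archive:
            return [name for name in archive.namelist() if name.endswith(".pkl")]

    # Example with a hypothetical file name:
    # print(list_pickle_entries("model.bin"))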

A notable characteristic of these files is that the serialized stream breaks immediately after the malicious payload. The object therefore cannot be fully decompiled, which makes it harder for scanning tools to spot the hidden threat. Crucially, such broken pickle files can still be partially deserialized: even though loading ultimately throws an error, the malicious code has already executed by then, a consequence of the mismatch between how Picklescan validates files and how pickle actually deserializes them.
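The behaviour is easy to reproduce with a harmless stand-in (a sketch of the effect described above, not the actual samples): serialize an object whose payload is an echo command, then strip the trailing STOP opcode so the stream is "broken" after the payload. Loading still executes the command and only then raises an error.

    import os
    import pickle

    class Demo:
        def __reduce__(self):
            # A harmless echo stands in for the models' reverse-shell command.
            return (os.system, ("echo payload executed",))

    blob = pickle.dumps(Demo(), protocol=2)
    broken = blob[:-1]                 # drop the final STOP ('.') opcode

    try:
        pickle.loads(broken)           # the os.system call fires here...
    except Exception as exc:
        print("loading failed only afterwards:", exc)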

The reason this circumvention works is that pickle deserialization is sequential: opcodes are executed one after another until the end-of-stream instruction is reached or a broken instruction stops the process. Because the malicious payload is positioned at the start of the stream, it executes before the break and still goes undetected by Hugging Face's security tools.
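This also suggests how a defender can work with the grain of the format: walk the opcode stream sequentially, flag dangerous imports as they appear, and keep whatever has already been found if the stream breaks. The sketch below uses Python's pickletools for illustration; it is not how Picklescan is implemented, and the blocklist is a deliberately small example.

    import pickletools

    # A deliberately small illustrative blocklist of (module, name) imports.
    SUSPICIOUS = {
        ("os", "system"), ("posix", "system"), ("nt", "system"),
        ("subprocess", "Popen"), ("builtins", "eval"), ("builtins", "exec"),
    }

    def scan_pickle(data: bytes) -> list[tuple[str, str]]:
        findings = []
        pushed_strings = []                      # string constants seen so far
        try:
            for opcode, arg, _pos in pickletools.genops(data):
                if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
                    pushed_strings.append(arg)
                elif opcode.name == "GLOBAL":
                    module, name = arg.split(" ", 1)
                    if (module, name) in SUSPICIOUS:
                        findings.append((module, name))
                elif opcode.name == "STACK_GLOBAL" and len(pushed_strings) >= 2:
                    module, name = pushed_strings[-2], pushed_strings[-1]
                    if (module, name) in SUSPICIOUS:
                        findings.append((module, name))
        except Exception:
            # The stream breaks after the payload; report what was already seen
            # instead of treating the whole file as unscannable.
            pass
        return findings

Run over the truncated example above, this would, on Linux, still report ('posix', 'system') even though the stream can no longer be fully decompiled.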

This discovery serves as a stark reminder of the potential vulnerabilities within machine learning frameworks and the necessity for enhanced vigilance against increasingly sophisticated evasion techniques in cybersecurity.


