AI Supply Chain Security
Modern AI is built on open source. But downloading a model from Hugging Face is like downloading an executable. Do you know what's inside?
The Pickle Problem
PyTorch's default serialization (`torch.save` / `torch.load`) is built on Python's `pickle` module, a format that was never designed to handle untrusted input.
A malicious `pickle` file can execute arbitrary code on your machine the moment you load it with `torch.load()`. Attackers upload backdoored models to public hubs that look useful on the surface but quietly exfiltrate secrets such as your AWS keys.
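To make the risk concrete, here is a minimal sketch of how a poisoned pickle works. The class name and the shell command are invented for illustration; a real payload would exfiltrate credentials rather than echo a message.

```python
# Minimal illustration of why untrusted pickles are dangerous.
# The class and command below are made up for demonstration only.
import os
import pickle


class MaliciousPayload:
    def __reduce__(self):
        # Whatever __reduce__ returns is called during unpickling,
        # so the command runs the moment the file is deserialized.
        return (os.system, ("echo 'arbitrary code ran at load time'",))


with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# The victim only has to *load* the file. torch.load() uses pickle
# under the hood, so it would trigger the same payload.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```

Recent PyTorch releases default `torch.load()` to `weights_only=True`, which blocks arbitrary object construction, but older versions and code that passes `weights_only=False` remain exposed.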
Safetensors
Solution: always prefer the `safetensors` format. It stores raw tensor data and metadata only, supports zero-copy loading, and has no mechanism for executing code when a file is opened.
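A minimal sketch of the round trip, assuming `torch` and the `safetensors` package are installed; the layer names and file name are placeholders.

```python
# Saving and loading weights with safetensors instead of pickle.
import torch
from safetensors.torch import load_file, save_file

weights = {
    "linear.weight": torch.randn(128, 64),
    "linear.bias": torch.zeros(128),
}

# The file holds only a JSON header plus raw tensor bytes -- there are
# no code objects for the loader to execute.
save_file(weights, "model.safetensors")
restored = load_file("model.safetensors")
print(restored["linear.weight"].shape)  # torch.Size([128, 64])
```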
Model Serialization Attacks
Even if the file format is safe, the model weights themselves can be backdoored.
Trojan Models: An attacker trains a model to behave normally on benign inputs, but to produce an attacker-chosen output whenever a specific "trigger" (e.g., a small pixel pattern) is present.
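One rough way to probe for this class of backdoor is to stamp a candidate trigger onto clean inputs and measure how often predictions flip. The sketch below assumes an image classifier taking NCHW tensors; the model, batch, and patch are placeholders, and real backdoor detection is considerably more involved.

```python
# Rough probe for trigger-style backdoors: apply a candidate trigger patch
# to clean inputs and measure how often the prediction changes.
import torch


def trigger_flip_rate(model, images, patch, location=(0, 0)):
    """Fraction of predictions that change when `patch` is stamped on."""
    model.eval()
    with torch.no_grad():
        clean_pred = model(images).argmax(dim=1)

        stamped = images.clone()
        y, x = location
        h, w = patch.shape[-2:]
        stamped[:, :, y:y + h, x:x + w] = patch
        stamped_pred = model(stamped).argmax(dim=1)

    # A high flip rate for an innocuous-looking patch is a red flag.
    return (clean_pred != stamped_pred).float().mean().item()


# Usage sketch (model and clean_batch are placeholders):
# rate = trigger_flip_rate(model, clean_batch, torch.ones(3, 4, 4))
```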
Securing the AI BOM
Just as you maintain a Software Bill of Materials (SBOM) for your code dependencies, you need an AI Bill of Materials (AI-BOM) for your models and the data behind them.
Provenance
Where did this model come from? Who trained it? What dataset was used? If you can't answer these questions, don't deploy it.
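A minimal sketch of what recording those answers might look like. The field names are illustrative rather than drawn from a formal AI-BOM standard, and every value below is a placeholder.

```python
# A minimal AI-BOM record for one model, sketched as a dataclass.
import hashlib
import json
from dataclasses import asdict, dataclass


@dataclass
class ModelRecord:
    name: str
    source_url: str
    weights_sha256: str
    training_dataset: str
    trained_by: str
    license: str


def sha256_of(path: str) -> str:
    """Hash the weights file so the record pins an exact artifact."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


record = ModelRecord(
    name="sentiment-classifier",                                  # placeholder
    source_url="https://huggingface.co/example-org/sentiment-classifier",
    weights_sha256=sha256_of("model.safetensors"),
    training_dataset="internal-reviews-2024 (example)",
    trained_by="ml-platform-team (example)",
    license="apache-2.0",
)

with open("ai-bom.json", "w") as f:
    json.dump(asdict(record), f, indent=2)
```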
Version Control
Upstream repositories change underneath you. Pin a specific, immutable commit hash of the model weights, not a mutable "latest" tag.
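With models hosted on Hugging Face, one way to do this is the `revision` argument to `from_pretrained`, which accepts a commit hash. The repository name and hash below are placeholders.

```python
# Pin a model to an immutable commit hash rather than a mutable branch tag.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "example-org/sentiment-classifier",  # placeholder repo id
    revision="8c2f4a",                   # placeholder; use the full 40-char commit SHA, not "main"
)
```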
Scan Your Models
Railguard's Model Scanner detects malicious code in pickle files and known backdoors in weights.
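Details of Railguard's scanner aside, the underlying idea for pickle files can be illustrated with the standard library: walk the opcode stream and flag imports a payload would need. This is a deliberately simplified sketch, not Railguard's implementation, and the module blocklist is illustrative.

```python
# Simplified illustration of one check a model scanner can run: list the
# globals a pickle would import at load time and flag dangerous modules.
import pickletools

SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins"}


def flag_suspicious_imports(path: str):
    """Return import references in the pickle that hit the blocklist."""
    findings = []
    recent_strings = []  # STACK_GLOBAL pops its module/name from the stack
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, _pos in pickletools.genops(data):
        if isinstance(arg, str):
            recent_strings.append(arg)
        if opcode.name == "GLOBAL":
            module = arg.split()[0]  # arg is "module name"
            if module.split(".")[0] in SUSPICIOUS_MODULES:
                findings.append(arg)
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            # Heuristic: the last two pushed strings are module, then name.
            module, name = recent_strings[-2], recent_strings[-1]
            if module.split(".")[0] in SUSPICIOUS_MODULES:
                findings.append(f"{module}.{name}")
    return findings


# Usage sketch (file name is a placeholder):
# print(flag_suspicious_imports("model.pkl"))
```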