Apr 15, 2026 • SANS Internet Storm Center
Scanning for AI Models, (Tue, Apr 14th)
Executive Summary
Beginning March 10, 2026, distributed sensor networks, specifically DShield, observed a significant increase in scanning activity targeting artificial intelligence models. Probes were detected attempting to identify services such as Claude, OpenClaw, and Huggingface instances. This activity represents a reconnaissance phase, likely aimed at mapping exposed AI infrastructure for potential future exploitation. While no specific threat actor or malware family has been attributed to this campaign, the sustained nature of the probes suggests coordinated interest in AI assets. The severity is currently assessed as low, as no successful compromises were reported. Organizations hosting AI models should review firewall logs, restrict unnecessary public access to inference endpoints, and implement robust authentication mechanisms. Continuous monitoring of network traffic for unusual scanning patterns is recommended to detect early signs of targeted attacks against machine learning infrastructure.
Summary
Starting March 10, 2026, my DShield sensor began receiving probes for various AI models such as claude, openclaw, huggingface, etc. Reviewing the data already reported by other DShield sensors to the ISC, the DShield database shows reporting of these probes started that day, and they have continued ever since.
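If you want to check your own web server logs for this kind of activity, a simple keyword match over request paths is enough to surface candidates. The sketch below is a minimal example, assuming Combined/Common Log Format access logs; the keyword list is drawn from the model names seen in these probes and should be extended to match whatever your own sensors report.

```python
import re

# Hypothetical keyword list based on the model names observed in the probes;
# extend it with whatever strings show up in your own logs.
AI_PROBE_KEYWORDS = ("claude", "openclaw", "huggingface")

def find_ai_probes(log_lines):
    """Return (client_ip, request_path) pairs whose request path
    mentions one of the AI model keywords."""
    hits = []
    # Minimal Common/Combined Log Format matcher: IP ... "METHOD /path HTTP/x.y"
    pattern = re.compile(r'^(\S+) .*?"(?:GET|POST|HEAD) (\S+) HTTP/[\d.]+"')
    for line in log_lines:
        m = pattern.match(line)
        if not m:
            continue
        ip, path = m.group(1), m.group(2)
        if any(k in path.lower() for k in AI_PROBE_KEYWORDS):
            hits.append((ip, path))
    return hits

# Fabricated sample lines for illustration only
sample = [
    '203.0.113.7 - - [10/Mar/2026:12:00:01 +0000] "GET /v1/models/claude HTTP/1.1" 404 162',
    '198.51.100.3 - - [10/Mar/2026:12:00:05 +0000] "GET /index.html HTTP/1.1" 200 512',
]
print(find_ai_probes(sample))
# -> [('203.0.113.7', '/v1/models/claude')]
```

A run like this over a day of logs gives you the source IPs to compare against what DShield is reporting; matching on the request path rather than the whole line avoids false positives from referrer or user-agent strings.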