Tags: Campaign, Reconnaissance

Jan 08, 2026 • GreyNoise Blog

Threat Actors Actively Targeting LLMs


Source
GreyNoise Blog
Category
other
Severity
medium

Executive Summary

Recent analysis of Ollama honeypot infrastructure indicates a significant surge in adversarial activity targeting Large Language Model (LLM) deployments. Between October 2025 and January 2026, researchers recorded 91,403 attack sessions spanning two distinct campaigns focused on systematically mapping the AI attack surface. While specific threat actor identities and malware families remain unconfirmed in this dataset, the session volume suggests organized reconnaissance against AI infrastructure and underscores adversaries' growing interest in compromising AI systems. Organizations deploying LLMs should prioritize securing exposed endpoints and monitoring for unauthorized access attempts. Although the report does not detail specific mitigation strategies, the findings call for enhanced visibility into AI deployment surfaces. Severity is rated medium because the observed activity is reconnaissance, though exploitation potential remains high given the systematic mapping seen across the four-month monitoring period.
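As a starting point for the "secure exposed endpoints" recommendation, defenders can check whether their own Ollama instances answer unauthenticated requests. The sketch below (not from the report; a minimal defender-side check) probes Ollama's model-listing endpoint, `GET /api/tags`, on its default port 11434; an unauthenticated 200 response means the instance is reachable to exactly the kind of reconnaissance described above.

```python
# Minimal exposure check for an Ollama deployment (assumption: this helper is
# illustrative and not part of the GreyNoise report). Ollama serves an HTTP API
# on port 11434 by default; GET /api/tags lists installed models and requires
# no authentication out of the box.
import json
import urllib.error
import urllib.request

OLLAMA_DEFAULT_PORT = 11434


def ollama_probe_url(host: str, port: int = OLLAMA_DEFAULT_PORT) -> str:
    """Build the URL for Ollama's model-listing endpoint."""
    return f"http://{host}:{port}/api/tags"


def check_exposure(host: str, port: int = OLLAMA_DEFAULT_PORT,
                   timeout: float = 3.0) -> bool:
    """Return True if the endpoint answers without authentication."""
    try:
        with urllib.request.urlopen(ollama_probe_url(host, port),
                                    timeout=timeout) as resp:
            models = json.load(resp).get("models", [])
            print(f"{host}:{port} exposed, {len(models)} model(s) listed")
            return True
    except (urllib.error.URLError, OSError):
        # Connection refused, filtered, or unreachable: not publicly exposed.
        return False


if __name__ == "__main__":
    # Only probe hosts you own or are authorized to test.
    check_exposure("127.0.0.1")
```

If the check returns True on an internet-facing host, placing the instance behind a reverse proxy with authentication, or binding it to localhost, removes it from this attack surface.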

Summary

Our Ollama honeypot infrastructure captured 91,403 attack sessions between October 2025 and January 2026. Buried in that data: two distinct campaigns that reveal how threat actors are systematically mapping the expanding surface area of AI deployments.
