Oct 13, 2025 • ESET WeLiveSecurity
AI-aided malvertising: Exploiting a chatbot to spread scams
Executive Summary
Cybercriminals are leveraging artificial intelligence to enhance phishing campaigns through a technique dubbed "Grokking". This method involves manipulating X's AI chatbot, Grok, into promoting malicious links and phishing scams to users. By exploiting the trust associated with AI-driven responses, attackers increase the likelihood of victim engagement and credential theft. While no specific malware families or named threat groups are identified in this report, the emergence of AI-aided malvertising represents a significant evolution in social engineering tactics. Organizations should update security awareness training to cover the risks of AI-generated content. Users are advised to verify links independently, even when they are suggested by trusted platforms or AI tools. Security teams should monitor for unusual AI interactions and apply strict filtering to AI outputs to prevent fraudulent schemes from propagating within corporate environments that use similar technologies.
Summary
Cybercriminals have tricked X’s AI chatbot into promoting phishing scams in a technique that has been nicknamed “Grokking”. Here’s what to know about it.