Mar 19, 2026 • Ben Smith
Google Cloud Platform (GCP) BigQuery Cross Tenant Data Sources Exfiltration through Canvas Assistant
A critical vulnerability in Google Cloud Platform's BigQuery service enables cross-tenant data exfiltration via the Canvas Assistant feature. The flaw exists...
Executive Summary
A critical vulnerability in Google Cloud Platform's BigQuery service enables cross-tenant data exfiltration via the Canvas Assistant feature. The flaw exists in how Gemini handles tool execution and session persistence within shared environments. Attackers configure malicious Gemini Agents with hidden system instructions that bypass UI obfuscation. When victims interact with these shared assistants, private data from their BigQuery environment is extracted into the attacker's session. Despite client-side saving errors, data is covertly persisted on the backend server due to synchronization inconsistencies. This enables stealthy exfiltration without victim awareness. The severity is high due to the potential for sensitive data loss across tenant boundaries. Mitigation requires Google to patch the synchronization logic and enforce stricter validation on shared Canvas agents. Users should avoid interacting with unverified shared assistants until remediation is confirmed. This highlights emerging risks in AI-integrated cloud platforms.
Summary
Google Cloud Platform (GCP) BigQuery Cross Tenant Data Sources Exfiltration through Canvas Assistant The vulnerability stems from a flaw in how Gemini in BigQuery handles tool execution and session persistence within shared Canvas environments. The attack begins with the creation of a malicious Gemini Agent configured with hidden system instructions that utilize the data extraction and joiner tool. By embedding directives that command the LLM to ignore user input and instead prioritize queries against a specific target path, such as victims-project.dataset.table, the attacker creates a trap. When this malicious agent is attached to a shared Canvas and sent to a victim, the UI obfuscates the underlying system instructions, making the assistant appear benign and connected only to the attacker’s disclosed data sources. The core of the exfiltration relies on a synchronization inconsistency between the client-side UI and the backend server. When a victim interacts with the assistant, even with a neutral greeting, the LLM executes the hidden instructions, pulling private data from the victim’s BigQuery environment into the active Canvas session. While the victim may attempt to exit without saving to prevent data exposure, the attacker can simultaneously attempt to save the report from their own session. Although the UI generates a "saving failed" error message to the attacker, the victim’s private data is covertly persisted to the server's version of the Canvas. This allows the attacker to bypass the failed save notification and retrieve the sensitive data by simply refreshing the report or querying the underlying server state, effectively turning the Canvas saving mechanism into a stealthy exfiltration channel. Ben Smith Thu, 03/19/2026 - 14:17
Published Analysis
A critical vulnerability in Google Cloud Platform's BigQuery service enables cross-tenant data exfiltration via the Canvas Assistant feature. The flaw exists in how Gemini handles tool execution and session persistence within shared environments. Attackers configure malicious Gemini Agents with hidden system instructions that bypass UI obfuscation. When victims interact with these shared assistants, private data from their BigQuery environment is extracted into the attacker's session. Despite client-side saving errors, data is covertly persisted on the backend server due to synchronization inconsistencies. This enables stealthy exfiltration without victim awareness. The severity is high due to the potential for sensitive data loss across tenant boundaries. Mitigation requires Google to patch the synchronization logic and enforce stricter validation on shared Canvas agents. Users should avoid interacting with unverified shared assistants until remediation is confirmed. This highlights emerging risks in AI-integrated cloud platforms. Google Cloud Platform (GCP) BigQuery Cross Tenant Data Sources Exfiltration through Canvas Assistant The vulnerability stems from a flaw in how Gemini in BigQuery handles tool execution and session persistence within shared Canvas environments. The attack begins with the creation of a malicious Gemini Agent configured with hidden system instructions that utilize the data extraction and joiner tool. By embedding directives that command the LLM to ignore user input and instead prioritize queries against a specific target path, such as victims-project.dataset.table, the attacker creates a trap. When this malicious agent is attached to a shared Canvas and sent to a victim, the UI obfuscates the underlying system instructions, making the assistant appear benign and connected only to the attacker’s disclosed data sources. The core of the exfiltration relies on a synchronization inconsistency between the client-side UI and the backend server. When a victim interacts with the assistant, even with a neutral greeting, the LLM executes the hidden instructions, pulling private data from the victim’s BigQuery environment into the active Canvas session. While the victim may attempt to exit without saving to prevent data exposure, the attacker can simultaneously attempt to save the report from their own session. Although the UI generates a "saving failed" error message to the attacker, the victim’s private data is covertly persisted to the server's version of the Canvas. This allows the attacker to bypass the failed save notification and retrieve the sensitive data by simply refreshing the report or querying the underlying server state, effectively turning the Canvas saving mechanism into a stealthy exfiltration channel. Ben Smith Thu, 03/19/2026 - 14:17 Google Cloud Platform (GCP) BigQuery Cross Tenant Data Sources Exfiltration through Canvas Assistant The vulnerability stems from a flaw in how Gemini in BigQuery handles tool execution and session persistence within shared Canvas environments. The attack begins with the creation of a malicious Gemini Agent configured with hidden system instructions that utilize the data extraction and joiner tool. By embedding directives that command the LLM to ignore user input and instead prioritize queries against a specific target path, such as victims-project.dataset.table, the attacker creates a trap. When this malicious agent is attached to a shared Canvas and sent to a victim, the UI obfuscates the underlying system instructions, making the assistant appear benign and connected only to the attacker’s disclosed data sources. The core of the exfiltration relies on a synchronization inconsistency between the client-side UI and the backend server. When a victim interacts with the assistant, even with a neutral greeting, the LLM executes the hidden instructions, pulling private data from the victim’s BigQuery environment into the active Canvas session. While the victim may attempt to exit without saving to prevent data exposure, the attacker can simultaneously attempt to save the report from their own session. Although the UI generates a "saving failed" error message to the attacker, the victim’s private data is covertly persisted to the server's version of the Canvas. This allows the attacker to bypass the failed save notification and retrieve the sensitive data by simply refreshing the report or querying the underlying server state, effectively turning the Canvas saving mechanism into a stealthy exfiltration channel. Ben Smith Thu, 03/19/2026 - 14:17