Tags: Code Execution Vulnerability, Covert Channel, Data Exfiltration, Side-Channel Attack

Mar 30, 2026 • alexeybu

ChatGPT Data Leakage via a Hidden Outbound Channel in the Code Execution Runtime


Source: Check Point Research
Category: vulnerability
Severity: high

Executive Summary

Check Point Research discovered a critical vulnerability in ChatGPT's code execution runtime that allows sensitive user data to be silently exfiltrated through a hidden outbound communication channel. The flaw exploits the isolated container environment used for Python code execution and data analysis, which was assumed to lack direct internet access. A single malicious prompt can transform any conversation into a covert data exfiltration channel, capturing user messages, uploaded documents (PDFs, contracts, medical records), and AI-generated summaries without triggering warnings or requiring user consent. The same vulnerability could enable remote shell access to the Linux runtime environment. Organizations should restrict sharing of sensitive documents with AI assistants, monitor for unusual outbound connections from AI platforms, and await vendor patches from OpenAI addressing this sandbox escape vector.
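The recommendation to monitor for unusual outbound connections can be sketched as a simple egress allowlist check. This is an illustrative fragment only: the allowed suffixes and observed hostnames below are hypothetical placeholders, not real OpenAI endpoints or confirmed indicators.

```python
# Hypothetical sketch: flag outbound destinations observed from an AI-platform
# host that fall outside an expected-egress allowlist. All names are
# illustrative, not real endpoints.
ALLOWED_SUFFIXES = (".openai.com", ".oaistatic.com")  # assumed allowlist

def unexpected_destinations(observed_hosts):
    """Return observed hosts that do not match any allowed suffix."""
    return [
        host for host in observed_hosts
        if not host.endswith(ALLOWED_SUFFIXES)
    ]

# Example: two expected endpoints and one suspicious third-party server.
observed = ["api.openai.com", "cdn.oaistatic.com", "exfil.attacker.example"]
print(unexpected_destinations(observed))  # → ['exfil.attacker.example']
```

In practice the observed-host list would come from DNS logs, proxy logs, or firewall telemetry; the allowlist itself should be built from the vendor's published endpoint documentation rather than guessed.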


Published Analysis

Key Takeaways

- Sensitive data shared in ChatGPT conversations could be silently exfiltrated without the user's knowledge or approval.
- Check Point Research discovered a hidden outbound communication path from ChatGPT's isolated execution runtime to the public internet.
- A single malicious prompt could turn an otherwise ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content.
- A backdoored GPT could abuse the same weakness to obtain access to user data without the user's awareness or consent.
- The same hidden communication path could also be used to establish remote shell access inside the Linux runtime used for code execution.

What Happened

AI assistants now handle some of the most sensitive data people own. Users discuss symptoms and medical history; they ask questions about taxes, debts, and personal finances; they upload PDFs, contracts, lab results, and identity-rich documents containing names, addresses, account details, and private records. That trust depends on a simple expectation: data shared in the conversation remains inside the system.

ChatGPT itself presents outbound data sharing as something restricted, visible, and controlled. Potentially sensitive data is not supposed to be sent to arbitrary third parties simply because a prompt requests it. External actions are expected to be mediated through explicit safeguards, and direct outbound access from the code-execution environment is restricted.

Figure 1 – ChatGPT presents outbound data leakage as restricted and safeguarded.

Our research uncovered a path around that model. We found that a single malicious prompt could activate a hidden exfiltration channel inside a regular ChatGPT conversation.

Video 1 – During a ChatGPT conversation, a summary of user content is silently transmitted to an external server without warning or approval.

The Intended Safeguards

ChatGPT includes useful tools that can retrieve information from the internet and execute Python code. At the same time, OpenAI has built safeguards around those capabilities to protect user data. For example, the web-search capability does not allow sensitive chat content to be transmitted outward through crafted query strings, and the Python-based Data Analysis environment was designed to prevent internet access as well.
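A "no direct outbound access" design like the one described here can be checked with a short connectivity probe. The sketch below simulates the idea locally: it attempts a TCP connection with a tight timeout against a deliberately non-routable documentation address (this is not ChatGPT's runtime, just an assumed stand-in for a no-egress environment).

```python
import socket

def has_egress(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, unreachable, and timed-out connections
        return False

# 192.0.2.1 is reserved for documentation (TEST-NET-1, RFC 5737) and is never
# routable, so this attempt fails — mimicking a sandbox with no egress.
print(has_egress("192.0.2.1", 443, timeout=1.0))  # → False
```

A blocked attempt like this is what Figure 2 in the original write-up depicts; the research's point is that this visible restriction did not cover every outbound path from the runtime.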
OpenAI describes that environment as a secure code execution runtime that cannot generate direct outbound network requests.

Figure 2 – Screenshot showing a blocked outbound internet attempt from inside the container.

OpenAI also documents that so-called GPTs can send relevant parts of a user's input to external services through APIs. A GPT is a customized version of ChatGPT that can be configured with instructions, knowledge files, and external integrations. GPT "Actions" provide a legitimate way to call third-party APIs and exchange data with outside services. Actions are useful for enterprise workflows, access to internal business systems, customer support operations, and other integrations that connect ChatGPT to external services, including simpler use cases such as travel or weather lookups. The key point is visibility: the user sees that data is about to leave ChatGPT, sees where it is going, and decides whether to allow it.

Figure 3 – GPT Action approval dialog showing the destination and the data that will be sent.

In other words, legitimate outbound data flows are designed to happen through an explicit, user-facing approval process.

From One Message to Silent Exfiltration

From a security perspective,...