
Key Takeaways

  • Sensitive data shared with ChatGPT conversations could be silently exfiltrated without the user’s knowledge or approval.
  • Check Point Research discovered a hidden outbound communication path from ChatGPT’s isolated execution runtime to the public internet.
  • A single malicious prompt could turn an otherwise ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content.
  • A backdoored GPT could abuse the same weakness to obtain access to user data without the user’s awareness or consent.
  • The same hidden communication path could also be used to establish remote shell access inside the Linux runtime used for code execution.
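The report does not include code, but the core finding rests on a simple check: can the code-execution sandbox open direct outbound connections at all? A minimal sketch of such an egress probe is below; the host and port are placeholders, and this is an illustration of the testing idea rather than Check Point's actual method.

```python
import socket


def probe_egress(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if an outbound TCP connection to host:port succeeds.

    A sandbox that is supposed to block direct internet access should
    fail this probe; a hidden outbound path would let it succeed.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Example (hypothetical target): a sandboxed runtime that is properly
# locked down should return False for any public endpoint.
# probe_egress("example.com", 443)
```

If a probe like this succeeds from inside the runtime, the same socket becomes a channel for exfiltrating conversation data or bootstrapping a reverse shell, which is the class of abuse the takeaways describe.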

What Happened

AI assistants now handle some of the most sensitive data people own. Users discuss symptoms and medical history, ask questions about taxes, debts, and personal finances, and upload PDFs, contracts, lab results, and other identity-rich documents containing names, addresses, account details, and private records. That trust depends on a simple expectation: data shared in the conversation remains inside the system.

ChatGPT itself presents outbound data sharing as something restricted, visible, and controlled. Potentially sensitive data is not supposed to be sent to arbitrary third parties simply because a prompt requests it. External actions are expected to be mediated through explicit safeguards, and direct outbound access from the code-execution environment is restricted.

...read more at research.checkpoint.com