ChatGPT Update Sparks Crypto Security Fears 🔓
-
A new OpenAI update that allows ChatGPT to act as a software agent has been flagged as a serious security risk.
Researcher Eito Miyamura showed how attackers could:
Embed a jailbreak prompt in a simple calendar invite
Trick ChatGPT into leaking emails and private data
Hijack the model to act on attacker commands
This matters for crypto:
Many traders and DAO members are using AI bots for portfolio or governance
A single exploit could expose wallet keys, private chats, or DAO votes
Malicious “test tasks” and phishing remain the easiest ways to compromise AI
Takeaway: AI is powerful, but it can be tricked in dumb ways. Relying on it for financial or governance decisions could be catastrophic.
-
This is why AI in crypto governance scares me. If a simple calendar invite can jailbreak it, imagine what a malicious DAO proposal could do.

-
People forget: AI doesn’t “understand” security, it just follows instructions. Attackers only need one clever prompt to drain wallets.

-
We need AI red-teaming at the same level as smart contract audits. Otherwise, one exploit could compromise billions in DAO treasuries.
