AI Can Now Execute 70% of DeFi Exploits When Given Attack Pattern Knowledge and ZetaChain Just Proved Why That Matters
-

Two developments published in the same week combine to paint a concerning picture of where DeFi security is heading. ZetaChain disclosed that a premeditated $334,000 exploit chained together three individually minor design flaws, exactly the type of multi-step vulnerability that security researchers have consistently struggled to get protocols to take seriously. In the same week, a new a16z study found that an off-the-shelf AI agent given structured knowledge about common attack patterns and exploit workflows produced working DeFi exploits in 70% of cases, up from 10% without that knowledge. The gap between those two numbers measures how much of an advantage an attacker gains simply by knowing what to look for.
The ZetaChain case illustrates the practical consequence of that dynamic. The bug was reported through the protocol's bug bounty program and dismissed. A human attacker familiar with chained attack vectors then combined arbitrary cross-chain instruction permissiveness, an overly narrow blocklist, and unrevoked unlimited token approvals to drain the gateway across four chains. a16z's findings suggest that an AI agent equipped with structured knowledge of those same attack-pattern categories would now have roughly a 70% chance of identifying and exploiting similar vulnerabilities autonomously.

For DeFi protocols, the implication is direct: the threshold for what constitutes a dangerous bug report has fundamentally changed. A vulnerability that appears harmless in isolation but dangerous in combination is no longer a theoretical edge case that can be dismissed. It is precisely the type of multi-step attack surface that AI-assisted exploiters are now most capable of finding and executing at scale.
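To make the chaining concrete, here is a minimal toy model in Python of how three individually minor design choices of the kind described above can compose into a drain. Every name, address, and function here is hypothetical, invented for illustration; this is not ZetaChain's actual code or the actual exploit, just a sketch of the general flaw classes (permissive instruction forwarding, blocklist-instead-of-allowlist filtering, and standing unlimited approvals).

```python
# Toy model of three individually minor flaws composing into a drain.
# All names and logic are hypothetical, not drawn from ZetaChain's code.

# Flaw 2: a narrow blocklist of known-bad targets, rather than an
# allowlist of intended ones. The token contract itself is not on it.
BLOCKLIST = {"0xKnownBadContract"}

# Flaw 3: unlimited token approvals granted to the gateway and never
# revoked after use.
APPROVALS = {
    "0xAlice": ("0xTokenA", float("inf")),  # user -> (token, allowance)
}

def execute_cross_chain(target: str, calldata: dict) -> str:
    """Flaw 1: the gateway forwards an arbitrary instruction to any
    target address that merely passes the blocklist check."""
    if target in BLOCKLIST:
        raise PermissionError("blocked target")
    return dispatch(target, calldata)

def dispatch(target: str, calldata: dict) -> str:
    # In this toy model, routing a transferFrom call at the token
    # contract spends the gateway's standing approvals (Flaw 3).
    if calldata.get("fn") == "transferFrom":
        owner = calldata["owner"]
        token, allowance = APPROVALS[owner]
        assert target == token and allowance > 0
        return f"drained {owner}'s {token} balance via gateway approval"
    return "no-op"

# Each check passes on its own: the target is not blocklisted, the
# instruction is well-formed, the approval is valid. Chained together,
# the attacker spends Alice's unlimited approval through the gateway.
result = execute_cross_chain(
    "0xTokenA", {"fn": "transferFrom", "owner": "0xAlice"}
)
```

The point of the sketch is that no single line is obviously a bug report worth escalating: a blocklist entry is present, approvals follow the standard ERC-20 pattern, and instruction forwarding is the gateway's intended feature. The vulnerability only exists in their composition, which is exactly the surface that pattern-equipped automated exploiters are positioned to search.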