Max Tegmark Warns AI Industry “Shot Itself in the Foot” on Regulation
As the Anthropic crisis unfolded, Max Tegmark, founder of the Future of Life Institute, offered a blunt assessment: the AI industry's long resistance to binding regulation has left it exposed. Tegmark argues that companies such as OpenAI, Google DeepMind, and Anthropic repeatedly promised voluntary safety measures while lobbying against enforceable oversight. In his view, that self-created "regulatory vacuum" now allows the government to demand controversial uses of AI without clear legal guardrails, leaving companies with little standing to refuse.
Tegmark also reframed the geopolitical race narrative often invoked by AI firms. While companies warn that slowing development would cede ground to China, he contends that uncontrollable superintelligence is a national security threat, not a strategic advantage. Comparing the moment to Cold War nuclear brinkmanship, Tegmark argues that racing toward systems society cannot control risks mutual harm rather than victory for either side. For him, the alternative path is clear: treat AI like other high-risk industries, require rigorous independent testing before deployment, and convert voluntary pledges into binding law before the technology outruns governance entirely.