AI Has Been Warning Us About Itself for 500 Years and We Keep Ignoring the Message
-

From the clay golem of sixteenth-century Prague to HAL 9000 to the neural networks of cyberpunk fiction, literature has been telling the same story for five centuries: the thing you build to help yourself ends up reshaping you. Rabbi Loew created a protector and almost immediately had to shut it down when it went rogue. Mary Shelley's Frankenstein was never really a monster story; it was a case study in what happens when a brilliant engineer solves the technical problem and ignores every consequence that follows. Karel Čapek introduced the word robot in 1920, in a play where the machines do not triumph through malice: humans simply make themselves unnecessary by outsourcing everything they used to do. The lesson was available then and has been available ever since. We read it, nodded, and went back to asking chatbots for our legal briefs and our medical advice.

The failures of the current AI deployment wave read like a predictable continuation of that story.
Microsoft replaced its news editors with an algorithm that mixed up photos of two singers in a story about racism, then had to bring humans back to clean up the mess. A major eating disorder support organization replaced its helpline volunteers with a chatbot that advised people with anorexia to count calories and lose weight. Air Canada ended up in court after its chatbot invented a refund policy that did not exist, and the airline argued that the bot was a separate legal entity responsible for its own words. Surveys now find that 55% of companies that rushed to replace employees with AI regret the decision, with the promised savings evaporating into lost customers and reputational damage. The pattern is not a failure of the technology. It is a failure to understand what the technology actually is.
-
We read Frankenstein, nodded thoughtfully, and then deployed a chatbot to give medical advice to people with eating disorders.