Why are deepfakes becoming a major public safety concern?
-

Deepfakes are becoming more dangerous because AI tools can now generate highly realistic videos and images that convincingly imitate real people. Unlike earlier generations, modern deepfakes can respond dynamically to prompts, synchronize speech with facial movements, and mimic natural gestures. This makes them effective tools for harassment, fraud, disinformation, and impersonation.

Governments have begun responding. Malaysia and Indonesia recently restricted access to Grok, developed by xAI, over concerns about non-consensual, sexually explicit imagery. In the US, California Attorney General Rob Bonta announced investigations into similar abuses, highlighting how deepfakes can be weaponized to harm individuals and undermine trust online.
-
deepfakes went from funny tech demos to an actual menace real fast