Why Bigger Models Aren’t the Future of AI
------------------------------------------

The era of scaling at all costs may be drawing to a close. Massive models like GPT-3 proved that size alone could unlock new capabilities, but researchers increasingly see those returns flattening. Voices such as Yann LeCun and Ilya Sutskever argue that further progress will require new ideas, not just more compute.
In response, small language models (SLMs) are gaining traction. Industry leaders like Andy Markus say fine-tuned SLMs can match large models on enterprise tasks while being cheaper, faster, and easier to deploy, especially on edge devices where efficiency matters most.