Spanish Startup Multiverse Unveils Compressed AI Model to Rival Industry Giants

Spanish AI startup Multiverse Computing is tackling one of artificial intelligence’s biggest challenges: the massive size and cost of deploying large language models. Its proprietary compression technology, CompactifAI — inspired by quantum computing principles — enables the company to shrink advanced AI models while maintaining performance. The latest result is HyperNova 60B 2602, a compressed model derived from OpenAI’s gpt-oss-120b, now available for free on Hugging Face.
At just 32GB, HyperNova 60B is roughly half the size of its source model, offering lower memory usage and latency while retaining strong accuracy. The upgraded version also improves tool calling and agentic coding capabilities — critical features as inference costs rise. Multiverse claims the compressed model outperforms competitors such as Mistral AI’s Mistral Large 3, positioning the company as a serious European contender in the global AI race.