Multiverse Computing Raises $215M to Curb Mounting LLM Costs
Spain-based Multiverse Computing lands a $215 million Series B to scale CompactifAI, a tool that shrinks language models by 95 percent and dramatically cuts compute bills.

Series B underscores investor appetite
The new round, disclosed on 12 June 2025, was led by Bullhound Capital and lifts Multiverse Computing’s total backing to about $250 million. The syndicate features strategic names such as HP Tech Ventures, SETT, Forgepoint Capital International, CDP Venture Capital, Santander Climate VC, Quantonation, Toshiba and Capital Riesgo de Euskadi – Grupo SPRI. Management says the proceeds will accelerate the global roll-out of CompactifAI and expand the firm’s presence across North America, Europe and Asia.
Tensor networks push compression beyond conventional limits
Unlike pruning (which removes neurons outright) or quantisation (which lowers numerical precision), techniques that often erode accuracy, CompactifAI converts weight matrices into Matrix Product Operators through sequential singular-value decompositions. Co-founder and CEO Enrique Lizaso Olmos notes that the approach “treats the correlation space as primary, letting us keep model fidelity while erasing redundant parameters.” Internal tests on Llama 4 Scout, Llama 3.3 70B, Llama 3.1 8B and Mistral Small 3.1 show 95 percent size reductions with only a two- to three-point accuracy dip. Inference speeds climb four- to twelve-fold, energy use falls by 84 percent and training time is nearly halved.
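CompactifAI’s internals are proprietary, but the sequential-SVD idea described above can be sketched as a standard tensor-train decomposition in NumPy. The function names, tensor shapes and rank cut-off below are illustrative assumptions, not Multiverse’s code:

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """Tensor-train decomposition via sequential truncated SVDs.

    Each SVD peels one index off the tensor; discarding singular values
    beyond max_rank is the step that shrinks the parameter count.
    Returns a list of 3-D cores of shape (left_rank, dim, right_rank).
    """
    dims = tensor.shape
    cores, rank, rest = [], 1, tensor
    for k in range(len(dims) - 1):
        rest = rest.reshape(rank * dims[k], -1)
        u, s, vt = np.linalg.svd(rest, full_matrices=False)
        r = min(max_rank, s.size)
        cores.append(u[:, :r].reshape(rank, dims[k], r))
        rest = s[:r, None] * vt[:r]  # carry the remainder forward
        rank = r
    cores.append(rest.reshape(rank, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the cores back into a dense tensor, to check fidelity."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

# A 64x64 weight matrix viewed as a 4-index tensor, then decomposed.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
cores = tt_decompose(W.reshape(8, 8, 8, 8), max_rank=4)
params = sum(c.size for c in cores)
print(params, W.size)  # 320 parameters instead of 4096
```

For a random matrix the truncated reconstruction is a coarse approximation; the method pays off on real weight matrices, whose correlation structure concentrates in a few singular values.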
Commercial options and early adopters
CompactifAI is offered three ways: cloud deployment over AWS, on-premises licensing for data-sensitive enterprises and edge builds for resource-constrained hardware. More than 100 organisations — including Iberdrola, Bosch, the Bank of Canada, BBVA and the European Tax Agency — have put the software into production across ten verticals. CTO Roman Orus, whose tenure at the Donostia International Physics Centre underpins the algorithm, says incoming capital will fund sector-specific model libraries and customer success teams. The company already holds 160 patents spanning quantum and AI techniques and was named a Gartner Cool Vendor in financial services software.
Competitive landscape
Peers such as Classiq, SandboxAQ, QpiAI, Quantum Mads, Quantum Motion, Terra Quantum, 1QBit, Zapata AI and CogniFrame also marry quantum concepts with machine-learning workloads, yet few target language-model compression at this scale. Traditional pipelines typically achieve 50–60 percent compression with accuracy losses exceeding 20 percent; Multiverse Computing reports 95 percent compression with minimal accuracy drift, positioning the firm as a distinct player in the $106 billion AI inference market, projected to reach $255 billion by 2030.
Cost, sustainability and the road ahead
Training a full-size LLM often requires thousands of GPUs and up to $5 million in outlay. In contrast, a CompactifAI-compressed Llama 4 Scout Slim instance runs on AWS at roughly 10 cents per million tokens, versus 14 cents for the uncompressed model. The resulting energy savings speak to corporate carbon targets as much as bottom lines, making the technology attractive to firms facing tight margins or regulatory pressure.
With fresh capital, a growing patent moat and a clientele spanning energy, banking and public administration, Multiverse Computing aims to turn quantum-inspired compression into an everyday fixture of AI infrastructure—long before fault-tolerant quantum hardware becomes mainstream.