Hyper Intelligence Releases Quantum-Inspired Algorithm Designed to Reduce Cost of LLMs
Insider Brief
- Hyper Intelligence, Inc. announces the launch of a quantum-inspired algorithm that enhances how Large Language Models (LLMs) operate on AI infrastructure.
- The product, Hyper.Train, optimizes and removes redundancy within LLMs to reduce the total cost of ownership for AI compute by at least 30%.
- Inefficiencies in LLMs and Generative AI (GAI) can cost providers hundreds of millions to billions of dollars per year, according to the company.
PRESS RELEASE — Hyper Intelligence, Inc. announces the launch of its flagship software product, Hyper.Train, a proprietary, quantum-inspired algorithm that drastically enhances how Large Language Models (LLMs) operate on AI infrastructure. Hyper.Train optimizes and removes redundancy within LLMs using three patented methods, reducing the total cost of ownership for AI compute by at least 30%.
Hyper.Train can save large-scale organizations hundreds of millions, if not billions, of dollars and deliver market-transforming results through the use of its three patented methods:
- ‘Critical Node Detection’
- ‘Polymorphic Pruning’, which incorporates quantum-inspired optimization and proprietary solvers
- ‘Critical Neuron Selection’, which observes LLMs and neural networks to understand their state, removes and re-orders the neurons that impede performance, and then applies patent-pending polymorphic pruning to create optimal connections and pathways between neurons and nodes.
As a result, this process brings more order to highly chaotic neural networks and large language models, removes redundancies, and delivers more accurate training. This, in turn, drastically reduces the cost of inference.
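Hyper Intelligence has not published implementation details for these methods. As a rough, hypothetical illustration of the general idea of removing redundant parameters from a neural network (not Hyper.Train's patented approach), the sketch below applies standard magnitude-based pruning to a toy PyTorch model:

```python
# Hypothetical illustration only: generic magnitude-based pruning in PyTorch.
# Hyper.Train's patented methods are proprietary and not publicly documented;
# this sketch just shows the basic idea of zeroing out low-impact weights to
# cut redundancy in a network.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy feed-forward block standing in for one layer of a much larger model.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
)

# Remove the 30% of weights with the smallest magnitude in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

# Report the resulting overall sparsity.
total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"Sparsity after pruning: {zeros / total:.1%}")
```

Note that unstructured sparsity of this kind only translates into lower inference cost when the hardware or kernels can exploit it, which is presumably part of what proprietary approaches in this space aim to address.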
“We believe that LLMs and GAIs can solve pressing global challenges,” said Jason Turner, Chairman of Hyper Intelligence and CEO of Entanglement. “With the explosive growth of these capabilities in the market, Hyper.Train solves many of the challenges faced by the largest service providers and users, and is designed to accelerate growth, deliver newfound efficiencies, and potentially save billions of dollars.”
Inefficiencies in LLMs and Generative AI (GAI) cost the largest service providers and companies hundreds of millions to billions of dollars per year. Cloud service providers and data centers are struggling to bring on additional GPU capacity to meet that demand, especially as demand grows at a projected CAGR of 32.7% through 2030. Hyper.Train addresses this challenge by using quantum-inspired algorithms and proprietary techniques to reduce costs by 30% while improving compute capacity, allowing AI and LLM organizations to run larger, more efficient training models.
“Hyper Intelligence delivers advanced technological capabilities by using modern machine learning with quantum-inspired optimization to find and eliminate bloat from neural networks,” said John Lister, CTO of Hyper Intelligence and Entanglement, Inc.
Hyper.Train already supports GPUs and AI accelerators from companies such as NVIDIA, AMD, and Intel. The product is server- and chipset-agnostic, with plans to roll out to a diverse set of customers. Hyper.Train is currently in beta for strategic customers and will be available in early Q1’24.
Hyper Intelligence is the second spin-out of Entanglement, Inc. The launch follows that of Entanglement’s first spin-out, seQure, an enterprise zero-trust cybersecurity and data observability company that protects Fortune 100 companies, government, and critical infrastructure customers.