Chinese Researchers Use Quantum Computer to Fine-Tune Billion-Parameter AI Model

Insider Brief
- Chinese scientists used a domestically developed quantum computer to fine-tune a billion-parameter AI model, claiming this as a global first, Global Times reported.
- The experiment, conducted on the 72-qubit Origin Wukong system, showed improved model performance even after reducing parameters by over 75 percent, according to the Anhui Quantum Computing Engineering Research Center.
- The research remains a demonstration rather than a commercial deployment, and no peer-reviewed study has been released, Global Times noted.
- Image: Anhui Quantum Computing Engineering Research Center/Global Times
Chinese scientists used a quantum computer to fine-tune a billion-parameter artificial intelligence (AI) model, marking what they claim is a global first and a step toward combining quantum computing with advanced AI tasks, the state-affiliated Global Times reported.
The experiment was conducted on a superconducting quantum computer called Origin Wukong, which runs on a domestically built 72-qubit chip. According to Global Times, the fine-tuning task showed that quantum hardware could improve model training performance — even when the number of model parameters was drastically reduced — offering a potential solution to mounting demands for computing power.
Anhui Quantum Computing Engineering Research Center
Origin Wukong is operated by the Anhui Quantum Computing Engineering Research Center, which described the development on Monday. The center said a single batch of input data can trigger hundreds of quantum tasks simultaneously, enabling parallel processing at scale.
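The fan-out pattern the center describes, one input batch spawning many independent jobs that run concurrently, can be sketched in a few lines. This is a generic illustration of the dispatch pattern only; the function names and the toy computation are assumptions, not Origin Wukong's actual API.

```python
# Illustrative sketch: one input batch fans out into many independent
# tasks executed concurrently, with results collected in batch order.
# The "task" here is a toy stand-in for a single quantum job.
from concurrent.futures import ThreadPoolExecutor

def run_task(sample):
    # Stand-in for one quantum processing job on one input sample.
    return sample * sample

def process_batch(batch):
    # Fan the batch out into one task per sample; map preserves input order.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(run_task, batch))

results = process_batch(range(10))
```

The key property is that each task is independent, so the batch can be split across as many workers as the hardware exposes.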
In one trial, the team fine-tuned a billion-parameter model using Origin Wukong, the newspaper reported. On a psychological counseling dialogue dataset, the resulting model showed a 15 percent drop in training loss, a key metric of how well a model is learning. In a separate task focused on mathematical reasoning, the model’s accuracy jumped from 68 percent to 82 percent.
The Global Times, which is affiliated with China’s state-run People’s Daily, reported that researchers saw an 8.4 percent improvement in training effectiveness even when they slashed the model size by 76 percent. Scientists say this could lead to more efficient models that need less memory and energy to run.
Fine-tuning is a common step in AI development and is used to adapt pre-trained large language models (LLMs) to specific domains like legal advice or customer service. The process generally requires high computing power to adjust millions or billions of parameters, which are internal numerical values the model uses to make predictions. Doing this on quantum hardware could lighten the workload traditionally handled by classical computers.
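At its core, fine-tuning means taking parameters learned on a general task and nudging them with gradient descent on domain-specific data until the training loss drops. The toy model below, a two-parameter linear predictor in plain Python, is a deliberately minimal sketch of that loop; the data, learning rate, and step count are illustrative and have nothing to do with the quantum experiment itself.

```python
# Minimal fine-tuning sketch: start from "pretrained" parameters and
# adapt them to a small domain dataset via gradient descent on MSE loss.

def predict(w, b, x):
    return w * x + b

def mse(w, b, data):
    # Mean squared error over the dataset: the "training loss".
    return sum((predict(w, b, x) - y) ** 2 for x, y in data) / len(data)

def fine_tune(w, b, data, lr=0.01, steps=200):
    n = len(data)
    for _ in range(steps):
        # Analytic gradients of MSE with respect to the two parameters.
        grad_w = sum(2 * (predict(w, b, x) - y) * x for x, y in data) / n
        grad_b = sum(2 * (predict(w, b, x) - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# "Pretrained" parameters from some generic task...
w0, b0 = 1.0, 0.0
# ...adapted to a domain-specific dataset (here, y = 2x + 1).
domain_data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

loss_before = mse(w0, b0, domain_data)
w1, b1 = fine_tune(w0, b0, domain_data)
loss_after = mse(w1, b1, domain_data)
```

An LLM does the same thing with billions of parameters instead of two, which is why the compute cost of fine-tuning is dominated by those gradient updates.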
Huge Step Forward
“This is a huge step forward in the field of quantum computing,” said Chen Zhaojun, a deputy researcher at the Institute of Artificial Intelligence under the Hefei Comprehensive National Science Center, in comments to Global Times. He told the paper it was the first time a real quantum machine had been used to support fine-tuning of a large model, showing that the hardware is ready for such tasks.
Dou Menghan, vice president of Origin Quantum Computing Technology Co. and deputy director at the Anhui Quantum Computing Engineering Research Center, said the effort worked by pairing quantum tools with classical systems. “It’s like equipping a classical model with a ‘quantum engine,’ allowing the two to work in synergy,” Dou told Global Times.
The researchers used Origin Wukong to carry out the fine-tuning task as a series of quantum processing jobs. Each job handled part of the model training in parallel, a workload quantum systems are theoretically well suited to. Quantum computers rely on qubits — units of quantum information — to perform computations that are difficult or impossible for classical systems.
The 72-qubit chip inside Origin Wukong is among the most powerful developed in China and ranks among the more advanced systems internationally, according to Global Times. Wukong has been operating since January 6, 2024, and has completed more than 350,000 computing tasks, spanning applications in fluid dynamics, finance, and biomedicine. It has also received remote access requests from users in 139 countries and regions, according to data provided to Global Times.
The name Wukong was inspired by Sun Wukong, the Monkey King of Chinese mythology. Known for his ability to transform into 72 different forms, the figure was chosen to symbolize the machine’s flexibility and range across industries and problem types.
Industry analysts cited by Global Times said the experiment could offer a way out of the so-called “computing power anxiety” that surrounds the AI field. As LLMs grow larger, demands on hardware rise, driving up costs and energy usage. Quantum processors, even in their early stages, could eventually help offset those needs.
While China has made aggressive investments in quantum hardware and research, global competition remains intense. The U.S., Europe and Canada have also launched programs to integrate quantum with AI. But according to the researchers behind this latest trial, Origin Wukong is one of the first quantum platforms to handle a full-scale AI fine-tuning workload with tangible results.
It’s important to note that the experiment remains a demonstration rather than a commercial application. There is no indication in the article that the researchers have published, or plan to publish, a peer-reviewed study, which would allow a more technical assessment of the results.