CESGA proposes network architectures capable of rapidly scaling up quantum computing capabilities

By the time fully operational quantum computers exist, systems capable of connecting them to tackle tasks of unimaginable size and complexity will have been designed. CESGA aims to make a significant contribution to this challenge through its project to develop rapidly scalable network architectures for distributed quantum computing that enable collective operations to be run and coordinated among multiple nodes.

The 1960s saw the germ of one of the major milestones in computer technology: distributed computing. The ARPANET, popularly known as the Internet’s precursor, laid the foundations for multiple computers to connect and share resources.

From then on, the qualitative leap brought about by this model spurred the scientific and innovative efforts of the research community and the technology industry to exploit its full potential. Thanks to the distributed model, classical computing reached its highest levels of scalability, fault tolerance, resource efficiency, parallel processing capacity, flexibility, and cost-effectiveness. In short, it is the key to processing large volumes of data and performing complex calculations efficiently.


With this precedent, it is not surprising that one of the main lines of work in quantum technologies is the design of a distributed computing model. The promise is clear: the exponential computing power inherent to quantum computing could rise to heights that are still difficult to imagine. And so could its potential applications.

Within the framework of the Spanish Quantum Communications Complementary Plan (PCCC, Plan Complementario de Comunicaciones Cuánticas), the Galicia Supercomputing Center (CESGA) is working on the development of network architectures for distributed quantum computing (DQC) that allow collective operations to be executed and coordinated among multiple nodes or quantum processing units (QPUs).

“Designing how these networks must be configured and what functions are needed on the devices that enable communications between the different nodes is essential,” explains Iago Fernández Llovo, PhD in physics from the University of Santiago de Compostela and researcher at CESGA’s Department of Communications.

His project aims to contribute to the design of connections between quantum devices so that they work together to solve problems that cannot be tackled by a single computer, using a minimum of operations, time, and resources. These types of operations allow the quantum computers connected by the network to behave as a single system, “scalable, modular and reconfigurable depending on the application,” as Dr. Llovo specifies.

Entanglement swapping

“We are developing the means to perform collective operations between several QPUs when you have quantum network devices connecting them,” explains the CESGA researcher, who clarifies that, although the functions of these devices are yet to be established, they could extend beyond those of their classical counterparts. Connecting and coordinating quantum computers in different physical locations is more challenging than doing so with classical computers, because the networks that link them must themselves obey the laws of quantum mechanics. Of course, this poses unprecedented challenges at both the hardware and software level.

The most widely discussed model for enabling quantum communication between QPUs in a network is known as entanglement swapping, which uses the properties of quantum mechanics to generate pairs of entangled qubits, known as Bell pairs, between distant QPUs that are not directly connected to each other. In essence, it makes it possible to extend entanglement across a network between systems that have never interacted directly.
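To make the idea concrete, the following minimal statevector sketch in plain NumPy (an illustrative toy, not CESGA's software) prepares two Bell pairs, A-B and C-D, performs a Bell-state measurement on B and C, and checks that A and D end up sharing a Bell pair despite never having interacted.

```python
# Illustrative entanglement-swapping sketch (plain NumPy, not CESGA's code).
# Qubit order: A, B, C, D. A-B and C-D each start as a Bell pair |Phi+>.
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                    # control = first qubit, target = second
P0 = np.array([[1, 0], [0, 0]])                    # projector |0><0|

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)         # |Phi+> = (|00> + |11>)/sqrt(2)
state = np.kron(bell, bell)                        # |Phi+> on A-B tensored with |Phi+> on C-D

# Bell-state measurement on B and C: CNOT(B -> C), Hadamard on B, then measure both.
state = np.kron(np.kron(I2, CNOT), I2) @ state              # CNOT on the middle qubits B, C
state = np.kron(np.kron(I2, H), np.kron(I2, I2)) @ state    # Hadamard on B

# Project onto the outcome b = 0, c = 0 (each of the four outcomes has probability 1/4;
# the other outcomes yield a Bell pair up to a known Pauli correction on D).
state = np.kron(np.kron(I2, P0), np.kron(P0, I2)) @ state
state /= np.linalg.norm(state)

# Keep the amplitudes of A and D (with B = C = 0) and compare with the ideal |Phi+>.
psi_ad = state.reshape(2, 2, 2, 2)[:, 0, 0, :].reshape(4)
fidelity = abs(bell.conj() @ psi_ad) ** 2
print(f"Fidelity of the swapped A-D pair with |Phi+>: {fidelity:.3f}")   # -> 1.000
```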

As a prototypical case, the CESGA researchers propose a network in which multiple QPUs are connected exclusively by quantum channels to quantum networking devices, which in turn may be connected to others in a higher layer, forming a kind of tree. This structure resembles the tried and tested network architectures of current data centers, which have proven optimal thanks to their high bandwidth, low latency, reliability and scalability.

The technology used in quantum links research is generally photonic and relies on a type of measurement known as a Bell state measurement to generate entanglement. “These operations are not perfect and high-purity quantum states are delicate and difficult to generate, so we need to simulate how these measurements take place and the physics to calculate fidelity losses,” Dr. Llovo clarifies. Indeed, the generation of high-fidelity Bell pairs is a huge experimental challenge on which research groups from all over the world are currently working. Their results will provide the different pieces that make up this great puzzle.
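As a toy illustration of what such fidelity losses look like (a generic textbook noise model, not the physics CESGA simulates), one can mix an ideal Bell pair with white noise and compute how much of the ideal state survives:

```python
# Fidelity of a Bell pair after depolarizing noise (a Werner state) -- illustrative only.
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)          # ideal |Phi+>
rho_ideal = np.outer(bell, bell)

for p in (1.0, 0.95, 0.75, 0.5):                    # fraction of the pair left untouched by noise
    rho = p * rho_ideal + (1 - p) * np.eye(4) / 4   # remainder replaced by the maximally mixed state
    fidelity = np.real(bell.conj() @ rho @ bell)    # F = <Phi+| rho |Phi+> = p + (1 - p)/4
    print(f"noise-free fraction p = {p:.2f} -> fidelity F = {fidelity:.3f}")
```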

The piece being designed at CESGA corresponds to the algorithmic part: working out how remote operations can be implemented once high-fidelity Bell pairs are available. This is where the team struggles with the constraints of current conventional computers, which cannot keep pace with the scientific advances in the field. “In this case, a major constraint is that our work in DQC is based on simulations on conventional computers. These simulations are very costly in computational terms: a single additional qubit doubles the amount of RAM needed to simulate a quantum system, so it soon becomes impossible to simulate larger quantum systems,” says Dr. Llovo. And this is not an issue with the Galician Finisterrae alone: even with the entire memory of the world’s most powerful supercomputer, it would be impossible to simulate a quantum system of only 60 qubits.
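The scaling Dr. Llovo describes can be checked with back-of-the-envelope arithmetic: a full statevector of n qubits holds 2^n complex amplitudes, each occupying 16 bytes in double precision, so the memory requirement doubles with every added qubit.

```python
# Memory needed to store a full n-qubit statevector (16 bytes per complex128 amplitude).
for n in (30, 40, 50, 60):
    gib = (2 ** n) * 16 / 2 ** 30
    print(f"{n} qubits -> {gib:>18,.0f} GiB of RAM")
# 30 qubits ->             16 GiB   (feasible on a workstation)
# 40 qubits ->         16,384 GiB
# 50 qubits ->     16,777,216 GiB
# 60 qubits -> 17,179,869,184 GiB   (about 16 exbibytes, beyond any existing supercomputer)
```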

The team at CESGA has designed a network architecture that would enable quantum computing capabilities to scale faster. “We have developed a technique to perform collective operations using a router, which essentially acts as a sort of glue between different QPUs,” says Dr. Llovo. Their proposal enhances the connectivity between units within the network using fewer quantum connections, while consuming fewer of the precious Bell pairs than other existing proposals.
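The generic scaling argument behind a router-based layout can be sketched with a hypothetical link count. The port count, helper functions and comparison below are illustrative assumptions for this article, not figures from CESGA's design: a direct quantum link between every pair of QPUs grows quadratically, whereas a two-layer router tree grows roughly linearly.

```python
# Hypothetical comparison of physical quantum links: all-to-all mesh vs. a two-layer router tree.
from math import ceil

def mesh_links(n_qpus: int) -> int:
    """One direct quantum channel between every pair of QPUs."""
    return n_qpus * (n_qpus - 1) // 2

def tree_links(n_qpus: int, ports_per_router: int = 8) -> int:
    """QPUs hang from leaf routers, and each leaf router links to a single root router."""
    leaf_routers = ceil(n_qpus / ports_per_router)
    return n_qpus + leaf_routers            # QPU-to-leaf links plus leaf-to-root links

for n in (8, 16, 64, 256):
    print(f"{n:>3} QPUs: mesh = {mesh_links(n):>6} links, router tree = {tree_links(n):>3} links")
```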

Much more than the sum of the parts

The relevance of this project’s results follows from the importance of scalability in quantum computing, since the power of a quantum computer increases exponentially with each additional qubit: a single extra qubit doubles the amount of information the system can store and process simultaneously. When a distributed quantum computer is finally realized, it will have access to all the qubits of the individual nodes, so its computing power will be far greater than that of the sum of the parts.
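A short calculation (the node counts below are made up for illustration) shows why this is more than the sum of the parts: the joint state space of linked QPUs grows exponentially in the total number of qubits, not linearly in the number of nodes.

```python
# "More than the sum of the parts": state-space size of isolated vs. linked QPUs.
nodes, qubits_per_node = 4, 10
sum_of_parts = nodes * 2 ** qubits_per_node      # four isolated 10-qubit QPUs
joined = 2 ** (nodes * qubits_per_node)          # one distributed 40-qubit machine
print(f"sum of parts: {sum_of_parts:,} amplitudes")   # 4,096
print(f"joined:       {joined:,} amplitudes")         # 1,099,511,627,776
```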

The advent of large-scale quantum computing systems will enable the resolution of problems of enormous size and complexity, which are far beyond the capabilities of even the most advanced supercomputers combined. However, the researchers at CESGA caution that the practical applications of these results will not become apparent until the medium to long term. “Our proposal pushes current knowledge forward and allows us to envision what a quantum data center must look like once Bell pair generation and error correction technologies have matured,” explains Fernández Llovo. While these technologies are not yet available, their development is only a matter of time, with numerous research groups across the globe working on them.

The quantum future

“The average person may never see or use a quantum computer in person. These devices are not coming to replace conventional computers, but to accelerate certain computational tasks, which on today’s most powerful supercomputers would take longer than the entire age of the universe, so that they can be completed on a human time scale, ranging from seconds to months.” This is how Iago Fernández Llovo envisions the future role of quantum computing, though he acknowledges that its relevance will be huge in areas that will directly impact everyone’s lives. He particularly emphasizes that the achievement of large-scale quantum computers will lead to ground-breaking discoveries in materials science and the development of new drugs, while also highlighting that promising results are emerging in other fields, such as financial market simulation or the coordination and forecasting of electrical power systems.

The race to achieve this is both long and complex. While the major players in technological development are making significant strides to bring these devices to market, a single quantum computer that is either immune to errors or capable of correcting them has yet to be built. Simultaneously, experimental work is underway to connect different quantum computers. “Until then, we won’t see fully distributed quantum computers, but we are working on building the foundational elements, the science that precedes development. This is why it’s crucial for basic research to receive the time and funding it needs to generate the impact and progress it deserves,” concludes Dr. Llovo.