Researchers at the University of California, Riverside have demonstrated that quantum computers can be scaled up by connecting smaller chips together, even if the connections between them are not perfect. The study, published as a letter in Physical Review A, suggests that “scalable” quantum architectures—systems made from many small chips working as one—can still function reliably with imperfect links.
“Our work isn’t about inventing a new chip,” said Mohamed A. Shalby, the first author of the paper and a doctoral candidate in the UCR Department of Physics and Astronomy. “It’s about showing that the chips we already have can be connected to create something much larger and still work. That’s a foundational shift in how we build quantum systems.”
Scaling allows quantum computers to tackle larger problems, while fault tolerance ensures they can detect and correct errors automatically to produce reliable results. Connecting multiple smaller chips has been challenging because the links between separate units are noisier than operations within a single chip, especially when the units are housed in different cryogenic refrigerators.
“In practice, connecting multiple smaller chips has been difficult,” Shalby said. “Connections between separate chips — especially those housed in separate cryogenic refrigerators — are much noisier than operations within a single chip. This increased noise can overwhelm the system and prevent error correction from working properly.”
The team found that even when links between chips were up to ten times noisier than the chips themselves, error detection and correction still worked.
“This means we don’t have to wait for perfect hardware to scale quantum computers,” Shalby said. “We now know that as long as each chip is operating with high fidelity, the links between them can be ‘good enough’ — not perfect — and we can still build a fault-tolerant system.”
Shalby explained that building reliable quantum computers requires clusters of many physical qubits forming logical qubits for redundancy and error correction. The most widely used method for this is called the surface code, which enables high-fidelity logical qubits by managing errors within its own architecture.
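The surface code itself is intricate, but the core idea of redundancy can be illustrated with a much simpler classical cousin, the repetition code. The sketch below is purely illustrative and is not the paper's method: it encodes one logical bit as several physical copies, flips each copy independently with some error probability, and recovers the logical bit by majority vote, showing how the logical error rate falls well below the physical one.

```python
import random

def encode(bit, n=5):
    """Encode one logical bit as n identical physical copies (repetition code)."""
    return [bit] * n

def apply_noise(qubits, flip_prob):
    """Flip each physical copy independently with probability flip_prob."""
    return [b ^ 1 if random.random() < flip_prob else b for b in qubits]

def decode(qubits):
    """Majority vote: correct as long as fewer than half the copies flipped."""
    return 1 if sum(qubits) > len(qubits) // 2 else 0

random.seed(0)
trials = 10_000
logical_errors = sum(
    decode(apply_noise(encode(0), flip_prob=0.05)) != 0
    for _ in range(trials)
)
# With a 5% physical error rate, the logical error rate comes out far lower,
# because three or more of the five copies must flip for decoding to fail.
print(logical_errors / trials)
```

Quantum codes such as the surface code must protect against a richer set of errors without directly measuring the data qubits, but the same principle applies: enough redundancy among high-fidelity physical qubits yields a logical qubit that is more reliable than any of its parts.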
The researchers ran thousands of simulations using six modular designs under various conditions inspired by Google’s existing quantum infrastructure.
“Until now, most quantum milestones focused on increasing the sheer number of qubits,” Shalby said. “But without fault tolerance, those qubits aren’t useful. Our work shows we can build systems that are both scalable and reliable — now, not years from now.”
The research was inspired by earlier work at MIT and supported by the National Science Foundation. Simulations were performed using tools developed by the Google Quantum AI team.
Shalby collaborated with Leonid P. Pryadko and Renyu Wang at UCR and Denis Sedov at the University of Stuttgart in Germany.
The paper is titled “Optimized noise-resilient surface code teleportation interfaces.”