Peter Byrne (@peterjbyrne) notes that researchers have been trying to create a quantum computer for years. “The quest to harness the computational might of quantum weirdness continues to occupy hundreds of researchers around the world,” he writes. “Why hasn’t there been more to show for their work?” [“Quantum Computing Without Qubits,” Quanta Magazine, 22 January 2015] His headline gives a big hint as to why a reliable, universally recognized quantum computer has yet to be built — such machines rely on finicky qubits. Larry Hardesty explains, “Quantum computers are experimental devices that promise exponential speedups on some computational problems. Where a bit in a classical computer can represent either a 0 or a 1, a quantum bit, or qubit, can represent 0 and 1 simultaneously, letting quantum computers explore multiple problem solutions in parallel. But such ‘superpositions’ of quantum states are, in practice, difficult to maintain.” [“Qubits with staying power,” MIT News, 29 January 2015] Saying that qubits are “difficult to maintain” is a gross understatement. Most qubits have to be created at temperatures near absolute zero, and then they need to be shielded so that stray quantum interactions don’t disturb them. Hardesty explains how MIT researchers are trying to overcome the qubit conundrum.
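The “0 and 1 simultaneously” picture Hardesty describes can be made concrete with a little linear algebra. The sketch below (a Python/NumPy illustration of my own, not drawn from any of the cited work) represents a qubit as a two-component state vector; an equal superposition gives a 50/50 measurement outcome under the Born rule.

```python
import numpy as np

# Computational basis states |0> and |1> as state vectors.
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# An equal superposition (|0> + |1>) / sqrt(2): the qubit carries
# amplitude on both values at once until it is measured.
plus = (zero + one) / np.sqrt(2)

# Born rule: measurement probabilities are the squared amplitudes.
probs = np.abs(plus) ** 2
print(probs)  # [0.5 0.5]
```

Decoherence, the problem the MIT work attacks, is precisely the environment nudging those amplitudes off their intended values before the computation finishes.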
“MIT researchers and colleagues at Brookhaven National Laboratory and the synthetic-diamond company Element Six describe a new design that in experiments extended the superposition time of a promising type of qubit a hundredfold. In the long term, the work could lead toward practical quantum computers. But in the shorter term, it could enable the indefinite extension of quantum-secured communication links, a commercial application of quantum information technology that currently has a range of less than 100 miles. The researchers’ qubit design employs nitrogen atoms embedded in synthetic diamond. When nitrogen atoms happen to be situated next to gaps in the diamond’s crystal lattice, they produce ‘nitrogen vacancies,’ which enable researchers to optically control the magnetic orientation, or ‘spin,’ of individual electrons and atomic nuclei. Spin can be up, down, or a superposition of the two.”
Because qubits are so finicky, they are prone to errors. Fortunately, researchers are also working on that problem. For example, Tom Simonite (@tsimonite) reports that researchers from Google and the University of California, Santa Barbara (UCSB), have demonstrated “a crucial error-correction step needed to make quantum computing practical.” [“Quantum Computing Components More Reliable,” MIT Technology Review, 4 March 2015] He explains:
“The Google and UCSB researchers showed they could program groups of qubits — devices that represent information using fragile quantum physics — to detect certain kinds of error, and to prevent those errors from ruining a calculation. The new advance comes from researchers led by John Martinis, a professor at the University of California, Santa Barbara, who last year joined Google to set up a quantum computing research lab. … Much quantum computing research focuses on trying to get systems of qubits to detect and fix errors. Martinis’s group has demonstrated a piece of one of the most promising schemes for doing this, an approach known as surface codes. The researchers programmed a chip with nine qubits so that they monitored one another for errors called ‘bit flips,’ where environmental noise causes a 1 to flip to a 0 or vice versa. The qubits could not correct bit flips, but they could take action to ensure that they did not contaminate later steps of an operation.”
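The bit-flip detection Martinis’s group demonstrated is quantum-mechanical, but its classical ancestor, the three-bit repetition code, captures the core idea: redundancy plus parity checks reveal that a flip happened without reading out the protected value directly. The sketch below is that classical analogue only, not the surface-code scheme itself (which measures stabilizers across a 2-D lattice of qubits):

```python
import random

def encode(bit):
    """Three-bit repetition code: one logical bit -> three physical copies."""
    return [bit, bit, bit]

def apply_noise(code, p):
    """Flip each physical bit independently with probability p (bit-flip noise)."""
    return [b ^ 1 if random.random() < p else b for b in code]

def syndrome(code):
    """Neighbour parity checks: a nonzero syndrome flags that a flip occurred
    without revealing the encoded logical value itself."""
    return (code[0] ^ code[1], code[1] ^ code[2])

def majority(code):
    """Majority vote recovers the logical bit if at most one flip occurred."""
    return 1 if sum(code) >= 2 else 0

noisy = apply_noise(encode(1), p=0.1)
print(syndrome([1, 0, 1]))  # (1, 1): both checks flag the flipped middle bit
print(majority([1, 0, 1]))  # 1: the logical value survives a single flip
```

Note the parallel to the article: the syndrome only *detects* the flip, mirroring how the nine-qubit chip flagged errors so they would not contaminate later steps of an operation.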
Lisa Zyga reports that members of the Google/UCSB team aren’t the only researchers working to solve the quantum error problem. She writes, “As powerful as quantum computers could be, they are also delicate in a way, since they must be shielded from the ‘noise’ in the environment that causes detrimental errors. Making quantum computers that are noise-resistant, or fault-tolerant, is one of the biggest challenges facing their development.” [“Magic states offer surprisingly low error rates for quantum computing,” Phys.Org, 13 March 2015] As her headline suggests, she indicates that the leading approach to fault-tolerant quantum computing involves “magic states.” She explains:
“First proposed in 2005 by Sergey Bravyi and Alexei Kitaev, magic states are quantum states that contain an acceptably low level of error. In order to create magic states, physicists take noisy quantum states and use a process called distillation to derive a smaller number of improved, i.e., higher fidelity, states. This process is repeated as many times as necessary until the states reach the target fidelity. Although distillation works, it is a resource-intensive process that requires the majority of a quantum computer’s hardware. In some cases, up to 90% of a quantum computer’s qubits are needed to create magic states, before any real computing can be done. To address this problem, Ying Li, a physicist at the University of Oxford, has looked for a way to minimize the noise in raw magic states (before any distillation) in order to reduce the number of distillation steps required, and in turn reduce the resource cost. In his work, he made a surprising discovery: raw magic states can have a fidelity that is superior to that of the operations that created them. Li’s protocol takes advantage of the fact that qubits are more sensitive to noise when the code distance (which is related to the number of qubits in a row of a lattice) is small, and more stable when the code distance is larger. After an initial encoding step, the protocol enlarges the code distance in order to reduce the error rate.”
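To see why raw-state fidelity matters so much for resource cost, consider the leading-order behavior of the original Bravyi–Kitaev 15-to-1 protocol, in which each distillation round turns 15 states with error rate p into one state with error rate of roughly 35p³ (valid only when p is below the protocol’s threshold, about 0.17 in this simplified model). The toy calculation below is a hedged back-of-the-envelope sketch of that iteration, not anything from Li’s paper:

```python
def rounds_needed(p_raw, p_target):
    """Count 15-to-1 distillation rounds until the error rate falls below
    p_target, using the leading-order suppression p_out ~ 35 * p_in**3.
    Assumes p_raw is below the threshold (~0.17), so each round improves."""
    p, rounds = p_raw, 0
    while p > p_target:
        p = 35 * p ** 3
        rounds += 1
    return rounds

# Cleaner raw states (Li's goal) reach the target in fewer rounds, and each
# avoided round spares a large multiple of 15 input states' worth of qubits.
print(rounds_needed(0.05, 1e-10))  # 3 rounds for noisier raw states
print(rounds_needed(0.01, 1e-10))  # 2 rounds for higher-fidelity raw states
```

Because each round multiplies the input-state count by 15, shaving even one round off the pipeline is exactly the kind of saving that makes the “up to 90% of a quantum computer’s qubits” figure shrink.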
By now it should be obvious that developing a working quantum computer is no simple task. So why all the bother? Speed is probably the one word that best describes why researchers want to develop a quantum computer. As Hardesty notes, “[Quantum computers] promise exponential speedups on some computational problems.” A breakthrough in quantum switching was recently announced by a team of researchers from the University of Surrey, University College London, Heriot-Watt University in Edinburgh, the Radboud University in Nijmegen, and ETH Zürich/EPF Lausanne/Paul Scherrer Institute in Switzerland. “The team demonstrated a quantum on/off switching time of about a millionth of a millionth of a second, the fastest-ever quantum switch to be achieved with silicon and over a thousand times faster than previous attempts.” [“Superfast computers a step closer as a silicon chip’s quantum capabilities are improved,” Phys.Org, 20 March 2015] One of the researchers involved, Dr. Thornton Greenland of UCL, stated, “What is exciting is that we can see these exotic quantum phenomena in that most common material, silicon, using a measurement as simple as that of the electrical resistance. Thus the time is drawing nearer when we’ll be able to take advantage of a computer that does a tremendous number of calculations simultaneously, and that provides unprecedentedly secure computing, impenetrable to hackers.”
Professor Ivan Deutsch from the University of New Mexico is one of the people who have been trying to design and build a quantum computer. He explained to Peter Byrne why progress has been so slow. “The same quantum effects that make a quantum computer so blazingly fast also make it incredibly difficult to operate,” he stated. “From the beginning, it has not been clear whether the exponential speed up provided by a quantum computer would be cancelled out by the exponential complexity needed to protect the system from crashing.” The task is not hopeless, Deutsch says, but we’re not there yet. “We now know that a universal quantum computer will not require exponential complexity in design,” he states. “But it is still very hard.”