A Primer on Google's Willow for the Quantum-Curious
What Google achieved, and what needs to happen next.
A few days ago, Google announced Willow, their next-generation 105-quantum bit (qubit) superconducting quantum processor. Among Willow's many impressive engineering achievements, two central claims have made mainstream headlines:
Willow completed a computational task in under five minutes that would take one of today's fastest classical supercomputers an estimated 10^25 years.
Willow demonstrated, for the first time, an “exponential reduction” in the error rate of error-corrected qubits as those error-corrected qubits scale in size.
Since the announcement, many friends and family have reached out asking about how they should interpret these results. So, here’s a primer on what Google achieved and where this leaves us in our journey towards practical quantum computing.
TLDR: The task that Google achieved a speedup on is called the Random Circuit Sampling (RCS) problem. RCS was specifically designed as a benchmark for quantum computers and has no further practical use. The more significant breakthrough is in error correction: Willow is the first quantum computer to convincingly demonstrate that doing error correction with more qubits actually reduces errors exponentially, instead of making things worse. This is a fundamental milestone for quantum computing. However, we still need far better error correction, and thousands of error-corrected qubits, to reach error rates low enough and system sizes large enough for practical applications. We have yet to demonstrate quantum advantage on commercially relevant problems.
Now, let’s get one thing out of the way.
Google's claim about Willow solving problems vastly faster than classical computers is true but can be misleading. Google claims that their quantum computer can solve a task in less than 5 minutes that would take a classical computer 10 septillion years. While this is technically accurate, the task they use as a benchmark is called Random Circuit Sampling (RCS), which is a contrived problem designed specifically for quantum computers to solve. In RCS, the quantum computer executes a random sequence of quantum operations and measures the output distribution - essentially demonstrating it can manipulate quantum states in ways that classical computers can't efficiently simulate. While this validates that the quantum hardware is working as intended and demonstrates the advancements in Google's hardware, RCS itself has no practical applications; it's purely a demonstration of quantum capabilities (with some exceptions that have their own caveats).
The vastly more significant breakthrough is Willow's achievement in quantum error correction. The most fundamental obstacle to practical quantum computing is that qubits are extremely sensitive to environmental noise, making them prone to errors. These errors are so prevalent that individual physical qubits will likely never be reliable enough for practical quantum computing applications. The long-term solution to this problem is Quantum Error Correction (QEC). Instead of relying on single, error-prone qubits, we can combine multiple physical qubits to create one more stable “logical” qubit. Then, we can do computation with the logical qubits instead of the physical qubits. From a mathematical standpoint, the error suppression and scaling properties of basic QEC codes have long been understood. In theory, adding more physical qubits to a logical qubit causes an exponential reduction in the error rate of this logical qubit. However, for years, the critical question has been whether we can actually engineer the system such that this is the case.
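To make the scaling claim concrete, here is a toy sketch of the textbook suppression law for distance-d codes, in which the logical error rate falls off exponentially in the code distance once physical qubits are below threshold. The threshold and prefactor values below are illustrative placeholders, not Willow's actual numbers:

```python
# Toy model of QEC scaling: for a distance-d code with physical error
# rate p and threshold p_th, the logical error rate scales roughly as
#     p_L ~ A * (p / p_th) ** ((d + 1) / 2)
# (real codes have additional prefactors and effects; A and p_th here
# are made-up illustrative values).

def logical_error_rate(p, d, p_th=1e-2, A=0.1):
    """Rough logical error rate of a distance-d code (illustrative only)."""
    return A * (p / p_th) ** ((d + 1) // 2)

# Below threshold (p < p_th): growing d suppresses errors exponentially.
below = [logical_error_rate(p=1e-3, d=d) for d in (3, 5, 7)]
assert below[0] > below[1] > below[2]

# Above threshold (p > p_th): growing d makes the logical qubit *worse*.
above = [logical_error_rate(p=3e-2, d=d) for d in (3, 5, 7)]
assert above[0] < above[1] < above[2]
```

The two assertions capture the engineering stakes: the same "add more qubits" move helps or hurts depending entirely on which side of the threshold the physical error rate sits.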
There are two critical pieces of this engineering puzzle you should understand.
We need to lower the error rate of our physical qubits. If the physical qubits that make up our logical qubit are too error-prone, the resultant logical qubit's error rate will increase, rather than decrease, with scale. Each QEC code has a "threshold" of error for its physical qubits, below which the benefits of exponential error suppression at scale kick in. Therefore, to make error correction work, we first need physical qubits that are very reliable on their own. In practice, this comes down to precise device engineering, manufacturing, and control: everything from smarter qubit design, to better shielding from electromagnetic interference, to more precise control electronics for manipulating the qubits.
We need to make the classical system that's hooked up to the quantum computer much better and faster. All quantum computers operate in tandem with a classical computer for circuit compilation, pulse generation, readout, and more. The classical system is an integral part of QEC because we need to constantly monitor and correct errors in real-time. This involves a few components: reading the error signature from the quantum computer, deciding what error actually happened, and generating the necessary pulses to fix it. Crucially, all of this needs to happen faster than new errors can occur. It's a race against time: if your classical control system is too slow at detecting, decoding, and correcting errors, new errors will pile up before you can fix the old ones, making the whole error correction process futile.
If these two pieces of the puzzle are in place, then making a logical qubit out of more physical qubits should cause an exponential reduction in the error rate of the logical qubit. When this holds, we say that the system is "below threshold". Whether or not a system is below threshold depends primarily on the physical error rate and the quality of the classical control system, as discussed above.
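As an illustration of the classical decoding step described above, here is a minimal sketch using the simplest possible code, a three-qubit bit-flip repetition code. Real systems like Willow use far more sophisticated codes and decoders, and must run this loop fast enough to outpace new errors:

```python
# Minimal classical decoder for the 3-qubit bit-flip repetition code,
# illustrating the detect -> decode -> correct loop the control system runs.

def measure_syndrome(bits):
    """Parity checks: compare neighboring qubits without reading data directly."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def decode(syndrome):
    """Map each syndrome to the single most likely bit-flip location."""
    return {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome]

def correct(bits):
    """One round of error correction: detect, decode, then apply the fix."""
    flip = decode(measure_syndrome(bits))
    if flip is not None:
        bits[flip] ^= 1
    return bits

assert correct([0, 0, 0]) == [0, 0, 0]   # no error: syndrome is trivial
assert correct([1, 0, 0]) == [0, 0, 0]   # single flip on qubit 0 is fixed
assert correct([0, 1, 0]) == [0, 0, 0]   # ... on qubit 1
assert correct([0, 0, 1]) == [0, 0, 0]   # ... on qubit 2
```

Note that the syndrome measurements never read the data qubits directly, only parities between them; this is what lets QEC find errors without destroying the encoded quantum state.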
Google has demonstrated that Willow operates below this error correction threshold. They showed this by creating logical qubits out of increasingly large grids of physical qubits (3x3, then 5x5, then 7x7). Each time they increased the grid size, the logical error rate roughly halved, demonstrating exponential suppression with scale. This is the first clear demonstration that scaling up the system makes quantum computation more reliable - a fundamental requirement for practical quantum computers.
However, significant challenges remain before quantum computers deliver real commercial value.
Our logical qubit error rates need to be thousands of times lower. While proving we can operate below the error correction threshold is important, we need much lower logical error rates for practical applications. For context, classical computers have error rates on the order of 10^-18 per operation, while Willow's error-corrected logical error rate is about 10^-3. Practical quantum applications become possible at error rates between 10^-6 and 10^-9 (a generous estimate), so we're still at least 3 orders of magnitude off. Over time, we can move towards this regime by (1) improving the base reliability of physical qubits, (2) increasing the number of physical qubits per logical qubit, and (3) improving our classical control systems. Likely, we’ll need a combination of all three.
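A quick back-of-envelope sketch of that gap, assuming the roughly factor-of-2 error suppression per code-distance step that Willow demonstrated continues to hold at larger sizes (a big assumption):

```python
import math

# If each code-distance step halves the logical error rate, how many
# more steps until we reach a target error rate? (Illustrative only;
# assumes the factor-2 suppression per step holds at larger scales.)

def steps_needed(current, target, factor=2.0):
    """Number of distance increases needed to go from `current` to `target`."""
    return math.ceil(math.log(current / target, factor))

assert steps_needed(1e-3, 1e-6) == 10   # factor of 1000 ~ 2**10
assert steps_needed(1e-3, 1e-9) == 20   # factor of 10**6 ~ 2**20
```

Each such step also enlarges the physical qubit grid, so these ten-to-twenty halvings translate into substantially more physical qubits per logical qubit, which is why improving physical qubit quality matters just as much as adding more of them.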
We need hundreds to thousands of logical qubits for practical quantum applications. This means that devices likely have to scale to millions of physical qubits for practical use-cases. There are many engineering problems baked into scaling quantum devices. For example, while Willow’s average physical qubit errors were low, there were big differences between the best and worst qubits. This demonstrates the challenge of manufacturing uniform superconducting qubits (the kind of qubits Google is building), which will be a significant obstacle to scaling to the million-qubit, error-corrected regime. Other quantum hardware modalities like neutral atoms and trapped ions do not suffer from this particular non-uniformity but grapple with their own scaling problems.
We need to start benchmarking performance on problems we care about. Random Circuit Sampling is, for the most part, a specialized physics experiment. Our benchmarks soon need to include applications with commercial promise, such as drug discovery, optimization, or cryptography tasks. Quantum performance on these benchmarks will fall short of classical computers in the near term, which makes them far less exciting to report. Even so, we should choose our north star wisely to avoid feeding unfounded hype and building machines that never deliver real value.
Ultimately, Google’s results are a significant step in the right direction, and I’m excited to see what happens next. If you’d like to stay up-to-date on breakthroughs like this, follow me here on State of the Qunion! I’m a quantum computing PhD student at Yale deeply motivated to understand and accelerate practical quantum computing, and I hope to take you on that journey with me. Please also consider following me on X (@r0han.kumar) and LinkedIn (rohanskumar).
Note: For more information and technical perspective on these results, I recommend checking out Shtetl-Optimized by Scott Aaronson and John Preskill’s Q2B talk.