Megaquop with John Preskill and Rob Schoelkopf
E47

Well, thank you very much for joining us, John and Rob.

So, John, I thought we'd start by, you just posted a paper to the arXiv, which is a write-up of yours.

It's adapted from the keynote that you presented at Q2B last December, which proposed essentially a shift in the phase we're in in the development of quantum technologies: from NISQ, which you famously coined at Q2B a number of years ago, to something which may not catch on as well as NISQ did, but I understand the pressure to try to come up with another catchy term. Megaquop, I think, is what you call it. Is that how you pronounce it?

Yeah, I say Megaquop.

Yeah, Megaquop, yeah.

I mean, what could catch on more than NISQ did?

No, I know.

It's a really hard act to follow, for sure.

So what do you mean by megaquop when you use that term?

Well, thanks for having us, Sebastian, and of course it's an honor to be here with Rob, who has made so many amazing contributions to quantum computing over 20 years.

Well, let's remember the context.

Quantum error correction is, to a theorist, a fascinating concept.

It's also going to be very important for technology.

On the theory side, we've been talking about it for 30 years, and more recently it's become a subject that's really moving ahead on the experimental side.

That was another area in which Rob made pioneering contributions.

And look at how far we have to go, though, right?

Because we want quantum computers to be able to run applications which will be broadly useful and will benefit society, and by some estimates that means we're going to have to run quantum computations with a trillion operations.

And that means we need a correspondingly small error rate per operation.

Well, now under the best conditions, we have an error rate of about one in a thousand when we do a two-qubit operation.

So we've got to cross a chasm of a factor of a billion, and that's not going to be easy.
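
Just to spell out the arithmetic behind that chasm, using only the figures quoted above (about a trillion operations for a useful computation, and roughly one error per thousand two-qubit gates today):

```latex
p_{\text{target}} \;\sim\; \frac{1}{10^{12}} = 10^{-12},
\qquad
p_{\text{today}} \;\sim\; 10^{-3},
\qquad
\frac{p_{\text{today}}}{p_{\text{target}}} \;\sim\; 10^{9}.
```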

And we're not going to do it just by making the devices better.

We're going to use quantum error correction, and that's going to have an overhead cost, because we're going to need sufficient redundancy to protect against the noise.

You might estimate that we're going to have to reach a device with millions of physical qubits.

So we need intermediate goals along the path from NISQ, which is what we can do now, NISQ meaning that we have devices which you can argue we can't simulate accurately by brute force using our best conventional computers, but they're limited because they're not error corrected.

So along the path to the fault-tolerant world we hope to reach, from where we are now, what are we going to do?

So I proposed a goal of a computation with a number of operations of about a million, because that's probably not something we're going to do anytime soon just by making better devices.

It's going to require that we use error correction, not at such a high overhead cost as we're going to need to do in the future, so something more reachable.

Conceivably, something we can do in a few years, optimistically.

Maybe it'll take longer.

And then you can ask what are we going to do with these megaquop machines, and that's something we've got to figure out.

Right, right.

Yeah, so then in that paper you spent a fair amount of time describing new approaches to qubit modalities, and particularly superconducting qubits.

And you mentioned dual rail, which Rob, Quantum Circuits is pursuing as a design.

The way that John just described this inflection point, this phase shift in where we are with the technology, does that sort of align with what you were thinking behind Quantum Circuits?

Yeah, I think it's a really exciting time for those of us who've been worrying about error correction.

I've been worrying about it as long as John, but it's been a main focus of our research for 10 or 15 years now.

But it's a really interesting phase where finally we're seeing kind of really significant overlaps and interplay between the theory of quantum error correction and the practice.

And we're focusing now not just on sort of looking to see whether things will eventually extrapolate, but on what can we do in the near term?

What are all the shortcuts?

What are all the efficiencies that we can find?

And so, like the very nice results from AWS on the Ocelot chip, cat qubits are one of these approaches to try and make sure your qubits have different types of noise, and that makes it easier to then error correct.

The dual rail that we've been working on at Quantum Circuits is another one of these approaches.

And I think it's going to be really kind of fascinating to see where we can go.

I also think this idea of how do we extend the duration aloft or something like that to do more operations is a pretty interesting one, because I don't think we're going to just keep scaling until we have fully fault-tolerant machines.

What we have to do in the meantime is figure out, indeed, what does, I sometimes call it partial error correction, look like.

So you do a little bit of correction, or maybe you correct the dominant errors, and then you just detect and mitigate or post-select on the subdominant errors and try and sort of work your way up a ladder or something instead of just trying to jump to the end.
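
As a toy illustration of that detect-and-post-select idea, here is a minimal sketch in Python; the error rates and function names are invented for illustration and don't correspond to any real device:

```python
import random

def run_shot(p_detected=0.02, p_undetected=0.002):
    """Toy model of one circuit shot with two kinds of faults:
    ones we can flag (erasure-like, dominant) and ones we cannot (subdominant)."""
    flagged = random.random() < p_detected    # detected error, e.g. photon loss
    silent = random.random() < p_undetected   # undetected residual error
    return flagged, silent

def post_select(shots=100_000):
    """Discard flagged shots; report what fraction survives and how often
    the survivors are still silently corrupted."""
    kept = corrupted = 0
    for _ in range(shots):
        flagged, silent = run_shot()
        if flagged:
            continue                 # post-selection: throw the shot away
        kept += 1
        corrupted += silent
    return kept / shots, corrupted / max(kept, 1)

if __name__ == "__main__":
    keep_fraction, residual_error = post_select()
    print(keep_fraction, residual_error)
```

With these made-up numbers, roughly 98% of shots survive, and the error rate on the survivors is set by the undetected channel alone: detection trades sampling overhead for accuracy on the shots you keep, which is the "work your way up a ladder" idea.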

That's the approach with dual rail, right?

It makes erasure errors easily detectable, and therefore it's not so much that you're...

I mean, it lowers the bar for error correction implementation because you're flagging errors sort of in a mechanical way in the hardware to some degree.

Is that right?

That's right.

That's right.

I think the idea of combining erasures with some of the error correction codes goes back quite a ways, including in photonic quantum computing.

But in the context of solid state quantum computing and superconductors, it's a relatively new idea that's being pursued at AWS and at Quantum Circuits.

And John, you're also an Amazon Scholar, so you've been involved in the Ocelot development.

Is that...

I mean, the way that Rob's describing it sounds like sort of a...

You mentioned error correction has been a topic for theorists.

It feels like there's this moment now where the hardware is progressing enough where you can actually confer with your experimental colleagues and come up with almost a hybrid between an error correction code and a hardware design that's more efficient, that lowers those overheads.

Is that accurate?

Yeah.

To be clear, I do have an affiliation with the AWS Center, which is located at Caltech.

I'm not here speaking on behalf of AWS.

This is social media.

This is just John.

Yeah, exactly.

I'm just spitballing with my friends.

Yeah.

Well, there are different strategies that we can contemplate if we're going to pursue this big challenge of scaling up quantum computing, getting much lower logical error rates.

And one is what Google is doing, for example.

Google, IBM, a lot of labs around the world are building quantum computers based on transmons, which Rob's group invented almost 20 years ago.

And they're getting better, and they'll probably continue to get better with better fabrication and materials and so on.

And they're getting good enough now to be interesting, so you can try to scale up a device where the underlying qubits are transmons, and that's what Google has recently demonstrated.

It's kind of a milestone, I think, that they were able to show that as you increase the size of a quantum error correcting code, you get better performance, indicating that if you continue to scale, it'll get better still.

But we have so far to go that you can ask yourself: if we're going to have logical qubits with very low logical error rates, maybe it's not a bad idea to make the qubits themselves a little more complicated, if the performance is better, so that ultimately the cost of building a machine that's truly scalable will be reduced.

And so what you just mentioned are a couple of approaches to doing that.

One is the dual rail idea, and it's based on the concept that if you know in a quantum circuit where and when the errors occur at what qubits and what steps, that makes error correction easier and more efficient, and therefore reduces the cost.

A simple illustration of the concept is suppose you just have a classical bit that you want to protect, and you can use a repetition code, and encode a zero as a bunch of zeros and a one as a bunch of ones, and then some errors occur, and you can decode the result by doing a majority vote.

If it's mostly zeros, it was probably a zero.

If it's mostly ones, probably a one.

But suppose you knew which bits had the errors.

Then you could just look at one that doesn't have an error, and that would be much easier to decode.
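
Here's a minimal sketch of that contrast in Python, just to make the classical example concrete; the parameters and function names are made up for illustration:

```python
import random

def transmit(bit, n=5, p_flip=0.2):
    """Send n noisy copies of a classical bit, flipping each with probability
    p_flip. Also return flags saying which copies were corrupted, mimicking
    errors that announce their locations (erasure-style)."""
    copies, flags = [], []
    for _ in range(n):
        flipped = random.random() < p_flip
        copies.append(bit ^ int(flipped))
        flags.append(flipped)
    return copies, flags

def decode_majority(copies):
    """Standard decoding: majority vote over all copies."""
    return int(sum(copies) > len(copies) / 2)

def decode_located(copies, flags):
    """Located-error decoding: read any copy that is not flagged."""
    for value, bad in zip(copies, flags):
        if not bad:
            return value
    return decode_majority(copies)   # every copy flagged: fall back

# Majority voting fails once more than half the copies flip; with located
# errors, a single clean copy is enough, so decoding is easier and cheaper.
```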

So for quantum, it's a little more complicated because we can't decode an error correcting code just by looking at single qubits because we have a highly entangled state.

But the principle still applies, that if you know where the errors are, you can make error correction more efficient.

So Rob is pursuing that idea very successfully so far, though of course there's still a long way to go.

Now the cat qubits, that's based on another idea, which is that you can take advantage of the structure of the noise in some cases, and in particular if the bit flips that occur are very rare.

In quantum, we have to worry about more than just bit flips.

We have to worry about the errors in the complementary basis, what we call phase errors.

But if you can engineer the device so that the bit flips are very highly suppressed, you can concentrate the quantum error correcting power on the phase errors, and that also can make things more efficient.

And that's what AWS has been pursuing, as well as a few others, like Alice & Bob, following the same idea.

I think Nord Quantique as well, I think they're doing cat qubits.

I thought they were doing GKP.

Oh, GKP, you're right.

Which is also an interesting idea.

That's another idea I like.

Yeah, well it's got your name embedded in it, so I hope you like it.

So Rob, I mean, that description that John just shared of being able to flag or erase your errors, are you finding that that's actually making your attempts at error correction easier, more efficient, or getting better results?

Well, I think it's a little early to tell.

We haven't built a large code with these dual rails yet.

We just dropped a preprint that describes our first two qubit entangling gates with the dual rails, and it looks very promising.

So we think we know enough about how things like the surface code work to be able to take the measured numbers in the lab and make predictions.

And I think one of the big things that we're looking toward is something where, again, as you're concatenating these codes, as you're extending and increasing the amount of redundancy in hopes of further suppressing the logical errors, I don't think we want to gain only a factor of two, which is what Google Willow was able to show, which is indeed an important step.

But we want to see something where you're getting orders of magnitude improvement or so.

And so we're pretty excited because we really think that dual rails are a path to that, and the results we're seeing on this new two qubit gate are very encouraging.
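
For context on that factor of two: the textbook heuristic for surface-code scaling (standard notation, not something quoted in the conversation) says the logical error rate falls geometrically with code distance d, by a factor of Λ for each step d → d+2:

```latex
p_L(d) \;\approx\; A\left(\frac{p}{p_{\text{th}}}\right)^{(d+1)/2}
\;=\; A\,\Lambda^{-(d+1)/2},
\qquad
\Lambda \;\approx\; \frac{p_{\text{th}}}{p},
\qquad
\frac{p_L(d+2)}{p_L(d)} \;=\; \frac{1}{\Lambda}.
```

A Λ of about 2, roughly what Willow reported, halves the logical error with each distance step; getting orders of magnitude per step would require a much larger Λ or a more efficient code.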

It's also a little bit combining some of these ideas.

So in the cavity dual rail that we build, you have erasures as the dominant error, and then you have phase flips that occur five or ten times more rarely than those, and then bit flips that essentially never occur, or we're having trouble measuring them.

So that's kind of combining this efficient idea of erasures with the benefits of biased qubits.

Biased errors, yeah.

And why is the bit flip that suppressed?

And same with the cat qubit.

What is it that makes it that resistant to a bit flip?

So it's built into the design in a way.

So the dual rail, you have two places a microwave photon can be, but there are two physically different cavities that are at distinct frequencies.

And so any linear coupling that's hard to prevent at some level will not allow the photon to change from being a five gigahertz photon to being a six gigahertz photon.

You need to have a strong nonlinear interaction, which we can turn on when we want, and so that's basically why.

So you're encoding the bit value in something that's really, really difficult to actually change accidentally.

Yeah, so at least during the idling states, it's very well protected.

We don't have a full set of gates that preserves the bias in all cases, but in this two qubit gate, we're able to preserve the bias, which is pretty cool.

The small print always has risks in it.

That's another interesting thing I find in this phase.

We're going to go from physical qubits and direct application of gates to logical qubits and magic state distillation and these more complex kinds of operations.

And in some cases, I've seen some, I think LDPC codes protect Clifford gates, but not Toffoli operations, if I'm not mistaken.

So there seems to be all kinds of devil in the details.

To your point, John, the road is still very long for actually getting fully operational logical qubits that are capable of these long running computations.

Well, let me just say a little bit more about why Rob's idea is so good.

Okay.

Thanks, John.

Yeah.

So Rob has really good resonators.

He makes very good ones, but they're not perfect.

And every once in a while he loses a photon and that's the source of error that he has to worry about the most.

So as he said, he can encode a qubit by putting a single photon in either one of the two resonators.

And then, well, suppose the photon gets lost.

Now that's a different state with no photon at all than his encoded zero or his encoded one.

So he has a nice way of detecting that there are no photons anymore.

So he knows there was an error.

And the trick is to do that without messing things up when you haven't lost a photon.

So you don't spoil the coherence between photon in cavity one and photon in cavity two.
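
In symbols, the dual-rail picture John is describing looks roughly like this (standard photon-number notation for two cavities A and B, added here for concreteness):

```latex
|0_L\rangle = |1\rangle_A|0\rangle_B,
\qquad
|1_L\rangle = |0\rangle_A|1\rangle_B,
\qquad
\hat{a}_A|0_L\rangle \propto |0\rangle_A|0\rangle_B,
\quad
\hat{a}_B|1_L\rangle \propto |0\rangle_A|0\rangle_B.
```

Either way, photon loss lands in the same zero-photon state outside the codespace, so a joint "is there any photon left?" check flags the error without distinguishing the two logical states, and that's what preserves the coherence when no photon was lost.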

So in the cat case, you asked, why are the cats so biased?

That has to do with why they're called cats.

So now to encode a qubit, and Rob worked on this too, but to encode a qubit, picture a quantum oscillator. You can think of its state as like a little blob in a space where we have position along one axis and momentum along the other axis, and it has some extent in that space because of the uncertainty principle, because we can't make the position and the momentum both very certain.

And now you can encode a bit by moving that blob far to the left for a zero or moving it far to the right for a one.

And that gives pretty good protection against bit flips because it's hard to confuse the blob far to the left with the blob far to the right, but that's just classical protection.

The trick is that we also want to consider this delicate superposition of blob on the left and blob on the right.

And that's like the cat, which is both dead and alive and we don't want the environment to find out whether it's left plus right or left minus right.

And the way the environment can find out is if we lose a photon that reveals that information.

So that's the error to worry about.

And we need a code to protect against that error.

And currently that's just a repetition code.

The errors that could cause the bit flip, to confuse the left and the right blob, we're passively protecting against those, making them very rare.

But now the further apart we put the two blobs, the more likely these other errors are and we need a code to protect against those.

So that's how the scheme does what it does.
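
Written out, the cat picture is roughly this (standard cat-qubit notation, added here for concreteness):

```latex
|0_L\rangle \approx |{+\alpha}\rangle,
\qquad
|1_L\rangle \approx |{-\alpha}\rangle,
\qquad
p_{\text{bit flip}} \;\sim\; e^{-2|\alpha|^2},
\qquad
\hat{a}\bigl(|{+\alpha}\rangle \pm |{-\alpha}\rangle\bigr) \;\propto\; |{+\alpha}\rangle \mp |{-\alpha}\rangle.
```

Pushing the blobs further apart (larger |α|) suppresses bit flips exponentially, but photon loss, whose rate grows with |α|², flips the sign of the superposition, which is a phase error; that residual phase-error channel is what the repetition code on top has to correct.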

I see.

Interesting.

That's really interesting.

And John, I remember from very early on, in your talks around the NISQ era, that you were sort of pressed into service to make predictions about how long until we get to quantum advantage.

I don't want to do that to you.

But one of the themes you would weave into those talks is the scientific value of building these devices now.

I mean, NISQ devices are, from one perspective, really exotic quantum lab instruments where we're discovering things about quantum physics by trying to build these things and then also by carrying out simulations, maybe analog simulations or maybe limited gate-based simulations of natural phenomena along the lines of what Feynman sort of predicted for quantum computers.

Do you think there is some distinct sort of shift in what we'll be able to explore with megaquop-era machines as opposed to NISQ-era machines?

I hope so.

It's not a sure thing, even in the megaquop regime.

What I said when I first talked about NISQ is, of course, we have to go for the fault tolerance because that's where the applications that are really impactful are going to be.

But in the meantime, we want to explore what applications we can run, what science we can do with the devices that are not error-corrected.

And part of the science that we're doing is we're learning more about error correction.

I think for a theorist who's thought about error correction for decades, the experiments have become really interesting because when people try to do things, sometimes they work, sometimes they don't.

The experiments guide us to better ways of doing things.

And that part's been very exciting.

In the case of a megaquop machine, well, I'm thinking of, suppose we had about 100 qubits that are well enough protected that we could run a number of steps in a computation, which is maybe 10,000 or so, about a million operations altogether.

And that should enable us to do things we can't do with NISQ and which we can't do with our classical computers, that we can't do with the programmable quantum simulators that we have.
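
Putting numbers on that regime, using just the figures John mentions:

```latex
N_{\text{ops}} \;\approx\; 10^{2}\ \text{qubits} \,\times\, 10^{4}\ \text{steps} \;=\; 10^{6},
\qquad
p_L \;\lesssim\; \frac{1}{N_{\text{ops}}} \;=\; 10^{-6}.
```

That's a logical error rate roughly three orders of magnitude below today's best physical two-qubit gates, which is why it calls for error correction, but at a much lower overhead than the full fault-tolerant regime.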

What exactly are we going to do?

Well, you know, I'm a physicist, so of course what I would want to do is simulate the dynamics of quantum systems out of equilibrium, about which I think quantum computing is going to teach us a lot.

And we'll start to get some instructive scientific lessons in that megaquop regime.

Arguably, we're already getting such lessons now, even with NISQ technology.

So John, can I ask?

Sometimes I hear differing numbers about like, "Hey, if you only had 100 qubits and 10 to the minus 4 error rates, that would be quantum advantage." Do we know, like, where's the boundary, even if it's like only a, what would it be, a lower bound?

Like up to this, it's no longer quantum advantage.

You have to be at least this high to ride the quantum advantage ride.

Do you have a, is there, is this known and do you have an opinion on it?

I mean, you said you hope the megaquop regime is there, but like, is that it, do we have to be at least that high?

It's a hard question to answer in part because the classical methods for simulating these systems continue to get better.

I chose a million operations because I thought that was probably pretty safe as far as, you know, simulating quantum dynamics on a classical machine.

But it depends on exactly what you're going to simulate, how hard it is.

So you can really get into some interesting nuances.

And I think also having a million operations will be pretty safe not just for, you know, running things like random circuits, which are not useful except for benchmarking the machines, but also for some circuits that have structure, which will still be hard to simulate classically.

But it's a hard answer.

It's a hard question to answer precisely.

That's why I always feel for you when you're pressed into that role in conferences.

Okay, John, tell us when exactly it's going to happen and what are we going to do with it?

It's a tough job.

You mentioned, you know, the sort of the experience of the theorist now getting to participate in experiments on real hardware where, you know, the rubber is finally meeting the road, running surface codes and other error correction codes.

Has there been anything so far that's been particularly surprising in the results that you've seen or has it sort of been mostly validation of what the theorist has anticipated?

Well I think the theory community has been strongly influenced by some of the recent developments, in particular with Rydberg atoms and tweezers, because of the mobility of the tweezers, you're not limited to operations which are geometrically local in a two-dimensional layout as you typically are with today's superconducting circuits.

That opens up possibilities for different coding schemes that are more efficient than the surface code, for example, which in the long run we're certainly going to use.

It may be the surface code can take us to substantial quantum utility, but there will be more efficient schemes that we'll eventually pivot to.

To be able to actually play with those now in actual experiments and as theorists think about how to do operations using those capabilities I think has really been stimulating.

Interesting, because of the sort of plasticity of the layout of the qubits in a Rydberg atom array.

That makes a lot of sense.

Yeah, I can actually tell, this is making me think of an interesting story.

I'm not sure if you remember the first time we discussed quantum error correction, John.

So I was a grad student at Caltech, but at the time I was a radio astronomer.

Then I switched and started working on quantum computing in like around 2000 or so.

I came to visit and we were discussing this and John really wanted to know whether all the errors were uncorrelated.

I remember thinking, "Well, isn't that obvious?" But okay, yes, I think they are.

And then I think you were pleased because that meant a very simple error model could be used to extrapolate that things would actually work.

Now we kind of understand that that's actually the worst answer in a sense, that all the errors are uncorrelated and they all occur in the same ways.

So anyway, we've come a long way.

That was the beginning of a dialogue, but it was a pretty simplistic dialogue at the time.

I kind of remember that.

Do you?

I remember I said to you, "Rob, I want to talk about noise." Yeah.

And you said, "Noise?

Everything I do is noise." Yeah.

Or not noise.

The goal of building a quantum computer is the not noise experiment.

The not noise.

And Rob, when you think about sort of down the road and implementation of error correction on a dual rail device, do you sort of think that the layout of the chip and the approach of the final sort of implementation of error correction, is that going to be sort of adapting over time as you scale up, as you learn more about how to build these devices and as you're able to actually try these different approaches to error correction in real experiments?

No, I think so and I hope so.

I mean, there's been a lot of work on sort of laying out chips in superconductors where it's all nearest-neighbor interactions.

But I like to sometimes remind people that in circuit QED, in this whole field of kind of microwave quantum optics, we can do an entangling gate, or convert a bit of quantum information stored in a transmon or a cavity or whatever you're using as your information carrier into a photon that flies down a transmission line or a waveguide, in the same sort of few tens to hundreds of nanoseconds.

And so, in principle, we can build quantum computers with a much more interesting connectivity layout.

The challenge is actually kind of nitty gritty in some sense.

What we're often limited by is just the physical crowding.

So getting the wires in.

And so if you really wanted to have one qubit that talked to 10 transmission lines or 100 transmission lines, that gets really hard.

So I think an interesting thing that we're hoping theorists will take up is what can you do with some amount of long range interactions and long range couplings?

There are these neat ideas about the QLDPC codes, which still require a lot of long range connectivity.

And the wiring for that is pretty complicated.

And then you have to worry about various trade-offs.

Like, by making such complicated wiring layout, did I actually increase the rate of the physical errors?

Oh, shucks.

Am I still winning?

Yeah.

That's interesting.

And you mentioned-- OK, but I thought that there was still an obstacle in that microwave frequencies and telecom frequencies are so far apart-- I mean, you still have the challenge of transduction and turning it into a flying qubit, right?

Even in dual-- Right.

So when I say flying qubit, I mean one of our microwave flying qubits.

So I think that we're much more likely to see long range connectivity or modular quantum computers that are all cryogenic rather than being distributed through a large data center or something.

So a multi-chip kind of-- Yeah.

So then you don't need this challenging physics problem of going back and forth between single photons that have four orders of magnitude different energies without losing any information or creating any unwanted noise, or energy that gets lost somewhere.

Our friend Andreas Wallraff has a-- Right.

A 4K tube.

Yeah.

Two fridges and that long range connectivity, like Rob said.

And that's pretty good fidelity.

I think he's gotten that link to fairly good connectivity, right?

They can at least prove they have a Bell state in the two things that are 10 meters apart or something.

Yeah.

The point is the whole tube is cold.

Right.

Yeah.

It has a little bit of a Rube Goldberg appearance to it.

Yeah.

I mean, in the field of dark matter detection and so on, people build some really wacky, very, very large scale, big massive experiments that are all at deep cryogenic temperatures.

So I think that's a thing that can be still leveraged for quantum computing.

Yeah.

And I mean, if it ends up being a room full of fridges to get to a million logical qubits, that's fine with me.

We still have a big problem with all those wires.

Yes, yeah.

We got to figure that out.

Yeah.

That's a tough one.

Well, hopefully that'll be one of the challenges that the megaquop era addresses in some way.

Well, I really appreciate both of you joining us today.

This has been super interesting and I look forward to talking again with you both.

Thank you.

It's been fun.

Yeah.

It's always fun chatting with the two of you.

Great.

Creators and Guests

Sebastian Hassinger (Host)
Business development #QuantumComputing @AWScloud. Opinions mine, he/him.

John Preskill (Guest)
Theoretical physicist @Caltech, Director of @IQIM_Caltech, Amazon Scholar. Mastodon: https://t.co/fBX4BkWGcO

Omar Costa Hamido (Composer)
OCH is a performer, composer, and technologist, working primarily in multimedia and improvisation. His current research is on quantum computing and music composition, telematics, and multimedia. He is passionate about emerging technology, cinema, teaching, and performing new works. He earned his PhD in Integrated Composition, Improvisation and Technology at the University of California, Irvine, with his research project Adventures in Quantumland (quantumland.art). He also earned his MA in Music Theory and Composition at ESMAE-IPP Portugal with his research on the relations between music and painting. In recent years, his work has been recognized with grants and awards from MSCA, Fulbright, Fundação para a Ciência e a Tecnologia, Medici, Beall Center for Art+Technology, and IBM.

Rob Schoelkopf (Guest)
Robert Schoelkopf is director of the Yale Quantum Institute and CTO and co-founder of Quantum Circuits Inc. His research focuses on the development of superconducting devices for quantum information processing, which are leading to revolutionary advances in computing.