
When I worked at OpenAI, I learned that hardly anyone there knows or cares where the data centers running GPT actually physically are!
In generations of science fiction about AI, the AI was always some giant blue glowing orb or whatever that the scientists in white coats could actually walk up to and see ... did any sci-fi writer have the imagination to foresee that even the AI's creators wouldn't know where it was? :-)
There's no single-sentence answer to "which problems have a fast quantum algorithm?", in exactly the same way as there's no single-sentence answer to "which problems have a fast CLASSICAL algorithm?" The answer is a course or a textbook, not a sentence!
Having said that, the problem solved by Grover's algorithm is actually exceedingly general: it's simply searching through a list of n items for a desired item. And it can be generalized even further than that. The trouble with Grover's algorithm is not at all its generality; rather, it's that the speedup is "merely" quadratic.
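To make the "merely quadratic" point concrete, here's a minimal sketch of Grover's algorithm, simulating the state vector classically with numpy (a toy illustration of the math, not a real quantum program): after roughly (π/4)√N iterations, the marked item's probability is close to 1.

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Simulate Grover's algorithm over N = 2**n_qubits items,
    amplifying the amplitude of the single marked index."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))       # uniform superposition
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1                  # oracle: flip the marked amplitude
        state = 2 * state.mean() - state     # diffusion: inversion about the mean
    return iterations, float(state[marked] ** 2)

iters, prob = grover_search(8, marked=42)
# For N = 256: 12 iterations (vs ~256 classical checks), success probability > 0.99
```

The quadratic speedup is visible in the iteration count: √N quantum steps versus ~N classical checks.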
Right, the key point here is that, with a few exceptions like the one-time pad, there have NEVER been proofs that ANY of the cryptosystems we use in practice are secure! They all depend on unproven conjectures about computational hardness -- at the very least, the belief that P!=NP.
You could argue that the currently deployed systems have been "battle-tested" for longer than the new quantum-resistant ones. In reality, though, problems like factoring and discrete log have been battle tested for ~50 years, whereas lattice problems have been battle tested for ~25 years, so it's not even that huge of a difference anymore!
This isn't about QC per se, but probably my craziest / least accepted scientific or philosophical ideas are those that I expanded on in a long essay called "The Ghost in the Quantum Turing Machine": https://arxiv.org/abs/1306.0159
If I actually believed something about quantum algorithms, complexity, etc. that none of my extremely smart colleagues believed, that would almost certainly cause me to reconsider. I'm a sheep that way.
I don't think that any of these are blockers. After 30 years of engineering effort, we now actually have sub-threshold error rates in trapped ions, neutral atoms, and superconducting qubits. And we've seen no sign whatsoever of the errors being correlated in a way that would prevent fault-tolerance from working -- to their credit, the skeptics like Gil Kalai stuck their necks out and made a prediction, but their prediction turned out to be wrong. Meanwhile, just two weeks ago, there was a big advance reported in reducing the overhead for fault-tolerance, so that Bitcoin now looks vulnerable to systems with only ~30,000 physical qubits (see e.g. my blog for more details). What remains is to put all the pieces together and scale up -- which is engineering, not science. Like the situation with nuclear weapons circa 1942.
I love the 80s movie Real Genius with Val Kilmer -- I've rewatched it every few years since first discovering it. Also all the usuals (Austin Powers, Borat, Spaceballs). Recently ChatGPT recommended that my wife, kids, and I watch Galaxy Quest, which I'd somehow missed when it came out, and now that's a new comedy favorite!
It's just the obvious model of computation for our universe if you believe that quantum mechanics, which we've known about since 1926, is true. If quantum computing turns out to be impossible for some deep reason, that's WAY more exciting than if it's merely possible, since it would mean that QM itself is wrong or incomplete, which would cause the biggest revolution in physics in a century. QC being possible (and, as a direct result, bitcoin as currently implemented being ultimately insecure) is the "boring," "conservative" possibility by comparison! :-D
Yes and yes.
Example where there's no asymptotic quantum speedup (but only a speedup by a factor of 2, depending on how you count): computing the parity of n bits (i.e., whether there's an even or odd number of 1's).
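That factor of 2 can be seen already on two bits: Deutsch's algorithm computes f(0) XOR f(1) with a single oracle query, where classically you'd need two. A minimal state-vector simulation in plain numpy (my own toy phrasing of the phase-oracle version, not any particular SDK's API):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def deutsch_parity(f0: int, f1: int) -> int:
    """Return f0 XOR f1 using a single phase-oracle query,
    by interfering the two evaluation paths."""
    state = H @ np.array([1.0, 0.0])           # superpose both inputs
    state = state * [(-1) ** f0, (-1) ** f1]   # ONE oracle call marks both paths
    state = H @ state                          # interfere the paths
    return int(round(state[1] ** 2))           # measuring |1> means f0 != f1

# All four possible two-bit functions, one query each:
[deutsch_parity(a, b) for a in (0, 1) for b in (0, 1)]  # → [0, 1, 1, 0]
```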
Example where there's a superquadratic speedup: computing the period of a periodic function (an exponential speedup for this task is the heart of Shor's factoring algorithm). Lots of other examples have been discovered since, such as my "Forrelation" problem from 2009: given access to two otherwise-random functions, decide whether one of them is correlated with the Fourier transform of the other.
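For small n, the Forrelation quantity Φ_{f,g} = 2^{-3n/2} Σ_{x,y} f(x)(-1)^{x·y} g(y) can be estimated classically. A hypothetical numpy sketch (the helper names and the seed are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def hadamard_sign_matrix(n):
    """The 2^n x 2^n matrix with entries (-1)^{x·y}."""
    H = np.array([[1.0, 1.0], [1.0, -1.0]])
    M = np.array([[1.0]])
    for _ in range(n):
        M = np.kron(M, H)
    return M

def forrelation(f, g):
    """Phi_{f,g} = 2^{-3n/2} * sum_{x,y} f(x) (-1)^{x.y} g(y)."""
    n = int(np.log2(len(f)))
    return float(f @ hadamard_sign_matrix(n) @ g) / len(f) ** 1.5

n, N = 10, 2 ** 10
f = rng.choice([-1.0, 1.0], size=N)             # random Boolean function
f_hat = hadamard_sign_matrix(n) @ f / N         # its Boolean Fourier transform
g_corr = np.sign(f_hat)                         # g aligned with f's Fourier transform
g_rand = rng.choice([-1.0, 1.0], size=N)        # independent random g

forrelation(f, g_corr)   # large (around 0.8): g "sees" f's Fourier transform
forrelation(f, g_rand)   # near 0: no correlation
```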
We're arguably already there, for example with Quantinuum's simulation of the Fermi-Hubbard model last year, or Google's measurement of "OTOCs" (out of time order correlators).
("Arguably" because it depends on exactly how hard you think the result would've been to reproduce classically.)
If we're not there, we'll be there extremely soon. So I'm already looking ahead to the next milestone, which is when condensed-matter physicists, materials scientists, etc. who don't "intrinsically" care about quantum computing at all, are nevertheless using it as a tool to help answer the questions they do care about, which they weren't able to answer using high-performance classical computing. And then commercially relevant quantum simulations are a next milestone after that.
Simulating quantum mechanics is the big one, frankly.
This could potentially help with designing new drugs and chemical reactions and materials (better solar cells, batteries, high-temperature superconductors...), and even doing simulations of high-energy physics that could help in understanding the origin of the universe.
And it's sort of what quantum computing was "born" to do (it was Richard Feynman's original application from 1981). That quantum computing can EVER help with purely classical problems, like factoring integers, is a sort of miraculous coincidence that didn't need to be true. It's just that, once Peter Shor discovered his quantum factoring algorithm in 1994, the world started demanding/expecting more and more quantum speedups for classical problems, like in the fable of Rumpelstiltskin.
I often say that, for me personally, the #1 application of quantum computing that motivates and excites me, is simply disproving all the people who confidently claimed that scalable quantum computing was impossible! :-D
I see no reason why anyone would ever need a quantum computer in their home.
Of course, I'm aware that Ken Olson said in the 1970s that there's no reason why anyone would ever need a classical computer in their home, and that gets ridiculed today as one of the wrongest predictions of all time. But if you look at the context, Olson was actually way ahead of his time! He was talking about what today we call the cloud: compute can be centralized, then sold as a commodity for anyone to tap into over the Internet from anywhere in the world.
We're already seeing that model successfully deployed with the small QCs of today, and I expect it to continue to work going forward. Quantum computing mostly helps with a few specific things -- simulating quantum chemistry and materials science, breaking cryptography (!), a few other things, and maybe some other things yet to be discovered. It probably won't help for checking email or playing Candy Crush. So, why not just have it be a resource that people can rent over the cloud in the special cases where it helps?
As a theorist, I confess I don't use any of these platforms, but some of my students do! Yes, try Qiskit, Pennylane, Q#, and whatever else is out there, see which one you find easiest to learn and use, and that's probably the right one for you!
Sorry to burst the bubble, but it's not obvious to me whether quantum computers will help at all with training LLMs. Maybe eventually the Grover speedup will help somewhat. Or maybe there are new quantum algorithms yet to be discovered that will help, or old quantum algorithms that will turn out to work really well when we test them at scale on AI problems. Sadly, though, the articles you'll find on "quantum AI" tend to contain ENORMOUS amounts of misrepresentation and hype -- often simply failing to compare to the best classical solutions that are available, and hoping no one will notice that.
I wrote about this below -- the speedup comes from interference among amplitudes (the complex numbers that replace probabilities in quantum mechanics). The goal, with every quantum algorithm, is to choreograph a pattern of interference wherein the contributions to the amplitude of each wrong answer cancel each other out, whereas the contributions to the amplitude of the right answer reinforce each other. This is a really weird hammer Nature gave us -- one that I doubt any sci-fi writer would've had the imagination to invent -- and it wasn't obvious a priori that it would be good for anything, other than simulating quantum mechanics itself.
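To make the cancellation concrete, here's about the smallest possible example, simulated with plain numpy: apply a Hadamard gate twice to a qubit starting in |0⟩. The two paths to |1⟩ carry amplitudes +1/2 and -1/2 and cancel; the two paths to |0⟩ both carry +1/2 and reinforce.

```python
import numpy as np

# Amplitudes, unlike probabilities, can be negative and cancel.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

state = np.array([1.0, 0.0])   # start in |0>
state = H @ state              # equal superposition: amplitudes (1/sqrt2, 1/sqrt2)
state = H @ state              # interfere again

probs = state ** 2             # ≈ [1, 0]: the wrong answer's amplitude cancelled out
```

With ordinary probabilities, two 50/50 steps could never return you deterministically to where you started; with amplitudes, they can.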
The big one is they think that a quantum computer would just magically revolutionize everything -- machine learning, optimization, you name it -- and that it would do so by simply "trying all the possible answers in parallel."
They don't get that quantum speedup is an incredibly special phenomenon that hinges on the way quantum mechanics changes the rules of probability themselves, to allow complex numbers called "amplitudes" that can interfere and cancel each other out (unlike probabilities, which can only add).
And, as a consequence of the nature of quantum speedup, we know of world-changing quantum speedups mostly just for two specific classes of problems, which have the exact right structure to take advantage of interference among amplitudes:
(1) simulating quantum mechanics itself (the "O.G." application from the 1980s), and
(2) breaking the main public-key cryptographic codes that are used to protect the Internet (that was Peter Shor's famous discovery of 1994).
Since this is weird and confusing and takes at least 15 minutes to explain properly, and ALSO is not really what customers or investors want to hear, a very common solution that people hit on is simply to lie about it! :-D
I confess that I haven't! I do have some Bitcoin that I've received for various reasons, in an old Coinbase account if I can still find the username and password.
I should explain that I'm a theoretical computer scientist, and that I often struggle to log in to the websites for my kids' schools. If I were savvier with actual computers, maybe I would've bought or even mined some Bitcoin back in 2010 when I first learned about it! Oh well :-D
It's always possible! But for sufficiently "scrambling" functions like SHA256 (or cellular automata like Wolfram's rule 110, etc.), my guess would be that even the hidden structure, once found, would at most let you shave a small factor off from brute-force search. If you told me that you could invert one specific such function in polynomial time, I'd be shocked but would get over it. But if it turned out to be true again and again for many such functions, that's when I'd start wondering if one-way functions didn't exist after all, or indeed P=NP.
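To make the brute-force baseline concrete, here's a toy version with a 20-bit truncation of SHA256 (the truncation width and the secret input are my own illustrative choices): absent hidden structure, inverting it means trying inputs one by one.

```python
import hashlib

def h(x: bytes, bits: int = 20) -> int:
    """The first `bits` bits of SHA256 -- a toy 'scrambling' function."""
    digest = hashlib.sha256(x).digest()
    return int.from_bytes(digest[:3], "big") >> (24 - bits)

secret = 123456                          # hypothetical hidden input
target = h(secret.to_bytes(4, "big"))

# Classical brute force: ~2^20 evaluations in the worst case.
# Grover's algorithm would need only ~2^10 quantum queries -- a quadratic
# shaving, not the exponential collapse a hidden structure would give.
preimage = next(i for i in range(2 ** 20)
                if h(i.to_bytes(4, "big")) == target)
```

Doubling the truncation width squares the classical work; that exponential scaling is the whole point of using such functions.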
Yes, I know and like Allan MacDonald! I see him maybe a few times per year (would be happy to see him more :-) ). Physicists here who I interact with more regularly include Nick Hunter-Jones, Andreas Karch, Jacques Distler...
I think that they're ultimately differences in the prefactors (or at any rate, lower-order terms), much like the differences between different possible architectures for classical computers (integrated circuits, vacuum tubes...). Fault-tolerance should ultimately "work" in all of them, and yield the same class of problems solvable in quantum polynomial time; it's mostly a difference of how expensive and hard the engineering problems are. The biggest difference of this kind is that superconducting qubits are fixed in place on a 2D grid, whereas with trapped-ion and neutral atom qubits you can pick them up and move them around. That's a huge advantage for the latter, maybe more than compensating for their 1000x slower gate speed. Even there, though, we're talking about lower-order factors, since even with superconducting qubits, you can simulate all-to-all connectivity more slowly by using cascades of nearest-neighbor swaps.
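A sketch of that swap-cascade idea on a 1D line (my own toy routine, not any particular compiler's router): to interact two distant qubits, walk one of them over via nearest-neighbor swaps, so the overhead grows only linearly with the distance.

```python
def route_adjacent(layout, a, b):
    """Swap logical qubit `a` along a 1D line until it sits next to `b`.
    `layout` is the list of logical qubit labels by line position.
    Returns (number of nearest-neighbor swaps used, new layout)."""
    layout = list(layout)
    i, j = layout.index(a), layout.index(b)
    if i > j:
        i, j = j, i
    swaps = 0
    while j - i > 1:
        layout[i], layout[i + 1] = layout[i + 1], layout[i]  # one SWAP gate
        i += 1
        swaps += 1
    return swaps, layout

swaps, layout = route_adjacent(range(8), 0, 7)
# 6 nearest-neighbor swaps make qubits 0 and 7 adjacent on an 8-qubit line
```

Linear-in-distance overhead is exactly the kind of lower-order cost that changes the constants, not the complexity class.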
OK everyone, thanks so much for all the great questions! Signing off now.