Putting the (almost) vacuum to good use

On Wednesday, I will be giving a short talk at IQC on our most recent research paper: Constant-Energy Quantum Fingerprinting. I have informally presented this work before to other members of the Institute and to visitors to our group. One of the things I really like about this paper is that people have a reaction to it; it strikes them in one way or another. On previous occasions, when I discussed aspects of my research with colleagues, it was common to hear mostly plain remarks such as “Yeah, that’s interesting” or “Hmmm, I see”, but not much more. With this new paper, it’s different: people are puzzled, they push harder to understand what is going on, they ask many questions, they care. I love that.

I think the reason for this is twofold. On the one hand, our result is remarkably simple and thus easy to understand; it is hard to get anyone excited about anything if it takes a lot of effort to understand it! On the other hand, the premise of our work is a very curious trick. We broke ground by thinking along different lines and dismissing the usual approaches to the problem. This means that our results look strange at first glance; they are unexpected. I think this is another element that builds up people’s interest. However, neither in these conversations nor in the plan I have for my talk is there space to mention something that I consider a remarkable feature of our work. So I decided to write a blog post about it.

Let me begin by providing some context. Consider a situation in which the two usual suspects, Alice and Bob, each receive a message. We can always think of this message as a string of n bits, and we will refer to n as the input size. Now, suppose that it is important for them to know whether they have received the same message, but they cannot talk to each other, only to a third party. I usually imagine that Alice and Bob are two field agents who are given a plan of action, and headquarters wants to confirm they don’t have different plans. In practice, it is more likely that they will be two elements in a VLSI circuit. In any case, they will have to communicate with this third party, and we ask ourselves: how much communication do they really need (as a function of the input size)? It turns out that if Alice and Bob don’t have any shared randomness and are allowed to fail with a very small probability, they only need to communicate around \sqrt{n} bits. Since these messages are much shorter than the ones they originally received, people like to call them fingerprints. Now suppose we make a technological upgrade and give Alice and Bob access to a quantum channel. Is there anything to be gained? Of course! In this case, they can get away with sending exponentially smaller quantum fingerprints. Amazing, isn’t it?
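If you are curious how the classical \sqrt{n}-bit fingerprints can work, here is a small sketch I put together. It is the standard textbook construction, not anything from our paper: each party encodes their input with an error-correcting code, arranges the codeword in a square grid, and sends the referee a random row (Alice) or a random column (Bob) together with its index; the referee only compares the entry where the row and column cross. All names and parameters below are my own illustrative choices.

```python
# Sketch of classical fingerprinting for equality in the simultaneous-message
# model with no shared randomness (textbook construction, not the paper's
# protocol). Each input is encoded with a Reed-Solomon-style code, the
# codeword is laid out in a square grid, Alice sends a random row and Bob a
# random column, and the referee compares the single entry where they cross.

import math
import random

P = (1 << 61) - 1      # a large prime, so we can evaluate polynomials mod P
SYMBOL_BITS = 32       # pack the input into 32-bit chunks (field elements)
RATE_INV = 4           # the codeword is 4x longer than the packed message


def encode(bits):
    """Evaluate the polynomial whose coefficients are the packed input at
    RATE_INV * k points; unequal inputs give codewords that differ in more
    than a (1 - 1/RATE_INV) fraction of positions."""
    k = max(1, math.ceil(len(bits) / SYMBOL_BITS))
    coeffs = [int("".join(map(str, bits[i * SYMBOL_BITS:(i + 1) * SYMBOL_BITS])) or "0", 2)
              for i in range(k)]
    codeword = []
    for a in range(RATE_INV * k):
        acc = 0
        for c in coeffs:               # Horner evaluation at the point a
            acc = (acc * a + c) % P
        codeword.append(acc)
    return codeword


def grid(codeword):
    """Pad the codeword with zeros and reshape it into an r x r grid."""
    r = math.isqrt(len(codeword) - 1) + 1
    padded = codeword + [0] * (r * r - len(codeword))
    return [padded[i * r:(i + 1) * r] for i in range(r)], r


def alice_message(x):
    g, r = grid(encode(x))
    i = random.randrange(r)
    return i, g[i]                     # a random row: ~sqrt(n) symbols


def bob_message(y):
    g, r = grid(encode(y))
    j = random.randrange(r)
    return j, [row[j] for row in g]    # a random column: ~sqrt(n) symbols


def referee(alice_msg, bob_msg):
    i, row = alice_msg
    j, col = bob_msg
    return row[j] == col[i]            # compare the intersection entry (i, j)


if __name__ == "__main__":
    n = 10 ** 4
    x = [random.randint(0, 1) for _ in range(n)]
    y = list(x)
    y[123] ^= 1                        # inputs that differ in a single bit
    same = sum(referee(alice_message(x), bob_message(x)) for _ in range(100))
    diff = sum(referee(alice_message(x), bob_message(y)) for _ in range(100))
    print("equal inputs accepted:  ", same, "/ 100")   # always 100
    print("unequal inputs accepted:", diff, "/ 100")   # bounded well below 100
```

Each message here is about \sqrt{n} field symbols, so up to logarithmic factors this matches the \sqrt{n} scaling mentioned above; equal inputs are always accepted, and unequal inputs slip through only with small constant probability, which can be pushed down further by repetition.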

In our work, we detailed a protocol for quantum fingerprinting that is ready to be deployed with current technology. As I said before, it is a peculiar protocol that doesn’t look quite like what you’d probably expect, and I don’t want to discuss it in any detail here. What matters is that it works: Alice and Bob can determine whether their inputs are the same with exponentially less communication than in the classical case. The protocol requires Alice and Bob to communicate by sending a sequence of coherent-state pulses. You can think of them as the output of a laser. These states are characterized, amongst other parameters, by the average number of photons they carry. For example, a back-of-the-envelope calculation tells me that a 1-second pulse of a typical laser pointer carries about 10^{14} photons. In our protocol, the average number of photons per pulse is inversely proportional to the input size and is given by the succinct relation

Average # of photons per pulse = \mu/n,

where \mu is an additional parameter that we can choose, but typically satisfies \mu\sim 10^{3}.

Please stare at that equation for a few seconds. What happens when the input size is large, say 10^6, as we would expect from a message of one megabyte? Then we would expect to see around 10^{-3} photons per pulse. On average, there are very, very, very few photons in that pulse. In fact, in the megabyte example, if we wanted to tell whether that pulse was actually just a vacuum instead of the coherent state we were supposed to be sending, no matter how sophisticated our procedure was, the laws of quantum mechanics would forbid us from telling the difference with a probability higher than 10^{-3}, and the situation would only get worse for larger input sizes. For gigabyte inputs, there would be roughly 10^{-6} photons per pulse. We are essentially sending the vacuum. Amazing.
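If you want to convince yourself of those numbers, here is a toy sanity check of my own (not a calculation from the paper): a coherent state with mean photon number \mu/n contains zero photons with probability e^{-\mu/n}, so the chance of finding even a single photon in a pulse is 1 - e^{-\mu/n}.

```python
# Toy sanity check of the numbers above: a coherent state with mean photon
# number mu/n contains zero photons with probability exp(-mu/n), so the
# chance of finding any photon at all in a pulse is 1 - exp(-mu/n).

import math

mu = 1e3  # a typical choice of the free parameter, mu ~ 10^3

for label, n in [("megabyte-ish input", 1e6), ("gigabyte-ish input", 1e9)]:
    mean_photons = mu / n
    p_any_photon = 1 - math.exp(-mean_photons)
    print(f"{label}: {mean_photons:.0e} photons per pulse on average, "
          f"P(at least one photon) = {p_any_photon:.1e}")

# megabyte-ish input: 1e-03 photons per pulse on average, P(at least one photon) = 1.0e-03
# gigabyte-ish input: 1e-06 photons per pulse on average, P(at least one photon) = 1.0e-06
```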

When I first noticed this feature, I imagined it would be very hard to create such pulses in the lab, let alone control them in any way. I was wrong. Apparently, it doesn’t take much more than a few attenuators in series to obtain laser pulses with extremely low average photon numbers. Moreover, they can still be made to interfere efficiently, a crucial requirement of our protocol. So this is not an unreachable theoretical curiosity; it is something that can be done with the technology we already have. Mind-blowing. Objects that are nearly indistinguishable from absolute nothingness are the building blocks of quantum fingerprints. Isn’t that fantastic?
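To get a rough feel for how much attenuation that is, here is another back-of-the-envelope estimate of mine, reusing the laser-pointer figure from above: linear loss with transmittance \eta maps a coherent state with mean photon number N to one with mean photon number \eta N, so attenuators in series simply add up in decibels.

```python
# Rough estimate (mine, not from the paper) of how much attenuation takes a
# laser-pointer pulse down to the photon numbers the protocol needs. Linear
# loss with transmittance eta maps a coherent state with mean photon number
# N to a coherent state with mean photon number eta * N, so losses in series
# multiply (i.e. they add in dB).

import math

photons_in = 1e14     # the post's estimate for a one-second laser-pointer pulse
photons_out = 1e-3    # target mean photon number for a megabyte-sized input

eta = photons_out / photons_in
attenuation_db = -10 * math.log10(eta)

print(f"required overall transmittance: {eta:.0e}")
print(f"required attenuation: {attenuation_db:.0f} dB")
# required overall transmittance: 1e-17
# required attenuation: 170 dB
```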

Featured image courtesy of my love Aleksandra Ignjatovic.
