
Paper Summary: "A Tutorial on Quantum Convolutional Neural Networks (QCNN)":

Dickson Wu
May 28, 2021 · 3 min read


  • QC = more powerful than classical computers at certain tasks because QC has entanglement + superposition, so we can exploit those properties to our benefit.

  • CNN = good because if you look at a pixel, the surrounding pixels are correlated with it. Regular NNs lose that. CNNs = convolutional layers (to extract good features to pass on) + pooling layers (to reduce the feature map (output space), which saves resources + reduces overfitting). When reduced enough we use a fully connected layer, aka a regular NN. (See the first sketch after this list.)

  • But there's a problem: if the input dimensions grow exponentially, the CNN isn't efficient. (ex: translating a quantum physics problem (like a many-body Hilbert space) into classical space = exponential cost as the space grows)

  • But QCNNs are good at this.

  • Instead of translating those problems into classical bits, we can just encode them onto quantum bits


  • Convolutional layer = applying gates between adjacent qubits. Pooling = measuring a fraction of the qubits, or using multi-qubit gates. Do that again and again until we can shove the remaining qubits together to predict a classical result. (See the circuit sketch after this list.)

  • This type of structure is a reverse MERA (Multi-scale Entanglement Renormalization Ansatz). Regular MERA = takes an input and scales up the size exponentially by adding qubits. This is the opposite: we exponentially decrease the size of the inputs down to an output.

  • A bottleneck here is that MERA can only capture correlations within a certain range (think of it as having a limited reach). But you can add QEC (Quantum Error Correction) to increase the degrees of freedom, so it can classify things more accurately.

  • So we can use QCNNs for image classification to obtain better results! It uses a quantum kernel instead of a classical one. By doing so we can take advantage of the quantum properties of superposition + entanglement for parallel computing.

  • Right now quantum computers are a bit weak, so we can only have small kernel sizes. So we apply the kernels to small parts of the image at a time.


  • Take a part of the image, then encode it onto qubits. Then parameterized gates come in to act as hidden layers. Decoding (measuring) maps it to a new image. We do this a ton of times. (See the patch-wise sketch after this list.)

  • The parameterized gates are learnable, so we can fiddle with the parameters (much like how we can update the kernels in CNNs).

  • Classical CNN = O(n^2), while QCNN = O(log(n)), where the kernel has n^2 elements.
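
To make the CNN bullet concrete, here's a minimal sketch of the conv + pool + fully-connected pipeline, assuming PyTorch; the layer sizes and the TinyCNN name are illustrative choices, not from the paper:

```python
# A minimal sketch (assuming PyTorch) of conv -> pool -> fully connected.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # extract local features
            nn.ReLU(),
            nn.MaxPool2d(2),                              # shrink the feature map
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)  # fully connected head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)              # (B, 16, 7, 7) for 28x28 inputs
        return self.classifier(x.flatten(1))

logits = TinyCNN()(torch.randn(1, 1, 28, 28))  # e.g. an MNIST-sized image
```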
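Here's a minimal sketch of the QCNN convolution/pooling pattern described above, assuming PennyLane; the gate choices, layer counts, and parameter shapes are illustrative assumptions, not the paper's exact ansatz:

```python
# A minimal sketch (assuming PennyLane) of the QCNN pattern:
# "convolution" = parameterized gates on adjacent qubits,
# "pooling" = entangle a qubit into its neighbour, then stop using it.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 8
dev = qml.device("default.qubit", wires=n_qubits)

def conv_layer(wires, params):
    # Nearest-neighbour two-qubit blocks play the role of a convolution kernel.
    for i, (a, b) in enumerate(zip(wires[:-1], wires[1:])):
        qml.RY(params[i, 0], wires=a)
        qml.RY(params[i, 1], wires=b)
        qml.CNOT(wires=[a, b])

def pool_layer(wires):
    # Fold each odd qubit into its even neighbour; keep only the even ones.
    for a, b in zip(wires[::2], wires[1::2]):
        qml.CNOT(wires=[b, a])
    return wires[::2]

@qml.qnode(dev)
def qcnn(params):
    wires = list(range(n_qubits))
    layer = 0
    while len(wires) > 1:          # halve the register each round
        conv_layer(wires, params[layer])
        wires = pool_layer(wires)
        layer += 1
    return qml.expval(qml.PauliZ(wires[0]))  # final classical readout

# Over-allocated parameter array; later (smaller) layers use fewer entries.
params = np.random.uniform(0, np.pi, size=(3, n_qubits - 1, 2))
print(qcnn(params))
```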
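And here's a minimal sketch of the patch-wise quantum kernel idea from the last few bullets, again assuming PennyLane; the 2x2 patch size, angle encoding, and gate layout are illustrative assumptions:

```python
# A minimal sketch (assuming PennyLane + NumPy) of a patch-wise quantum kernel:
# encode a 2x2 patch into 4 qubits, apply learnable gates, measure to decode.
import pennylane as qml
import numpy as np

dev = qml.device("default.qubit", wires=4)

@qml.qnode(dev)
def quantum_kernel(patch, params):
    # Encoding: each pixel value sets a rotation angle on its own qubit.
    for i in range(4):
        qml.RY(np.pi * patch[i], wires=i)
    # "Hidden layer": learnable single-qubit rotations + entanglement.
    for i in range(4):
        qml.Rot(*params[i], wires=i)
    for i in range(3):
        qml.CNOT(wires=[i, i + 1])
    # Decoding: one expectation value per qubit becomes one output channel.
    return [qml.expval(qml.PauliZ(i)) for i in range(4)]

def quanv(image, params):
    # Slide the 2x2 quantum kernel over the image with stride 2.
    h, w = image.shape
    out = np.zeros((h // 2, w // 2, 4))
    for r in range(0, h - 1, 2):
        for c in range(0, w - 1, 2):
            patch = [image[r, c], image[r, c + 1],
                     image[r + 1, c], image[r + 1, c + 1]]
            out[r // 2, c // 2] = quantum_kernel(patch, params)
    return out

params = np.random.uniform(0, 2 * np.pi, size=(4, 3))  # learnable, like a CNN kernel
features = quanv(np.random.rand(8, 8), params)          # -> (4, 4, 4) feature map
```

Since params feeds into differentiable gates, it can be trained with a gradient-based optimizer, much like updating a classical kernel.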


