
Quantum speed-up in global optimization of binary neural nets



The performance of a neural network (NN) on a given task is largely determined by the initial calibration of the network parameters. Yet it has been shown that this calibration, also referred to as training, is in general NP-complete. This includes networks with binary weights, an important class of networks owing to their practical hardware implementations. We therefore suggest an alternative approach to training binary NNs that utilizes a quantum superposition of weight configurations. We show that the quantum training converges, with high probability, to the globally optimal set of network parameters. This resolves two prominent issues of classical training: (1) the vanishing-gradient problem and (2) the common convergence to sub-optimal network parameters. We prove that a solution is found after approximately $\frac{4 n^2 \log n}{\delta}\sqrt{\tilde{N}}$ calls to a comparing oracle, where $\delta$ is a precision parameter, $n$ is the number of training inputs, and $\tilde{N}$ is the number of weight configurations. We give the explicit algorithm and implement it in numerical simulations.
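The abstract names the ingredients of the algorithm (a superposition over weight configurations, a comparing oracle, and Grover-type amplification giving the $\sqrt{\tilde{N}}$ factor) without the construction itself. As a rough classical sketch of how those pieces fit together, the following Python simulation runs a Dürr–Høyer-style minimum search over the weight configurations of a toy binary perceptron. The network, the mismatch-count loss, and the threshold schedule are assumptions chosen for illustration, not the paper's circuit-level construction.

```python
# Classical simulation of the Grover-type minimum search underlying the
# quantum training. Everything concrete here -- the toy binary perceptron,
# the mismatch-count loss, and the BBHT/Durr-Hoyer threshold schedule -- is
# an illustrative assumption, not the authors' circuit-level construction.
import numpy as np

rng = np.random.default_rng(0)

# Toy network: m binary weights in {-1, +1}, n training pairs (x, y).
m, n = 8, 16
X = rng.choice([-1, 1], size=(n, m))
w_true = rng.choice([-1, 1], size=m)
y = np.sign(X @ w_true)

def loss(w):
    """Number of training inputs misclassified by weight configuration w."""
    return int(np.sum(np.sign(X @ w) != y))

# All N~ = 2^m weight configurations (tractable only for tiny m; the quantum
# algorithm holds them in superposition rather than enumerating them).
configs = np.array([[1 if (k >> j) & 1 else -1 for j in range(m)]
                    for k in range(2 ** m)])
losses = np.array([loss(w) for w in configs])
N = len(configs)

def success_prob(t, r):
    """P(measuring a marked state) after r Grover iterations, t marked of N."""
    theta = np.arcsin(np.sqrt(t / N))
    return np.sin((2 * r + 1) * theta) ** 2

# Durr-Hoyer minimum finding: the "comparing oracle" marks every configuration
# whose loss beats the current threshold; Grover amplification then samples one.
threshold = losses[rng.integers(N)]
oracle_calls, M = 0, 1.0
while True:
    marked = np.flatnonzero(losses < threshold)
    if marked.size == 0:          # nothing beats the threshold: global optimum
        break
    r = int(rng.integers(int(np.ceil(M))))   # randomized iteration count (BBHT)
    oracle_calls += r
    if rng.random() < success_prob(marked.size, r):
        threshold = losses[rng.choice(marked)]  # success: lower the threshold
        M = 1.0
    else:
        M = min(M * 1.2, np.sqrt(N))            # failure: widen the search

print(f"global optimum {losses.min()}, found {threshold}, "
      f"~{oracle_calls} oracle calls vs {N} classical loss evaluations")
```

On a quantum device the marked-state sampling is a single amplified measurement, which is where the $\sqrt{\tilde{N}}$ in the oracle-call count comes from; the classical loop above must still evaluate all $\tilde{N}$ losses and is only meant to make the control flow concrete.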

Keywords: quantum speed-up; neural networks; network parameters; global optimization; training

Journal Title: New Journal of Physics
Year Published: 2020


