used for training with a measure of image similarity; these refinements are adjusted dynamically throughout the training process. These tricks result in a compression procedure that can be applied to any network architecture, with weights learned on any dataset. But compression produces a loss of accuracy. The ideal way to evaluate such a procedure is to find other methods that produce networks of the same size and speed on the same dataset; the compressor that produces the smallest loss in accuracy then wins. Matching size and speed exactly is hard, so in practice the standard to beat is the compressor that loses the least accuracy while keeping size and speed acceptable. The procedures described here achieve accuracies much higher than are achievable with comparable methods.

This work has roots in a paper that appeared in the Proceedings of the 2016 European Conference on Computer Vision. Since then, xnor.ai, a company built around some of the technologies in that paper, has flourished. The technologies described mean you can run accurate modern computer vision methods on apparently quite unpromising devices (for example, a Raspberry Pi Zero). There is an SDK and a set of tutorials for this technology at https://ai2go.xnor.ai/getting-started/python. Savings in space and computation turn into savings in energy, too. An extreme example, a device that can run accurate detectors and classifiers using only solar power, was just announced (https://www.xnor.ai/blog/ai-powered-by-solar).
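To make the size-versus-accuracy trade-off concrete, here is a minimal sketch of the kind of binary weight approximation the original ECCV 2016 (XNOR-Net) work is built on: each real-valued weight tensor is replaced by a sign tensor plus a single scaling factor. This is an illustrative assumption about the underlying idea, not the dynamically adjusted procedure the abstract describes; the function names and the storage arithmetic are my own.

```python
import numpy as np

def binarize_weights(w):
    """Approximate a real-valued weight tensor w as alpha * sign(w),
    where alpha is the mean absolute value of w (XNOR-Net-style scaling).
    Returns the binary tensor and the scaling factor."""
    alpha = np.mean(np.abs(w))
    b = np.sign(w)
    b[b == 0] = 1.0  # map zeros to +1 so every weight lands in {-1, +1}
    return b, alpha

def compression_ratio(w, quantized_bits=1, original_bits=32):
    """Rough storage saving from replacing 32-bit floats with 1-bit codes
    plus one 32-bit scaling factor per tensor (illustrative arithmetic)."""
    original = w.size * original_bits
    compressed = w.size * quantized_bits + original_bits
    return original / compressed

# Example: a 3x3 convolution kernel with 64 input and 128 output channels.
rng = np.random.default_rng(0)
w = rng.normal(size=(128, 64, 3, 3)).astype(np.float32)
b, alpha = binarize_weights(w)
w_hat = alpha * b  # binary approximation of w
rel_err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
print(f"compression ~{compression_ratio(w):.0f}x, relative error {rel_err:.2f}")
```

The relative error printed here is exactly the loss the evaluation criterion above is meant to police: a compressor is only as good as the accuracy it preserves at a given size and speed.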
               