While binarized neural networks (BNNs) have attracted great interest, popular approaches proposed so far mainly exploit the symmetric sign function for feature binarization, i.e., binarizing activations into -1 and +1 with a fixed threshold of 0. Whether this choice is optimal, however, has been largely overlooked. In this work, we propose the Sparsity-inducing BNN (Si-BNN), which quantizes activations to either 0 or +1 and thus better approximates ReLU with a single bit. We further introduce trainable thresholds into the backward function of binarization to guide gradient propagation. Our method dramatically outperforms the current state-of-the-art, narrowing the gap between full-precision networks and BNNs on mainstream architectures and setting new state-of-the-art results for binarized AlexNet (50.5% Top-1), ResNet-18 (62.2% Top-1), and ResNet-50 (68.3% Top-1). At inference time, Si-BNN retains the high efficiency of bit-wise operations: in our implementation, the running time of binary AlexNet on a CPU can be competitive with popular GPU-based deep learning frameworks.
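To make the idea concrete, below is a minimal PyTorch-style sketch of a sparsity-inducing binarization: the forward pass maps activations to {0, +1}, and the backward pass gates gradients with a trainable threshold. This is not the authors' released code; the module name `SiBNNActivation`, the gradient window, and the threshold update rule are illustrative assumptions, and the paper's exact backward formulation may differ.

```python
import torch
from torch import nn


class SparseBinarize(torch.autograd.Function):
    """Binarize activations to {0, +1} in the forward pass; in the backward
    pass, let gradients flow only inside a window set by a trainable
    threshold. The window shape and the threshold gradient below are
    placeholder assumptions, not the paper's exact rules."""

    @staticmethod
    def forward(ctx, x, threshold):
        ctx.save_for_backward(x, threshold)
        return (x > 0).float()  # 1-bit approximation of ReLU: 0 or +1

    @staticmethod
    def backward(ctx, grad_output):
        x, threshold = ctx.saved_tensors
        # Straight-through-style estimator: gradients flow only where the
        # pre-activation lies inside the learnable window [-t, t].
        inside = (x.abs() < threshold).float()
        grad_x = grad_output * inside
        # Placeholder gradient for the threshold itself (an assumption):
        # accumulate the gradient signal that the window currently blocks.
        grad_t = (grad_output * (1.0 - inside)).sum().view_as(threshold)
        return grad_x, grad_t


class SiBNNActivation(nn.Module):
    """Hypothetical wrapper exposing the trainable backward threshold."""

    def __init__(self, init_threshold: float = 1.0):
        super().__init__()
        self.threshold = nn.Parameter(torch.tensor(init_threshold))

    def forward(self, x):
        return SparseBinarize.apply(x, self.threshold)


# Usage: binarized activations in the forward pass, gated gradients in the
# backward pass for both the input and the threshold.
act = SiBNNActivation()
x = torch.randn(4, 8, requires_grad=True)
y = act(x)          # values are exactly 0.0 or 1.0
y.sum().backward()  # populates x.grad and act.threshold.grad
```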
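The claim about bit-wise efficiency rests on an arithmetic identity: a dot product between {0, +1} activations and {-1, +1} weights reduces to AND plus popcount, since only active positions contribute and each contributes +1 or -1. The snippet below is an assumed, minimal illustration of that identity on packed integers, not the optimized CPU kernel referenced in the abstract (requires Python 3.10+ for `int.bit_count()`).

```python
def binary_dot(act_bits: int, weight_bits: int, n: int) -> int:
    """Dot product of n activations in {0, 1} and n weights in {-1, +1},
    both packed into the low n bits of Python integers (bit i = 1 means
    activation i is 1, or weight i is +1)."""
    active = act_bits & ((1 << n) - 1)        # keep only the n valid bits
    pos = (active & weight_bits).bit_count()  # active positions with weight +1
    total_active = active.bit_count()         # all active positions
    # Each active position contributes +1 or -1, so the sum is:
    return 2 * pos - total_active


# Example: activations [1, 0, 1, 1], weights [+1, -1, -1, +1]
# -> 1*(+1) + 0 + 1*(-1) + 1*(+1) = 1
acts = 0b1101  # bit i = activation i (LSB = position 0)
wts = 0b1001   # bit i = 1 if weight i is +1
print(binary_dot(acts, wts, 4))  # prints 1
```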