STKD: Distilling Knowledge From Synchronous Teaching for Efficient Model Compression.

Knowledge distillation (KD) transfers discriminative knowledge from a large and complex model (the teacher) to a smaller and faster one (the student). Existing advanced KD methods, limited to fixed feature-extraction paradigms that capture the teacher's structural knowledge to guide the training of the student, often fail to transfer comprehensive knowledge to the student. To this end, in this article, we propose a new approach, synchronous teaching knowledge distillation (STKD), which integrates online teaching and offline teaching to transfer rich and comprehensive knowledge to the student. In the online learning stage, a blockwise unit is designed to distill intermediate-level and high-level knowledge, enabling bidirectional guidance between the teacher and student networks. This intermediate-level interaction provides additional supervisory information to the student network and helps improve the quality of the final predictions. In the offline learning stage, STKD applies a pretrained teacher that supplies prior knowledge to further improve performance and accelerate training. Because the networks are trained simultaneously, the student acquires multilevel and comprehensive knowledge by combining online and offline teaching, thereby uniting the advantages of different KD strategies. Experimental results on the SVHN, CIFAR-10, CIFAR-100, and ImageNet ILSVRC 2012 datasets show that the proposed method achieves significant improvements over state-of-the-art methods, particularly in the trade-off between accuracy and model size. Code for STKD is provided at https://github.com/nanxiaotong/STKD.
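The abstract describes the mechanism only at a high level. The sketch below illustrates, in PyTorch, how a loss combining offline distillation (from a frozen, pretrained teacher), online mutual distillation (between a co-trained teacher and student), and intermediate-level feature matching could be assembled. This is not the authors' implementation (that is at the GitHub link above); the function names, loss weights, and the assumption that blockwise features already share shapes are all hypothetical.

```python
# Minimal sketch of a combined online/offline KD loss (not the STKD code).
import torch
import torch.nn.functional as F


def soft_kl(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened distributions."""
    p_s = F.log_softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(p_s, p_t, reduction="batchmean") * (T * T)


def combined_kd_losses(
    s_logits, t_logits, pre_logits,   # student / co-trained teacher / frozen pretrained teacher
    s_feats, t_feats,                 # lists of blockwise feature maps (assumed shape-aligned)
    labels,
    alpha=0.5, beta=0.5, gamma=1e-3, T=4.0,   # placeholder weights, not values from the paper
):
    # Hard-label supervision for both co-trained networks.
    ce_s = F.cross_entropy(s_logits, labels)
    ce_t = F.cross_entropy(t_logits, labels)

    # Offline teaching: the frozen pretrained teacher guides the student.
    offline = soft_kl(s_logits, pre_logits.detach(), T)

    # Online teaching: bidirectional guidance between co-trained teacher and student.
    online_s = soft_kl(s_logits, t_logits.detach(), T)
    online_t = soft_kl(t_logits, s_logits.detach(), T)

    # Intermediate-level knowledge: match blockwise features
    # (in practice a 1x1 conv adapter would align channel dimensions).
    inter = sum(F.mse_loss(fs, ft.detach()) for fs, ft in zip(s_feats, t_feats))

    loss_student = ce_s + alpha * offline + beta * online_s + gamma * inter
    loss_teacher = ce_t + beta * online_t
    return loss_student, loss_teacher
```

In a training step one would forward the same batch through all three networks, back-propagate the two returned losses through their respective optimizers, and keep the pretrained teacher frozen throughout; the temperature and weighting scheme here are illustrative only.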

Keywords: knowledge distillation; synchronous teaching; model compression; comprehensive knowledge; STKD

Journal Title: IEEE Transactions on Neural Networks and Learning Systems
Year Published: 2022
