Published in 2022 in "IEEE Transactions on Neural Networks and Learning Systems"
DOI: 10.1109/tnnls.2022.3164264
Abstract: Knowledge distillation (KD) transfers discriminative knowledge from a large and complex model (known as teacher) to a smaller and faster one (known as student). Existing advanced KD methods, limited to fixed feature extraction paradigms that…
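For context, a minimal sketch of the classic knowledge-distillation objective (Hinton et al., 2015) that the abstract's teacher-student setup builds on; this is the generic KD loss, not the specific method proposed in this paper:

```python
# Generic KD loss sketch (Hinton et al., 2015), not this paper's method:
# the student matches the teacher's temperature-softened logits in
# addition to the usual cross-entropy on the hard labels.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Combine hard-label cross-entropy with soft-label KL divergence."""
    # Soft targets: KL divergence between temperature-scaled distributions,
    # rescaled by T^2 to keep gradient magnitudes comparable across T.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```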
Keywords: comprehensive knowledge; synchronous teaching; knowledge; model; …