Published in 2019 at "Journal of Signal Processing Systems"
DOI: 10.1007/s11265-018-1418-z
Abstract: Deep neural networks (DNNs) contain a large number of weights and usually require many off-chip memory accesses for inference. Weight compression is a major requirement for on-chip-memory-based implementations of DNNs, which not only…
Keywords: compression; deep neural networks; structured …