Published in 2023 at "IEEE Computer Architecture Letters"
DOI: 10.1109/lca.2023.3275909
Abstract: Deep neural networks (DNNs) require abundant multiply-and-accumulate (MAC) operations. Thanks to DNNs’ ability to accommodate noise, some of the computational burden is commonly mitigated by quantization–that is, by using lower precision floating-point operations. Layer granularity…
Keywords:
training efficiency;
enhancing DNN;
dynamic asymmetric;
architecture …
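The abstract above alludes to trading precision for efficiency in MAC operations. A minimal sketch of that idea, assuming a float16 reduction purely for illustration (the paper's actual precision scheme may differ):

```python
import numpy as np

# Toy MAC (multiply-and-accumulate) in full vs. reduced precision.
# float16 is an assumed example format, not the paper's choice.
rng = np.random.default_rng(0)
a = rng.standard_normal(1024).astype(np.float32)
b = rng.standard_normal(1024).astype(np.float32)

full = np.dot(a, b)                                        # float32 MAC
low = np.dot(a.astype(np.float16), b.astype(np.float16))   # float16 MAC

# Error normalized by the operand norms stays small, which is why
# noise-tolerant DNNs can absorb the precision loss.
err = abs(float(full) - float(low)) / (np.linalg.norm(a) * np.linalg.norm(b))
```

The normalized error is what matters: a DNN layer's output perturbation from low-precision MACs is bounded relative to the operand magnitudes.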
Published in 2022 at "IEEE Transactions on Parallel and Distributed Systems"
DOI: 10.1109/tpds.2022.3178443
Abstract: Most of the existing FL systems focus on a data-parallel architecture where training data are partitioned by samples among several parties. In some real-life applications, however, partitioning by features is also of practical relevance and…
Keywords:
training efficiency;
unbalanced features;
federated learning;
number …
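The feature-partitioned (often called "vertical") setting the abstract describes can be sketched with a linear model: each party holds different columns of the same samples and contributes a partial score. This aggregation scheme is an illustrative assumption, not the paper's actual protocol:

```python
import numpy as np

# Feature-partitioned training sketch: parties share samples (rows)
# but hold disjoint feature subsets (columns).
rng = np.random.default_rng(1)
X = rng.standard_normal((8, 6))  # 8 samples, 6 features
w = rng.standard_normal(6)       # model weights, split across parties

# Party A holds features 0-2; party B holds features 3-5.
partial_a = X[:, :3] @ w[:3]
partial_b = X[:, 3:] @ w[3:]

# A coordinator sums partial scores without seeing raw features;
# for a linear model this matches the centralized computation exactly.
combined = partial_a + partial_b
central = X @ w
```

The equality holds because a linear score decomposes additively over feature blocks; real vertical-FL systems add encryption or secure aggregation on top of this decomposition.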
Published in 2020 at "IEEE Transactions on Vehicular Technology"
DOI: 10.1109/tvt.2020.2982178
Abstract: Beamforming (BF) training is crucial to establishing reliable millimeter-wave communication connections between stations (STAs) and an access point. In IEEE 802.11ad BF training protocol, all STAs contend for limited BF training opportunities, i.e., associated BF…
Keywords:
training efficiency;
802.11ad;
performance;
training …
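The contention problem the abstract raises can be illustrated with a toy slotted model: STAs randomly pick one of a limited number of beamforming-training slots, and a slot is wasted whenever two or more STAs collide. The slot count and uniform random choice are assumptions for illustration, not the 802.11ad protocol's exact mechanics:

```python
import random

def successful_stas(n_stas, n_slots, seed=0):
    """Count STAs that win a training slot uncontested."""
    rng = random.Random(seed)
    picks = [rng.randrange(n_slots) for _ in range(n_stas)]
    # A slot yields a successful training opportunity only if
    # exactly one STA chose it; collisions waste the slot.
    counts = {}
    for s in picks:
        counts[s] = counts.get(s, 0) + 1
    return sum(1 for s in picks if counts[s] == 1)
```

As the number of contending STAs grows past the number of slots, collisions dominate and training efficiency drops, which is the regime the paper targets.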