Articles with "gradient aggregation" as a keyword
Data and Channel-Adaptive Sensor Scheduling for Federated Edge Learning via Over-the-Air Gradient Aggregation

Published in 2022 at "IEEE Internet of Things Journal"

DOI: 10.1109/jiot.2021.3096570

Abstract: Over-the-air gradient aggregation and data-aware scheduling have recently drawn great attention due to their outstanding performance in improving communication efficiency for federated edge learning applications. However, in this case, the estimated gradient suffers from the…

Keywords: federated edge; gradient aggregation; data channel; air gradient

Channel-Estimation-Free Gradient Aggregation for Over-the-Air SIMO Federated Learning

Published in 2024 at "IEEE Wireless Communications Letters"

DOI: 10.1109/lwc.2024.3382498

Abstract: We study the gradient aggregation in over-the-air federated learning (OA-FL), where the parameter server (PS) uses a combining vector to estimate the aggregated gradient. We propose a novel channel-estimation-free (CE-Free) gradient aggregation scheme for OA-FL,…
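To make the over-the-air aggregation idea above concrete: each device transmits its gradient as an analog signal, the multiple-access channel superposes the signals, and the PS scales the received sum to estimate the average gradient. The following is a toy sketch of that superposition principle only, not the CE-Free scheme from the paper; the unit channels, noise level, and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 4, 8  # number of devices, gradient dimension (illustrative values)
grads = rng.standard_normal((K, d))  # one local gradient per device

# Idealized flat unit channels h_k = 1 (an assumption; real channels fade
# and would need power control or, as in CE-Free schemes, a combining vector).
h = np.ones(K)
noise = 0.01 * rng.standard_normal(d)  # additive receiver noise

# The channel itself computes the sum: received = sum_k h_k * g_k + n.
received = h @ grads + noise

# The PS scales the superposed signal to estimate the mean gradient.
estimate = received / K
true_mean = grads.mean(axis=0)  # estimation error is on the order of noise/K
```

The key point the sketch illustrates is that the PS never sees individual gradients, only their superposition, which is why channel knowledge (or a channel-estimation-free combining strategy) matters.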

Keywords: aggregation air; gradient aggregation; federated learning; aggregation

Boosting the Transferability of Adversarial Examples Through Gradient Aggregation

Published in 2025 at "IEEE Transactions on Information Forensics and Security"

DOI: 10.1109/tifs.2025.3574989

Abstract: Deep neural networks (DNNs) have been demonstrated to be vulnerable to meticulously crafted adversarial examples. Transfer-based attacks, which do not require access to the target model’s information, have emerged as a substantial threat to the deployment of…

Keywords: loss; gradient aggregation; adversarial examples; aware loss

ALEPH: Accelerating Distributed Training With eBPF-Based Hierarchical Gradient Aggregation

Published in 2024 at "IEEE/ACM Transactions on Networking"

DOI: 10.1109/tnet.2024.3404999

Abstract: Distributed training involves two important operations: gradient transmission and gradient aggregation, which consume massive bandwidth and computing resources. To achieve efficient distributed training, one must overcome two critical challenges: heterogeneity of bandwidth resources and…
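The hierarchical aggregation mentioned in this abstract can be pictured as a two-level reduction: workers send gradients to local aggregators, which forward partial sums upward for the global average. This is a minimal numeric sketch of that idea under assumed group sizes; it does not model eBPF or the ALEPH system itself.

```python
import numpy as np

rng = np.random.default_rng(1)
workers, d = 8, 4  # illustrative worker count and gradient dimension
grads = rng.standard_normal((workers, d))  # one gradient per worker

# Hypothetical two-level topology: 4 local aggregators with 2 workers each
# (in ALEPH the local step would run in-kernel via eBPF; here it is plain code).
groups = np.split(grads, 4)

# Level 1: each local aggregator sums its workers' gradients.
partial_sums = [g.sum(axis=0) for g in groups]

# Level 2: the root combines partial sums into the global average.
global_avg = np.sum(partial_sums, axis=0) / workers
```

Hierarchical summation produces exactly the same global average as a flat all-to-one reduction, but each upstream link carries one partial sum instead of every worker's gradient, which is the bandwidth saving such systems target.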

Keywords: gradient aggregation; eBPF; distributed training