In this letter, we consider a federated edge learning system with personalized differential privacy (DP). Each edge device adds DP noise to its local machine learning (ML) model updates to prevent the private information contained in those updates from being obtained by the edge server. However, the noise perturbation can degrade the ML model performance. We aim to optimize the trade-off between the ML model performance, measured by the global loss, and the privacy preservation. We first derive closed-form expressions for the global loss and the privacy leakage. The loss and leakage are then jointly minimized by optimizing the DP noise scales and the numbers of local updates at the edge devices. Numerical results show that the proposed scheme achieves a better loss-leakage trade-off than conventional methods.
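The perturbation step described above can be sketched as follows. This is a minimal illustration, not the letter's actual mechanism: it assumes a Gaussian mechanism with per-device (personalized) noise scales and norm clipping, and the function name `perturb_update` and its parameters are hypothetical.

```python
import numpy as np

def perturb_update(update, clip_norm, noise_scale, rng=None):
    """Clip a local model update to bound its sensitivity, then add
    Gaussian DP noise. Each device may use its own noise_scale
    (personalized DP). Illustrative sketch only."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(update)
    # Scale the update down so its L2 norm is at most clip_norm.
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Add zero-mean Gaussian noise before sending to the edge server.
    return clipped + rng.normal(0.0, noise_scale, size=update.shape)

# Example: a device with a relatively loose privacy requirement
# uses a small noise scale; a stricter device would use a larger one.
update = np.array([0.5, -1.2, 0.3])
noisy_update = perturb_update(update, clip_norm=1.0, noise_scale=0.8)
```

A larger `noise_scale` gives stronger privacy (less leakage) but a noisier aggregate and hence a higher global loss, which is exactly the trade-off the letter optimizes.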