Medical centers and healthcare providers have concerns about, and hence restrictions on, sharing data with external collaborators. Federated learning, as a privacy-preserving method, learns a site-independent model in a distributed, collaborative fashion without direct access to patient-sensitive data. The federated approach relies on decentralized data distributed across various hospitals and clinics. The collaboratively learned global model is expected to achieve acceptable performance for each individual site. However, existing methods focus on minimizing the average of the aggregated loss functions, leading to a biased model that performs well for some hospitals while exhibiting undesirable performance for other sites. In this paper, we improve model "fairness" among participating hospitals by proposing a novel federated learning scheme called Proportionally Fair Federated Learning, Prop-FFL for short. Prop-FFL is based on a novel optimization objective function that reduces performance variation among participating hospitals, encouraging a fair model with more uniform performance across sites. We validate the proposed Prop-FFL on two histopathology datasets as well as two general datasets to shed light on its inherent capabilities. The experimental results suggest promising performance in terms of learning speed, accuracy, and fairness.
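The contrast between average-loss minimization and proportional fairness can be illustrated with a toy aggregation example. Note this is only a generic sketch of the proportional-fairness principle (maximizing the sum of log-utilities, whose gradient weights each site by the inverse of its utility), not the paper's actual Prop-FFL objective; the utility values and site count below are hypothetical.

```python
import numpy as np

# Hypothetical per-site utilities (e.g., validation accuracy) for 4 hospitals.
utilities = np.array([0.95, 0.90, 0.88, 0.60])

# Standard FedAvg-style aggregation: uniform weights, blind to disparity.
fedavg_weights = np.full_like(utilities, 1.0 / len(utilities))

# Proportional fairness maximizes sum(log(u_i)); its gradient scales each
# site's contribution by 1/u_i, so under-performing sites gain influence.
pf_weights = (1.0 / utilities) / np.sum(1.0 / utilities)

print(fedavg_weights)        # uniform: [0.25 0.25 0.25 0.25]
print(pf_weights.round(3))   # the weakest site (0.60) receives the largest weight
```

In this sketch the hospital with the lowest utility contributes the most to the next aggregation round, which is the mechanism by which a proportionally fair objective pulls per-site performance toward uniformity.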