Generalized linear mixed models (GLMMs) are among the most widely used classes of statistical models, particularly in the medical domain. Training such models in a collaborative setting often entails privacy risks. Standard privacy-preserving mechanisms such as differential privacy can mitigate these risks during training; however, experimental evidence suggests that adding differential privacy to GLMM training can cause significant utility loss, rendering the model impractical for real-world use. The class of generalized linear mixed models that lose their usability under differential privacy therefore requires a different approach to privacy-preserving training. In this work, we propose a value-blind training method for generalized linear mixed models in a collaborative setting. Under our method, the central server optimizes the model parameters without ever accessing the raw training data or the intermediate computation values: the intermediate values that the collaborating parties share with the central server are encrypted using homomorphic encryption. Experiments on multiple datasets suggest that models trained by our method achieve very low error rates while preserving privacy. To the best of our knowledge, this is the first work to perform a systematic privacy analysis of generalized linear mixed model training in a collaborative setting.
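The abstract does not specify the protocol or the particular homomorphic scheme used. As a rough illustration of the general idea, a server aggregating encrypted intermediate values from collaborating parties without ever decrypting them, the following sketch uses textbook Paillier encryption (an additively homomorphic scheme) with deliberately tiny, insecure parameters. The party values and the aggregation step are illustrative assumptions, not the paper's actual method.

```python
import math
import random

# Textbook Paillier with toy primes -- illustration only, NOT secure.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                      # standard choice of generator
lam = math.lcm(p - 1, q - 1)   # Carmichael function lambda(n)
mu = pow(lam, -1, n)           # valid because g = n + 1

def encrypt(m: int) -> int:
    """Encrypt plaintext m under the public key (n, g)."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Decrypt ciphertext c with the private key (lam, mu)."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Each party encrypts a local intermediate value (hypothetical numbers,
# e.g. components of a gradient or sufficient statistic).
party_values = [17, 25, 8]
ciphertexts = [encrypt(m) for m in party_values]

# The server multiplies ciphertexts, which adds the underlying
# plaintexts, without ever seeing the values themselves.
aggregate = 1
for c in ciphertexts:
    aggregate = (aggregate * c) % n2

assert decrypt(aggregate) == sum(party_values)  # 50
```

Only the key holder can decrypt the aggregate; the server operates purely on ciphertexts, which matches the value-blind property the abstract describes at a high level.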