In recent years, machine learning as a service (MLaaS) has brought considerable convenience to our daily lives. However, these services raise the issue of leaking users’ sensitive attributes, such as race, when provided through the cloud. The present work overcomes this issue by proposing an innovative privacy-preserving approach called privacy-preserving class overlap (PPCO), which incorporates both a Wasserstein generative adversarial network and the idea of class overlapping to obfuscate data, yielding better resilience against attribute-inference attacks (i.e., malicious inference of users’ sensitive attributes). Experiments show that the proposed method can be employed to enhance current state-of-the-art works and achieve a superior privacy–utility trade-off. Furthermore, the proposed method is shown to be less susceptible to the influence of imbalanced classes in the training data. Finally, we provide a theoretical analysis of the performance of our proposed method to give a flavour of the gap between theoretical and empirical performance.
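To give a rough flavour of the class-overlap idea (this is a simplified sketch, not the authors' PPCO implementation, and omits the WGAN component entirely): obfuscation can be thought of as blending each sample's features toward a different sensitive-attribute class so that the classes overlap and an attribute-inference attacker is confused. The function name, the centroid-blending rule, and the `alpha` mixing weight below are all illustrative assumptions:

```python
import numpy as np

def obfuscate_by_class_overlap(X, sensitive, alpha=0.5, rng=None):
    """Illustrative sketch: blend each sample toward the centroid of a
    *different* sensitive-attribute class, increasing class overlap.

    X         : (n, d) feature matrix
    sensitive : (n,) sensitive-attribute labels (e.g., encoded race)
    alpha     : mixing weight; larger alpha -> more overlap (more privacy,
                typically less utility)
    """
    rng = np.random.default_rng() if rng is None else rng
    classes = np.unique(sensitive)
    centroids = {c: X[sensitive == c].mean(axis=0) for c in classes}
    X_obf = X.copy()
    for i, c in enumerate(sensitive):
        other = rng.choice(classes[classes != c])  # pick a different class
        X_obf[i] = (1 - alpha) * X[i] + alpha * centroids[other]
    return X_obf

# Toy data: two well-separated sensitive-attribute classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
s = np.array([0] * 50 + [1] * 50)
X_obf = obfuscate_by_class_overlap(X, s, alpha=0.5, rng=rng)

# After obfuscation the class means move toward each other, so an
# attribute classifier has a harder target.
gap_before = np.linalg.norm(X[s == 0].mean(0) - X[s == 1].mean(0))
gap_after = np.linalg.norm(X_obf[s == 0].mean(0) - X_obf[s == 1].mean(0))
```

In the actual PPCO approach, a Wasserstein GAN generates the obfuscated representations rather than a fixed centroid-blending rule, which is what lets the method trade privacy against utility more gracefully.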