Personal data are increasingly used in data-driven applications to improve quality of life. However, preserving the privacy of personal data while sharing them with analysts and researchers has become an essential requirement for data owners (hospitals, banks, insurance companies, etc.). The existing literature on privacy preservation does not precisely quantify the vulnerability of each item among a user's attributes, which leads to explicit privacy disclosures and poor data utility in published-data analytics. In this work, we propose and implement an automated way of quantifying the vulnerability of each item among the attributes using a machine learning (ML) technique, thereby significantly preserving user privacy without degrading data utility. Our work addresses four technical problems in the privacy preservation field: optimizing the privacy-utility trade-off; providing privacy guarantees (i.e., safeguards against identity and sensitive-information disclosures) in imbalanced data (or clusters); avoiding over-anonymization; and restoring the applicability of prior privacy models when data have skewed distributions. Experiments on two real-world benchmark datasets demonstrate the feasibility of the concept in practical scenarios. Compared with state-of-the-art (SOTA) methods, the proposed method effectively preserves the equilibrium between utility and privacy in the anonymized data. Furthermore, our method can contribute significantly towards responsible data science (extracting the knowledge enclosed in data without violating subjects' privacy) by limiting excessive changes to the data during anonymization.
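The abstract does not specify the ML model or the exact vulnerability measure, so the following is only a minimal, hypothetical Python sketch of the core idea it describes: train a classifier on the quasi-identifier attributes, treat its prediction confidence for the sensitive attribute as a per-record vulnerability score, and suppress quasi-identifier values only where that score is high, leaving low-risk records untouched to preserve utility. The function names, the classifier choice, and the threshold are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: ML-scored item vulnerability driving selective anonymization.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def estimate_item_vulnerability(df, quasi_identifiers, sensitive_attr):
    """Score each record's risk that its quasi-identifier values reveal the
    sensitive attribute (a proxy for disclosure vulnerability)."""
    X = pd.get_dummies(df[quasi_identifiers])
    y = df[sensitive_attr]
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)
    # The model's prediction confidence serves as the vulnerability score:
    # records whose quasi-identifiers make the sensitive value easy to infer score high.
    return model.predict_proba(X).max(axis=1)

def anonymize_high_risk(df, quasi_identifiers, sensitive_attr, threshold=0.9):
    """Suppress quasi-identifier values only for records whose estimated
    vulnerability exceeds the threshold, preserving utility elsewhere."""
    out = df.copy()
    risk = estimate_item_vulnerability(out, quasi_identifiers, sensitive_attr)
    out[quasi_identifiers] = out[quasi_identifiers].astype(object)
    out.loc[risk > threshold, quasi_identifiers] = "*"
    return out

# Usage on a toy table (illustrative data, not from the paper):
df = pd.DataFrame({
    "age":     [34, 35, 34, 70, 71, 34],
    "zip":     ["130", "130", "131", "999", "999", "130"],
    "disease": ["flu", "flu", "flu", "cancer", "cancer", "flu"],
})
print(anonymize_high_risk(df, ["age", "zip"], "disease"))
```

In this sketch the threshold controls the trade-off the abstract targets: raising it anonymizes fewer items (more utility, less privacy), lowering it anonymizes more (more privacy, less utility).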