Protecting Regression Models With Personalized Local Differential Privacy

The equation-solving model extraction attack is an intuitively simple yet devastating attack that steals confidential information from regression models through a sufficient number of queries. Complete mitigation is difficult, so the development of countermeasures focuses on degrading the attack's effectiveness as much as possible without losing model utility. We investigate a novel personalized local differential privacy mechanism to defend against this attack. We obfuscate the model by adding high-dimensional Gaussian noise to its coefficients, and our solution adaptively produces the noise to protect the model on the fly. We thoroughly evaluate the performance of our mechanism on real-world datasets. The experiments show that the proposed scheme outperforms the existing differential-privacy-enabled solution: an attacker needs four times more queries to achieve the same extraction result. We also plan to release the relevant code to the community for further research.
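
The abstract only sketches the defense at a high level: the released regression coefficients are perturbed with high-dimensional Gaussian noise whose scale is chosen adaptively. The Python sketch below illustrates that general idea under assumptions of our own; the function names (fit_linear_regression, gaussian_perturb) and the epsilon, delta, and sensitivity values are hypothetical placeholders, and the paper's personalized, adaptive noise calibration is not reproduced here.

# Minimal sketch (assumptions, not the authors' mechanism): fit an ordinary
# least-squares model, then release coefficients perturbed by isotropic
# Gaussian noise scaled by a per-query privacy budget epsilon.
import numpy as np

def fit_linear_regression(X, y):
    """Ordinary least-squares fit; returns the coefficient vector (intercept first)."""
    X_aug = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend an intercept column
    coef, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
    return coef

def gaussian_perturb(coef, epsilon, delta=1e-5, sensitivity=1.0, rng=None):
    """Add Gaussian noise to the coefficients.

    sigma follows the classical Gaussian-mechanism calibration; the paper's
    personalized scheme would instead pick the scale adaptively per query.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return coef + rng.normal(0.0, sigma, size=coef.shape)

# Toy usage: each query is answered with a differently perturbed model.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.7]) + 0.3 + rng.normal(scale=0.1, size=200)

true_coef = fit_linear_regression(X, y)
for epsilon in (0.5, 1.0, 5.0):  # smaller epsilon -> stronger obfuscation
    noisy_coef = gaussian_perturb(true_coef, epsilon, rng=rng)
    print(epsilon, np.round(noisy_coef, 3))

A smaller epsilon yields noisier coefficients, which is the trade-off the abstract alludes to: stronger obfuscation forces the equation-solving attacker to issue more queries, at some cost to the utility of the released model.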

Keywords: regression models; attack; model; personalized local; differential privacy

Journal Title: IEEE Transactions on Dependable and Secure Computing
Year Published: 2023
