
A Method of Information Protection for Collaborative Deep Learning under GAN Model Attack



Deep learning is widely used in the medical field owing to its high accuracy in medical image classification and other biological applications. Under collaborative deep learning, however, there is a serious risk of information leakage: an attacker can use a deep convolutional generative network to defeat the privacy protection of the shared model, and the consequences of such leakage are especially severe for medical data. This paper proposes a privacy protection method based on deep convolutional generative adversarial networks (DCGAN) to protect the information used in collaborative deep learning training and to enhance training stability. The proposed method encrypts the deep network parameters during transmission. By planting buried points in the network to detect a generative adversarial network (GAN) attack and then adjusting the training parameters, training based on the GAN model attack is rendered invalid and the information is effectively protected.
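The abstract outlines a three-part protocol: protected parameter transmission, buried-point detection of a GAN attack, and invalidating the attacker's training. A minimal sketch of the first two parts is below. All names (`SECRET_KEY`, `BURIED_POINT`, the function names) are hypothetical, an HMAC integrity tag stands in for the unspecified encryption scheme, and the "buried point" is modeled as a planted canary label that honest participants never query; the paper's actual mechanisms may differ.

```python
import hashlib
import hmac
import json

# Hypothetical pre-shared key between collaborating trainers.
SECRET_KEY = b"shared-secret"

# Hypothetical buried-point label: planted in the model but never
# used by honest participants, so any query for it is suspicious.
BURIED_POINT = "canary_42"


def protect_params(params):
    """Stand-in for the paper's encrypted transmission: serialize the
    parameter update and attach an HMAC tag so tampering is detectable."""
    payload = json.dumps(params, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}


def verify_params(message):
    """Server-side check that a received parameter update is intact."""
    expected = hmac.new(SECRET_KEY, message["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])


def detect_gan_attack(requested_labels):
    """Flag a participant whose queries touch the buried-point label,
    the sketch's proxy for GAN-style probing of other clients' data."""
    return BURIED_POINT in requested_labels
```

On detection, the method would then adjust the training parameters so that the attacker's GAN receives uninformative gradients; that step depends on the training loop and is not sketched here.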

Keywords: protection; method; collaborative deep; deep learning; information

Journal Title: IEEE/ACM Transactions on Computational Biology and Bioinformatics
Year Published: 2021



