
Power Law in Deep Neural Networks: Sparse Network Generation and Continual Learning With Preferential Attachment.


Training deep neural networks (DNNs) typically requires massive computational power. Existing DNNs exhibit low time and storage efficiency due to their high degree of redundancy. In contrast to most existing DNNs, biological and social networks with vast numbers of connections are highly efficient and exhibit scale-free properties indicative of a power law distribution, which can originate from preferential attachment in growing networks. In this work, we ask whether the topology of the best performing DNNs shows a power law similar to that of biological and social networks, and how power law topology can be used to construct well-performing and compact DNNs. We first find that the connectivities of sparse DNNs can be modeled by a truncated power law distribution, one of the variations of the power law. A comparison of different DNNs reveals that the best performing networks correlate highly with the power law distribution. We further model preferential attachment in the evolution of DNNs and find that continual learning in networks that grow with the number of tasks correlates with the process of preferential attachment. These identified power law dynamics in DNNs can guide the construction of highly accurate and compact DNNs based on preferential attachment. Inspired by these findings, we propose two novel applications: evolving optimal DNNs in sparse network generation and continual learning with efficient network growth using power law dynamics. Experimental results indicate that the proposed applications speed up training, save storage, and learn with fewer samples than other well-established baselines. Our demonstration of preferential attachment and power law behavior in well-performing DNNs offers insight into designing and constructing more efficient deep learning models.
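The abstract does not spell out the construction, but the core idea can be illustrated with a minimal Python sketch: growing a sparse connectivity mask for a single layer by preferential attachment, so that input-node degrees approximately follow a truncated power law P(k) ∝ k^(-α) e^(-k/κ). The function name preferential_attachment_mask and its parameters are hypothetical, introduced here only for illustration; this is not the authors' exact algorithm.

    # Illustrative sketch (assumptions, not the paper's method): grow a binary
    # sparse mask for a dense layer, attaching each new output unit to inputs
    # with probability proportional to their current degree ("rich get richer").
    import numpy as np

    def preferential_attachment_mask(n_in, n_out, edges_per_unit, seed=None):
        """Return a boolean n_in x n_out connectivity mask grown by preferential attachment."""
        rng = np.random.default_rng(seed)
        mask = np.zeros((n_in, n_out), dtype=bool)
        degree = np.ones(n_in)                      # uniform pseudo-degree so every input can be chosen at first
        for j in range(n_out):                      # add output units one at a time
            probs = degree / degree.sum()           # attachment probability proportional to degree
            chosen = rng.choice(n_in, size=min(edges_per_unit, n_in),
                                replace=False, p=probs)
            mask[chosen, j] = True
            degree[chosen] += 1                     # preferential attachment: update degrees
        return mask

    # Example: a 784 -> 256 layer with roughly 8 connections per output unit
    mask = preferential_attachment_mask(784, 256, edges_per_unit=8, seed=0)
    print("sparsity:", 1.0 - mask.mean())

Such a mask could be applied element-wise to a weight matrix before training, giving a compact layer whose connectivity is skewed toward a few highly connected "hub" inputs, in the spirit of the scale-free topologies the paper describes.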

Keywords: preferential attachment; power law; power; dnns

Journal Title: IEEE Transactions on Neural Networks and Learning Systems
Year Published: 2022



