Towards an optimal set of initial weights for a Deep Neural Network architecture

Modern neural network architectures are powerful models that have proven effective in many fields, such as imaging and acoustics. However, training these networks is a long-running and time-consuming process. To accelerate training, we propose a two-stage approach based on data analysis and built around the concept of gravity centers. The neural network is first trained on reduced data represented by a set of centroids of the original data points; the learned weights are then used to initialize a second training phase over the full-blown data. Because designing a deep neural network is extremely difficult and the primary objective is high performance, we apply the Taguchi method to select good values for the factors required to build the proposed architecture.
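The following is a minimal sketch of the two-stage idea described in the abstract, not the authors' implementation. The library choices (scikit-learn's KMeans and MLPClassifier), the per-class clustering strategy, and all parameter values are illustrative assumptions.

```python
# Sketch: train on class-wise centroids first, then reuse those weights
# to initialize training on the full data. All settings are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# Stage 1: replace each class by a small set of centroids (gravity centers).
centroids, centroid_labels = [], []
for cls in np.unique(y):
    km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(X[y == cls])
    centroids.append(km.cluster_centers_)
    centroid_labels.append(np.full(50, cls))
X_small = np.vstack(centroids)
y_small = np.concatenate(centroid_labels)

# Train the network on the reduced data; warm_start=True makes the next
# fit() call continue from these weights instead of reinitializing them.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=200,
                    warm_start=True, random_state=0)
net.fit(X_small, y_small)

# Stage 2: continue training from the learned weights on the full data.
net.fit(X, y)
print("accuracy on training data:", net.score(X, y))
```

The design intuition is that the centroid set is tiny compared to the full dataset, so the first phase is cheap and leaves the weights near a reasonable region of the loss surface before the expensive second phase begins.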

Keywords: deep neural network; optimal set of initial weights

Journal Title: Neural Network World
Year Published: 2019
