Articles with "network training" as a keyword

MRI Contrast Enhancement Synthesis Using Cascade Networks with Local Supervision.

Published in 2022 at "Medical physics"

DOI: 10.1002/mp.15578

Abstract: PURPOSE: Gadolinium-based contrast agents (GBCAs) are widely administered in MR imaging for diagnostic studies and treatment planning. Although GBCAs are generally thought to be safe, various health and environmental concerns have been raised recently…

Keywords: contrast; enhanced images; network training; tumor …
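
The abstract is truncated before the method, but the title names the technique. A minimal sketch of what a cascade with local supervision commonly looks like: each stage's intermediate output is also trained directly against the contrast-enhanced target. The stage widths, the L1 loss, and the weighting alpha are assumptions, not the paper's design:

```python
# Hypothetical cascade with per-stage (local) supervision; layer sizes,
# loss choice, and alpha are assumptions, not taken from the paper.
import torch
import torch.nn as nn

class Stage(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

stage1, stage2 = Stage(), Stage()
opt = torch.optim.Adam(list(stage1.parameters()) + list(stage2.parameters()), lr=1e-4)
loss_fn, alpha = nn.L1Loss(), 0.5

pre = torch.randn(4, 1, 64, 64)    # pre-contrast input (dummy tensors)
post = torch.randn(4, 1, 64, 64)   # contrast-enhanced target (dummy tensors)

mid = stage1(pre)                  # intermediate synthesis
out = stage2(mid)                  # refinement by the next cascade stage
# Local supervision: the intermediate output gets its own loss term,
# so each stage receives a direct training signal.
loss = loss_fn(out, post) + alpha * loss_fn(mid, post)
opt.zero_grad(); loss.backward(); opt.step()
```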

A Levy flight-based grey wolf optimizer combined with back-propagation algorithm for neural network training

Published in 2017 at "Neural Computing and Applications"

DOI: 10.1007/s00521-017-2952-5

Abstract: In the present study, a new algorithm is developed for neural network training by combining a gradient-based and a meta-heuristic algorithm. The new algorithm benefits from simultaneous local and global search, eliminating the problem of…

Keywords: search; grey wolf; global search; network training …
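
The combination the abstract describes (meta-heuristic global search plus gradient-based local search) can be sketched compactly. The following is a toy illustration on a quadratic stand-in for the training loss, not the paper's algorithm; the Levy exponent, step sizes, and schedule are all assumptions:

```python
# Toy hybrid of grey wolf optimizer (GWO) moves, Levy-flight perturbation,
# and gradient (back-propagation-style) steps; all constants are assumptions.
import math
import numpy as np

rng = np.random.default_rng(0)

def loss(w):                       # quadratic stand-in for a network loss
    return np.sum((w - 3.0) ** 2)

def grad(w):
    return 2.0 * (w - 3.0)

def levy_step(dim, beta=1.5):      # Mantegna's algorithm for Levy flights
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

dim, n_wolves, T = 10, 8, 200
wolves = rng.uniform(-5, 5, (n_wolves, dim))
for t in range(T):
    a = 2 * (1 - t / T)                       # exploration factor decays to 0
    order = np.argsort([loss(w) for w in wolves])
    leaders = wolves[order[:3]]               # alpha, beta, delta wolves
    for i in range(n_wolves):
        cand = np.zeros(dim)
        for lead in leaders:                  # standard GWO position update
            A = a * (2 * rng.random(dim) - 1)
            C = 2 * rng.random(dim)
            cand += lead - A * np.abs(C * lead - wolves[i])
        wolves[i] = cand / 3 + 0.01 * levy_step(dim)   # global: Levy flight
        wolves[i] -= 0.05 * grad(wolves[i])            # local: gradient step
print("best loss:", min(loss(w) for w in wolves))
```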

TSUNAMI: Triple Sparsity-Aware Ultra Energy-Efficient Neural Network Training Accelerator With Multi-Modal Iterative Pruning

Published in 2022 at "IEEE Transactions on Circuits and Systems I: Regular Papers"

DOI: 10.1109/tcsi.2021.3138092

Abstract: This article proposes TSUNAMI, which supports energy-efficient deep-neural-network training. TSUNAMI supports multi-modal iterative pruning to generate zeros in activations and weights. A tile-based dynamic activation pruning unit and a weight-memory-shared pruning unit…

Keywords: neural network; sparsity; energy; network training …
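
TSUNAMI itself is a hardware accelerator, but the two pruning modes the abstract names (iterative weight pruning and dynamic activation pruning) can be illustrated in software. A minimal sketch; the sparsity schedule, threshold, and shapes are assumptions:

```python
# Software sketch of the two sparsity sources the abstract names;
# the actual contribution is a hardware accelerator that exploits them.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(256, 256))

def prune_weights(W, sparsity):
    """Zero the smallest-magnitude weights to reach the target sparsity."""
    k = int(sparsity * W.size)
    thresh = np.partition(np.abs(W).ravel(), k)[k]
    return np.where(np.abs(W) < thresh, 0.0, W)

def prune_activations(x, thresh=0.1):
    """Dynamically zero small activations before the next layer's MVM."""
    return np.where(np.abs(x) < thresh, 0.0, x)

for step, target in enumerate([0.2, 0.4, 0.6]):   # iterative schedule (assumed)
    W = prune_weights(W, target)
    x = prune_activations(np.maximum(rng.normal(size=256), 0))  # ReLU output
    y = W @ x          # zeros in W and x are what the hardware can skip
    print(f"round {step}: weight sparsity={np.mean(W == 0):.2f}, "
          f"activation sparsity={np.mean(x == 0):.2f}")
```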

Neural Network Training on In-Memory-Computing Hardware With Radix-4 Gradients

Published in 2022 at "IEEE Transactions on Circuits and Systems I: Regular Papers"

DOI: 10.1109/tcsi.2022.3185556

Abstract: Deep learning training involves a large number of operations, dominated by high-dimensionality matrix-vector multiplies (MVMs). This has motivated hardware accelerators that enhance compute efficiency, but data movement and memory access are proving…

Keywords: neural network; hardware; training memory; network training …
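
The abstract truncates before the scheme's details; what follows is one reading of "radix-4 gradients", sketched as a base-4 digit decomposition so each in-memory MVM pass operates on a two-bit digit. The digit count and the integer pre-quantization step are assumptions:

```python
# Hypothetical radix-4 decomposition (one reading of the abstract, not the
# paper's exact scheme): integer-quantized gradients are split into base-4
# digits, each cheap enough for a low-precision in-memory MVM pass, and the
# per-digit results are recombined with powers of four.
import numpy as np

def to_radix4_digits(x, n_digits=4):
    """Decompose non-negative integers into n_digits base-4 digits (LSD first)."""
    digits = []
    for _ in range(n_digits):
        digits.append(x % 4)
        x //= 4
    return digits

g = np.array([13, 2, 37, 0])           # integer-quantized |gradients| (assumed)
digits = to_radix4_digits(g.copy())    # copy: the decomposition mutates its input
recon = sum(d * 4 ** i for i, d in enumerate(digits))
print(digits)      # per-pass operands, each digit in {0, 1, 2, 3}
print(recon)       # recombined values equal the originals: [13 2 37 0]
```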

Deep Neural Network Training with Distributed K-FAC

Published in 2022 at "IEEE Transactions on Parallel and Distributed Systems"

DOI: 10.1109/tpds.2022.3161187

Abstract: Scaling deep neural network training to more processors and larger batch sizes is key to reducing end-to-end training time; yet, maintaining comparable convergence and hardware utilization at larger scales is a challenge. Increases in training…

Keywords: neural network; training; deep neural; training distributed …
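
K-FAC itself is well documented: it approximates each layer's Fisher information block as a Kronecker product of an input-activation factor A and an output-gradient factor S, so preconditioning reduces to two small matrix solves per layer. A minimal single-layer sketch; the paper's contribution, distributing these factor computations across workers, is not shown, and the damping value is an assumption:

```python
# Single-layer K-FAC preconditioning: Fisher block ≈ A ⊗ S, so the
# preconditioned gradient is A^{-1} dW S^{-1}. Shapes and damping assumed.
import numpy as np

rng = np.random.default_rng(2)
n, d_in, d_out = 128, 64, 32
a = rng.normal(size=(n, d_in))        # layer inputs over a batch
g = rng.normal(size=(n, d_out))       # gradients w.r.t. layer outputs
dW = a.T @ g / n                      # ordinary gradient of W (d_in x d_out)

A = a.T @ a / n                       # input Kronecker factor
S = g.T @ g / n                       # output Kronecker factor
damp = 1e-2                           # Tikhonov damping (assumed value)
precond = np.linalg.solve(A + damp * np.eye(d_in), dW)        # A^{-1} dW
precond = np.linalg.solve(S + damp * np.eye(d_out), precond.T).T  # ... S^{-1}

W = rng.normal(size=(d_in, d_out))
W -= 0.1 * precond                    # preconditioned SGD step
```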

Decoupled neural network training with re-computation and weight prediction

Published in 2023 at "PLOS ONE"

DOI: 10.1371/journal.pone.0276427

Abstract: To break the three lockings in the backpropagation (BP) process for neural network training, multiple decoupled learning methods have been investigated recently. These methods either lead to a significant drop in accuracy or suffer from dramatic…

Keywords: computation; neural network; weight prediction; network training …
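
The abstract is cut off, but the weight-prediction idea is simple to state: when a decoupled module computes its forward pass s optimizer steps before its weights are actually updated, it can run on predicted future weights instead of stale ones. A minimal sketch under momentum SGD; the predictor form and constants are assumptions, not the paper's exact method:

```python
# Hypothetical weight predictor for decoupled training: extrapolate the
# momentum-SGD trajectory s steps ahead to compensate for staleness.
import numpy as np

def predict_weights(w, velocity, lr, s):
    """Predict weights s optimizer steps ahead under momentum SGD."""
    return w - s * lr * velocity

w = np.array([1.0, -2.0, 0.5])
velocity = np.array([0.2, -0.1, 0.05])   # running momentum buffer
w_future = predict_weights(w, velocity, lr=0.01, s=3)
print(w_future)   # used for the forward pass in place of the stale weights
```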
Photo by victorfreitas from unsplash

Impact of Asymmetric Weight Update on Neural Network Training With Tiki-Taka Algorithm

Published in 2021 at "Frontiers in Neuroscience"

DOI: 10.3389/fnins.2021.767953

Abstract: Recent progress in novel non-volatile memory-based synaptic device technologies and their feasibility for matrix-vector multiplication (MVM) has ignited active research on implementing analog neural network training accelerators with resistive crosspoint arrays. While significant performance boost…

Keywords: tiki taka; taka algorithm; network training; network …
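
For context, Tiki-Taka trains on resistive crosspoint arrays by accumulating gradient updates on an auxiliary array and periodically transferring them to the main array, which makes its behavior sensitive to how asymmetric the device's up/down conductance steps are. The device model below is a generic saturating-update assumption for illustration, not the paper's measured characteristics:

```python
# Hypothetical asymmetric analog-device update: the conductance step
# shrinks as the weight approaches its bound, with different slopes for
# potentiation (+1) and depression (-1). All constants are assumptions.
import numpy as np

def device_update(w, direction, dw=0.01, wmax=1.0, asym=0.5):
    """direction=+1 potentiates, -1 depresses; steps are state-dependent."""
    if direction > 0:
        return w + dw * (1 - w / wmax)        # saturating up-step
    return w - dw * asym * (1 + w / wmax)     # weaker, asymmetric down-step

w = 0.0
for pulse in [+1, +1, -1, +1, -1, -1]:        # a short train of update pulses
    w = device_update(w, pulse)
print(f"final analog weight: {w:.4f}")        # nonzero drift despite balanced pulses
```

The asymmetry is visible in the final value: three up-pulses and three down-pulses do not cancel, which is exactly the device non-ideality the paper studies in the context of the Tiki-Taka algorithm.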