Abstract In this paper, firstly, a class of deep (or multi-layer) neural networks with polynomial activation functions (polynomial activation neural networks, PANN) is created, and their feed-forward and recurrent architectures, together with the corresponding nonlinear difference models, are explicated. The relationship between PANN and conventional deep neural networks with sigmoid activation functions is discussed briefly by means of Taylor series. Secondly, numerical stability and stabilization of PANN are examined, and stability conditions are derived from bounded-state trajectory inequalities and a small-state linear approximation under a small-parametrization assumption; the implications of this stability analysis coincide with what is already known about neural network pre-training. Thirdly, based on what we term the coverage back-propagation parametrization, pre-training algorithms for PANN with or without activation-function optimization are constructed; activation-function optimization in particular is a new concept of this study, which provides greater learning flexibility in general neural networks. Finally, nonlinear function fitting is numerically illustrated as an application of PANN, revealing the high generalization capability of linear parameter-varying neural algorithms.
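To make the PANN idea and its Taylor-series link to sigmoid networks concrete, the following is a minimal sketch, not the authors' implementation: a feed-forward layer whose activation is a degree-3 polynomial obtained by truncating the Taylor series of the sigmoid around zero, sigma(x) ~ 1/2 + x/4 - x^3/48. All names, the polynomial degree, and the weight scale are assumptions made for illustration.

```python
# Hypothetical sketch of a polynomial activation neural network (PANN) layer;
# not the paper's implementation. The polynomial activation is a truncated
# Taylor expansion of the sigmoid, illustrating the Taylor-series relationship
# mentioned in the abstract.
import numpy as np

def poly_sigmoid(x, coeffs=(0.5, 0.25, 0.0, -1.0 / 48.0)):
    """Polynomial activation: sum_k coeffs[k] * x**k (degree-3 Taylor sigmoid)."""
    return sum(c * x**k for k, c in enumerate(coeffs))

class PolyLayer:
    """One fully connected layer with a polynomial activation (assumed design)."""
    def __init__(self, n_in, n_out, rng=None):
        rng = rng or np.random.default_rng(0)
        # Small initial weights keep the state bounded, loosely reflecting the
        # small-parametrization assumption used in the stability analysis.
        self.W = 0.1 * rng.standard_normal((n_out, n_in))
        self.b = np.zeros(n_out)

    def __call__(self, x):
        return poly_sigmoid(self.W @ x + self.b)

# Usage: a two-layer feed-forward PANN mapping R^3 -> R^2.
if __name__ == "__main__":
    layers = [PolyLayer(3, 8), PolyLayer(8, 2)]
    x = np.array([0.1, -0.2, 0.05])
    for layer in layers:
        x = layer(x)
    print(x)
```

In this sketch the activation coefficients `coeffs` are fixed; the activation-function optimization described in the abstract would presumably treat such coefficients as additional trainable parameters during pre-training.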