It is well known that model-based estimation algorithms (such as the Kalman filter and its variants) outperform non-model-based algorithms [such as least mean square (LMS) and recursive least squares (RLS)] because they exploit the extra information contained in the system dynamics, which can be expressed as a state-space model of the system. However, the computational complexity of model-based algorithms is high. On the other hand, the convergence of model-based least-mean-type algorithms [such as state-space least mean (SSLM) algorithms] is slow and highly dependent on the choice of step size: a larger step size yields faster convergence but a poorer steady-state excess mean-square error (EMSE). To resolve this trade-off, we propose employing the q-calculus to minimize a generalized least-mean cost function. The main advantage of the q-calculus is that it introduces a nonlinear correction term into the adaptation of the state estimate vector. This results in an intelligent adaptation that provides both faster convergence in the initial phase and a lower steady-state EMSE in the final phase. The developed algorithms are termed q-state-space least mean (q-SSLM) algorithms. The convergence of the proposed q-state-space least mean square (q-SSLMS) algorithm is analyzed in both the mean and the mean-square sense. The superiority of the proposed algorithm is demonstrated through several simulations, and its performance is contrasted with that of the well-known Kalman filter. Finally, the theoretical convergence analysis is validated via simulations.
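To make the adaptation idea concrete, the following is a minimal Python sketch of a q-SSLMS-style update, not the paper's exact algorithm. It assumes a known linear state-space model x(k+1) = A x(k), y(k) = C x(k) + v(k), and folds per-state q-parameters into a diagonal scaling G = diag((1 + q)/2), one form commonly reported for q-LMS gradients (setting q = 1 recovers a conventional SSLMS update). The matrices A and C, the step size mu, and the q values are all illustrative assumptions.

```python
import numpy as np

# Assumed linear state-space model: x(k+1) = A x(k), y(k) = C x(k) + v(k)
theta = 0.05
A = 0.999 * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])  # stable rotation (illustrative)
C = np.array([[1.0, 0.0]])                               # scalar observation of first state

rng = np.random.default_rng(0)
mu = 0.1                           # step size (assumed value)
q = np.array([1.8, 1.8])           # per-state q-parameters; q = 1 gives standard SSLMS
G = np.diag((1.0 + q) / 2.0)       # q-gradient scaling (one common q-LMS form)

x_true = np.array([1.0, -0.5])     # true initial state
x_hat = np.zeros(2)                # state estimate

for k in range(400):
    y = C @ x_true + 0.01 * rng.standard_normal()   # noisy measurement
    e = y - C @ x_hat                               # a priori output error
    # q-SSLMS-style step: predict with A, correct along the q-scaled gradient.
    # The q-scaling G > I enlarges the correction, speeding initial convergence.
    x_hat = A @ x_hat + mu * (A @ G @ C.T @ e)
    x_true = A @ x_true                             # propagate the true state

print("final state estimation error:", np.linalg.norm(x_true - x_hat))
```

In this sketch the q-parameters simply scale the gradient direction; the paper's nonlinear correction and any time-varying choice of q (e.g., large q early for speed, q near 1 later for low EMSE) would refine this basic scheme.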