In several practical applications of the LMS algorithm, including certain VLSI implementations, the coefficient adaptation can be performed only after some fixed delay. The resulting algorithm is known in the literature as the delayed LMS (DLMS) algorithm. Previously published analyses of this algorithm are based on mean and moment convergence under the assumption of independence between successive input vectors. These analyses are interesting and give valuable insights into the convergence properties, but from a practical viewpoint they do not guarantee the correct performance of the particular realization with which the user must live. In this paper we consider a normalized version of this algorithm with a decreasing step size μ(n) and prove the almost sure convergence of the nonhomogeneous algorithm, assuming a mixing condition on the input and the satisfaction of a certain law of large numbers.
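To make the setting concrete, the following is a minimal sketch of a normalized delayed LMS update: the error and regressor computed D samples in the past drive the coefficient update at time n, with a decreasing step size μ(n). The function name `delayed_nlms`, the delay value, and the particular step-size schedule μ(n) = μ₀/(1+n)^0.6 are illustrative assumptions, not the specific scheme analyzed in the paper.

```python
import numpy as np

def delayed_nlms(x, d, num_taps=4, delay=5, mu0=0.5, eps=1e-8):
    """Sketch of a normalized delayed LMS (DLMS) adaptive filter.

    The gradient estimate available at time n is the one formed at
    time n - delay, as in hardware pipelines where the update lags
    the filtering. The step size mu(n) decreases with n (the decay
    exponent 0.6 is an illustrative choice, not the paper's).
    """
    n_samples = len(x)
    w = np.zeros(num_taps)
    e = np.zeros(n_samples)
    # Keep past regressor vectors so the delayed pair (x, e) is available.
    X = np.zeros((n_samples, num_taps))
    for n in range(num_taps - 1, n_samples):
        X[n] = x[n - num_taps + 1:n + 1][::-1]   # regressor at time n
        e[n] = d[n] - w @ X[n]                   # a priori error at time n
        k = n - delay                            # delayed index
        if k >= num_taps - 1:
            mu = mu0 / (1 + n) ** 0.6            # decreasing step size mu(n)
            norm = X[k] @ X[k] + eps             # normalization term
            w = w + mu * e[k] * X[k] / norm      # update with delayed data
    return w, e

# Usage: identify a short FIR channel from noisy observations.
rng = np.random.default_rng(0)
h = np.array([1.0, 0.5, -0.3, 0.1])              # unknown system (assumed)
x = rng.standard_normal(20000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = delayed_nlms(x, d, num_taps=4, delay=5, mu0=0.5)
```

The delay means the update uses a stale gradient, which is why stability and convergence arguments for DLMS differ from those for ordinary LMS; the normalization and the decreasing μ(n) are the ingredients the paper's almost sure convergence result relies on.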