An ℓ1-penalized adaptive normalized quasi-Newton algorithm for sparsity-aware generalized eigen-subspace tracking


Abstract This paper presents an ℓ1-penalized extension of the adaptive normalized quasi-Newton algorithm (Nguyen and Yamada, 2013), which was established for the online generalized eigenvalue problem. The proposed extension aims to effectively exploit sparsity as a priori knowledge for efficient subspace tracking in signal processing, and is also motivated by recent sparsity-aware eigenvector analysis in data science, e.g., sparse principal component analysis. For this extension, we newly introduce an ℓ1 penalty into a non-convex criterion which characterizes a generalized eigen-pair as its stationary point. The proposed subspace tracking algorithm is derived by applying a quasi-Newton-type step to the new criterion, followed by a normalization step. A convergence analysis is given for the case of a decaying penalty weight. We also discuss potential applications, e.g., online sparse principal component analysis, obtained by controlling the weight sequence of the ℓ1 penalty. Numerical experiments demonstrate that the proposed algorithm (i) can improve subspace tracking performance even for noisy observations of random vectors whose covariance matrix pencil has a sparse principal generalized eigenvector, and (ii) can promote the interpretability of the estimate of the principal generalized eigenvector.
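The abstract does not reproduce the update equations, but the high-level recipe it describes (a quasi-Newton-type step on an ℓ1-penalized criterion, followed by a normalization step, with a decaying penalty weight) can be illustrated with a rough sketch. Everything below is an illustrative assumption rather than the authors' algorithm: the function names (track_sparse_gev, soft_threshold), the choice of an exponentially weighted covariance estimate, the Rayleigh-quotient-like gradient, the use of B^{-1} as curvature information, and the step-size/forgetting/penalty parameters are all hypothetical.

```python
import numpy as np


def soft_threshold(v, tau):
    """Elementwise soft-thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)


def track_sparse_gev(x_stream, y_stream, dim, step=0.5, forget=0.99, lam0=0.1):
    """Track the principal generalized eigenvector of the pencil (A, B),
    where A ~ E[x x^T] and B ~ E[y y^T] are estimated online.

    Hypothetical sketch: only the overall structure named in the abstract
    (quasi-Newton-type step, l1 shrinkage with a decaying weight, normalization)
    is mimicked; the exact updates of Nguyen and Yamada (2013) and of the
    proposed extension are not reproduced here.
    """
    A = np.eye(dim)
    B = np.eye(dim)
    w = np.ones(dim) / np.sqrt(dim)
    for k, (x, y) in enumerate(zip(x_stream, y_stream), start=1):
        # Exponentially weighted online covariance estimates of the pencil.
        A = forget * A + (1.0 - forget) * np.outer(x, x)
        B = forget * B + (1.0 - forget) * np.outer(y, y)

        # Quasi-Newton-type ascent step on a Rayleigh-quotient-like criterion,
        # using B^{-1} as (approximate) curvature information.
        rho = (w @ A @ w) / (w @ B @ w)
        grad = A @ w - rho * (B @ w)
        w = w + step * np.linalg.solve(B, grad)

        # l1 shrinkage with a decaying weight lam_k -> 0 (cf. the convergence
        # analysis for a decaying penalty weight mentioned in the abstract).
        lam_k = lam0 / k
        w = soft_threshold(w, step * lam_k)

        # Normalization step: enforce w^T B w = 1.
        w = w / np.sqrt(max(w @ B @ w, 1e-12))
    return w
```

As a quick check of the sparsity-promoting effect, one can draw i.i.d. samples x and y from a pencil whose principal generalized eigenvector has only a few nonzero entries, feed them to track_sparse_gev, and compare the support of the returned w with that of the true eigenvector; the soft-thresholding step is one standard way to realize an ℓ1 penalty in an online update, and the decaying weight lam_k mirrors the regime covered by the convergence analysis.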
