The Equivalence between Orthogonal Iterations and Alternating Least Squares

DOI: 10.4236/alamt.2020.102002, pp. 7-21

Keywords: Alternating Least Squares (ALS), Orthogonal Iterations, Equivalence Relations, Low-Rank Approximations


Abstract:

This note explores the relation between two different methods. The first is the Alternating Least Squares (ALS) method for computing a rank-k approximation of a real m×n matrix A. This method has important applications in nonnegative matrix factorization, in matrix completion problems, and in tensor approximations. The second method is called Orthogonal Iterations; it is also known as Subspace Iterations, Simultaneous Iterations, and the block-Power method. Given a real symmetric matrix G, this method computes k dominant eigenvectors of G. To see the relation between the two methods we assume that G = AᵀA. It is shown that in this case the two methods generate the same sequence of subspaces and the same sequence of low-rank approximations. This equivalence provides new insight into the convergence properties of both methods.
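The stated equivalence can be checked numerically. The NumPy sketch below is an illustrative reconstruction, not code from the paper: it builds a test matrix with a clear singular-value gap after the k-th singular value, runs Orthogonal Iterations on G = AᵀA and a plain ALS iteration on A from the same starting subspace, and confirms that both arrive at the same k-dimensional subspace and the same rank-k approximation. The matrix sizes, singular values, and iteration counts are arbitrary choices made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 30, 20, 3

# Test matrix with a clear gap after the k-th singular value,
# so both iterations converge quickly to the dominant subspace.
Uo, _ = np.linalg.qr(rng.standard_normal((m, n)))
Vo, _ = np.linalg.qr(rng.standard_normal((n, n)))
svals = np.array([10.0, 8.0, 6.0] + [1.0 / (i + 1) for i in range(n - k)])
A = Uo @ np.diag(svals) @ Vo.T
G = A.T @ A

# Shared random start: an n x k matrix with orthonormal columns.
V0, _ = np.linalg.qr(rng.standard_normal((n, k)))

# Orthogonal Iterations on G:  V <- orth(G V).
V = V0.copy()
for _ in range(100):
    V, _ = np.linalg.qr(G @ V)

# ALS for min ||A - X Y^T||_F, alternating over X and Y.
# One full sweep maps span(Y) to span(A^T A Y) = span(G Y),
# i.e. exactly one Orthogonal Iterations step.
Y = V0.copy()
for _ in range(100):
    X = np.linalg.solve(Y.T @ Y, (A @ Y).T).T    # X = A Y (Y^T Y)^{-1}
    Y = np.linalg.solve(X.T @ X, (A.T @ X).T).T  # Y = A^T X (X^T X)^{-1}

# Same subspace: all principal angles between span(V) and span(Y) are ~0,
# i.e. all singular values of V^T Q_Y are ~1.
Qy, _ = np.linalg.qr(Y)
cosines = np.linalg.svd(V.T @ Qy, compute_uv=False)
print(np.allclose(cosines, 1.0, atol=1e-8))       # True

# Same low-rank approximation: X Y^T matches the truncated SVD of A.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
Ak_svd = U[:, :k] * S[:k] @ Vt[:k]
print(np.allclose(X @ Y.T, Ak_svd, atol=1e-8))    # True
```

Note that comparing X Yᵀ against the truncated SVD sidesteps the usual non-uniqueness of ALS factors: X and Y are determined only up to an invertible k×k transformation, but their product is not.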

