Canonical polyadic decomposition

Brian Orcutt-Jahns, Joao Rodrigues Lima Junior, Emily Lin, Russell C Rockne, Adina Matache, Sergio Branciamore, Ethan Hung, Andrei S Rodin, Peter P Lee, Aaron S Meyer

Before analysis, we reorganized our measurements into a third- or fourth-order tensor, i.e., a three- or four-dimensional array with axes representing the parameters over which the profiling was conducted. Within the CP decomposition model, a tensor is described as a sum of tensor products (⊗) of rank-one components that represent the contribution of each mode. For instance, in the three-mode case:

$$\mathcal{X} \approx \sum_{r=1}^{R} a_r \otimes b_r \otimes c_r$$

where a_r, b_r, and c_r are the rth columns of the factor matrices A, B, and C, which together summarize how each pattern is represented across the three dimensions.
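As a concrete illustration of this model, the sketch below (a minimal Python/NumPy example, not the authors' code; the names cp_reconstruct, A, B, and C are ours) rebuilds a third-order tensor from factor matrices with R columns by summing rank-one outer products:

```python
import numpy as np

def cp_reconstruct(A, B, C):
    """Rebuild the tensor as a sum of R rank-one terms, one per factor column."""
    R = A.shape[1]
    X = np.zeros((A.shape[0], B.shape[0], C.shape[0]))
    for r in range(R):
        # Outer product of the r-th columns of A, B, and C.
        X += np.einsum("i,j,k->ijk", A[:, r], B[:, r], C[:, r])
    return X
```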

While many algorithms exist for deriving CP factorizations, we applied alternating least squares (ALS), wherein each mode is iteratively solved by least squares. As an iterative procedure, ALS must be initialized with a starting estimate of the factorization. The CP decomposition was initialized using the leading r singular vectors from a singular value decomposition (SVD) of the tensor flattened along each mode.
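The sketch below illustrates one way this initialization can be written; it is an assumption-laden example rather than the original code. It takes the leading singular vectors of each mode's unfolding, and because this unfolding places the current mode on the rows, those are the left singular vectors (the right singular vectors of the transposed unfolding span the same subspace). It also assumes the rank does not exceed any mode's dimension.

```python
import numpy as np

def initialize_factors(X, rank):
    """Seed each mode's factor matrix from an SVD of the corresponding unfolding."""
    factors = []
    for mode in range(X.ndim):
        # Flatten the tensor so the current mode indexes the rows.
        unfolded = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        # The leading singular vectors of the unfolding span the dominant subspace.
        U = np.linalg.svd(unfolded, full_matrices=False)[0]
        factors.append(U[:, :rank])
    return factors
```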

ALS applies the observation that, given factor matrices for two of the modes (e.g., modes 2 and 3), the optimal factor matrix for the remaining mode can be found as the least squares solution relating the tensor unfolding to the Khatri-Rao product of the known factors:

$$X_{(1)} \approx A \, (C \odot B)^{\mathsf{T}}$$

X(1) represents the tensor unfolding of X along mode 1, and C ⊙ B represents the Khatri-Rao product of C and B. After solving for A, ALS proceeds through each of the remaining modes in turn, building a representation of the other factors with the Khatri-Rao product and then applying least squares to solve for the left-out factor matrix; one pass over all modes completes an iteration of the algorithm.
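The sketch below shows one ALS sweep over a three-mode tensor under this scheme. It is illustrative only: the unfolding and Khatri-Rao column orderings are one consistent convention among several, and the helper names (unfold, khatri_rao, als_sweep) are ours rather than the authors'.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n matricization, ordering the remaining modes column-major."""
    return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1), order="F")

def khatri_rao(U, V):
    """Khatri-Rao (column-wise Kronecker) product of U (I x R) and V (J x R)."""
    I, R = U.shape
    J = V.shape[0]
    return (U[:, None, :] * V[None, :, :]).reshape(I * J, R)

def als_sweep(X, A, B, C):
    """One ALS iteration: solve each factor by least squares against its unfolding."""
    A = np.linalg.lstsq(khatri_rao(C, B), unfold(X, 0).T, rcond=None)[0].T
    B = np.linalg.lstsq(khatri_rao(C, A), unfold(X, 1).T, rcond=None)[0].T
    C = np.linalg.lstsq(khatri_rao(B, A), unfold(X, 2).T, rcond=None)[0].T
    return A, B, C
```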

In addition to the ALS scheme described above, we applied the line search routine described by Bro [53]. Briefly, after two rounds of ALS, the difference between the last two fitting iterations was used to look ahead by a line search step equal to N^(1/l), where N is the iteration number and l is the line search quotient, initially equal to 2. If the error of the line search-updated factorization was less than that of the normal ALS update, the line search result was accepted; otherwise, the ALS result was used. After four consecutive rejections, the line search quotient was increased by 1 to reduce the line search distance.
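A sketch of this acceleration step is below, under the assumptions stated above (step length N^(1/l), acceptance by comparing fit error, and a quotient bump after four consecutive rejections). The function signature is illustrative, the factor matrices are assumed to be NumPy arrays, and `error` stands in for whatever reconstruction-error metric is used.

```python
def line_search(factors_prev, factors_als, iteration, quotient, rejections, error):
    """Bro-style extrapolation: jump past the ALS update and keep it only if it helps."""
    step = iteration ** (1.0 / quotient)
    # Extrapolate each factor along the direction of its most recent change.
    proposed = [prev + step * (als - prev)
                for prev, als in zip(factors_prev, factors_als)]
    if error(proposed) < error(factors_als):
        return proposed, 0, quotient            # jump accepted; reset rejection count
    rejections += 1
    if rejections >= 4:                         # shorten future jumps after 4 misses
        quotient += 1
        rejections = 0
    return factors_als, rejections, quotient    # fall back to the plain ALS update
```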
