Gram matrix
Given vectors $v_1, \dots, v_n$ in an inner product space, the Gram matrix is $G \in \mathbb{R}^{n \times n}$ with entries $G_{ij} = \langle v_i, v_j \rangle$.
$G$ is symmetric positive semidefinite.
$G$ is positive definite iff $v_1, \dots, v_n$ are linearly independent.
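As a quick sanity check, here is a minimal NumPy sketch (the vectors and tolerance are arbitrary, purely for illustration) that builds the Gram matrix of a few vectors and verifies the two properties above:

```python
import numpy as np

# Three vectors in R^4; the third is a linear combination of the first two,
# so the Gram matrix should be PSD but not positive definite.
v1 = np.array([1.0, 0.0, 2.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0, 3.0])
v3 = 2 * v1 - v2
V = np.stack([v1, v2, v3])           # vectors as rows

G = V @ V.T                          # G[i, j] = <v_i, v_j>

eigvals = np.linalg.eigvalsh(G)      # eigenvalues of a symmetric matrix
print(np.allclose(G, G.T))           # True: G is symmetric
print(np.all(eigvals >= -1e-10))     # True: G is positive semidefinite
print(np.all(eigvals > 1e-10))       # False: not positive definite (vectors are dependent)
```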
Sample Gram matrix
For $X \in \mathbb{R}^{n \times d}$ whose rows are the data points $x_1, \dots, x_n \in \mathbb{R}^d$, the sample Gram matrix is $XX^\top \in \mathbb{R}^{n \times n}$.
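A small sketch (random data, purely illustrative) showing that the sample Gram matrix is just the table of pairwise inner products between data points:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3
X = rng.normal(size=(n, d))          # n samples (rows), d features (columns)

G_samples = X @ X.T                  # (n, n): G[i, j] = <x_i, x_j> for rows x_i, x_j
print(G_samples.shape)               # (5, 5)
print(np.isclose(G_samples[1, 3], X[1] @ X[3]))  # True
```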
Feature Gram matrix
For $X \in \mathbb{R}^{n \times d}$, the feature Gram matrix is $X^\top X \in \mathbb{R}^{d \times d}$.
The second moment matrix is $\tfrac{1}{n} X^\top X$; when the columns of $X$ are mean-centered, it equals the covariance matrix.
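A minimal NumPy sketch (random data, purely illustrative) connecting the feature Gram matrix, the second moment matrix, and the covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3
X = rng.normal(size=(n, d))

G_features = X.T @ X                           # (d, d): inner products between feature columns
second_moment = G_features / n                 # (1/n) X^T X

Xc = X - X.mean(axis=0)                        # center each feature column
cov = np.cov(X, rowvar=False, bias=True)       # covariance with 1/n normalization
print(np.allclose(Xc.T @ Xc / n, cov))         # True: centered second moment = covariance
```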
Both $XX^\top$ and $X^\top X$ are Gram matrices; the difference is whether your vectors are stored as the rows or the columns of $X$, respectively. They can also be written as a sum of rank-1 matrices built from outer products!
The vectors-as-columns convention is standard in linear algebra, where we usually think of vectors as column vectors.
However, in machine learning and data science, we often use the vectors-as-rows convention (each row is a data point with $d$ features), which makes $XX^\top$ the Gram matrix. Let $X \in \mathbb{R}^{n \times d}$, with rows $x_1, \dots, x_n \in \mathbb{R}^d$ and columns $f_1, \dots, f_d \in \mathbb{R}^n$ (so $X_{ij} = (x_i)_j = (f_j)_i$).
We use the same dummy index $k$ in both views (its range differs).
Vectors-as-rows (ML convention): the Gram matrix is $XX^\top \in \mathbb{R}^{n \times n}$.
Entrywise (inner products): $(XX^\top)_{ij} = \langle x_i, x_j \rangle = \sum_{k=1}^{d} X_{ik} X_{jk}$.
Sum of outer products of columns (same matrix): $XX^\top = \sum_{k=1}^{d} f_k f_k^\top$.
Vectors-as-columns (classical LA): the Gram matrix is $X^\top X \in \mathbb{R}^{d \times d}$.
Entrywise (inner products): $(X^\top X)_{ij} = \langle f_i, f_j \rangle = \sum_{k=1}^{n} X_{ki} X_{kj}$.
Sum of outer products of rows (same matrix): $X^\top X = \sum_{k=1}^{n} x_k x_k^\top$.
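To make the two outer-product decompositions concrete, here is a minimal NumPy sketch (random $X$, purely illustrative) that rebuilds both Gram matrices one rank-1 term at a time:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3
X = rng.normal(size=(n, d))

# Vectors-as-rows: X X^T as a sum of outer products of the d columns f_k.
rows_gram = sum(np.outer(X[:, k], X[:, k]) for k in range(d))
print(np.allclose(rows_gram, X @ X.T))        # True

# Vectors-as-columns: X^T X as a sum of outer products of the n rows x_k.
cols_gram = sum(np.outer(X[k, :], X[k, :]) for k in range(n))
print(np.allclose(cols_gram, X.T @ X))        # True
```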