Term | Definition | Symbol
---|---|---
Vector | An ordered list of numbers or symbols, typically arranged in a column or row. | $\mathbf{v}$
Unit Vector | A vector with a magnitude of 1. | $\hat{\mathbf{v}}$
Scalar | A single, standalone numerical value, typically used to represent a quantity such as a magnitude. | $c$
Vector Magnitude | The length of a vector, calculated by squaring every element, summing the squares, and taking the square root. For instance, a two-dimensional vector $\mathbf{v} = (v_1, v_2)$ has magnitude $\lVert\mathbf{v}\rVert = \sqrt{v_1^2 + v_2^2}$ (see the sketch after this table). | $\lVert\mathbf{v}\rVert$
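A minimal NumPy sketch of these definitions (the vector values are arbitrary examples, and NumPy is assumed available):

```python
import numpy as np

v = np.array([3.0, 4.0])           # an example two-dimensional vector

magnitude = np.sqrt(np.sum(v**2))  # square each element, sum, take the square root
unit = v / magnitude               # dividing by the magnitude yields a unit vector

print(magnitude)                   # 5.0
print(np.linalg.norm(unit))        # 1.0, confirming unit length
```

`np.linalg.norm(v)` computes the same magnitude directly.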
Term | Definition | Symbol
---|---|---
Matrix | A rectangular array of numbers, symbols, or expressions arranged in rows and columns. | $A$
Identity Matrix | A square matrix with ones on the main diagonal and zeros elsewhere. | $I$
Determinant | A scalar value that can be computed from the elements of a square matrix. | $\det(A)$
Matrix Inverse | If a matrix has an inverse, multiplying the matrix by its inverse results in the identity matrix (see the sketch after this table). | $A^{-1}$
Pseudoinverse | A generalization of the matrix inverse for matrices that are not necessarily square or invertible. | $A^{+}$
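The sketch below checks these definitions with NumPy on an arbitrary invertible 2×2 matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # an arbitrary invertible 2x2 matrix

I = np.eye(2)                       # the 2x2 identity matrix
d = np.linalg.det(A)                # determinant: 2*3 - 1*1 = 5
A_inv = np.linalg.inv(A)            # inverse of A
A_pinv = np.linalg.pinv(A)          # pseudoinverse of A

print(np.allclose(A @ A_inv, I))    # True: A times its inverse is the identity
print(np.allclose(A_inv, A_pinv))   # True: they coincide for invertible matrices
```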
Term | Definition
---|---
Square Matrix | A matrix with the same number of rows and columns.
Diagonal Matrix | A matrix in which all elements outside the main diagonal are zero.
Symmetric Matrix | A square matrix that is equal to its transpose.
Dense Matrix | A matrix in which most of the elements are non-zero.
Sparse Matrix | A matrix in which most of the elements are zero.
Cofactor Matrix | A matrix in which each element is the cofactor of the corresponding element of the original matrix; often used in computing determinants and matrix inverses.
Coefficient Matrix | In a system of linear equations, the matrix formed by the coefficients of the variables.
Hilbert Matrix | A square matrix whose entries are reciprocals: $H_{ij} = \frac{1}{i + j - 1}$ with 1-based indices, i.e., one over the sum of the 0-based row and column indices plus one (see the sketch after this table).
Submatrix | A matrix formed by selecting certain rows and columns from a larger matrix.
Singular Matrix | A square matrix that is not invertible, meaning its determinant is zero.
Markov Matrix | A square matrix describing transitions between states in a Markov chain; each element is the probability of transitioning from one state to another, so each column (or row, by convention) sums to 1.
Consumption Matrix | In economics or mathematical modeling, a matrix representing the consumption rates of different resources by different entities.
Lower Triangular Matrix | A square matrix in which all entries above the main diagonal are zero.
Change of Basis Matrix | A matrix representing the linear transformation between two different bases of a vector space.
Upper Triangular Matrix | A square matrix in which all entries below the main diagonal are zero.
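For concreteness, here is a short NumPy sketch that constructs a few of these matrix types (sizes and values are arbitrary examples):

```python
import numpy as np

D = np.diag([1.0, 2.0, 3.0])        # diagonal: zeros off the main diagonal
S = np.array([[1.0, 2.0],
              [2.0, 5.0]])          # symmetric: equal to its transpose
print(np.allclose(S, S.T))          # True

# Hilbert matrix: H[i, j] = 1 / (i + j + 1) with 0-based indices
n = 4
i, j = np.indices((n, n))
H = 1.0 / (i + j + 1)

M = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # singular: second row is twice the first
print(np.isclose(np.linalg.det(M), 0.0))  # True: determinant is zero

L = np.tril(np.ones((3, 3)))        # lower triangular: zeros above the diagonal
U = np.triu(np.ones((3, 3)))        # upper triangular: zeros below the diagonal
```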
Term | Definition | Formula
---|---|---
Transpose | The result of swapping a matrix's rows with its columns. | $(A^T)_{ij} = A_{ji}$
Inner Product | A binary operation that takes two vectors and produces a scalar. | $\mathbf{a} \cdot \mathbf{b} = \sum_i a_i b_i$
Outer Product | An operation that takes two vectors as input and produces a matrix as output. | $\mathbf{a}\mathbf{b}^T$
Matrix Multiplication | An operation that takes two matrices and produces another matrix; each entry is the dot product of a row of the first matrix with a column of the second. | $(AB)_{ij} = \sum_k A_{ik} B_{kj}$
Hadamard Product | An element-wise product of two matrices of the same dimensions. | $(A \circ B)_{ij} = A_{ij} B_{ij}$
Trace | The sum of the diagonal elements of a square matrix. | $\operatorname{tr}(A) = \sum_i A_{ii}$
Norm | A measure of a matrix's size; the Frobenius norm is the square root of the sum of the squared entries. | $\lVert A \rVert_F = \sqrt{\sum_{i,j} A_{ij}^2}$
Matrix Conjugate | The matrix that results from negating the imaginary part of each entry of the original matrix. | $\bar{A}$
Singular Value | A singular value, denoted $\sigma$, is the square root of an eigenvalue of $A^T A$. | $\sigma_i = \sqrt{\lambda_i(A^T A)}$
Singular Value Decomposition | A factorization of a matrix into three other matrices, including a diagonal matrix of singular values (see the sketch after this table). | $A = U \Sigma V^T$
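The following NumPy sketch exercises each operation on small example matrices:

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

print(A.T)              # transpose: rows and columns swapped
print(np.dot(a, b))     # inner product: 1*3 + 2*4 = 11, a scalar
print(np.outer(a, b))   # outer product: a 2x2 matrix
print(A @ B)            # matrix multiplication: rows of A dotted with columns of B
print(A * B)            # Hadamard product: element-wise multiplication
print(np.trace(A))      # trace: 1 + 4 = 5
print(np.linalg.norm(A))            # Frobenius norm

U, s, Vt = np.linalg.svd(A)         # SVD: A = U @ diag(s) @ Vt
print(np.allclose(U @ np.diag(s) @ Vt, A))  # True: the factors reconstruct A
```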
Term | Definition
---|---
QR Decomposition | A decomposition of a matrix into the product of an orthogonal matrix ($Q$) and an upper triangular matrix ($R$).
LU Decomposition | A factorization of a square matrix into a product of a lower triangular matrix ($L$) and an upper triangular matrix ($U$) (see the sketch after this table).
Jordan Decomposition | A way of representing a square matrix as the sum of a diagonalizable matrix and a nilpotent matrix that commute with each other.
Free Variables | Variables in a linear system of equations that can take on any value, corresponding to columns without pivots.
Free Columns | Columns in a matrix that do not contain a pivot element when the matrix is reduced to row-echelon form.
Pivot Variables | Variables in a linear system of equations corresponding to columns that contain pivot elements.
Row-Echelon Form | A matrix is in row-echelon form if: (1) all zero rows, if any, are at the bottom of the matrix; (2) the leading entry of each nonzero row is in a column to the right of the leading entry of the previous row; (3) the leading entry in each nonzero row is 1; and (4) all entries below a leading 1 in its column are zero.
Reduced Row-Echelon Form | A matrix is in reduced row-echelon form if it satisfies the conditions of row-echelon form and, additionally, the leading 1 in each nonzero row is the only nonzero entry in its column (all entries above and below it are zero).
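A short sketch of the QR and LU factorizations; NumPy provides QR, and SciPy is assumed available for the pivoted LU:

```python
import numpy as np
from scipy.linalg import lu         # SciPy assumed available for LU

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

Q, R = np.linalg.qr(A)              # QR: Q orthogonal, R upper triangular
print(np.allclose(Q @ R, A))        # True
print(np.allclose(Q.T @ Q, np.eye(2)))  # True: Q's columns are orthonormal

P, L, U = lu(A)                     # LU with partial pivoting: A = P @ L @ U
print(np.allclose(P @ L @ U, A))    # True
print(np.allclose(L, np.tril(L)))   # True: L is lower triangular
print(np.allclose(U, np.triu(U)))   # True: U is upper triangular
```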
Term | Definition | Symbol
---|---|---
Linear Independence | A set of vectors is linearly independent if no vector in the set is a linear combination of the others. |
Vector Space | A set of vectors that is closed under vector addition and scalar multiplication. | $V$
Row Space | The vector space spanned by the rows of a matrix. | $C(A^T)$
Column Space | The vector space spanned by the columns of a matrix. | $C(A)$
Nullspace | The set of all solutions to the homogeneous equation $A\mathbf{x} = \mathbf{0}$. | $N(A)$
Rank | The maximum number of linearly independent rows or columns in a matrix (see the sketch after this table). | $\operatorname{rank}(A)$
Linear Combination | An expression constructed from vectors by multiplying them by scalars and adding the results. | $c_1\mathbf{v}_1 + \dots + c_n\mathbf{v}_n$
Span | The set of all possible linear combinations of a set of vectors. | $\operatorname{span}\{\mathbf{v}_1, \dots, \mathbf{v}_n\}$
Basis | A set of vectors that spans a vector space and is linearly independent. |
Dimension | The number of vectors in a basis for a vector space. | $\dim(V)$
Full Column Rank | A matrix whose columns are linearly independent, meaning the rank of the matrix equals the number of columns. |
Full Row Rank | A matrix whose rows are linearly independent, meaning the rank of the matrix equals the number of rows. |
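To make rank, nullspace, and independence concrete, here is a NumPy sketch on a deliberately rank-deficient example matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],     # row 2 = 2 * row 1, so the rows are dependent
              [1.0, 0.0, 1.0]])

print(np.linalg.matrix_rank(A))    # 2: one fewer than the number of columns

# Right singular vectors with (numerically) zero singular values span the nullspace.
U, s, Vt = np.linalg.svd(A)
null_basis = Vt[s < 1e-10]
print(np.allclose(A @ null_basis.T, 0))        # True: A x = 0 for each basis vector

# Full column rank means the columns are linearly independent.
V = np.column_stack([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
print(np.linalg.matrix_rank(V) == V.shape[1])  # True
```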
Term | Definition | Formula
---|---|---
Orthogonal Matrix | A square matrix whose rows and columns form orthonormal sets, so its transpose is its inverse. | $Q^T Q = Q Q^T = I$
Orthonormal Matrix | A matrix whose column vectors are orthogonal to each other and normalized to have a length of 1. |
Projection | A geometric operation that finds the component of one vector along another. | $\operatorname{proj}_{\mathbf{a}} \mathbf{b} = \frac{\mathbf{a} \cdot \mathbf{b}}{\mathbf{a} \cdot \mathbf{a}}\, \mathbf{a}$
Change of Basis | The process of expressing vectors in one basis using coordinates from another basis. |
Gram-Schmidt | A process that orthogonalizes a linearly independent set of vectors to produce an orthonormal basis (see the sketch after this table). |
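A minimal sketch of projection and classical Gram-Schmidt (the input vectors are arbitrary but linearly independent):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors (classical form)."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for q in basis:
            w = w - np.dot(q, w) * q             # remove the component along q
        basis.append(w / np.linalg.norm(w))      # normalize to unit length
    return np.array(basis)

a = np.array([1.0, 1.0, 0.0])
b = np.array([1.0, 0.0, 1.0])

proj = (np.dot(a, b) / np.dot(a, a)) * a         # projection of b onto a
print(proj)                                      # [0.5 0.5 0. ]

Q = gram_schmidt([a, b])
print(np.allclose(Q @ Q.T, np.eye(2)))           # True: rows are orthonormal
```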
Term | Definition | Formula
---|---|---
Eigenvalue | A scalar $\lambda$ such that $A\mathbf{v} = \lambda\mathbf{v}$ for some non-zero vector $\mathbf{v}$. | $A\mathbf{v} = \lambda\mathbf{v}$
Eigenvector | A non-zero vector $\mathbf{v}$ whose direction is unchanged by the matrix: $A\mathbf{v} = \lambda\mathbf{v}$ for some scalar $\lambda$. |
Principal Component | The direction in which the data in a matrix varies the most: the eigenvector corresponding to the largest eigenvalue of the covariance matrix of the data. |
Covariance Matrix | A matrix that summarizes covariance relationships between variables in a dataset. The diagonal elements are variances; the off-diagonal elements are covariances. |
Repeated Eigenvalues | Eigenvalues with algebraic multiplicity greater than one. A matrix with repeated eigenvalues may lack a full set of linearly independent eigenvectors. |
Sum of Eigenvalues | For a square matrix, the sum of the eigenvalues equals the trace of the matrix (see the sketch after this table). | $\sum_i \lambda_i = \operatorname{tr}(A)$
Positive Definite Matrix | A symmetric matrix whose eigenvalues are all positive. | $\mathbf{x}^T A \mathbf{x} > 0$ for all $\mathbf{x} \neq \mathbf{0}$
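The sketch below verifies these eigenvalue facts on a small symmetric matrix and extracts a principal component from synthetic example data:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric, so its eigenvalues are real

vals, vecs = np.linalg.eig(A)       # eigenvalues 3 and 1 (order may vary)
for lam, v in zip(vals, vecs.T):
    print(np.allclose(A @ v, lam * v))      # True: each pair satisfies A v = lambda v

print(np.isclose(vals.sum(), np.trace(A))) # True: sum of eigenvalues equals the trace
print(np.all(vals > 0))                    # True: A is positive definite

# Principal component: top eigenvector of the covariance matrix of the data
X = np.random.default_rng(0).normal(size=(100, 2)) * np.array([3.0, 1.0])
C = np.cov(X, rowvar=False)         # 2x2 covariance matrix
w, V = np.linalg.eigh(C)            # eigh sorts eigenvalues in ascending order
pc = V[:, -1]                       # direction of greatest variance
print(pc)                           # close to the x-axis, where variance is largest
```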
Term | Definition
---|---
Rank-Nullity Theorem | The dimension of the column space plus the dimension of the nullspace of a matrix equals the number of columns in the matrix: $\operatorname{rank}(A) + \dim N(A) = n$.
Uniqueness of Reduced Row-Echelon Form | Every matrix has exactly one reduced row-echelon form, regardless of the sequence of row operations used to reach it.
Spectral Theorem | Every symmetric matrix is diagonalizable by an orthogonal matrix, and its eigenvalues are real (see the sketch after this table).
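Both theorems are easy to check numerically; a minimal NumPy sketch on small example matrices:

```python
import numpy as np

# Rank-nullity on a 2x3 matrix: rank + nullity should equal 3 columns
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
rank = np.linalg.matrix_rank(A)
U, s, Vt = np.linalg.svd(A)
null_vectors = Vt[rank:]                 # right singular vectors spanning N(A)
print(np.allclose(A @ null_vectors.T, 0))          # True
print(rank + len(null_vectors) == A.shape[1])      # True: 2 + 1 == 3

# Spectral theorem: a symmetric matrix diagonalizes with real eigenvalues
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, Q = np.linalg.eigh(S)                 # real eigenvalues, orthonormal eigenvectors
print(np.allclose(Q @ np.diag(w) @ Q.T, S))        # True: S = Q diag(w) Q^T
```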
Term | Definition
---|---
Kernel | The set of all vectors that the transformation maps to the zero vector in the codomain.
Image (Range) | The set of all possible outputs that can be obtained by applying a transformation to the vectors in its domain.
Onto (Surjective) Matrix | A matrix transformation whose image is the entire codomain: every element in the codomain is mapped to by at least one element from the domain.
One-to-One (Injective) Matrix | A matrix transformation where distinct elements in the domain map to distinct elements in the codomain; no two different inputs produce the same output (see the sketch after this table).
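One practical test: injectivity corresponds to full column rank and surjectivity to full row rank. A sketch:

```python
import numpy as np

def is_injective(A):
    """One-to-one iff the columns are independent (full column rank)."""
    return np.linalg.matrix_rank(A) == A.shape[1]

def is_surjective(A):
    """Onto iff the rows span the codomain (full row rank)."""
    return np.linalg.matrix_rank(A) == A.shape[0]

tall = np.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [1.0, 1.0]])       # 3x2: maps R^2 into R^3
wide = tall.T                       # 2x3: maps R^3 onto R^2

print(is_injective(tall), is_surjective(tall))  # True False
print(is_injective(wide), is_surjective(wide))  # False True
```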
Term | Definition
---|---
Least Squares Regression | Fits a linear model by minimizing the sum of squared residuals $\lVert A\mathbf{x} - \mathbf{b} \rVert^2$; the normal equations give $\hat{\mathbf{x}} = (A^T A)^{-1} A^T \mathbf{b}$.
SVM Classification | Finds the maximum-margin separating hyperplane: minimize $\frac{1}{2}\lVert\mathbf{w}\rVert^2$ subject to $y_i(\mathbf{w}^T \mathbf{x}_i + b) \geq 1$ for all training points.
SVM Dual | The dual form of the SVM problem: maximize $\sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \mathbf{x}_i^T \mathbf{x}_j$ subject to $\alpha_i \geq 0$ and $\sum_i \alpha_i y_i = 0$.
Logistic Regression | Models the probability of a binary label with the sigmoid of a linear score: $P(y = 1 \mid \mathbf{x}) = \frac{1}{1 + e^{-\mathbf{w}^T \mathbf{x}}}$ (see the sketch after this table).
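As an illustration, the sketch below fits a least-squares line with NumPy and evaluates the logistic sigmoid (the data points are made up for the example):

```python
import numpy as np

# Least squares: fit y ~ m*x + c by minimizing the sum of squared residuals
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.2, 2.8])
A = np.column_stack([x, np.ones_like(x)])        # design matrix [x, 1]
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares solution
m, c = coeffs

# The same solution via the normal equations: x_hat = (A^T A)^{-1} A^T b
x_hat = np.linalg.inv(A.T @ A) @ A.T @ y
print(np.allclose(coeffs, x_hat))                # True

# Logistic regression's sigmoid maps a linear score to a probability in (0, 1)
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(A @ coeffs))                       # scores squashed into (0, 1)
```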