Linear Algebra Basics – A Detailed Overview
Linear Algebra is a fundamental branch of mathematics that deals with vectors, vector spaces, linear transformations, and systems of linear equations. It plays a crucial role in various fields such as physics, computer science, engineering, and machine learning.
1. Introduction to Linear Algebra
Linear Algebra primarily focuses on linear equations and their representations using matrices and vector spaces. Some key topics include:
- Vectors and vector operations
- Matrices and matrix operations
- Determinants and inverses
- Linear transformations
- Eigenvalues and eigenvectors
- Vector spaces and subspaces
Let’s work through each topic step by step.
2. Vectors and Their Operations
2.1 Definition of a Vector
A vector is an ordered list of numbers. It represents a point in space or a direction with magnitude.
For example: \mathbf{v} = \begin{bmatrix} 3 \\ 5 \\ -2 \end{bmatrix}
is a 3-dimensional vector.
A vector can also be written as: \mathbf{v} = (3, 5, -2)
2.2 Types of Vectors
- Zero Vector: A vector with all components equal to zero, e.g., \mathbf{0} = (0, 0, 0).
- Unit Vector: A vector with a magnitude of 1.
- Collinear Vectors: Vectors that lie on the same line or parallel lines.
- Orthogonal Vectors: Vectors that are perpendicular to each other.
2.3 Vector Operations
2.3.1 Addition of Vectors
Vectors are added component-wise: \mathbf{a} = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}, \quad \mathbf{a} + \mathbf{b} = \begin{bmatrix} a_1 + b_1 \\ a_2 + b_2 \\ a_3 + b_3 \end{bmatrix}
2.3.2 Subtraction of Vectors
\mathbf{a} - \mathbf{b} = \begin{bmatrix} a_1 - b_1 \\ a_2 - b_2 \\ a_3 - b_3 \end{bmatrix}
2.3.3 Scalar Multiplication
c\mathbf{a} = \begin{bmatrix} c \cdot a_1 \\ c \cdot a_2 \\ c \cdot a_3 \end{bmatrix}
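The component-wise operations above can be sketched in plain Python (these helper functions are our own, written for illustration rather than taken from any library):

```python
def vec_add(a, b):
    # Add corresponding components: (a1 + b1, a2 + b2, ...)
    return [x + y for x, y in zip(a, b)]

def vec_sub(a, b):
    # Subtract corresponding components.
    return [x - y for x, y in zip(a, b)]

def scalar_mul(c, a):
    # Multiply every component by the scalar c.
    return [c * x for x in a]

a = [3, 5, -2]
b = [1, 0, 4]
print(vec_add(a, b))      # [4, 5, 2]
print(vec_sub(a, b))      # [2, 5, -6]
print(scalar_mul(2, a))   # [6, 10, -4]
```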
2.3.4 Dot Product (Scalar Product)
The dot product of two vectors is given by: \mathbf{a} \cdot \mathbf{b} = a_1 b_1 + a_2 b_2 + a_3 b_3
If the dot product is zero, the vectors are perpendicular.
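A quick sketch of the dot product in plain Python (the `dot` helper is our own), including the perpendicularity check:

```python
def dot(a, b):
    # Sum of products of corresponding components.
    return sum(x * y for x, y in zip(a, b))

print(dot([3, 5, -2], [1, 0, 4]))  # 3*1 + 5*0 + (-2)*4 = -5
print(dot([1, 0], [0, 1]))         # 0, so these vectors are perpendicular
```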
2.3.5 Cross Product
For 3D vectors, the cross product results in a new vector: \mathbf{a} \times \mathbf{b} = \begin{bmatrix} a_2 b_3 - a_3 b_2 \\ a_3 b_1 - a_1 b_3 \\ a_1 b_2 - a_2 b_1 \end{bmatrix}
The cross product is perpendicular to both original vectors.
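The cross-product formula can be sketched directly from its components (the `cross` helper below is our own, for illustration), and the perpendicularity claim can be checked with the dot product:

```python
def cross(a, b):
    # Component formula for the 3D cross product.
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

a, b = [1, 0, 0], [0, 1, 0]
c = cross(a, b)
print(c)  # [0, 0, 1]

# c is perpendicular to both inputs: both dot products are zero.
print(sum(x * y for x, y in zip(c, a)))  # 0
print(sum(x * y for x, y in zip(c, b)))  # 0
```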
3. Matrices and Their Operations
3.1 Definition of a Matrix
A matrix is a rectangular array of numbers arranged in rows and columns.
Example: A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}
This is a 2 × 3 matrix (2 rows, 3 columns).
3.2 Types of Matrices
- Square Matrix: A matrix with an equal number of rows and columns.
- Identity Matrix (I): A square matrix with 1s on the diagonal and 0s elsewhere.
- Diagonal Matrix: A matrix where all non-diagonal elements are zero.
- Zero Matrix: A matrix where all elements are zero.
3.3 Matrix Operations
3.3.1 Addition and Subtraction
Matrices of the same dimensions can be added or subtracted: A + B = \begin{bmatrix} a_{11} + b_{11} & a_{12} + b_{12} \\ a_{21} + b_{21} & a_{22} + b_{22} \end{bmatrix}
3.3.2 Scalar Multiplication
Multiply each element of the matrix by a scalar: cA = \begin{bmatrix} ca_{11} & ca_{12} \\ ca_{21} & ca_{22} \end{bmatrix}
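Representing a matrix as a list of rows, the element-wise operations can be sketched in plain Python (helper names are our own):

```python
def mat_add(A, B):
    # Add corresponding entries of two same-sized matrices.
    return [[x + y for x, y in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

def mat_scale(c, A):
    # Multiply every entry by the scalar c.
    return [[c * x for x in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))    # [[6, 8], [10, 12]]
print(mat_scale(3, A))  # [[3, 6], [9, 12]]
```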
3.3.3 Matrix Multiplication
Matrix multiplication is defined when the number of columns of A equals the number of rows of B, and follows the rule: C = AB \quad \text{where} \quad c_{ij} = \sum_{k} a_{ik} b_{kj}
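The summation rule translates directly into nested loops; here is a sketch in plain Python (the `mat_mul` helper is our own) multiplying a 2×3 matrix by a 3×2 matrix:

```python
def mat_mul(A, B):
    # c_ij = sum over k of a_ik * b_kj; requires len(A[0]) == len(B).
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 3], [4, 5, 6]]       # 2 x 3
B = [[7, 8], [9, 10], [11, 12]]  # 3 x 2
print(mat_mul(A, B))  # [[58, 64], [139, 154]]
```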
3.3.4 Determinant of a Matrix
For a 2×2 matrix: \det(A) = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc
For a 3×3 matrix A = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}: \det(A) = a(ei - fh) - b(di - fg) + c(dh - eg)
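Both determinant formulas can be sketched in plain Python (the helpers are our own, hard-coded to the 2×2 and 3×3 cases):

```python
def det2(M):
    # ad - bc for a 2x2 matrix.
    (a, b), (c, d) = M
    return a * d - b * c

def det3(M):
    # Cofactor expansion along the first row of a 3x3 matrix.
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

print(det2([[1, 2], [3, 4]]))                    # 1*4 - 2*3 = -2
print(det3([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```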
3.3.5 Inverse of a Matrix
A square matrix A has an inverse A^{-1} if and only if \det(A) \neq 0; the inverse satisfies: A A^{-1} = A^{-1} A = I
For a 2×2 matrix: A^{-1} = \frac{1}{\det(A)} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}
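The 2×2 inverse formula can be sketched as follows (the `inv2` helper is our own, and the singular case is rejected since division by a zero determinant is undefined):

```python
def inv2(M):
    # Inverse of a 2x2 matrix via the adjugate-over-determinant formula.
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular (det = 0), no inverse exists")
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

A = [[4, 7], [2, 6]]          # det = 4*6 - 7*2 = 10
print(inv2(A))                # [[0.6, -0.7], [-0.2, 0.4]]
```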
4. Systems of Linear Equations
A system of equations: ax + by = e, \quad cx + dy = f
can be written as: AX = B
where: A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \quad X = \begin{bmatrix} x \\ y \end{bmatrix}, \quad B = \begin{bmatrix} e \\ f \end{bmatrix}
Methods to Solve
- Gaussian Elimination (Row Reduction)
- Cramer’s Rule (Using Determinants)
- Matrix Inversion Method
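As an illustration of one of these methods, here is a sketch of Cramer’s rule for a 2×2 system in plain Python (the `solve2_cramer` helper is our own):

```python
def solve2_cramer(A, B):
    # Cramer's rule: x = det(A with column 1 replaced by B) / det(A),
    #                y = det(A with column 2 replaced by B) / det(A).
    (a, b), (c, d) = A
    e, f = B
    det = a * d - b * c
    if det == 0:
        raise ValueError("det(A) = 0: no unique solution")
    x = (e * d - b * f) / det
    y = (a * f - e * c) / det
    return x, y

# Solve: 2x + 3y = 8,  x + 2y = 5   ->   x = 1, y = 2
print(solve2_cramer([[2, 3], [1, 2]], [8, 5]))  # (1.0, 2.0)
```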
5. Eigenvalues and Eigenvectors
For a square matrix A, if: A \mathbf{v} = \lambda \mathbf{v}
where \lambda is a scalar and \mathbf{v} \neq \mathbf{0}, then:
- \lambda is an eigenvalue.
- \mathbf{v} is an eigenvector.
To find eigenvalues, solve the characteristic equation: \det(A - \lambda I) = 0
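For a 2×2 matrix, \det(A - \lambda I) = 0 expands to the quadratic \lambda^2 - \mathrm{tr}(A)\,\lambda + \det(A) = 0, which can be solved directly. A sketch in plain Python (the `eig2` helper is our own and assumes real eigenvalues):

```python
import math

def eig2(M):
    # Eigenvalues of a 2x2 matrix from the characteristic polynomial
    # lambda^2 - trace*lambda + det = 0 (assumes real eigenvalues).
    (a, b), (c, d) = M
    tr = a + d
    det = a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

A = [[2, 0], [0, 3]]   # diagonal, so the eigenvalues are the diagonal entries
print(eig2(A))         # (3.0, 2.0)
```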
6. Vector Spaces and Linear Independence
A vector space is a set of vectors that, together with the usual rules for addition and scalar multiplication, satisfies:
- Closure under addition.
- Closure under scalar multiplication.
Vectors are linearly independent if: c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \dots + c_n \mathbf{v}_n = \mathbf{0}
has only the trivial solution c_1 = c_2 = \dots = c_n = 0.
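For two 2D vectors, independence can be tested with a determinant: the vectors are linearly independent exactly when the determinant of the matrix with them as columns is nonzero. A sketch in plain Python (the `independent2` helper is our own):

```python
def independent2(v1, v2):
    # Two 2D vectors are linearly independent iff the determinant
    # of [v1 | v2] is nonzero (neither is a scalar multiple of the other).
    return v1[0] * v2[1] - v1[1] * v2[0] != 0

print(independent2([1, 0], [0, 1]))  # True: the standard basis vectors
print(independent2([1, 2], [2, 4]))  # False: the second is 2x the first
```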