Posts

Finding the Inverse of a 2x2 Matrix from Scratch

This post shows a complete, step-by-step derivation of the inverse of a 2x2 matrix. Everything is expressed using stable, browser-safe ASCII formatting so the layout displays correctly on all devices and all templates.

FIRST PART. Start with the matrix equation:

A = [[a, b], [c, d]]
A^(-1) = [[w, x], [y, z]]

Goal: A * A^(-1) = I

This produces the column equations:

[aw + by, cw + dy]^T = [1, 0]^T
[ax + bz, cx + dz]^T = [0, 1]^T

which give the four equations:

aw + by = 1
cw + dy = 0
ax + bz = 0
cx + dz = 1

SECOND PART. Use the first two equations to find w.

aw + by = 1
cw + dy = 0

Multiply:

(ad)w + (bd)y = d    (first equation multiplied by d)
(bc)w + (bd)y = 0    (second equation multiplied by b)

Subtract:

(ad - bc)w = d
w = d / (ad - bc)    (ad - bc != 0)

THIRD PART. Use the next pair to find x.

ax + bz = 0
cx + dz = 1

Multiply:

(ad)x + (bd)z = 0    (first equation multiplied by d)
(bc)x + (bd)z = b    (second equation multiplied by b)

Subtract:

(ad - bc)...
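As a quick numerical check of this derivation, here is a minimal Python sketch (an illustration, not part of the post; the name inverse_2x2 is a hypothetical choice). The entries y and z come from applying the same elimination to the remaining two equations:

def inverse_2x2(a, b, c, d):
    """Return the inverse of [[a, b], [c, d]] as a nested list."""
    det = a * d - b * c              # ad - bc, must be non-zero
    if det == 0:
        raise ValueError("matrix is singular: ad - bc = 0")
    w = d / det                      # from (ad - bc)w = d
    x = -b / det                     # from (ad - bc)x = -b
    y = -c / det                     # same elimination on the remaining pair
    z = a / det
    return [[w, x], [y, z]]

print(inverse_2x2(2.0, 1.0, 5.0, 3.0))   # [[3.0, -1.0], [-5.0, 2.0]]
# Check: [[2, 1], [5, 3]] * [[3, -1], [-5, 2]] = [[1, 0], [0, 1]]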

Converting the Vector Equation of a Line into Cartesian Form

A straight line in three-dimensional space can be expressed using vectors. One important vector form is

(𝐑 − 𝐀) × 𝐁 = 0

This equation states that the displacement vector from a fixed point 𝐀 to a general point 𝐑 is parallel to the direction vector 𝐁. Two non-zero vectors have a zero cross product precisely when they are parallel. From this fact, the Cartesian (symmetric) equation of the line can be derived.

1. Substituting Coordinate Vectors

The general point on the line is written as 𝐑 = (x, y, z)
The fixed point is 𝐀 = (x₁, y₁, z₁)
The direction vector is 𝐁 = (l, m, n)

Substituting these into the vector equation yields:

((x, y, z) − (x₁, y₁, z₁)) × (l, m, n) = 0

which simplifies to:

(x − x₁, y − y₁, z − z₁) × (l, m, n) = 0

2. Using the Condition for a Zero Cross Product

If two non-zero vectors have a zero cross product, then one is a scalar multiple of the other. T...
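To make the parallel condition concrete, here is a small Python sketch (an illustration, not from the post; cross is a hypothetical helper and the sample vectors are arbitrary). It builds a point on the line from a parameter t, then confirms that (𝐑 − 𝐀) × 𝐁 is the zero vector and that the symmetric ratios all agree:

def cross(u, v):
    # Cross product of two 3-vectors given as tuples.
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

A = (1.0, 2.0, 3.0)    # fixed point (x1, y1, z1)
B = (2.0, -1.0, 4.0)   # direction vector (l, m, n)

t = 1.5                                          # any parameter value
R = tuple(A[i] + t * B[i] for i in range(3))     # general point on the line

diff = tuple(R[i] - A[i] for i in range(3))      # R - A
print(cross(diff, B))                            # (0.0, 0.0, 0.0)
print([(R[i] - A[i]) / B[i] for i in range(3)])  # [1.5, 1.5, 1.5]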

The Difference Between the Lines 𝐀 + t𝐁 and 𝐁 + t(𝐀 − 𝐁)

A line in vector form is defined by two components: a base point that determines its position, and a direction vector that determines its orientation. Two expressions may involve the same vectors but still represent completely different lines when either the base point or the direction vector changes. The expressions

L₁: 𝐀 + t𝐁
L₂: 𝐁 + t(𝐀 − 𝐁)

provide a clear example of how distinct lines arise from different vector components.

1. Line L₁: 𝐀 + t𝐁

The expression 𝐀 + t𝐁 describes a line passing through the point represented by vector 𝐀 with direction vector 𝐁. As the real parameter t varies, the expression generates all points on the line.

Base point: 𝐀
Direction vector: 𝐁

This is the line through 𝐀 directed along 𝐁.

2. Line L₂: 𝐁 + t(𝐀 − 𝐁)

The expression 𝐁 + t(𝐀 − 𝐁) describes a different line. Its base point is 𝐁, and its direction vector is t...
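A short Python sketch (illustrative only; the sample vectors 𝐀 and 𝐁 are arbitrary) makes the distinction visible by sampling both parametrisations:

A = (1.0, 0.0, 0.0)
B = (0.0, 2.0, 0.0)

def point_L1(t):
    # line through A with direction B
    return tuple(A[i] + t * B[i] for i in range(3))

def point_L2(t):
    # line through B with direction A - B; t = 1 gives the point A
    return tuple(B[i] + t * (A[i] - B[i]) for i in range(3))

for t in (0.0, 0.5, 1.0):
    print(t, point_L1(t), point_L2(t))
# L2 passes through both B (t = 0) and A (t = 1);
# L1 passes through A (t = 0) but never through B for these vectors.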

Reversing a Linear Transformation Using an Inverse Matrix

In linear algebra, any invertible linear transformation can be reversed. The key tool that makes this possible is the inverse matrix. If a matrix transforms a vector into another, the inverse matrix recovers the original.

1. The Transformation Equation

Suppose a vector x₁ is transformed into a vector x₂ using a matrix T:

T x₁ = x₂

This equation describes how x₁ is mapped to x₂. To reverse the transformation, we must apply the inverse matrix.

2. Applying the Inverse Matrix

Multiply both sides of the equation by T⁻¹:

T⁻¹ (T x₁) = T⁻¹ x₂

Using the fundamental identity T⁻¹ T = I, the expression simplifies directly to:

x₁ = T⁻¹ x₂

3. Interpretation

This tells us that the original vector is obtained by applying the inverse matrix to the transformed vector:

Original vector = Inverse matrix × Image vector

As long as the matrix is invertible, the reverse transformation always exist...
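For a concrete check of x₁ = T⁻¹ x₂, here is a minimal NumPy sketch (an illustration, not from the post; the matrix T and vector x₁ are arbitrary example values):

import numpy as np

T = np.array([[2.0, 1.0],
              [1.0, 1.0]])          # an invertible matrix (det = 1)
x1 = np.array([3.0, -1.0])          # original vector

x2 = T @ x1                         # forward transformation: T x1 = x2
recovered = np.linalg.inv(T) @ x2   # reverse: x1 = T^(-1) x2

print(x2)         # [5. 2.]
print(recovered)  # [ 3. -1.] -- the original vector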

The Transpose, Symmetric Matrices, Identity Matrices and Zero Matrices

Matrices contain more structure than simple rows and columns. Many important ideas in linear algebra come from operations such as reflecting a matrix, recognising symmetry, and identifying matrices that leave vectors unchanged. This article covers four core ideas:

the transpose of a matrix
symmetric matrices
the identity matrix
the zero matrix

The Transpose of a Matrix

The transpose of a matrix is created by swapping its rows and columns. If a matrix A has an entry in row i, column j, then AT has the same entry in row j, column i.

Example:

A = [ 1 4 ]
    [ 2 5 ]
    [ 3 6 ]

Its transpose is:

AT = [ 1 2 3 ]
     [ 4 5 6 ]

If A is n × m, then AT is m × n. A square matrix stays square, but its entries reflect across the main diagonal.

Symmetric Matrices

A square matrix is symmetric when it equals its own transpose:

A = AT

This means the matrix does not ch...
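The following Python sketch (illustrative; the helper name transpose and the sample matrices are hypothetical choices) reproduces the example above and builds the other three objects with plain nested lists:

def transpose(M):
    # Swap rows and columns: entry (i, j) moves to (j, i).
    return [list(row) for row in zip(*M)]

A = [[1, 4],
     [2, 5],
     [3, 6]]                 # the 3 x 2 matrix from the example
print(transpose(A))          # [[1, 2, 3], [4, 5, 6]] -- now 2 x 3

S = [[1, 7],
     [7, 4]]
print(S == transpose(S))     # True: S is symmetric

n = 3
I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
Z = [[0] * n for _ in range(n)]                                 # zero matrix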

Normalised Vectors: A Clear and Intuitive Guide

Vectors can have any length, but many mathematical problems only depend on direction. To separate direction from magnitude, we normalise the vector. This produces a new vector of length 1 that points the same way as the original.

Normalised vectors are central to geometry, physics, 3D graphics, transformations, and any setting where orientation matters. By working with a unit vector, calculations become simpler, cleaner, and more meaningful.

What Is a Normalised Vector?

A normalised vector is a vector with magnitude 1. It keeps its direction but loses its original size. Some simple unit vectors include:

(1, 0, 0) — magnitude 1
(0, 1, 0) — magnitude 1
(0, 0, 1) — magnitude 1

These are the standard basis vectors. In general, any non-zero vector can be transformed into a unit vector by dividing by its magnitude.

Normalising a Vector in 2 Dimensions

For a 2D vector (a, b), the length is:

√(a² + b²)

To n...
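As a concrete illustration, here is a small Python sketch (not from the post; normalise is a hypothetical helper) that divides a vector by its magnitude and checks the result has length 1:

import math

def normalise(v):
    # Divide each component by the magnitude to get a unit vector.
    mag = math.sqrt(sum(c * c for c in v))   # sqrt(a^2 + b^2) in 2D
    if mag == 0:
        raise ValueError("cannot normalise the zero vector")
    return tuple(c / mag for c in v)

u = normalise((3.0, 4.0))
print(u)                # (0.6, 0.8)
print(math.hypot(*u))   # 1.0 -- unit length, same direction as (3, 4)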

Understanding Eigenvectors and Eigenvalues: A Geometric Perspective

Every linear transformation has a hidden structure. Most vectors are pushed into new directions when a matrix acts on them, but a handful of special vectors behave differently. These are the eigenvectors — directions that remain perfectly aligned with themselves, even after the transformation has been applied. To understand how a matrix works, you must understand these special directions.

What Is an Eigenvector?

An eigenvector of a matrix A is a non-zero vector x that satisfies the relation

A x = λ x

The number λ is the eigenvalue associated with x. This equation expresses a simple but striking fact: the transformation does not rotate the vector at all. The direction is preserved exactly. The only change is a scaling by the factor λ.

An eigenvalue greater than 1 stretches the vector. A value between 0 and 1 compresses it. A negative eigenvalue reverses the direction. But in every case, the vector re...
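A brief NumPy sketch (illustrative; the example matrix is an arbitrary choice, not from the post) confirms A x = λ x numerically for each eigenpair:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # symmetric, so the eigenvalues are real

eigenvalues, eigenvectors = np.linalg.eig(A)
for lam, x in zip(eigenvalues, eigenvectors.T):
    # Each column of `eigenvectors` satisfies A x = lam x (up to rounding).
    print(lam, np.allclose(A @ x, lam * x))   # 3.0 True, then 1.0 True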