
Showing posts with the label pure mathematics

Why the Line ax + by = 0 Passes Through the Point (−b, a)

In ℝ², the equation ax + by = 0 describes a line that is perpendicular to the vector (a, b). This article explains exactly why—and why that line always passes through the point (−b, a).

1. Start with the Vector (a, b)

Consider the vector (a, b). To find a line perpendicular to it, we need a vector whose dot product with (a, b) is zero. Try the vector (−b, a):

(a, b) · (−b, a) = a(−b) + b(a) = −ab + ab = 0

Therefore, (−b, a) is perpendicular to (a, b).

2. Any Scalar Multiple Also Works

If (−b, a) is perpendicular to (a, b), then any multiple λ(−b, a) is also perpendicular:

(a, b) · [λ(−b, a)] = λ[(a, b) · (−b, a)] = λ · 0 = 0

Let this perpendicular vector be (x, y). Then (x, y) = λ(−b, a). Every point on the line comes from a particular choice of λ.

3. Converting to an Equation

Since (x, y) is perpendicular to (a, b), we have:

(a, b) · (x, y) = 0

Expanding ...
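
To make the derivation concrete, here is a minimal NumPy sketch checking that (−b, a), and every scalar multiple of it, satisfies ax + by = 0 (the values of a and b are arbitrary choices, not from the article):

```python
import numpy as np

a, b = 3.0, 4.0          # arbitrary coefficients for the line ax + by = 0
n = np.array([a, b])     # the normal vector (a, b)
p = np.array([-b, a])    # the candidate point / direction vector (−b, a)

# (a, b) · (−b, a) = −ab + ab = 0, so (−b, a) lies on the line
print(np.dot(n, p))          # 0.0

# any scalar multiple λ(−b, a) also satisfies the equation
lam = 2.5
print(np.dot(n, lam * p))    # 0.0
```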

2×2 Orthogonal Matrix Mastery — A Generalised Construction

Orthogonal matrices in two dimensions reveal one of the cleanest structures in linear algebra. A 2×2 matrix is orthogonal when its columns (and rows) satisfy two conditions:

They are perpendicular (their dot product is zero);
They have unit length (their magnitude is one).

This article presents a clear generalisation: any pair of perpendicular vectors with equal magnitude can be normalised to form an orthogonal matrix.

1. Begin with two perpendicular vectors

Let the first vector be: (a, b)
A perpendicular vector can be chosen as: (−b, a)

Their dot products confirm orthogonality:

(a, b) · (−b, a) = −ab + ab = 0
(a, −b) · (b, a) = ab − ab = 0

2. Compute their shared magnitude

Both vectors have the same length:

|(a, b)| = √(a² + b²)

We can therefore normalise each one by dividing by √(a² + b²).

3. Form the matrix using the normalised vectors

Place the two normalised vectors...
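
A short NumPy sketch of this construction, using the same vectors (a, b) and (−b, a) as columns (the concrete numbers are illustrative):

```python
import numpy as np

a, b = 3.0, 4.0
norm = np.hypot(a, b)              # √(a² + b²), the shared magnitude

# columns are the normalised perpendicular vectors (a, b) and (−b, a)
M = np.array([[a, -b],
              [b,  a]]) / norm

# orthogonality check: M Mᵀ should be the identity
print(np.allclose(M @ M.T, np.eye(2)))   # True
```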

Orthogonal Matrices and Mutually Orthogonal Vectors

Orthogonal matrices appear naturally throughout linear algebra, geometry, physics, and computer graphics. They preserve lengths and angles (and, when the determinant is +1, orientation), which makes them fundamental in describing rotations and rigid motions in three-dimensional space. This article provides a clear and carefully structured explanation of what orthogonal matrices are, why they matter, and how to verify that a given matrix is orthogonal.

1. Definition of an Orthogonal Matrix

Let M be an n × n square matrix. M is called orthogonal if it satisfies:

M Mᵀ = I

Here:

Mᵀ is the transpose of M.
I is the identity matrix of the same size.

Because of this property, every orthogonal matrix has a very useful consequence:

M⁻¹ = Mᵀ

This means that the inverse of an orthogonal matrix is obtained simply by transposing it. This property is central to rigid-body transformations in 3D geometry and computer graphics.

2...
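
As a worked check, here is a minimal NumPy sketch verifying both properties for a standard 2D rotation matrix, a classic example of an orthogonal matrix (the angle is an arbitrary choice):

```python
import numpy as np

theta = np.pi / 6
# a 2D rotation matrix; rotations are orthogonal
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(M @ M.T, np.eye(2)))        # M Mᵀ = I
print(np.allclose(np.linalg.inv(M), M.T))     # M⁻¹ = Mᵀ
```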

Rules of Logarithms

This article presents the rules of logarithms using complete, line-by-line derivations. Every identity is built directly from its exponential origin, without shortcuts, matching the structure of formal handwritten algebra.

1. Definition

We begin with fundamental exponent facts:

a⁰ = 1 ⇒ logₐ(1) = 0
a¹ = a ⇒ logₐ(a) = 1

Say: aᵐ = p. Then, by definition: logₐ(p) = m.

Raise both sides of aᵐ = p to the power 1/m (with m ≠ 0):

p^(1/m) = a

Therefore: logₚ(a) = 1/m. Since m = logₐ(p), we obtain:

logₐ(p) = 1 / logₚ(a)

2. Product Rule — Full Derivation

Say: aᵐ = p and aⁿ = q

Multiply: aᵐ · aⁿ = p · q
Using index addition: a^(m+n) = p · q
Taking logarithms: logₐ(p · q) = m + n
Substitute: logₐ(p · q) = logₐ(p) + logₐ(q)

3. Quotient Rule — Full Derivation

Say: aᵐ = p and aⁿ = q

Divide: aᵐ / aⁿ = p / q
Index subtraction gives: a^(m−n) = p / q
Taking logarithms: logₐ(p / q) = m − n
So: log...
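
These identities are easy to sanity-check numerically. A minimal Python sketch, with arbitrary illustrative values of a, p and q:

```python
import math

a, p, q = 2.0, 8.0, 32.0
log = lambda base, x: math.log(x, base)

# reciprocal identity: logₐ(p) = 1 / logₚ(a)
print(math.isclose(log(a, p), 1 / log(p, a)))              # True

# product rule: logₐ(p·q) = logₐ(p) + logₐ(q)
print(math.isclose(log(a, p * q), log(a, p) + log(a, q)))  # True

# quotient rule: logₐ(p/q) = logₐ(p) − logₐ(q)
print(math.isclose(log(a, p / q), log(a, p) - log(a, q)))  # True
```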

Finding the Inverse of a 2x2 Matrix from Scratch

This post shows a complete, step-by-step derivation of the inverse of a 2x2 matrix. Everything is expressed using stable, browser-safe ASCII formatting so the layout displays correctly on all devices and all templates.

FIRST PART. Start with the matrix equation:

A = [[a, b], [c, d]]
A^(-1) = [[w, x], [y, z]]

Goal: A * A^(-1) = I

This produces the column equations:

[aw + by, cw + dy]^T = [1, 0]^T
[ax + bz, cx + dz]^T = [0, 1]^T

Which gives the four equations:

aw + by = 1
cw + dy = 0
ax + bz = 0
cx + dz = 1

SECOND PART. Use the first two equations to find w.

aw + by = 1
cw + dy = 0

Multiply:

(ad)w + (bd)y = d    (first eq multiplied by d)
(bc)w + (bd)y = 0    (second eq multiplied by b)

Subtract:

(ad - bc)w = d
w = d / (ad - bc)    (ad - bc != 0)

THIRD PART. Use the next pair to find x.

ax + bz = 0
cx + dz = 1

Multiply:

(ad)x + (bd)z = 0
(bc)x + (bd)z = b

Subtract:

(ad - bc)...
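
Carrying the same elimination through for x, y and z yields the familiar closed form w = d/(ad-bc), x = -b/(ad-bc), y = -c/(ad-bc), z = a/(ad-bc). A minimal Python/NumPy sketch (the helper name inverse_2x2 is illustrative), checked against numpy.linalg.inv:

```python
import numpy as np

def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the derived formula."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular: ad - bc = 0")
    # w = d/det, x = -b/det, y = -c/det, z = a/det
    return np.array([[d, -b],
                     [-c, a]]) / det

A = np.array([[2.0, 3.0],
              [1.0, 4.0]])
print(np.allclose(inverse_2x2(*A.ravel()), np.linalg.inv(A)))  # True
```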

The Transpose, Symmetric Matrices, Identity Matrices and Zero Matrices

Matrices contain more structure than simple rows and columns. Many important ideas in linear algebra come from operations such as reflecting a matrix, recognising symmetry, and identifying matrices that leave vectors unchanged. This article covers four core ideas:

the transpose of a matrix
symmetric matrices
the identity matrix
the zero matrix

The Transpose of a Matrix

The transpose of a matrix is created by swapping its rows and columns. If a matrix A has an entry in row i, column j, then Aᵀ has the same entry in row j, column i.

Example:

A = [ 1 4 ]
    [ 2 5 ]
    [ 3 6 ]

Its transpose is:

Aᵀ = [ 1 2 3 ]
     [ 4 5 6 ]

If A is n × m, then Aᵀ is m × n. A square matrix stays square, but its entries reflect across the main diagonal.

Symmetric Matrices

A square matrix is symmetric when it equals its own transpose:

A = Aᵀ

This means the matrix does not ch...
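
A short NumPy sketch illustrating all four ideas (the matrix entries are arbitrary examples):

```python
import numpy as np

A = np.array([[1, 4],
              [2, 5],
              [3, 6]])
print(A.T)                      # rows and columns swapped: now 2 × 3

S = np.array([[1, 7],
              [7, 2]])
print(np.array_equal(S, S.T))   # True: S equals its own transpose

I = np.eye(2)                   # identity matrix
Z = np.zeros((2, 2))            # zero matrix
v = np.array([5.0, -3.0])
print(I @ v)                    # identity leaves v unchanged: [ 5. -3.]
print(Z @ v)                    # zero matrix sends v to the zero vector
```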

Normalised Vectors: A Clear and Intuitive Guide

Vectors can have any length, but many mathematical problems only depend on direction. To separate direction from magnitude, we normalise the vector. This produces a new vector of length 1 that points the same way as the original. Normalised vectors are central to geometry, physics, 3D graphics, transformations, and any setting where orientation matters. By working with a unit vector, calculations become simpler, cleaner, and more meaningful.

What Is a Normalised Vector?

A normalised vector is a vector with magnitude 1. It keeps its direction but loses its original size. Some simple unit vectors include:

(1, 0, 0) — magnitude 1
(0, 1, 0) — magnitude 1
(0, 0, 1) — magnitude 1

These are the standard basis vectors. In general, any non-zero vector can be transformed into a unit vector by dividing by its magnitude.

Normalising a Vector in 2 Dimensions

For a 2D vector (a, b), the length is:

√(a² + b²)

To n...
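
A minimal NumPy sketch of the normalisation step (the helper name normalise is illustrative):

```python
import numpy as np

def normalise(v):
    """Return the unit vector pointing the same way as v."""
    length = np.linalg.norm(v)      # √(a² + b² + ...)
    if length == 0:
        raise ValueError("cannot normalise the zero vector")
    return v / length

u = normalise(np.array([3.0, 4.0]))
print(u)                    # [0.6 0.8]
print(np.linalg.norm(u))    # 1.0
```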

Understanding Eigenvectors and Eigenvalues: A Geometric Perspective

Every linear transformation has a hidden structure. Most vectors are pushed into new directions when a matrix acts on them, but a handful of special vectors behave differently. These are the eigenvectors — directions that remain perfectly aligned with themselves, even after the transformation has been applied. To understand how a matrix works, you must understand these special directions.

What Is an Eigenvector?

An eigenvector of a matrix A is a non-zero vector x that satisfies the relation

A x = λ x

The number λ is the eigenvalue associated with x. This equation expresses a simple but striking fact: the transformation does not rotate the vector at all. The direction is preserved exactly. The only change is a scaling by the factor λ.

A positive eigenvalue stretches the vector. A value between 0 and 1 compresses it. A negative eigenvalue reverses the direction. But in every case, the vector re...
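
A quick NumPy sketch of the defining relation A x = λ x (the example matrix is an arbitrary choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# each column of `eigenvectors` satisfies A x = λ x for its eigenvalue
for lam, x in zip(eigenvalues, eigenvectors.T):
    print(lam, np.allclose(A @ x, lam * x))   # True for each eigenpair
```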

The Dot Product Identity and the Cosine Rule in ℝ³

In this article we derive the dot product identity

A · B = |A| × |B| × cos(θ)

and show how this identity leads directly to the cosine rule, using a combination of coordinate algebra and geometric interpretation.

1. Vectors in ℝ³

Let the vectors be:

A = (a₁, a₂, a₃)
B = (b₁, b₂, b₃)

Their difference is:

A − B = (a₁ − b₁, a₂ − b₂, a₃ − b₃)

The squared magnitude of this difference vector is:

|A − B|² = (a₁ − b₁)² + (a₂ − b₂)² + (a₃ − b₃)²

2. Expanding the Square of the Difference

Expand each component:

(a₁ − b₁)² = a₁² − 2a₁b₁ + b₁²
(a₂ − b₂)² = a₂² − 2a₂b₂ + b₂²
(a₃ − b₃)² = a₃² − 2a₃b₃ + b₃²

Adding these three expansions gives:

|A − B|² = (a₁² + a₂² + a₃²) + (b₁² + b₂² + b₃²) − 2(a₁b₁ + a₂b₂ + a₃b₃)

Recognise the squared magnitudes:

|A|² = a₁² + a₂² ...
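
A numerical sanity check of the identity and the cosine rule it leads to (the vectors are arbitrary examples):

```python
import numpy as np

A = np.array([1.0, 2.0, 2.0])
B = np.array([4.0, 0.0, 3.0])

# angle between A and B from the identity A · B = |A||B|cos(θ)
cos_theta = np.dot(A, B) / (np.linalg.norm(A) * np.linalg.norm(B))

# cosine rule: |A − B|² = |A|² + |B|² − 2|A||B|cos(θ)
lhs = np.linalg.norm(A - B) ** 2
rhs = (np.dot(A, A) + np.dot(B, B)
       - 2 * np.linalg.norm(A) * np.linalg.norm(B) * cos_theta)
print(np.isclose(lhs, rhs))   # True
```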

A Gentle Introduction to Function Notation

Understanding f : A → B — the language of modern mathematics.

One of the most powerful ideas in mathematics is the concept of a function. We usually meet it in the form f(x) = 2x + 1, but the structure behind this idea is far richer. The notation f : A → B captures the entire architecture of a function in a single line. In this article, we unpack this notation and explain exactly what it means, why it matters, and how it connects to the familiar expression f(x) = y.

1. What does f : A → B mean?

When we write f : A → B we are saying:

f is a function,
A is the domain — the set of inputs the function accepts,
B is the codomain — the set in which all outputs must lie.

In words: A function assigns to every element of the domain A exactly one output in the codomain B.

Two rules always hold for genuine functions:

Every input must have an output.
No input may have more than one output.

D...
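
A minimal Python sketch of the two rules, modelling a finite function as a dict (the sets A, B and the rule 2x + 1 follow the article; the dict representation itself is just an illustration):

```python
# a finite function f : A → B modelled as a dict
A = {0, 1, 2, 3}
B = {1, 3, 5, 7, 9}
f = {x: 2 * x + 1 for x in A}     # f(x) = 2x + 1

# rule 1: every input in the domain has an output
print(set(f.keys()) == A)          # True

# rule 2: each input maps to exactly one output (a dict enforces this),
# and every output lies in the codomain B
print(all(f[x] in B for x in A))   # True
```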

The Method of Differences — A Clean Proof of the Sum of Cubes

The method of differences is a remarkably elegant tool for evaluating finite sums. When each term of a series can be written in the form f(r+1) − f(r), the sum "collapses" — all interior terms cancel, leaving only a boundary expression. This behaviour is called a telescoping sum.

1) Telescoping Sums

Assume the general term uᵣ can be written as:

uᵣ = f(r+1) − f(r)

Then the finite sum from r = 1 to r = n becomes:

Σ uᵣ = Σ ( f(r+1) − f(r) )

To see what happens, write out a few terms:

u₁ = f(2) − f(1)
u₂ = f(3) − f(2)
u₃ = f(4) − f(3)
⋮
uₙ = f(n+1) − f(n)

When these are added, everything cancels except the first and last pieces:

Σ uᵣ = f(n+1) − f(1)

This is the essence of the method: interior structure disappears, leaving just the difference between the final and initial states.

2) A Classic Application — The Sum of Cubes

We will use this technique to prove the well-known ...
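
A short Python sketch checking both the telescoping identity and the well-known sum-of-cubes closed form Σ r³ = [n(n+1)/2]² that the article goes on to prove (the choice f(r) = r⁴ is an arbitrary illustration):

```python
n = 100

# telescoping: Σ_{r=1}^{n} (f(r+1) − f(r)) = f(n+1) − f(1), for any f
f = lambda r: r ** 4
lhs = sum(f(r + 1) - f(r) for r in range(1, n + 1))
print(lhs == f(n + 1) - f(1))   # True: interior terms cancel

# sum of cubes: Σ r³ = [n(n+1)/2]²
print(sum(r ** 3 for r in range(1, n + 1)) == (n * (n + 1) // 2) ** 2)  # True
```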