Posts

Showing posts from November, 2025

A Rectangle Construction for sin(α − β) and cos(α − β)

The angle–difference identities are: sin(α − β) = sinα cosβ − cosα sinβ cos(α − β) = cosα cosβ + sinα sinβ They can be seen geometrically using a rectangle OZYX with a few right-angled triangles inside it. All side lengths can be written in terms of sinα, cosα, sinβ and cosβ. First, draw a right triangle OPQ with hypotenuse OQ = 1 and angle β at O. By definition: OP = cosβ (horizontal side), QP = sinβ (vertical side). Next, use OP as the hypotenuse of another right triangle OPZ. The right angle is at Z, and the angle at P is α. The hypotenuse is OP = cosβ, so: OZ = sinα cosβ, ZP = cosα cosβ. In a similar way, use QP as the hypotenuse of a right triangle PQY. The right angle is at Y, and the angle at Q is α. The hypotenuse is QP = sinβ, so: QY = cosα sinβ, PY = sinα sinβ. Drop a vertical line from Y to the base at Z, and a horizontal line from Y to the left side at X. This makes OZYX a rectangle with: base OZ, height ZY. On the...
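The side lengths above can be spot-checked numerically. This is a minimal sketch in Python; the angles alpha and beta are arbitrary test values, not taken from the construction.

```python
import math

# Numerical check of the two angle-difference identities derived from
# the rectangle construction above.
alpha, beta = 1.1, 0.4

# Right-hand sides built from the rectangle's side lengths
sin_diff = math.sin(alpha) * math.cos(beta) - math.cos(alpha) * math.sin(beta)
cos_diff = math.cos(alpha) * math.cos(beta) + math.sin(alpha) * math.sin(beta)

assert abs(sin_diff - math.sin(alpha - beta)) < 1e-12
assert abs(cos_diff - math.cos(alpha - beta)) < 1e-12
```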

Opposite Angles in a Cyclic Quadrilateral Add Up to π Radians

Consider a cyclic quadrilateral: all four of its vertices lie on a circle. Join the centre of the circle to each vertex. This creates four isosceles triangles, each made from two radii and one side of the quadrilateral. Label the equal base angles of the four triangles w, x, y and z; each angle of the quadrilateral at the circumference is then the sum of two adjacent base angles. In each isosceles triangle, the angles at the base are equal, so the angles at the centre are: π − 2w, π − 2x, π − 2y, π − 2z. These four central angles meet at a point, so together they make one full turn: (π − 2w) + (π − 2x) + (π − 2y) + (π − 2z) = 2π. Rearranging gives: −2w − 2x − 2y − 2z + 4π = 2π, so 2w + 2x + 2y + 2z = 2π and therefore w + x + y + z = π. From this, the opposite-angle relations in the quadrilateral follow directly: w + z = π − (x + y), z + y = π − (w + x). So in any cyclic quadrilateral, each pair of opposite angles adds up to π radians.

A Geometric Way to Visualise sin(x + y) and cos(x + y)

The angle–addition identities for sine and cosine often appear as algebraic formulas, but they can also be understood by combining two right triangles in a simple geometric construction. The calculations for the side lengths follow directly from the definitions of sine and cosine. sin(x + y) = sin x cos y + cos x sin y cos(x + y) = cos x cos y − sin x sin y Start with a right triangle of angle y and hypotenuse 1. From basic trigonometry, its horizontal and vertical sides are: cos y and sin y. Next, attach a second right triangle with angle x. Its hypotenuse is the side of length cos y from the first triangle, so its adjacent and opposite sides become: adjacent = cos x · cos y opposite = sin x · cos y Likewise, if the first triangle's vertical side sin y is used as a hypotenuse in a similar way, it contributes: adjacent = cos x · sin y opposite = sin x · sin y When the horizontal components are combined, they give the expression for cos(x + y...
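The combination of side lengths described above can be sketched numerically. A minimal check in Python; x and y are arbitrary test angles:

```python
import math

# Build the two right triangles from the construction and check that the
# combined side lengths give sin(x + y) and cos(x + y).
x, y = 0.5, 0.3

# Sides of the first triangle: angle y, hypotenuse 1
adj1, opp1 = math.cos(y), math.sin(y)

# Attach a triangle with angle x on each of those sides and combine
sin_sum = math.sin(x) * adj1 + math.cos(x) * opp1   # sin x cos y + cos x sin y
cos_sum = math.cos(x) * adj1 - math.sin(x) * opp1   # cos x cos y - sin x sin y

assert abs(sin_sum - math.sin(x + y)) < 1e-12
assert abs(cos_sum - math.cos(x + y)) < 1e-12
```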

The Pilot FriXion Ball 0.7 mm: A Reliable Erasable Pen for Everyday Work

When I write mathematical proofs, clarity and precision matter. A single misplaced symbol can alter the entire meaning of an argument. That is why my writing tool must be reliable, smooth, and forgiving. After years of trying different pens, pencils and fineliners, I now rely on one tool for all my handwritten proofs: the Pilot FriXion Ball 0.7 mm. This pen has become an essential part of my workflow. It writes smoothly, erases cleanly and can be refilled easily, making it ideal for long mathematical sessions where neatness and accuracy are critical. Video Review (1 Minute) Here is a short demonstration showing the pens up close and how the ink writes and erases. Why This Pen Works Perfectly for Mathematical Proofs Proof-writing is a process of refinement. You revise, adjust, correct and restructure your ideas repeatedly. With most pens, each correction introduces visual noise — scribbles, crossings-out or rewritten pages. The FriXion Ball removes that issue entirely....

A Clear Introduction to Diagonal Matrices

A diagonal matrix is a square matrix in which every entry away from the main (leading) diagonal is zero. The leading diagonal runs from the top-left corner of the matrix to the bottom-right corner, and these diagonal entries are the only positions that may contain non-zero values. All off-diagonal entries must be zero. The diagonal entries themselves can be any real numbers, including zero. This strict structure is what makes diagonal matrices especially simple to analyse and compute with in linear algebra. Examples of Diagonal Matrices The general 2×2 diagonal matrix has the form:

(a 0)
(0 b)

The general 3×3 diagonal matrix has the form:

(a 0 0)
(0 b 0)
(0 0 c)

In both cases, the values on the leading diagonal (a, b, c, …) are the only entries that may be non-zero. Every position above or below this diagonal is fixed at 0. The General n×n Diagonal Matrix For an n×n dia...
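The structure described above is easy to express in code. A minimal sketch in Python; the function name `diagonal_matrix` is illustrative, not from the post:

```python
# Build an n-by-n diagonal matrix from a list of its diagonal entries.
# Every off-diagonal entry is zero; only the leading diagonal may be non-zero.
def diagonal_matrix(entries):
    n = len(entries)
    return [[entries[i] if i == j else 0 for j in range(n)] for i in range(n)]

D = diagonal_matrix([2, 5, 7])
assert D == [[2, 0, 0],
             [0, 5, 0],
             [0, 0, 7]]
```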

Why the Line ax + by = 0 Passes Through the Point (−b, a)

Why the Line ax + by = 0 Passes Through the Point (−b, a) In ℝ², the equation ax + by = 0 describes a line that is perpendicular to the vector (a, b). This article explains exactly why—and why that line always passes through the point (−b, a). 1. Start with the Vector (a, b) Consider the vector (a, b). To find a line perpendicular to it, we need a vector whose dot product with (a, b) is zero. Try the vector (−b, a): (a, b) · (−b, a) = a(−b) + b(a) = −ab + ab = 0 Therefore, (−b, a) is perpendicular to (a, b). 2. Any Scalar Multiple Also Works If (−b, a) is perpendicular to (a, b), then any multiple λ(−b, a) is also perpendicular: (a, b) · [λ(−b, a)] = λ[(a, b) · (−b, a)] = λ · 0 = 0 Let this perpendicular vector be (x, y). Then (x, y) = λ(−b, a). Every point on the line comes from a particular choice of λ. 3. Converting to an Equation Since (x, y) is perpendicular to (a, b), we have: (a, b) · (x, y) = 0 Expanding ...
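The two facts above can be checked directly for sample values. A minimal sketch in Python; a = 3, b = 4 are arbitrary test values:

```python
# For sample (a, b), confirm that (-b, a) satisfies ax + by = 0,
# i.e. the point lies on the line and is perpendicular to (a, b).
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

a, b = 3, 4
point = (-b, a)

assert a * point[0] + b * point[1] == 0   # (-b, a) lies on ax + by = 0
assert dot((a, b), point) == 0            # (-b, a) is perpendicular to (a, b)
```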

2×2 Orthogonal Matrix Mastery — A Generalised Construction

2×2 Orthogonal Matrix Mastery — A Generalised Construction Orthogonal matrices in two dimensions reveal one of the cleanest structures in linear algebra. A 2×2 matrix is orthogonal when its columns (and rows) satisfy two conditions: They are perpendicular (their dot product is zero); They have unit length (their magnitude is one). This article presents a clear generalisation: any pair of perpendicular vectors with equal magnitude can be normalised to form an orthogonal matrix. 1. Begin with two perpendicular vectors Let the first vector be: (a, b) A perpendicular vector can be chosen as: (−b, a) Their dot products confirm orthogonality: (a, b) · (−b, a) = −ab + ab = 0 for the columns, and (a, −b) · (b, a) = ab − ab = 0 for the rows of the resulting matrix. 2. Compute their shared magnitude Both vectors have the same length: |(a, b)| = |(−b, a)| = √(a² + b²) We can therefore normalise each one by dividing by √(a² + b²). 3. Form the matrix using the normalised vectors Place the two normalised vectors...
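The construction can be sketched end to end in a few lines. A minimal Python check, with a = 3, b = 4 as arbitrary test values:

```python
import math

# Normalise (a, b) and (-b, a), place them as the columns of a 2x2 matrix M,
# then verify the defining property M M^T = I.
a, b = 3.0, 4.0
r = math.hypot(a, b)                 # shared magnitude sqrt(a^2 + b^2)
M = [[a / r, -b / r],
     [b / r,  a / r]]

# Compute M times its transpose
MMT = [[sum(M[i][k] * M[j][k] for k in range(2)) for j in range(2)]
       for i in range(2)]

assert all(abs(MMT[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```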

Orthogonal Matrices and Mutually Orthogonal Vectors

Orthogonal Matrices and Mutually Orthogonal Vectors Orthogonal matrices appear naturally throughout linear algebra, geometry, physics, and computer graphics. They preserve lengths and angles (and, when the determinant is +1, orientation), which makes them fundamental in describing rotations and rigid motions in three-dimensional space. This article provides a clear and carefully structured explanation of what orthogonal matrices are, why they matter, and how to verify that a given matrix is orthogonal. 1. Definition of an Orthogonal Matrix Let M be an n × n square matrix. M is called orthogonal if it satisfies: M Mᵀ = I Here: Mᵀ is the transpose of M. I is the identity matrix of the same size. Because of this property, every orthogonal matrix has a very useful consequence: M⁻¹ = Mᵀ This means that the inverse of an orthogonal matrix is obtained simply by transposing it. This property is central to rigid-body transformations in 3D geometry and computer graphics. 2...

Linear Transformations in ℝ³ and 3×3 Matrices

Linear Transformations in ℝ³ and 3×3 Matrices Matrices give us a compact way to describe linear transformations in three-dimensional space. A linear transformation is a mapping T : ℝ³ → ℝ³ that sends a point with position vector (x, y, z) to another point, according to a rule with two key properties. What Makes a Transformation Linear? A transformation T : ℝ³ → ℝ³ is called linear if, for all real numbers λ and all vectors (x, y, z) in ℝ³, T(λx, λy, λz) = λ T(x, y, z), and for all vectors (x₁, y₁, z₁) and (x₂, y₂, z₂) in ℝ³, T(x₁ + x₂, y₁ + y₂, z₁ + z₂) = T(x₁, y₁, z₁) + T(x₂, y₂, z₂). The point that (x, y, z) is sent to is called the image of (x, y, z) under T. The Standard Basis Vectors To find the matrix that represents a particular transformation, it is enough to know what happens to three special vectors, called the standard basis for ℝ³: î = (1, 0, 0) ĵ = (0, 1, 0) k̂ = (0, 0, 1) Once we know the images of î, ĵ and k̂, th...
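The principle above — the matrix of a linear map is determined by the images of î, ĵ and k̂ — can be sketched concretely. A minimal Python example; the transformation T (scale x by 2, swap y and z) is an illustrative choice, not from the post:

```python
# Form the matrix of a linear map column-by-column from the images of the
# standard basis vectors, then check it agrees with the map itself.
def T(v):
    x, y, z = v
    return (2 * x, z, y)

cols = [T((1, 0, 0)), T((0, 1, 0)), T((0, 0, 1))]       # images of i, j, k
M = [[cols[j][i] for j in range(3)] for i in range(3)]  # columns -> matrix

def apply(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

assert apply(M, (1, 2, 3)) == T((1, 2, 3))
```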

Rules of Logarithms

This article presents the rules of logarithms using complete, line-by-line derivations. Every identity is built directly from its exponential origin, without shortcuts, matching the structure of formal handwritten algebra. 1. Definition We begin with fundamental exponent facts: a⁰ = 1 ⇒ logₐ(1) = 0 a¹ = a ⇒ logₐ(a) = 1 Say: aᵐ = p Then, by definition: logₐ(p) = m Raise both sides of aᵐ = p to the power 1/m (with m ≠ 0): p^(1/m) = a Therefore: logₚ(a) = 1/m Since m = logₐ(p), we obtain: logₐ(p) = 1 / logₚ(a) 2. Product Rule — Full Derivation Say: aᵐ = p and aⁿ = q Multiply: aᵐ · aⁿ = p · q Using index addition: a^(m+n) = p · q Taking logarithms: logₐ(p · q) = m + n Substitute: logₐ(p · q) = logₐ(p) + logₐ(q) 3. Quotient Rule — Full Derivation Say: aᵐ = p and aⁿ = q Divide: aᵐ / aⁿ = p / q Index subtraction gives: a^(m−n) = p / q Taking logarithms: logₐ(p / q) = m − n So: log...
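The derived rules can be spot-checked numerically. A minimal Python sketch; the base a and arguments p, q are arbitrary test values:

```python
import math

# Numerical check of the product, quotient and base-swap rules.
a, p, q = 2.0, 8.0, 32.0
log = lambda base, x: math.log(x) / math.log(base)  # log_base(x)

assert abs(log(a, p * q) - (log(a, p) + log(a, q))) < 1e-12  # product rule
assert abs(log(a, p / q) - (log(a, p) - log(a, q))) < 1e-12  # quotient rule
assert abs(log(a, p) - 1 / log(p, a)) < 1e-12                # base-swap rule
```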

Finding the Inverse of a 2x2 Matrix from Scratch

Finding the Inverse of a 2x2 Matrix from Scratch This post shows a complete, step-by-step derivation of the inverse of a 2x2 matrix. Everything is expressed using stable, browser-safe ASCII formatting so the layout displays correctly on all devices and all templates. FIRST PART. Start with the matrix equation: A = [[a, b], [c, d]] A^(-1) = [[w, x], [y, z]] Goal: A * A^(-1) = I This produces the column equations: [aw + by, cw + dy]^T = [1, 0]^T [ax + bz, cx + dz]^T = [0, 1]^T Which gives the four equations: aw + by = 1 cw + dy = 0 ax + bz = 0 cx + dz = 1 SECOND PART. Use the first two equations to find w. aw + by = 1 cw + dy = 0 Multiply: (ad)w + (bd)y = d (first eq multiplied by d) (bc)w + (bd)y = 0 (second eq multiplied by b) Subtract: (ad - bc)w = d w = d / (ad - bc) (ad - bc != 0) THIRD PART. Use the next pair to find x. ax + bz = 0 cx + dz = 1 Multiply: (ad)x + (bd)z = 0 (bc)x + (bd)z = b Subtract: (ad - bc)...
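The formulas derived part by part above assemble into a single function. A minimal Python sketch of the result, assuming the determinant ad - bc is non-zero (the function name `inverse_2x2` is illustrative):

```python
# Inverse of [[a, b], [c, d]] via the entries derived above:
# w = d/(ad-bc), x = -b/(ad-bc), y = -c/(ad-bc), z = a/(ad-bc).
def inverse_2x2(a, b, c, d):
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible")
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

A = [[4, 7], [2, 6]]                      # det = 24 - 14 = 10
A_inv = inverse_2x2(4, 7, 2, 6)

# Check A * A_inv = I
I = [[sum(A[i][k] * A_inv[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
assert all(abs(I[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```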

Converting the Vector Equation of a Line into Cartesian Form

Converting the Vector Equation of a Line into Cartesian Form A straight line in three-dimensional space can be expressed using vectors. One important vector form is (𝐑 − 𝐀) × 𝐁 = 0 This equation states that the displacement vector from a fixed point 𝐀 to a general point 𝐑 is parallel to the direction vector 𝐁. Two non-zero vectors have a zero cross product precisely when they are parallel. From this fact, the Cartesian (symmetric) equation of the line can be derived. 1. Substituting Coordinate Vectors The general point on the line is written as 𝐑 = (x, y, z) The fixed point is 𝐀 = (x₁, y₁, z₁) The direction vector is 𝐁 = (l, m, n) Substituting these into the vector equation yields: ((x, y, z) − (x₁, y₁, z₁)) × (l, m, n) = 0 which simplifies to: (x − x₁, y − y₁, z − z₁) × (l, m, n) = 0 2. Using the Condition for a Zero Cross Product If two non-zero vectors have a zero cross product, then one is a scalar multiple of the other. T...

The Difference Between the Lines 𝐀 + t𝐁 and 𝐁 + t(𝐀 − 𝐁)

The Difference Between the Lines 𝐀 + t𝐁 and 𝐁 + t(𝐀 − 𝐁) A line in vector form is defined by two components: a base point that determines its position, and a direction vector that determines its orientation. Two expressions may involve the same vectors but still represent completely different lines when either the base point or the direction vector changes. The expressions L₁: 𝐀 + t𝐁 L₂: 𝐁 + t(𝐀 − 𝐁) provide a clear example of how distinct lines arise from different vector components. 1. Line L₁: 𝐀 + t𝐁 The expression 𝐀 + t𝐁 describes a line passing through the point represented by vector 𝐀 with direction vector 𝐁. As the real parameter t varies, the expression generates all points on the line. Base point: 𝐀 Direction vector: 𝐁 This is the line through 𝐀 directed along 𝐁. 2. Line L₂: 𝐁 + t(𝐀 − 𝐁) The expression 𝐁 + t(𝐀 − 𝐁) describes a different line. Its base point is 𝐁, and its direction vector is t...

Reversing a Linear Transformation Using an Inverse Matrix

Reversing a Linear Transformation Using an Inverse Matrix In linear algebra, any invertible linear transformation can be reversed. The key tool that makes this possible is the inverse matrix. If a matrix transforms a vector into another, the inverse matrix recovers the original. 1. The Transformation Equation Suppose a vector x₁ is transformed into a vector x₂ using a matrix T: T x₁ = x₂ This equation describes how x₁ is mapped to x₂. To reverse the transformation, we must apply the inverse matrix. 2. Applying the Inverse Matrix Multiply both sides of the equation on the left by T⁻¹: T⁻¹ (T x₁) = T⁻¹ x₂ Using the fundamental identity: T⁻¹ T = I the expression simplifies directly to: x₁ = T⁻¹ x₂ 3. Interpretation This tells us that the original vector is obtained by applying the inverse matrix to the transformed vector: Original vector = Inverse matrix × Image vector As long as the matrix is invertible, the reverse transformation always exist...

The Transpose, Symmetric Matrices, Identity Matrices and Zero Matrices

The Transpose, Symmetric Matrices, Identity Matrices and Zero Matrices Matrices contain more structure than simple rows and columns. Many important ideas in linear algebra come from operations such as reflecting a matrix, recognising symmetry, and identifying matrices that leave vectors unchanged. This article covers four core ideas: the transpose of a matrix, symmetric matrices, the identity matrix, the zero matrix. The Transpose of a Matrix The transpose of a matrix is created by swapping its rows and columns. If a matrix A has an entry in row i, column j, then Aᵀ has the same entry in row j, column i. Example:

A = [ 1 4 ]
    [ 2 5 ]
    [ 3 6 ]

Its transpose is:

Aᵀ = [ 1 2 3 ]
     [ 4 5 6 ]

If A is n × m, then Aᵀ is m × n. A square matrix stays square, but its entries reflect across the main diagonal. Symmetric Matrices A square matrix is symmetric when it equals its own transpose: A = Aᵀ This means the matrix does not ch...

Normalised Vectors: A Clear and Intuitive Guide

Normalised Vectors: A Clear and Intuitive Guide Vectors can have any length, but many mathematical problems only depend on direction. To separate direction from magnitude, we normalise the vector. This produces a new vector of length 1 that points the same way as the original. Normalised vectors are central to geometry, physics, 3D graphics, transformations, and any setting where orientation matters. By working with a unit vector, calculations become simpler, cleaner, and more meaningful. What Is a Normalised Vector? A normalised vector is a vector with magnitude 1. It keeps its direction but loses its original size. Some simple unit vectors include: (1, 0, 0) — magnitude 1 (0, 1, 0) — magnitude 1 (0, 0, 1) — magnitude 1 These are the standard basis vectors. In general, any non-zero vector can be transformed into a unit vector by dividing by its magnitude. Normalising a Vector in 2 Dimensions For a 2D vector (a, b), the length is: √(a² + b²) To n...
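The divide-by-magnitude step can be sketched directly. A minimal Python example; the function name `normalise` is illustrative:

```python
import math

# Divide a non-zero vector by its magnitude to get a unit vector
# pointing the same way.
def normalise(v):
    length = math.sqrt(sum(c * c for c in v))
    if length == 0:
        raise ValueError("cannot normalise the zero vector")
    return tuple(c / length for c in v)

u = normalise((3, 4))                    # 2D example: length is 5
assert u == (0.6, 0.8)
assert abs(math.hypot(*u) - 1) < 1e-12   # the result has magnitude 1
```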

Understanding Eigenvectors and Eigenvalues: A Geometric Perspective

Understanding Eigenvectors and Eigenvalues: A Geometric Perspective Every linear transformation has a hidden structure. Most vectors are pushed into new directions when a matrix acts on them, but a handful of special vectors behave differently. These are the eigenvectors — directions that remain perfectly aligned with themselves, even after the transformation has been applied. To understand how a matrix works, you must understand these special directions. What Is an Eigenvector? An eigenvector of a matrix A is a non-zero vector x that satisfies the relation A x = λ x The number λ is the eigenvalue associated with x. This equation expresses a simple but striking fact: the transformation does not rotate the vector at all. The direction is preserved exactly. The only change is a scaling by the factor λ. An eigenvalue greater than 1 stretches the vector. A value between 0 and 1 compresses it. A negative eigenvalue reverses the direction. But in every case, the vector re...
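The relation A x = λ x is easy to verify by hand for a small example. A minimal Python sketch; the matrix A (which stretches along (1, 1) and fixes (1, −1)) is an illustrative choice:

```python
# Verify A x = lambda x for a matrix with two obvious eigenvectors.
A = [[2, 1],
     [1, 2]]

def apply(A, x):
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

assert apply(A, [1, 1]) == [3, 3]      # (1, 1) has eigenvalue 3: stretched
assert apply(A, [1, -1]) == [1, -1]    # (1, -1) has eigenvalue 1: fixed
```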

The Dot Product Identity and the Cosine Rule in ℝ³

The Dot Product Identity and the Cosine Rule in ℝ³ In this article we derive the dot product identity A · B = |A| × |B| × cos(θ) and show how this identity leads directly to the cosine rule, using a combination of coordinate algebra and geometric interpretation. 1. Vectors in ℝ³ Let the vectors be: A = (a₁, a₂, a₃) B = (b₁, b₂, b₃) Their difference is: A − B = (a₁ − b₁, a₂ − b₂, a₃ − b₃) The squared magnitude of this difference vector is: |A − B|² = (a₁ − b₁)² + (a₂ − b₂)² + (a₃ − b₃)². 2. Expanding the Square of the Difference Expand each component: (a₁ − b₁)² = a₁² − 2a₁b₁ + b₁² (a₂ − b₂)² = a₂² − 2a₂b₂ + b₂² (a₃ − b₃)² = a₃² − 2a₃b₃ + b₃² Adding these three expansions gives: |A − B|² = (a₁² + a₂² + a₃²) + (b₁² + b₂² + b₃²) − 2(a₁b₁ + a₂b₂ + a₃b₃). Recognise the squared magnitudes: |A|² = a₁² + a₂² ...
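The identity can be checked numerically by recovering θ from it and confirming consistency. A minimal Python sketch; the vectors A and B are arbitrary test values:

```python
import math

# Check A . B = |A| |B| cos(theta) for sample vectors in R^3.
A = (1.0, 2.0, 2.0)
B = (2.0, 1.0, -2.0)

dot = sum(x * y for x, y in zip(A, B))
mag = lambda v: math.sqrt(sum(c * c for c in v))

# Recover theta from the identity, then re-check the dot product.
theta = math.acos(dot / (mag(A) * mag(B)))
assert abs(dot - mag(A) * mag(B) * math.cos(theta)) < 1e-12
```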

Homogeneous Coordinates: A Simple and Intuitive Primer

Homogeneous Coordinates: A Simple and Intuitive Primer In ordinary geometry, we use familiar coordinates such as (x, y) in 2D or (x, y, z) in 3D. These work well, but they have one major limitation: not all geometric transformations fit neatly into this system—especially translations and perspective projections. To unify everything into one clean mathematical framework, we introduce homogeneous coordinates. They provide a simple way to treat every transformation—from translations to perspective projection—using only matrix multiplication. 1. Why Do We Need Something New? In ordinary coordinates: rotations are matrices, scalings are matrices, shears are matrices, translations are not matrices (translation is not a linear map). Translation is the “odd one out.” This creates friction in computer graphics, robotics, and projective geometry, where we want one system that handles everything the same way. Homogeneous coordinates fix this by adding one extra coordinate. 2. The Bas...
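The "extra coordinate" trick can be sketched for 2D translation. A minimal Python example, assuming the usual convention that a 2D point (x, y) is written as (x, y, 1):

```python
# Translation by (tx, ty) as a 3x3 matrix acting on homogeneous coordinates.
def translation_matrix(tx, ty):
    return [[1, 0, tx],
            [0, 1, ty],
            [0, 0, 1]]

def apply(M, p):
    x, y, w = p
    return tuple(M[i][0] * x + M[i][1] * y + M[i][2] * w for i in range(3))

p = (2, 3, 1)                          # the point (2, 3) in homogeneous form
assert apply(translation_matrix(5, -1), p) == (7, 2, 1)
```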

What Is an Isometry?

What Is an Isometry? In geometry, some transformations distort shapes by stretching, squashing, or bending them. Others preserve the shape perfectly. These distortion-free transformations are called isometries. An isometry keeps every distance between points exactly the same. The object may move, rotate, or flip, but its size and structure remain unchanged. 1. The Core Idea A transformation f is an isometry if: distance( f(x), f(y) ) = distance( x, y ) for every pair of points x and y. This is the mathematical way of saying: nothing is stretched, compressed, or distorted. 2. A Simple Real-Life Analogy Place a phone on a desk. Slide it. Rotate it. Flip it. It remains the same phone—same size, same shape, same geometry. Each of those motions is an isometry. 3. What Isometries Never Do An isometry does not: stretch or shrink a shape, shear it diagonally, bend or curve it, change angles or proportions. It behaves exactly like movi...

The Shortest Distance Between Two Skew Lines in ℝ³

The Shortest Distance Between Two Skew Lines in ℝ³ This post derives, from first principles, a vector formula for the shortest distance between two skew lines in ℝ³. The argument uses only the definitions and basic properties of the dot product and cross product; no higher results are assumed. 1. Vector Equations of the Lines Let the two lines be given in vector form by r = a + λb r = c + μd where: a , c are position vectors of fixed points on each line, b , d are non-zero direction vectors, λ, μ ∈ ℝ are parameters. The lines are skew if they are neither parallel nor intersecting. Our goal is to find a closed-form expression for the minimum distance between them. 2. Direction of the Common Perpendicular The segment that realises the shortest distance lies along a line perpendicular to both direction vectors b and d . A vector perpendicular to both is given by their cross product: b × d. Assuming b and d are not parallel, b × d ≠ 0. A unit ...
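The construction described above leads to the formula distance = |(c − a) · (b × d)| / |b × d|. A minimal Python sketch of that formula, with two simple skew lines as test data (the x-axis, and the line through (0, 0, 1) along the y-axis, which are distance 1 apart):

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# Shortest distance between r = a + lambda*b and r = c + mu*d,
# assuming b and d are not parallel (so b x d is non-zero).
def skew_distance(a, b, c, d):
    n = cross(b, d)                          # common perpendicular direction
    mag = math.sqrt(sum(x * x for x in n))
    ac = tuple(ci - ai for ai, ci in zip(a, c))
    return abs(sum(x * y for x, y in zip(ac, n))) / mag

assert skew_distance((0, 0, 0), (1, 0, 0), (0, 0, 1), (0, 1, 0)) == 1.0
```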

Area of a Triangle in R3 in Terms of Vectors

Area of a Triangle in R3 in Terms of Vectors This post presents a complete derivation of the area of triangle ABC in R3 using vectors alone. The result follows directly from the definition of the cross product and basic algebraic properties. No geometric shortcuts or previously established vector identities are assumed. Every step is obtained from first principles. 1. Vector Representation of the Triangle Let A, B and C be points in R3 with position vectors A, B and C. Two sides of triangle ABC are represented by: B - A C - A If theta is the angle between these vectors, the classical formula for the area of the triangle is: Area = (1/2)*|B - A|*|C - A|*sin(theta) 2. Magnitude of the Cross Product For any vectors U and V in R3, the magnitude of the cross product is: |U x V| = |U|*|V|*sin(theta) Applying this to U = B - A and V = C - A gives: | (B - A) x (C - A) | = |B - A|*|C - A|*sin(theta) Substituting into the area formula yields: Area(ABC) = (1/2)*| (B ...
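The final formula, Area = (1/2)|(B − A) × (C − A)|, can be sketched and tested on a triangle whose area is known. A minimal Python example:

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# Area of triangle ABC in R3 via the cross product of two side vectors.
def triangle_area(A, B, C):
    u = tuple(b - a for a, b in zip(A, B))   # B - A
    v = tuple(c - a for a, c in zip(A, C))   # C - A
    n = cross(u, v)
    return 0.5 * math.sqrt(sum(x * x for x in n))

# Right triangle with legs 3 and 4 in the xy-plane: area (1/2)*3*4 = 6.
assert triangle_area((0, 0, 0), (3, 0, 0), (0, 4, 0)) == 6.0
```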

Full Coordinate Derivation of (B - A) x (C - A) in R3

Full Coordinate Derivation of (B - A) x (C - A) in R3 This derivation shows every algebraic step involved in expanding the cross product (B - A) x (C - A) using only coordinates. No vector identities are assumed in advance. All identities that appear at the end arise directly from the coordinate formula and elementary algebra. This method provides complete transparency and is the foundation for many geometric and analytic results involving the cross product. 1. Vectors and Cross Product Formula A = (a1, a2, a3) B = (b1, b2, b3) C = (c1, c2, c3) For vectors U = (u1, u2, u3) and V = (v1, v2, v3), the cross product is defined in coordinates by: U x V = ( u2*v3 - u3*v2, u3*v1 - u1*v3, u1*v2 - u2*v1 ) This is the only formula used. Every identity later in the derivation follows from substituting coordinates into this definition. 2. Basic Cross Products Needed for the Expansion A x A A x A = ( a2*a3 - a3*a2, a3*a1 - a1*a3, a1*a2 - a2*a1 ) = (0, 0, 0) A vector...

Understanding Skew Lines in Three-Dimensional Space

Understanding Skew Lines in Three-Dimensional Space Why they exist, how they behave, and how to analyse them rigorously. In two-dimensional geometry, any two lines must either intersect or be parallel. There is no third possibility because both lines are trapped inside a single plane. However, in three-dimensional space, a new type of configuration becomes possible: two lines that do not meet, are not parallel, and do not lie in the same plane. These are called skew lines. They represent one of the first truly three-dimensional concepts in mathematics and geometry. 1. What Are Skew Lines? Two lines L₁ and L₂ in 3D are skew if: they do not intersect, they are not parallel, and they are not coplanar (there is no single plane that contains both). This gives the mathematical definition: L₁ and L₂ are skew ⇔ (1) L₁ ∩ L₂ = ∅ (2) L₁ is not parallel to L₂ (3) No plane contains both lines Skew lines cannot exist in 2D. They are a purely three-dim...

A Deep Introduction to Set Theory

A Deep Introduction to Set Theory The foundational language of all modern mathematics. Set theory is the backbone of mathematics. Every branch — algebra, geometry, analysis, topology, probability, algorithms, mathematical physics — rests on the structures introduced here. This article gives a deeper, more complete introduction to the core objects and operations of set theory: sets, elements, subsets, operations, relations, functions, power sets and cardinality. 1. What Is a Set? A set is a collection of distinct objects, called elements, grouped together into a single mathematical entity. We denote sets with braces: A = {1, 2, 3}, B = {x, y, z}. Membership is written as: x ∈ A (x is an element of A) x ∉ A (x is not an element of A) Important points: Sets care about membership, not order: {1, 2} = {2, 1}. Sets do not contain duplicates: {1, 1, 1, 2} = {1, 2}. Sets can contain abstract objects: numbers, functions, points, vectors, sh...
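The membership and equality rules listed above map directly onto Python's built-in set type. A minimal sketch:

```python
# Membership, order-insensitivity, and collapse of duplicates,
# using Python's built-in sets.
A = {1, 2, 3}
B = {1, 1, 1, 2}              # duplicates collapse on construction

assert 2 in A and 5 not in A  # membership: x in A, x not in A
assert {1, 2} == {2, 1}       # sets care about membership, not order
assert B == {1, 2}            # sets do not contain duplicates
```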