Matrices

f is a linear-function (-map, -transformation, ...) iff
f(v + w) = f(v) + f(w),
f(k v) = k f(v)
(consequently
f(0) = 0.)
f is often called a linear-transformation if the input and output spaces of f are the same.
A linear transformation over a finite-dimensional vector-space can be represented by a matrix.
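
As a sketch of this correspondence (NumPy used purely for illustration; the matrix A and the test vectors are arbitrary examples), multiplication by a fixed matrix satisfies both conditions above:

import numpy as np

# An arbitrary 2×2 matrix; f(v) = A v is the corresponding linear map.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

def f(v):
    return A @ v

v = np.array([1.0, 2.0])
w = np.array([-3.0, 0.5])
k = 4.0

assert np.allclose(f(v + w), f(v) + f(w))        # f(v + w) = f(v) + f(w)
assert np.allclose(f(k * v), k * f(v))           # f(k v) = k f(v)
assert np.allclose(f(np.zeros(2)), np.zeros(2))  # consequently f(0) = 0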

Examples

Identity, I:
  I  =  [ 1  0 ]
        [ 0  1 ]

  I [ x ]   =   [ x ]
    [ y ]       [ y ]
 
Projection of <x, y>, onto the line through the origin whose unit normal is n = <nx, ny>:
  [ 1 - nx^2   -nx ny   ] [ x ]
  [ -nx ny     1 - ny^2 ] [ y ]

(<x, y> → <x, y> - (<x, y> . n) n = <x, y> - (x nx + y ny) n = <x - (x nx + y ny) nx, y - (x nx + y ny) ny>.)
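
A quick numerical check of this matrix (NumPy; the unit normal below is just an example value): build it as I - n n^T, compare with the entries above, and confirm that it is idempotent and annihilates n.

import numpy as np

n = np.array([0.6, 0.8])                 # example unit normal: nx = 0.6, ny = 0.8

# Projection onto the line whose unit normal is n:  Proj = I - n n^T
Proj = np.eye(2) - np.outer(n, n)
assert np.allclose(Proj, [[1 - n[0]**2, -n[0]*n[1]],
                          [-n[0]*n[1], 1 - n[1]**2]])   # matches the entries above

assert np.allclose(Proj @ Proj, Proj)    # projecting twice changes nothing
assert np.allclose(Proj @ n, 0)          # the normal direction is removed entirely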
 
Reflection of <x, y>, in the line through the origin whose unit normal is n = <nx, ny>:
  [ 1 - 2nx^2   -2nx ny    ] [ x ]
  [ -2nx ny     1 - 2ny^2  ] [ y ]
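
The same kind of check for the reflection matrix (illustrative values again): it is I - 2 n n^T, reflecting twice gives the identity, and the normal itself is reversed.

import numpy as np

n = np.array([0.6, 0.8])                 # example unit normal: nx = 0.6, ny = 0.8

# Reflection in the line whose unit normal is n:  Refl = I - 2 n n^T
Refl = np.eye(2) - 2 * np.outer(n, n)
assert np.allclose(Refl, [[1 - 2*n[0]**2, -2*n[0]*n[1]],
                          [-2*n[0]*n[1], 1 - 2*n[1]**2]])   # matches the entries above

assert np.allclose(Refl @ Refl, np.eye(2))   # reflecting twice is the identity
assert np.allclose(Refl @ n, -n)             # the normal direction is flipped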
 
Anti-clockwise rotation of <x, y>, by angle θ, about the origin:
  [ cosθ   -sinθ ] [ x ]
  [ sinθ    cosθ ] [ y ]
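
And a sketch for the rotation matrix (the angle below is arbitrary): it sends <1, 0> to <cosθ, sinθ>, rotating back by -θ undoes it, and rotations compose by adding their angles.

import numpy as np

def rot(theta):
    # Anti-clockwise rotation about the origin by angle theta (radians).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

theta = 0.3                              # example angle
R = rot(theta)

assert np.allclose(R @ [1.0, 0.0], [np.cos(theta), np.sin(theta)])
assert np.allclose(rot(-theta) @ R, np.eye(2))     # rotate forward then back
assert np.allclose(rot(0.5) @ rot(0.2), rot(0.7))  # angles add under composition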

Equations

Given constants a, b, c, d, p, q, r, and s, and variables w, x, y, and z, in matrices

  [ a  b ] [ w  x ]     [ p  q ]
  [ c  d ] [ y  z ]  =  [ r  s ]

i.e.,
a w + b y = p,
a x + b z = q,
c w + d y = r,
c x + d z = s.
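
A quick numerical sketch of this correspondence (the values below are arbitrary): multiplying M X out entry by entry gives exactly those four scalar equations.

import numpy as np

# Arbitrary example values for the constants and for the variables.
a, b, c, d = 2.0, -1.0, 0.5, 3.0
w, x, y, z = 1.0, 4.0, -2.0, 0.5

M = np.array([[a, b], [c, d]])
X = np.array([[w, x], [y, z]])
P = M @ X                                # the right-hand side [[p, q], [r, s]]

assert np.isclose(P[0, 0], a*w + b*y)    # p
assert np.isclose(P[0, 1], a*x + b*z)    # q
assert np.isclose(P[1, 0], c*w + d*y)    # r
assert np.isclose(P[1, 1], c*x + d*z)    # s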
 
The following are all 2×2 but generalize ...
 
  [ a  b ] [ w  x ]     [ p  q ]
  [ c  d ] [ y  z ]  =  [ r  s ]

is equivalent (wrt solving for w, x, y, and z) to

  [ b  a ] [ y  z ]     [ p  q ]
  [ d  c ] [ w  x ]  =  [ r  s ]
  (col swap in the coefficients; row swap in the unknowns)

and to

  [ c  d ] [ w  x ]     [ r  s ]
  [ a  b ] [ y  z ]  =  [ p  q ]
  (row swap, on the left and on the right)

and to

  [ ka  kb ] [ w  x ]     [ kp  kq ]
  [ c   d  ] [ y  z ]  =  [ r   s  ]
  (row multiplication by constant k)

and to

  [ a+kc  b+kd ] [ w  x ]     [ p+kr  q+ks ]
  [ c     d    ] [ y  z ]  =  [ r     s    ]
  (row addition (or subtraction))
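
A sketch checking these equivalences numerically (M, P, and k below are arbitrary examples): each operation, applied consistently, leaves the solution X of M X = P unchanged, and a column swap in M shows up as a row swap in X.

import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])
P = np.array([[5.0, 4.0],
              [10.0, 7.0]])
k = 2.5

X = np.linalg.solve(M, P)                # solution of M X = P

# Row swap in M and in P: same X.
assert np.allclose(np.linalg.solve(M[::-1], P[::-1]), X)

# Multiply a row of M and the same row of P by k: same X.
M1, P1 = M.copy(), P.copy()
M1[0] *= k
P1[0] *= k
assert np.allclose(np.linalg.solve(M1, P1), X)

# Add k times one row to another, in M and in P: same X.
M2, P2 = M.copy(), P.copy()
M2[0] += k * M2[1]
P2[0] += k * P2[1]
assert np.allclose(np.linalg.solve(M2, P2), X)

# Column swap in M corresponds to a row swap in X.
assert np.allclose(np.linalg.solve(M[:, ::-1], P), X[::-1])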
 
If we can work on M and P to reduce
M X = P
to an equivalent
I X = P',
using the relations above, then we can just read off the solution, P', for X.
X and P can be n×1 column vectors, or n×n matrices, etc.
For example, let P=I, the identity, and reduce
M X = I
to
I X = M⁻¹
giving the matrix inverse of M.
(Note that any column swaps cause row swaps in X, which must be undone to get the final answer.)
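
A sketch of that recipe (hand-rolled row operations rather than a library call, so the steps are exactly the relations above; the 2×2 example M is arbitrary): reduce M X = I to I X = M⁻¹ and check the result. Only row operations are used here, so there are no column swaps to undo.

import numpy as np

def gauss_jordan_inverse(M):
    # Reduce M X = I to I X = M^-1 using row swaps, row scaling and row addition.
    n = M.shape[0]
    A = M.astype(float)                  # working copy of M
    P = np.eye(n)                        # starts as I, ends as M^-1
    for col in range(n):
        # Row swap: bring the largest remaining entry into the pivot position.
        pivot = col + np.argmax(np.abs(A[col:, col]))
        A[[col, pivot]] = A[[pivot, col]]
        P[[col, pivot]] = P[[pivot, col]]
        # Row multiplication: scale the pivot row so the pivot becomes 1.
        scale = A[col, col]
        A[col] /= scale
        P[col] /= scale
        # Row addition/subtraction: clear the rest of the column.
        for row in range(n):
            if row != col:
                factor = A[row, col]
                A[row] -= factor * A[col]
                P[row] -= factor * P[col]
    return P                             # = M^-1

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])
M_inv = gauss_jordan_inverse(M)
assert np.allclose(M @ M_inv, np.eye(2))
assert np.allclose(M_inv, np.linalg.inv(M))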