Controls
Build a chain of transforms, then compare step-by-step vs. a single composed matrix — they always reach the same result.
Linear Transformations
A linear transformation maps vectors from one space to another while
preserving addition and scalar multiplication. In neural networks, each
layer applies a weight matrix W to its input.
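The two preserved properties can be checked numerically. A minimal sketch using NumPy; the matrix and vectors here are arbitrary illustrative values, not the demo's presets:

```python
import numpy as np

# Linearity means W(a*x + b*y) == a*(W x) + b*(W y)
# for any vectors x, y and scalars a, b.
W = np.array([[2.0, 1.0],
              [0.0, 3.0]])
x = np.array([1.0, -2.0])
y = np.array([0.5, 4.0])
a, b = 3.0, -1.5

lhs = W @ (a * x + b * y)        # transform the combination
rhs = a * (W @ x) + b * (W @ y)  # combine the transforms
print(np.allclose(lhs, rhs))     # True: linearity holds
```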
How to Use
- Edit the matrix or pick a preset to set the target
- Press Transform to animate the change
- Toggle layers to show/hide the grid, vectors, and unit circle

- Watch the metrics update during animation
The Formula
y = Wx
Each output component is a dot product of a row of W with the input:
y₁ = w₁₁x₁ + w₁₂x₂
y₂ = w₂₁x₁ + w₂₂x₂
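The component formulas above can be written out directly and compared against the matrix-vector product. The values of W and x below are arbitrary examples:

```python
import numpy as np

W = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([5.0, 6.0])

# Each output component is a row of W dotted with x.
y1 = W[0, 0] * x[0] + W[0, 1] * x[1]  # w11*x1 + w12*x2
y2 = W[1, 0] * x[0] + W[1, 1] * x[1]  # w21*x1 + w22*x2
print(y1, y2)   # 17.0 39.0
print(W @ x)    # [17. 39.], same result via the matrix product
```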
Neural Network Connection
Each layer in a neural network applies a linear transformation
y = Wx + b followed by a non-linear activation.
The weight matrix W determines how the input space is warped.
- Scaling stretches/compresses features
- Rotation mixes features together
- Projection collapses dimensions (information loss)
- Stacking layers composes multiple transformations
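A single layer of the form y = relu(Wx + b) can be sketched as below; the weights, bias, and ReLU choice are illustrative assumptions, not values from the visualizer:

```python
import numpy as np

def layer(x, W, b):
    # Linear transform, shift by bias, then a non-linear activation.
    return np.maximum(0.0, W @ x + b)  # ReLU

W = np.array([[1.0, -1.0],
              [0.5,  2.0]])
b = np.array([0.1, -0.2])
x = np.array([2.0, 1.0])

print(layer(x, W, b))  # [1.1 2.8]
```

Stacking such layers composes their transformations; the activation between them is what keeps the stack from collapsing into one matrix.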
Why Composition Matters
Applying W₁ then W₂ is identical to the single product matrix W₂·W₁:
W₂(W₁x) = (W₂ · W₁)x
This is why neural networks need non-linear activations. Without them, any stack of linear layers collapses to one matrix — making depth useless.
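The step-by-step vs. composed comparison the demo animates can be verified directly. A sketch with two example matrices (a rotation and a scaling, chosen arbitrarily):

```python
import numpy as np

W1 = np.array([[0.0, -1.0],
               [1.0,  0.0]])   # 90-degree rotation
W2 = np.array([[2.0,  0.0],
               [0.0,  0.5]])   # non-uniform scaling
x = np.array([1.0, 1.0])

step_by_step = W2 @ (W1 @ x)   # apply W1, then W2
composed = (W2 @ W1) @ x       # single product matrix; order is W2·W1
print(np.allclose(step_by_step, composed))  # True
```

Note the order: applying W1 first puts it on the right of the product.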
Try It
- Pick a preset above, click Add to Chain
- Repeat with a different preset
- Click Step-by-Step to watch each transform apply
- Click Composed to see the single-matrix shortcut reach the same result
Key Concepts
- Determinant — area scaling factor. Zero means collapse to a lower dimension.
- Eigenvalues — scaling factors along the eigenvector directions.
- Eigenvectors — directions that are only scaled, never rotated.
- Singular values — the semi-axis lengths of the ellipse the unit circle maps to.
- Rank — dimension of the image (column space). Rank < 2 means information loss.
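All of these quantities are one NumPy call each. A sketch using a shear matrix as an arbitrary example (not one of the demo's presets):

```python
import numpy as np

W = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # shear

det = np.linalg.det(W)                      # area scaling factor
eigvals = np.linalg.eigvals(W)              # eigenvalues (both 1 for a shear)
svals = np.linalg.svd(W, compute_uv=False)  # semi-axes of the output ellipse
rank = np.linalg.matrix_rank(W)             # dimension of the image

print(det, eigvals, svals, rank)
```

A shear preserves area (determinant 1) yet still distorts the unit circle, which is why its singular values differ from its eigenvalues.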
Matrix Analysis
| Metric | Value |
| --- | --- |
| Determinant | 1.00 |
| Type | Identity |
| Rank | 2 |
| Eigenvalues | 1, 1 |
| Singular Values | 1.00, 1.00 |