Linear Transformations

A linear transformation maps vectors from one space to another while preserving addition and scalar multiplication: T(u + v) = T(u) + T(v) and T(cu) = cT(u). In neural networks, each layer applies a weight matrix W to its input.
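A quick numeric check of these two properties (a minimal numpy sketch; the matrix W here is an arbitrary shear chosen for illustration):

  import numpy as np

  # Arbitrary 2x2 weight matrix: a horizontal shear.
  W = np.array([[1.0, 0.5],
                [0.0, 1.0]])

  u = np.array([1.0, 2.0])
  v = np.array([-3.0, 0.5])
  c = 2.5

  # The two defining properties of a linear map:
  assert np.allclose(W @ (u + v), W @ u + W @ v)  # preserves addition
  assert np.allclose(W @ (c * u), c * (W @ u))    # preserves scaling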

How to Use

  • Edit the matrix or pick a preset to set the target
  • Press Transform to animate the change
  • Toggle layers to show or hide the grid, vectors, and unit circle
  • Watch the metrics update during animation

The Formula

y = Wx

Each output component is a dot product of a row of W with the input:

y₁ = w₁₁x₁ + w₁₂x₂
y₂ = w₂₁x₁ + w₂₂x₂
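For example, with W = [[2, 1], [0, 3]] and x = (1, 2) (values chosen arbitrarily):

y₁ = 2·1 + 1·2 = 4
y₂ = 0·1 + 3·2 = 6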

Neural Network Connection

Each layer in a neural network applies an affine transformation y = Wx + b (a linear map plus a bias shift) followed by a non-linear activation. The weight matrix W determines how the input space is warped.

  • Scaling stretches/compresses features
  • Rotation mixes features together
  • Projection collapses dimensions (information loss)
  • Stacking layers composes multiple transformations (see the sketch after this list)
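A minimal sketch of two stacked layers (numpy assumed; the weights, biases, and ReLU activation are illustrative choices, not values from the demo):

  import numpy as np

  def layer(W, b, x):
      # One layer: affine map y = Wx + b, then a ReLU non-linearity.
      return np.maximum(0.0, W @ x + b)

  W1 = np.array([[1.0, -0.5], [0.5, 1.0]])
  b1 = np.array([0.1, 0.0])
  W2 = np.array([[2.0, 0.0], [0.0, 0.5]])
  b2 = np.array([0.0, -0.1])

  x = np.array([1.0, 2.0])
  h = layer(W1, b1, x)  # first warp of the input space
  y = layer(W2, b2, h)  # second warp, composed with the first

  # Without the non-linearity, two linear layers collapse into one,
  # since W2 @ (W1 @ x) == (W2 @ W1) @ x for every x.
  assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

This is why the activation matters: without it, any stack of layers is equivalent to a single linear transformation.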

Key Concepts

  • Determinant — signed area scaling factor (negative flips orientation). Zero means collapse to a lower dimension.
  • Eigenvalues — the factors by which the eigenvector directions are scaled.
  • Eigenvectors — directions that only get scaled, not rotated.
  • Singular values — the semi-axis lengths of the transformed unit circle (ellipse).
  • Rank — dimension of the image, the space the outputs actually span. Rank < 2 means information loss. The sketch below computes each of these quantities.
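A minimal numpy sketch, using an arbitrary symmetric example matrix (for symmetric matrices the singular values equal the absolute values of the eigenvalues):

  import numpy as np

  W = np.array([[2.0, 1.0],
                [1.0, 2.0]])  # arbitrary symmetric example

  det = np.linalg.det(W)                      # area scaling factor: 3.0
  eigvals, eigvecs = np.linalg.eig(W)         # eigenvalues 3 and 1 (order may vary)
  svals = np.linalg.svd(W, compute_uv=False)  # [3., 1.]: ellipse semi-axes
  rank = np.linalg.matrix_rank(W)             # 2: no information loss

  print(det, eigvals, svals, rank)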

Matrix Analysis

The metrics panel reports these values for the current matrix; with the identity matrix loaded it reads:

  • Determinant — 1.00
  • Type — Identity
  • Rank — 2
  • Eigenvalues — 1, 1
  • Singular Values — 1.00, 1.00
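As a cross-check, the same numpy calls reproduce the panel's readout for the identity matrix:

  import numpy as np

  I = np.eye(2)  # the identity matrix shown above
  print(np.linalg.det(I))                    # 1.0
  print(np.linalg.eigvals(I))                # [1. 1.]
  print(np.linalg.svd(I, compute_uv=False))  # [1. 1.]
  print(np.linalg.matrix_rank(I))            # 2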