[Interactive demo: Decision Boundary canvas and Network Architecture view, with Play/Step controls, an epoch counter, and a learning-rate setting (default 0.050).]

Feed-Forward Neural Network

A feed-forward neural network passes inputs through layers of interconnected neurons, each applying a weighted sum followed by a non-linear activation function. The network learns by adjusting its weights and biases via backpropagation to minimize classification error.
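
A rough sketch of that structure in Python with NumPy (illustrative only, not the demo's actual implementation; the layer sizes match the default 2-4-2 architecture):

  import numpy as np

  rng = np.random.default_rng(0)

  def init_layer(n_in, n_out):
      # Small random weights and zero biases (one common initialization).
      return rng.normal(0.0, 0.5, (n_out, n_in)), np.zeros(n_out)

  # Default 2-4-2 architecture: 2 inputs, 4 hidden neurons, 2 outputs.
  layers = [init_layer(2, 4), init_layer(4, 2)]

  def forward(x):
      a = x
      for W, b in layers[:-1]:
          a = np.tanh(W @ a + b)   # each hidden layer: weighted sum + activation
      W, b = layers[-1]
      z = W @ a + b                # output scores (logits)
      e = np.exp(z - z.max())
      return e / e.sum()           # softmax: class probabilities summing to 1

  print(forward(np.array([0.5, -1.0])))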

How to Use

  • Press Play to train the network continuously
  • Press Step to advance one epoch at a time
  • Click on the decision boundary canvas to add data points
  • Adjust the number of layers and neurons to change the architecture
  • Switch the activation function to compare ReLU, Sigmoid, and Tanh
  • Try the XOR or Spiral dataset to see why depth matters

Forward Pass

  1. Input features [x1, x2] enter the network
  2. Each layer computes the weighted sums z = Wx + b (one row of W per neuron)
  3. Apply the activation function: a = f(z)
  4. The output layer applies softmax to turn scores into class probabilities
  5. Predict the class with the highest probability (see the sketch below)
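
The same five steps as a self-contained Python sketch (the weight values here are arbitrary illustrations, not trained parameters):

  import numpy as np

  def softmax(z):
      e = np.exp(z - z.max())      # subtract the max for numerical stability
      return e / e.sum()

  # Step 1: input features [x1, x2] enter the network.
  x = np.array([1.0, -0.5])

  # Steps 2-3: hidden layer computes z = Wx + b, then applies ReLU.
  W1 = np.array([[0.2, -0.3], [0.5, 0.1], [-0.4, 0.7], [0.3, 0.3]])
  b1 = np.zeros(4)
  a1 = np.maximum(0.0, W1 @ x + b1)

  # Step 4: output layer scores, then softmax for class probabilities.
  W2 = np.array([[0.1, -0.2, 0.4, 0.0], [-0.3, 0.2, 0.1, 0.5]])
  b2 = np.zeros(2)
  p = softmax(W2 @ a1 + b2)

  # Step 5: predict the class with the highest probability.
  print(p, "-> class", p.argmax())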

Backpropagation

  1. Compute the cross-entropy loss at the output
  2. Propagate gradients backward through the layers via the chain rule
  3. Update the weights: W -= lr * dL/dW
  4. Repeat for each training sample (stochastic gradient descent, SGD); see the sketch below
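
One SGD step for a single sample, sketched under the same assumptions as above (2-4-2 shape, ReLU hidden units, one-hot labels); with softmax plus cross-entropy, the gradient at the output simplifies to p - y:

  import numpy as np

  rng = np.random.default_rng(1)
  W1, b1 = rng.normal(0.0, 0.5, (4, 2)), np.zeros(4)
  W2, b2 = rng.normal(0.0, 0.5, (2, 4)), np.zeros(2)
  lr = 0.05

  def sgd_step(x, y):                      # y is a one-hot label, e.g. [1, 0]
      global W1, b1, W2, b2
      # Forward pass, keeping intermediates for the backward pass.
      z1 = W1 @ x + b1
      a1 = np.maximum(0.0, z1)             # ReLU
      z2 = W2 @ a1 + b2
      p = np.exp(z2 - z2.max()); p /= p.sum()      # softmax

      loss = -np.log(p[y.argmax()])        # 1. cross-entropy loss at the output

      d2 = p - y                           # 2. softmax + cross-entropy gradient
      d1 = (W2.T @ d2) * (z1 > 0)          #    chain rule back through ReLU

      W2 -= lr * np.outer(d2, a1); b2 -= lr * d2   # 3. W -= lr * dL/dW
      W1 -= lr * np.outer(d1, x);  b1 -= lr * d1
      return loss

  # 4. Repeat for each training sample (SGD).
  print(sgd_step(np.array([0.5, -1.0]), np.array([1.0, 0.0])))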

Activation Functions

  • ReLU: f(x) = max(0, x)
  • Sigmoid: f(x) = 1 / (1 + e^-x)
  • Tanh: f(x) = (e^x - e^-x) / (e^x + e^-x)
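
Transcribed into Python together with the derivatives that backpropagation needs:

  import numpy as np

  def relu(x):
      return np.maximum(0.0, x)

  def relu_grad(x):
      return (x > 0).astype(float)         # 1 where x > 0, else 0

  def sigmoid(x):
      return 1.0 / (1.0 + np.exp(-x))

  def sigmoid_grad(x):
      s = sigmoid(x)
      return s * (1.0 - s)                 # derivative expressed via the output

  def tanh_grad(x):
      return 1.0 - np.tanh(x) ** 2         # np.tanh is the activation itself

Sigmoid and tanh saturate for large |x|, shrinking their gradients; ReLU avoids this for positive inputs, which is one reason it is a common default.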

Cross-Entropy Loss

For a binary label y in {0, 1} and a predicted probability p for the positive class:

L = -[y log(p) + (1-y) log(1-p)]

The loss measures how far the predicted probabilities are from the true labels: it is near zero when p matches y and grows sharply when a confident prediction is wrong.
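
A quick worked example with hypothetical values: for a true label y = 1, a confident correct prediction p = 0.9 costs -log(0.9) ≈ 0.105, while a confidently wrong p = 0.1 costs -log(0.1) ≈ 2.303.

  import numpy as np

  def cross_entropy(y, p):
      # Binary cross-entropy for a single example.
      return -(y * np.log(p) + (1 - y) * np.log(1 - p))

  print(cross_entropy(1, 0.9))   # confident and correct: ~0.105
  print(cross_entropy(1, 0.5))   # uncertain: ~0.693
  print(cross_entropy(1, 0.1))   # confident and wrong: ~2.303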

Weight Update

W = W - lr * dL/dW

Gradient descent moves the weights a small step in the negative gradient direction, the direction that reduces the loss; the learning rate lr controls the step size.
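
The same rule in isolation, on a toy one-parameter problem (illustrative, not the demo's code):

  # Toy example: minimize f(w) = (w - 3)^2, whose gradient is 2(w - 3).
  w, lr = 0.0, 0.1
  for _ in range(25):
      grad = 2.0 * (w - 3.0)       # dL/dw
      w -= lr * grad               # W = W - lr * dL/dW
  print(w)                         # approaches the minimum at w = 3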

Metrics

The Metrics panel shows the following readouts:

  • Epoch: number of training epochs completed (starts at 0)
  • Loss: current cross-entropy loss (shown as "-" before training)
  • Accuracy: fraction of points classified correctly (starts at 0%)
  • Architecture: layer sizes, e.g. 2-4-2 (2 inputs, 4 hidden neurons, 2 outputs)
  • Learning Rate: step size for weight updates (default 0.050)
  • Status: current training state (Ready before training starts)