[Interactive demo panels: Decision Boundary, Network Architecture, Controls, Epoch counter]
Feed-Forward Neural Network
A feed-forward neural network passes inputs through layers of interconnected neurons, each applying a weighted sum followed by a non-linear activation function. The network learns by adjusting weights via backpropagation to minimize classification error.
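As a concrete sketch of that structure, the demo's default 2-4-2 architecture could be laid out like this in NumPy (the seed and initialization scale are illustrative assumptions, not the demo's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)           # illustrative seed
layer_sizes = [2, 4, 2]                  # 2 inputs, 4 hidden neurons, 2 classes

# One weight matrix and one bias vector per connection between layers.
weights = [rng.normal(0.0, 0.5, (n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]
```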
How to Use
- Press Play to train the network continuously
- Press Step to advance one epoch at a time
- Click on the decision boundary canvas to add points
- Adjust layers/neurons to change the architecture
- Switch the activation function to compare ReLU, Sigmoid, and Tanh
- Try the XOR or Spiral datasets to see why depth matters
Forward Pass
- Input features [x1, x2] enter the network
- Each neuron computes z = Wx + b
- Apply the activation: a = f(z)
- The output layer applies softmax to produce class probabilities
- Predict the class with the highest probability (sketched in code below)
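A minimal sketch of these steps, assuming NumPy, a tanh hidden layer, and illustrative weights and input:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0.0, 0.5, (4, 2)), np.zeros(4)   # input -> hidden (2-4)
W2, b2 = rng.normal(0.0, 0.5, (2, 4)), np.zeros(2)   # hidden -> output (4-2)

def softmax(z):
    e = np.exp(z - z.max())        # subtract max for numerical stability
    return e / e.sum()

x = np.array([0.5, -1.0])          # input features [x1, x2]
h = np.tanh(W1 @ x + b1)           # z = Wx + b, then a = f(z)
p = softmax(W2 @ h + b2)           # output layer: class probabilities
print("probabilities:", p, "predicted class:", p.argmax())
```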
Backpropagation
- Compute cross-entropy loss at output
- Propagate gradients backward through layers
- Update weights: W -= lr * dL/dW
- Repeat for each training sample (SGD); a single step is sketched below
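One backpropagation step for a single sample might look like this, assuming the same 2-4-2 tanh network as above; it uses the standard identity that for softmax followed by cross-entropy, the gradient at the logits is simply p - y:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0.0, 0.5, (4, 2)), np.zeros(4)
W2, b2 = rng.normal(0.0, 0.5, (2, 4)), np.zeros(2)
lr = 0.05                                 # matches the demo's default rate

x = np.array([0.5, -1.0])                 # one training sample
y = np.array([1.0, 0.0])                  # its one-hot true label

# Forward pass, caching the values the backward pass needs.
h = np.tanh(W1 @ x + b1)
logits = W2 @ h + b2
e = np.exp(logits - logits.max())
p = e / e.sum()

# Backward pass: for softmax + cross-entropy, dL/dlogits = p - y.
d_logits = p - y
dW2, db2 = np.outer(d_logits, h), d_logits
d_h = (W2.T @ d_logits) * (1.0 - h**2)    # tanh'(z) = 1 - tanh(z)^2
dW1, db1 = np.outer(d_h, x), d_h

# SGD update: W -= lr * dL/dW (in-place, so the arrays are modified).
for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
    param -= lr * grad
```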
Activation Functions
- ReLU: f(x) = max(0, x)
- Sigmoid: f(x) = 1 / (1 + e^-x)
- Tanh: f(x) = (e^x - e^-x) / (e^x + e^-x) (all three sketched below)
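A small NumPy sketch evaluating all three activations on the same inputs (the sample points are arbitrary):

```python
import numpy as np

def relu(x):    return np.maximum(0.0, x)
def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))
def tanh(x):    return np.tanh(x)        # (e^x - e^-x) / (e^x + e^-x)

xs = np.array([-2.0, 0.0, 2.0])
for f in (relu, sigmoid, tanh):
    print(f.__name__, f(xs))
```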
Cross-Entropy Loss
L = -[y log(p) + (1-y) log(1-p)]
Measures how far predicted probabilities are from true labels.
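A quick numeric check of the formula (the probabilities 0.9 and 0.1 are arbitrary): a confident correct prediction costs little, while a confident wrong one costs a lot.

```python
import numpy as np

def binary_cross_entropy(y, p, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)         # avoid log(0)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

print(binary_cross_entropy(1, 0.9))      # ~0.105: confident and correct
print(binary_cross_entropy(1, 0.1))      # ~2.303: confident and wrong
```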
Weight Update
W = W - lr * dL/dW
Gradient descent moves weights in the direction that reduces loss.
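A minimal illustration of this rule on a one-dimensional quadratic loss, where the minimum is known in advance (the loss function, learning rate, and step count are illustrative):

```python
# Gradient descent on L(w) = (w - 3)^2, whose gradient is dL/dw = 2(w - 3).
lr, w = 0.05, 0.0
for _ in range(100):
    grad = 2.0 * (w - 3.0)
    w -= lr * grad            # W = W - lr * dL/dW
print(w)                      # converges toward 3.0, the loss minimum
```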
Metrics
| Metric | Value |
| --- | --- |
| Epoch | 0 |
| Loss | - |
| Accuracy | 0% |
| Architecture | 2-4-2 |
| Learning Rate | 0.050 |
| Status | Ready |