Logistic Regression
Logistic regression models the probability of a binary outcome using the sigmoid function. Unlike linear regression, the output is bounded between 0 and 1.
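As a minimal sketch of what that means (the function and variable names here are illustrative, not taken from the demo's source):

```python
import math

def predict_proba(w, b, x):
    """P(y=1 | x) for a 2-D point x, given weights w and bias b."""
    z = w[0] * x[0] + w[1] * x[1] + b   # linear combination w·x + b
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid squashes z into (0, 1)

# Any real-valued score maps into (0, 1), unlike a raw linear output:
print(predict_proba([1.0, -2.0], 0.5, [3.0, 1.0]))  # ≈ 0.8176
```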
How to Use
- Click canvas to add points (select class)
- Press Train to animate gradient descent
- Watch the boundary evolve during training
- Probability gradient shows classification confidence
Training
- Compute the linear combination: z = w·x + b
- Apply the sigmoid: σ(z) = 1/(1 + e⁻ᶻ)
- Compute the log-loss: L = −[y·log(p) + (1−y)·log(1−p)]
- Compute the gradients: ∂L/∂w, ∂L/∂b
- Update parameters via gradient descent
- Repeat until convergence (the full loop is sketched after this list)
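A compact version of this loop, as a sketch only (the demo's actual source is not shown; the function name, learning rate, and iteration count below are assumptions):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(points, labels, lr=0.1, iters=1000):
    """Batch gradient descent on 2-D points; returns (w, b)."""
    w, b = [0.0, 0.0], 0.0
    n = len(points)
    for _ in range(iters):
        dw, db = [0.0, 0.0], 0.0
        for (x1, x2), y in zip(points, labels):
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)  # steps 1-2: score, then sigmoid
            err = p - y                             # (p − y) appears in every gradient
            dw[0] += err * x1                       # accumulate ∂L/∂w
            dw[1] += err * x2
            db += err                               # accumulate ∂L/∂b
        w[0] -= lr * dw[0] / n                      # step 5: gradient-descent update
        w[1] -= lr * dw[1] / n
        b -= lr * db / n
    return w, b

# Two separable clusters: class 0 near the origin, class 1 up and to the right.
pts = [(0.2, 0.1), (0.4, 0.3), (2.0, 1.8), (2.3, 2.1)]
ys = [0, 0, 1, 1]
w, b = train(pts, ys)
print(w, b)  # the boundary w·x + b = 0 separates the clusters
```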
Sigmoid Function
σ(z) = 1 / (1 + e⁻ᶻ)
Maps any real value to (0, 1) — interpretable as probability.
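One common way to implement it without overflow for large negative z (a sketch; the demo may compute it differently):

```python
import math

def sigmoid(z):
    """Numerically stable sigmoid: never calls exp on a large positive argument."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)          # z < 0, so exp(z) cannot overflow
    return ez / (1.0 + ez)

print(sigmoid(0.0))    # 0.5
print(sigmoid(800.0))  # 1.0, no OverflowError
```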
Binary Cross-Entropy Loss
L = −(1/n) Σ [yᵢ log(pᵢ) + (1−yᵢ) log(1−pᵢ)]
Penalizes confident wrong predictions heavily.
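A direct transcription of this formula, with epsilon-clipping added (an assumption, to keep log(0) from blowing up when a probability hits exactly 0 or 1):

```python
import math

def log_loss(ps, ys, eps=1e-12):
    """Mean binary cross-entropy over predicted probabilities ps and labels ys."""
    total = 0.0
    for p, y in zip(ps, ys):
        p = min(max(p, eps), 1.0 - eps)  # clip away from 0 and 1
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(ps)

# A single confident wrong prediction dominates the average:
print(log_loss([0.9, 0.01], [1, 1]))  # ≈ (0.105 + 4.605) / 2 ≈ 2.36
```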
Gradient
∂L/∂w = (1/n) Σ (pᵢ − yᵢ)xᵢ
Same form as the linear regression gradient, except that pᵢ is the sigmoid output rather than a raw linear prediction.
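Why the gradient takes this simple form: applying the chain rule through pᵢ = σ(zᵢ), with σ′(z) = σ(z)(1 − σ(z)), cancels the loss's denominators:

```latex
\[
\frac{\partial L_i}{\partial z_i}
  = -\left( \frac{y_i}{p_i} - \frac{1 - y_i}{1 - p_i} \right) p_i (1 - p_i)
  = p_i - y_i,
\qquad
\frac{\partial L_i}{\partial w}
  = \frac{\partial L_i}{\partial z_i}\, x_i
  = (p_i - y_i)\, x_i.
\]
```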
Decision Boundary
w·x + b = 0
The line where P(y=1) = 0.5.
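To draw this line on a 2-D canvas, solve w·x + b = 0 for the second coordinate (a sketch; it assumes the second weight is nonzero):

```python
def boundary_y(w, b, x1):
    """x2 on the decision line for a given x1 (requires w[1] != 0)."""
    # w·x + b = 0  =>  w1*x1 + w2*x2 + b = 0  =>  x2 = -(w1*x1 + b) / w2
    return -(w[0] * x1 + b) / w[1]

# With w = [1, 1] and b = -2, the boundary is the line x1 + x2 = 2:
print(boundary_y([1.0, 1.0], -2.0, 0.0))  # 2.0
print(boundary_y([1.0, 1.0], -2.0, 2.0))  # 0.0
```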
Training Metrics
| Metric | Value |
| --- | --- |
| Points | 0 |
| Iteration | - |
| Log-Loss | - |
| Accuracy | - |
| Weights | - |
| Bias | - |
| Status | Add points to begin |