[Interactive demo: Classification Space canvas (click to add data points), Sigmoid Function plot, and Controls panel]

Logistic Regression

Logistic regression models the probability of a binary outcome using the sigmoid function. Unlike linear regression's unbounded predictions, its output always lies between 0 and 1.
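
Concretely, the model estimates

P(y=1 | x) = σ(w·x + b)

where the weights w and bias b are fit to the data.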

How to Use

  • Click the canvas to add points (select a class first)
  • Press Train to animate gradient descent
  • Watch the decision boundary evolve during training
  • The probability gradient shading shows classification confidence

Training

  1. Compute linear combination z = w·x + b
  2. Apply sigmoid σ(z) = 1/(1+e⁻ᶻ)
  3. Compute log-loss L = −[y·log(p) + (1−y)·log(1−p)]
  4. Compute gradients ∂L/∂w, ∂L/∂b
  5. Update parameters via gradient descent
  6. Repeat until convergence (see the code sketch below)
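
A minimal sketch of one training pass in TypeScript, assuming 2-D points; the names (Point, trainStep, lr) are illustrative, not the demo's actual code:

  interface Point { x1: number; x2: number; y: 0 | 1; }

  const sigmoid = (z: number): number => 1 / (1 + Math.exp(-z));

  // One full pass of batch gradient descent (steps 1-5).
  function trainStep(
    points: Point[],
    w: [number, number],
    b: number,
    lr: number
  ): { w: [number, number]; b: number; loss: number } {
    const n = points.length;
    const eps = 1e-12; // keeps log() away from 0
    let gw1 = 0, gw2 = 0, gb = 0, loss = 0;

    for (const pt of points) {
      const z = w[0] * pt.x1 + w[1] * pt.x2 + b; // step 1: z = w·x + b
      const p = sigmoid(z);                      // step 2: p = σ(z)
      loss -= pt.y * Math.log(p + eps)
            + (1 - pt.y) * Math.log(1 - p + eps); // step 3: log-loss
      const err = p - pt.y;                      // step 4: ∂L/∂z = p − y
      gw1 += err * pt.x1;
      gw2 += err * pt.x2;
      gb  += err;
    }

    // step 5: gradient-descent update, averaged over the n points
    return {
      w: [w[0] - (lr / n) * gw1, w[1] - (lr / n) * gw2],
      b: b - (lr / n) * gb,
      loss: loss / n,
    };
  }

Step 6 then amounts to calling trainStep in a loop, with a small learning rate such as 0.1, until the loss stops improving or an iteration cap is reached.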

Sigmoid Function

σ(z) = 1 / (1 + e⁻ᶻ)

Maps any real value to (0, 1) — interpretable as probability.
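
For example, σ(0) = 0.5, σ(2) ≈ 0.88, and σ(−2) ≈ 0.12; the symmetry σ(−z) = 1 − σ(z) means the two classes are treated identically.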

Binary Cross-Entropy Loss

L = −(1/n) Σ [yᵢ log(pᵢ) + (1−yᵢ) log(1−pᵢ)]

Penalizes confident wrong predictions heavily.
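
For a single point with y = 1, predicting p = 0.9 costs −log(0.9) ≈ 0.11, while a confidently wrong p = 0.01 costs −log(0.01) ≈ 4.6, about forty times more. In code, p is kept strictly inside (0, 1) (the eps guard in the sketch above) so the logarithm never diverges.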

Gradient

∂L/∂w = (1/n) Σ (pᵢ − yᵢ)xᵢ

This has the same form as the linear regression gradient; the only difference is that pᵢ is the sigmoid output rather than a raw linear prediction.
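
The step-by-step chain rule, per point: ∂L/∂p = −y/p + (1−y)/(1−p) and ∂p/∂z = p(1−p), whose product collapses to ∂L/∂z = p − y; multiplying by ∂z/∂w = x yields the formula above. The bias gradient is the same with x replaced by 1: ∂L/∂b = (1/n) Σ (pᵢ − yᵢ).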

Decision Boundary

w·x + b = 0

The line where P(y=1) = 0.5.
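
To draw this line on the canvas, pick two x₁ values at the edges of the plot and solve w₁x₁ + w₂x₂ + b = 0 for x₂. A hypothetical helper (not the demo's actual rendering code), assuming a non-vertical boundary:

  // x2 on the boundary w1*x1 + w2*x2 + b = 0, for a chosen x1.
  // Assumes w2 ≠ 0; otherwise the boundary is the vertical line x1 = -b/w1.
  function boundaryX2(w: [number, number], b: number, x1: number): number {
    return -(w[0] * x1 + b) / w[1];
  }

  // Drawing: connect (xMin, boundaryX2(w, b, xMin))
  // to (xMax, boundaryX2(w, b, xMax)).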

Training Metrics

The panel reports live training state: Points, Iteration, Log-Loss, Accuracy, Weights, Bias, and a Status message (initially "Add points to begin").