
What is Naive Bayes?

Naive Bayes is a probabilistic classifier based on Bayes' theorem. It predicts the class of an input by computing the posterior probability of each class given the observed features.
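Concretely, for a message with words w_1, …, w_n and a class C (here Spam or Ham), Bayes' theorem expresses the posterior as:

```latex
P(C \mid w_1,\dots,w_n) \;=\; \frac{P(C)\, P(w_1,\dots,w_n \mid C)}{P(w_1,\dots,w_n)}
```

The denominator is the same for every class, so it can be ignored when only the winning class is needed.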

Why "Naive"?

The algorithm is called "naive" because it assumes that all features (words) are conditionally independent given the class. While this assumption is rarely true in practice, the classifier often performs surprisingly well.
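The independence assumption lets the joint likelihood factor into a product of per-word terms, which is what makes the model cheap to train and evaluate:

```latex
P(w_1,\dots,w_n \mid C) \;\approx\; \prod_{i=1}^{n} P(w_i \mid C)
```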

How to Use

  • Type a message and click Classify to see the prediction
  • Examine the word probability table to see per-word contributions
  • Adjust α to see the effect of Laplace smoothing
  • Add training data to improve the classifier
  • Toggle log probabilities to see the math behind the scenes

Training Phase

  1. Count the number of spam and ham messages
  2. Compute prior probabilities: P(Spam), P(Ham)
  3. Tokenize each message into words
  4. Count word frequencies per class
  5. Compute P(word|class) with Laplace smoothing
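The training steps above can be sketched as follows. This is a minimal illustration, not the demo's actual implementation; the function and variable names are made up for the example.

```python
from collections import Counter

def tokenize(text):
    # Simplest possible tokenizer: lowercase and split on whitespace
    return text.lower().split()

def train(messages, labels, alpha=1.0):
    """Fit priors and Laplace-smoothed word likelihoods.

    messages: list of strings; labels: parallel list of class names
    (e.g. "spam" / "ham"); alpha: the smoothing strength.
    """
    classes = sorted(set(labels))
    # Step 1-2: class counts -> prior probabilities
    priors = {c: labels.count(c) / len(labels) for c in classes}
    # Step 3-4: tokenize and count word frequencies per class
    counts = {c: Counter() for c in classes}
    for msg, label in zip(messages, labels):
        counts[label].update(tokenize(msg))
    vocab = {w for ctr in counts.values() for w in ctr}
    # Step 5: smoothed likelihood (count + alpha) / (total + alpha * |V|)
    def likelihood(word, c):
        total = sum(counts[c].values())
        return (counts[c][word] + alpha) / (total + alpha * len(vocab))
    return priors, likelihood, vocab
```

With alpha > 0, a word never seen in a class still gets a small nonzero probability, so one unseen word cannot zero out the whole product.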

Classification Phase

  1. Tokenize the input message
  2. For each class, compute log P(class) + Σ log P(word|class)
  3. The class with the highest score wins
  4. Convert log-scores to probabilities via softmax
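The classification steps can be sketched like this, assuming a `priors` dict and a `likelihood(word, class)` function of the kind fitted during training (names are illustrative):

```python
import math

def classify(tokens, priors, likelihood, classes):
    """Score each class in log space, then convert to probabilities."""
    # Step 2: log P(class) + sum of log P(word|class)
    scores = {
        c: math.log(priors[c]) + sum(math.log(likelihood(w, c)) for w in tokens)
        for c in classes
    }
    # Step 4: numerically stable softmax over the log-scores
    m = max(scores.values())
    exps = {c: math.exp(s - m) for c, s in scores.items()}
    z = sum(exps.values())
    probs = {c: e / z for c, e in exps.items()}
    # Step 3: the class with the highest score (equivalently, probability) wins
    return max(probs, key=probs.get), probs
```

Working in log space avoids underflow: a product of many small probabilities becomes a sum of moderate negative numbers.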
