Bernoulli Naive Bayes, Explained: A Visual Guide with Code Examples for Beginners | by Samy Baladram | Aug, 2024

Unlocking Predictive Power Through Binary Simplicity

All illustrations in this article were created by the author, incorporating licensed design elements from Canva Pro.

Unlike the baseline approach of dummy classifiers or the similarity-based reasoning of KNN, Naive Bayes leverages probability theory. It combines the individual probabilities of each "clue" (or feature) to make a final prediction. This straightforward yet powerful method has proven invaluable in various machine learning applications.

Naive Bayes is a machine learning algorithm that uses probability to classify data. It is based on Bayes' Theorem, a formula for calculating conditional probabilities. The "naive" part refers to its key assumption: it treats all features as independent of one another, even when they may not be in reality. This simplification, while often unrealistic, drastically reduces computational complexity and works well in many practical scenarios.
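To make the idea concrete, here is a tiny sketch of the scoring rule with made-up numbers (not taken from any dataset): a class's score is its prior probability multiplied by the conditional probability of each observed feature value.
# Minimal sketch of the naive Bayes scoring rule (illustrative numbers only)
# P(class | features) is proportional to P(class) * product of P(feature_i | class)
prior = 0.6                     # P(class), e.g. P(Play = Yes)
likelihoods = [0.5, 0.8, 0.25]  # assumed P(feature_i | class) for each observed feature value
score = prior
for p in likelihoods:
    score *= p
print(score)  # unnormalized score; predict the class whose score is highest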

Naive Bayes is a simple machine learning algorithm that uses probability as its basis.

There are three main types of Naive Bayes classifiers. The key difference between these types lies in the assumption they make about the distribution of features:

  1. Bernoulli Naive Bayes: Suited to binary/boolean features. It assumes each feature is a binary-valued (0/1) variable.
  2. Multinomial Naive Bayes: Typically used for discrete counts. It is often applied in text classification, where features might be word counts.
  3. Gaussian Naive Bayes: Assumes that continuous features follow a normal distribution.
Bernoulli NB assumes binary data, Multinomial NB works with discrete counts, and Gaussian NB handles continuous data assuming a normal distribution.
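All three variants ship with scikit-learn under sklearn.naive_bayes; the snippet below only shows where they live.
from sklearn.naive_bayes import BernoulliNB, MultinomialNB, GaussianNB
# BernoulliNB()   -> binary/boolean features
# MultinomialNB() -> discrete counts (e.g., word counts)
# GaussianNB()    -> continuous features assumed to follow a normal distribution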

It's a good idea to start with the simplest one, Bernoulli NB. The "Bernoulli" in its name comes from the assumption that each feature is binary-valued.

Throughout this article, we'll use this artificial golf dataset (inspired by [1]) as an example. This dataset predicts whether a person will play golf based on weather conditions.

Columns: 'Outlook', 'Temperature' (in Fahrenheit), 'Humidity' (in %), 'Wind' and 'Play' (target feature)
# IMPORTING DATASET #
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import pandas as pd
import numpy as np

dataset_dict = {
'Outlook': ['sunny', 'sunny', 'overcast', 'rain', 'rain', 'rain', 'overcast', 'sunny', 'sunny', 'rain', 'sunny', 'overcast', 'overcast', 'rain', 'sunny', 'overcast', 'rain', 'sunny', 'sunny', 'rain', 'overcast', 'rain', 'sunny', 'overcast', 'sunny', 'overcast', 'rain', 'overcast'],
'Temperature': [85.0, 80.0, 83.0, 70.0, 68.0, 65.0, 64.0, 72.0, 69.0, 75.0, 75.0, 72.0, 81.0, 71.0, 81.0, 74.0, 76.0, 78.0, 82.0, 67.0, 85.0, 73.0, 88.0, 77.0, 79.0, 80.0, 66.0, 84.0],
'Humidity': [85.0, 90.0, 78.0, 96.0, 80.0, 70.0, 65.0, 95.0, 70.0, 80.0, 70.0, 90.0, 75.0, 80.0, 88.0, 92.0, 85.0, 75.0, 92.0, 90.0, 85.0, 88.0, 65.0, 70.0, 60.0, 95.0, 70.0, 78.0],
'Wind': [False, True, False, False, False, True, True, False, False, False, True, True, False, True, True, False, False, True, False, True, True, False, True, False, False, True, False, False],
'Play': ['No', 'No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'Yes', 'No', 'No', 'Yes', 'Yes', 'No', 'No', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'Yes', 'Yes', 'No', 'Yes']
}
df = pd.DataFrame(dataset_dict)

# ONE-HOT ENCODE 'Outlook' COLUMN
df = pd.get_dummies(df, columns=['Outlook'], prefix='', prefix_sep='', dtype=int)

# CONVERT 'Wind' (bool) AND 'Play' (binary) COLUMNS TO BINARY INDICATORS
df['Wind'] = df['Wind'].astype(int)
df['Play'] = (df['Play'] == 'Yes').astype(int)

# Set feature matrix X and target vector y
X, y = df.drop(columns='Play'), df['Play']

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.5, shuffle=False)

print(pd.concat([X_train, y_train], axis=1), end='\n\n')
print(pd.concat([X_test, y_test], axis=1))

We'll adapt it slightly for Bernoulli Naive Bayes by converting our features to binary.

As all the data must be in 0 & 1 format, 'Outlook' is one-hot encoded, while Temperature is split into ≤ 80 and > 80. Similarly, Humidity is split into ≤ 75 and > 75.
# One-hot encode the categorized columns and drop them afterwards, but do it separately for the training and test sets
# Define categories for 'Temperature' and 'Humidity' for the training set
X_train['Temperature'] = pd.cut(X_train['Temperature'], bins=[0, 80, 100], labels=['Warm', 'Hot'])
X_train['Humidity'] = pd.cut(X_train['Humidity'], bins=[0, 75, 100], labels=['Dry', 'Humid'])

# Similarly, define for the test set
X_test['Temperature'] = pd.cut(X_test['Temperature'], bins=[0, 80, 100], labels=['Warm', 'Hot'])
X_test['Humidity'] = pd.cut(X_test['Humidity'], bins=[0, 75, 100], labels=['Dry', 'Humid'])

# One-hot encode the categorized columns
one_hot_columns_train = pd.get_dummies(X_train[['Temperature', 'Humidity']], drop_first=True, dtype=int)
one_hot_columns_test = pd.get_dummies(X_test[['Temperature', 'Humidity']], drop_first=True, dtype=int)

# Drop the categorized columns from the training and test sets
X_train = X_train.drop(['Temperature', 'Humidity'], axis=1)
X_test = X_test.drop(['Temperature', 'Humidity'], axis=1)

# Concatenate the one-hot encoded columns with the original DataFrames
X_train = pd.concat([one_hot_columns_train, X_train], axis=1)
X_test = pd.concat([one_hot_columns_test, X_test], axis=1)

print(pd.concat([X_train, y_train], axis=1), '\n')
print(pd.concat([X_test, y_test], axis=1))

Bernoulli Naive Bayes operates on data where each feature is either 0 or 1.

  1. Calculate the probability of each class in the training data.
  2. For each feature and class, calculate the probability of the feature being 1 and 0 given the class.
  3. For a new instance: for each class, multiply its class probability by the probability of each feature value (0 or 1) for that class.
  4. Predict the class with the highest resulting probability.
For our golf dataset, a Bernoulli NB classifier looks at the probability of each feature occurring for each class (YES & NO), then makes a decision based on which class has the higher likelihood, as sketched below.
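Here is a simplified from-scratch sketch of those four steps (no smoothing; bernoulli_nb_predict is just an illustrative helper name, not part of the article's implementation):
# Simplified from-scratch sketch of the four steps above (no smoothing)
def bernoulli_nb_predict(X_train, y_train, new_instance):
    scores = {}
    for cls in y_train.unique():
        X_cls = X_train[y_train == cls]
        score = len(X_cls) / len(X_train)            # Step 1: class probability
        for feature, value in new_instance.items():  # Steps 2 & 3: multiply feature probabilities
            score *= (X_cls[feature] == value).mean()
        scores[cls] = score
    return max(scores, key=scores.get)               # Step 4: class with the highest score

# Example: predict for the first test instance
print(bernoulli_nb_predict(X_train, y_train, X_test.iloc[0]))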

The training process for Bernoulli Naive Bayes involves calculating probabilities from the training data:

  1. Class Probability Calculation: For each class, calculate its probability: (Number of instances in this class) / (Total number of instances)
In our golf example, the algorithm would calculate how often golf is played overall.
from fractions import Fraction

def calc_target_prob(attr):
    total_counts = attr.value_counts().sum()
    prob_series = attr.value_counts().apply(lambda x: Fraction(x, total_counts).limit_denominator())
    return prob_series

print(calc_target_prob(y_train))

2. Feature Probability Calculation: For each feature and each class, calculate:

  • (Number of instances where the feature is 0 in this class) / (Number of instances in this class)
  • (Number of instances where the feature is 1 in this class) / (Number of instances in this class)
For each weather condition (e.g., sunny), count how often golf is played when it is sunny and how often it is not played when it is sunny.
from fractions import Fraction

def sort_attr_label(attr, lbl):
    return (pd.concat([attr, lbl], axis=1)
            .sort_values([attr.name, lbl.name])
            .reset_index()
            .rename(columns={'index': 'ID'})
            .set_index('ID'))

def calc_feature_prob(attr, lbl):
    total_classes = lbl.value_counts()
    counts = pd.crosstab(attr, lbl)
    prob_df = counts.apply(lambda x: [Fraction(c, total_classes[x.name]).limit_denominator() for c in x])
    return prob_df

print(sort_attr_label(y_train, X_train['sunny']))
print(calc_feature_prob(X_train['sunny'], y_train))

The same process is applied to all of the other features.
for col in X_train.columns:
    print(calc_feature_prob(X_train[col], y_train), "\n")

  3. Smoothing (Optional): Add a small value (usually 1) to the numerator and denominator of each probability calculation to avoid zero probabilities.

We add 1 to all numerators and 2 to all denominators, so that each pair of complementary probabilities still sums to 1.
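As a hand-worked illustration (the counts below are made up, not taken from the tables above): if a feature value never appears for a class of 7 instances, the raw probability would be 0/7; with smoothing, the pair becomes 1/9 and 8/9, which still sums to 1.
# Laplace smoothing on made-up counts
from fractions import Fraction
count_feature_is_1, count_class = 0, 7
raw = Fraction(count_feature_is_1, count_class)                               # 0/7 -> zero-probability problem
smoothed_1 = Fraction(count_feature_is_1 + 1, count_class + 2)                # (0 + 1) / (7 + 2) = 1/9
smoothed_0 = Fraction(count_class - count_feature_is_1 + 1, count_class + 2)  # (7 + 1) / (7 + 2) = 8/9
print(smoothed_1, smoothed_0, smoothed_1 + smoothed_0)                        # 1/9 8/9 1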
# In sklearn, all the processes above are summarized in this 'fit' method:
from sklearn.naive_bayes import BernoulliNB
nb_clf = BernoulliNB(alpha=1)
nb_clf.fit(X_train, y_train)

4. Store Results: Save all the calculated probabilities for use during classification.

Smoothing is already applied to all feature probabilities. We'll use these tables to make predictions.
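In scikit-learn, these stored values live on the fitted estimator as log probabilities; a quick way to peek at them:
# Inspect what 'fit' stored (log form; np.exp converts back to probabilities)
print(np.exp(nb_clf.class_log_prior_))   # smoothed class probabilities
print(np.exp(nb_clf.feature_log_prob_))  # smoothed P(feature = 1 | class), one row per class
print(nb_clf.feature_count_)             # raw counts of feature = 1 per class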

Given a new instance with features that are either 0 or 1:

  1. Probability Collection: For each possible class:
  • Start with the probability of this class occurring (class probability).
  • For each feature in the new instance, collect the probability of this feature being 0/1 for this class.
For ID 14, we pick the probability of each feature value (either 0 or 1) occurring.

2. Score Calculation & Prediction: For each class:

  • Multiply all the collected probabilities together.
  • The result is the score for this class.
  • The class with the highest score is the prediction.
After multiplying the class probability and all the feature probabilities, we pick the class that has the higher score.
y_pred = nb_clf.predict(X_test)
print(y_pred)
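To see the scores behind these predictions, we can ask the fitted model for its per-class probabilities (internally, scikit-learn multiplies in log space and normalizes the result):
# Normalized per-class probabilities for each test instance
# (columns follow nb_clf.classes_: 0 = No, 1 = Yes)
print(nb_clf.predict_proba(X_test).round(3))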
This simple probabilistic model gives great accuracy on this simple dataset.
# Evaluate the classifier
print(f"Accuracy: {accuracy_score(y_test, y_pred)}")

Bernoulli Naive Bayes has a few important parameters:

  1. Alpha (α): This is the smoothing parameter. It adds a small count to each feature to prevent zero probabilities. The default is usually 1.0 (Laplace smoothing), as shown earlier.
  2. Binarize: If your features aren't already binary, this threshold converts them. Any value above the threshold becomes 1, and any value at or below it becomes 0.
For BernoulliNB in scikit-learn, numerical features are often standardized rather than manually binarized. The model then internally converts these standardized values to binary, usually using 0 (the mean) as the threshold.

3. Fit Prior: Whether to learn class prior probabilities or assume uniform priors (50/50).

For our golf dataset, we might start with the default α = 1.0, no binarization (since we've already made our features binary), and fit_prior=True.
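A quick sketch of how those choices translate into code (nb_clf_tuned is just an illustrative name; alpha=1.0 and fit_prior=True are already the scikit-learn defaults, while binarize=None skips the internal thresholding because our features are already 0/1):
# Fit BernoulliNB with the parameters discussed above made explicit
nb_clf_tuned = BernoulliNB(alpha=1.0, binarize=None, fit_prior=True)
nb_clf_tuned.fit(X_train, y_train)
print(f"Accuracy: {accuracy_score(y_test, nb_clf_tuned.predict(X_test))}")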

Like any machine learning algorithm, Bernoulli Naive Bayes has its strengths and limitations.

Strengths:

  1. Simplicity: Easy to implement and understand.
  2. Efficiency: Fast to train and predict; works well with large feature spaces.
  3. Performance with Small Datasets: Can perform well even with limited training data.
  4. Handles High-Dimensional Data: Works well with many features, especially in text classification.

Limitations:

  1. Independence Assumption: Assumes all features are independent, which is often not true in real-world data.
  2. Limited to Binary Features: In its pure form, it only works with binary data.
  3. Sensitivity to Input Data: Can be sensitive to how the features are binarized.
  4. Zero Frequency Problem: Without smoothing, zero probabilities can strongly affect predictions.

The Bernoulli Naive Bayes classifier is a simple yet powerful machine learning algorithm for binary classification. It excels in text analysis and spam detection, where features are often binary. Known for its speed and efficiency, this probabilistic model performs well with small datasets and in high-dimensional spaces.

Despite its naive assumption of feature independence, it often rivals more complex models in accuracy. Bernoulli Naive Bayes serves as an excellent baseline and real-time classification tool.

# Import needed libraries
import pandas as pd
from sklearn.naive_bayes import BernoulliNB
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load the dataset
dataset_dict = {
'Outlook': ['sunny', 'sunny', 'overcast', 'rainy', 'rainy', 'rainy', 'overcast', 'sunny', 'sunny', 'rainy', 'sunny', 'overcast', 'overcast', 'rainy', 'sunny', 'overcast', 'rainy', 'sunny', 'sunny', 'rainy', 'overcast', 'rainy', 'sunny', 'overcast', 'sunny', 'overcast', 'rainy', 'overcast'],
'Temperature': [85.0, 80.0, 83.0, 70.0, 68.0, 65.0, 64.0, 72.0, 69.0, 75.0, 75.0, 72.0, 81.0, 71.0, 81.0, 74.0, 76.0, 78.0, 82.0, 67.0, 85.0, 73.0, 88.0, 77.0, 79.0, 80.0, 66.0, 84.0],
'Humidity': [85.0, 90.0, 78.0, 96.0, 80.0, 70.0, 65.0, 95.0, 70.0, 80.0, 70.0, 90.0, 75.0, 80.0, 88.0, 92.0, 85.0, 75.0, 92.0, 90.0, 85.0, 88.0, 65.0, 70.0, 60.0, 95.0, 70.0, 78.0],
'Wind': [False, True, False, False, False, True, True, False, False, False, True, True, False, True, True, False, False, True, False, True, True, False, True, False, False, True, False, False],
'Play': ['No', 'No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'Yes', 'No', 'No', 'Yes', 'Yes', 'No', 'No', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'Yes', 'Yes', 'No', 'Yes']
}
df = pd.DataFrame(dataset_dict)

# Prepare data for the model
df = pd.get_dummies(df, columns=['Outlook'], prefix='', prefix_sep='', dtype=int)
df['Wind'] = df['Wind'].astype(int)
df['Play'] = (df['Play'] == 'Yes').astype(int)

# Split data into training and testing sets
X, y = df.drop(columns='Play'), df['Play']
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.5, shuffle=False)

# Scale numerical features (for automatic binarization by BernoulliNB)
scaler = StandardScaler()
float_cols = X_train.select_dtypes(include=['float64']).columns
X_train[float_cols] = scaler.fit_transform(X_train[float_cols])
X_test[float_cols] = scaler.transform(X_test[float_cols])

# Train the model
nb_clf = BernoulliNB()
nb_clf.fit(X_train, y_train)

# Make predictions
y_pred = nb_clf.predict(X_test)

# Check accuracy
print(f"Accuracy: {accuracy_score(y_test, y_pred)}")