Encoding Categorical Data, Explained: A Visual Guide with Code Examples for Beginners | by Samy Baladram | Sep 2024

DATA PREPROCESSING

Six methods of matchmaking categories and numbers

Ah, categorical data: the colorful characters in our datasets that machines just can’t seem to grasp. This is where “red” becomes 1, “blue” becomes 2, and data scientists turn into language translators (or perhaps matchmakers?).

Now, I know what you’re thinking: “Encoding? Isn’t that just assigning numbers to categories?” Oh, if only it were that simple! We’re about to explore six different encoding methods, all on (once again) a single, tiny dataset (with visuals, of course!). From simple labels to mind-bending cyclic transformations, you’ll see why choosing the right encoding can be as important as picking the right algorithm.

Cartoon illustration of two figures embracing, with letters ‘A’, ‘B’, ‘C’ and numbers ‘1’, ‘2’, ‘3’ floating around them. A pink heart hovers above, symbolizing affection. The background is a pixelated pattern of blue and green squares, representing data or encoding. This image metaphorically depicts the concept of encoding categorical data, where categories (ABC) are transformed into numerical representations (123).
All visuals: Author-created using Canva Pro. Optimized for mobile; may appear oversized on desktop.

Before we jump into our dataset and encoding methods, let’s take a moment to understand what categorical data is and why it needs special treatment in the world of machine learning.

What Is Categorical Data?

Categorical data is like the descriptive labels we use in everyday life. It represents characteristics or qualities that can be grouped into categories.

Why Does Categorical Data Need Encoding?

Here’s the catch: most machine learning algorithms are like picky eaters; they only digest numbers. They can’t directly understand that “sunny” is different from “rainy”. That’s where encoding comes in. It’s like translating these categories into a language that machines can understand and work with.

Types of Categorical Data

Not all categories are created equal. We typically have two types:

  1. Nominal: These are categories with no inherent order.
    Ex: “Outlook” (sunny, overcast, rainy) is nominal. There’s no natural ranking between these weather conditions.
  2. Ordinal: These categories have a meaningful order.
    Ex: “Temperature” (Very Low, Low, High, Very High) is ordinal. There’s a clear progression from coldest to hottest.
Two panels comparing nominal and ordinal data types. The nominal panel shows a cartoon figure with an umbrella in the rain, illustrating weather as a nominal variable with examples like sunny, rainy, or cloudy. The ordinal panel depicts a sweating figure eating ice cream, demonstrating temperature as an ordinal variable with examples ranging from warm to very hot. Each panel includes a table with example categories.

Why Care About Correct Encoding?

  1. It preserves important information in your data.
  2. It can significantly impact your model’s performance.
  3. Incorrect encoding can introduce unintended biases or relationships.

Imagine if we encoded “sunny” as 1 and “rainy” as 2. The model might think rainy days are “greater than” sunny days, which isn’t what we want!

Now that we understand what categorical data is and why it needs encoding, let’s take a look at our dataset and see how we can tackle its categorical variables using six different encoding methods.

Let’s use a simple golf dataset to illustrate our encoding methods (it has mostly categorical columns). This dataset records various weather conditions and the resulting crowdedness at a golf course.

Weather dataset table spanning March 25 to April 5. Columns include date, day, month, temperature (Low/High/Extreme), humidity (Dry/Humid), wind (Yes/No), outlook (sunny/rainy/overcast), and a count. Icons above represent data types. The table shows varied weather conditions and corresponding visitor numbers across 12 days.
import pandas as pd
import numpy as np

data = {
'Date': ['03-25', '03-26', '03-27', '03-28', '03-29', '03-30', '03-31', '04-01', '04-02', '04-03', '04-04', '04-05'],
'Weekday': ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri'],
'Month': ['Mar', 'Mar', 'Mar', 'Mar', 'Mar', 'Mar', 'Mar', 'Apr', 'Apr', 'Apr', 'Apr', 'Apr'],
'Temperature': ['High', 'Low', 'High', 'Extreme', 'Low', 'High', 'High', 'Low', 'High', 'Extreme', 'High', 'Low'],
'Humidity': ['Dry', 'Humid', 'Dry', 'Dry', 'Humid', 'Humid', 'Dry', 'Humid', 'Dry', 'Dry', 'Humid', 'Dry'],
'Wind': ['No', 'Yes', 'Yes', 'Yes', 'No', 'No', 'Yes', 'No', 'Yes', 'Yes', 'No', 'Yes'],
'Outlook': ['sunny', 'rainy', 'overcast', 'sunny', 'rainy', 'overcast', 'sunny', 'rainy', 'sunny', 'overcast', 'sunny', 'rainy'],
'Crowdedness': [85, 30, 65, 45, 25, 90, 95, 35, 70, 50, 80, 45]
}
# Create a DataFrame from the dictionary
df = pd.DataFrame(data)

As we can see, we have quite a few categorical variables. Our task is to encode these variables so that a machine learning model can use them to predict, say, the Crowdedness of the golf course.

Let’s get into it.

Label Encoding assigns a unique integer to each category in a categorical variable.

Common Use 👍: It’s often used for ordinal variables where there’s a clear order to the categories, such as education levels (e.g., primary, secondary, tertiary) or product ratings (e.g., 1 star, 2 stars, 3 stars).

In Our Case: We could use Label Encoding for the ‘Weekday’ column in our golf dataset. Each day of the week would be assigned a unique number (e.g., Monday = 0, Tuesday = 1, and so on). However, we need to be careful, as this might imply that Sunday (6) is “greater than” Saturday (5), which may not be meaningful for our analysis.

Two columns showing weekday encoding. Left column lists days from Monday to Friday with corresponding numbers 0–11. Right column shows encoded values 0–6 repeating, where 0 represents Monday and 6 Sunday. A calendar icon above indicates these relate to days of the week.
# 1. Label Encoding for Weekday
df['Weekday_label'] = pd.factorize(df['Weekday'])[0]
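A quick side note, purely as a sketch of my own (not part of the original walkthrough): scikit-learn’s LabelEncoder produces the same kind of integer codes, but it assigns them alphabetically rather than by order of appearance, so the mapping differs from the pd.factorize result above.

# Alternative sketch: scikit-learn's LabelEncoder assigns codes alphabetically
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
df['Weekday_label_sk'] = le.fit_transform(df['Weekday'])
# le.classes_ is sorted: ['Fri', 'Mon', 'Sat', 'Sun', 'Thu', 'Tue', 'Wed'],
# so Fri -> 0, Mon -> 1, ..., Wed -> 6 (not the appearance-order codes above)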

One-Hot Encoding creates a new binary column for each category in a categorical variable.

Common Use 👍: It’s typically used for nominal variables where there’s no inherent order to the categories. It’s particularly useful when dealing with variables that have a relatively small number of categories.

In Our Case: One-Hot Encoding would be perfect for our ‘Outlook’ column. We’d create three new columns: ‘Outlook_sunny’, ‘Outlook_overcast’, and ‘Outlook_rainy’. Each row would have a 1 in one of these columns and 0 in the others, representing the weather condition for that day.

Two columns showing weather encoding. Left column lists weather conditions (sunny, rainy, overcast) for 12 days. Right column shows one-hot encoded values: 3 sub-columns for sunny, overcast, and rainy, with 1 indicating the condition and 0 otherwise. Weather icons above represent the three conditions.
# 2. One-Hot Encoding for Outlook
df = pd.get_dummies(df, columns=['Outlook'], prefix='Outlook', dtype=int)
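For completeness, here is a small sketch of my own (not from the article) showing the same idea with scikit-learn’s OneHotEncoder, which is convenient when the encoding needs to live inside a preprocessing pipeline. It is fit on a fresh copy of the raw column, since the get_dummies call above has already replaced ‘Outlook’ in df.

# Alternative sketch: scikit-learn's OneHotEncoder on the raw 'Outlook' column
from sklearn.preprocessing import OneHotEncoder

ohe = OneHotEncoder(dtype=int)
outlook_raw = pd.DataFrame(data)[['Outlook']]  # rebuild from the original dictionary
outlook_onehot = ohe.fit_transform(outlook_raw).toarray()
print(ohe.get_feature_names_out())  # ['Outlook_overcast' 'Outlook_rainy' 'Outlook_sunny']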

Binary Encoding represents each category as a binary number (0 or 1).

Common Use 👍: It’s typically used when there are only two categories, mostly in a yes/no situation.

In Our Case: While our ‘Wind’ column only has two categories (Yes and No), we can use Binary Encoding to demonstrate the method. It results in a single binary column, where one category (e.g., No) is represented as 0 and the other (Yes) as 1.

Two columns showing binary encoding. Left column lists “Yes” or “No” values for 12 entries. Right column shows the encoded values: 1 for “Yes” and 0 for “No”. A wind icon above indicates this likely represents wind presence.
# 3. Binary Encoding for Wind
df['Wind_binary'] = (df['Wind'] == 'Yes').astype(int)
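An equivalent way to write this (just a sketch, producing the same column) is an explicit mapping, which makes the Yes = 1, No = 0 choice visible at a glance:

# Equivalent sketch: an explicit mapping makes the 0/1 assignment obvious
df['Wind_binary'] = df['Wind'].map({'Yes': 1, 'No': 0})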

Target Encoding replaces each category with the mean of the target variable for that category.

Common Use 👍: It’s used when there’s likely a relationship between the categorical variable and the target variable. It’s particularly useful for high-cardinality features in datasets with a reasonable number of rows.

In Our Case: We could apply Target Encoding to our ‘Humidity’ column, using ‘Crowdedness’ as the target. Each ‘Dry’ or ‘Humid’ value in the ‘Humidity’ column would be replaced with the average crowdedness observed on dry and humid days, respectively.

Image shows target encoding for humidity. Left column lists “Dry” or “Humid” with corresponding visitor numbers. Right column replaces “Dry” with 65 (average visitors on dry days) and “Humid” with 52 (average on humid days). Icons above indicate humidity and visitor count.
# 4. Target Encoding for Humidity
df['Humidity_target'] = df.groupby('Humidity')['Crowdedness'].transform('mean')
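One caveat worth sketching (my own addition, not covered in the article): computing the means over the whole dataset leaks the target into the features. In practice you would compute the category means on the training rows only and then map them onto the held-out rows, for example:

# Leakage-aware sketch: fit the category means on a (hypothetical) training split only
train = df.iloc[:8]   # illustrative split; use a proper train/test split in a real project
test = df.iloc[8:]
humidity_means = train.groupby('Humidity')['Crowdedness'].mean()
test_humidity_encoded = test['Humidity'].map(humidity_means)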

Ordinal Encoding assigns ordered integers to ordinal categories based on their inherent order.

Common Use 👍: It’s used for ordinal variables where the order of categories is meaningful and you want to preserve this order information.

In Our Case: Ordinal Encoding is ideal for our ‘Temperature’ column. We assign integers that reflect the order: Low = 1, High = 2, Extreme = 3. This preserves the natural ordering of the temperature categories.

Image shows ordinal encoding for temperature. Left column lists temperatures as “Low”, “High”, or “Extreme” for 12 entries. Right column shows encoded values: 1 for “Low”, 2 for “High”, and 3 for “Extreme”. A sun icon above indicates this represents temperature levels.
# 5. Ordinal Encoding for Temperature
temp_order = {'Low': 1, 'High': 2, 'Extreme': 3}
df['Temperature_ordinal'] = df['Temperature'].map(temp_order)
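As an alternative sketch (my own assumption, not the article’s code), scikit-learn’s OrdinalEncoder accepts the category order explicitly; the only difference is that its codes start at 0 instead of 1.

# Alternative sketch: scikit-learn's OrdinalEncoder with an explicit category order
from sklearn.preprocessing import OrdinalEncoder

ord_enc = OrdinalEncoder(categories=[['Low', 'High', 'Extreme']])
df['Temperature_ordinal_sk'] = ord_enc.fit_transform(df[['Temperature']]).ravel().astype(int)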

Cyclic Encoding transforms a cyclical categorical variable into two numerical features that preserve the variable’s cyclical nature. It typically uses sine and cosine transformations to represent the cyclical pattern. For example, for the “Month” column we would first make it numerical (1–12) and then create two new features:

  • Month_cos = cos(2π(m − 1) / 12)
  • Month_sin = sin(2π(m − 1) / 12)

where m is a number from 1 to 12 representing January to December.

Circular diagram representing cyclical encoding of time. A circle with 12 points labeled 1 to 12 clockwise, resembling a clock face. Point 3 is highlighted, with its coordinates (0.5, 0.866) calculated using cosine and sine functions. The formula (cos(2π(3–1)/12), sin(2π(3–1)/12)) is shown above, demonstrating how the position is derived from the hour number.
Think of the encoding as the (x, y) coordinate on this strange clock running from 1 to 12. To preserve the cyclical order, we need to represent the months with two columns instead of one.

Common Use 👍: It’s used for categorical variables that have a natural cyclical order, such as days of the week, months of the year, or hours of the day. Cyclic encoding is particularly useful when the “distance” between categories matters and wraps around (e.g., the distance between December and January should be small, just like the distance between any other pair of consecutive months).

In Our Case: In our golf dataset, the best candidate for cyclic encoding is the ‘Month’ column. Months follow a clear cyclical pattern that repeats every year, so this encoding captures seasonal patterns in golfing activity that recur annually. Here’s how we can apply it:

Image shows cyclical encoding for months. Left column lists months (Mar, Apr) for 12 entries. Middle column assigns numbers (3 for Mar, 4 for Apr). Right columns show sin and cos values calculated using formulas sin(2π(m-1)/12) and cos(2π(m-1)/12), where m is the month number. This creates a cyclical representation of months, with March values at (0.866, 0.5) and April at (1, 0).
# 6. Cyclic Encoding for Month
month_order = {'Jan': 1, 'Feb': 2, 'Mar': 3, 'Apr': 4, 'May': 5, 'Jun': 6,
'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12}
df['Month_num'] = df['Month'].map(month_order)
df['Month_sin'] = np.sin(2 * np.pi * (df['Month_num']-1) / 12)
df['Month_cos'] = np.cos(2 * np.pi * (df['Month_num']-1) / 12)
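To see why the two-column trick matters, here is a quick numerical check (my own sketch): with the sine/cosine pair, December and January sit exactly as close together as any other pair of neighbouring months, instead of being 11 units apart on a plain 1–12 scale.

# Quick check: neighbouring months are equally close on the sin/cos circle
def month_point(m):
    angle = 2 * np.pi * (m - 1) / 12
    return np.array([np.sin(angle), np.cos(angle)])

dec_to_jan = np.linalg.norm(month_point(12) - month_point(1))
jun_to_jul = np.linalg.norm(month_point(6) - month_point(7))
print(round(dec_to_jan, 3), round(jun_to_jul, 3))  # both are about 0.518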

So, there you have it! Six different ways to encode categorical data, all applied to our golf course dataset. Now, every category has been transformed into numbers!

Let’s recap how each method tackled our data:

  1. Label Encoding: Turned ‘Weekday’ into numbers, making Monday 0 and Sunday 6: simple, but potentially misleading.
  2. One-Hot Encoding: Gave ‘Outlook’ its own columns, letting ‘sunny’, ‘overcast’, and ‘rainy’ stand independently.
  3. Binary Encoding: Compressed ‘Wind’ into a single 0/1 column, saving space without losing information.
  4. Target Encoding: Replaced ‘Humidity’ categories with the average ‘Crowdedness’, capturing hidden relationships.
  5. Ordinal Encoding: Respected the natural order of ‘Temperature’, from ‘Low’ to ‘Extreme’.
  6. Cyclic Encoding: Transformed ‘Month’ into sine and cosine components, preserving its circular nature.
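Putting it all together, here is a minimal sketch (my own, assuming we simply hand the encoded columns to a model that predicts Crowdedness) of what the final feature matrix looks like after all six steps:

# Sketch: collect the encoded columns into a feature matrix and a target vector
feature_cols = ['Weekday_label', 'Outlook_sunny', 'Outlook_overcast', 'Outlook_rainy',
                'Wind_binary', 'Humidity_target', 'Temperature_ordinal',
                'Month_sin', 'Month_cos']
X = df[feature_cols]
y = df['Crowdedness']
print(X.shape)  # (12, 9): every original category is now a number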

There’s no one-size-fits-all solution in categorical encoding. The best method depends on your specific data, the nature of your categories, and the requirements of your machine learning model.

Encoding categorical data might seem like a small step in the grand scheme of a machine learning project, but it’s often these seemingly minor details that make or break a model’s performance.