Decision tree algorithms have always fascinated me. They are easy to implement and achieve good results on various classification and regression tasks. Combined with boosting, decision trees are still state of the art in many applications.
Frameworks such as sklearn, LightGBM, XGBoost, and CatBoost have done a very good job so far. However, in the past few months I have been missing support for Arrow datasets. While LightGBM has recently added support for it, Arrow support is still missing in most other frameworks. The Arrow data format could be a perfect match for decision trees since it has a columnar structure optimized for efficient data processing. Pandas already added support for it, and Polars uses its advantages as well.
Polars has shown significant performance advantages over most other data frameworks. It uses the data efficiently and avoids copying it unnecessarily. It also provides a streaming engine that allows processing datasets larger than memory. This is why I decided to use Polars as a backend for building a decision tree from scratch.
The goal is to explore the advantages of using Polars for decision trees in terms of memory and runtime. And, of course, to learn more about Polars, about defining expressions efficiently, and about the streaming engine.
The code for the implementation can be found in this repository.
Code overview
To get a first overview of the code, I will walk through the structure of the DecisionTreeClassifier; the full class skeleton is shown below.
The first important thing can be seen in the imports. It was important to me to keep the import section clean and with as few dependencies as possible. This was successful, with the only dependencies being polars, pickle, and typing.
The init method allows defining whether the Polars streaming engine should be used. The max_depth of the tree can also be set here. Another feature is the definition of categorical columns. These are handled differently than numerical features, using a target encoding.
It is possible to save and load the decision tree model. It is represented as a nested dict and can be saved to disk as a pickled file.
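A minimal sketch of how this could look, assuming the fitted tree and the categorical mappings are the only state that needs to be persisted (the actual methods in the repository may differ):
def save_model(self, path: str) -> None:
    # Persist the nested tree dict and the categorical mappings with pickle.
    with open(path, "wb") as f:
        pickle.dump({"tree": self.tree, "categorical_mappings": self.categorical_mappings}, f)

def load_model(self, path: str) -> None:
    # Restore the state that was written by save_model().
    with open(path, "rb") as f:
        state = pickle.load(f)
    self.tree = state["tree"]
    self.categorical_mappings = state["categorical_mappings"]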
The Polars magic happens in the fit() and _build_tree() methods. These accept both LazyFrames and DataFrames to support in-memory processing and streaming.
There are two prediction methods available, predict() and predict_many(). The predict() method can be used on a small sample size, and the data has to be provided as dicts. If we have a big test set, it is more efficient to use the predict_many() method. Here, the data can be provided as a Polars DataFrame or LazyFrame.
import pickle
from typing import Iterable, List, Union

import polars as pl


class DecisionTreeClassifier:
    def __init__(self, streaming=False, max_depth=None, categorical_columns=None):
        ...

    def save_model(self, path: str) -> None:
        ...

    def load_model(self, path: str) -> None:
        ...

    def apply_categorical_mappings(self, data: Union[pl.DataFrame, pl.LazyFrame]) -> Union[pl.DataFrame, pl.LazyFrame]:
        ...

    def fit(self, data: Union[pl.DataFrame, pl.LazyFrame], target_name: str) -> None:
        ...

    def predict_many(self, data: Union[pl.DataFrame, pl.LazyFrame]) -> List[Union[int, float]]:
        ...

    def predict(self, data: Iterable[dict]):
        ...

    def get_majority_class(self, df: Union[pl.DataFrame, pl.LazyFrame], target_name: str) -> str:
        ...

    def _build_tree(
        self,
        data: Union[pl.DataFrame, pl.LazyFrame],
        feature_names: list[str],
        target_name: str,
        unique_targets: list[int],
        depth: int,
    ) -> dict:
        ...
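As a small usage sketch of the constructor (the arguments come from the signature above; the categorical column names are the ones of the heart disease dataset used later in this post):
classifier = DecisionTreeClassifier(
    streaming=False,
    max_depth=4,
    categorical_columns=["gender", "cholesterol", "gluc", "smoke", "alco", "active"],
)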
Fitting the tree
To train the decision tree classifier, the fit() method has to be used.
def fit(self, data: Union[pl.DataFrame, pl.LazyFrame], target_name: str) -> None:
    """
    Fit method to train the decision tree.

    :param data: Polars DataFrame or LazyFrame containing the training data.
    :param target_name: Name of the target column
    """
    columns = data.collect_schema().names()
    feature_names = [col for col in columns if col != target_name]

    # Shrink dtypes
    data = data.select(pl.all().shrink_dtype()).with_columns(
        pl.col(target_name).cast(pl.UInt64).shrink_dtype().alias(target_name)
    )

    # Prepare categorical columns with target encoding
    if self.categorical_columns:
        categorical_mappings = {}
        for categorical_column in self.categorical_columns:
            categorical_mappings[categorical_column] = {
                value: index
                for index, value in enumerate(
                    data.lazy()
                    .group_by(categorical_column)
                    .agg(pl.col(target_name).mean().alias("avg"))
                    .sort("avg")
                    .collect(streaming=self.streaming)[categorical_column]
                )
            }

        self.categorical_mappings = categorical_mappings
        data = self.apply_categorical_mappings(data)

    unique_targets = data.select(target_name).unique()
    if isinstance(unique_targets, pl.LazyFrame):
        unique_targets = unique_targets.collect(streaming=self.streaming)
    unique_targets = unique_targets[target_name].to_list()

    self.tree = self._build_tree(data, feature_names, target_name, unique_targets, depth=0)
It receives a Polars LazyFrame or DataFrame that contains all features and the target column. To identify the target column, the target_name has to be provided.
Polars provides a convenient way to optimize the memory usage of the data.
data.select(pl.all().shrink_dtype())
With that, all columns are selected and evaluated. It converts each dtype to the smallest type that can still hold the values.
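A small standalone example of what shrink_dtype() does (a sketch with made-up values):
import polars as pl

df = pl.DataFrame({"age_years": [29, 41, 64], "cardio": [0, 1, 1]})
print(df.dtypes)                                  # [Int64, Int64]
print(df.select(pl.all().shrink_dtype()).dtypes)  # [Int8, Int8] - the values fit into 8 bits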
The categorical encoding
To encode categorical values, a target encoding is used. For that, all instances of a categorical feature are aggregated, and the average target value is calculated. Then, the instances are sorted by the average target value, and a rank is assigned. This rank is used as the representation of the feature value.
(
    data.lazy()
    .group_by(categorical_column)
    .agg(pl.col(target_name).mean().alias("avg"))
    .sort("avg")
    .collect(streaming=self.streaming)[categorical_column]
)
Since it is possible to provide Polars DataFrames and LazyFrames, I use data.lazy() first. If the given data is a DataFrame, it is converted to a LazyFrame. If it is already a LazyFrame, it just returns itself. With that trick, it is possible to ensure that the data is processed in the same way for LazyFrames and DataFrames and that the collect() method can be used, which is only available for LazyFrames.
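A minimal illustration of that trick:
df = pl.DataFrame({"a": [1, 2, 3]})

lazy_from_df = df.lazy()          # DataFrame -> LazyFrame
lazy_again = lazy_from_df.lazy()  # LazyFrame.lazy() is a no-op and returns itself

# Both code paths can now end with collect(), which only exists on LazyFrames.
result = lazy_again.select(pl.col("a").sum()).collect()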
To illustrate the outcome of the calculations in the different steps of the fitting process, I apply it to a dataset for heart disease prediction. It can be found on Kaggle and is published under the Database Contents License.
Here is an example of the categorical feature representation for the glucose levels:
| rank (u32) | gluc (i8) | avg (f64) |
|---|---|---|
| 0 | 1 | 0.476139 |
| 1 | 2 | 0.586319 |
| 2 | 3 | 0.620972 |
For each of the glucose levels, the probability of having a heart disease is calculated. The result is sorted and then ranked so that each of the levels is mapped to a rank value.
Getting the target values
As the last part of the fit() method, the unique target values are determined.
unique_targets = data.select(target_name).unique()
if isinstance(unique_targets, pl.LazyFrame):
    unique_targets = unique_targets.collect(streaming=self.streaming)
unique_targets = unique_targets[target_name].to_list()

self.tree = self._build_tree(data, feature_names, target_name, unique_targets, depth=0)
This serves as the last preparation before calling the _build_tree() method recursively.
Building the tree
After the data is prepared in the fit() method, the _build_tree() method is called. This happens recursively until a stopping criterion is met, e.g., the max depth of the tree is reached. The first call is executed from the fit() method with a depth of zero.
def _build_tree(
    self,
    data: Union[pl.DataFrame, pl.LazyFrame],
    feature_names: list[str],
    target_name: str,
    unique_targets: list[int],
    depth: int,
) -> dict:
    """
    Builds the decision tree recursively.
    If max_depth is reached, returns a leaf node with the majority class.
    Otherwise, finds the best split and creates internal nodes for left and right children.

    :param data: The dataframe to evaluate.
    :param feature_names: Names of the feature columns.
    :param target_name: Name of the target column.
    :param unique_targets: Unique target values.
    :param depth: The current depth of the tree.

    :return: A dictionary representing the node.
    """
    if self.max_depth is not None and depth >= self.max_depth:
        return {"type": "leaf", "value": self.get_majority_class(data, target_name)}

    # Make data lazy here to avoid that it is evaluated in each loop iteration.
    data = data.lazy()

    # Evaluate entropy per feature:
    information_gain_dfs = []
    for feature_name in feature_names:
        feature_data = data.select([feature_name, target_name]).filter(pl.col(feature_name).is_not_null())
        feature_data = feature_data.rename({feature_name: "feature_value"})

        # No streaming (yet)
        information_gain_df = (
            feature_data.group_by("feature_value")
            .agg(
                [
                    pl.col(target_name)
                    .filter(pl.col(target_name) == target_value)
                    .len()
                    .alias(f"class_{target_value}_count")
                    for target_value in unique_targets
                ]
                + [pl.col(target_name).len().alias("count_examples")]
            )
            .sort("feature_value")
            .select(
                [
                    pl.col(f"class_{target_value}_count").cum_sum().alias(f"cum_sum_class_{target_value}_count")
                    for target_value in unique_targets
                ]
                + [
                    pl.col(f"class_{target_value}_count").sum().alias(f"sum_class_{target_value}_count")
                    for target_value in unique_targets
                ]
                + [
                    pl.col("count_examples").cum_sum().alias("cum_sum_count_examples"),
                    pl.col("count_examples").sum().alias("sum_count_examples"),
                ]
                + [
                    # From previous select
                    pl.col("feature_value"),
                ]
            )
            .filter(
                # At least one example available
                pl.col("sum_count_examples")
                > pl.col("cum_sum_count_examples")
            )
            .select(
                [
                    (pl.col(f"cum_sum_class_{target_value}_count") / pl.col("cum_sum_count_examples")).alias(
                        f"left_proportion_class_{target_value}"
                    )
                    for target_value in unique_targets
                ]
                + [
                    (
                        (pl.col(f"sum_class_{target_value}_count") - pl.col(f"cum_sum_class_{target_value}_count"))
                        / (pl.col("sum_count_examples") - pl.col("cum_sum_count_examples"))
                    ).alias(f"right_proportion_class_{target_value}")
                    for target_value in unique_targets
                ]
                + [
                    (pl.col(f"sum_class_{target_value}_count") / pl.col("sum_count_examples")).alias(
                        f"parent_proportion_class_{target_value}"
                    )
                    for target_value in unique_targets
                ]
                + [
                    # From previous select
                    pl.col("cum_sum_count_examples"),
                    pl.col("sum_count_examples"),
                    pl.col("feature_value"),
                ]
            )
            .select(
                (
                    -1
                    * pl.sum_horizontal(
                        [
                            (
                                pl.col(f"left_proportion_class_{target_value}")
                                * pl.col(f"left_proportion_class_{target_value}").log(base=2)
                            ).fill_nan(0.0)
                            for target_value in unique_targets
                        ]
                    )
                ).alias("left_entropy"),
                (
                    -1
                    * pl.sum_horizontal(
                        [
                            (
                                pl.col(f"right_proportion_class_{target_value}")
                                * pl.col(f"right_proportion_class_{target_value}").log(base=2)
                            ).fill_nan(0.0)
                            for target_value in unique_targets
                        ]
                    )
                ).alias("right_entropy"),
                (
                    -1
                    * pl.sum_horizontal(
                        [
                            (
                                pl.col(f"parent_proportion_class_{target_value}")
                                * pl.col(f"parent_proportion_class_{target_value}").log(base=2)
                            ).fill_nan(0.0)
                            for target_value in unique_targets
                        ]
                    )
                ).alias("parent_entropy"),
                # From previous select
                pl.col("cum_sum_count_examples"),
                pl.col("sum_count_examples"),
                pl.col("feature_value"),
            )
            .select(
                (
                    pl.col("cum_sum_count_examples") / pl.col("sum_count_examples") * pl.col("left_entropy")
                    + (pl.col("sum_count_examples") - pl.col("cum_sum_count_examples"))
                    / pl.col("sum_count_examples")
                    * pl.col("right_entropy")
                ).alias("child_entropy"),
                # From previous select
                pl.col("parent_entropy"),
                pl.col("feature_value"),
            )
            .select(
                (pl.col("parent_entropy") - pl.col("child_entropy")).alias("information_gain"),
                # From previous select
                pl.col("parent_entropy"),
                pl.col("feature_value"),
            )
            .filter(pl.col("information_gain").is_not_nan())
            .sort("information_gain", descending=True)
            .head(1)
            .with_columns(feature=pl.lit(feature_name))
        )
        information_gain_dfs.append(information_gain_df)

    if isinstance(information_gain_dfs[0], pl.LazyFrame):
        information_gain_dfs = pl.collect_all(information_gain_dfs, streaming=self.streaming)

    information_gain_dfs = pl.concat(information_gain_dfs, how="vertical_relaxed").sort(
        "information_gain", descending=True
    )

    information_gain = 0
    if len(information_gain_dfs) > 0:
        best_params = information_gain_dfs.row(0, named=True)
        information_gain = best_params["information_gain"]

    if information_gain > 0:
        left_mask = data.select(filter=pl.col(best_params["feature"]) <= best_params["feature_value"])
        if isinstance(left_mask, pl.LazyFrame):
            left_mask = left_mask.collect(streaming=self.streaming)
        left_mask = left_mask["filter"]

        # Split data
        left_df = data.filter(left_mask)
        right_df = data.filter(~left_mask)

        left_subtree = self._build_tree(left_df, feature_names, target_name, unique_targets, depth + 1)
        right_subtree = self._build_tree(right_df, feature_names, target_name, unique_targets, depth + 1)

        if isinstance(data, pl.LazyFrame):
            target_distribution = (
                data.select(target_name)
                .collect(streaming=self.streaming)[target_name]
                .value_counts()
                .sort(target_name)["count"]
                .to_list()
            )
        else:
            target_distribution = data[target_name].value_counts().sort(target_name)["count"].to_list()

        return {
            "type": "node",
            "feature": best_params["feature"],
            "threshold": best_params["feature_value"],
            "information_gain": best_params["information_gain"],
            "entropy": best_params["parent_entropy"],
            "target_distribution": target_distribution,
            "left": left_subtree,
            "right": right_subtree,
        }
    else:
        return {"type": "leaf", "value": self.get_majority_class(data, target_name)}
This method is the heart of building the trees, and I will explain it step by step. First, when entering the method, it is checked whether the max depth stopping criterion is met.
if self.max_depth is not None and depth >= self.max_depth:
    return {"type": "leaf", "value": self.get_majority_class(data, target_name)}
If the current depth is equal to or greater than the max_depth, a node of type leaf is returned. The value of the leaf corresponds to the majority class of the data. This is calculated as follows:
def get_majority_class(self, df: Union[pl.DataFrame, pl.LazyFrame], target_name: str) -> str:
    """
    Returns the majority class of a dataframe.

    :param df: The dataframe to evaluate.
    :param target_name: Name of the target column.

    :return: majority class.
    """
    majority_class = df.group_by(target_name).len().filter(pl.col("len") == pl.col("len").max()).select(target_name)
    if isinstance(majority_class, pl.LazyFrame):
        majority_class = majority_class.collect(streaming=self.streaming)
    return majority_class[target_name][0]
To get the majority class, the count of rows per target is determined by grouping over the target column and aggregating with len(). The target value that is present in most of the rows is returned as the majority class.
Information Gain as Splitting Criteria
To find a good split of the data, the information gain is used.
![](https://towardsdatascience.com/wp-content/uploads/2025/02/1_wvX6aHlQ4uHbvaFwN04DiA-1024x219.png)
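For reference, written out as a formula (this matches how the code below computes it, with the child entropy being the size-weighted average of the left and right split entropies):
$$\text{Information Gain} = E_{\text{parent}} - E_{\text{child}}, \qquad E_{\text{child}} = \frac{N_{\text{left}}}{N} E_{\text{left}} + \frac{N_{\text{right}}}{N} E_{\text{right}}$$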
To get the information gain, the parent entropy and the child entropy have to be calculated.
![](https://towardsdatascience.com/wp-content/uploads/2025/02/1_xvu-vWjfq-_l14EQV2ZhHA-1024x320.png)
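Equation 2 is the standard entropy over the class proportions $p_i$:
$$E = -\sum_{i} p_i \log_2 p_i$$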
Calculating the Information Gain in Polars
The information gain is calculated for each feature value that is present in a feature column.
information_gain_df = (
    feature_data.group_by("feature_value")
    .agg(
        [
            pl.col(target_name)
            .filter(pl.col(target_name) == target_value)
            .len()
            .alias(f"class_{target_value}_count")
            for target_value in unique_targets
        ]
        + [pl.col(target_name).len().alias("count_examples")]
    )
    .sort("feature_value")
The feature values are grouped, and the count of each of the target values is assigned to them. Additionally, the total count of rows for that feature value is stored as count_examples. In the last step, the data is sorted by feature_value. This is needed to calculate the splits in the next step.
For the heart disease dataset, after the first calculation step, the data looks like this:
| feature_value (i8) | class_0_count (u32) | class_1_count (u32) | count_examples (u32) |
|---|---|---|---|
| 29 | 2 | 0 | 2 |
| 30 | 1 | 0 | 1 |
| 39 | 1068 | 331 | 1399 |
| 40 | 975 | 263 | 1238 |
| 41 | 1052 | 438 | 1490 |
| … | … | … | … |
| 60 | 1054 | 1460 | 2514 |
| 61 | 695 | 1408 | 2103 |
| 62 | 566 | 1125 | 1691 |
| 63 | 572 | 1517 | 2089 |
| 64 | 479 | 1217 | 1696 |
Here, the feature age_years is processed. Class 0 stands for "no heart disease," and class 1 stands for "heart disease." The data is sorted by the age_years feature, and the columns contain the count of class 0, the count of class 1, and the total count of examples with the respective feature value.
In the next step, the cumulative sum over the count of classes is calculated for each feature value.
.select(
    [
        pl.col(f"class_{target_value}_count").cum_sum().alias(f"cum_sum_class_{target_value}_count")
        for target_value in unique_targets
    ]
    + [
        pl.col(f"class_{target_value}_count").sum().alias(f"sum_class_{target_value}_count")
        for target_value in unique_targets
    ]
    + [
        pl.col("count_examples").cum_sum().alias("cum_sum_count_examples"),
        pl.col("count_examples").sum().alias("sum_count_examples"),
    ]
    + [
        # From previous select
        pl.col("feature_value"),
    ]
)
.filter(
    # At least one example available
    pl.col("sum_count_examples")
    > pl.col("cum_sum_count_examples")
)
The intuition behind it is that when a split is executed at a specific feature value, it includes the target-value counts of all smaller feature values. To be able to calculate the proportions, the total sum of the target values is calculated as well. The same procedure is repeated for count_examples, where both the cumulative sum and the total sum are calculated.
After the calculation, the data looks like this:
| cum_sum_class_0_count (u32) | cum_sum_class_1_count (u32) | sum_class_0_count (u32) | sum_class_1_count (u32) | cum_sum_count_examples (u32) | sum_count_examples (u32) | feature_value (i8) |
|---|---|---|---|---|---|---|
| 3 | 0 | 27717 | 26847 | 3 | 54564 | 29 |
| 4 | 0 | 27717 | 26847 | 4 | 54564 | 30 |
| 1097 | 324 | 27717 | 26847 | 1421 | 54564 | 39 |
| 2090 | 595 | 27717 | 26847 | 2685 | 54564 | 40 |
| 3155 | 1025 | 27717 | 26847 | 4180 | 54564 | 41 |
| … | … | … | … | … | … | … |
| 24302 | 20162 | 27717 | 26847 | 44464 | 54564 | 59 |
| 25356 | 21581 | 27717 | 26847 | 46937 | 54564 | 60 |
| 26046 | 23020 | 27717 | 26847 | 49066 | 54564 | 61 |
| 26615 | 24131 | 27717 | 26847 | 50746 | 54564 | 62 |
| 27216 | 25652 | 27717 | 26847 | 52868 | 54564 | 63 |
In the next step, the proportions are calculated for each feature value.
.select(
    [
        (pl.col(f"cum_sum_class_{target_value}_count") / pl.col("cum_sum_count_examples")).alias(
            f"left_proportion_class_{target_value}"
        )
        for target_value in unique_targets
    ]
    + [
        (
            (pl.col(f"sum_class_{target_value}_count") - pl.col(f"cum_sum_class_{target_value}_count"))
            / (pl.col("sum_count_examples") - pl.col("cum_sum_count_examples"))
        ).alias(f"right_proportion_class_{target_value}")
        for target_value in unique_targets
    ]
    + [
        (pl.col(f"sum_class_{target_value}_count") / pl.col("sum_count_examples")).alias(
            f"parent_proportion_class_{target_value}"
        )
        for target_value in unique_targets
    ]
    + [
        # From previous select
        pl.col("cum_sum_count_examples"),
        pl.col("sum_count_examples"),
        pl.col("feature_value"),
    ]
)
To calculate the proportions, the results from the previous step can be used. For the left proportion, the cumulative sum of each target value is divided by the cumulative sum of the example count. For the right proportion, we need to know how many examples we have on the right side for each target value. That is calculated by subtracting the cumulative sum of the target value from the total sum of the target value. The same calculation is used to determine the total count of examples on the right side, by subtracting the cumulative sum of the example count from the total sum of the example count. Additionally, the parent proportion is calculated. This is done by dividing the sum of the target value counts by the total count of examples.
This is the resulting data after this step:
| left_proportion_class_0 (f64) | left_proportion_class_1 (f64) | right_proportion_class_0 (f64) | right_proportion_class_1 (f64) | … | parent_proportion_class_1 (f64) | cum_sum_count_examples (u32) | sum_count_examples (u32) | feature_value (i8) |
|---|---|---|---|---|---|---|---|---|
| 1.0 | 0.0 | 0.506259 | 0.493741 | … | 0.493714 | 3 | 54564 | 29 |
| 1.0 | 0.0 | 0.50625 | 0.49375 | … | 0.493714 | 4 | 54564 | 30 |
| 0.754902 | 0.245098 | 0.499605 | 0.500395 | … | 0.493714 | 1428 | 54564 | 39 |
| 0.765596 | 0.234404 | 0.492739 | 0.507261 | … | 0.493714 | 2709 | 54564 | 40 |
| 0.741679 | 0.258321 | 0.486929 | 0.513071 | … | 0.493714 | 4146 | 54564 | 41 |
| … | … | … | … | … | … | … | … | … |
| 0.545735 | 0.454265 | 0.333563 | 0.666437 | … | 0.493714 | 44419 | 54564 | 59 |
| 0.539065 | 0.460935 | 0.305025 | 0.694975 | … | 0.493714 | 46922 | 54564 | 60 |
| 0.529725 | 0.470275 | 0.297071 | 0.702929 | … | 0.493714 | 49067 | 54564 | 61 |
| 0.523006 | 0.476994 | 0.282551 | 0.717449 | … | 0.493714 | 50770 | 54564 | 62 |
| 0.513063 | 0.486937 | 0.296188 | 0.703812 | … | 0.493714 | 52859 | 54564 | 63 |
Now that the proportions are available, the entropy can be calculated.
.select(
    (
        -1
        * pl.sum_horizontal(
            [
                (
                    pl.col(f"left_proportion_class_{target_value}")
                    * pl.col(f"left_proportion_class_{target_value}").log(base=2)
                ).fill_nan(0.0)
                for target_value in unique_targets
            ]
        )
    ).alias("left_entropy"),
    (
        -1
        * pl.sum_horizontal(
            [
                (
                    pl.col(f"right_proportion_class_{target_value}")
                    * pl.col(f"right_proportion_class_{target_value}").log(base=2)
                ).fill_nan(0.0)
                for target_value in unique_targets
            ]
        )
    ).alias("right_entropy"),
    (
        -1
        * pl.sum_horizontal(
            [
                (
                    pl.col(f"parent_proportion_class_{target_value}")
                    * pl.col(f"parent_proportion_class_{target_value}").log(base=2)
                ).fill_nan(0.0)
                for target_value in unique_targets
            ]
        )
    ).alias("parent_entropy"),
    # From previous select
    pl.col("cum_sum_count_examples"),
    pl.col("sum_count_examples"),
    pl.col("feature_value"),
)
For the calculation of the entropy, Equation 2 is used. The left entropy is calculated using the left proportions, and the right entropy uses the right proportions. For the parent entropy, the parent proportions are used. In this implementation, pl.sum_horizontal() is used to calculate the sum of the proportion terms, to make use of possible optimizations in Polars. This could also be replaced with the Python-native sum() function.
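For example, the left entropy term could be written with the built-in sum() instead (a sketch; Polars expressions support the + operator, so summing them with Python's sum() also yields an expression):
left_entropy = (
    -1
    * sum(
        (
            pl.col(f"left_proportion_class_{target_value}")
            * pl.col(f"left_proportion_class_{target_value}").log(base=2)
        ).fill_nan(0.0)
        for target_value in unique_targets
    )
).alias("left_entropy")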
The data with the entropy values looks as follows:
| left_entropy (f64) | right_entropy (f64) | parent_entropy (f64) | cum_sum_count_examples (u32) | sum_count_examples (u32) | feature_value (i8) |
|---|---|---|---|---|---|
| -0.0 | 0.999854 | 0.999853 | 3 | 54564 | 29 |
| -0.0 | 0.999854 | 0.999853 | 4 | 54564 | 30 |
| 0.783817 | 1.0 | 0.999853 | 1427 | 54564 | 39 |
| 0.767101 | 0.999866 | 0.999853 | 2694 | 54564 | 40 |
| 0.808516 | 0.999503 | 0.999853 | 4177 | 54564 | 41 |
| … | … | … | … | … | … |
| 0.993752 | 0.918461 | 0.999853 | 44483 | 54564 | 59 |
| 0.995485 | 0.890397 | 0.999853 | 46944 | 54564 | 60 |
| 0.997367 | 0.880977 | 0.999853 | 49106 | 54564 | 61 |
| 0.99837 | 0.859431 | 0.999853 | 50800 | 54564 | 62 |
| 0.999436 | 0.872346 | 0.999853 | 52877 | 54564 | 63 |
Almost there! The final step is still missing, which is calculating the child entropy and using it to get the information gain.
.select(
    (
        pl.col("cum_sum_count_examples") / pl.col("sum_count_examples") * pl.col("left_entropy")
        + (pl.col("sum_count_examples") - pl.col("cum_sum_count_examples"))
        / pl.col("sum_count_examples")
        * pl.col("right_entropy")
    ).alias("child_entropy"),
    # From previous select
    pl.col("parent_entropy"),
    pl.col("feature_value"),
)
.select(
    (pl.col("parent_entropy") - pl.col("child_entropy")).alias("information_gain"),
    # From previous select
    pl.col("parent_entropy"),
    pl.col("feature_value"),
)
.filter(pl.col("information_gain").is_not_nan())
.sort("information_gain", descending=True)
.head(1)
.with_columns(feature=pl.lit(feature_name))
)
information_gain_dfs.append(information_gain_df)
For the child entropy, the left and right entropy are weighted by the count of examples on each side of the split. The sum of both weighted entropy values is used as the child entropy. To calculate the information gain, we simply need to subtract the child entropy from the parent entropy, as can be seen in Equation 1. The best feature value is determined by sorting the data by information gain and selecting the first row. It is appended to a list that gathers the best feature values of all features.
Before applying .head(1), the data looks as follows:
| information_gain (f64) | parent_entropy (f64) | feature_value (i8) |
|---|---|---|
| 0.028388 | 0.999928 | 54 |
| 0.027719 | 0.999928 | 52 |
| 0.027283 | 0.999928 | 53 |
| 0.026826 | 0.999928 | 50 |
| 0.026812 | 0.999928 | 51 |
| … | … | … |
| 0.010928 | 0.999928 | 62 |
| 0.005872 | 0.999928 | 39 |
| 0.004155 | 0.999928 | 63 |
| 0.000072 | 0.999928 | 30 |
| 0.000054 | 0.999928 | 29 |
Here, it can be seen that the age feature value of 54 has the highest information gain. This feature value is collected for the age feature and has to compete against the other features.
Selecting the Best Split and Defining the Sub-Trees
To select the best split, the highest information gain has to be found across all features.
if isinstance(information_gain_dfs[0], pl.LazyFrame):
    information_gain_dfs = pl.collect_all(information_gain_dfs, streaming=self.streaming)

information_gain_dfs = pl.concat(information_gain_dfs, how="vertical_relaxed").sort(
    "information_gain", descending=True
)
For that, the pl.collect_all() method is used on information_gain_dfs. This evaluates all LazyFrames in parallel, which makes the processing very efficient. The result is a list of Polars DataFrames, which are concatenated and sorted by information gain.
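A small standalone example of that pattern (with made-up frames):
lf1 = pl.LazyFrame({"a": [1, 2, 3]}).select(pl.col("a").sum().alias("value"))
lf2 = pl.LazyFrame({"b": [4.0, 5.0]}).select(pl.col("b").mean().alias("value"))

# Both query plans are executed in parallel; the result is a list of DataFrames.
df1, df2 = pl.collect_all([lf1, lf2])
combined = pl.concat([df1, df2], how="vertical_relaxed")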
For the heart disease example, the data looks like this:
| information_gain (f64) | parent_entropy (f64) | feature_value (f64) | feature (str) |
|---|---|---|---|
| 0.138032 | 0.999909 | 129.0 | ap_hi |
| 0.09087 | 0.999909 | 85.0 | ap_lo |
| 0.029966 | 0.999909 | 0.0 | cholesterol |
| 0.028388 | 0.999909 | 54.0 | age_years |
| 0.01968 | 0.999909 | 27.435041 | bmi |
| … | … | … | … |
| 0.000851 | 0.999909 | 0.0 | active |
| 0.000351 | 0.999909 | 156.0 | height |
| 0.000223 | 0.999909 | 0.0 | smoke |
| 0.000098 | 0.999909 | 0.0 | alco |
| 0.000031 | 0.999909 | 0.0 | gender |
Out of all features, the ap_hi (systolic blood pressure) feature value of 129 results in the best information gain and is therefore selected for the first split.
information_gain = 0
if len(information_gain_dfs) > 0:
    best_params = information_gain_dfs.row(0, named=True)
    information_gain = best_params["information_gain"]
In some cases, information_gain_dfs might be empty, for example, when all splits result in having only examples on the left or right side. If that is the case, the information gain is zero. Otherwise, we get the feature value with the highest information gain.
if information_gain > 0:
    left_mask = data.select(filter=pl.col(best_params["feature"]) <= best_params["feature_value"])
    if isinstance(left_mask, pl.LazyFrame):
        left_mask = left_mask.collect(streaming=self.streaming)
    left_mask = left_mask["filter"]

    # Split data
    left_df = data.filter(left_mask)
    right_df = data.filter(~left_mask)

    left_subtree = self._build_tree(left_df, feature_names, target_name, unique_targets, depth + 1)
    right_subtree = self._build_tree(right_df, feature_names, target_name, unique_targets, depth + 1)

    if isinstance(data, pl.LazyFrame):
        target_distribution = (
            data.select(target_name)
            .collect(streaming=self.streaming)[target_name]
            .value_counts()
            .sort(target_name)["count"]
            .to_list()
        )
    else:
        target_distribution = data[target_name].value_counts().sort(target_name)["count"].to_list()

    return {
        "type": "node",
        "feature": best_params["feature"],
        "threshold": best_params["feature_value"],
        "information_gain": best_params["information_gain"],
        "entropy": best_params["parent_entropy"],
        "target_distribution": target_distribution,
        "left": left_subtree,
        "right": right_subtree,
    }
else:
    return {"type": "leaf", "value": self.get_majority_class(data, target_name)}
When the information gain is greater than zero, the sub-trees are defined. For that, the left mask is created using the feature value that resulted in the best information gain. The mask is applied to the parent data to get the left data frame. The negation of the left mask is used to define the right data frame. Both left and right data frames are used to call the _build_tree() method again with the depth increased by one. As the last step, the target distribution is calculated. This is used as additional information on the node and is visible, along with the other information, when plotting the tree.
When the information gain is zero, a leaf node is returned. It contains the majority class of the given data.
Make predictions
It is possible to make predictions in two different ways. If the input data is small, the predict() method can be used.
def predict(self, data: Iterable[dict]):
    def _predict_sample(node, sample):
        if node["type"] == "leaf":
            return node["value"]
        if sample[node["feature"]] <= node["threshold"]:
            return _predict_sample(node["left"], sample)
        else:
            return _predict_sample(node["right"], sample)

    predictions = [_predict_sample(self.tree, sample) for sample in data]
    return predictions
Here, the data can be provided as an iterable of dicts. Each dict contains the feature names as keys and the feature values as values. By using the _predict_sample() method, the path through the tree is followed until a leaf node is reached. That leaf contains the class that is assigned to the respective example.
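A hypothetical call could look like this (the feature names come from the heart disease dataset and the values are made up; every feature the tree splits on must be present in each dict, and categorical features would have to be passed already rank-encoded, since predict() does not apply the mappings):
samples = [
    {"age_years": 54, "ap_hi": 140, "ap_lo": 90, "bmi": 27.4, "height": 168, "weight": 77.0},
    {"age_years": 40, "ap_hi": 120, "ap_lo": 80, "bmi": 22.0, "height": 175, "weight": 67.0},
]
predictions = classifier.predict(samples)  # one predicted class (0 or 1) per sample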
def predict_many(self, data: Union[pl.DataFrame, pl.LazyFrame]) -> List[Union[int, float]]:
    """
    Predict method.

    :param data: Polars DataFrame or LazyFrame.
    :return: List of predicted target values.
    """
    if self.categorical_mappings:
        data = self.apply_categorical_mappings(data)

    def _predict_many(node, temp_data):
        if node["type"] == "node":
            left = _predict_many(node["left"], temp_data.filter(pl.col(node["feature"]) <= node["threshold"]))
            right = _predict_many(node["right"], temp_data.filter(pl.col(node["feature"]) > node["threshold"]))
            return pl.concat([left, right], how="diagonal_relaxed")
        else:
            return temp_data.select(pl.col("temp_prediction_index"), pl.lit(node["value"]).alias("prediction"))

    data = data.with_row_index("temp_prediction_index")
    predictions = _predict_many(self.tree, data).sort("temp_prediction_index").select(pl.col("prediction"))

    # Convert predictions to a list
    if isinstance(predictions, pl.LazyFrame):
        # Although the execution plan says there is no streaming, using streaming here significantly
        # improves the performance and reduces the memory footprint.
        predictions = predictions.collect(streaming=True)

    predictions = predictions["prediction"].to_list()
    return predictions
If a big example set should be predicted, it is more efficient to use the predict_many() method. It makes use of the advantages that Polars provides in terms of parallel processing and memory efficiency.
The data can be provided as a Polars DataFrame or LazyFrame. Similarly to the _build_tree() method in the training process, a _predict_many() method is called recursively. All examples in the data are filtered into sub-trees until a leaf node is reached. Examples that took the same path to a leaf node get the same prediction value assigned. At the end of the process, all sub-frames of examples are concatenated again. Since the order cannot be preserved this way, a temporary prediction index is set at the beginning of the process. When all predictions are done, the original order is restored by sorting by that index.
Using the classifier on a dataset
A usage example for the decision tree classifier can be found here. The decision tree is trained on a heart disease dataset. A train and test set is defined to test the performance of the implementation. After the training, the tree is plotted and saved to a file.
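A condensed sketch of such a usage; the file name and the preprocessing are assumptions, while the target column cardio and the feature names come from the dataset shown above:
import polars as pl

# Hypothetical file name; assuming the preprocessed heart disease data
# (including age_years and bmi) is available as a CSV.
data = pl.scan_csv("cardio_train_prepared.csv")

classifier = DecisionTreeClassifier(
    streaming=False,
    max_depth=4,
    categorical_columns=["gender", "cholesterol", "gluc", "smoke", "alco", "active"],
)
classifier.fit(data, target_name="cardio")

# Predictions for a whole DataFrame/LazyFrame are produced with predict_many().
predictions = classifier.predict_many(data)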
With a max depth of 4, the resulting tree looks as follows:
![](https://towardsdatascience.com/wp-content/uploads/2025/02/1_8wEPWI1OzNJwVKqctm5Rlw-1024x328.png)
It achieves a train and test accuracy of 73% on the given data.
Runtime comparison
One goal of using Polars as a backend for decision trees is to explore the runtime and memory usage and compare it to other frameworks. For that, I created a memory profiling script that can be found here.
The script compares this implementation, which is called "efficient-trees", against sklearn and LightGBM. For efficient-trees, both the lazy streaming variant and the non-lazy in-memory variant are tested.
![](https://towardsdatascience.com/wp-content/uploads/2025/02/1_Wqrg3C7I2D6tPqtebZBKqw.png)
In the graph, it can be seen that LightGBM is the fastest and most memory-efficient framework. Since it introduced the possibility of using Arrow datasets a while ago, the data can be processed efficiently. However, since the whole dataset still needs to be loaded and cannot be streamed, there are still potential scaling issues.
The next best framework is efficient-trees without and with streaming. While efficient-trees without streaming has a better runtime, the streaming variant uses less memory.
The sklearn implementation achieves the worst results in terms of memory usage and runtime. Since the data needs to be provided as a NumPy array, the memory usage grows a lot. The runtime can be explained by the use of only a single CPU core; support for multi-threading or multi-processing does not exist yet.
Deep dive: Streaming in Polars
As can be seen in the comparison of the frameworks, the possibility of streaming the data instead of holding it in memory makes a difference compared to all other frameworks. However, the streaming engine is still considered an experimental feature, and not all operations are compatible with streaming yet.
To get a better understanding of what happens in the background, a look into the execution plan is helpful. Let's jump back into the training process and get the execution plan for the following operation:
def fit(self, data: Union[pl.DataFrame, pl.LazyFrame], target_name: str) -> None:
    """
    Fit method to train the decision tree.

    :param data: Polars DataFrame or LazyFrame containing the training data.
    :param target_name: Name of the target column
    """
    columns = data.collect_schema().names()
    feature_names = [col for col in columns if col != target_name]

    # Shrink dtypes
    data = data.select(pl.all().shrink_dtype()).with_columns(
        pl.col(target_name).cast(pl.UInt64).shrink_dtype().alias(target_name)
    )
The execution plan for data can be created with the following command:
data.explain(streaming=True)
This returns the execution plan for the LazyFrame.
WITH_COLUMNS:
[col("cardio").strict_cast(UInt64).shrink_dtype().alias("cardio")]
SELECT [col("gender").shrink_dtype(), col("height").shrink_dtype(), col("weight").shrink_dtype(), col("ap_hi").shrink_dtype(), col("ap_lo").shrink_dtype(), col("cholesterol").shrink_dtype(), col("gluc").shrink_dtype(), col("smoke").shrink_dtype(), col("alco").shrink_dtype(), col("active").shrink_dtype(), col("cardio").shrink_dtype(), col("age_years").shrink_dtype(), col("bmi").shrink_dtype()] FROM
STREAMING:
DF ["gender", "height", "weight", "ap_hi"]; PROJECT 13/13 COLUMNS; SELECTION: None
The important keyword here is STREAMING. It can be seen that the initial dataset loading happens in streaming mode, but when shrinking the dtypes, the whole dataset has to be loaded into memory. Since the dtype shrinking is not a necessary part, I remove it temporarily to explore up to which operation streaming is supported.
The next problematic operation is assigning the categorical features.
def apply_categorical_mappings(self, data: Union[pl.DataFrame, pl.LazyFrame]) -> Union[pl.DataFrame, pl.LazyFrame]:
    """
    Apply categorical mappings on input frame.

    :param data: Polars DataFrame or LazyFrame with categorical columns.

    :return: Polars DataFrame or LazyFrame with mapped categorical columns
    """
    return data.with_columns(
        [pl.col(col).replace(self.categorical_mappings[col]).cast(pl.UInt32) for col in self.categorical_columns]
    )
The replace expression does not support the streaming mode. Even after removing the cast, streaming is not used, which can be seen in the execution plan.
WITH_COLUMNS:
[col("gender").replace([Series, Series]), col("cholesterol").replace([Series, Series]), col("gluc").replace([Series, Series]), col("smoke").replace([Series, Series]), col("alco").replace([Series, Series]), col("active").replace([Series, Series])]
STREAMING:
DF ["gender", "height", "weight", "ap_hi"]; PROJECT */13 COLUMNS; SELECTION: None
Moving on, I also remove the support for categorical features. What happens next is the calculation of the information gain.
information_gain_df = (
    feature_data.group_by("feature_value")
    .agg(
        [
            pl.col(target_name)
            .filter(pl.col(target_name) == target_value)
            .len()
            .alias(f"class_{target_value}_count")
            for target_value in unique_targets
        ]
        + [pl.col(target_name).len().alias("count_examples")]
    )
    .sort("feature_value")
)
Unfortunately, the streaming mode is already not supported anymore in this first part of the calculation. Here, using pl.col().filter() prevents us from streaming the data.
SORT BY [col("feature_value")]
AGGREGATE
[col("cardio").filter([(col("cardio")) == (1)]).count().alias("class_1_count"), col("cardio").filter([(col("cardio")) == (0)]).count().alias("class_0_count"), col("cardio").count().alias("count_examples")] BY [col("feature_value")] FROM
STREAMING:
RENAME
simple π 2/2 ["gender", "cardio"]
DF ["gender", "height", "weight", "ap_hi"]; PROJECT 2/13 COLUMNS; SELECTION: col("gender").is_not_null()
Since this is not easy to change, I will stop the exploration here. It can be concluded that the decision tree implementation with a Polars backend cannot use the full potential of streaming yet, since important operators are still missing streaming support. Because the streaming mode is under active development, it might become possible to run most of the operators, or even the whole calculation of the decision tree, in streaming mode in the future.
Conclusion
In this blog post, I presented my custom implementation of a decision tree using Polars as a backend. I showed implementation details and compared it to other decision tree frameworks. The comparison shows that this implementation can outperform sklearn in terms of runtime and memory usage. But there are still other frameworks, like LightGBM, that provide a better runtime and more efficient processing. There is a lot of potential in the streaming mode of the Polars backend. Currently, some operators prevent an end-to-end streaming approach due to a lack of streaming support, but this is under active development. When Polars makes progress there, it is worth revisiting this implementation and comparing it to other frameworks again.