High-Performance Python Data Processing: pandas 2 vs. Polars, a vCPU Perspective | by Saar Berkovich | Aug, 2024

The setup

I used an AWS m6a.xlarge machine, which has 4 vCores and 16GB RAM available, and used taskset to assign 1 vCore and 2 vCores to the process at a time to simulate a machine with fewer vCores. For library versions, I took the most recent stable releases available at the time:
pandas==2.2.2; polars==1.2.1

The data

The dataset was randomly generated to consist of 1M rows and 5 columns, and is meant to serve as a history of 100k users' operations made in 10k sessions within a certain product:
user_id (int)
action_types (enum, can take the values in ["click", "view", "purchase"])
timestamp (datetime)
session_id (int)
session_duration (float)
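The actual generation code is linked further down; a minimal sketch of a dataset with this schema might look as follows (the column names and enum values come from the article; the specific distributions are my assumptions):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
N = 1_000_000  # 1M rows

df = pd.DataFrame({
    "user_id": rng.integers(0, 100_000, N),               # ~100k users
    "action_types": pd.Categorical.from_codes(            # enum column
        rng.integers(0, 3, N), ["click", "view", "purchase"]
    ),
    "timestamp": pd.Timestamp("2024-01-01")               # datetime column
    + pd.to_timedelta(rng.integers(0, 30 * 24 * 3600, N), unit="s"),
    "session_id": rng.integers(0, 10_000, N),             # ~10k sessions
    "session_duration": rng.exponential(60.0, N),         # assumed: seconds
})
```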

The premise

Given the dataset, we want to find the top 10% most engaged users, judged by their average session duration. So, we'd first want to calculate the average session duration per user (grouping and aggregation), find the 90th quantile (quantile computation), select all the users above the quantile (filtering), and make sure the list is ordered by average session duration (sorting).
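These four steps can be sketched in pandas roughly as follows (a simplified illustration on a toy frame, not the benchmark code; only the two relevant columns are included):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the real 1M-row dataset (column names match the schema above)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "user_id": rng.integers(0, 1_000, 100_000),
    "session_duration": rng.exponential(60.0, 100_000),
})

# 1. Grouping and aggregation: average session duration per user
avg = df.groupby("user_id")["session_duration"].mean()

# 2. Quantile computation: the 90th quantile of those averages
q90 = avg.quantile(0.9)

# 3. Filtering: keep only users above the quantile
top = avg[avg > q90]

# 4. Sorting: order by average session duration, descending
top = top.sort_values(ascending=False)
```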

Testing

Each of the operations was run 200 times (using timeit), taking the mean run time and the standard error to serve as the measurement error. The code can be found here.
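The measurement scheme (repeated runs via timeit, mean plus standard error) can be sketched like this; `measure` and `op` are illustrative names, not from the benchmark code:

```python
import statistics
import timeit

def measure(op, repeat=200):
    """Time `op` `repeat` times; return (mean, standard error) in seconds."""
    times = timeit.repeat(op, repeat=repeat, number=1)
    mean = statistics.fmean(times)
    # Standard error of the mean: sample stdev / sqrt(n)
    sem = statistics.stdev(times) / len(times) ** 0.5
    return mean, sem

# Example with a trivial stand-in operation:
mean, sem = measure(lambda: sorted(range(1_000)), repeat=20)
```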

A note on eager vs. lazy evaluation

Another difference between pandas and Polars is that the former uses eager execution (statements are executed as they're written) by default, while the latter uses lazy execution (statements are compiled and only run when needed). Polars' lazy execution helps it optimize queries, which makes for a very nice feature in heavy data analysis tasks. The choice to split our task into four separate operations was made to eliminate this aspect and focus on comparing more basic performance factors.

Group by + Aggregate

Mean execution time for the group by and aggregate operation, by library and vCores. Image and data by author.

We can see that pandas doesn't scale with vCores, as expected. This trend will persist throughout our tests. I decided to keep it in the plots, but we won't reference it again.

polars' results are quite impressive here: with a 1 vCore setup it managed to finish a third faster than pandas, and as we scale to 2 and 4 cores it finishes roughly 35% and 50% faster, respectively.

Quantile Computation

Mean execution time for the quantile computation operation, by library and vCores. Image and data by author.

This one is interesting. In all vCore setups, polars finished around 5x faster than pandas. On the 1 vCore setup, it measured 0.2ms on average, but with a significant standard error (meaning that the operation would sometimes finish well after 0.2ms, and at other times well before it). When scaling to multiple cores we get more stable run times: 2 vCores at 0.21ms and 4 vCores at 0.19ms (around 10% faster).

Filtering

Mean execution time for the filter operation, by library and vCores. Image and data by author.

In all cases, Polars finishes faster than pandas (its worst run time is still 2 times faster than pandas). However, we can see a very unusual trend here: the run time increases with vCores, when we'd expect it to decrease. The run time of the operation with 4 vCores is roughly 35% slower than with 1 vCore. While parallelization gives you more computing power, it often comes with some overhead; managing and orchestrating parallel processes is often very difficult.

This Polars scaling issue is perplexing: the implementation on my end is very simple, and I was not able to find a relevant open issue on the Polars repo (there are currently over 1k open issues there, though).
Do you have any idea as to why this could have happened? Let me know in the comments.

Sorting

Mean execution time for the sort operation, by library and vCores. Image and data by author.

After filtering, we're left with around 13.5k rows.

Here, we can see that the 1 vCore Polars case is significantly slower than pandas (by around 45%). As we scale to 2 vCores the run time becomes competitive with pandas', and by the time we scale to 4 vCores Polars becomes significantly faster than pandas. The most likely scenario here is that Polars uses a sorting algorithm that's optimized for parallelization; such an algorithm may have poor performance on a single core.

Looking more closely at the docs, I found that the sort operation in Polars has a multithreaded parameter that controls whether a multi-threaded sorting algorithm is used or a single-threaded one.

Sorting (with multithreaded=False)

Mean execution time for the sort operation (with multithreaded=False), by library and vCores. Image and data by author.

This time, we can see much more consistent run times, which don't scale with cores but do beat pandas.