Are your top-selling products making or breaking your online business?
It's terrifying to think your entire revenue might collapse if one or two products fall out of favor. But spreading too thin across hundreds of products often leads to mediocre results and brutal price wars.
Discover how a 6-year Shopify case study revealed the right balance between focus and diversification.
Why bother?
Understanding concentration in your product portfolio is more than just an intellectual exercise; it has a direct impact on crucial business decisions. From inventory planning to marketing spend, knowing how your revenue is distributed among products shapes your approach.
This post walks through practical methods for tracking concentration, explaining what these measurements actually mean and how to get useful insights from your data.
I'll take you through fundamental metrics and advanced analysis, together with interactive visualizations that bring the data to life.
I'm also sharing chunks of the R code used in this analysis. Use it directly or adapt the logic to your preferred programming language.
In market analysis or investment theory, we often focus on concentration — how value is distributed across different components. In e-commerce, this translates into a fundamental question: how much of your revenue should come from your top products?
Is it better to have a few strong sellers or a broad product range? This isn't only a theoretical question …
Having most of your revenue tied to a few products means your operations are streamlined and focused. But what happens when market preferences shift? Conversely, spreading revenue across hundreds of products might seem safer, but it often means you lack any real competitive advantage.
So where's the optimal point? Or rather, what's the optimal range, and how do various ratios describe it?
What makes this analysis particularly valuable is that it's based on real data from a business that kept expanding its product range over time.
On Datasets
This analysis was done for a real US-based e-commerce store — one of our clients, who kindly agreed to share their data for this article. The data spans six years of their growth, giving us a rich view of how product concentration evolves as a business matures.
While working with actual business data gives us genuine insights, I've also created a synthetic dataset in one of the later sections. This small, artificial dataset helps illustrate the relationships between various ratios in a more controlled setting — showing patterns you can count on your fingers.
To be clear: this synthetic data was created entirely from scratch and only loosely mimics general patterns seen in real e-commerce — it has no direct connection to our client's actual data. This is different from my previous article, where I generated synthetic data based on real patterns using Snowflake functionality.
Data Export
The main analysis draws from real data, but that small synthetic dataset serves an important purpose — it helps explain the relationships between various ratios in a way that's easy to grasp. And trust me, having such a micro dataset with clear visuals comes in really handy when explaining complex dependencies to stakeholders 😉
The raw transaction export from Shopify contains everything we need, but we have to arrange it properly for concentration analysis. The export contains all the products for each transaction, but the date appears in only one row per transaction, so we must propagate it to all products while retaining the transaction id. Probably not in the first iteration of the study, but if we want to fine-tune it, we should consider how to handle discounts, returns, and so on. In the case of foreign sales, conduct both a global and a country-specific analysis.
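To make that propagation step concrete, here is a minimal sketch — toy column names rather than the actual Shopify export (the full version, done via a join, is in the code appendix):

library(data.table)

#-- toy export: paid_at is filled only in the first row of each basket
orders_dt <- data.table(
  transaction_id = c(1, 1, 1, 2, 2),
  lineitem_name  = c("A", "B", "C", "A", "D"),
  paid_at        = as.POSIXct(c("2024-08-01 10:00", NA, NA, "2024-08-02 12:30", NA))
)

#-- propagate the single non-NA timestamp to every product in the basket
orders_dt[, paid_at := paid_at[!is.na(paid_at)][1], by = transaction_id]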
We have a product name and an SKU, both of which should adhere to some naming convention and logic when dealing with variants. If we have a master catalogue with all of these descriptions and codes, we're very fortunate. If you have it, use it, but check it against the 'ground truth' of actual transaction data.
Product Variants
In my case, the product names were structured as a base name and a variant separated by a dash. Very simple to use, splitting into main product and variants. Exceptions? Of course, they're always present, especially when dealing with 6 years of highly successful ecommerce data :). For instance, some names (e.g. "All-Purpose") included a dash, while others didn't. Then, some products had variants, while others didn't. So expect some tweaks here, but this is a critical stage.
If you're wondering why we need to exclude variants from concentration analysis, the figure above illustrates it clearly. The values are considerably different, and we would expect radically different results if we analysed concentration with variants.
The analysis is based on transactions, counting the number of products with/without variants in a given month. But if we have a lot of variants, not all of them will be present in one month's transactions. Yes, that's correct — so let us consider a larger time range, one year.
I calculated the number of variants per base product in a calendar year, based on what we have in transactions. The number of variants per base product is divided into several bins. Let's take the year 2024. The plot shows that we have somewhere around 170 base items, with less than half having just one variant (light green bar). However, the other half had more than one variant, and what's noteworthy (and, I believe, non-obvious, unless you work in apparel ecommerce) is that we have products with a remarkably large number of variants. The black bin contains items that come in 100 or more different variants.
If you guessed that they were growing their offering by introducing new products while keeping old ones available, you're correct. But wouldn't it be interesting to know whether the variants stem from heritage or new products? What if we included only products launched in the current year? We can check that by using the date of product introduction rather than transaction dates. Because our only dataset is a transaction dump, the first transaction for each product is taken as the introduction date. And for each product, we take all variants that appeared in transactions, with no time constraints (from product introduction to the most recent record).
Now let's have these two plots side by side for easy comparison. Taking transaction dates, we have more products in each year, and the difference grows — since there are also transactions with products launched previously. No surprises here, as expected. If you were wondering why the data for 2019 differ — good catch. In fact, the shop started operating in 2018, but I removed those few initial months; still, it's their influence that makes the difference in 2019.
Product variants and their impact on revenue are not our focus in this article. But as is often the case in real analysis, there are 'branching' decisions as we progress, even in the initial phase. We haven't even finished data preparation, and it's already getting interesting.
Understanding the product structure is key to conducting meaningful concentration analyses. Now that our data is properly formatted, we can examine actual concentration measurements and what they reveal about ideal portfolio structure. In the following part, we'll look at these measurements and what they mean for e-commerce businesses.
When it comes to measuring concentration, economists and market analysts have done the heavy lifting for us. Over decades of research into markets, competition, and inequality, they've produced powerful analytical methods that have proven useful in a variety of sectors. Rather than inventing novel metrics for e-commerce portfolio analysis, we can use existing, time-tested methods.
Let's see how theoretical frameworks can shed light on practical e-commerce questions.
Herfindahl-Hirschman Index
HHI (Herfindahl-Hirschman Index) is probably the most common way to measure concentration. Regulators use it to check whether a market is becoming too concentrated — they take the percentage market share of each company, square them, and add them up. Simple as that. The result can be anywhere from nearly 0 (many small players) to 10,000 (one company takes it all).
Why use HHI for e-commerce portfolio analysis? The logic is straightforward — instead of companies competing in a market, we have products competing for revenue. The math works exactly the same way — we take each product's share of total revenue, square it, and sum up. A high HHI means revenue depends on a few products, while a low HHI shows revenue is spread across many products. This gives us a single number to track portfolio concentration over time.
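As a quick illustration on a made-up revenue vector (toy numbers, not our client's data), the whole calculation fits in a few lines:

#-- toy revenue per product
revenue <- c(500, 300, 150, 50)

#-- HHI: percentage shares, squared and summed
shares_pct <- 100 * revenue / sum(revenue)
sum(shares_pct^2)  # 50^2 + 30^2 + 15^2 + 5^2 = 3650 - a heavily concentrated portfolio

One small caveat: ineq::Herfindahl(), which I use later, works on fractional shares, so it returns a value on a 0–1 scale; multiply by 10,000 to land on the regulator convention used above.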
Pareto
Who hasn't heard of the Pareto rule? In 1896, Italian economist Vilfredo Pareto observed that 20% of the population held 80% of Italy's land. Since then, this pattern has been found in a variety of fields, including wealth distribution and retail sales.
While popularly known as the "80/20 rule", the Pareto principle is not restricted to those figures. We can use any x-axis criterion (for example, the top 30% of products) to determine the corresponding y value (revenue contribution). The Lorenz curve, formed by linking these points, provides a complete picture of concentration.
The chart above shows how many products we need to achieve a certain revenue share (of the monthly revenue). I took arbitrary cuts at .2, .3, .5, .8, .95, and of course also included 1 — which means the total number of products contributing to 100% of revenue in a given month.
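Sticking with toy numbers, here is how reading one such point works — picking an x criterion (top 30% of products) and computing the corresponding y value (their revenue share); the figures are made up for illustration:

#-- toy revenue per product, sorted descending
revenue <- sort(c(500, 300, 150, 80, 40, 20, 10, 5, 3, 2), decreasing = TRUE)

#-- revenue share of the top 30% of products (here: 3 of 10)
top_n <- ceiling(0.3 * length(revenue))
sum(revenue[1:top_n]) / sum(revenue)  # ~0.86, so the top 30% generate ~86% of revenue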
Lorenz curve
If we sort products by their revenue contribution and chart the line, we get the Lorenz curve. On both axes we have percentages — of products and of their revenue share. In case of a perfectly uniform revenue distribution, we'd have a straight line, while in case of "perfect concentration", a very steep curve, climbing close to 100% of revenue and then turning sharply right to pick up the residual revenue from the remaining products.
It's interesting to see that line, but often it will look quite similar — like a "bent stick". So let us now compare these lines for a few previous months, and also a few years back (sticking to October). The monthly lines are quite similar, and if you're thinking it would be good to have some interactivity on this plot, you're absolutely right. The yearly comparison shows more differences (we still use monthly data, taking October of each year), which is understandable, since these measurements are more distant in time.
So we do see differences between the lines, but can't we quantify them somehow, so we don't rely entirely on visual similarity? Definitely, and there's a ratio for that — the Gini ratio. And by the way, we'll encounter more ratios in subsequent chapters.
Gini Ratio
To translate the shape of the Lorenz curve into a numeric value, we can use the Gini ratio — defined from the two areas the equality line and the Lorenz curve delimit. On the plot below, it's the ratio between the dark and light blue areas.
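Before that plot, a small sketch to make the area definition concrete — it computes the Gini ratio as one minus twice the area under the empirical Lorenz curve (trapezoid rule for the integration) and checks the result against the ineq package; toy numbers again:

library(ineq)

revenue <- c(500, 300, 150, 50)

#-- Lorenz curve points: products sorted ascending, cumulative shares, starting at (0,0)
x <- c(0, seq_along(revenue)) / length(revenue)
y <- c(0, cumsum(sort(revenue))) / sum(revenue)

#-- area under the Lorenz curve via the trapezoid rule; Gini = 1 - 2 * area
area_under <- sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2)
1 - 2 * area_under   # 0.375
ineq::Gini(revenue)  # 0.375 - matches the manual calculation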
Let us then visualize it for two periods — October 2019 and October 2024, the very same periods as on one of the earlier plots.
Once we have a good understanding, backed by visuals, of how the Gini ratio is calculated, let's plot it over the whole period.
I use R for this analysis, so the Gini ratio is readily available (as are other ratios, which I'll show later). The initial data table (x3a_dt) contains revenue per product, per month. The resulting one holds the Gini ratio per month.
#-- calculate Gini ratio, monthly
library(data.table)
library(ineq)
x3a_ineq_dt <- x3a_dt[, .(gini = ineq::ineq(revenue, type = "Gini")), month]
Good we’ve got all these packages for heavy lifting. The mathematics behind will not be tremendous difficult, however our time is valuable.
The plot under reveals the results of calculations.
I haven’t included a smoothing line, with its confidence interval channel, since we do not need measurement factors, however the results of Gini calculation, with its personal errors distribution. To be very strict and exact on math, we’d must calculate the arrogance interval, and based mostly on that plot smoothed line. The outcomes are under.
Since we don’t use straight statistical significance of calculated ratio, this tremendous strict method is slightly bit an overkill. I haven’t executed it whereas charting development line for HHI, nor will do in subsequent plots. However it’s good to pay attention to this nuance.
We have seen two ratios so far — HHI and Gini — and they are far from identical. A Lorenz curve closer to the diagonal indicates a more uniform distribution, which is what we have for October 2019, yet the HHI is higher than for 2024, indicating more concentration in 2019. Maybe I made a mistake in the calculations, or even worse, early on during data preparation? That would be really unfortunate. Or the data is fine, but we're struggling with proper interpretation?
I often have moments of such doubt, especially when moving through an analysis really fast. So how do we cope with that, tightening our grip on the data and our understanding of the dependencies? Remember that whatever analysis you do, there's always a first time. And very often we don't have the luxury of 'leisure' research; more often it's already work for a Client (or a superior, a stakeholder, whoever requested it — even ourselves, if it's our own initiative).
We need a good understanding of how to interpret all these ratios, including the dependencies between them. If you plan to present your results to others, questions here are guaranteed, so better be well prepared. We can work with an existing dataset, or we can generate a small set where it will be easier to catch the dependencies. Let us follow the latter approach.
Let us start by creating a small dataset,
library(data.table)

#-- Create sample revenue data
revenue <- list(
"2021" = rep(15, 10), # 10 values of 15
"2022" = c(rep(100, 5), rep(10, 25)), # 5 values of 100, 25 values of 10
"2023" = rep(25, 50), # 50 values of 25
"2024" = c(rep(100, 30), rep(10, 70)) # 30 values of 100, 70 values of 10
)
combining it into a data.table.
#-- Convert to data.table in one step
x_dt <- data.table(
  year = rep(names(revenue), sapply(revenue, length)),
  revenue = unlist(revenue)
)
A quick overview of the data.
It seems we have what we needed — a simple dataset, but still quite realistic. Now we proceed with the calculations and charts, similar to what we had for the real dataset before.
#-- HHI, Gini
xh_dt <- x_dt[, .(hhi = ineq::Herfindahl(revenue),
gini = ineq::Gini(revenue)), year]
#-- Lorenz
xl_dt <- x_dt[order(-revenue), .(
cum_prod_pct = seq_len(.N)/.N,
cum_rev_pct = cumsum(revenue)/sum(revenue)), year]
And rendering the plots.
These charts help a lot in understanding the ratios, the relations between them, and how they connect to the data. It's always a good idea to have such a micro analysis, for ourselves and for stakeholders — as 'back pocket' slides, or even shared upfront.
A nerdy detail — how to slightly shift a line so it doesn't overlap, and add labels inside the plot? Render the plot, then do manual fine-tuning, expecting a few iterations.
#-- shift the line
xl_dt[year == "2021", `:=` (cum_rev_pct = cum_rev_pct - .01)]
For labelling I use ggrepel, but by default it labels all the points, while we need just one per line. In addition, we have to decide which one, for a good-looking chart.
#-- decide which points to label
labs_key2_dt <- data.table(
  year = c("2021", "2022", "2023", "2024"),
  position = c(4, 5, 25, 30))

#-- set keys
list(xl_dt, labs_key2_dt) |> lapply(setkey, year)

#-- join
label_positions2 <- xl_dt[
  labs_key2_dt, on = .(year),  # join on 'year'
  .SD[get('position')],        # use get('position') to pick the labelled row from labs_key2_dt
  by = .EACHI]                 # for each year
Render the plot.
#-- render plot
plot_22b <- xl_dt |>
  ggplot(aes(cum_prod_pct, cum_rev_pct, color = year, group = year, label = year)) +
  geom_line(linewidth = .2) +
  geom_point(alpha = .8, shape = 21) +
  theme_bw() +
  scale_color_viridis_d(option = "H", begin = 0, end = 1) +
  ggrepel::geom_label_repel(
    data = label_positions2, force = 10,
    box.padding = 2.5, point.padding = .3,
    seed = 3, direction = "x") +
  ... additional styling
I started with HHI, the Lorenz curve, and the accompanying Gini ratios, as they gave the impression to be good beginning factors for focus and inequality measurements. Nonetheless, there are quite a few completely different ratios used to outline distributions, whether or not for inequality or on the whole. It’s unlikely that we’d make use of all of them directly, due to this fact choose the subset that gives essentially the most insights on your particular problem.
With a correct construction of a dataset, it’s fairly simple to calculate them. I’m sharing code snippets, with a number of ratios calculated month-to-month. We use a dataset, we have already got — month-to-month income per product (base merchandise, excluding variants).
Beginning with ratios from the ineq
bundle.
#---- inequality ----
x3_ineq_dt <- x3a_dt[, .(
# Classical inequality/concentration measures
gini = ineq::ineq(revenue, type = "Gini"), # Gini coefficient
hhi = ineq::Herfindahl(revenue), # Herfindahl-Hirschman Index
hhi_f = sum((rev_pct*100)^2), # HHI - formula
atkinson = ineq::ineq(revenue, type = "Atkinson"), # Atkinson index
theil = ineq::ineq(revenue, type = "Theil"), # Theil entropy index
kolm = ineq::ineq(revenue, type = "Kolm"), # Kolm index
rs = ineq::ineq(revenue, type = "RS"), # Ricci-Schutz index
entropy = ineq::entropy(revenue), # Entropy measure
hoover = mean(abs(revenue - mean(revenue)))/(2 * mean(revenue)), # Hoover (Robin Hood) index
Distribution shape, and top/bottom shares and ratios.
  # Distribution shape measures
  cv = sd(revenue)/mean(revenue),            # Coefficient of Variation
  skewness = moments::skewness(revenue),     # Skewness
  kurtosis = moments::kurtosis(revenue),     # Kurtosis

  # Ratio measures
  p90p10 = quantile(revenue, 0.9)/quantile(revenue, 0.1),   # P90/P10 ratio
  p75p25 = quantile(revenue, 0.75)/quantile(revenue, 0.25), # Interquartile ratio
  palma = sum(rev_pct[1:floor(.N*.1)])/sum(rev_pct[floor(.N*.6):(.N)]), # Palma ratio

  # Concentration ratios and shares
  top1_share = max(rev_pct),                                               # Share of top product
  top3_share = sum(head(sort(rev_pct, decreasing = TRUE), 3)),             # CR3
  top5_share = sum(head(sort(rev_pct, decreasing = TRUE), 5)),             # CR5
  top10_share = sum(head(sort(rev_pct, decreasing = TRUE), 10)),           # CR10
  top20_share = sum(head(sort(rev_pct, decreasing = TRUE), floor(.N*.2))), # Top 20% share
  mid40_share = sum(sort(rev_pct, decreasing = TRUE)[floor(.N*.2):floor(.N*.6)]), # Middle 40% share
  bottom40_share = sum(head(sort(rev_pct), floor(.N*.4))), # Bottom 40% share (ascending sort, head = smallest)
  bottom20_share = sum(head(sort(rev_pct), floor(.N*.2))), # Bottom 20% share
Basic statistics, quantiles.
  # Basic statistics
  unique_products = .N,              # Number of unique products
  revenue_total = sum(revenue),      # Total revenue
  mean_revenue = mean(revenue),      # Mean revenue per product
  median_revenue = median(revenue),  # Median revenue
  revenue_sd = sd(revenue),          # Revenue standard deviation

  # Quantile values
  q20 = quantile(revenue, 0.2),      # 20th percentile
  q40 = quantile(revenue, 0.4),      # 40th percentile
  q60 = quantile(revenue, 0.6),      # 60th percentile
  q80 = quantile(revenue, 0.8),      # 80th percentile
Count measures.
  # Count measures
  above_mean_n = sum(revenue > mean(revenue)),              # Number of products above the mean
  above_2mean_n = sum(revenue > 2*mean(revenue)),           # Number of products above 2x mean
  top_quartile_n = sum(revenue > quantile(revenue, 0.75)),  # Number of products in the top quartile
  zero_revenue_n = sum(revenue == 0),                       # Number of products with zero revenue
  within_1sd_n = sum(abs(revenue - mean(revenue)) <= sd(revenue)),    # Products within 1 SD
  within_2sd_n = sum(abs(revenue - mean(revenue)) <= 2*sd(revenue)),  # Products within 2 SD
Revenue above (or below) a threshold.
  # Revenue above the threshold
  rev_above_mean = sum(revenue[revenue > mean(revenue)])  # Revenue from products above the mean
), month]
The resulting table has 40 columns and 72 rows (months).
As mentioned earlier, it's hard to imagine anyone working with 40 ratios at once, so I'm rather showing a method for calculating them; pick the relevant ones. As always, it's good to visualize them and see how they relate to each other.
We can calculate a correlation matrix between all the ratios, or a chosen subset.
# Choose key metrics for a clearer visualization
key_metrics <- c("gini", "hhi", "atkinson", "theil", "entropy", "hoover",
"top1_share", "top3_share", "top5_share", "unique_products")cor_matrix <- x3_ineq_dt[, .SD, .SDcols = key_metrics] |> cor()
Change the column names to friendlier ones.
# Make variable names more readable
pretty_names <- c(
  "Gini", "HHI", "Atkinson", "Theil", "Entropy", "Hoover",
  "Top 1%", "Top 3%", "Top 5%", "Products"
)
colnames(cor_matrix) <- rownames(cor_matrix) <- pretty_names
And render the plot.
corrplot::corrplot(cor_matrix,
sort = "higher",
technique = "coloration",
tl.col = "black",
tl.srt = 45,
diag = F,
order = "AOE")
Then we can plot some interesting pairs. Of course, some of them are positively or negatively correlated by definition, while in other cases it's not that obvious.
We started the analysis with ratios and the Lorenz curve as a top-down overview. It's a good start, but there are two problems — the ratios stay within a relatively broad range while the business is doing okay, and there's hardly any connection to actionable insights. Even if we find that a ratio is on the edge, or outside the safe range, it's unclear what we should do. And directions like "decrease concentration" are rather ambiguous.
E-commerce talks and breathes products, so to make the analysis relatable, we need to reference particular products. People would also like to understand which products constitute the core 50% or 80% of revenue and, equally important, whether those products stay consistently among the top contributors.
Let us take one month, August 2024, and see which products contributed 50% of the revenue in that month. Then we check the revenue from those exact products in other months. There are 5 products generating (at least) 50% of the revenue in August.
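The selection logic itself is compact. A minimal sketch on toy numbers (on the real data, the same filter runs on x3a_dt for the chosen month): keep a product as long as the cumulative revenue share before it is still below the threshold.

library(data.table)

#-- toy month: revenue per product
m_dt <- data.table(product = paste0("P", 1:8),
                   revenue = c(400, 250, 180, 90, 40, 20, 12, 8))

#-- sort descending, compute the cumulative revenue share
m_dt <- m_dt[order(-revenue)][, cum_rev_pct := cumsum(revenue)/sum(revenue)]

#-- keep a product if the cumulative share *before* it is still under 50%
m_dt[shift(cum_rev_pct, fill = 0) < .5]  # P1 and P2, together covering (at least) 50%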
We can also render a more visually appealing plot with a streamgraph. Both plots show the very same dataset, but they complement each other nicely — bar plots for precision, the streamgraph for a story.
The purple line indicates the selected month. If you feel an "itch" to shift that line, like on an old-school radio dial, you're absolutely right — this should be an interactive chart, and actually it is, including a slider for the revenue share percentage (we produced it for a Client).
So what if we shift that purple 'tuning line' a little backwards, maybe to 2020? The logic in the data preparation is very similar — get the products contributing to a certain revenue share threshold, and check the revenue from those products in other months.
With interactivity on two parameters — revenue contribution share and the date — one can learn a lot about the business, and that is exactly the point of these charts. One can look from different angles:
- concentration: how many products we need for a certain revenue threshold,
- the products themselves: do they stay in a certain revenue contribution bin, or do they change, and why? Is it seasonality, a valid replacement, a lost supplier, or something else?
- time window: whether we look at one month or a whole year,
- seasonality: comparing a similar time of year with previous periods.
What the Data Tells Us
Our 6-year dataset shows the evolution of an e-commerce business from high concentration to balanced growth. Here are the key patterns and lessons:
With 6 years of data, I had a unique chance to watch concentration metrics evolve as the business grew. Starting with only a handful of products, I saw exactly what you'd expect — sky-high concentration. But as new products entered the mix, things got more interesting. The business found its rhythm with a dozen or so top performers, and the HHI settled into a comfortable 700–800 range.
Here's something fascinating I discovered: concentration and inequality might sound like twins, but they're more like distant cousins. I noticed this while comparing HHI against Lorenz curves and their Gini ratios. Trust me, you'll want to get comfortable with the math before explaining these patterns to stakeholders — they can smell uncertainty from a mile away.
Want to really understand these metrics? Do what I did: create a dummy dataset so simple it's almost embarrassing. I'm talking basic patterns that a fifth-grader could grasp. Sounds like overkill? Maybe, but it saved me countless hours of head-scratching and misinterpretation. Keep those examples in your back pocket — or better yet, share them upfront. Nothing builds confidence like showing you've done your homework.
Look, calculating these ratios isn't rocket science. The real magic happens when you dig into how each product contributes to your revenue. That's why I added the "show me the money" section — I don't believe in quick fixes or magic formulas. It's about rolling up your sleeves and understanding how each product really behaves.
As you've probably noticed yourself, those streamgraphs I showed you are practically begging for interactivity. And boy, does that add value! Once you've got your keys and joins sorted out, it's not even that complicated. Give your users an interactive tool, and suddenly you're not drowning in one-off questions anymore — they're discovering insights themselves.
Here's a pro tip: use this concentration analysis as your foot in the door with stakeholders. Show your product teams that streamgraph, and I guarantee their eyes will light up. When they start asking for interactive versions, you've got them hooked. The best part? They'll think it was their idea all along. That's how you get real adoption — by letting them discover the value themselves.
Data Engineering Takeaways
While we often roughly know what to expect in a dataset, it's almost guaranteed that there will be some nuances, exceptions, or maybe even surprises. It's good to spend some time reviewing datasets, using dedicated functions (like str or glimpse in R), looking for empty fields and outliers, but also simply scrolling through to understand the data. I like comparisons, and in this case I'd compare it to smelling the fish at the market before starting to prepare sushi 🙂
Then, if we work with a raw data export, there will quite likely be numerous columns in the data dump; after all, if we click 'export all', wouldn't we expect exactly that? For most analyses we will need a subset of those columns, so it's good to trim and keep only what we need. I assume we work with a script, so if it turns out we need more, no problem — just add the missing column and rerun that chunk.
In the dataset dump there was a timestamp in one row per transaction, while we needed it per product. Hence some light data wrangling to propagate those timestamps to all the products.
After cleaning the dataset, it's important to consider the context of the analysis, including the questions to be answered and the required changes to the data. This "contextual cleaning/wrangling" is key, as it determines whether the analysis succeeds or fails. In our scenario, the goal was to analyse product concentration, so filtering out variants (size, color, etc.) was essential. Had we skipped that, the outcome would have been radically different.
Very often we can expect some "traps", where initially it seems we can apply a simple approach, while actually we should add a bit of sophistication. For example — the Lorenz curve, where we need to calculate how many products it takes to reach a certain revenue threshold. This is where I use rolling joins, which fit here perfectly.
The core logic to produce the streamgraphs is to find the products which constitute a certain revenue share in a given month, then "freeze" them and get their revenue in other months. The toolset I used was adding an extra column with a product number after sorting per month, and then playing with keys and joins.
An important element of this analysis was adding interactivity, allowing users to play with some parameters. That raises the bar, as we need all these operations to be performed lightning fast. The ingredients we need are the right data structure, extra columns, and proper keys and joins. Prepare as much as possible, precalculating in a data warehouse, so the dashboarding tool is not overloaded. Take caching into account.
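For the filtering part, keyed subsetting is the workhorse. A minimal sketch on a hypothetical table mirroring the month/product structure used here — once the key is set, lookups use a binary search instead of a full vector scan, which is what keeps the interactive version snappy:

library(data.table)

#-- hypothetical monthly revenue-per-product table (72 months x 100 products)
dash_dt <- data.table(
  month   = rep(seq(as.Date("2019-01-01"), by = "month", length.out = 72), each = 100),
  product = paste0("P", 1:100),
  revenue = runif(7200, 10, 1000))

#-- key on the columns the dashboard filters by
setkey(dash_dt, month, product)

#-- keyed lookup: binary search on the index, not a scan of all rows
dash_dt[.(as.Date("2024-08-01"))]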
How to Start?
Strike a balance between delivering what stakeholders request and exploring potentially valuable insights they haven't asked for yet. The analysis I presented follows this pattern — getting the initial concentration ratios is easy, while building an interactive streamgraph optimized for lightning-fast operation requires significant effort.
Start small and engage others. Share basic findings, discuss what you can learn together, and only then proceed with more labor-intensive analysis, once you've secured genuine interest. And always keep a solid grip on your raw data — it's invaluable for answering those inevitable ad-hoc questions quickly.
Building a prototype before full production allows you to validate interest and gather feedback without devoting too much time. In my case, such simple concentration ratios sparked debates that eventually led to the more advanced interactive studies stakeholders rely on today.
I'll show you how I prepared the data at each step of this analysis. Since I used R, I'll include the actual code snippets — they'll help you get started faster, even if you're working in a different language. This is the code I used for the study, though you'll probably need to adapt it to your specific needs rather than just copying it over. I decided to keep the code separate from the main analysis, to make it more streamlined and readable for both technical and business users.
While I'm presenting an analysis based on a Shopify export, there is no limitation to a particular platform; we just need transaction data.
Shopify export
Let's start with getting our data from Shopify. The raw export needs some work before we can dive into concentration analysis — here's what I had to deal with first.
We start with an export of the raw transaction data from Shopify. It might take some time; when it's ready, we get an e-mail with links to download the files.
#-- 0. libs
pacman::p_load(data.table)

#-- 1.1 load data; the csv files are what we get as a full export from Shopify
xs1_dt <- fread(file = "shopify_raw/orders_export_1.csv")
xs2_dt <- fread(file = "shopify_raw/orders_export_2.csv")
xs3_dt <- fread(file = "shopify_raw/orders_export_3.csv")
As soon as we’ve got information, we have to mix these recordsdata into one dataset, trim columns and carry out some cleaning.
#-- 1.2 check all columns, limit them to the essential ones (for this analysis) and bind into one data.table
xs1_dt |> colnames()
# there are 79 columns in the full export,
# so we select a subset, relevant for this analysis
sel_cols <- c("Name", "Email", "Paid at", "Fulfillment Status", "Accepts Marketing", "Currency", "Subtotal",
              "Lineitem quantity", "Lineitem name", "Lineitem price", "Lineitem sku", "Discount Amount",
              "Billing Province", "Billing Country")

#-- combine into one data.table, with a subset of columns
xs_dt <- data.table::rbindlist(l = list(xs1_dt, xs2_dt, xs3_dt),
                               use.names = T, fill = T, idcol = T) %>% .[, ..sel_cols]
Some data preparation.
#-- 2. data prep
#-- 2.1 replace spaces in column names, for easier handling
sel_cols_new <- sel_cols |> stringr::str_replace(pattern = " ", replacement = "_")
setnames(xs_dt, old = sel_cols, new = sel_cols_new)

#-- 2.2 transaction id as integer
xs_dt[, `:=` (Transaction_id = stringr::str_remove(Name, pattern = "#") |> as.integer())]
Anonymize the emails, as we don't need (or want) to deal with real emails during the analysis.
#-- 2.3 anonymize e-mail
new_cols <- c("Email_hash")
xs_dt[, (new_cols) := .(digest::digest(Email, algo = "md5")), .I]
Change column types; this depends on personal preference.
#-- 2.4 change Accepts_Marketing to logical column
xs_dt[, `:=` (Accepts_Marketing_lgcl = fcase(
Accepts_Marketing == "yes", TRUE,
Accepts_Marketing == "no", FALSE,
default = NA))]
Now we focus on the transactions dataset. In the export files, the transaction number and timestamp appear in only one row per basket. We need to get those timestamps and propagate them to all the items.
#-- 3 transactions dataset
#-- 3.1 subset transactions
#-- limit columns to those essential for the transaction only
trans_sel_cols <- c("Transaction_id", "Email_hash", "Paid_at",
                    "Subtotal", "Currency", "Billing_Province", "Billing_Country")

#-- get the transactions table based on the requirement of a non-null payment; the payment (date, amount) is not on every product row - it appears only once per basket
xst_dt <- xs_dt[!is.na(Paid_at) & !is.na(Transaction_id), ..trans_sel_cols]
#-- date columns
xst_dt[, `:=` (date = as.Date(`Paid_at`))]
xst_dt[, `:=` (month = lubridate::floor_date(date, unit = "months"))]
Some extra information — derivatives, as I call them.
#-- 3.2 is the user returning? their n-th transaction
setkey(xst_dt, Paid_at)
xst_dt[, `:=` (tr_n = 1)][, `:=` (tr_n = cumsum(tr_n)), Email_hash]
xst_dt[, `:=` (returning = fcase(tr_n == 1, FALSE, default = TRUE))]
Do we’ve got any NA’s within the dataset?
xst_dt[!complete.cases(xst_dt), ]
The products dataset.
#-- 4 products dataset
#-- 4.1 subset of columns
sel_prod_cols <- c("Transaction_id", "Lineitem_quantity", "Lineitem_name",
"Lineitem_price", "Lineitem_sku", "Discount_Amount")
Now we join these two datasets, to have the transaction characteristics (trans_sel_cols) for all the products.
#-- 5 join the two datasets
list(xs_dt, xst_dt) |> lapply(setkey, Transaction_id)
x3_dt <- xs_dt[, ..sel_prod_cols][xst_dt]
Let’s examine which columns we’ve got in x3_dt dataset.
And additionally it is a second to examine the dataset.
x3_dt |> str()
x3_dt |> dplyr::glimpse()
x3_dt |> head()
Time for data cleaning. First up: splitting Lineitem_name into base products and their variants. In theory, these are separated by a dash ("-"). Simple, right? Not quite — some product names, like 'All-Purpose', contain dashes as part of their name. So we need to handle those special cases first: temporarily replacing the problematic dashes, doing the split, and then restoring the original product names.
#-- 6. cleaning, aggregation on product names
#-- 6.1 split product name into base and variants
#-- split product names into core and variants
product_cols <- c("base_product", "variants")

#-- with special treatment for 'All-Purpose'
x3_dt[stringr::str_detect(string = Lineitem_name, pattern = "All-Purpose"),
      (product_cols) := {
        tmp = stringr::str_replace(Lineitem_name, "All-Purpose", "AllPurpose")
        s = stringr::str_split_fixed(tmp, pattern = "[-/]", n = 2)
        s = stringr::str_replace(s, "AllPurpose", "All-Purpose")
        .(s[1], s[2])
      }, .I]
It’s good to make validation after every step.
# validation
x3_dt[stringr::str_detect(
string = Lineitem_name, pattern = "All-Purpose"), .SD,
.SDcols = c("Transaction_id", "Lineitem_name", product_cols)]
We keep moving with the data cleaning — the exact steps depend, of course, on the particular dataset, but I'm sharing my flow as an example.
#-- two scenarios, to handle `(32-ounce)` in the product name; we do not want that hyphen to cut the name
x3_dt[stringr::str_detect(string = `Lineitem_name`, pattern = "ounce", negate = T) &
stringr::str_detect(string = `Lineitem_name`, pattern = "All-Purpose", negate = T),
(product_cols) := {
s = stringr::str_split_fixed(string = `Lineitem_name`, pattern = "[-/]", n = 2); .(s[1], s[2])
      }, .I]

x3_dt[stringr::str_detect(string = `Lineitem_name`, pattern = "ounce", negate = F) &
stringr::str_detect(string = `Lineitem_name`, pattern = "All-Purpose", negate = T),
(product_cols) := {
s = stringr::str_split_fixed(string = `Lineitem_name`, pattern = ") - ", n = 2); .(paste0(s[1], ")"), s[2])
}, .I]
#-- small patch for exceptions
x3_dt[stringr::str_detect(string = base_product, pattern = "))$", negate = F),
base_product := stringr::str_replace(string = base_product, pattern = "))$", replacement = ")")]
Validation.
# validation
x3_dt[stringr::str_detect(string = `Lineitem_name`, pattern = "ounce")
][, .SD, .SDcols = c(eval(sel_cols[6]), product_cols)
  ][, .N, c(eval(sel_cols[6]), product_cols)]

x3_dt[stringr::str_detect(string = `Lineitem_name`, pattern = "All")
][, .SD, .SDcols = c(eval(sel_cols[6]), product_cols)
][, .N, c(eval(sel_cols[6]), product_cols)]
x3_dt[stringr::str_detect(string = base_product, pattern = "All")]
We use eval(sel_cols[6]) to get the name of the column stored in sel_cols[6], which is Currency.
We also need to deal with NA's, but with an understanding of the dataset — where we may legitimately have NA's, and where they are not supposed to be, indicating an issue. In some columns, like `Discount_Amount`, we have values (actual discounts), zeros, but also occasionally NA's. Checking the final price, we conclude they are zeros.
#-- deal with NA's - replace them with 0
sel_na_cols <- c("Discount_Amount")
x3_dt[, (sel_na_cols) := lapply(.SD, fcoalesce, 0), .SDcols = sel_na_cols]
For consistency and convenience, change all column names to lowercase.
setnames(x3_dt, tolower(names(x3_dt)))
And verification.
Of course, review the dataset, with some test aggregations, and also by simply printing it out.
Save the dataset as both Rds (the native R format) and csv.
x3_dt |> fwrite(file = "data/products.csv")
x3_dt |> saveRDS(file = "data/x3_dt.Rds")
After completing the steps above, we should have a clean dataset for further analysis. The code should serve as a guideline, but it can also be used directly if you work in R.
Variants
As a first glimpse, we will check the number of products per month, both base_product only and including all variants.
As a small cleanup, I take only full months.
month_last <- x3_dt[, max(month)] - months(1)
Then we count the monthly numbers, storing them in temporary tables, which are then joined.
x3_a_dt <- x3_dt[month <= month_last, .N, .(base_product, month)
  ][, .(base_products = .N), keyby = month]

x3_b_dt <- x3_dt[month <= month_last, .N, .(lineitem_name, month)
  ][, .(products = .N), keyby = month]
x3_c_dt <- x3_a_dt[x3_b_dt]
Some data wrangling.
#-- names, as we want them on the plot
setnames(x3_c_dt, old = c("base_products", "products"), new = c("base", "all, with variants"))

#-- long form
x3_d_dt <- x3_c_dt[, melt.data.table(.SD, id.vars = "month", variable.name = "Products")]

#-- reverse factor levels, so they appear on the plot in the proper order
x3_d_dt[, `:=` (Products = forcats::fct_rev(Products))]
We’re able to plot the dataset.
plot_01_w <- x3_d_dt |>
  ggplot(aes(month, value, color = Products, fill = Products)) +
  geom_line(show.legend = FALSE) +
  geom_area(alpha = .8, position = position_dodge()) +
  theme_bw() +
  scale_fill_viridis_d(direction = -1, option = "G", begin = 0.3, end = .7) +
  scale_color_viridis_d(direction = -1, option = "G", begin = 0.3, end = .7) +
  labs(x = "", y = "Products",
       title = "Unique products, monthly", subtitle = "Impact of aggregation") +
  theme(... additional styling)
The next plot shows the number of variants grouped into bins. This gives us a chance to talk about chaining operations in R, particularly with the data.table package. In data.table, we can chain operations by opening a new bracket right after closing one — resulting in the ][ syntax. It creates a compact, readable chain that's still easy to debug since you can execute it piece by piece. I prefer succinct code, but that's just my style — use whatever approach works best for you. We can write code in one line, or multi-line, with logical steps.
On one of the plots we look at the date when each product was first seen. To get that date, we set a key on date, and then take the first occurrence date[1] per base_product.
#-- variants per year and product, with the date when it was 1st seen
x3c_dt <- x3_dt[, .N, .(base_product, variants)
                ][, .(variants = .N), base_product][order(-variants)]

x3_dt |> setkey(date)
x3d_dt <- x3_dt[, .(date = date[1]), base_product]

list(x3c_dt, x3d_dt) |> lapply(setkey, base_product)

x3e_dt <- x3c_dt[x3d_dt][order(variants)
][, `:=` (year = year(date) |> as.factor())][year != 2018
][, .(products = .N), .(variants, year)][order(-variants)
][, `:=` (
variant_bin = cut(
variants,
breaks = c(0, 1, 2, 5, 10, 20, 100, Inf),
include.lowest = TRUE,
right = FALSE
))
][, .(total_products = sum(products)), .(variant_bin, year)
][order(variant_bin)
][, `:=` (year_group = fcase(
year %in% c(2019, 2020, 2021), "2019-2021",
year %in% c(2022, 2023, 2024), "2022-2024"
))
][, `:=` (variant_bin = forcats::fct_rev(variant_bin))]
The resulting table is exactly what we need for charting.
The second plot uses the transaction date, so the data wrangling is analogous, but without the date[1] step.
If we want to combine a couple of plots, we can produce them separately and combine them using, for example, ggpubr::ggarrange(), or we can combine the tables into one dataset and then use the faceting functionality. The former works when the plots are of a completely different nature, while the latter is useful when we can naturally have a combined dataset.
For example, a few more lines from my script.
x3h_dt <- data.table::rbindlist(
  l = list(
introduction = x3e_dt[, `:=` (year = as.numeric(as.character(year)))],
transaction = x3g_dt),
use.names = T, fill = T, idcol = T)
And the plot code.
plot_04_w <- x3h_dt |>
  ggplot(aes(year, total_products,
             color = variant_bin, fill = variant_bin, group = .id)) +
  geom_col(alpha = .8) +
  theme_bw() +
  scale_fill_viridis_d(direction = 1, option = "G") +
  scale_color_viridis_d(direction = 1, option = "G") +
  labs(x = "", y = "Base Products",
       title = "Products, and their variants",
       subtitle = "Yearly",
       fill = "Variants",
       color = "Variants") +
  facet_wrap(".id", ncol = 2) +
  theme(... other styling options)
Faceting has a big advantage: we operate on one table, which helps a lot in assuring data consistency.
Pareto
The essence of the Pareto calculation is to find how many products we need to achieve a certain revenue share. We prepare the dataset in a couple of steps.
#-- calculate quantity and revenue per base_product, monthly
x3a_dt <- x3_dt[, {
items = sum(lineitem_quantity, na.rm = T);
revenue = sum(lineitem_quantity * lineitem_price);
.(items, revenue)}, keyby = .(month, base_product)
  ][, `:=` (i = 1)][order(-revenue)][revenue > 0, ]

#-- calculate revenue share percentage, and the cumulative share
x3a_dt[, `:=` (
rev_pct = revenue / sum(revenue),
cum_rev_pct = cumsum(revenue) / sum(revenue), prod_n = cumsum(i)), month]
In case we’d must masks actual product names, allow us to create a brand new variable.
#-- product name masking
x3a_dt[, masked_name := paste("Product", .GRP), by = base_product]
And a dataset printout, with a subset of columns.
And filtered for one month, showing a few lines from the top and from the bottom.
The essential column is cum_rev_pct, which indicates the cumulative share of revenue from products 1-n. We need to find which prod_n covers each revenue share threshold, as listed in the pct_thresholds_dt table.
So we're ready for the actual Pareto calculation. The code below, with comments.
#-- pareto
#-- set share thresholds
pct_thresholds_dt <- data.table(cum_rev_pct = c(0, .2, .3, .5, .8, .95, 1))

#-- set the key for the join
list(x3a_dt, pct_thresholds_dt) |> lapply(setkey, cum_rev_pct)
#-- subset columns (optional)
sel_cols <- c("month", "cum_rev_pct", "prod_n")
#-- perform a rolling join - the crucial step!
x3b_dt <- x3a_dt[, .SD[pct_thresholds_dt, roll = -Inf], month][, ..sel_cols]
Why do we perform a rolling join? We need to find the first cum_rev_pct that covers each threshold.
We need 2 products for 20% of revenue, 4 products for 30%, and so on. And to reach 100% of revenue, we of course need contributions from all 72 products.
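Here is a self-contained toy example of that roll = -Inf behavior (hypothetical values): each threshold gets matched to the next cumulative share at or above it.

library(data.table)

#-- cumulative revenue share after 1, 2, 3, 4 products (toy values)
prod_dt <- data.table(cum_rev_pct = c(.42, .61, .78, 1), prod_n = 1:4, key = "cum_rev_pct")
thresholds_dt <- data.table(cum_rev_pct = c(.2, .5, .8), key = "cum_rev_pct")

#-- roll = -Inf rolls backwards (NOCB): each threshold picks the next observation >= itself
prod_dt[thresholds_dt, roll = -Inf]
#-- 0.2 -> prod_n 1 (the first product already covers 20%)
#-- 0.5 -> prod_n 2 (two products needed to pass 50%)
#-- 0.8 -> prod_n 4 (0.78 is not enough, so we roll on to the fourth)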
And a plot.
#-- data prep
x3b1_dt <- x3b_dt[month < month_max,
  .(month, cum_rev_pct = as.factor(cum_rev_pct) |> forcats::fct_rev(), prod_n)]

#-- charting
plot_07_w <- x3b1_dt |>
  ggplot(aes(month, prod_n, color = cum_rev_pct, fill = cum_rev_pct)) +
  geom_line() +
  theme_bw() +
  geom_area(alpha = .2, show.legend = F, position = position_dodge(width = 0)) +
  scale_fill_viridis_d(direction = -1, option = "G", begin = 0.2, end = .9) +
  scale_color_viridis_d(direction = -1, option = "G", begin = 0.2, end = .9,
    labels = function(x) scales::percent(as.numeric(as.character(x)))  # convert the factor to numeric first
  ) +
  ... other styling options ...
Lorenz curve
To plot the Lorenz curve, we need to sort products by their contribution to total revenue, and normalize both the number of products and the revenue.
Before the main code, a handy way to pick the n-th month from the dataset, counting from the beginning or from the end.
month_sel <- x3a_dt$month |> unique() |> sort(decreasing = T) |> dplyr::nth(2)
And the code.
xl_oct24_dt <- x3a_dt[month == month_sel,
][order(-revenue), .(
cum_prod_pct = seq_len(.N)/.N,
cum_rev_pct = cumsum(revenue)/sum(revenue))]
To chart separate lines per time period, we need to adjust accordingly.
#-- Lorenz curve, monthly aggregation
xl_dt <- x3a_dt[order(-revenue), .(
cum_prod_pct = seq_len(.N)/.N,
cum_rev_pct = cumsum(revenue)/sum(revenue)), month]
The xl_dt dataset is ready for charting.
Indices, ratios
The code is straightforward here, assuming sufficient prior data preparation. The logic and some snippets are in the main body of this article.
Streamgraph
The streamgraph shown earlier is an example of a chart that may seem difficult to render, especially when interactivity is required. One of the reasons I included it in this post is to show how we can simplify such tasks with keys, joins, and data.table syntax specifically. Using keys, we can achieve very efficient filtering for interactivity. Once we have a handle on the data, we're pretty much done; all that remains is fine-tuning the plot settings.
We start with the thresholds table.
#-- set share thresholds
pct_thresholds_dt <- data.table(cum_rev_pct = c(0, .2, .3, .5, .8, .95, 1))
Since we want the joins performed monthly, it's good to create a data subset covering one month, to test the logic before extending it to the full dataset.
#-- check logic for one month
month_sel <- as.Date("2020-01-01")
sel_a_cols <- c("month", "rev_pct", "cum_rev_pct", "prod_n", "masked_name")
x3a1_dt <- x3a_dt[month == month_sel, ..sel_a_cols]
We have 23 products in January 2020, sorted by revenue share, and we also have the cumulative revenue, reaching 100% with the last, 23rd product.
Now we need to create an intermediate table telling us how many products we need to achieve each revenue threshold.
#-- set the key for the join
list(x3a1_dt, pct_thresholds_dt) |> lapply(setkey, cum_rev_pct)

#-- perform a rolling join - the crucial step!
sel_b_cols <- c("month", "cum_rev_pct", "prod_n")
x3b1_dt <- x3a1_dt[, .SD[pct_thresholds_dt, roll = -Inf], month][, ..sel_b_cols]
Because we work with a one-month data subset (picking a month with not that many products), it is very easy to check the outcome — comparing the x3a1_dt and x3b1_dt tables.
And now we need to get the product names for the selected threshold.
#-- get the products
#-- set keys
list(x3a1_dt, x3b1_dt) |> lapply(setkey, month, prod_n)

#-- specify the threshold
x3b1_dt[cum_rev_pct == .8][x3a1_dt, roll = -Inf, nomatch = 0]
#-- or, equivalently, specify the table's row
x3b1_dt[5, ][x3a1_dt, roll = -Inf, nomatch = 0]
To reach 80% of revenue, we need 7 products, and from the join above we get their names.
I think you can already see why we use rolling joins, and can't use simple < or > logic.
Now we need to extend the logic to all months.
#-- extend to all months
#-- set the key for the join
list(x3a_dt, pct_thresholds_dt) |> lapply(setkey, cum_rev_pct)
#-- subset columns (optional)
sel_cols <- c("month", "cum_rev_pct", "prod_n")
#-- perform a rolling join - the crucial step!
x3b_dt <- x3a_dt[, .SD[pct_thresholds_dt, roll = -Inf], month][, ..sel_cols]
Get the products.
#-- set keys, join
list(x3a_dt, x3b_dt) |> lapply(setkey, month, prod_n)
x3b6_dt <- x3b_dt[cum_rev_pct == .8][x3a_dt, roll = -Inf, nomatch = 0][, ..sel_a_cols]
And verify, for the same month as in the test data subset.
If we want to freeze the products for a certain month, and see the revenue from them over the whole period (which the second streamgraph shows), we can set a key on the product name and perform a join.
#-- freeze the products
x3b6_key_dt <- x3b6_dt[month == month_sel, .(masked_name)]
list(x3a_dt, x3b6_key_dt) |> lapply(setkey, masked_name)

sel_b2_cols <- c("month", "revenue", "masked_name")
x3a6_dt <- x3a_dt[x3b6_key_dt][, ..sel_b2_cols]
And we get exactly what we needed.
Using joins, including rolling ones, and deciding what can be precalculated in a warehouse versus what's left for dynamic filtering in a dashboard does require some practice, but it definitely pays off.