Preventing Fraud Fairly: Improve Your AI Toolkit 🧰 | by César Ortega Quintero | Jan, 2025

Understanding the vehicle insurance fraud detection dataset

For this exercise, we will work with a publicly available vehicle insurance fraud detection dataset [31], which contains 15,420 observations and 33 features. The target variable, FraudFound_P, labels whether a claim is fraudulent, with 933 observations (5.98%) identified as fraud-related. The dataset includes a range of potential predictors (a short loading sketch follows the list below), such as:

  • Demographic and policy-related features: gender, age, marital status, vehicle category, policy type, policy number, driver rating.
  • Claim-related features: day of week claimed, month claimed, days policy claim, witness present.
  • Policy-related features: deductible, make, vehicle price, number of supplements.
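As a quick orientation, here is a minimal loading sketch that confirms the figures above; the file name fraud_oracle.csv is an assumption, so point it at your local copy of the dataset.

import pandas as pd

# Load the dataset (the file name is an assumption; adjust to your local copy)
df = pd.read_csv("fraud_oracle.csv")

# Confirm the figures quoted above: 15,420 observations, roughly 5.98% fraud
print(len(df))
print(df["FraudFound_P"].value_counts(normalize=True))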

Among these, gender and age are considered protected attributes, which means we need to pay special attention to how they may influence the model's predictions. Understanding the dataset's structure and identifying potential sources of bias are essential.

The business challenge

The goal of this exercise is to build a machine learning model to identify potentially fraudulent motor insurance claims. Fraud detection can significantly improve claim handling efficiency, reduce investigation costs, and minimize losses paid out on fraudulent claims. However, the dataset presents a significant challenge due to its high class imbalance, with only 5.98% of the claims labeled as fraudulent.

In the context of fraud detection, false negatives (i.e., missed fraudulent claims) are particularly costly, as they lead to financial losses and investigation delays. To address this, we will prioritize the recall metric for identifying the positive class (FraudFound_P = 1). Recall measures the ability of the model to capture fraudulent claims, even at the expense of precision, ensuring that as many fraudulent claims as possible are identified and handled in a timely fashion by analysts on the fraud team.
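To make the metric choice concrete, here is a toy illustration; the label arrays are hypothetical, not drawn from the dataset.

from sklearn.metrics import recall_score

# Toy illustration: recall on the positive class is TP / (TP + FN),
# so it directly penalizes missed fraud (false negatives)
y_true_toy = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred_toy = [1, 1, 0, 1, 1, 0, 0, 0]
print(recall_score(y_true_toy, y_pred_toy))  # 2 of 3 true frauds caught -> ~0.67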

Baseline model

Here, we will build the initial model for fraud detection using a set of predictors that include demographic and policy-related features, with an emphasis on the gender attribute. For the purposes of this exercise, the gender feature has explicitly been included as a predictor to deliberately introduce bias and force its appearance in the model, given that excluding it would result in a baseline model that is not biased. Moreover, in a real-world setting with a more comprehensive dataset, there are usually indirect proxies that may leak bias into the model. In practice, it is common for models to inadvertently use such proxies, leading to undesirable biased predictions, even when the sensitive attributes themselves are not directly included.

In addition, we excluded age as a predictor, aligning with the individual fairness approach known as "fairness through unawareness," in which we intentionally remove sensitive attributes that could lead to discriminatory outcomes.
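A minimal sketch of this baseline setup is shown below; it assumes X_train/X_test are the encoded predictor matrices (including the gender indicators) and y_train/y_test the FraudFound_P labels, and applies the custom 0.1 threshold discussed next.

from sklearn.metrics import classification_report
from xgboost import XGBClassifier

# Baseline sketch: fit XGBoost on the encoded predictors, then score claims
model = XGBClassifier(use_label_encoder=False, eval_metric='logloss')
model.fit(X_train, y_train)

# Apply the custom 0.1 threshold (instead of the default 0.5) to favor recall
y_prob = model.predict_proba(X_test)[:, 1]
y_pred = (y_prob >= 0.1).astype(int)
print(classification_report(y_test, y_pred))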

In the following image, we present the Classification Results, Distribution of Predicted Probabilities, and Lift Chart for the baseline model using the XGBoost classifier with a custom threshold of 0.1 (y_prob >= threshold) to identify predicted positive fraudulent claims. This model will serve as a starting point for measuring and mitigating bias, which we will explore in later sections.

Baseline model results. Image compiled by the author

Based on the classification results and visualizations presented above, we can see that the model reaches a Recall of 86%, which is in line with our business requirements. Since our primary goal is to identify as many fraudulent claims as possible, high recall is crucial. The model correctly identifies most of the fraudulent claims, although the precision for fraudulent claims (17%) is lower. This trade-off is acceptable given that high recall ensures that the fraud investigation team can handle most fraudulent claims, minimizing potential financial losses.

The distribution of predicted probabilities shows a significant concentration of predictions near zero, indicating that the model classifies many claims as non-fraudulent. This is expected given the highly imbalanced nature of the dataset (fraudulent claims represent only 5.98% of total claims). Moreover, the Lift Chart highlights that focusing on the top deciles provides significant gains in identifying fraudulent claims. The model's ability to increase the detection of fraud in the higher deciles (with a lift of 3.5x in the tenth decile) supports the business objective of prioritizing the investigation of claims that are more likely to be fraudulent, increasing the efficiency of the fraud detection team's efforts.
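For readers who want to reproduce the decile view, here is a rough sketch of the lift calculation, reusing y_test and y_prob from the baseline sketch above.

import numpy as np
import pandas as pd

# Decile lift: fraud rate within each probability decile divided by the
# overall fraud rate; decile 10 holds the highest-scored claims
scores = pd.DataFrame({"y": np.asarray(y_test), "p": np.asarray(y_prob)})
scores["decile"] = pd.qcut(scores["p"].rank(method="first"), 10, labels=False) + 1
lift = scores.groupby("decile")["y"].mean() / scores["y"].mean()
print(lift.sort_index(ascending=False))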

These results align with the business goal of improving fraud detection efficiency while minimizing the costs associated with investigating non-fraudulent claims. The recall value of 86% ensures that we are not missing a large portion of fraudulent claims, while the lift chart allows us to prioritize resources effectively.

Measuring bias

Based on the XGBoost classifier, we evaluate the potential bias in our fraud detection model using binary classification bias metrics from the Holistic AI library. The code snippet below illustrates this.

from holisticai.bias.metrics import classification_bias_metrics
from holisticai.bias.plots import bias_metrics_report

# Define the protected attributes (group_a and group_b)
group_a_test = X_test['PA_Female'].values
group_b_test = X_test['PA_Male'].values

# Evaluate bias metrics with the custom threshold
metrics = classification_bias_metrics(group_a=group_a_test, group_b=group_b_test, y_pred=y_pred, y_true=y_test)

print("Bias Metrics with Custom Threshold:\n", metrics)
bias_metrics_report(model_type='binary_classification', table_metrics=metrics)

Given the nature of the dataset and the business challenge, we focus on Equality of Opportunity metrics to ensure that individuals from both groups have equal chances of being correctly classified based on their true characteristics. Specifically, we aim to ensure that prediction errors, such as false positives and false negatives, are distributed evenly across groups. This way, no group experiences disproportionately more errors than the other, which is essential for achieving fairness in decision-making. For this exercise, we focus on the gender attribute (female and male), which is intentionally included as a predictor in the model to assess its impact on fairness.
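As a transparency check, the headline metric can also be computed by hand. A small sketch, reusing y_test, y_pred, and the group indicator arrays defined above:

import numpy as np

# Equality of opportunity compares true positive rates (TPR) across groups:
# among truly fraudulent claims, how often is each group flagged correctly?
def tpr(y_true, y_hat, group):
    y_true, y_hat, group = map(np.asarray, (y_true, y_hat, group))
    mask = (y_true == 1) & (group == 1)
    return y_hat[mask].mean()

eod = tpr(y_test, y_pred, group_a_test) - tpr(y_test, y_pred, group_b_test)
print("Equality of Opportunity difference (female - male):", eod)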

Bias metrics report of the baseline model. Image compiled by the author

The Equality of Opportunity bias metrics generated using a custom threshold of 0.1 for classification are presented below.

  • Equality of Opportunity Difference: -0.126
    This metric directly evaluates whether the true positive rate is equal across the groups. A negative value suggests that females are slightly less likely to be correctly classified as fraudulent compared to males, indicating a potential bias favoring males in correctly identifying fraud.
  • False Positive Rate Difference: -0.076
    The False Positive Rate difference is within the fair interval [-0.1, 0.1], indicating no significant disparity in the false positive rates between groups.
  • Average Odds Difference: -0.101
    Average odds difference measures the balance of true positive and false positive rates across groups. A negative value here suggests that the model may be slightly less accurate in identifying fraudulent claims for females than for males.
  • Accuracy Difference: 0.063
    The Accuracy difference is within the fair interval [-0.1, 0.1], indicating minimal bias in overall accuracy between groups.

There are small but significant disparities in the Equality of Opportunity and Average Odds Differences, with females being slightly less likely to be correctly classified as fraudulent. This suggests a potential area for improvement, where further steps could be taken to reduce these biases and enhance fairness for both groups.

As we proceed to the next sections, we will explore strategies for mitigating this bias and improving fairness, while striving to maintain model performance.

Mitigating bias

In an effort to mitigate the bias in the baseline model, we tested the binary mitigation algorithms included in the Holistic AI library. These algorithms can be categorized into three types:

  • Pre-processing methods aim to modify the input data such that any model trained on it would no longer exhibit biases. These methods adjust the data distribution to ensure fairness before training begins. The algorithms evaluated were Correlation Remover, Disparate Impact Remover, Learning Fair Representations, and Reweighing.
  • In-processing methods alter the learning process itself, directly influencing the model during training to ensure fairer predictions. These methods aim to achieve fairness during the optimization process. The algorithms evaluated were Adversarial Debiasing, Exponentiated Gradient, Grid Search Reduction, Meta Fair Classifier, and Prejudice Remover.
  • Post-processing methods adjust the model's predictions after it has been trained, ensuring that the final predictions satisfy some statistical measure of fairness. The algorithms evaluated were Calibrated Equalized Odds, Equalized Odds, LP Debiaser, ML Debiaser, and Reject Option.

The results from applying the various mitigation algorithms, focusing on key performance and fairness metrics, are presented in the accompanying table.

Comparison table of mitigation algorithm results. Table compiled by the author

While none of the algorithms tested outperformed the baseline model, the Disparate Impact Remover (a pre-processing method) and Equalized Odds (a post-processing method) showed promising results. Both algorithms improved the fairness metrics considerably, but neither produced results as close to the baseline model's performance as expected. Moreover, I found that adjusting the threshold for the Disparate Impact Remover and Equalized Odds made it possible to match baseline performance while keeping the equality of opportunity bias metrics within the fair interval.

Following academic recommendations stating that post-processing methods can be significantly sub-optimal (Woodworth et al., 2017)[32], in that they act on the model after it has been learned and can lead to greater performance degradation compared to other methods (Ding et al., 2021)[33], I decided to prioritize the Disparate Impact Remover pre-processing algorithm over the post-processing Equalized Odds method. The code snippet below illustrates this process.

from holisticai.bias.mitigation import (
    AdversarialDebiasing, ExponentiatedGradientReduction, GridSearchReduction,
    MetaFairClassifier, PrejudiceRemover, CorrelationRemover,
    DisparateImpactRemover, LearningFairRepresentation, Reweighing,
    CalibratedEqualizedOdds, EqualizedOdds, LPDebiaserBinary, MLDebiaser,
    RejectOptionClassification)
from holisticai.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier

# Step 1: Define the Disparate Impact Remover (pre-processing)
mitigator = DisparateImpactRemover(repair_level=1.0)  # Repair level: 0.0 (no change) to 1.0 (full repair)

# Step 2: Define the XGBoost model
model = XGBClassifier(use_label_encoder=False, eval_metric='logloss')

# Step 3: Create a pipeline with Disparate Impact Remover and XGBoost
pipeline = Pipeline(steps=[
    ('scaler', StandardScaler()),     # Standardize the data
    ('bm_preprocessing', mitigator),  # Apply bias mitigation
    ('estimator', model)              # Train the XGBoost model
])

# Step 4: Fit the pipeline
pipeline.fit(
    X_train_processed, y_train,
    bm__group_a=group_a_train, bm__group_b=group_b_train  # Pass sensitive groups
)

# Step 5: Make predictions with the pipeline
y_prob = pipeline.predict_proba(
    X_test_processed,
    bm__group_a=group_a_test, bm__group_b=group_b_test
)[:, 1]  # Probability of the positive class

# Step 6: Apply a custom threshold
threshold = 0.03
y_pred = (y_prob >= threshold).astype(int)

We further customized the Disparate Impact Remover algorithm by lowering the probability threshold, aiming to improve model fairness while maintaining key performance metrics. This adjustment was made to explore the potential impact on both model performance and bias mitigation.
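To verify the effect, the classification_bias_metrics call from the measuring-bias section can simply be reused on the mitigated predictions.

# Re-evaluate fairness on the mitigated predictions with the same call as before
metrics_mitigated = classification_bias_metrics(
    group_a=group_a_test, group_b=group_b_test, y_pred=y_pred, y_true=y_test)
print("Bias Metrics after Disparate Impact Remover:\n", metrics_mitigated)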

Results of the de-biased model (Disparate Impact Remover). Image compiled by the author

The results show that by adjusting the threshold from 0.1 to 0.03, we significantly improved recall for fraudulent claims (from 0.528 in the baseline to 0.863), but at the cost of precision (which dropped from 0.225 to 0.172). This aligns with the business objective of minimizing undetected fraudulent claims, despite a slight increase in false positives. The tradeoff is acceptable: lowering the threshold increases the model's sensitivity (higher recall) but leads to more false positives (lower precision). However, the overall accuracy of the model is only slightly impacted (from 0.725 to 0.716), reflecting the broader tradeoff between recall and precision that often accompanies threshold adjustments in imbalanced problems like fraud detection.

Bias metrics report for the de-biased model (Disparate Impact Remover). Image compiled by the author

The equality of opportunity bias metrics show minimal impact after adjusting the threshold to 0.03. The Equality of opportunity difference remains within the fair interval at -0.070, indicating that the model still provides equal chances of being correctly classified for both groups. The False positive rate difference of -0.041 and the Average odds difference of -0.056 both stay within the acceptable range, suggesting no significant bias favoring one group over the other. The Accuracy difference of 0.032 also remains small, confirming that the model's overall accuracy is not disproportionately affected by the threshold adjustment. These results demonstrate that the fairness of the model, in terms of equality of opportunity, is well maintained even with the threshold change.

Moreover, adjusting the probability threshold is necessary when working with imbalanced datasets such as those in fraud detection. The distribution of predicted probabilities will change with each mitigation technique applied, and thresholds should be reviewed and adapted accordingly to balance both performance and fairness, as well as other dimensions not considered in this article (e.g., explainability or privacy). The choice of threshold can significantly influence the model's behavior, and final decisions should be carefully tuned based on business needs.
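One practical way to carry out that review is a simple sweep that tracks recall and the equality-of-opportunity gap side by side. A hedged sketch, reusing y_prob, y_test, the group indicator arrays, and the tpr helper sketched in the measuring-bias section:

import numpy as np
from sklearn.metrics import recall_score

# Sweep candidate thresholds, reporting performance and fairness together
for t in np.arange(0.01, 0.11, 0.01):
    y_t = (y_prob >= t).astype(int)
    gap = tpr(y_test, y_t, group_a_test) - tpr(y_test, y_t, group_b_test)
    print(f"threshold={t:.2f}  recall={recall_score(y_test, y_t):.3f}  EOD={gap:+.3f}")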

In conclusion, the Disparate Impact Remover with a threshold of 0.03 offers a reasonable compromise, improving recall for fraudulent claims while maintaining fairness in equality of opportunity metrics. This technique aligns with both business objectives and fairness considerations, making it a viable approach for mitigating bias in fraud detection models.