First, for our example, we have to develop a model. Since this article focuses on model deployment, we will not worry about the performance of the model. Instead, we will build a simple model with limited features to focus on learning model deployment.
In this example, we will predict a data professional's salary based on a few features, such as experience, job title, company size, etc.
See data here: https://www.kaggle.com/datasets/ruchi798/data-science-job-salaries (CC0: Public Domain). I slightly modified the data to reduce the number of options for certain features.
#import packages for data manipulation
import pandas as pd
import numpy as np

#import packages for machine learning
from sklearn import linear_model
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder
from sklearn.metrics import mean_squared_error, r2_score

#import packages for model persistence
import joblib
First, let's take a look at the data.
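The loading step isn't shown above, so as a stand-in, here is a tiny hand-built frame with the same columns the rest of the walkthrough uses (the values are illustrative, not taken from the Kaggle file, which you would instead read in with `pd.read_csv`):

```python
import pandas as pd

# hypothetical sample mirroring the dataset's columns;
# the real rows come from the Kaggle CSV
salary_data = pd.DataFrame({
    'experience_level': ['EN', 'MI', 'SE', 'EX'],
    'employment_type':  ['FT', 'FT', 'CT', 'FT'],
    'job_title':        ['Data Analyst', 'Data Scientist',
                         'Data Engineer', 'Data Scientist'],
    'company_size':     ['S', 'M', 'L', 'M'],
    'salary_in_usd':    [70000, 110000, 150000, 200000],
})

# peek at the first rows and the column types
print(salary_data.head())
print(salary_data.dtypes)
```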
Since all of our features are categorical, we will use encoding to transform our data into numerical values. Below, we use ordinal encoders to encode experience level and company size. These are ordinal because they represent some kind of progression (1 = entry level, 2 = mid-level, etc.).
For job title and employment type, we will create dummy variables for each option (note we drop the first to avoid multicollinearity).
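To see what dropping the first category looks like in practice, here is a quick sketch on a toy column with three employment types. With `drop_first=True`, pandas emits only two dummy columns, and the dropped first category is implied when both are zero:

```python
import pandas as pd

# toy column with three categories: CT, FT, PT
toy = pd.DataFrame({'employment_type': ['CT', 'FT', 'PT', 'FT']})

# drop_first=True removes the column for the first category
# (alphabetically 'CT'); a row of all zeros therefore means 'CT'
dummies = pd.get_dummies(toy, columns=['employment_type'],
                         drop_first=True, dtype=int)
print(dummies)
```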
#use ordinal encoder to encode experience level
encoder = OrdinalEncoder(categories=[['EN', 'MI', 'SE', 'EX']])
salary_data['experience_level_encoded'] = encoder.fit_transform(salary_data[['experience_level']])

#use ordinal encoder to encode company size
encoder = OrdinalEncoder(categories=[['S', 'M', 'L']])
salary_data['company_size_encoded'] = encoder.fit_transform(salary_data[['company_size']])

#encode employment type and job title using dummy columns
salary_data = pd.get_dummies(salary_data, columns = ['employment_type', 'job_title'], drop_first = True, dtype = int)

#drop original columns
salary_data = salary_data.drop(columns = ['experience_level', 'company_size'])
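It's worth verifying that `OrdinalEncoder` respects the order we passed in `categories` rather than sorting alphabetically. A quick check on a toy column:

```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

toy = pd.DataFrame({'experience_level': ['SE', 'EN', 'EX', 'MI']})

# categories are mapped to 0..3 in the order given,
# so EN -> 0, MI -> 1, SE -> 2, EX -> 3
encoder = OrdinalEncoder(categories=[['EN', 'MI', 'SE', 'EX']])
encoded = encoder.fit_transform(toy[['experience_level']])
print(encoded.ravel())  # [2. 0. 3. 1.]
```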
Now that we've transformed our model inputs, we can create our training and test sets. We will feed these features into a simple linear regression model to predict the employee's salary.
#define independent and dependent features
X = salary_data.drop(columns = 'salary_in_usd')
y = salary_data['salary_in_usd']

#split between training and testing sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state = 104, test_size = 0.2, shuffle = True)

#fit linear regression model
regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)

#make predictions
y_pred = regr.predict(X_test)

#print the coefficients
print("Coefficients: \n", regr.coef_)

#print the MSE
print("Mean squared error: %.2f" % mean_squared_error(y_test, y_pred))

#print the R2 value
print("R2: %.2f" % r2_score(y_test, y_pred))
Let's see how our model did.
Looks like our R-squared is 0.27, yikes. Much more work would need to be done with this model; we would likely need more data and more information about the observations. But for the sake of this article, we will move forward and save our model.
#save model using joblib
joblib.dump(regr, 'lin_regress.sav')
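When it's time to serve predictions, the saved file is loaded back with `joblib.load`. Here is a minimal round trip on a toy model so the sketch is self-contained (the file name is arbitrary; in deployment the load would typically happen once at app startup):

```python
import joblib
import numpy as np
from sklearn import linear_model

# fit a toy model on the line y = 2x + 1 and persist it
regr = linear_model.LinearRegression()
regr.fit(np.array([[0.0], [1.0], [2.0]]), np.array([1.0, 3.0, 5.0]))
joblib.dump(regr, 'lin_regress.sav')

# later (e.g. inside an API endpoint), reload and predict
loaded = joblib.load('lin_regress.sav')
print(loaded.predict(np.array([[3.0]])))  # approximately [7.]
```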