
Using a Custom Model

While the process of building and fine-tuning a model is not the intent of this project, it is important to understand how a model can be added to this process.

# Filename: train.py

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from datasets import load_dataset
from torch.utils.data import DataLoader

def train_model():
    # Load the IMDB dataset and take a 10,000-example sample
    full_dataset = load_dataset("stanfordnlp/imdb", split="train")
    dataset = full_dataset.shuffle(seed=42).select(range(10000))

    model_name = "distilbert-base-uncased"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    # Use GPU if available
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)

    model.train()

    # Create a DataLoader for batching
    dataloader = DataLoader(dataset, batch_size=8, shuffle=True)

    # Training loop
    num_epochs = 3  # Set the number of epochs
    for epoch in range(num_epochs):
        total_loss = 0
        for batch in dataloader:
            inputs = tokenizer(batch["text"], truncation=True, padding=True, return_tensors="pt", max_length=512).to(device)
            labels = batch["label"].to(device)  # default collation already yields a tensor

            optimizer.zero_grad()
            outputs = model(**inputs, labels=labels)
            loss = outputs.loss

            loss.backward()
            optimizer.step()
            total_loss += loss.item()

        avg_loss = total_loss / len(dataloader)
        print(f"Epoch {epoch + 1}/{num_epochs}, Loss: {avg_loss:.4f}")

    # Save the model and tokenizer
    model.save_pretrained("./model/")
    tokenizer.save_pretrained("./model/")

    # Test the model with sample sentences
    test_sentences = [
        "This movie was fantastic!",
        "I absolutely hated this film.",
        "It was just okay, not great.",
        "An absolute masterpiece!",
        "Waste of time!",
        "A beautiful story and well acted.",
        "Not my type of movie.",
        "It could have been better.",
        "A thrilling adventure from start to finish!",
        "Very disappointing."
    ]

    # Switch model to evaluation mode
    model.eval()

    # Tokenize the test inputs
    inputs = tokenizer(test_sentences, truncation=True, padding=True, return_tensors="pt", max_length=512).to(device)

    with torch.no_grad():
        outputs = model(**inputs)
        predictions = torch.argmax(outputs.logits, dim=1)

    # Print predictions
    for sentence, prediction in zip(test_sentences, predictions):
        sentiment = "positive" if prediction.item() == 1 else "negative"
        print(f'Input: "{sentence}" -> Predicted sentiment: {sentiment}')

# Call the function to train and test the model
train_model()
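Before moving on, note that train.py pulls in packages that may not yet be in the requirements.txt used elsewhere in this project, notably datasets and torch. The sketch below lists what this section's code needs; it is an assumption about the project's actual file, and in practice you would pin versions.

# Filename: requirements.txt (minimal sketch, unpinned)
torch
transformers
datasets
fastapi
uvicorn
pydantic
prometheus_client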

To make sure that we can query the new model we have trained, we have to update a few of our existing files. For instance, in main.py we now load the model from ./model as a pretrained model. Additionally, for comparison's sake, we now have two endpoints to use, /predict/naive and /predict/trained.

# Filename: main.py

from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers import pipeline
from prometheus_client import Counter, Histogram, start_http_server
import time

# Start Prometheus metrics server on port 8001
start_http_server(8001)

app = FastAPI()

# Load the trained model and tokenizer from the local directory
model_path = "./model"  # Path to your saved model
tokenizer = AutoTokenizer.from_pretrained(model_path)
trained_model = AutoModelForSequenceClassification.from_pretrained(model_path)

# Create pipelines
naive_classifier = pipeline("sentiment-analysis", device=-1)
trained_classifier = pipeline("sentiment-analysis", model=trained_model, tokenizer=tokenizer, device=-1)

# Metrics
PREDICTION_TIME = Histogram('prediction_duration_seconds', 'Time spent processing prediction')
REQUESTS = Counter('prediction_requests_total', 'Total requests')
SENTIMENT_SCORE = Histogram('sentiment_score', 'Histogram of sentiment scores', buckets=[0.0, 0.25, 0.5, 0.75, 1.0])

class TextInput(BaseModel):
    text: str

class SentimentOutput(BaseModel):
    text: str
    sentiment: str
    score: float

@app.post("/predict/naive", response_model=SentimentOutput)
async def predict_naive_sentiment(input_data: TextInput):
    REQUESTS.inc()
    start_time = time.time()

    result = naive_classifier(input_data.text)[0]

    score = result["score"]
    SENTIMENT_SCORE.observe(score)  # Record the sentiment score

    PREDICTION_TIME.observe(time.time() - start_time)

    return SentimentOutput(
        text=input_data.text,
        sentiment=result["label"],
        score=score
    )

@app.post("/predict/trained", response_model=SentimentOutput)
async def predict_trained_sentiment(input_data: TextInput):
    REQUESTS.inc()
    start_time = time.time()

    result = trained_classifier(input_data.text)[0]

    score = result["score"]
    SENTIMENT_SCORE.observe(score)  # Record the sentiment score

    PREDICTION_TIME.observe(time.time() - start_time)

    return SentimentOutput(
        text=input_data.text,
        sentiment=result["label"],
        score=score
    )
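With the API running (for example via uvicorn main:app --port 8000, or the container below), you can compare the two endpoints side by side. Here is a quick sketch using the requests package, assuming the service is on localhost:

# Compare both endpoints on the same input (assumes the API is on localhost:8000)
import requests

payload = {"text": "A beautiful story and well acted."}

for endpoint in ("naive", "trained"):
    response = requests.post(f"http://localhost:8000/predict/{endpoint}", json=payload)
    print(endpoint, "->", response.json())

One difference you will likely notice: because we never set an id2label mapping when fine-tuning, the trained pipeline reports labels as LABEL_0/LABEL_1, while the stock pipeline reports NEGATIVE/POSITIVE.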

We also have to update our Dockerfile to include our model files.

# Filename: Dockerfile
FROM python:3.9-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY main.py .
COPY ./model ./model

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

Importantly, if you are using git, make sure that you add the pytorch_model.bin file to Git LFS so that you are able to push to GitHub. Git LFS allows you to use version control on very large files.
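The typical sequence looks something like this; the tracked path is an assumption based on where we saved the model above, and note that recent versions of transformers may write model.safetensors instead of pytorch_model.bin, in which case track that file:

# One-time LFS setup, then track the large weight file before committing
git lfs install
git lfs track "model/pytorch_model.bin"
git add .gitattributes model/pytorch_model.bin
git commit -m "Track fine-tuned model weights with Git LFS"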