Load Testing Self-Hosted LLMs | Towards Data Science

Do you need more GPUs or a more modern GPU? How do you make infrastructure decisions?

A man pulling an elephant with his bare hands
Image created by the author using DALL-E

How does it feel when a bunch of users suddenly start using an app that only you and your dev team have used before?

That’s the million-dollar question of moving from prototype to production.

As far as LLMs are concerned, there are a few dozen tweaks you can make to run your app within budget and at acceptable quality. For instance, you can choose a quantized model for lower memory usage. Or you can fine-tune a tiny model and beat the performance of huge LLMs.
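As a concrete illustration of the first option, here is a minimal sketch of loading a model in 4-bit precision with Hugging Face transformers and bitsandbytes to cut GPU memory usage; the model name is only an example, not a recommendation from this article.

```python
# Minimal sketch: load a model in 4-bit precision to reduce GPU memory usage.
# Assumes transformers, accelerate, and bitsandbytes are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example model, swap in your own

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4-bit precision
    bnb_4bit_compute_dtype=torch.float16,  # run the matmuls in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place layers across available GPUs automatically
)
```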

You can even tweak your infrastructure to achieve better results. For example, you may want to double the number of GPUs you use or choose the latest-generation GPU.

But how can you say that Option A performs better than Options B and C?

This is an important question to ask ourselves at the earliest stages of going into production. All of these options have their costs…
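To compare options at all, you need numbers: latency and throughput under realistic, concurrent traffic. The sketch below is not the tooling used later in this article, just a hedged illustration of the idea; it fires a batch of concurrent requests at an OpenAI-compatible endpoint with httpx and reports mean and p95 latency. The URL, model name, and concurrency level are placeholder assumptions you would replace with your own setup.

```python
# Minimal sketch: measure request latency of a self-hosted, OpenAI-compatible
# LLM endpoint under concurrent load. Endpoint URL and model name are placeholders.
import asyncio
import statistics
import time

import httpx

URL = "http://localhost:8000/v1/chat/completions"  # your serving endpoint
PAYLOAD = {
    "model": "my-model",  # placeholder model name
    "messages": [{"role": "user", "content": "Summarize load testing in one sentence."}],
    "max_tokens": 64,
}


async def one_request(client: httpx.AsyncClient) -> float:
    """Send one request and return its end-to-end latency in seconds."""
    start = time.perf_counter()
    resp = await client.post(URL, json=PAYLOAD, timeout=60.0)
    resp.raise_for_status()
    return time.perf_counter() - start


async def run(concurrency: int = 10) -> None:
    """Fire `concurrency` requests at once and print latency statistics."""
    async with httpx.AsyncClient() as client:
        latencies = await asyncio.gather(
            *(one_request(client) for _ in range(concurrency))
        )
    p95 = statistics.quantiles(latencies, n=20)[-1]  # approximate 95th percentile
    print(f"requests:     {len(latencies)}")
    print(f"mean latency: {statistics.mean(latencies):.2f}s")
    print(f"p95 latency:  {p95:.2f}s")


if __name__ == "__main__":
    asyncio.run(run())
```

Running the same script against each candidate setup (more GPUs, a newer GPU, a quantized model) gives you comparable numbers instead of gut feeling.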