I typically see data scientists approach the development of LLMs in terms of model architecture, training techniques, or data collection. However, I've noticed that, beyond the theoretical side, many people struggle to serve these models in a way that users can actually consume.
In this brief tutorial, I thought I'd show, in a very simple way, how to serve an LLM, specifically Llama-3, using BentoML.
BentoML is an end-to-end solution for machine learning model serving. It helps data science teams build production-ready model serving endpoints, with DevOps best practices and performance optimization at every stage.
We need a GPU
As you know, in deep learning, having the right hardware available is crucial. For very large models like LLMs, this becomes even more important. Unfortunately, I don't have any GPU 😔
That's why I rely on external providers: I rent one of their machines and work there. For this article, I chose to work on Runpod because I know their services and I think the price is affordable enough to follow along with this tutorial. But if you have GPUs available or want to…