Customizing Your Fine-tuning Code Using HuggingFace’s Transformers Library | by Maeda Hanafi, PhD | Jan, 2025

Examples of custom callbacks and custom fine-tuning code from different libraries

Image generated by Gemini

The HuggingFace transformers library offers many basic building blocks and a wide range of functionality to kickstart your AI code. Many products and libraries have been built on top of it, and in this short blog, I’ll talk about some of the ways people have extended it to add custom training code on top of the HuggingFace transformers library:

  1. Reimplementing the training code by iterating through the training data to recreate the fine-tuning loop and then adding in custom code, and
  2. Creating custom callbacks attached to the Trainer class so that custom code can be added in the callbacks.
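As a rough sketch of the first approach, the fine-tuning loop can be re-implemented in plain PyTorch, which makes every step available for custom code. The function name and hyperparameters below are illustrative assumptions, not part of the transformers API:

```python
# Hedged sketch of approach 1: re-implementing the fine-tuning loop by hand.
# `fine_tune`, the batch size, and the learning rate are assumed for illustration;
# the model is expected to return a loss when labels are passed (HF convention).
import torch
from torch.utils.data import DataLoader


def fine_tune(model, train_dataset, epochs=1, lr=5e-5):
    loader = DataLoader(train_dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for epoch in range(epochs):
        for batch in loader:
            outputs = model(**batch)  # returns an object with a .loss attribute
            loss = outputs.loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
            # Custom code can be inserted anywhere in this loop,
            # e.g. logging, gradient clipping, or early stopping.
    return model
```

Because you own the loop, nothing is hidden behind the Trainer abstraction, but you also take on everything the Trainer normally handles (mixed precision, distributed training, checkpointing, and so on).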

Obviously, there may be other ways to customize the fine-tuning loop, but this blog is intended to focus on these two approaches.
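For the second approach, transformers provides a `TrainerCallback` base class whose hooks fire at fixed points in the training loop. The callback below is a minimal sketch; the class name and the logging inside it are assumptions for illustration:

```python
# Hedged sketch of approach 2: a custom callback attached to the Trainer.
# TrainerCallback and on_epoch_end are real transformers APIs; the class name
# and the print statement are illustrative.
from transformers import TrainerCallback


class EpochEndCallback(TrainerCallback):
    """Runs custom code at the end of each training epoch."""

    def on_epoch_end(self, args, state, control, **kwargs):
        # Any custom code can go here, e.g. logging metrics or saving artifacts.
        print(f"Finished epoch {state.epoch}")
        return control


# Attached when constructing the Trainer:
# trainer = Trainer(model=model, args=args, callbacks=[EpochEndCallback()])
```

Callbacks keep the Trainer's built-in machinery intact and only bolt custom behavior onto its predefined hook points, which is usually the lighter-weight option.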

Typically, when you train a model, a Trainer object is created that allows you to specify the parameters for training the model. The Trainer object surfaces a train() method that you can call to initiate the training loop:
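A minimal sketch of that standard usage is below. The `model` and `train_dataset` arguments are placeholders, and the hyperparameter values are assumptions, not recommendations:

```python
# Minimal sketch of standard Trainer usage; model and dataset are placeholders.
from transformers import Trainer, TrainingArguments


def run_training(model, train_dataset):
    # TrainingArguments collects the hyperparameters for the run.
    args = TrainingArguments(
        output_dir="./results",          # where checkpoints are written
        num_train_epochs=3,
        per_device_train_batch_size=8,
    )
    trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
    trainer.train()  # initiates the training loop
    return trainer
```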