The HuggingFace Transformers library provides many fundamental building blocks and a wide range of functionality to kickstart your AI code. Many products and libraries have been built on top of it, and in this short blog post I'll talk about some of the ways people have extended it to add custom training code on top of the HuggingFace Transformers library:
- Reimplementing the training code by iterating through the training data to recreate the fine-tuning loop and then adding in custom code, and
- Creating custom callbacks attached to the Trainer class so that custom code can be added through the callbacks.
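The second approach can be sketched as follows. This is a minimal illustration assuming the `transformers` library is installed; it subclasses the library's `TrainerCallback` base class and overrides its `on_log` hook (the callback name `LossPrinterCallback` is made up for this example):

```python
from transformers import TrainerCallback


class LossPrinterCallback(TrainerCallback):
    """Print the training loss each time the Trainer logs metrics."""

    def on_log(self, args, state, control, logs=None, **kwargs):
        # `logs` holds the metrics the Trainer just recorded; the training
        # loss appears under the "loss" key during training steps.
        if logs is not None and "loss" in logs:
            print(f"step {state.global_step}: loss = {logs['loss']}")
```

A callback like this is then passed to the Trainer via its `callbacks` argument, e.g. `Trainer(..., callbacks=[LossPrinterCallback()])`, and the Trainer invokes the hooks at the corresponding points in its loop.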
Of course, there may be other ways to customize the fine-tuning loop, but this blog post focuses on these two approaches.
Typically, when you train a model, you create a Trainer object that lets you specify the parameters for training. The Trainer object surfaces a train() method that you can call to initiate the training loop: