I Spent My Money on Benchmarking LLMs on Dutch Exams So You Don’t Have To | by Maarten Sukel | Sep 2024

OpenAI’s new o1-preview is way too expensive for how it performs on these results

Many of my clients ask for advice on which LLM (Large Language Model) to use for building products tailored to Dutch-speaking users. However, most available benchmarks are multilingual and don’t specifically focus on Dutch. As a machine learning engineer and PhD researcher in machine learning at the University of Amsterdam, I know how important benchmarks have been to the advancement of AI, but I also understand the risks when benchmarks are trusted blindly. That is why I decided to experiment and run some Dutch-specific benchmarking of my own.

In this post, you’ll find an in-depth look at my first attempt at benchmarking several large language models (LLMs) on real Dutch exam questions. I’ll guide you through the entire process, from gathering over 12,000 exam PDFs to extracting question-answer pairs and automatically grading the models’ performance using LLMs. You’ll see how models like o1-preview, o1-mini, GPT-4o, GPT-4o-mini, and Claude-3 performed across different Dutch educational levels, from VMBO to VWO, and whether the higher costs of certain models lead to better results. This is just a first pass at the problem, and I’ll dive deeper in future posts, exploring other models and tasks. I’ll also discuss the challenges and costs involved and share some insights on which models offer the best value for Dutch-language tasks. If you’re building or scaling LLM-based products for the Dutch market, this post should provide valuable insights to help guide your choices as of September 2024.
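To make the pipeline a bit more concrete, here is a minimal sketch of the two steps that matter most: pulling raw text out of an exam PDF and letting an LLM grade a candidate answer against the official answer key. The file names, prompt wording, grading model, and point scale are placeholders of my own, not the exact setup behind the results in this post.

```python
# Minimal sketch, not the exact pipeline used for the results in this post.
# Assumes `pypdf` and the `openai` client are installed and OPENAI_API_KEY is set.
from pypdf import PdfReader
from openai import OpenAI

client = OpenAI()


def extract_exam_text(pdf_path: str) -> str:
    """Concatenate the raw text of every page in an exam PDF."""
    reader = PdfReader(pdf_path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)


def grade_answer(question: str, official_answer: str, model_answer: str) -> str:
    """Ask an LLM to grade a model's answer against the official answer key."""
    prompt = (
        "Je bent een examinator. Beoordeel het antwoord van de kandidaat "
        "aan de hand van het officiële antwoordmodel.\n\n"
        f"Vraag: {question}\n"
        f"Antwoordmodel: {official_answer}\n"
        f"Antwoord kandidaat: {model_answer}\n\n"
        "Geef alleen het aantal punten (0, 1 of 2)."  # placeholder point scale
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # the grading model is an arbitrary choice here
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()


# Example usage with placeholder inputs:
# exam_text = extract_exam_text("examens/nederlands_vwo_2023.pdf")
# score = grade_answer("Wat is de hoofdgedachte van tekst 1?",
#                      "De tekst betoogt dat ...", "Het antwoord van het model ...")
```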

It’s becoming more common for companies like OpenAI to make bold, almost extravagant claims about the capabilities of their models, often without enough real-world validation to back them up. That’s why benchmarking these models is so important, especially when they’re marketed as solving everything from complex reasoning to nuanced language understanding. With such grand claims, it’s essential to run objective tests to see how well they truly perform, and more specifically, how they handle the unique challenges of the Dutch language.

I was surprised to find that there hasn’t been extensive research into benchmarking LLMs for Dutch, which is what led me to take matters into my own hands on a rainy afternoon. With so many institutions and companies relying on these models more and more, it felt like the right time to dive in and start validating them. So, here’s my first attempt to start filling that gap, and I hope it offers valuable insights for anyone working with the Dutch language.

Many of my clients work with Dutch-language products, and they need AI models that are both cost-effective and highly performant in understanding and processing Dutch. Although large language models (LLMs) have made impressive strides, most of the available benchmarks focus on English or multilingual capabilities, often neglecting the nuances of smaller languages like Dutch. This lack of focus on Dutch matters because linguistic differences can lead to large performance gaps when a model is asked to understand non-English texts.

Five years ago, deep-learning NLP models for Dutch were far from mature (think of the first versions of BERT). At the time, traditional methods like TF-IDF paired with logistic regression often outperformed early deep-learning models on the Dutch language tasks I worked on. While models (and datasets) have since improved tremendously, especially with the rise of transformers and multilingual pre-trained LLMs, it’s still important to verify how well these advances translate to specific languages like Dutch. The assumption that performance gains in English carry over to other languages isn’t always valid, especially for complex tasks like reading comprehension.
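For context, the kind of classical baseline I mean is simply a TF-IDF vectorizer feeding a logistic regression classifier. The sketch below uses scikit-learn with a few made-up Dutch sentences and toy labels, purely to illustrate the shape of such a pipeline rather than any benchmark from this post.

```python
# Illustrative TF-IDF + logistic regression baseline (toy data, not a real benchmark).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of made-up Dutch sentences with toy sentiment labels.
texts = [
    "Dit product werkt uitstekend en is zijn geld waard.",
    "De levering was traag en de kwaliteit viel tegen.",
    "Geweldige service, ik ben zeer tevreden.",
    "Helaas was het apparaat al kapot bij aankomst.",
]
labels = ["positief", "negatief", "positief", "negatief"]

# TF-IDF features on word unigrams and bigrams, followed by a linear classifier.
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
baseline.fit(texts, labels)

print(baseline.predict(["De kwaliteit is prima en de service was snel."]))
```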

That’s why I focused on creating a custom benchmark for Dutch, using real exam data from the Dutch “Nederlands” exams (these exams enter the public domain once they have been published). These exams don’t just involve simple language processing; they test “begrijpend lezen” (reading comprehension), requiring students to understand the intent behind various texts and answer nuanced questions about them. This type of task is particularly relevant because it reflects real-world applications, like processing and summarizing legal documents, news articles, or customer queries written in Dutch.

By benchmarking LLMs on this specific task, I wanted to gain deeper insight into how models handle the complexity of the Dutch language, especially when asked to interpret intent, draw conclusions, and respond with accurate answers. This is crucial for companies building products tailored to Dutch-speaking users. My goal was to create a more targeted, relevant benchmark to help identify which models offer the best performance for Dutch, rather than relying on general multilingual benchmarks that don’t fully capture the intricacies of the language.