OpenAI Prompt Cache Monitoring

By Thomas Reid, Dec 2024

A worked example using Python and the Chat Completions API

As part of their recent DevDay presentation, OpenAI announced that Prompt Caching is now available for various models. At the time of writing, those models were:

GPT-4o, GPT-4o mini, o1-preview and o1-mini, as well as fine-tuned versions of those models.

This news should not be underestimated, as it allows developers to save on costs and reduce application runtime latency.

API calls to supported models automatically benefit from Prompt Caching on prompts longer than 1,024 tokens. The API caches the longest prefix of a prompt that has been previously computed, starting at 1,024 tokens and increasing in 128-token increments. If you reuse prompts with common prefixes, OpenAI automatically applies the Prompt Caching discount without requiring you to change your API integration.
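In practice, this means you want the long, unchanging part of your prompt at the start of every request, with the variable part at the end. Below is a minimal sketch of that idea; the model name, the STATIC_INSTRUCTIONS string and the ask() helper are just placeholders for illustration, and it assumes the openai Python SDK (v1.x) with an OPENAI_API_KEY set in your environment.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A static system prompt, identical on every request and (in a real app)
# comfortably over 1,024 tokens -- this common prefix is what gets cached.
STATIC_INSTRUCTIONS = (
    "You are a support assistant for ACME Corp. "
    "<imagine a long, unchanging block of instructions and reference text here>"
)

def ask(question: str):
    """Send one chat completion where only the user question varies."""
    return client.chat.completions.create(
        model="gpt-4o-mini",  # one of the cache-enabled models listed above
        messages=[
            {"role": "system", "content": STATIC_INSTRUCTIONS},  # shared prefix first
            {"role": "user", "content": question},               # variable part last
        ],
    )
```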

As an OpenAI API developer, the only thing you really need to worry about is how to monitor your Prompt Caching usage, i.e. check that it is actually being applied.
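At a high level, that boils down to inspecting the usage block returned with each chat completion response. A minimal sketch, assuming a recent openai Python SDK release that exposes prompt_tokens_details on the usage object, might look like this (reusing the hypothetical ask() helper from above):

```python
# Make the same request twice; the second call should hit the cached prefix.
first = ask("How do I reset my password?")
second = ask("How do I reset my password?")

for label, response in [("first call", first), ("second call", second)]:
    usage = response.usage
    cached = usage.prompt_tokens_details.cached_tokens  # tokens served from cache
    print(f"{label}: prompt_tokens={usage.prompt_tokens}, cached_tokens={cached}")
```

If caching is being applied, you would expect cached_tokens to be 0 on the first call and non-zero on subsequent calls that share the same long prefix. The rest of this article walks through this in more detail.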

In this article, I'll show you how to do that using Python, a Jupyter Notebook and a chat completion example.

Set up WSL2 Ubuntu