How to Get JSON Output from LLMs: A Practical Guide | by Youness Mansar | Aug, 2024

Tutorial on enforcing JSON output with Llama.cpp or the Gemini API

Photo by Etienne Girardet on Unsplash

Large Language Models (LLMs) are great at generating text, but getting structured output like JSON usually requires clever prompting and hoping the LLM understands. Fortunately, JSON mode is becoming more common in LLM frameworks and services. This lets you define the exact output schema you want.

This post gets into constrained generation using JSON mode. We'll use a complex, nested, and realistic JSON schema example to guide LLM frameworks/APIs like Llama.cpp or the Gemini API to generate structured data, specifically tourist location information. This builds on a previous post about constrained generation using Guidance, but focuses on the more widely adopted JSON mode.

While more limited than Guidance, JSON mode's broader support makes it more accessible, especially with cloud-based LLM providers.

During a personal project, I discovered that while JSON mode was straightforward with Llama.cpp, getting it to work with the Gemini API required some additional steps. This post shares those solutions to help you use JSON mode effectively.

Our example schema represents a TouristLocation. It is a non-trivial structure with nested objects, lists, enums, and various data types like strings and numbers.

Here's a simplified version:

{
  "name": "string",
  "location_long_lat": ["number", "number"],
  "climate_type": {"type": "string", "enum": ["tropical", "desert", "temperate", "continental", "polar"]},
  "activity_types": ["string"],
  "attraction_list": [
    {
      "name": "string",
      "description": "string"
    }
  ],
  "tags": ["string"],
  "description": "string",
  "most_notably_known_for": "string",
  "location_type": {"type": "string", "enum": ["city", "country", "establishment", "landmark", "national park", "island", "region", "continent"]},
  "parents": ["string"]
}

You can write this type of schema by hand, or you can generate it using the Pydantic library. Here is how you can do it for a simplified example:

from typing import List

from pydantic import BaseModel, Field

class TouristLocation(BaseModel):
    """Model for a tourist location"""

    high_season_months: List[int] = Field(
        [], description="List of months (1-12) when the location is most visited"
    )

    tags: List[str] = Field(
        ...,
        description="List of tags describing the location (e.g. accessible, sustainable, sunny, cheap, expensive)",
        min_length=1,
    )
    description: str = Field(..., description="Text description of the location")

# Example usage and schema output
location = TouristLocation(
    high_season_months=[6, 7, 8],
    tags=["beach", "sunny", "family-friendly"],
    description="A beautiful beach with white sand and clear blue water.",
)

schema = location.model_json_schema()
print(schema)

This code defines a simplified version of the TouristLocation data class using Pydantic. It has three fields:

  • high_season_months: A list of integers representing the months of the year (1-12) when the location is most visited. Defaults to an empty list.
  • tags: A list of strings describing the location with tags like “accessible”, “sustainable”, etc. This field is required (...) and must have at least one element (min_length=1).
  • description: A string field containing a text description of the location. This field is also required.

The code then creates an instance of the TouristLocation class and uses model_json_schema() to get the JSON Schema representation of the model. This schema defines the structure and types of the data expected for this class.

model_json_schema() returns:

{'description': 'Model for a tourist location',
 'properties': {'description': {'description': 'Text description of the location',
                                'title': 'Description',
                                'type': 'string'},
                'high_season_months': {'default': [],
                                       'description': 'List of months (1-12) when the location is most visited',
                                       'items': {'type': 'integer'},
                                       'title': 'High Season Months',
                                       'type': 'array'},
                'tags': {'description': 'List of tags describing the location (e.g. accessible, sustainable, sunny, cheap, expensive)',
                         'items': {'type': 'string'},
                         'minItems': 1,
                         'title': 'Tags',
                         'type': 'array'}},
 'required': ['tags', 'description'],
 'title': 'TouristLocation',
 'type': 'object'}
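One detail worth noting for later: when a model nests other models (as the full TouristLocation does with its attraction list), model_json_schema() places the sub-models under "$defs" and points to them with "$ref" entries. A minimal sketch with a hypothetical Attraction sub-model:

from typing import List
from pydantic import BaseModel

class Attraction(BaseModel):
    name: str
    description: str

class LocationWithAttractions(BaseModel):
    name: str
    attraction_list: List[Attraction]

# The nested model lands under "$defs" and is referenced via "$ref"
nested_schema = LocationWithAttractions.model_json_schema()
print(nested_schema["$defs"].keys())  # dict_keys(['Attraction'])
print(nested_schema["properties"]["attraction_list"]["items"])  # {'$ref': '#/$defs/Attraction'}

This "$defs"/"$ref" indirection is exactly what the Gemini workarounds later in this post have to flatten.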

Now that we have our schema, let's see how we can enforce it: first in Llama.cpp with its Python wrapper, and second using the Gemini API.

Llama.cpp is a C++ library for running Llama models locally. It's beginner-friendly and has an active community. We will be using it through its Python wrapper.

Here's how to generate TouristLocation data with it:

# Imports and stuff
import time

from llama_cpp import Llama

# Model init:
checkpoint = "lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF"

model = Llama.from_pretrained(
    repo_id=checkpoint,
    n_gpu_layers=-1,
    filename="*Q4_K_M.gguf",
    verbose=False,
    n_ctx=12_000,
)

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant that outputs in JSON. "
        f"Follow this schema {TouristLocation.model_json_schema()}",
    },
    {"role": "user", "content": "Generate information about Hawaii, US."},
    {"role": "assistant", "content": f"{location.model_dump_json()}"},
    {"role": "user", "content": "Generate information about Casablanca"},
]
response_format = {
    "type": "json_object",
    "schema": TouristLocation.model_json_schema(),
}

start = time.time()

outputs = model.create_chat_completion(
    messages=messages, max_tokens=1200, response_format=response_format
)

print(outputs["choices"][0]["message"]["content"])

print(f"Time: {time.time() - start}")

The code first imports the necessary libraries and initializes the LLM model. Then, it defines a list of messages for a conversation with the model, including a system message instructing the model to output JSON according to a specific schema, user requests for information about Hawaii and Casablanca, and an assistant response using the specified schema.

Llama.cpp uses context-free grammars under the hood to constrain the structure and generate valid JSON output for a new city.
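If you want to see this mechanism more directly, llama-cpp-python also accepts an explicit grammar argument. The sketch below uses a toy hand-written GBNF grammar purely for illustration (it is not the grammar derived from our schema, and the variable names are mine):

from llama_cpp import LlamaGrammar

# Toy GBNF grammar: restricts output to a JSON object with a single "name" string field
toy_grammar = LlamaGrammar.from_string(r'''
root   ::= "{" ws "\"name\"" ws ":" ws string ws "}"
string ::= "\"" [a-zA-Z ]* "\""
ws     ::= [ \t\n]*
''')

toy_outputs = model.create_chat_completion(
    messages=messages, max_tokens=200, grammar=toy_grammar
)

When response_format is used instead, as above, the equivalent grammar is derived from the JSON schema automatically.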

From the create_chat_completion call above, we get the following generated string:

{'activity_types': ['shopping', 'food and wine', 'cultural'],
 'attraction_list': [{'description': 'One of the largest mosques in the world and a symbol of Moroccan architecture',
                      'name': 'Hassan II Mosque'},
                     {'description': 'A historic walled city with narrow streets and traditional shops',
                      'name': 'Old Medina'},
                     {'description': 'A historic square with a beautiful fountain and surrounding buildings',
                      'name': 'Mohammed V Square'},
                     {'description': 'A beautiful Catholic cathedral built in the early 20th century',
                      'name': 'Casablanca Cathedral'},
                     {'description': 'A scenic waterfront promenade with beautiful views of the city and the sea',
                      'name': 'Corniche'}],
 'climate_type': 'temperate',
 'description': 'A large and bustling city with a rich history and culture',
 'location_type': 'city',
 'most_notably_known_for': 'Its historic architecture and cultural significance',
 'name': 'Casablanca',
 'parents': ['Morocco', 'Africa'],
 'tags': ['city', 'cultural', 'historical', 'expensive']}

This can then be parsed into an instance of our Pydantic class.
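For example, a minimal sketch of that round-trip, reusing the simplified model from earlier (Pydantic ignores the extra keys it doesn't know about by default):

from pydantic import ValidationError

try:
    casablanca = TouristLocation.model_validate_json(
        outputs["choices"][0]["message"]["content"]
    )
    print(casablanca.tags)  # ['city', 'cultural', 'historical', 'expensive']
except ValidationError as e:
    # Surface any schema mismatches instead of failing silently
    print(e)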

The Gemini API, Google's managed LLM service, claims limited JSON mode support for Gemini 1.5 Flash in its documentation. However, it can be made to work with a few adjustments.

Here are the general instructions to get it to work:

import os

import google.generativeai as genai
from google.generativeai.types import ContentDict

# Clean up the schema so the Gemini API accepts it (helpers defined below)
schema = TouristLocation.model_json_schema()
schema = replace_value_in_dict(schema.copy(), schema.copy())
del schema["$defs"]
delete_keys_recursive(schema, key_to_delete="title")
delete_keys_recursive(schema, key_to_delete="location_long_lat")
delete_keys_recursive(schema, key_to_delete="default")
delete_keys_recursive(schema, key_to_delete="minItems")

print(schema)

messages = [
    ContentDict(
        role="user",
        parts=[
            "You are a helpful assistant that outputs in JSON. "
            f"Follow this schema {TouristLocation.model_json_schema()}"
        ],
    ),
    ContentDict(role="user", parts=["Generate information about Hawaii, US."]),
    ContentDict(role="model", parts=[f"{location.model_dump_json()}"]),
    ContentDict(role="user", parts=["Generate information about Casablanca"]),
]

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Using `response_mime_type` with `response_schema` requires a Gemini 1.5 Pro model
model = genai.GenerativeModel(
    "gemini-1.5-flash",
    # Set the `response_mime_type` to output JSON
    # Pass the schema object to the `response_schema` field
    generation_config={
        "response_mime_type": "application/json",
        "response_schema": schema,
    },
)

response = model.generate_content(messages)
print(response.text)

Here's how to overcome Gemini's limitations:

  1. Replace $ref with Full Definitions: Gemini stumbles on schema references ($ref). These are used when you have a nested object definition. Replace them with the complete definition from your schema (a toy usage example follows the helper below).
def replace_value_in_dict(item, original_schema):
    # Source: https://github.com/pydantic/pydantic/issues/889
    if isinstance(item, list):
        return [replace_value_in_dict(i, original_schema) for i in item]
    elif isinstance(item, dict):
        if list(item.keys()) == ["$ref"]:
            definitions = item["$ref"][2:].split("/")
            res = original_schema.copy()
            for definition in definitions:
                res = res[definition]
            return res
        else:
            return {
                key: replace_value_in_dict(i, original_schema)
                for key, i in item.items()
            }
    else:
        return item
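For instance, on a toy schema (hypothetical, for illustration only), the helper inlines the referenced definition:

toy = {
    "$defs": {"Attraction": {"type": "object", "properties": {"name": {"type": "string"}}}},
    "type": "object",
    "properties": {"attraction": {"$ref": "#/$defs/Attraction"}},
}
flattened = replace_value_in_dict(toy.copy(), toy.copy())
del flattened["$defs"]  # safe once every "$ref" has been inlined
print(flattened["properties"]["attraction"])
# {'type': 'object', 'properties': {'name': {'type': 'string'}}}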
  2. Remove Unsupported Keys: Gemini doesn't yet handle keys like "title", "anyOf", or "minItems". Remove these from your schema. This results in a less readable and less restrictive schema, but we don't have another choice if we insist on using Gemini.
def delete_keys_recursive(d, key_to_delete):
    if isinstance(d, dict):
        # Delete the key if it exists
        if key_to_delete in d:
            del d[key_to_delete]
        # Recursively process all items in the dictionary
        for k, v in d.items():
            delete_keys_recursive(v, key_to_delete)
    elif isinstance(d, list):
        # Recursively process all items in the list
        for item in d:
            delete_keys_recursive(item, key_to_delete)
  3. One-Shot or Few-Shot Prompting for Enums: Gemini sometimes struggles with enums, outputting all possible values instead of a single selection. The values are also separated by “|” in a single string, making them invalid according to our schema. Use one-shot prompting, providing a correctly formatted example, to guide it towards the desired behavior.
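To catch that failure mode programmatically, you can validate Gemini's response and retry with an added example. A minimal sketch (assuming the full TouristLocation model with the climate_type enum):

from pydantic import ValidationError

try:
    result = TouristLocation.model_validate_json(response.text)
except ValidationError as e:
    # Typical failure: climate_type comes back as "tropical | desert | temperate"
    # instead of a single value. Append a correctly formatted assistant example
    # to `messages` (one-shot prompting) and call generate_content again.
    print(e)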

By applying these transformations and providing clear examples, you can successfully generate structured JSON output with the Gemini API.

JSON mode lets you get structured data directly from your LLMs, making them more useful for practical applications. While frameworks like Llama.cpp offer straightforward implementations, you might encounter issues with cloud services like the Gemini API.

Hopefully, this blog gave you a better practical understanding of how JSON mode works and how you can use it, even with the Gemini API, which only has partial support so far.

Now that I was able to get Gemini to somewhat work with JSON mode, I can complete the implementation of my LLM workflow, where having data structured in a specific way is necessary.

You can find the main code of this post here: https://gist.github.com/CVxTz/8eace07d9bd2c5123a89bf790b5cc39e