30 Oct 2025 - tsp
Last update 30 Oct 2025
8 mins
When working with large language models (LLMs), getting predictable structured responses is often far more valuable than plain text generation. For example, you might want the model to return JSON matching a specific schema rather than a natural language summary, so that you can continue processing the information programmatically. While OpenAI, Mistral, xAI and Ollama handle this with a direct response_format option to which you can pass a JSON Schema or a pydantic model, Anthropic's Claude API takes a slightly different path that is not obvious at first glance but even more flexible than the other approaches: there is no dedicated parameter for structured output as in the other client libraries, and consequently no limitation to JSON output either.
This short article explores how to achieve structured output with Anthropic's API using a workaround that has been suggested by numerous people. I will first contrast it with how other clients like OpenAI handle this, then explain the idea and walk through an example of the approach with Anthropic's models.

OpenAI's APIs - as well as Mistral's, xAI's and Ollama's - provide a direct and simple way to pass a schema to produce structured output. Below is an example that defines a pydantic model, passes its JSON Schema to the OpenAI client via the response_format argument and parses/validates the reply back into that pydantic model.
from typing import List

from openai import OpenAI
from pydantic import BaseModel, ConfigDict

# Define your structured shape using Pydantic. extra="forbid" makes
# pydantic emit "additionalProperties": false, which OpenAI's strict
# mode requires; strict mode also requires every property to be listed
# as required, so the fields carry no defaults.
class Summary(BaseModel):
    model_config = ConfigDict(extra="forbid")

    headline: str
    highlights: List[str]

# Convert to JSON Schema for the API
schema = Summary.model_json_schema()

# Instantiate the OpenAI client and call the chat completions API
client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You output only valid JSON matching the schema."},
        {"role": "user", "content": "Summarise the article at a high level."},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "Summary",
            "schema": schema,
            # Enforce the schema server-side
            "strict": True,
        },
    },
)

# Parse the JSON string in the assistant message into your pydantic model
content = resp.choices[0].message.content  # guaranteed valid JSON under response_format
summary = Summary.model_validate_json(content)

# And for the sake of the example provide output on STDOUT
print(summary.headline)
print(summary.highlights)
response_format={"type": "json_schema", ...} lets the model validate against your schema server-side. The content returned is valid JSON (as a string) that you can feed directly into Summary.model_validate_json(...).Summary.schema() instead of model_json_schema() and Summary.parse_raw(...) instead of model_validate_json(...).Claudes chat completion endpoint does not directly accept a response_format parameter. Instead, most users apply tools to handle structured outputs. This is also suggested in Anthropics official docs though I know some people who did not get the idea without seeing an example. The very simple trick is to synthesize a virtual tool definition from the users provided schema, forcing the assistant to call it. One has to remember that any exchange with the model including tool calls are just strings, there is no requirement that a tool call really equals to a function call in your orchestrator, framework or programming language. It’s just the request for some behaviour by the model.
The workflow looks like this:
- Take the schema from the request's response_format field (e.g. { "type": "json_object" } or { "type": "json_schema", ... })
- Add a synthetic tool to the tools list while keeping any tools already defined. This step:
  - normalises the schema (inlining $ref entries and removing unsupported JSON Schema helpers)
  - names the tool structured_response (or a unique name if one already exists)
- Force the model to call the synthetic tool by setting tool_choice = {"type": "tool", "name": "structured_response"}
- Parse the returned tool_use block containing the structured JSON payload

This approach mirrors Anthropic's own guidance and allows structured responses even though the public API doesn't yet have a native response_format option. A minimal sketch of the translation step follows below.
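The following sketch shows roughly how such a translation could be written, assuming plain dict request bodies and skipping the schema normalisation; the function name translate_response_format is my own and not part of any SDK:

from copy import deepcopy

def translate_response_format(request: dict) -> dict:
    """Turn an OpenAI-style request carrying a response_format into an
    Anthropic-style request that forces a synthetic tool call.
    Schema normalisation ($ref inlining etc.) is omitted for brevity."""
    request = deepcopy(request)
    rf = request.pop("response_format", None)
    if not rf or rf.get("type") != "json_schema":
        return request  # nothing to translate
    name = rf["json_schema"].get("name", "structured_response")
    schema = rf["json_schema"]["schema"]
    # Keep any tools the caller already defined and append the synthetic one
    request.setdefault("tools", []).append({
        "name": name,
        "description": "Synthetic tool enforcing structured output.",
        "input_schema": schema,
    })
    # Force the model to call exactly this tool
    request["tool_choice"] = {"type": "tool", "name": name}
    return request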
Let's look at an example request for a structured summary. In the OpenAI request format it would look like the following:
{
  "model": "claude-3-5-sonnet",
  "messages": [
    {"role": "user", "content": "Summarise this article"}
  ],
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "summary",
      "schema": {
        "type": "object",
        "properties": {
          "headline": {"type": "string"},
          "highlights": {
            "type": "array",
            "items": {"type": "string"}
          }
        },
        "required": ["headline", "highlights"]
      }
    }
  }
}
Before passing this to Anthropic, we translate the schema specification into the following tool specification and tool choice. The tool choice forces the model to call the tool:
"tools": [
{
"name": "summary",
"description": "Synthetic tool enforcing structured output.",
"input_schema": { /* normalised schema */ }
}
],
"tool_choice": {"type": "tool", "name": "summary"}
Claude will then output, for example:
{
  "type": "tool_use",
  "name": "summary",
  "input": {
    "headline": "AI Models Evolve Toward Safer Reasoning",
    "highlights": [
      "Anthropic refines structured response API",
      "OpenAI and Ollama offer direct response formats",
      "Structured output aids automation and validation"
    ]
  }
}
The input field of the tool_use block contains exactly the JSON response that we desire.
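For completeness, here is a minimal sketch of the same flow using the official anthropic Python SDK together with the Summary pydantic model from above; model name and max_tokens are just example values:

import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarise this article"}],
    tools=[{
        "name": "summary",
        "description": "Synthetic tool enforcing structured output.",
        "input_schema": Summary.model_json_schema(),
    }],
    tool_choice={"type": "tool", "name": "summary"},
)

# The forced tool call arrives as a tool_use content block; its input
# is already a parsed dict matching our schema
tool_use = next(block for block in response.content if block.type == "tool_use")
summary = Summary.model_validate(tool_use.input)
print(summary.headline)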
This pattern provides structured output from Anthropic models, and it also works for other models that do not support a JSON mode. It's a clean, powerful trick that works with any model that supports function calling / tool use. It is a bridge between the ecosystem of strict response_format options with validated JSON data structures and Anthropic's flexible tool-call mechanism. I actually use this trick in my mini-apigw API gateway to translate OpenAI requests into Anthropic requests.