
Getting Perfectly Structured Data from LLMs

by Omar Kamali / February 23, 2025 / in AI, OpenAI, Anthropic, Tips

If you've ever struggled to get consistent JSON output from large language models, tool calling offers a surprisingly elegant solution. Here's the secret: We can repurpose function definitions as output templates that force the model to produce data in specific formats.

Read on: this article explains how to get predictable, reliable structured output from any LLM that supports function calling.

Why Structured Output Matters

When building applications with LLMs, we often need machine-readable data - consistent formats that other systems can process. Imagine trying to extract phone numbers from text: without structure, you might get variations like "555-1234" or "call me at five five five...".

This is even more of a problem when you have a complex data generation or extraction task, where you might want a specific data structure with multiple fields as the output.

What is Tool Calling?

Let's review a few essential concepts to grok this approach.

  1. Tool Calling: A feature where LLMs can "call functions" during a conversation: the model can decide to perform an action before giving an answer. Think "get weather", "send email", "call an API".
  2. JSON Schema: A way to describe data formats using rules. Think of it as a blueprint for what your data should look like, in JSON format.

Tool calling works by giving an LLM a definition of the tools available to it. This definition is specified in the JSON Schema format which gives precise control of the nature of arguments the function takes, their names, types, and structure.
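For example, a hypothetical "send_email" tool might be defined as follows. The name, description, and fields here are illustrative, but the overall shape matches the OpenAI tools format, with the parameters block written in standard JSON Schema:

```python
# Hypothetical "send_email" tool definition in the OpenAI "tools" format.
# The "parameters" block is plain JSON Schema describing the arguments.
tools = [{
    "type": "function",
    "function": {
        "name": "send_email",
        "description": "Send an email to a recipient.",
        "parameters": {
            "type": "object",
            "properties": {
                "to": {"type": "string", "description": "Recipient email address"},
                "subject": {"type": "string"},
                "body": {"type": "string"},
            },
            "required": ["to", "subject", "body"],
        },
    },
}]
```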

If the LLM decides to use a function, say "send email", it will populate the arguments "to", "subject", and "body" with values appropriate to the context of the conversation. Asking the LLM to write a cold email to a prospect about a product X will produce output like the following:

{
  "to": "[email protected]",
  "subject": "Cut Customer Churn by 40% with AI-Driven Insights",
  "body": "As a SaaS leader, you understand...
    Our analysis of 200+ SaaS companies shows... [data-driven second paragraph]
    Product X automatically... [solution-focused third paragraph].
    Can we schedule 15 minutes Thursday to discuss your churn reduction goals?"
}

What does this look like? Perfectly structured data! Had you given the LLM the same task without a tool, it would probably have written something like this:

Sure! I will help you write an email to your prospect:

Cut Customer Churn by 40% with AI-Driven Insights

As a SaaS leader, you understand... 
Our analysis of 200+ SaaS companies shows... [data-driven second paragraph]
Product X automatically... [solution-focused third paragraph].
Can we schedule 15 minutes Thursday to discuss your churn reduction goals?

Let me know if you need anything else!

The content is the same, but it's indistinguishable from the LLM's regular response, which would be a nightmare to use in a non-conversational setting. Forget your dreams of sales automation: you don't want to be parsing the email content out of a highly variable LLM response.

So tool calling it is.

The "Aha!" Moment

What if we use tool parameters as a template? Instead of actual functions, we define our desired output format as a schema for a dummy tool. The LLM fills this template like it's preparing function arguments, giving us perfectly structured data!

Traditional approach - unstructured output:

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Extract contact info: John, [email protected]"}]
)

The output might be "Name: John, Email: [email protected]" - again, hard to parse.

Tool-calling approach - structured output:

tools = [{
  "type": "function",
  "function": {
    "name": "extract_data", # A dummy name. Must be related to the task or the LLM might get confused.
    "parameters": { # Our output template
      "type": "object",
      "properties": {
        "name": {"type": "string"},
        "email": {"type": "string"}
      }
    }
  }
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Extract contact info: John, [email protected]"}],
    tools=tools
)

print(response.choices[0].message.tool_calls)


# The result will be something like

[{
    "id": "call_12345xyz",
    "type": "function",
    "function": {
        "name": "extract_data",
        "arguments": "{\"name\":\"John\", \"email\":\"[email protected]\"}"
    }
}]



# So our structured output is accessible as follows

import json

output = json.loads(
    response.choices[0].message.tool_calls[0].function.arguments
)
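In practice a small defensive wrapper helps, since the model can occasionally answer in plain text instead of calling the tool. This is a sketch: `parse_tool_output` and its argument names are my own, not part of the OpenAI SDK:

```python
import json

def parse_tool_output(response, expected_name="extract_data"):
    """Return the parsed arguments of the first matching tool call, or None.

    The model may sometimes reply in prose instead of calling the tool,
    in which case tool_calls is empty and we return None so the caller
    can retry or fall back.
    """
    calls = response.choices[0].message.tool_calls
    if not calls or calls[0].function.name != expected_name:
        return None
    return json.loads(calls[0].function.arguments)
```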

Why This Works Better

  1. Schema Enforcement: The JSON Schema in the tool definition acts as both documentation and validation, quietly guiding the LLM's formatting and dramatically improving reliability.
  2. Built-in Validation: The parameter definitions give us type checking for free; invalid types (e.g. a number instead of text) are far less likely to appear.
  3. Consistency First: The model prioritizes matching the structure over creative formatting, and we avoid parsing free-form responses.

This technique works with any JSON-schema compatible output structure. By framing data extraction as "function calling," we get all the benefits of structured output while working with the model's natural capabilities.
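One refinement worth knowing: by default the model may decide not to call the tool at all. OpenAI's Chat Completions API accepts a `tool_choice` parameter that forces a specific function call, which makes structured output all but guaranteed. The tool name below assumes the `extract_data` example from earlier:

```python
# Force the model to call our dummy tool instead of replying in prose.
tool_choice = {"type": "function", "function": {"name": "extract_data"}}

# Passed alongside the tools list:
# response = client.chat.completions.create(
#     model="gpt-4o",
#     messages=messages,
#     tools=tools,
#     tool_choice=tool_choice,
# )
```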

To conclude

That's it! That's the whole trick: this is all it takes to get better structured outputs. I hope you'll find it helpful. I have used it with great results in countless use cases, even with LLMs that don't officially support OpenAI's Structured Outputs. In fact, I don't use Structured Outputs anymore and always opt for function calling. Maybe you should too?

Pro tips:

  • Use enums for fixed options for even tighter control: "enum": ["home", "work", "mobile"]
  • Get familiar with JSON Schema so you are able to express the specific requirements you need.
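As a sketch of the first tip, a parameters schema that combines an enum with required fields might look like this (the field names are illustrative):

```python
# Illustrative parameters schema: "phone_type" is constrained to three
# allowed values, and both fields must be present in the output.
parameters = {
    "type": "object",
    "properties": {
        "phone_type": {"type": "string", "enum": ["home", "work", "mobile"]},
        "number": {"type": "string"},
    },
    "required": ["phone_type", "number"],
}
```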

You can learn more about JSON Schema and Function Calling in OpenAI's docs.


Written by Omar Kamali, Founder, CEO @ Monitoro, & Strategic Technology Advisor.