Guaranteeing valid JSON responses with OpenAI
Using OpenAI functions in PlayFetch to format your responses
Functions are a powerful feature offered by OpenAI. You can use them to provide your prompts with additional information as and when the model asks for it. They’re really easy to implement in PlayFetch because we handle all of the state for you - just call the API and pass back the response, and we handle the rest.
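If you’re curious what that state handling involves, here’s a rough sketch of the raw round trip using the openai Python package - the get_forecast function, its schema, and the model name are illustrative stand-ins for whatever your app actually uses:

```python
import json

import openai  # assumes the openai package with OPENAI_API_KEY set in the environment

MODEL = "gpt-3.5-turbo-0613"  # any functions-capable chat model works


def get_forecast(city: str) -> dict:
    # Hypothetical stand-in for a real weather lookup.
    return {"city": city, "temp_c": 8, "conditions": "light rain"}


functions = [{
    "name": "get_forecast",
    "description": "Get tomorrow's weather forecast for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

messages = [{"role": "user", "content": "What should I wear in London tomorrow?"}]
response = openai.ChatCompletion.create(model=MODEL, messages=messages, functions=functions)
message = response["choices"][0]["message"]

# If the model asks for the forecast, call the function and pass the
# result back so it can finish answering - this is the state PlayFetch
# manages for you.
if message.get("function_call"):
    args = json.loads(message["function_call"]["arguments"])
    messages.append(message)
    messages.append({
        "role": "function",
        "name": "get_forecast",
        "content": json.dumps(get_forecast(**args)),
    })
    response = openai.ChatCompletion.create(model=MODEL, messages=messages, functions=functions)

print(response["choices"][0]["message"]["content"])
```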
But they’re also really powerful for something I come up against much more frequently: wanting to format my endpoint return values as JSON. All of the foundation models will, with varying degrees of success, format things as JSON if you ask them to. It can be as simple as saying “the response must be valid JSON with the following schema:” and if you’re lucky it’ll work every time. If you’re not, you can easily end up with malformed JSON, which is a pain to deal with in an app.
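Concretely, that in-prompt approach tends to look something like this (the schema here is invented for illustration):

```
The response must be valid JSON with the following schema:
{"umbrella_needed": <boolean>, "suggestions": [<string>]}
```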
Using functions allows you to guarantee that the response is valid JSON with minimal effort. The trick here is to specify a function (I like to think of it as a ‘data display’ function) that your prompt should call when it’s done, to present the result to your app.
Take this example of a prompt that suggests the correct clothing for a specific weather forecast.
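Here’s an illustrative version (the exact wording isn’t important, and the forecast details are invented):

```
You are a helpful assistant that suggests what to wear.
Given the forecast below, suggest appropriate headwear, a top
layer, and footwear, and say whether an umbrella is needed.

Forecast: London, tomorrow, 8°C, light rain clearing by noon,
wind 15 mph.
```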
If I run this I get what you might expect - a verbose response with the answers to my query. The exact output varies from run to run, but it looks something like this:
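```
For a day like tomorrow in London, I’d recommend dressing in
layers. Start with a light jacket or sweater for the cool
morning. Since light rain is forecast, you should definitely
take an umbrella. A warm hat isn’t essential, but a beanie
would help with the wind, and comfortable waterproof shoes or
boots are a good choice for footwear.
```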
In my weather app I want this response structured in a way that makes it easier to present. For example, it would be great if needing an umbrella were a boolean, and if I could refer to each item of clothing individually. I’d love it as a JSON object that I can easily extract the data from.
Using functions I can do this without touching my prompt. All I have to do is click on the “Functions” tab and specify a function that takes its parameters in the format I want for my output - something like this (the function and property names here are my own choices; anything descriptive works):
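```json
{
  "name": "display_clothing_suggestions",
  "description": "Display the clothing suggestions for the forecast to the user",
  "parameters": {
    "type": "object",
    "properties": {
      "umbrella_needed": {
        "type": "boolean",
        "description": "Whether the user should take an umbrella"
      },
      "headwear": {
        "type": "string",
        "description": "Suggested headwear"
      },
      "top_layer": {
        "type": "string",
        "description": "Suggested top layer"
      },
      "footwear": {
        "type": "string",
        "description": "Suggested footwear"
      }
    },
    "required": ["umbrella_needed", "headwear", "top_layer", "footwear"]
  }
}
```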
Now if I run my prompt again, instead of verbose prose I get something like this:
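```json
{
  "umbrella_needed": true,
  "headwear": "beanie",
  "top_layer": "sweater",
  "footwear": "boots"
}
```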
Pretty neat.
As you can see, asking for the response to be formatted as JSON seems to have changed the detail of the suggestions - they’re now one word rather than something like “a light jacket or sweater”. You may have noticed the same thing with other formatting methods, such as including a schema in the prompt: how you ask the model to present the answer affects what the answer is. However, I’ve found this to be less of a problem when using functions in this way, and it’s good practice to be verbose about the type of data you’re asking for. Adding ‘For each item include some options where appropriate, such as "a light jacket or sweater" rather than just saying "sweater".’ solves the problem and makes it much more likely I’ll get consistent results.
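One natural home for that guidance is the parameter description in the function definition (though it works in the prompt too) - for example, again with my own illustrative names:

```json
"top_layer": {
  "type": "string",
  "description": "Suggested top layer. Include some options where appropriate, such as \"a light jacket or sweater\" rather than just saying \"sweater\"."
}
```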
Right, I need to go and buy a hat.