Structured outputs

You can force your Assistant to reply using structured outputs. This is useful when you want the Assistant to return data as JSON.

Step-by-step guide

  1. Start by creating an LLM, a Tool, and an Agent. Note that you usually only need to create the LLM object once and reuse it for subsequent agents.
import os
from superagent.client import Superagent

client = Superagent(
    base_url="https://api.beta.superagent.sh",
    token=os.environ["SUPERAGENT_API_KEY"]
)

# We recommend querying for existing LLMs prior to creating a new one.
llm = client.llm.create(request={
    "provider": "OPENAI",
    "apiKey": os.environ["OPENAI_API_KEY"]
})

agent = client.agent.create(
    name="Structured Assistant",
    description="An Assistant that returns responses in JSON",
    avatar="https://mylogo.com/logo.png",  # Replace with a real image
    is_active=True,
    initial_message="Hi there! How can I help you?",
    llm_model="GPT_4_1106_PREVIEW",
    prompt="Use the Browser to answer the user's question."
)

tool = client.tool.create(
    name="Browser",
    description="Useful for analyzing and summarizing websites and URLs.",
    type="BROWSER"
)

client.agent.add_tool(agent_id=agent.data.id, tool_id=tool.data.id)
client.agent.add_llm(agent_id=agent.data.id, llm_id=llm.data.id)
  2. Invoke your Agent with the output_schema parameter. This parameter should hold the desired schema.
prediction = client.agent.invoke(
    agent_id=agent.data.id,
    input="List the top 5 articles on https://news.ycombinator.com.",
    enable_streaming=False,
    session_id="my_session_id",
    output_schema="[{title: string, points: number, url: string}]"  # Your desired output schema
)

print(prediction.data.get("output"))

# [{
#   "title": "...",
#   "points": "...",
#   "url": "..."
# }, {
#   ...
# }]

By passing output_schema, we make sure the Assistant returns a JSON response that matches our desired output schema.
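Because the output arrives as a JSON string, you will typically parse it before use. A minimal sketch, using an illustrative value in place of a real API response (the sample data below is made up, not actual output):

```python
import json

# Illustrative stand-in for prediction.data.get("output");
# a real value would be produced by the Assistant at runtime.
raw_output = '[{"title": "Example article", "points": 42, "url": "https://example.com"}]'

# Parse the JSON string into a list of dictionaries matching the schema.
articles = json.loads(raw_output)
for article in articles:
    print(f'{article["title"]} ({article["points"]} points): {article["url"]}')
```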

You can also define the output schema for workflows.

  1. If you want to define an output schema for specific steps in a workflow, pass output_schemas to the invoke method. output_schemas is a list of dictionaries, where each dictionary contains the step_id and the output_schema for that step, e.g. output_schemas=[{step_id: "step_id", output_schema: "schema"}, ...]

  2. If you want to define an output schema for only the final step, pass output_schema to the invoke method.
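As a sketch of option 1, a per-step schema payload could be built like this. The step IDs below are hypothetical placeholders; use the IDs from your own workflow:

```python
# Hypothetical step IDs -- replace "step_1"/"step_2" with your workflow's real step IDs.
output_schemas = [
    {
        "step_id": "step_1",
        "output_schema": "[{title: string, url: string}]",
    },
    {
        "step_id": "step_2",
        "output_schema": "{summary: string}",
    },
]

# This list would then be passed as the output_schemas argument of the
# invoke call, pairing each workflow step with its desired schema.
```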

Full code

import os
from superagent.client import Superagent

client = Superagent(
    base_url="https://api.beta.superagent.sh",
    token=os.environ["SUPERAGENT_API_KEY"]
)

# We recommend querying for existing LLMs prior to creating a new one.
llm = client.llm.create(request={
    "provider": "OPENAI",
    "apiKey": os.environ["OPENAI_API_KEY"]
})

agent = client.agent.create(
    name="Structured Assistant",
    description="An Assistant that returns responses in JSON",
    avatar="https://mylogo.com/logo.png",  # Replace with a real image
    is_active=True,
    initial_message="Hi there! How can I help you?",
    llm_model="GPT_4_1106_PREVIEW",
    prompt="Use the Browser to answer the user's question."
)

tool = client.tool.create(
    name="Browser",
    description="Useful for analyzing and summarizing websites and URLs.",
    type="BROWSER"
)

client.agent.add_tool(agent_id=agent.data.id, tool_id=tool.data.id)
client.agent.add_llm(agent_id=agent.data.id, llm_id=llm.data.id)

prediction = client.agent.invoke(
    agent_id=agent.data.id,
    input="List the top 5 articles on https://news.ycombinator.com.",
    enable_streaming=False,
    session_id="my_session_id",
    output_schema="[{title: string, points: number, url: string}]"  # Your desired output schema
)

print(prediction.data.get("output"))

# [{
#   "title": "...",
#   "points": "...",
#   "url": "..."
# }, {
#   ...
# }]