Getting Started

Basic example

Creating your first Assistant is straightforward. You start by configuring the Language Model (LLM) you want to use, then create your agent and connect the language model to that agent.

Step-by-step guide

  1. Start by configuring a Language Model. In the example below we will configure OpenAI.

Note that you usually only need to create the llm object once and re-use it for subsequent agents you create.


import os

from superagent.client import Superagent


client = Superagent(
    base_url="https://api.beta.superagent.sh",
    token=os.environ["SUPERAGENT_API_KEY"]
)

llm = client.llm.create(request={
    "provider": "OPENAI",
    "apiKey": os.environ["OPENAI_API_KEY"]
})
  2. Create an Assistant.
agent = client.agent.create(
    name="Chat Assistant",
    description="My first Assistant",
    type="SUPERAGENT",
    avatar="https://myavatar.com/homanp.png",
    is_active=True,
    initial_message="Hi there! How can I help you?",
    llm_model="GPT_3_5_TURBO_16K_0613",
    prompt="You are a helpful AI Assistant",
)
  3. Attach the LLM to the Assistant and invoke it.
client.agent.add_llm(agent_id=agent.data.id, llm_id=llm.data.id)

prediction = client.agent.invoke(
    agent_id=agent.data.id,
    input="Hi there!",
    enable_streaming=False,
    session_id="my_session"  # Best practice is to create a unique session per user
)

print(prediction.data.get("output"))

# Hello there, how can I help you?
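
If you serve multiple users, one way to follow the session comment above is to derive a unique session ID per user. The snippet below is a minimal sketch; user_id is a hypothetical value coming from your own application.

# Minimal sketch: one session per user keeps chat histories separate.
user_id = "user_123"  # hypothetical identifier provided by your own application

prediction = client.agent.invoke(
    agent_id=agent.data.id,
    input="Hi there!",
    enable_streaming=False,
    session_id=f"session_{user_id}"
)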

By separating the creation of each object, you can reuse LLMs, Agents, or any other object such as Tools and Datasources multiple times without having to re-create them.
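
As an illustration of that reuse, the sketch below attaches the llm object created earlier to a second Assistant instead of creating a new one; the name, prompt, and other values are placeholders.

# Sketch: reuse the existing llm object for a second Assistant.
support_agent = client.agent.create(request={
    "name": "Support Assistant",  # placeholder values
    "description": "A second Assistant reusing the same LLM",
    "isActive": True,
    "initialMessage": "Hi! How can I help you today?",
    "llmModel": "GPT_3_5_TURBO_16K_0613",
    "prompt": "You are a helpful support Assistant",
})

client.agent.add_llm(agent_id=support_agent.data.id, llm_id=llm.data.id)  # no second llm.create needed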

Full code

import os

from superagent.client import Superagent


client = Superagent(
    base_url="https://api.beta.superagent.sh",
    token=os.environ["SUPERAGENT_API_KEY"]
)

llm = client.llm.create(request={
    "provider": "OPENAI",
    "apiKey": os.environ["OPENAI_API_KEY"]
})

agent = client.agent.create(request={
    "name": "Chat Assistant",
    "description": "My first Assistant",
    "avatar": "https://myavatar.com/homanp.png",  # A valid image URL (jpg or png)
    "isActive": True,
    "initialMessage": "Hi there! How can I help you?",
    "llmModel": "GPT_3_5_TURBO_16K_0613",
    "prompt": "You are a helpful AI Assistant",
})

client.agent.add_llm(agent_id=agent.data.id, llm_id=llm.data.id)

prediction = client.agent.invoke(
    agent_id=agent.data.id,
    input="Hi there!",
    enable_streaming=False,
    session_id="my_session"  # Best practice is to create a unique session per user
)

print(prediction.data.get("output"))

# Hello there, how can I help you?

In just a couple of lines of code we have created a production-ready chat Assistant using one of the GPT models. Check out the other examples to see how to add datasources and tools to your Assistants.
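
As a rough preview of what those examples cover, a datasource can be created and attached with the same pattern as add_llm above. This is a sketch under assumptions: the method and field names used here (client.datasource.create, add_datasource, the "type" and "url" fields) and the document URL are not taken from this guide, so refer to the datasources example for the exact API.

# Rough sketch only -- names and fields below are assumptions, see the datasources example.
datasource = client.datasource.create(request={
    "name": "Earnings report",
    "description": "An example PDF datasource",
    "type": "PDF",
    "url": "https://example.com/report.pdf"  # hypothetical document URL
})

client.agent.add_datasource(agent_id=agent.data.id, datasource_id=datasource.data.id)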

Replit template

We’ve created a Replit template for this, which you can run here.