Getting started

Basic example

Creating your first assistant is straightforward. You start by configuring the Large Language Model (LLM) you want to use, then you create your agent and connect the language model to that agent.

Step by step guide

  1. Start by configuring a Language Model. In the example below we will configure OpenAI.

Note that you usually only need to create the llm object once and re-use it for subsequent agents you create.


import os

from superagent.client import Superagent


client = Superagent(
    base_url="https://api.beta.superagent.sh",
    token=os.environ["SUPERAGENT_API_KEY"]
)

llm = client.llm.create(request={
    "provider": "OPENAI",
    "apiKey": os.environ["OPENAI_API_KEY"]
}) 

  2. Create an Assistant.
agent = client.agent.create(request={
    "name": "Chat Assistant",
    "description": "My first Assistant",
    "type": "SUPERAGENT",
    "avatar": "https://myavatar.com/homanp.png",
    "isActive": True,
    "initialMessage": "Hi there! How can I help you?",
    "llmModel": "GPT_3_5_TURBO_16K_0613",
    "prompt": "You are a helpful AI Assistant",
})
  3. Attach the LLM to the Assistant and invoke it.
client.agent.add_llm(agent_id=agent.data.id, llm_id=llm.data.id)

prediction = client.agent.invoke(
    agent_id=agent.data.id,
    input="Hi there!",
    enable_streaming=False,
    session_id="my_session" # Best practice is to create a unique session per user
)

print(prediction.data.get("output"))

# Hello there, how can I help you?

By separating the creation of each object, you can reuse LLMs, Agents, or any other object such as Tools and Datasources multiple times without having to re-create them.
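As a hypothetical sketch of this reuse pattern, you can keep shared defaults in one place and create several Assistants that all attach to the same llm object. The build_agent_request helper below is not part of the SDK; it is just an illustration, and the client and llm objects are assumed to exist as created above.

```python
def build_agent_request(name, prompt, llm_model="GPT_3_5_TURBO_16K_0613"):
    """Build a request dict for client.agent.create, sharing common defaults."""
    return {
        "name": name,
        "description": f"{name} assistant",
        "isActive": True,
        "llmModel": llm_model,
        "prompt": prompt,
    }

# Two different Assistants sharing the same model configuration:
support = build_agent_request("Support Bot", "You answer support questions")
sales = build_agent_request("Sales Bot", "You answer sales questions")

# Each agent is created once and attached to the SAME llm object:
# support_agent = client.agent.create(request=support)
# client.agent.add_llm(agent_id=support_agent.data.id, llm_id=llm.data.id)
# sales_agent = client.agent.create(request=sales)
# client.agent.add_llm(agent_id=sales_agent.data.id, llm_id=llm.data.id)
```

Because the llm was created separately, both Assistants can point at it without re-submitting your provider credentials.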

Full code

import os

from superagent.client import Superagent


client = Superagent(
    base_url="https://api.beta.superagent.sh",
    token=os.environ["SUPERAGENT_API_KEY"]
)

llm = client.llm.create(request={
    "provider": "OPENAI",
    "apiKey": os.environ["OPENAI_API_KEY"]
})

agent = client.agent.create(request={
    "name": "Chat Assistant",
    "description": "My first Assistant",
    "type": "SUPERAGENT",
    "avatar": "https://myavatar.com/homanp.png", # A valid image URL (jpg or png)
    "isActive": True,
    "initialMessage": "Hi there! How can I help you?",
    "llmModel": "GPT_3_5_TURBO_16K_0613",
    "prompt": "You are a helpful AI Assistant",
})

client.agent.add_llm(agent_id=agent.data.id, llm_id=llm.data.id)

prediction = client.agent.invoke(
    agent_id=agent.data.id,
    input="Hi there!",
    enable_streaming=False,
    session_id="my_session" # Best practice is to create a unique session per user
)

print(prediction.data.get("output"))

# Hello there, how can I help you?
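The session_id comments above recommend a unique session per user. One hypothetical way to do that is to derive a deterministic session id from your own user id, so the same user always resumes the same conversation while different users never collide. The session_id_for helper is an illustration, not part of the SDK, and the client and agent objects are assumed to exist as in the example above.

```python
import uuid

def session_id_for(user_id: str) -> str:
    """Map a user id to a stable session id (UUID5 is deterministic)."""
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"superagent-session-{user_id}"))

# The same user always gets the same session id:
# client.agent.invoke(
#     agent_id=agent.data.id,
#     input="Hi there!",
#     enable_streaming=False,
#     session_id=session_id_for("user-123"),
# )
```

Using a deterministic mapping (rather than a random id per request) keeps each user's chat history together across invocations.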

In just a couple of lines of code we have created a production-ready chat Assistant using one of the GPT models. Check out the other examples to learn how to add datasources and tools to your Assistants.

Replit template

We've created a Replit template for this, which you can run here.