Configure Language Model

This page describes how to specify the LLM parameters in an agent invocation.

When invoking an Agent, you can specify the following LLM parameters:

  • temperature (default: 0.0): This parameter controls the randomness of the agent’s responses. A higher value makes the output more random, while a lower value makes it more deterministic.

  • max_tokens (default: None): This parameter sets the maximum length of the agent’s response in tokens. If not specified, the underlying model’s default maximum applies.

Here’s an example of how to set these parameters when invoking an agent:

Note: We assume that you have already created an agent. If not, please refer to the quickstart guide.


response = client.agent.invoke(
    agent_id=agent.data.id,
    input="What was Tesla's revenue?",
    enable_streaming=False,
    session_id="my_session_id",
    # Override the LLM parameters for this invocation
    llm_params={"temperature": 0.0, "max_tokens": 100},
)
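
For comparison, here is a minimal sketch of the same invocation with a higher temperature, which should produce more varied responses. It reuses the client, agent, and session from the example above; the temperature value 0.7 is just an illustration, and max_tokens is omitted so the default (None) applies:

# Invoke the same agent with a higher temperature for more varied output.
creative_response = client.agent.invoke(
    agent_id=agent.data.id,
    input="What was Tesla's revenue?",
    enable_streaming=False,
    session_id="my_session_id",
    # Higher temperature increases randomness; omitting max_tokens
    # leaves the response length at the model's default.
    llm_params={"temperature": 0.7},
)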