# Configure Language Model
This page describes how to specify the LLM parameters in an agent invocation.
When invoking an agent, you can specify the following LLM parameters:
- `temperature` (default: `0.0`): Controls the randomness of the agent’s responses. A higher value makes the output more random, while a lower value makes it more deterministic.
- `max_tokens` (default: `None`): Sets the maximum length of the agent’s response in tokens. If not specified, the agent uses the default maximum length.
Here’s an example of how to set these parameters when invoking an agent:
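The snippet below is a minimal sketch, not the framework’s actual API: the `Agent` class, its `invoke` method, and the stub implementation are hypothetical stand-ins included only so the example is self-contained and runnable. In practice you would use the agent object and invocation method your framework provides.

```python
from dataclasses import dataclass
from typing import Optional


# Hypothetical stand-in for the framework's Agent class; in practice,
# use the agent you created in the quickstart guide.
@dataclass
class Agent:
    name: str

    def invoke(
        self,
        prompt: str,
        temperature: float = 0.0,          # default: deterministic output
        max_tokens: Optional[int] = None,  # default: model's maximum length
    ) -> str:
        # A real agent would forward these parameters to the underlying LLM
        # request; this stub just echoes them back.
        return f"[{self.name}] temperature={temperature}, max_tokens={max_tokens}"


agent = Agent(name="my-agent")

# A slightly higher temperature allows some variation in the wording;
# max_tokens caps the response at 512 tokens.
response = agent.invoke(
    "Summarize the key points of the quarterly report.",
    temperature=0.2,
    max_tokens=512,
)
print(response)
```

With `temperature` left at its default of `0.0`, repeated invocations of the same prompt should produce essentially identical output; raise it when you want more varied responses.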
Note: We assume that you have already created an agent. If not, please refer to the quickstart guide.