Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
The OpenAIWrapper from autogen tracks token counts and costs of your API calls. Use the create() method to initiate requests and print_usage_summary() to retrieve a detailed usage report, including total cost and token usage for both cached and actual requests.

- mode=["actual", "total"] (default): print the usage summary for non-cached completions and for all completions (including cache).
- mode="actual": only print non-cached usage.
- mode="total": only print all usage (including cache).

Reset your session's usage data with clear_usage_summary() when needed.
We also support cost estimation for agents. Use Agent.print_usage_summary() to print the cost summary for the agent. You can retrieve the usage summary as a dict using Agent.get_actual_usage() and Agent.get_total_usage(). Note that Agent.reset() also resets the usage summary.
To gather usage data for a list of agents, we provide the utility function autogen.gather_usage_summary(agents), which takes a list of agents and returns their combined usage summary.
If you are using Azure OpenAI, the model returned from a completion doesn't include version information; it is either 'gpt-35-turbo' or 'gpt-4'. From there, we calculate the cost based on gpt-3.5-0613 ((0.0015, 0.002) per 1k prompt and completion tokens) and gpt-4-0613 ((0.03, 0.06) per 1k prompt and completion tokens). This means the cost is wrong if you are using the 1106 versions of the models from Azure OpenAI.
This will be improved in the future. However, the token count summary is accurate, so you can use the token counts to calculate the cost yourself.
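As a sketch of that workaround, cost can be recomputed directly from the token counts using the per-1k prices quoted above. The price table and the estimate_cost helper below are illustrative, not part of the autogen API; substitute the actual rates for your model version.

```python
# Recompute cost from token counts using per-1k-token prices.
# Prices below are the gpt-3.5-0613 and gpt-4-0613 rates mentioned above;
# replace them with the rates for the model version you actually use.
PRICES_PER_1K = {
    "gpt-35-turbo": (0.0015, 0.002),  # (prompt, completion)
    "gpt-4": (0.03, 0.06),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Cost = prompt_tokens * prompt_price/1k + completion_tokens * completion_price/1k."""
    prompt_price, completion_price = PRICES_PER_1K[model]
    return (prompt_tokens * prompt_price + completion_tokens * completion_price) / 1000

# e.g. 25 prompt tokens and 164 completion tokens on 'gpt-4':
print(round(estimate_cost("gpt-4", 25, 164), 5))  # 0.01059
```

This reproduces the 0.01059 figure shown in the usage summaries later in this notebook.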
AutoGen requires Python>=3.8:

pip install "pyautogen"
The config_list_from_json function loads a list of configurations from an environment variable or a JSON file.
import autogen
from autogen import OpenAIWrapper
from autogen import AssistantAgent, UserProxyAgent
from autogen import gather_usage_summary
config_list = autogen.config_list_from_json(
"OAI_CONFIG_LIST",
# filter_dict={
# "model": ["gpt-3.5-turbo", "gpt-35-turbo"],
# },
)
It first looks for the environment variable "OAI_CONFIG_LIST", which needs to be a valid JSON string. If that variable is not found, it then looks for a JSON file named "OAI_CONFIG_LIST". It filters the configs by model (you can filter by other keys as well).
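That lookup order can be illustrated with a small stdlib-only sketch. load_config_list below is a hypothetical stand-in for config_list_from_json, not the library's implementation:

```python
import json
import os

def load_config_list(env_or_file: str, filter_models=None):
    """Illustrative stand-in for config_list_from_json's lookup order:
    try the environment variable first, then fall back to a file of the same name."""
    raw = os.environ.get(env_or_file)
    if raw is None:
        with open(env_or_file) as f:
            raw = f.read()
    configs = json.loads(raw)
    if filter_models is not None:
        # Keep only configs whose "model" is in the allowed set.
        configs = [c for c in configs if c.get("model") in filter_models]
    return configs

# The env var takes precedence and must hold a valid JSON string.
os.environ["OAI_CONFIG_LIST"] = json.dumps(
    [
        {"model": "gpt-4", "api_key": "<your OpenAI API key>"},
        {"model": "gpt-35-turbo", "api_key": "<your Azure OpenAI API key>"},
    ]
)
print([c["model"] for c in load_config_list("OAI_CONFIG_LIST", {"gpt-4"})])  # ['gpt-4']
```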
The config list looks like the following:
config_list = [
{
"model": "gpt-4",
"api_key": "<your OpenAI API key>",
}, # OpenAI API endpoint for gpt-4
{
"model": "gpt-35-turbo-0613", # 0613 or newer is needed to use functions
"base_url": "<your Azure OpenAI API base>",
"api_type": "azure",
"api_version": "2024-02-15-preview", # 2023-07-01-preview or newer is needed to use functions
"api_key": "<your Azure OpenAI API key>"
}
]
You can set the value of config_list in any way you prefer. Please refer to this notebook for full code examples of the different methods.
client = OpenAIWrapper(config_list=config_list)
messages = [
{"role": "user", "content": "Can you give me 3 useful tips on learning Python? Keep it simple and short."},
]
response = client.create(messages=messages, model="gpt-3.5-turbo", cache_seed=None)
print(response.cost)
0.00861
When creating an instance of OpenAIWrapper, the cost of all completions from that instance is recorded. Call print_usage_summary() to check your usage summary, and clear_usage_summary() to reset it.
client = OpenAIWrapper(config_list=config_list)
messages = [
{"role": "user", "content": "Can you give me 3 useful tips on learning Python? Keep it simple and short."},
]
client.print_usage_summary() # print usage summary
No usage summary. Please call "create" first.
# The first creation
# By default, cache_seed is set to 41 and enabled. If you don't want to use cache, set cache_seed to None.
response = client.create(messages=messages, model="gpt-35-turbo-1106", cache_seed=41)
client.print_usage_summary() # default to ["actual", "total"]
client.print_usage_summary(mode="actual") # print actual usage summary
client.print_usage_summary(mode="total") # print total usage summary
----------------------------------------------------------------------------------------------------
No actual cost incurred (all completions are using cache).
Usage summary including cached usage:
Total cost: 0.01059
* Model 'gpt-4': cost: 0.01059, prompt_tokens: 25, completion_tokens: 164, total_tokens: 189
----------------------------------------------------------------------------------------------------

----------------------------------------------------------------------------------------------------
No actual cost incurred (all completions are using cache).
----------------------------------------------------------------------------------------------------

----------------------------------------------------------------------------------------------------
Usage summary including cached usage:
Total cost: 0.01059
* Model 'gpt-4': cost: 0.01059, prompt_tokens: 25, completion_tokens: 164, total_tokens: 189
----------------------------------------------------------------------------------------------------
# Retrieve the usage summaries as dicts
print(client.actual_usage_summary)
print(client.total_usage_summary)
None
{'total_cost': 0.01059, 'gpt-4': {'cost': 0.01059, 'prompt_tokens': 25, 'completion_tokens': 164, 'total_tokens': 189}}
# Since cache is enabled, the same completion will be returned from cache, which will not incur any actual cost.
# So actual cost doesn't change but total cost doubles.
response = client.create(messages=messages, model="gpt-35-turbo-1106", cache_seed=41)
client.print_usage_summary()
----------------------------------------------------------------------------------------------------
No actual cost incurred (all completions are using cache).
Usage summary including cached usage:
Total cost: 0.02118
* Model 'gpt-4': cost: 0.02118, prompt_tokens: 50, completion_tokens: 328, total_tokens: 378
----------------------------------------------------------------------------------------------------
# clear usage summary
client.clear_usage_summary()
client.print_usage_summary()
No usage summary. Please call "create" first.
# All completions are returned from cache, so no actual cost is incurred.
response = client.create(messages=messages, model="gpt-35-turbo-1106", cache_seed=41)
client.print_usage_summary()
----------------------------------------------------------------------------------------------------
No actual cost incurred (all completions are using cache).
Usage summary including cached usage:
Total cost: 0.01059
* Model 'gpt-4': cost: 0.01059, prompt_tokens: 25, completion_tokens: 164, total_tokens: 189
----------------------------------------------------------------------------------------------------
- Agent.print_usage_summary() prints the cost summary for the agent.
- Agent.get_actual_usage() and Agent.get_total_usage() return the usage summary as a dict. If an agent doesn't use an LLM, these methods return None.
- Agent.reset() resets the usage summary.
- autogen.gather_usage_summary gathers the usage summary for a list of agents.

assistant = AssistantAgent(
"assistant",
system_message="You are a helpful assistant.",
llm_config={
"timeout": 600,
"cache_seed": None,
"config_list": config_list,
},
)
ai_user_proxy = UserProxyAgent(
name="ai_user",
human_input_mode="NEVER",
max_consecutive_auto_reply=1,
code_execution_config=False,
llm_config={
"config_list": config_list,
},
# In the system message the "user" always refers to the other agent.
system_message="You ask a user for help. You check the answer from the user and provide feedback.",
)
assistant.reset()
math_problem = "$x^3=125$. What is x?"
ai_user_proxy.initiate_chat(
assistant,
message=math_problem,
)
ai_user (to assistant):

$x^3=125$. What is x?

--------------------------------------------------------------------------------
assistant (to ai_user):

To find the value of $x$ when $x^3 = 125$, you can find the cube root of 125. The cube root of a number is a value that, when multiplied by itself three times, gives the original number.

The cube root of 125 can be written as $125^{1/3}$ or $\sqrt[3]{125}$. Since $5 \times 5 \times 5 = 125$, it follows that:

$$x = \sqrt[3]{125} = 5$$

Therefore, $x = 5$.

--------------------------------------------------------------------------------
ai_user (to assistant):

Your calculation is correct. The value of $x$ when $x^3 = 125$ is indeed $x = 5$. Great job!

--------------------------------------------------------------------------------
assistant (to ai_user):

Thank you for the confirmation! I'm glad the answer was helpful. If you have any more questions or need assistance with anything else, feel free to ask!

--------------------------------------------------------------------------------
ChatResult(chat_history=[{'content': '$x^3=125$. What is x?', 'role': 'assistant'}, {'content': 'To find the value of $x$ when $x^3 = 125$, you can find the cube root of 125. The cube root of a number is a value that, when multiplied by itself three times, gives the original number.\n\nThe cube root of 125 can be written as $125^{1/3}$ or $\\sqrt[3]{125}$. Since $5 \\times 5 \\times 5 = 125$, it follows that:\n\n$$x = \\sqrt[3]{125} = 5$$\n\nTherefore, $x = 5$.', 'role': 'user'}, {'content': 'Your calculation is correct. The value of $x$ when $x^3 = 125$ is indeed $x = 5$. Great job!', 'role': 'assistant'}, {'content': "Thank you for the confirmation! I'm glad the answer was helpful. If you have any more questions or need assistance with anything else, feel free to ask!", 'role': 'user'}], summary="Thank you for the confirmation! I'm glad the answer was helpful. If you have any more questions or need assistance with anything else, feel free to ask!", cost=({'total_cost': 0.022019999999999998, 'gpt-4': {'cost': 0.022019999999999998, 'prompt_tokens': 372, 'completion_tokens': 181, 'total_tokens': 553}}, {'total_cost': 0.022019999999999998, 'gpt-4': {'cost': 0.022019999999999998, 'prompt_tokens': 372, 'completion_tokens': 181, 'total_tokens': 553}}), human_input=[])
ai_user_proxy.print_usage_summary()
print()
assistant.print_usage_summary()
Agent 'ai_user':
----------------------------------------------------------------------------------------------------
Usage summary excluding cached usage:
Total cost: 0.00669
* Model 'gpt-4': cost: 0.00669, prompt_tokens: 161, completion_tokens: 31, total_tokens: 192
All completions are non-cached: the total cost with cached completions is the same as actual cost.
----------------------------------------------------------------------------------------------------

Agent 'assistant':
----------------------------------------------------------------------------------------------------
Usage summary excluding cached usage:
Total cost: 0.01533
* Model 'gpt-4': cost: 0.01533, prompt_tokens: 211, completion_tokens: 150, total_tokens: 361
All completions are non-cached: the total cost with cached completions is the same as actual cost.
----------------------------------------------------------------------------------------------------
user_proxy = UserProxyAgent(
name="user",
human_input_mode="NEVER",
max_consecutive_auto_reply=2,
code_execution_config=False,
default_auto_reply="That's all. Thank you.",
)
user_proxy.print_usage_summary()
No cost incurred from agent 'user'.
print("Actual usage summary for assistant (excluding completion from cache):", assistant.get_actual_usage())
print("Total usage summary for assistant (including completion from cache):", assistant.get_total_usage())
print("Actual usage summary for ai_user_proxy:", ai_user_proxy.get_actual_usage())
print("Total usage summary for ai_user_proxy:", ai_user_proxy.get_total_usage())
print("Actual usage summary for user_proxy:", user_proxy.get_actual_usage())
print("Total usage summary for user_proxy:", user_proxy.get_total_usage())
Actual usage summary for assistant (excluding completion from cache): {'total_cost': 0.01533, 'gpt-4': {'cost': 0.01533, 'prompt_tokens': 211, 'completion_tokens': 150, 'total_tokens': 361}}
Total usage summary for assistant (including completion from cache): {'total_cost': 0.01533, 'gpt-4': {'cost': 0.01533, 'prompt_tokens': 211, 'completion_tokens': 150, 'total_tokens': 361}}
Actual usage summary for ai_user_proxy: {'total_cost': 0.00669, 'gpt-4': {'cost': 0.00669, 'prompt_tokens': 161, 'completion_tokens': 31, 'total_tokens': 192}}
Total usage summary for ai_user_proxy: {'total_cost': 0.00669, 'gpt-4': {'cost': 0.00669, 'prompt_tokens': 161, 'completion_tokens': 31, 'total_tokens': 192}}
Actual usage summary for user_proxy: None
Total usage summary for user_proxy: None
total_usage_summary, actual_usage_summary = gather_usage_summary([assistant, ai_user_proxy, user_proxy])
total_usage_summary
{'total_cost': 0.022019999999999998, 'gpt-4': {'cost': 0.022019999999999998, 'prompt_tokens': 372, 'completion_tokens': 181, 'total_tokens': 553}}