#!/usr/bin/env python
# coding: utf-8
# # Conversations with Chat History Compression Enabled
#
#
#
# > **Deprecation notice:** `CompressibleAgent` has been deprecated and will no longer be available as of version 0.2.30. Please transition to `TransformMessages`, which is now the recommended approach. For a detailed guide on implementing this new standard, refer to our user guide on Compressing Text with LLMLingua, which provides examples of using the LLMLingua transform as a replacement for `CompressibleAgent`.
#
#
# AutoGen offers conversable agents powered by LLMs, tools, or humans, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).
#
# In this notebook, we demonstrate how to enable compression of history messages using the `CompressibleAgent`. While this agent retains all the default functionalities of the `AssistantAgent`, it also provides the added feature of compression when activated through the `compress_config` setting.
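# The gating idea behind `compress_config` can be sketched in plain Python. Note this is an illustrative toy, not AutoGen's actual implementation: the whitespace-based token count and the threshold values are assumptions for demonstration only (real agents count tokens with the model's tokenizer).

```python
def plan_request(messages, max_tokens=4096, trigger_count=1024, mode="COMPRESS"):
    """Decide what to do with the chat history before calling the model.

    Simplified sketch: we approximate one token per whitespace-separated
    word, which is NOT how OpenAI models tokenize text.
    """
    used = sum(len(m["content"].split()) for m in messages)
    if mode == "TERMINATE":
        # No compression: refuse to send once the model's limit is hit.
        return "terminate" if used > max_tokens else "send"
    if mode == "COMPRESS" and used > trigger_count:
        # Compression mode: compress older turns once the trigger is hit.
        return "compress"
    return "send"


# A long history exceeds the trigger count, so it would be compressed.
history = [{"role": "user", "content": "word " * 2000}]
print(plan_request(history, trigger_count=1024))  # → compress
```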
#
# Different compression modes are supported:
#
# 1. `compress_config=False` (Default): `CompressibleAgent` is equivalent to `AssistantAgent`.
# 2. `compress_config=True` or `compress_config={"mode": "TERMINATE"}`: no compression is performed, but token usage is counted before each request to the OpenAI model; the conversation is terminated directly if total usage exceeds the model's maximum (to avoid the token-limit error from the OpenAI API).
# 3. `compress_config={"mode": "COMPRESS", "trigger_count":