Multi-Agent Collaboration with the AutoGen Framework: AI Agentic Design Patterns with AutoGen

Date: 2024-06-08 08:30:06

AI Agentic Design Patterns with AutoGen

These are my study notes for the DeepLearning.AI short course https://www.deeplearning.ai/short-courses/ai-agentic-design-patterns-with-autogen/.


What you’ll learn in this course

In AI Agentic Design Patterns with AutoGen you’ll learn how to build and customize multi-agent systems, enabling agents to take on different roles and collaborate to accomplish complex tasks using AutoGen, a framework that enables development of LLM applications using multi-agents.

In this course you’ll create:

  • A two-agent chat that shows a conversation between two standup comedians, using “ConversableAgent,” a built-in agent class of AutoGen for constructing multi-agent conversations.
  • A sequence of chats between agents to provide a fun customer onboarding experience for a product, using the multi-agent collaboration design pattern.
  • A high-quality blog post by using the agent reflection framework. You’ll use the “nested chat” structure to develop a system where reviewer agents, nested within a critic agent, reflect on the blog post written by another agent.
  • A conversational chess game where two agent players can call a tool and make legal moves on the chessboard, by implementing the tool use design pattern.
  • A coding agent capable of generating the necessary code to plot stock gains for financial analysis. This agent can also integrate user-defined functions into the code.
  • Agents with coding capabilities to complete a financial analysis task. You’ll create two systems where agents collaborate and seek human feedback. The first system will generate code from scratch using an LLM, and the second will use user-provided code.

You can use the AutoGen framework with any model via API call or locally within your own environment.
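The course notebooks use gpt-3.5-turbo through the OpenAI API, but the same llm_config dictionary can point at any OpenAI-compatible endpoint. A minimal sketch, not from the course; the model name and base_url below are placeholders for your own local server:

llm_config = {
    "config_list": [
        {
            "model": "llama3",                        # whatever model your server exposes
            "base_url": "http://localhost:11434/v1",  # e.g. a locally hosted OpenAI-compatible server
            "api_key": "not-needed-for-local",        # the client requires a value; a local server may ignore it
        }
    ]
}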

By the end of the course, you’ll have hands-on experience with AutoGen’s core components and a solid understanding of agentic design patterns. You’ll be ready to effectively implement multi-agent systems in your workflows.

Table of Contents

  • AI Agentic Design Patterns with AutoGen
    • What you’ll learn in this course
  • Lesson 1: Multi-Agent Conversation and Stand-up Comedy
    • Setup
    • Define an AutoGen agent
    • Conversation
    • Print some results
    • Get a better summary of the conversation
    • Chat Termination
  • Lesson 2: Sequential Chats and Customer Onboarding
    • Setup
    • Creating the needed agents
    • Creating tasks
    • Start the onboarding process
    • Print out the summary
    • Print out the cost
  • Lesson 3: Reflection and Blogpost Writing
    • The task!
    • Create a writer agent
    • Adding reflection
    • Nested chat
    • Orchestrate the nested chats to solve the task
    • Get the summary
  • Lesson 4: Tool Use and Conversational Chess
    • Setup
    • Initialize the chess board
    • Define the needed tools
      • 1. Tool for getting legal moves
      • 2. Tool for making a move on the board
    • Create agents
    • Register the tools
    • Register the nested chats
    • Start the Game
    • Adding a fun chitchat to the game!
  • Lesson 5: Coding and Financial Analysis
    • Define a code executor
    • Create agents
      • 1. Agent with code executor configuration
      • 2. Agent with code writing capability
    • The task!
    • Let's see the plot!
    • User-Defined Functions
      • Create a new executor with the user-defined functions
      • Let's update the agents with the new system message
      • Start the same task again!
  • Lesson 6: Planning and Stock Report Generation
    • The task!
    • Build a group chat
    • Define the group chat
    • Start the group chat!
    • Add a speaker selection policy
  • Afterword

Lesson 1: Multi-Agent Conversation and Stand-up Comedy


Setup

from utils import get_openai_api_key
OPENAI_API_KEY = get_openai_api_key()
llm_config = {"model": "gpt-3.5-turbo"}

requirements.txt

# requirements file
# note which revision of python, for example 3.9.6
# in this file, insert all the pip install needs, include revision

# python==3.10.13
pyautogen==0.2.25
chess==1.10.0
matplotlib
numpy
pandas
yfinance

utils.py

# Add your utilities or helper functions to this file.

import os
from dotenv import load_dotenv, find_dotenv

# these expect to find a .env file at the directory above the lesson.
# the format for that file is (without the comment)
# API_KEYNAME=AStringThatIsTheLongAPIKeyFromSomeService
def load_env():
    _ = load_dotenv(find_dotenv())

def get_openai_api_key():
    load_env()
    openai_api_key = os.getenv("OPENAI_API_KEY")
    return openai_api_key

Define an AutoGen agent

from autogen import ConversableAgent

agent = ConversableAgent(
    name="chatbot",
    llm_config=llm_config,
    human_input_mode="NEVER",
)
reply = agent.generate_reply(
    messages=[{"content": "Tell me a joke.", "role": "user"}]
)
print(reply)

Output

Sure! Here you go:

Why don't scientists trust atoms?

Because they make up everything!

reply = agent.generate_reply(
    messages=[{"content": "Repeat the joke.", "role": "user"}]
)
print(reply)

Output

Sure, which joke would you like me to repeat for you?
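Note that a bare generate_reply call is stateless: the agent has no memory of the earlier exchange, which is why it asks which joke to repeat. The next section sets up an actual conversation in which the agents do retain the memory of their interactions.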

Conversation

Setting up a conversation between two agents, Cathy and Joe, where the memory of their interactions is retained.

cathy = ConversableAgent(
    name="cathy",
    system_message=
    "Your name is Cathy and you are a stand-up comedian.",
    llm_config=llm_config,
    human_input_mode="NEVER",
)

joe = ConversableAgent(
    name="joe",
    system_message=
    "Your name is Joe and you are a stand-up comedian. "
    "Start the next joke from the punchline of the previous joke.",
    llm_config=llm_config,
    human_input_mode="NEVER",
)
chat_result = joe.initiate_chat(
    recipient=cathy, 
    message="I'm Joe. Cathy, let's keep the jokes rolling.",
    max_turns=2,
)

Output

(Vocabulary note: scarecrow, US pronunciation [ˈskerkroʊ])

joe (to cathy):

I'm Joe. Cathy, let's keep the jokes rolling.

--------------------------------------------------------------------------------
cathy (to joe):

Sure thing, Joe! Why did the scarecrow win an award? Because he was outstanding in his field!

--------------------------------------------------------------------------------
joe (to cathy):

Haha, that scarecrow sure knows how to stand out! Just like me at a buffet!

--------------------------------------------------------------------------------
cathy (to joe):

Haha, I bet you're a real "stand-out" at the buffet line, Joe! I always try to stand out too, but usually I just end up blending in with the crowd...just like my dad's dad jokes!

Print some results

You can print out:

  1. Chat history
  2. Cost
  3. Summary of the conversation

import pprint

pprint.pprint(chat_result.chat_history)

Output

[{'content': "I'm Joe. Cathy, let's keep the jokes rolling.",
  'role': 'assistant'},
 {'content': 'Sure thing, Joe! Why did the scarecrow win an award? Because he '
             'was outstanding in his field!',
  'role': 'user'},
 {'content': 'Haha, that scarecrow sure knows how to stand out! Just like me '
             'at a buffet!',
  'role': 'assistant'},
 {'content': 'Haha, I bet you\'re a real "stand-out" at the buffet line, Joe! '
             'I always try to stand out too, but usually I just end up '
             "blending in with the crowd...just like my dad's dad jokes!",
  'role': 'user'}]

pprint.pprint(chat_result.cost)

Output

{'usage_excluding_cached_inference': {'gpt-3.5-turbo-0125': {'completion_tokens': 90,
                                                             'cost': 0.00023350000000000004,
                                                             'prompt_tokens': 197,
                                                             'total_tokens': 287},
                                      'total_cost': 0.00023350000000000004},
 'usage_including_cached_inference': {'gpt-3.5-turbo-0125': {'completion_tokens': 90,
                                                             'cost': 0.00023350000000000004,
                                                             'prompt_tokens': 197,
                                                             'total_tokens': 287},
                                      'total_cost': 0.00023350000000000004}}
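The two blocks differ only when some calls are served from AutoGen's local cache: usage_excluding_cached_inference counts only requests actually sent to the API, while usage_including_cached_inference also counts results returned from the cache.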

pprint.pprint(chat_result.summary)

Output

('Haha, I bet you\'re a real "stand-out" at the buffet line, Joe! I always try '
 'to stand out too, but usually I just end up blending in with the '
 "crowd...just like my dad's dad jokes!")

Get a better summary of the conversation

chat_result = joe.initiate_chat(
    cathy, 
    message="I'm Joe. Cathy, let's keep the jokes rolling.", 
    max_turns=2, 
    summary_method="reflection_with_llm",
    summary_prompt="Summarize the conversation",
)
pprint.pprint(chat_result.summary)

Output

('Joe and Cathy exchanged jokes and banter, poking fun at themselves and each '
 'other in a light-hearted manner. They shared humorous quips about standing '
 'out and blending in, showcasing their playful personalities.')

Chat Termination

A chat can be terminated by specifying termination conditions.
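Besides matching a termination phrase, you can also cap how many times an agent auto-replies. A minimal sketch, not part of the course code (bounded_bot is a made-up name; max_consecutive_auto_reply is a standard ConversableAgent parameter in pyautogen 0.2):

bounded_bot = ConversableAgent(
    name="bounded_bot",
    llm_config=llm_config,
    human_input_mode="NEVER",
    max_consecutive_auto_reply=2,  # stop auto-replying after two consecutive replies
)

The course example below relies on is_termination_msg instead, so the agents stop as soon as one of them says the agreed phrase.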

cathy = ConversableAgent(
    name="cathy",
    system_message=
    "Your name is Cathy and you are a stand-up comedian. "
    "When you're ready to end the conversation, say 'I gotta go'.",
    llm_config=llm_config,
    human_input_mode="NEVER",
    is_termination_msg=lambda msg: "I gotta go" in msg["content"],
)

joe = ConversableAgent(
    name="joe",
    system_message=
    "Your name is Joe and you are a stand-up comedian. "
    "When you're ready to end the conversation, say 'I gotta go'.",
    llm_config=llm_config,
    human_input_mode="NEVER",
    is_termination_msg=lambda msg: "I gotta go" in msg["content"] or "Goodbye" in msg["content"],
)
chat_result = joe.initiate_chat(
    recipient=cathy,
    message="I'm Joe. Cathy, let's keep the jokes rolling."
)

Output


joe (to cathy):

I'm Joe. Cathy, let's keep the jokes rolling.

--------------------------------------------------------------------------------
cathy (to joe):

Hey Joe, glad you're enjoying the jokes! Alright, here's one for you: Why did the math book look sad? Because it had too many problems. 

Alright, your turn! Hit me with your best joke, Joe!
......
--------------------------------------------------------------------------------
cathy (to joe):

Haha, that's a timely joke, Joe! Alright, here's one for you: Why did the coffee file a police report? It got mugged. 

You're crushing it, Joe! Your turn!

--------------------------------------------------------------------------------
joe (to cathy):

Haha, I love that one, Cathy! Alright, here's a joke: I told my wife she should embrace her mistakes. She gave me a hug. 

Your turn, Cathy!

--------------------------------------------------------------------------------
cathy (to joe):

Haha, that's a good one, Joe! Let's end with this one: Why did the old man fall in the well? Because he couldn't see that well. 

Thanks for the laughs, Joe! I gotta go.

--------------------------------------------------------------------------------

cathy.send(message="What's last joke we talked about?", recipient=joe)

Output

cathy (to joe):

What's last joke we talked about?

--------------------------------------------------------------------------------
joe (to cathy):

The last joke we talked about was: Why did the old man fall in the well? Because he couldn't see that well. Thanks for the laughs, and have a great day!

--------------------------------------------------------------------------------
cathy (to joe):

Thank you! You too!

--------------------------------------------------------------------------------
joe (to cathy):

You're welcome! Take care, Cathy!

--------------------------------------------------------------------------------
cathy (to joe):

You too, Joe! Bye!

--------------------------------------------------------------------------------
joe (to cathy):

Goodbye!

--------------------------------------------------------------------------------

Lesson 2: Sequential Chats and Customer Onboarding


Setup

llm_config={"model": "gpt-3.5-turbo"}
from autogen import ConversableAgent

Creating the needed agents

onboarding_personal_information_agent = ConversableAgent(
    name="Onboarding Personal Information Agent",
    system_message='''You are a helpful customer onboarding agent,
    you are here to help new customers get started with our product.
    Your job is to gather customer's name and location.
    Do not ask for other information. Return 'TERMINATE' 
    when you have gathered all the information.''',
    llm_config=llm_config,
    code_execution_config=False,
    human_input_mode="NEVER",
)
onboarding_topic_preference_agent = ConversableAgent(
    name="Onboarding Topic preference Agent",
    system_message='''You are a helpful customer onboarding agent,
    you are here to help new customers get started with our product.
    Your job is to gather customer's preferences on news topics.
    Do not ask for other information.
    Return 'TERMINATE' when you have gathered all the information.''',
    llm_config=llm_config,
    code_execution_config=False,
    human_input_mode="NEVER",
)
customer_engagement_agent = ConversableAgent(
    name="Customer Engagement Agent",
    system_message='''You are a helpful customer service agent
    here to provide fun for the customer based on the user's
    personal information and topic preferences.
    This could include fun facts, jokes, or interesting stories.
    Make sure to make it engaging and fun!
    Return 'TERMINATE' when you are done.''',
    llm_config=llm_config,
    code_execution_config=False,
    human_input_mode="NEVER",
    is_termination_msg=lambda msg: "terminate" in msg.get("content").lower(),
)
customer_proxy_agent = ConversableAgent(
    name="customer_proxy_agent",
    llm_config=False,
    code_execution_config=False,
    human_input_mode="ALWAYS",
    is_termination_msg=lambda msg: "terminate" in msg.get("content").lower(),
)
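Note that customer_proxy_agent stands in for the human customer: llm_config=False means it never calls an LLM itself, and human_input_mode="ALWAYS" makes it pause and ask for keyboard input at every turn, while the three onboarding agents reply automatically.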

Creating tasks

Now, you can craft a series of tasks to facilitate the onboarding process.

chats = [
    {
        "sender": onboarding_personal_information_agent,
        "recipient": customer_proxy_agent,
        "message": 
            "Hello, I'm here to help you get started with our product."
            "Could you tell me your name and location?"