Your First AI Agent

Extension Patterns: Advanced Agent Techniques

Enhance your agent with multi-turn planning, tool chaining, and proactive suggestions

Beyond Basic Agents

The agent built in the core chapters handles single-turn conversations well: receive a request, execute an action, return a response. However, real-world tasks often require more sophisticated behavior. Complex research questions need planning before execution. Multi-step workflows benefit from combining multiple tools in sequence. And domain expertise lets an agent anticipate what users need before they ask.

This chapter introduces three extension patterns that transform basic conversational agents into intelligent assistants capable of handling complex, multi-faceted tasks.

Pattern 1: Multi-Turn Planning

Multi-turn planning enables agents to decompose complex tasks into sequential steps before execution. Instead of responding immediately, the agent first outlines a plan, validates the approach, then executes step-by-step. This reduces errors on ambiguous requests and provides transparency into the agent's reasoning process.

Implementation Approach

The planning pattern adds a reasoning phase before tool execution:

# Agent sees request, generates plan first
plan = agent.create_plan(user_request)
# Execute plan steps sequentially
results = [agent.execute_step(s) for s in plan]
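
A slightly fuller sketch of the same idea appears below. It assumes the agent exposes a single model call (here a placeholder named llm_complete) and that plans come back as numbered lines; both are illustrative conventions rather than a fixed API.

# Minimal planning sketch. llm_complete is a stand-in for whatever model
# client the agent already uses; wire it up before running.
def llm_complete(prompt: str) -> str:
    raise NotImplementedError("connect your model client here")

PLAN_PROMPT = (
    "Break the following request into 3-6 numbered steps.\n"
    "Return one step per line, nothing else.\n\nRequest: {request}"
)

def create_plan(user_request: str) -> list[str]:
    # Ask the model for a numbered outline, then strip the numbering.
    raw = llm_complete(PLAN_PROMPT.format(request=user_request))
    return [line.lstrip("0123456789. ").strip()
            for line in raw.splitlines() if line.strip()]

def execute_plan(steps: list[str]) -> list[str]:
    # Run steps in order so each result can inform the next one.
    results: list[str] = []
    for step in steps:
        context = "\n".join(results)
        results.append(llm_complete(f"Context so far:\n{context}\n\nDo: {step}"))
    return results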

When to Use Multi-Turn Planning

Research tasks benefit most from explicit planning. Consider a request like "analyze the impact of AI on semiconductor supply chains." Without planning, the agent might search randomly or miss key aspects. With planning, the agent first outlines the steps: define scope, gather industry data, identify AI applications, analyze supply chain effects, synthesize findings. Each step builds on previous results.

Planning works best when tasks have multiple valid approaches, require gathering context before acting, or involve domain-specific workflows that benefit from explicit structure.
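
For a request like the semiconductor example, the plan itself can be nothing more than an ordered list of step descriptions. The small dataclass below is one illustrative way to keep a completion flag and result alongside each step; it is not a required structure.

from dataclasses import dataclass

@dataclass
class PlanStep:
    description: str
    done: bool = False
    result: str | None = None

# The outline from the semiconductor example, expressed as plan steps.
plan = [
    PlanStep("Define the scope of the analysis"),
    PlanStep("Gather semiconductor industry data"),
    PlanStep("Identify AI applications across the supply chain"),
    PlanStep("Analyze supply chain effects"),
    PlanStep("Synthesize findings into a summary"),
]

for step in plan:
    print(("[done] " if step.done else "[todo] ") + step.description)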

Pattern 2: Tool Chaining

Tool chaining combines multiple tool calls in sequence to accomplish complex tasks. The output of one tool becomes the input to the next, creating powerful data transformation pipelines. This pattern enables agents to perform multi-step research, analysis, and synthesis workflows that would be difficult to handle in a single tool call.

Implementation Approach

Tool chaining passes results between successive tool calls:

# Chain: search → extract → analyze
papers = search_tool("AI productivity research")
summaries = [extract_tool(p.text) for p in papers]
report = analyze_tool(summaries)
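
The sketch below fills in the shape of that chain. The three tool functions are placeholders for whatever search, extraction, and analysis tools the agent registers; the point is only how the output of one call flows into the next.

from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    text: str

def search_tool(query: str) -> list[Paper]:
    raise NotImplementedError("call your search backend here")

def extract_tool(text: str) -> str:
    raise NotImplementedError("extract key findings from one document here")

def analyze_tool(summaries: list[str]) -> str:
    raise NotImplementedError("synthesize the summaries into a report here")

def research_chain(query: str) -> str:
    papers = search_tool(query)                         # step 1: gather sources
    summaries = [extract_tool(p.text) for p in papers]  # step 2: condense each
    return analyze_tool(summaries)                      # step 3: synthesize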

When to Use Tool Chaining

Economic analysis workflows naturally fit tool chaining. A competitive intelligence request chains web search (gather competitor data), data extraction (structure findings), calculator (compute metrics), then file output (save report). Each tool focuses on one transformation step.

Software engineering tasks benefit similarly. Code review chains file read (get source code), linter (check syntax), test generator (create tests), then file write (save test suite). Business intelligence chains search (market data), analysis (compute trends), then visualization (create charts).

The pattern works when tasks require multiple distinct operations, each operation has clear inputs and outputs, and intermediate results have value for subsequent steps.
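
One convenient way to express such pipelines is a list of named stages, where each stage receives the previous stage's output and a failure stops the chain early. The runner below is a generic sketch; the lambda stages standing in for the competitive intelligence tools are purely illustrative.

from typing import Any, Callable

Stage = tuple[str, Callable[[Any], Any]]

def run_chain(stages: list[Stage], initial: Any) -> Any:
    # Feed each stage the previous stage's output; stop early on failure
    # so broken intermediate results are not silently passed along.
    value = initial
    for name, stage in stages:
        try:
            value = stage(value)
        except Exception as exc:
            raise RuntimeError(f"chain failed at stage '{name}': {exc}") from exc
    return value

# Toy stand-ins for the competitive intelligence chain described above:
# web search -> data extraction -> metric calculation -> report output.
report = run_chain(
    [
        ("search", lambda q: [f"result for {q}"]),
        ("extract", lambda results: {"competitors": len(results)}),
        ("calculate", lambda d: {**d, "share": 1.0 / max(d["competitors"], 1)}),
        ("format", lambda m: f"Competitors: {m['competitors']}, share: {m['share']:.0%}"),
    ],
    "acme corp competitors",
)
print(report)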

Pattern 3: Proactive Suggestions

Proactive suggestions enable agents to anticipate user needs based on conversation context and domain patterns. Instead of waiting for explicit requests, the agent monitors the conversation flow and offers relevant next steps, related resources, or potential issues to consider. This transforms reactive assistants into collaborative partners.

Implementation Approach

Proactive behavior requires context awareness and pattern recognition:

# Agent monitors conversation state
if detects_incomplete_analysis(context):
    suggest("Consider analyzing temporal trends?")

When to Use Proactive Suggestions

Research agents recognize patterns in academic workflows. After summarizing papers, the agent suggests citation analysis. After finding contradictory results, it recommends methodology comparison. After completing a literature review, it proposes identifying research gaps.

Code assistants notice missing test coverage and suggest generating unit tests. Business agents detect incomplete metric tracking and recommend additional KPIs. The pattern works when domain expertise enables predicting valuable next steps, conversation history reveals user goals, and suggestions add value without being intrusive.
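
These domain patterns can be written down as a small trigger-to-suggestion table that the agent scans after each turn; the trigger names below are made up for illustration.

# Hypothetical trigger-to-suggestion table for a research agent.
SUGGESTION_RULES = {
    "summarized_papers": "Would you like a citation analysis of these papers?",
    "contradictory_results": "Should I compare the methodologies behind the conflicting results?",
    "finished_literature_review": "I can identify research gaps based on this review.",
}

def proactive_suggestions(observed_events: set[str]) -> list[str]:
    # Only surface suggestions whose trigger actually occurred,
    # which keeps the agent helpful without becoming noisy.
    return [msg for trigger, msg in SUGGESTION_RULES.items() if trigger in observed_events]

print(proactive_suggestions({"summarized_papers"}))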

Combining Patterns for Advanced Workflows

A comprehensive research agent combines all three patterns. A user asks: "What are emerging AI applications in supply chain management?"

Planning phase: Agent outlines approach (define scope, search academic papers, search industry reports, identify patterns, synthesize findings).

Tool chaining: Search returns papers → Extract key findings → Analyze themes → Generate summary.

Proactive suggestions: After synthesis, the agent suggests "Would you like me to identify research gaps in this area?" or "Should I track new publications on this topic?"

The combined workflow handles ambiguous requests, executes complex multi-step research, and guides users toward deeper insights.
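
Put together, the flow looks roughly like the sketch below. The three helpers are stubs standing in for the planning, chaining, and suggestion sketches shown earlier; only the orchestration is the point.

def create_plan(request: str) -> list[str]:
    # Stub: a real agent would ask the model to outline these steps.
    return ["define scope", "search academic papers", "search industry reports",
            "identify patterns", "synthesize findings"]

def run_research_chain(request: str) -> str:
    # Stub for the search -> extract -> analyze -> summarize chain.
    return f"summary of findings for: {request}"

def suggest_next_steps() -> list[str]:
    return ["Would you like me to identify research gaps in this area?",
            "Should I track new publications on this topic?"]

def handle_request(request: str) -> None:
    plan = create_plan(request)             # planning phase
    print("Plan:", plan)
    summary = run_research_chain(request)   # tool chaining phase
    print(summary)
    for s in suggest_next_steps():          # proactive suggestion phase
        print("Suggestion:", s)

handle_request("What are emerging AI applications in supply chain management?")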

A code assistant agent applies the same patterns to collaborative development workflows.

Planning phase: When reviewing a pull request, the agent first outlines the review steps (check syntax, verify tests, assess security, evaluate performance, suggest improvements).

Tool chaining: Linter checks code quality → Test runner validates coverage → Security scanner detects vulnerabilities → Performance analyzer identifies bottlenecks.

Proactive suggestions: After identifying missing tests, the agent offers "I can generate unit tests for these new functions" or "This pattern might benefit from refactoring for better maintainability."

The workflow provides structured code review with actionable feedback and anticipates developer needs.

A business intelligence agent uses patterns for strategic decision support.

Planning phase: A request like "evaluate market entry strategy" triggers planning: gather market data, analyze competitors, assess financial requirements, identify risks, recommend approach.

Tool chaining: Search market reports → Extract key metrics → Calculate financial projections → Compare scenarios → Generate executive summary.

Proactive suggestions: After presenting the analysis, the agent suggests "Would you like me to model different pricing strategies?" or "Should I track competitor product launches?"

The combined approach delivers strategic insights with context-aware guidance.

Implementation Considerations

Adding these patterns increases agent complexity and execution time. Multi-turn planning adds overhead for simple requests. Tool chaining requires careful error handling when intermediate steps fail. Proactive suggestions need calibration to avoid becoming intrusive.

Start by implementing patterns selectively based on task characteristics. Use planning for complex, ambiguous requests. Apply tool chaining when multi-step workflows provide clear value. Enable proactive suggestions for domain-specific patterns with high user value.
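
A simple way to apply planning selectively is a small gate that inspects the request before deciding whether to plan at all; the word-count threshold and keyword list below are placeholder heuristics to be tuned per domain.

# Rough heuristic gate: only invoke planning for requests that look
# complex or open-ended. Thresholds and keywords are illustrative.
COMPLEX_HINTS = ("analyze", "compare", "evaluate", "research", "strategy")

def should_plan(request: str) -> bool:
    longish = len(request.split()) > 15
    open_ended = any(word in request.lower() for word in COMPLEX_HINTS)
    return longish or open_ended

print(should_plan("What time is it right now?"))                  # False
print(should_plan("Evaluate a market entry strategy for the EU"))  # True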

Monitor agent behavior and gather user feedback to refine when each pattern activates. Well-implemented extension patterns transform basic agents into intelligent collaborators that handle sophisticated real-world tasks effectively.