Quick Start - Build Your First Agent in 15 Minutes
Create a minimal conversational agent using the Anthropic API and Python
Build a minimal conversational agent that uses Claude to answer questions. This is the foundation for more advanced agents.
Goal
Create an agent that:
- Takes user input from terminal
- Sends messages to Claude API
- Returns AI-generated responses
- Runs in a simple loop
Setup
Install Anthropic SDK
```shell
pip install anthropic
```
Verify installation:
```shell
python -c "import anthropic; print(anthropic.__version__)"
```
Expected output: a version number (e.g., 0.18.1)
Set API Key
macOS/Linux:
```shell
export ANTHROPIC_API_KEY="your-api-key-here"
```
Windows:
```shell
set ANTHROPIC_API_KEY=your-api-key-here
```
Verify:
```shell
echo $ANTHROPIC_API_KEY   # macOS/Linux
echo %ANTHROPIC_API_KEY%  # Windows
```
Create Project Structure
```shell
mkdir my-first-agent
cd my-first-agent
touch agent.py
```
Directory should contain:
```
my-first-agent/
└── agent.py
```
Core Concepts
The Agent Loop
Every agent follows this pattern:
```python
# Simplified agent loop
while user_wants_to_chat:
    user_input = get_input()
    response = send_to_llm(user_input)
    display(response)
```
System Prompts
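The pseudocode above can be made runnable with a stubbed-out response function; `respond` here is a placeholder standing in for the real Claude API call built later in this lesson:

```python
def respond(text):
    # Placeholder: the real agent replaces this with a Claude API call
    return f"(stub reply to: {text})"

def agent_loop(get_input, display):
    """Get input -> respond -> display, repeating until the user types quit."""
    while True:
        user_input = get_input("You: ").strip()
        if user_input.lower() == "quit":
            break
        display("Agent: " + respond(user_input))

# Drive the loop with canned input instead of the terminal:
script = iter(["Hello", "quit"])
agent_loop(lambda prompt: next(script), print)
# prints: Agent: (stub reply to: Hello)
```

Passing `input` and `print` directly gives the interactive version; the canned-input variant shows the same loop is easy to test without a terminal.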
System prompts define agent behavior and personality:
```python
system_prompt = "You are a helpful research assistant."
# Agent will behave according to this instruction
```
System prompts shape agent personality, expertise, and response style. They are the most powerful configuration tool.
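In the Anthropic Messages API, the system prompt is passed as a top-level `system` parameter rather than as a message role. A minimal request sketch (the client call itself is shown in the build patterns below):

```python
# The system prompt rides alongside the messages, not inside them:
request = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "system": "You are a helpful research assistant.",
    "messages": [{"role": "user", "content": "What is an AI agent?"}],
}
# Later: response = client.messages.create(**request)
```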
Message Format
Anthropic API uses structured message format:
```python
messages = [
    {"role": "user", "content": "What is an AI agent?"},
    {"role": "assistant", "content": "An AI agent is..."}
]
```
Build the Agent
Minimal Code Policy: The complete implementation is available in the curriculum GitHub repo. Here we show only the key patterns.
Pattern 1: Initialize API Client
```python
import anthropic

client = anthropic.Anthropic()
```
Pattern 2: Create Message
```python
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}]
)
```
Pattern 3: Extract Response
```python
answer = response.content[0].text
print(answer)
```
Test Your Agent
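Putting the three patterns together, a minimal `agent.py` might look like the sketch below (assumes ANTHROPIC_API_KEY is set; the complete version lives in the curriculum repo):

```python
SYSTEM_PROMPT = "You are a helpful research assistant."

def build_request(user_input):
    """Assemble a single-turn request payload (no conversation memory yet)."""
    return {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "system": SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": user_input}],
    }

def main():
    import anthropic  # Pattern 1: the client reads ANTHROPIC_API_KEY from the env
    client = anthropic.Anthropic()
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() == "quit":
            break
        # Patterns 2 and 3: create the message, then extract the text
        response = client.messages.create(**build_request(user_input))
        print("Agent:", response.content[0].text)
```

Call `main()` (or guard it with `if __name__ == "__main__":`) and run `python agent.py` to start chatting.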
Run the Agent
```shell
python agent.py
```
Expected behavior:
- Displays prompt: You:
- Waits for input
- Shows AI response
- Repeats
Test Conversation
Try these inputs:
```
You: What is an AI agent?
Agent: [Explains AI agents]
You: How do they differ from chatbots?
Agent: [Explains differences]
You: quit
[Program exits]
```
Verify Success
Your agent should:
- ✅ Accept user input from terminal
- ✅ Send to Claude API successfully
- ✅ Display AI-generated responses
- ✅ Exit cleanly on "quit" command
Understanding the Flow
```mermaid
graph LR
    A[User Input] --> B[Send to API]
    B --> C[Claude Processes]
    C --> D[Generate Response]
    D --> E[Display to User]
    E --> A
```
Key Points:
- Each message is independent (no memory yet)
- API call happens every turn
- System prompt shapes responses
- User controls the loop
Common Issues
- ModuleNotFoundError: No module named 'anthropic' — the SDK isn't installed in the active environment; rerun pip install anthropic
- Authentication error on the first API call — ANTHROPIC_API_KEY is missing or wrong; re-export it in the same shell you run the agent from
- Agent appears to hang at You: — it is waiting for input; type a message and press Enter
What You Built
Current Capabilities:
- Single-turn Q&A
- Claude API integration
- Terminal interface
- System prompt configuration
Missing Features:
- ❌ Conversation memory
- ❌ Tool use (web search, calculator)
- ❌ Conversation persistence
- ❌ Multi-turn context
Next Step: Core Build adds memory, tools, and persistence to transform this into a production-ready agent.
Key Takeaways
- Agent Loop: Get input → Send to LLM → Display response → Repeat
- System Prompts: Define agent behavior and personality
- API Integration: Anthropic SDK handles communication with Claude
- Stateless by Default: Each turn is independent without memory
Extension Ideas
Before moving to Core Build, try customizing:
- System Prompt: Change agent personality (formal, casual, expert)
- Model Selection: Try different Claude models (Opus, Sonnet, Haiku)
- Response Length: Adjust the max_tokens parameter
- Temperature: Add a temperature parameter for creativity control
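The tweaks above map onto the request like this (values are illustrative; check Anthropic's current model list before relying on a specific model id):

```python
request = {
    "model": "claude-3-5-sonnet-20241022",  # swap in an Opus or Haiku model id
    "max_tokens": 256,        # shorter cap than the 1024 used earlier
    "temperature": 0.2,       # low = focused/consistent, high = more varied
    "system": "You are a formal, concise expert.",  # personality change
    "messages": [{"role": "user", "content": "What is an AI agent?"}],
}
# response = client.messages.create(**request)
```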
Time Check: 15 minutes complete. Ready for Core Build (50 min) where we add memory, tools, and persistence.