Chapter 7: Extension Patterns
Advanced prompting techniques - chain prompts, XML structuring, few-shot workflows, automated processes, and state management
Extension Patterns
Goal: Take your 20-prompt library to the next level with advanced techniques for complex workflows.
You've built a production-ready library of 20 prompts. Now you'll learn to chain them together, use advanced formatting techniques, and create multi-step workflows that solve complex problems your individual prompts can't handle alone.
Prerequisites: Complete Core Build section and have 20 working prompts in your library. These advanced patterns build on fundamental prompt engineering skills.
What You'll Learn
By the end of this section, you'll be able to:
- Chain prompts - Use output from one prompt as input to another
- Structure with XML - Use XML tags for complex, nested instructions
- Implement few-shot chains - Teach multi-step reasoning with examples
- Build multi-step workflows - Automate complex processes with prompt sequences
- Handle state across prompts - Maintain context in long workflows
Time to complete: 10-15 minutes
Difficulty level: Intermediate (requires Core Build completion)
Pattern 1: Prompt Chaining
Use Case: Break complex tasks into specialized steps for higher quality
The Concept
Instead of asking one prompt to do everything (research → analyze → write → format), chain specialized prompts together:
- Prompt 1: Extract information
- Prompt 2: Analyze findings
- Prompt 3: Generate recommendations
- Prompt 4: Format for presentation
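The chaining mechanics can be sketched in a few lines of Python. `run_prompt` below is a stand-in for whatever model call you use (the function name and its tagging behavior are illustrative, not a real API) — the point is that each step's output becomes the next step's input:

```python
def run_prompt(template: str, input_text: str) -> str:
    """Stub for a model call: in practice this would send the filled
    template to your LLM API. Here it just tags the input so the
    chaining logic is testable offline."""
    return f"[{template}] {input_text}"

def chain(steps: list[str], initial_input: str) -> str:
    """Feed each step's output into the next step's input."""
    result = initial_input
    for step in steps:
        result = run_prompt(step, result)
    return result

pipeline = ["Extract information", "Analyze findings",
            "Generate recommendations", "Format for presentation"]
final = chain(pipeline, "paper abstracts...")
```

Replacing `run_prompt` with a real API call turns this into a working pipeline; the chain structure itself does not change.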
Complete Example - Literature Review Chain
Extract (Use Prompt 5 - Literature Extractor from Quick Start)
You are a research librarian extracting structured data from academic papers.
Extract the following from each paper:
**PAPER DETAILS:**
- Title: [exact title]
- Authors: [all authors]
- Year: [publication year]
- Journal/Venue: [where published]
**METHODOLOGY:**
- Research design: [experimental/observational/theoretical]
- Sample size: [N]
- Key variables: [list]
**FINDINGS:**
- Main result: [primary finding in one sentence]
- Statistical significance: [p-values if reported]
- Effect size: [magnitude of effect]
**CONTRIBUTIONS:**
- Gap filled: [what problem this solves]
- Limitations: [acknowledged weaknesses]
Papers to analyze:
"""
[PASTE 5 PAPER ABSTRACTS]
"""Output from Step 1:
PAPER 1
Title: AI and Productivity
Authors: Chen & Rodriguez
Year: 2023
Journal: Journal of Economic Perspectives
Methodology: RCT, N=1,247, variables: remote work, productivity
Findings: 13% productivity increase (p < 0.05)
Contributions: Causal evidence for remote work, limited to knowledge workers
PAPER 2
[...]

Analyze Gaps (Use Prompt 7 - Literature Gap Identifier)
Take the structured extraction from Step 1 and identify research gaps:
You are a research strategist identifying gaps in the literature.
Analyze the following papers and identify research gaps:
**ESTABLISHED FINDINGS:**
What do we now know? (consensus findings across multiple papers)
**CONTRADICTIONS:**
Where do papers disagree?
**METHODOLOGICAL GAPS:**
What methods haven't been tried?
**CONTEXTUAL GAPS:**
What populations/regions/time periods are understudied?
**HIGHEST-VALUE RESEARCH QUESTIONS:**
Based on these gaps, what 3 studies would most advance the field?
Papers to analyze:
"""
[PASTE STRUCTURED OUTPUT FROM STEP 1]
"""Output from Step 2:
ESTABLISHED FINDINGS:
- Remote work increases productivity (3/5 papers, 8-13% range)
- Effect stronger for individual tasks vs. collaborative work
METHODOLOGICAL GAPS:
- No RCTs longer than 6 months
- No studies in manufacturing or service sectors
[...]
HIGHEST-VALUE RESEARCH QUESTIONS:
1. Long-term effects (2+ years): Does productivity gain persist?
2. Industry variation: Which sectors see negative effects?
3. Mechanism: Is it autonomy, reduced commute, or environment?

Generate Research Proposal (Use Prompt 1 - Chain-of-Thought from Core Build)
Use the gap analysis to create a research proposal:
You are a critical thinking expert helping design research studies.
Problem to analyze:
"""
Based on this literature gap analysis:
[PASTE OUTPUT FROM STEP 2]
Design a research study that would fill the most valuable gap identified.
"""
Think through this step-by-step:
**Step 1: Problem Decomposition**
Break down the research design into sub-questions:
- What is the precise research question?
- What methodology would provide causal evidence?
- What data would we need?
- How long would the study take?
- What are the ethical considerations?
**Step 2: Answer Each Sub-Question**
[Continue with chain-of-thought structure...]
**Step 3: Synthesize Proposal**
[Final research proposal]

What This Achieves
Paper abstracts → Structured data → Gap analysis → Research proposal - each step specialized for higher quality than one mega-prompt. Intermediate outputs are useful on their own, and it's easy to debug: if final output is wrong, check which step failed.
When to Use Prompt Chaining: Complex analysis requiring multiple expertise types, tasks where intermediate outputs have independent value, workflows you'll repeat (automate the chain), or when quality matters more than speed.
Pattern 2: XML Structuring for Complex Instructions
Use Case: Organize complex, multi-part prompts with clear boundaries
The Concept
Use XML-like tags to separate different parts of your instruction. This helps AI models parse complex requirements without confusion.
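If you build these prompts often, assembling them from parts keeps them maintainable. A minimal helper (the function names are illustrative; any tag names work as long as they open and close consistently):

```python
def xml_section(tag: str, body: str) -> str:
    """Wrap a block of instructions in an XML-style tag pair."""
    return f"<{tag}>\n{body}\n</{tag}>"

def build_prompt(role: str, sections: dict[str, str]) -> str:
    """Assemble a role line plus tagged sections into one prompt.
    Dict order is preserved, so sections appear as listed."""
    parts = [role] + [xml_section(tag, body) for tag, body in sections.items()]
    return "\n\n".join(parts)

prompt = build_prompt(
    "You are a research analyst synthesizing information from multiple sources.",
    {
        "task": "Create a comparative analysis of the papers below.",
        "constraints": "Response length: 1,000-1,500 words",
        "input_documents": "[PASTE PAPERS]",
    },
)
```

Swapping a section body or reordering the dict changes the prompt without touching the rest of the template.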
Complete Example - Multi-Document Synthesis with XML
You are a research analyst synthesizing information from multiple sources.
<task>
Create a comparative analysis of the papers below, identifying agreements, contradictions, and gaps.
</task>
<output_structure>
<section name="executive_summary">
3-sentence overview of the state of research on this topic
</section>
<section name="consensus_findings">
For each finding where 3+ papers agree:
- Finding: [statement]
- Supporting papers: [list with citations]
- Strength of evidence: [Strong/Moderate/Weak]
</section>
<section name="contradictions">
For each major disagreement:
- Question: [what they disagree about]
- Position A: [view 1 with citations]
- Position B: [view 2 with citations]
- Possible explanation: [why they might differ]
</section>
<section name="research_gaps">
- Methodological: [what methods are missing]
- Contextual: [what populations/settings understudied]
- Theoretical: [what mechanisms unexplored]
</section>
<section name="quality_assessment">
For each paper:
- Citation: [Author Year]
- Sample size: [N]
- Design quality: [High/Medium/Low with brief reason]
- Generalizability: [High/Medium/Low with brief reason]
</section>
</output_structure>
<quality_requirements>
- All claims must cite specific papers
- No speculation beyond what papers state
- Flag when evidence is weak or conflicting
- Use academic tone but remain accessible
</quality_requirements>
<input_documents>
Paper 1:
"""
[PASTE PAPER 1 ABSTRACT OR FULL TEXT]
"""
Paper 2:
"""
[PASTE PAPER 2 ABSTRACT OR FULL TEXT]
"""
Paper 3:
"""
[PASTE PAPER 3 ABSTRACT OR FULL TEXT]
"""
</input_documents>
<constraints>
- Response length: 1,000-1,500 words
- Format: Markdown with proper headers
- Citations: Use (Author Year) format
- Do not introduce information not in the papers
</constraints>
Generate the analysis.

What This Does
Clear boundaries make each section unambiguous. Nested structure keeps complex requirements organized. Easier debugging - if one section is wrong, you know which part of the prompt to adjust. Better parsing - AI models handle structured input more reliably.
XML Tag Patterns You Can Use
Common tags from the examples in this chapter: `<persona>`, `<task>`, `<output_structure>`, `<quality_requirements>`, `<input_documents>`, and `<constraints>`. Any descriptive tag name works, as long as every opening tag has a matching close.
Complete Software Example - Code Review with XML
<persona>
You are a senior software engineer conducting a security-focused code review.
</persona>
<task>
Review the code below for security vulnerabilities, following OWASP Top 10 guidelines.
</task>
<review_structure>
<security_assessment>
For each vulnerability found:
- Vulnerability: [OWASP category]
- Severity: [Critical/High/Medium/Low]
- Location: [file:line or function name]
- Exploit scenario: [how an attacker could exploit this]
- Fix: [specific code change needed]
</security_assessment>
<secure_patterns_used>
List security best practices already implemented:
- Pattern: [what's done well]
- Location: [where it's implemented]
- Benefit: [what attack this prevents]
</secure_patterns_used>
<recommendations>
Priority order:
1. [Critical fixes - must do before deployment]
2. [Important improvements - should do this sprint]
3. [Best practices - nice to have]
</recommendations>
</review_structure>
<review_criteria>
- SQL injection vulnerabilities
- XSS attack vectors
- Authentication/authorization flaws
- Insecure data storage
- Insufficient logging
- Dependency vulnerabilities
</review_criteria>
<code_to_review>
[Python code here - see example below]
Example vulnerable code:
from flask import Flask, request, render_template
import sqlite3

app = Flask(__name__)

@app.route('/user/<user_id>')
def get_user(user_id):
    conn = sqlite3.connect('users.db')
    cursor = conn.cursor()
    query = "SELECT * FROM users WHERE id = " + user_id
    cursor.execute(query)
    user = cursor.fetchone()
    return render_template('user.html', user=user)
</code_to_review>
<constraints>
- Be specific: cite exact line numbers
- Provide working code fixes, not just descriptions
- Assume production environment with real user data
</constraints>
Perform the security review.

Expected Output:
SECURITY ASSESSMENT
Vulnerability 1: SQL Injection (OWASP A03:2021)
- Severity: CRITICAL
- Location: Line 9 (query construction)
- Exploit scenario: Attacker sends /user/1%20OR%201=1 to dump entire user table
- Fix:
```python
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
```

Vulnerability 2: Cross-Site Scripting (XSS) (OWASP A03:2021)
- Severity: HIGH
- Location: Line 16 (search results rendering, in a /search route not shown in the snippet above)
- Exploit scenario: Attacker injects a `<script>` payload via /search?q=
- Fix:
```python
from markupsafe import escape
return f"<h1>Results for: {escape(search_term)}</h1>"
```
[...]
<Callout type="warn">
**When to Use XML Structuring:** Prompts with 5+ distinct requirements, multi-document analysis, complex output formats (reports with many sections), or situations where small misunderstandings cause big problems.
</Callout>
---
## Pattern 3: Few-Shot Workflow Chains
**Use Case:** Teach complex multi-step processes with examples
### The Concept
Combine few-shot learning with workflow chaining. Show the AI an example of the complete process, then have it replicate it.
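Assembling such prompts is mechanical enough to script. A sketch of a builder that takes worked (input, output) example pairs plus the new case (function and tag names are illustrative):

```python
def few_shot_prompt(instruction: str, examples: list[tuple[str, str]],
                    new_input: str) -> str:
    """Build a few-shot prompt: instruction, worked examples, new case."""
    blocks = [instruction]
    for inp, out in examples:
        blocks.append(f"<example>\n<input>{inp}</input>\n"
                      f"<output>{out}</output>\n</example>")
    blocks.append("Now apply the same workflow to:\n"
                  f"<new_input>{new_input}</new_input>")
    return "\n\n".join(blocks)

demo = few_shot_prompt(
    "You are a research analyst following a structured analysis workflow.",
    [("Smith 2024 abstract...", "Extraction, analysis, recommendations...")],
    "Johnson et al. 2024 abstract...",
)
```

Adding a second example pair is one more tuple in the list; the structure the model sees stays identical across runs.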
### Complete Example - Research Workflow (Abstract → Analysis → Implications)
```text
You are a research analyst following a structured analysis workflow.
I'll show you one complete example of the workflow, then ask you to apply it to a new paper.
<example>
<input_paper>
Title: "Remote Work and Innovation" (Smith 2024)
Abstract: We surveyed 500 software teams over 18 months, finding that remote teams produced 22% fewer patent applications but 31% more incremental improvements to existing products. Suggests trade-off between radical and incremental innovation.
</input_paper>
<step_1_extraction>
**Research Design:** Survey, longitudinal (18 months)
**Sample:** N=500 software teams
**Key Variables:** Remote vs. office, patent applications (radical innovation proxy), product improvements (incremental innovation proxy)
**Main Finding:** Remote work decreases radical innovation (-22%) but increases incremental innovation (+31%)
**Statistical Significance:** Not stated in abstract
</step_1_extraction>
<step_2_analysis>
**Implications for Theory:**
Challenges assumption that remote work is universally positive or negative. Suggests innovation TYPE matters, not just quantity.
**Limitations:**
- Software teams only (generalizability?)
- Patent applications as innovation proxy (misses non-patentable innovation)
- Correlation not causation (selection effects?)
**Practical Significance:**
Large effect sizes (22%, 31%) if causal. Suggests companies should match work location to innovation goals.
</step_2_analysis>
<step_3_recommendations>
**For Researchers:**
- RCT needed to establish causality
- Expand to other industries
- Mechanism unclear (collaboration barriers? Deep work time?)
**For Practitioners:**
- Use office time for brainstorming/ideation (radical innovation)
- Use remote time for execution/refinement (incremental innovation)
- Hybrid model: alternate based on project phase
**Follow-Up Questions:**
- What's the optimal remote/office ratio for different innovation goals?
- Does team size moderate the effect?
- How long do effects take to emerge?
</step_3_recommendations>
</example>
Now apply this exact workflow to the new paper below:
<new_paper>
Title: "AI Coding Assistants and Developer Productivity" (Johnson et al. 2024)
Abstract: Randomized controlled trial with 200 professional developers. Treatment group used GitHub Copilot for 3 months. Found 26% faster task completion but 14% more bugs in initial code. Bug rate normalized after code review. Suggests speed-quality trade-off with AI assistance.
</new_paper>
Follow the same structure:
1. Extract research design details
2. Analyze implications and limitations
3. Generate recommendations for researchers and practitioners
Output:

What This Achieves
Consistent structure means every analysis follows the same pattern. Teachable process allows team members to replicate your workflow. Quality control through standardized approach reduces errors. Efficiency gains by not re-explaining the process each time.
Workflow Template Structure
<workflow_template>
Step 1: [First action - extract/identify/categorize]
Step 2: [Second action - analyze/compare/evaluate]
Step 3: [Third action - synthesize/recommend/decide]
</workflow_template>
<example_complete_workflow>
<input>[Example input]</input>
<step_1_output>[What Step 1 produces]</step_1_output>
<step_2_output>[What Step 2 produces]</step_2_output>
<step_3_output>[What Step 3 produces]</step_3_output>
</example_complete_workflow>
Now apply to:
<new_input>[Your actual content]</new_input>

Pattern 4: Multi-Step Automated Workflows
Use Case: Automate repetitive multi-prompt sequences
The Concept
Document your successful prompt chains as reusable workflows. Eventually, you can script these (we'll do this in T1.2 with Claude Projects).
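A documented chain like the one below is already halfway to a script. A minimal sketch of the runner side — `run_step` is a stub standing in for a real model call, and the file-naming convention mirrors the dated `.md` outputs used in this workflow:

```python
import tempfile
from datetime import date
from pathlib import Path

def run_step(prompt_name: str, input_text: str) -> str:
    """Stub for one prompt run; swap in a real model call here."""
    return f"{prompt_name} output based on: {input_text[:40]}"

def run_workflow(steps: list[tuple[str, str]], raw_input: str,
                 out_dir: Path) -> list[str]:
    """Run each (prompt_name, file_stem) step, chaining outputs and
    saving each one to a dated markdown file."""
    stamp = date.today().isoformat()
    out_dir.mkdir(parents=True, exist_ok=True)
    current, saved = raw_input, []
    for prompt_name, stem in steps:
        current = run_step(prompt_name, current)
        path = out_dir / f"{stem}_{stamp}.md"
        path.write_text(current)
        saved.append(path.name)
    return saved

with tempfile.TemporaryDirectory() as d:
    saved_names = run_workflow(
        [("Competitive Analysis Extractor", "competitor_analysis"),
         ("Week-over-week comparison", "competitive_changes")],
        "raw intel collected by hand...", Path(d))
```

The saved intermediate files double as the week-over-week inputs for the comparison step.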
Complete Example - Weekly Competitive Intelligence Workflow
Workflow Overview:
- Collect competitor information (manual web research)
- Extract structured data (Prompt: Competitive Analysis Extractor)
- Compare to last week (Prompt: Custom comparison prompt)
- Generate executive summary (Prompt: Structured output)
- Create action items (Prompt: Decision analyzer)
Collect (Manual - 10 minutes)
Save competitor blog posts, product updates, pricing changes to a document.
Extract Structured Data (Prompt 6 from Business domain)
You are a competitive intelligence analyst.
Extract structured intelligence from competitor information:
**COMPETITOR PROFILE:**
[Auto-extracted from input]
**POSITIONING:**
[Value prop, differentiators, pricing]
**PRODUCT FEATURES:**
[Core, unique, and missing features]
**STRATEGIC INSIGHTS:**
[Position, growth, vulnerabilities, threat level]
Competitor information:
"""
[PASTE COLLECTED INFORMATION FROM STEP 1]
"""Save output to: competitor_analysis_2025-01-15.md
Compare to Last Week (Custom Prompt)
You are a competitive intelligence analyst tracking changes over time.
Compare this week's competitor profile to last week's:
<this_week>
"""
[PASTE OUTPUT FROM STEP 2]
"""
</this_week>
<last_week>
"""
[PASTE PREVIOUS WEEK'S FILE: competitor_analysis_2025-01-08.md]
"""
</last_week>
**CHANGES DETECTED:**
**New Features/Capabilities:**
- [Feature]: [Description + significance]
**Pricing Changes:**
- [What changed + impact on our positioning]
**Messaging Shifts:**
- [Old message] → [New message]
- Interpretation: [what this signals]
**Strategic Moves:**
- [Action taken + likely motivation]
**Threat Level Change:**
- Previous: [High/Medium/Low]
- Current: [High/Medium/Low]
- Reason: [why it changed]
**RECOMMENDED RESPONSE:**
Immediate actions (this week):
1. [Specific response to most important change]
Monitor closely (next 2 weeks):
1. [What to watch]
No action needed:
- [Changes we can ignore]

Save output to: competitive_changes_2025-01-15.md
Generate Executive Summary (Structured Output Prompt)
You are a data structuring specialist. Convert the following information into executive summary format.
Output requirements:
1. Format: Markdown with clear headers
2. Length: 200 words maximum
3. Tone: Business executive (strategic implications focus)
4. Structure:
- One-line headline (most important change)
- Key changes (3 bullet points max)
- Recommended actions (2 specific actions)
- Threat assessment (High/Medium/Low with one-sentence reason)
Source data:
"""
[PASTE OUTPUT FROM STEP 3]
"""
Additional instructions:
- Lead with biggest strategic insight
- Quantify when possible (e.g., "20% price reduction" not "significant price cut")
- Actions must be specific (owner + deadline implied)
- Assume audience has 60 seconds to read
Output:

Save output to: exec_summary_2025-01-15.md
Create Action Items (Decision Analyzer)
You are a product strategist converting intelligence into action items.
Based on the competitive analysis:
"""
[PASTE OUTPUT FROM STEP 3]
"""
Generate action items:
**IMMEDIATE (This Week):**
- [ ] **Action:** [Specific task]
- **Owner:** [Role - e.g., "Product Manager" or "Marketing Lead"]
- **Deadline:** [Specific date]
- **Success Criteria:** [How we know it's done]
- **Effort:** [Hours estimate]
**SHORT-TERM (Next 2 Weeks):**
- [ ] **Action:** [Specific task]
- **Owner:** [Role]
- **Deadline:** [Specific date]
- **Success Criteria:** [How we know it's done]
- **Effort:** [Hours estimate]
**BACKLOG (Monitor/Consider):**
- **Action:** [Specific task]
- **Trigger:** [What would make us prioritize this]
- **Value:** [What we'd gain]
**TOTAL EFFORT THIS WEEK:** [Sum of immediate actions]
Format as GitHub issues / Asana tasks / Jira tickets:

Save output to: action_items_2025-01-15.md
Complete Workflow Summary
# Weekly Competitive Intel Workflow
**Time:** 30 minutes total
**Frequency:** Every Monday morning
**Tools:** Web browser + Claude
## Process
| Step | Action | Time | Output File |
|------|--------|------|-------------|
| 1 | Manual research on 3 competitors | 10 min | raw_intel_[date].md |
| 2 | Extract structured data (Prompt 6) | 5 min | competitor_analysis_[date].md |
| 3 | Compare week-over-week (Custom prompt) | 5 min | competitive_changes_[date].md |
| 4 | Generate exec summary (Structured output) | 3 min | exec_summary_[date].md |
| 5 | Create action items (Decision analyzer) | 7 min | action_items_[date].md |
## Automation Opportunities
- **Now:** Template all 5 prompts (done above)
- **T1.2 (Claude Projects):** Upload all files to Project, auto-compare
- **T2.x (API Integration):** Fully automate collection + analysis
## Success Metrics
- Week 1 baseline: 90 minutes manual analysis
- Week 4 target: 30 minutes with prompts (67% time savings)
- Quality: Action items lead to measurable responses

When to Build Workflows: You've done the same multi-prompt sequence 3+ times, the process has clear steps with defined inputs/outputs, a team needs to replicate your approach, or consistency matters (weekly reports, recurring analysis).
Pattern 5: State Management Across Prompts
Use Case: Maintain context in long, multi-session projects
The Concept
When working on complex projects over multiple sessions, explicitly track state so you can resume without losing context.
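The state document below is plain markdown, so it can also be generated programmatically. A minimal sketch (class and field names are illustrative) that tracks progress and renders a snapshot you can paste into the next session's prompt:

```python
from dataclasses import dataclass, field

@dataclass
class StateTracker:
    """Minimal session state for a multi-day review (illustrative)."""
    project: str
    target: int
    reviewed: list[str] = field(default_factory=list)

    def mark_reviewed(self, paper: str) -> None:
        self.reviewed.append(paper)

    def to_markdown(self) -> str:
        """Render the state so it can be pasted into the next session."""
        lines = [f"# {self.project} State Tracker",
                 f"**Papers Reviewed:** {len(self.reviewed)} / {self.target}"]
        lines += [f"- [x] {p}" for p in self.reviewed]
        return "\n".join(lines)

state = StateTracker("Literature Review", target=25)
state.mark_reviewed("Chen & Rodriguez (2023)")
state.mark_reviewed("Smith (2024)")
snapshot = state.to_markdown()
```

Writing `snapshot` to a dated file at the end of each session gives you the resumption input for the next one.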
Complete Example - Multi-Day Literature Review
Session 1 Setup - Create State Document:
# Literature Review State Tracker
**Project:** AI Impact on Labor Markets
**Started:** 2025-01-15
**Target:** 25 papers reviewed, synthesis complete by 2025-01-20
## Progress
**Papers Reviewed:** 3 / 25
- [x] Chen & Rodriguez (2023) - Remote work productivity
- [x] Smith (2024) - Remote work innovation
- [x] Johnson et al. (2024) - AI coding assistants
- [ ] [Remaining 22 papers]
## Emerging Themes
**Theme 1: Automation vs. Augmentation**
- Papers: Johnson 2024
- Key insight: AI tools augment rather than replace when used with human review
- Gap: No studies on full automation scenarios
**Theme 2: [To be developed]**
## Key Findings Tracker
| Paper | Main Finding | Effect Size | Quality |
|-------|--------------|-------------|---------|
| Chen 2023 | Remote → +13% productivity | Medium | High (RCT) |
| Smith 2024 | Remote → -22% radical innovation | Large | Medium (Survey) |
| Johnson 2024 | AI → +26% speed, +14% bugs | Large | High (RCT) |
## Research Gaps Identified
- [ ] Long-term effects (>6 months)
- [ ] Non-knowledge-worker sectors
- [ ] Interaction effects (AI + remote work)
## Next Steps
**Tomorrow's session:**
- Review papers 4-8
- Update emerging themes
- Check if "Automation vs. Augmentation" theme holds

Session 2 Prompt (Next Day) - Resume with State:
You are a research analyst continuing a multi-day literature review.
<project_context>
You've been helping analyze papers on AI's impact on labor markets. We're on day 2 of a 5-day review process.
</project_context>
<current_state>
"""
[PASTE ENTIRE STATE TRACKER DOCUMENT FROM ABOVE]
"""
</current_state>
<todays_task>
Review the next 5 papers (papers 4-8) and:
1. Extract key findings using the same format as the tracker
2. Identify if they support existing themes or reveal new ones
3. Update the research gaps list
4. Flag any contradictions with previous papers
</todays_task>
<papers_to_review>
Paper 4: [abstract]
Paper 5: [abstract]
Paper 6: [abstract]
Paper 7: [abstract]
Paper 8: [abstract]
</papers_to_review>
<output_format>
**UPDATED KEY FINDINGS TRACKER:**
[Extended table with new papers added]
**THEME UPDATES:**
Theme 1 (Automation vs. Augmentation):
- [Update based on new papers]
- New supporting papers: [list]
- Contradicting papers: [list if any]
Theme 2 (NEW if emerged):
- [Description]
- Supporting papers: [list]
**NEW RESEARCH GAPS:**
[Add any new gaps identified]
**CONTRADICTIONS FOUND:**
[Any papers that disagree with previous findings]
**PROGRESS SUMMARY:**
- Papers reviewed: 8 / 25
- Themes identified: [count]
- On track for [date] completion: Yes/No
</output_format>
Generate the update.

State Management Template
# [Project Name] State Tracker
**Project Goal:** [What you're trying to achieve]
**Timeline:** [Start date] to [End date]
**Current Phase:** [Which stage you're in]
## Progress Metrics
**Completion:** [X] / [Total] [units]
- [x] Completed item 1
- [x] Completed item 2
- [ ] Remaining item 1
- [ ] Remaining item 2
## Accumulated Knowledge
**Key Insights So Far:**
1. [Insight 1 with supporting evidence]
2. [Insight 2 with supporting evidence]
**Decisions Made:**
- [Date]: [Decision] - [Reasoning]
- [Date]: [Decision] - [Reasoning]
**Open Questions:**
- [ ] [Question 1] - [Status: researching/blocked/answered]
- [ ] [Question 2] - [Status: researching/blocked/answered]
## Data Collected
[Table or structured list of data accumulated]
## Next Session Plan
**When:** [Date/Time]
**Goal:** [What to accomplish]
**Inputs needed:** [What you'll need ready]
**Expected outputs:** [What you'll produce]
## Resumption Prompt Template

You are [role] continuing a multi-session project.
[Brief project description]

"""
[PASTE THIS ENTIRE STATE TRACKER]
"""

[What to do this session]
Continue the work.
When to Use State Management: Projects spanning multiple days, collaborative work (multiple people, same prompts), iterative analysis (review → revise → review), or complex research requiring synthesis over time.
Extension Patterns Summary
You now know how to:
- Chain Prompts: Extract → Analyze → Synthesize workflows, specialize each prompt for quality, debug complex processes step-by-step
- Structure with XML: Organize complex multi-part instructions, use tags like `<task>`, `<output_structure>`, `<constraints>`, `<input>`, handle nested requirements cleanly
- Few-Shot Workflows: Teach complete multi-step processes with examples, ensure consistent analysis across many inputs, create repeatable team-shareable workflows
- Automate Multi-Step Workflows: Document recurring prompt sequences, track time savings (manual vs. automated), identify future automation opportunities
- Manage State: Maintain context across multiple sessions, resume complex projects seamlessly, coordinate multi-person prompt usage
Practical Application: Complete Workflow Example
Scenario: Weekly research update for economics department
Workflow Steps:
- Collect new papers (manual: search ArXiv, Google Scholar)
- Extract metadata (Prompt 5: Literature Extractor)
- Compare to ongoing research (Chain-of-thought prompt + state tracker)
- Identify relevant papers (Few-shot classification: relevant/not relevant)
- Generate summary email (Structured output with XML formatting)
Time:
- Manual approach: 2 hours
- With prompt workflow: 30 minutes
- Savings: 75%
Quality:
- Manual: Inconsistent depth, sometimes misses papers
- With prompts: Consistent structure, comprehensive coverage
Quick Reference: When to Use Each Pattern
| Pattern | Best For | Complexity |
|---|---|---|
| Prompt Chaining | Multi-step analysis, breaking complex tasks | Medium |
| XML Structuring | Complex prompts with 5+ requirements | Low |
| Few-Shot Workflows | Teaching consistent processes | Medium |
| Automated Workflows | Recurring tasks (weekly/daily) | High |
| State Management | Multi-day/multi-person projects | Low |
| Pattern | Initial Setup | Maintenance | ROI Timeline |
|---|---|---|---|
| Prompt Chaining | 5 minutes | Low | Immediate |
| XML Structuring | 3 minutes | Very Low | Immediate |
| Few-Shot Workflows | 10 minutes | Low | After 3 uses |
| Automated Workflows | 20 minutes | Medium | After 5 uses |
| State Management | 5 minutes | Low | Multi-day projects |
Prompt Chaining:
- Literature review pipelines
- Multi-stage data analysis
- Content creation workflows
XML Structuring:
- Complex technical documentation
- Multi-requirement analysis
- Precise output formatting
Few-Shot Workflows:
- Team standardization
- Consistent quality control
- Training new analysts
Automated Workflows:
- Weekly reporting
- Competitive intelligence
- Recurring research tasks
State Management:
- Long-term research projects
- Team collaboration
- Multi-session analysis
Next Steps
Immediate practice:
- Take one prompt from your library
- Chain it with another (e.g., Extract → Analyze)
- Time how long the manual version would take
- Calculate your time savings
This week:
- Identify one recurring multi-step task
- Document it as a workflow (like the competitive intel example)
- Run it 3 times and refine
- Share with a colleague
Advanced (after T1.2):
- Upload your workflows to Claude Projects
- Use 200K context for entire document sets
- Fully automate with custom instructions
Verification Checklist
Test yourself:
- I can chain 3+ prompts together for a complex task
- I can use XML tags to organize a complex prompt
- I've created at least one few-shot workflow
- I've documented a recurring task as a prompt workflow
- I understand how to maintain state across sessions
- I can estimate time savings for my workflows
Target: 100% before moving to next section
Your prompt library is now advanced-level. You can handle complex, multi-step workflows that save hours per week. Continue to Domain Applications to see these patterns in realistic, domain-specific scenarios.