Core Build: Complete 20-Prompt Library Workshop

Systematically build your production-ready prompt library with patterns, templates, and quality evaluation

Core Build (45-60 minutes)

Build your complete 20-prompt library organized by pattern and domain, with a template system for customization and a quality evaluation rubric.

You've already created 5 high-impact prompts in the Quick Start. Now you'll systematically expand that to a comprehensive library covering all major prompting patterns and domain-specific use cases.

What You'll Build

By the end of this section, you'll have:

20 Production-Ready Prompts

5 Core Pattern Prompts + 15 Domain-Specific Prompts across economics, software engineering, and business management

Template System

Reusable prompt frameworks with quick customization workflow and version control for iterations

Quality Evaluation Rubric

Objective scoring criteria, performance benchmarks, and continuous improvement process

Part 1: Master Core Prompting Patterns

Master the fundamental patterns that underpin all effective prompts. You've already seen structured output and role assignment in the Quick Start. Now you'll formalize those into dedicated, reusable prompts and add chain-of-thought reasoning, few-shot learning, and parameter tuning.

Chain-of-Thought Reasoning

Use Case: Complex analysis requiring step-by-step logic

The Prompt:

You are a critical thinking expert helping solve complex problems.

Problem to analyze:
"""
[DESCRIBE YOUR PROBLEM OR QUESTION]
"""

Think through this step-by-step:

**Step 1: Problem Decomposition**
Break down the problem into 3-5 sub-questions:
- Sub-question 1: [specific question]
- Sub-question 2: [specific question]
- Sub-question 3: [specific question]

**Step 2: Answer Each Sub-Question**
For each sub-question:
1. [Sub-question 1 restated]
   - Evidence: [relevant facts, data, or logic]
   - Reasoning: [how this evidence leads to a conclusion]
   - Answer: [clear, specific answer]

2. [Repeat for each sub-question]

**Step 3: Synthesize Conclusion**
Based on the sub-question answers:
- Primary conclusion: [main finding]
- Supporting logic: [how sub-answers connect]
- Confidence level: [High/Medium/Low] because [reason]
- Assumptions made: [list any assumptions]

**Step 4: Alternative Perspectives**
What competing interpretations exist?
- Alternative 1: [different conclusion with reasoning]
- Alternative 2: [different conclusion with reasoning]
- Why the primary conclusion is stronger: [comparative analysis]

Final answer:
[Clear, definitive response to original problem]

What This Does: Step-by-step structure prevents jumping to conclusions. Explicit reasoning makes logic auditable. Alternative perspectives check for bias. Confidence levels indicate certainty.

Test It Now: Problem example: "Should our economics department invest in building a large language model research center?"

Expected Output: Structured reasoning showing each step of analysis, making the thought process transparent and verifiable.

When to Use: Complex decisions with multiple factors, analysis requiring justification, problems where the reasoning matters as much as the answer, situations where you need to explain your thinking to others.

Verification Checklist:

  • Each step builds logically on the previous
  • Evidence cited for each sub-conclusion
  • Alternative perspectives genuinely challenge the primary conclusion
  • Confidence level matches strength of reasoning

Few-Shot Learning

Use Case: Specialized tasks where you need consistent format or style

The Prompt:

You are a [ROLE] specializing in [TASK].

I'll show you [N] examples of excellent outputs, then ask you to produce a similar one.

Example 1:
Input: [example input 1]
Output: [example output 1]

Example 2:
Input: [example input 2]
Output: [example output 2]

Example 3:
Input: [example input 3]
Output: [example output 3]

Key patterns to follow:
- [Pattern 1 observed in examples]
- [Pattern 2 observed in examples]
- [Pattern 3 observed in examples]

Now apply this to:
Input: [YOUR NEW INPUT]
Output:

What This Does: Examples teach format without lengthy explanation. Pattern extraction makes rules explicit. Consistent style across all outputs. Domain-specific conventions learned from examples.
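
If you assemble these prompts in code rather than by hand, here is a minimal sketch of building the few-shot structure from a list of example pairs (all names are illustrative, not tied to any library):

```python
def build_few_shot_prompt(role, task, examples, patterns, new_input):
    """Assemble a few-shot prompt from (input, output) example pairs."""
    lines = [
        f"You are a {role} specializing in {task}.",
        "",
        f"I'll show you {len(examples)} examples of excellent outputs, "
        "then ask you to produce a similar one.",
    ]
    for i, (example_in, example_out) in enumerate(examples, start=1):
        lines += ["", f"Example {i}:", f"Input: {example_in}", f"Output: {example_out}"]
    lines += ["", "Key patterns to follow:"] + [f"- {p}" for p in patterns]
    lines += ["", "Now apply this to:", f"Input: {new_input}", "Output:"]
    return "\n".join(lines)

# Usage with placeholder values
print(build_few_shot_prompt(
    role="software engineer",
    task="writing conventional commit messages",
    examples=[("change description 1", "feat(scope): message 1")],
    patterns=["Format: type(scope): brief description"],
    new_input="the change you want summarized",
))
```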

Test It Now - Commit Message Example:

You are a software engineer specializing in writing clear, conventional commit messages.

I'll show you 3 examples of excellent commit messages, then ask you to produce a similar one.

Example 1:
Input: Added error handling to the API authentication module, catching invalid tokens and returning appropriate 401 responses
Output: feat(auth): add error handling for invalid tokens
- Catch invalid token errors in authentication middleware
- Return 401 status with clear error message
- Add unit tests for error scenarios

Example 2:
Input: Fixed bug where user profile images weren't loading on iOS Safari due to incorrect MIME type
Output: fix(profile): resolve image loading on iOS Safari
- Update MIME type for profile images to image/jpeg
- Add browser-specific fallback logic
- Test on iOS 15, 16, 17

Example 3:
Input: Updated documentation for the new payment API endpoints including request/response examples
Output: docs(api): update payment endpoints documentation
- Add request/response examples for /payment/create
- Document error codes and retry logic
- Include rate limiting information

Key patterns to follow:
- Format: type(scope): brief description
- Include bullet points with technical details
- Mention testing or verification

Now apply this to:
Input: Modified the database query in the analytics dashboard to use an index, making the page load 10x faster
Output:

Expected Output:

perf(analytics): optimize dashboard query with index
- Add index on analytics.timestamp column
- Reduce query time from 5s to 0.5s
- Verified with EXPLAIN ANALYZE

When to Use: Specialized formats (citations, commit messages, reports), domain-specific writing styles, tasks where "show, don't tell" works better, maintaining consistency across a team.

Verification Checklist:

  • Output matches format of examples
  • Key patterns from examples are present
  • Domain-specific conventions followed
  • Quality matches example standard

Parameter-Tuned Prompts

Use Case: Control creativity vs. precision with temperature and length settings

Temperature Guide:

  • 0.0-0.3: Deterministic, factual (data extraction, code generation, math)
  • 0.4-0.7: Balanced (analysis, summaries, general writing)
  • 0.8-1.0: Creative (brainstorming, marketing copy, storytelling)

The Prompt:

You are a [ROLE] producing [TYPE OF OUTPUT].

[FOR FACTUAL/PRECISE TASKS - Use with temperature 0.0-0.3]:
Be precise and deterministic. Stick strictly to facts from the source material. Do not infer, extrapolate, or add creative interpretation.

[FOR ANALYTICAL TASKS - Use with temperature 0.4-0.7]:
Provide balanced analysis with evidence-based conclusions. Include both what's stated and reasonable implications.

[FOR CREATIVE TASKS - Use with temperature 0.8-1.0]:
Be creative and exploratory. Generate multiple ideas, make unexpected connections, think outside conventional patterns.

Task:
[YOUR SPECIFIC TASK]

Source material:
"""
[YOUR CONTENT]
"""

[SPECIFY LENGTH]:
- Response length: [Exactly X words / Approximately X words / No more than X words]
- Format: [Bullet points / Paragraphs / Sections]

What This Does: Temperature control matches output style to task needs. Length specification prevents over/under-generation. Explicit instructions align with temperature setting. Repeatable results at low temperature, variety at high temperature.
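
How you set temperature depends on your tooling. As a sketch, assuming the official OpenAI Python client (the model name is a placeholder; adapt the client and model to whatever you actually use):

```python
from openai import OpenAI  # assumes the openai package, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def run(prompt: str, temperature: float, max_tokens: int = 400) -> str:
    """Send one prompt and return the first completion's text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 0.0-0.3 factual, 0.4-0.7 balanced, 0.8-1.0 creative
        max_tokens=max_tokens,
    )
    return response.choices[0].message.content

prompt = "Summarize the source material below in exactly 50 words.\n..."  # placeholder task
factual = run(prompt, temperature=0.2)   # deterministic, repeatable
balanced = run(prompt, temperature=0.5)  # consistent but not identical
creative = run(prompt, temperature=0.9)  # varied, exploratory
```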

Test It Now - Three Temperature Variants:

Variant 1: Low Temperature (0.2) - Data Extraction

You are a data analyst extracting facts from text.

Be precise and deterministic. Stick strictly to facts from the source material. Do not infer, extrapolate, or add creative interpretation.

Task:
Extract the following data points:
- Publication year
- Sample size
- Key statistical finding
- Methodology

Source material:
"""
Our 2023 study of 1,247 participants found that remote work increased productivity by 13% (p < 0.05) using a randomized controlled trial design.
"""

Response length: Exactly 4 bullet points, one per data point.
Format: Bullet points

Expected Output:

  • Publication year: 2023
  • Sample size: 1,247 participants
  • Key statistical finding: Remote work increased productivity by 13% (p < 0.05)
  • Methodology: Randomized controlled trial

Variant 2: Medium Temperature (0.5) - Analysis

You are a research analyst providing balanced interpretation.

Provide balanced analysis with evidence-based conclusions. Include both what's stated and reasonable implications.

Task:
Analyze the implications of this research finding for workplace policy.

Source material:
"""
Our 2023 study of 1,247 participants found that remote work increased productivity by 13% (p < 0.05) using a randomized controlled trial design.
"""

Response length: Approximately 100 words
Format: Paragraph

Expected Output: This RCT provides strong evidence (p < 0.05) that remote work can meaningfully improve productivity. The 13% increase is substantial and statistically significant, suggesting workplace policies limiting remote work may harm organizational output. However, the study doesn't address which tasks or industries benefit most, or whether productivity gains persist long-term. Policymakers should consider pilot programs combining this evidence with organization-specific factors like collaboration needs and role requirements before making sweeping remote work decisions.

Variant 3: High Temperature (0.9) - Ideation

You are an innovative workplace strategist brainstorming solutions.

Be creative and exploratory. Generate multiple ideas, make unexpected connections, think outside conventional patterns.

Task:
Generate 5 creative workplace policies inspired by this research.

Source material:
"""
Our 2023 study of 1,247 participants found that remote work increased productivity by 13% (p < 0.05) using a randomized controlled trial design.
"""

Response length: 5 creative ideas, 2-3 sentences each
Format: Numbered list

Expected Output:

  1. Productivity-Based Location Choice: Let employees choose work location daily based on task type—office for collaboration, remote for deep work—tracked via self-reported productivity journals.

  2. Remote Work Dividend: Share 50% of the 13% productivity gains as bonuses for remote workers, aligning incentives with measured outcomes.

  3. Async-First Office Redesign: Convert saved office space into quiet pods and recording studios for async video updates, maximizing the remote productivity advantage.

  4. "Remote Fridays" Universal Policy: Make Fridays company-wide remote days to front-load the productivity boost into end-of-week deliverables.

  5. Hybrid Cohort Experiments: Run ongoing internal RCTs testing different remote/office ratios across teams, building institutional data to refine policies quarterly.

When to Use: Low temperature for citations, code, data extraction, math, legal documents. Medium temperature for analysis, summaries, reports, explanations. High temperature for brainstorming, creative writing, marketing, ideation.

Verification Checklist:

  • Low temp outputs are identical on re-run (or very similar)
  • Medium temp outputs are consistent but not identical
  • High temp outputs show creative variety
  • Length constraints respected

Zero-Shot Role Assignment

Use Case: Instant expertise without examples, just persona

The Prompt:

You are a world-class [SPECIFIC ROLE] with [X] years of experience in [DOMAIN].

Your expertise includes:
- [Specific skill 1]
- [Specific skill 2]
- [Specific skill 3]

Your approach is characterized by:
- [Trait 1: e.g., "Evidence-based analysis"]
- [Trait 2: e.g., "Skepticism of conventional wisdom"]
- [Trait 3: e.g., "Clear communication for non-experts"]

A [CLIENT TYPE] asks you:
"""
[QUESTION OR TASK]
"""

Respond as this expert would, drawing on your deep knowledge and characteristic approach.

What This Does: Specific role activates domain knowledge. Expertise list provides context for response depth. Characteristic traits shape response style. Implicit audience (client type) sets communication level.

Test It Now - Econometrician Variant:

You are a world-class econometrician with 15 years of experience in causal inference and policy evaluation.

Your expertise includes:
- Difference-in-differences designs
- Regression discontinuity
- Instrumental variables
- Synthetic control methods

Your approach is characterized by:
- Obsessive attention to identification assumptions
- Healthy skepticism of correlational claims
- Clear explanation of technical concepts

A policy researcher asks you:
"""
I found that cities with higher minimum wages have lower unemployment. Does this prove minimum wage increases don't harm employment?
"""

Respond as this expert would, drawing on your deep knowledge and characteristic approach.

Expected Output: Not necessarily. This correlation could reflect reverse causality—cities with strong labor markets may be more likely to adopt higher minimum wages. You'd need to address several identification concerns: 1) Selection bias (which cities choose higher wages?), 2) Confounding factors (e.g., cost of living, industry composition), 3) Timing (did unemployment drop after the wage increase?).

A rigorous approach would use difference-in-differences comparing cities that raised wages to similar cities that didn't, or regression discontinuity around state borders with different wage policies. The correlation you found is a starting point, not evidence of causation. What does your data structure look like? We might be able to design a credible quasi-experiment.

When to Use: Domain-specific questions, technical reviews requiring expertise, situations where credibility matters, tasks benefiting from a particular perspective.

Verification Checklist:

  • Response uses domain-specific terminology correctly
  • Expertise traits evident in answer
  • Communication level appropriate for audience
  • Advice reflects stated experience level

Structured Multi-Format Output

Use Case: Generate outputs ready for specific tools (spreadsheets, databases, APIs)

The Prompt:

You are a data structuring specialist. Convert the following information into [FORMAT].

Output requirements:
1. Format: [JSON / CSV / Markdown Table / YAML / XML]
2. Schema: [Define exact structure]
3. Validation: [Data type requirements]
4. Completeness: [How to handle missing data]

Source data:
"""
[UNSTRUCTURED INPUT]
"""

Additional instructions:
- Return ONLY the formatted output
- No explanatory text before or after
- Validate all data types
- Use null/NA for missing values
- Ensure output is parseable by [TOOL/LANGUAGE]

Output:

What This Does: Explicit format requirements ensure machine-readable output. Schema definition prevents field name inconsistencies. Validation rules ensure data type correctness. Tool-specific parsability guarantees downstream integration.

Test It Now - CSV for Spreadsheet:

You are a data structuring specialist. Convert the following information into CSV format.

Output requirements:
1. Format: CSV with header row
2. Schema: title, authors, year, journal, citations, key_finding, methodology
3. Validation: year must be number, citations must be number, others are strings
4. Completeness: Use "Not available" for missing text fields, 0 for missing numbers

Source data:
"""
"AI and Productivity" by Chen & Rodriguez (2023) in Journal of Economic Perspectives has 45 citations. Found 13% productivity increase. Used RCT methodology.

"Remote Work Effects" published in Management Science 2022 by Johnson et al. Systematic review methodology. 67 citations.

Smith's 2024 paper "Automation and Jobs" found 8% job displacement. Published in American Economic Review.
"""

Additional instructions:
- Return ONLY the formatted output
- No explanatory text before or after
- Validate all data types
- Use null/NA for missing values
- Ensure output is parseable by Excel/Google Sheets

Output:

Expected Output:

title,authors,year,journal,citations,key_finding,methodology
"AI and Productivity","Chen & Rodriguez",2023,"Journal of Economic Perspectives",45,"13% productivity increase","RCT"
"Remote Work Effects","Johnson et al.",2022,"Management Science",67,"Not available","Systematic review"
"Automation and Jobs","Smith",2024,"American Economic Review",0,"8% job displacement","Not available"

Verification Checklist:

  • Valid CSV (paste into Google Sheets)
  • Header row matches schema
  • Data types correct (numbers not quoted)
  • Missing values handled consistently
  • No extra text outside CSV
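
As a programmatic complement to the checklist above, a short validation sketch using Python's built-in csv module (the model_output string stands in for whatever your model returned):

```python
import csv
import io

# Placeholder: replace with the model's raw CSV output.
model_output = """title,authors,year,journal,citations,key_finding,methodology
"AI and Productivity","Chen & Rodriguez",2023,"Journal of Economic Perspectives",45,"13% productivity increase","RCT"
"""

reader = csv.DictReader(io.StringIO(model_output))
rows = list(reader)

expected = ["title", "authors", "year", "journal", "citations", "key_finding", "methodology"]
assert reader.fieldnames == expected, f"Header mismatch: {reader.fieldnames}"

for row in rows:
    int(row["year"])        # raises ValueError if a numeric field was garbled
    int(row["citations"])

print(f"Parsed {len(rows)} rows with the expected schema and numeric fields")
```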

Part 1 Checkpoint

You now have 10 prompts total:

  • 5 from Quick Start (Universal Summarizer, Literature Extractor, Code Reviewer, Data Synthesizer, Structured Output)
  • 5 from Core Patterns (Chain-of-Thought, Few-Shot, Parameter-Tuned, Zero-Shot Role, Multi-Format)

Skills mastered:

  • Chain-of-thought reasoning for complex analysis
  • Few-shot learning for consistent formats
  • Temperature tuning for creativity control
  • Role assignment for instant expertise
  • Multi-format output for tool integration

Part 2: Build Domain-Specific Prompt Libraries

Build 5 specialized prompts for YOUR domain. Choose ONE section below based on your primary work:

Economics Research Prompts

Prompt 6: Regression Results Interpreter

Use Case: Translate statistical output into plain-language findings

You are an econometrician explaining results to a policy audience.

Interpret the regression results below:

**PLAIN-LANGUAGE SUMMARY:**
What does this analysis tell us? (2-3 sentences for non-technical readers)

**KEY FINDINGS:**
For each significant variable:
- Variable: [name]
- Effect: [direction and magnitude in real-world terms]
- Significance: [p-value and interpretation]
- Practical meaning: [what this means in context]

**MODEL QUALITY:**
- R-squared: [value and what it means]
- Sample size: [N and whether it's adequate]
- Key controls: [what variables are held constant]

**LIMITATIONS:**
- What this model CANNOT tell us: [list 2-3 important caveats]
- Causality: [Can we claim causal effects? Why/why not?]

**POLICY IMPLICATIONS:**
If this model is credible, what actions might it suggest? (2-3 specific recommendations)

Regression output:
"""
[PASTE STATA/R/PYTHON OUTPUT]
"""

Test with:

Dependent Variable: log_wage
Independent Variables:
  years_education: 0.089 (p < 0.001)
  experience: 0.034 (p < 0.01)
  experience_squared: -0.0006 (p < 0.05)
  female: -0.152 (p < 0.001)

R-squared: 0.342
N: 2,456
Controls: industry, region, year

Expected Output: Plain-language interpretation with policy implications, avoiding jargon.

Prompt 7: Literature Gap Identifier

Use Case: Find what's missing from existing research

You are a research strategist identifying gaps in the literature.

Analyze the following papers and identify research gaps:

**PAPERS ANALYZED:**
[List will be extracted from input]

**ESTABLISHED FINDINGS:**
What do we now know? (consensus findings across multiple papers)
- Finding 1: [what's established and by how many papers]
- Finding 2: [what's established and by how many papers]

**CONTRADICTIONS:**
Where do papers disagree?
- Contradiction 1: [Paper X found A, but Paper Y found B]
- Possible reasons: [methodological, data, context differences]

**METHODOLOGICAL GAPS:**
What methods haven't been tried?
- Gap 1: [e.g., "No RCTs, only observational studies"]
- Gap 2: [e.g., "No long-term (>5 year) studies"]

**CONTEXTUAL GAPS:**
What populations/regions/time periods are understudied?
- Gap 1: [specific context missing]
- Gap 2: [specific context missing]

**THEORETICAL GAPS:**
What mechanisms or explanations haven't been explored?
- Gap 1: [unexplored mechanism]
- Gap 2: [unexplored mechanism]

**HIGHEST-VALUE RESEARCH QUESTIONS:**
Based on these gaps, what 3 studies would most advance the field?
1. [Research question + why it matters + suggested method]
2. [Research question + why it matters + suggested method]
3. [Research question + why it matters + suggested method]

Papers to analyze:
"""
[PASTE MULTIPLE PAPER ABSTRACTS OR SUMMARIES]
"""

When to Use: Beginning a new research project, writing literature review sections, identifying dissertation topics, grant proposal development.

Prompt 8: Data Cleaning Validator

Use Case: Catch data quality issues before analysis

You are a data quality auditor reviewing a dataset for analysis.

Dataset description:
"""
[DESCRIBE YOUR DATASET: variables, sources, time period]
"""

Sample of data:
"""
[PASTE 10-20 ROWS OF DATA OR SUMMARY STATISTICS]
"""

Conduct a quality audit:

**COMPLETENESS CHECK:**
- Missing values: [which variables, what percentage]
- Systematic missingness: [patterns in what's missing]
- Imputation recommendation: [how to handle]

**VALIDITY CHECK:**
- Out-of-range values: [impossible or suspicious values]
- Logical inconsistencies: [e.g., retirement age < 18]
- Timestamp issues: [date logic errors]

**CONSISTENCY CHECK:**
- Coding inconsistencies: [e.g., "Male" vs "M" vs "male"]
- Unit inconsistencies: [e.g., mixing dollars and thousands]
- Format inconsistencies: [date formats, decimal places]

**STATISTICAL FLAGS:**
- Outliers: [extreme values that might be errors]
- Unexpected distributions: [variables that don't look right]
- Correlation oddities: [relationships that seem wrong]

**RECOMMENDED CLEANING STEPS:**
Priority 1 (must fix before analysis):
1. [Specific action]
2. [Specific action]

Priority 2 (should fix):
1. [Specific action]
2. [Specific action]

**DATA QUALITY SCORE:** [0-100]
Justification: [brief explanation]

Expected Output: Actionable cleaning checklist preventing analysis errors.
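
If you want to cross-check the model's audit against the raw data, a minimal pandas sketch (the file path is a placeholder; assumes your dataset loads into a DataFrame) covering the completeness, consistency, and outlier checks:

```python
import pandas as pd

df = pd.read_csv("your_dataset.csv")  # placeholder path

# Completeness: percentage of missing values per variable
missing_pct = df.isna().mean().mul(100).round(1).sort_values(ascending=False)
print(missing_pct[missing_pct > 0])

# Consistency: spot coding variants like "Male" vs "M" vs "male"
for col in df.select_dtypes(include="object"):
    print(col, sorted(map(str, df[col].dropna().unique()))[:10])

# Statistical flags: simple screen for values beyond 3 standard deviations
numeric = df.select_dtypes(include="number")
outliers = ((numeric - numeric.mean()).abs() > 3 * numeric.std()).sum()
print(outliers[outliers > 0])
```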

Prompt 9: Citation Formatter & Checker

Use Case: Format and validate references

You are a citation specialist ensuring reference accuracy.

Task: [Format as APA 7th / Chicago / MLA / Harvard]

For each source below:

**FORMATTED CITATION:**
[Properly formatted citation]

**VALIDATION CHECKS:**
- Complete information: [✓ or ✗ with what's missing]
- Formatting accuracy: [✓ or list of errors]
- Alphabetization: [✓ or correction needed]

**SOURCE TYPE DETECTION:**
- Type: [Journal article / Book / Working paper / Website / etc.]
- Special notes: [e.g., "Preprint - include DOI", "Blog post - include access date"]

**POTENTIAL ISSUES:**
- [Flag if year seems wrong, author name unusual, journal unrecognized, etc.]

Sources to format:
"""
[PASTE UNFORMATTED REFERENCES]
"""

Return:
1. Properly formatted reference list
2. Flagged issues for manual review

Test with:

Chen and Rodriguez, AI and Productivity, Journal of Economic Perspectives 2023, volume 37 pages 45-67
smith j 2024 "Automation and Jobs" american economic review
Johnson, Williams & Lee, "Remote Work Effects" Management Science vol 36(3) 2022 pp 234-256

Expected Output: Clean, formatted bibliography with flagged issues.

Prompt 10: Abstract → Research Design Extractor

Use Case: Quickly extract methodology details

You are a methodology specialist cataloging research designs.

Extract the research design from this abstract:

**RESEARCH QUESTION:**
[Restate in standard form: "Does X affect Y?"]

**RESEARCH DESIGN:**
- Type: [Experimental / Quasi-experimental / Observational / Theoretical]
- Specific method: [RCT / DiD / RDD / IV / Case study / etc.]

**DATA:**
- Sample: [population studied]
- Sample size: [N]
- Time period: [dates]
- Geographic scope: [region/country]
- Data source: [survey, admin data, etc.]

**IDENTIFICATION STRATEGY:**
- Treatment: [what's the intervention or variable of interest]
- Comparison: [what's the control or comparison group]
- Key assumption: [what must be true for causal inference]

**OUTCOME MEASURES:**
- Primary: [main dependent variable]
- Secondary: [other outcomes measured]

**REPRODUCIBILITY ASSESSMENT:**
- Data availability: [Public / Restricted / Proprietary / Unknown]
- Code availability: [Mentioned / Not mentioned]
- Preregistration: [Yes / No / Unknown]

Abstract:
"""
[PASTE ABSTRACT]
"""

Expected Output: Structured methodology summary for comparison across papers.

Software Engineering Prompts

Prompt 6: Unit Test Generator

Use Case: Generate comprehensive test coverage fast

You are a senior software engineer writing unit tests following TDD principles.

Generate unit tests for the following function:

"""
[PASTE FUNCTION CODE]
"""

Test requirements:

  • Framework: [Jest / pytest / JUnit / etc.]
  • Coverage: [Aim for 100% line coverage]
  • Test types: Happy path, edge cases, error conditions

For each test:

Test [N]: [Descriptive name]

[Complete test code]

Purpose: [What this tests]
Expected result: [What should happen]

COVERAGE SUMMARY:

  • Happy path tests: [count]
  • Edge case tests: [count]
  • Error handling tests: [count]
  • Total coverage: [estimated %]

POTENTIAL GAPS: [What scenarios might still be untested]

Return: Complete, runnable test suite ready to paste into test file.


Test with:

```python
def calculate_discount(price, customer_type, quantity):
    """Apply discount based on customer type and quantity."""
    if customer_type == "VIP":
        discount = 0.20
    elif customer_type == "Regular":
        discount = 0.10
    else:
        discount = 0

    if quantity >= 10:
        discount += 0.05

    return price * (1 - discount)
```

Expected Output: 8-12 pytest tests covering all paths.
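
For reference, a few of the tests such a suite might include, sketched by hand (assumes calculate_discount is importable from a module named pricing):

```python
import pytest
from pricing import calculate_discount  # assumed module name

def test_vip_discount():
    # VIP customers get 20% off
    assert calculate_discount(100, "VIP", 1) == pytest.approx(80.0)

def test_regular_with_bulk_discount():
    # Regular (10%) plus bulk (5%) discounts stack to 15%
    assert calculate_discount(200, "Regular", 10) == pytest.approx(170.0)

def test_unknown_customer_no_discount():
    # Unrecognized customer types pay full price
    assert calculate_discount(50, "Guest", 2) == pytest.approx(50.0)
```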

Prompt 7: Code Documentation Generator

Use Case: Auto-document functions and modules

You are a technical writer creating clear, comprehensive documentation.

Generate documentation for the following code:

"""
[PASTE CODE]
"""

Documentation format: [Docstring / JSDoc / README / API reference]

Include:

FUNCTION SIGNATURE:

[Function with type annotations if applicable]

DESCRIPTION: [1-2 sentence summary of what it does]

PARAMETERS:

  • parameter_name ([type]): [Description, including valid values/ranges]
  • [Repeat for each parameter]

RETURNS:

  • [type]: [Description of return value]

RAISES/THROWS:

  • ExceptionType: [When this occurs]

EXAMPLES:

# Example 1: Basic usage
[Code example]
# Output: [expected output]

# Example 2: Edge case
[Code example]
# Output: [expected output]

NOTES:

  • [Any important implementation details]
  • [Performance considerations]
  • [Thread-safety, etc.]

SEE ALSO:

  • [Related functions]
  • [Documentation links]

Expected Output: Complete docstring/JSDoc ready to paste above function.

Prompt 8: Git Commit Message Generator

Use Case: Write clear, conventional commits every time

You are a developer writing conventional commit messages.

Generate a commit message for these changes:

**FORMAT REQUIRED:**

[type]([scope]): [short description]

[body]
- Bullet point detail 1
- Bullet point detail 2

[footer]

**Types:** feat, fix, docs, style, refactor, perf, test, chore

Changes:
"""
[PASTE git diff OR DESCRIBE CHANGES]
"""

Generate:
1. Properly formatted commit message
2. Breaking change flag if applicable
3. Issue references if mentioned

**COMMIT MESSAGE:**
[Generated message ready to paste]

**VALIDATION:**
- [ ] Type is correct for change nature
- [ ] Scope is specific and meaningful
- [ ] Description is imperative mood ("add" not "added")
- [ ] Body provides context and reasoning
- [ ] Footer includes issue references

Test with:

Changes:
- Modified src/auth/login.ts to add rate limiting
- Added Redis integration for tracking login attempts
- Updated tests to cover rate limit scenarios
- Added configuration for max attempts and timeout window

Expected Output:

feat(auth): add rate limiting to login endpoint

- Implement Redis-based request tracking
- Configure max attempts (5) and timeout (15 min)
- Add appropriate HTTP 429 responses
- Update unit and integration tests

Closes #342
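
If you want to feed staged changes into this prompt automatically, a small sketch using the standard git CLI and Python's subprocess module (the prompt string is trimmed for brevity; the full template above works the same way):

```python
import subprocess

# Trimmed version of the prompt above, with a {changes} placeholder
# where the triple-quoted Changes block goes.
COMMIT_PROMPT = (
    "You are a developer writing conventional commit messages.\n\n"
    "Generate a commit message for these changes:\n\n"
    'Changes:\n"""\n{changes}\n"""\n'
)

# Capture staged changes with a standard git command.
diff = subprocess.run(
    ["git", "diff", "--staged"],
    capture_output=True, text=True, check=True,
).stdout

prompt = COMMIT_PROMPT.format(changes=diff.strip() or "No staged changes")
print(prompt)  # paste into your AI tool or send through your API client
```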

Prompt 9: Code Refactoring Suggester

Use Case: Identify improvement opportunities

You are a senior developer conducting a refactoring review.

Analyze the code below for refactoring opportunities:

"""
[PASTE CODE]
"""

CODE SMELLS DETECTED:
For each smell:

  • Smell: [Name of anti-pattern]
  • Location: [Line numbers or function names]
  • Impact: [Maintainability / Performance / Readability]

REFACTORING RECOMMENDATIONS:

Priority 1 (Critical - affects functionality/performance):

  1. [Recommendation]
    • Current code: [snippet]
    • Refactored code: [improved snippet]
    • Benefit: [specific improvement]

Priority 2 (Important - affects maintainability):

  1. [Recommendation]
    • Current code: [snippet]
    • Refactored code: [improved snippet]
    • Benefit: [specific improvement]

Priority 3 (Nice-to-have - affects readability):

  1. [Recommendation]
    • Current code: [snippet]
    • Refactored code: [improved snippet]
    • Benefit: [specific improvement]

ESTIMATED EFFORT:

  • Priority 1: [X hours]
  • Priority 2: [Y hours]
  • Priority 3: [Z hours]
  • Total: [Total hours]

REFACTORING STRATEGY: [Suggested order and approach for making changes safely]


Expected Output: Prioritized refactoring plan with concrete code examples.

Prompt 10: API Endpoint Designer

Use Case: Design RESTful endpoints following best practices

You are an API architect designing RESTful endpoints.

Feature to design:
"""
[DESCRIBE FEATURE OR RESOURCE]
"""

Design the API:

**ENDPOINTS:**

**1. [HTTP METHOD] [PATH]**
- Purpose: [What this does]
- Request:
  {
    "field": "type (validation rules)"
  }
- Response (200):
  {
    "field": "type"
  }
- Error responses:
  - 400: [When and why]
  - 401: [When and why]
  - 404: [When and why]
  - 500: [When and why]

[Repeat for each endpoint]

API DESIGN PRINCIPLES APPLIED:

  • RESTful resource naming
  • Proper HTTP methods (GET/POST/PUT/DELETE)
  • Consistent response format
  • Appropriate status codes
  • Pagination for list endpoints
  • Filtering/sorting support
  • API versioning strategy

EXAMPLE USAGE:

# Create resource
curl -X POST https://api.example.com/v1/[resource] \
  -H "Content-Type: application/json" \
  -d '[request body]'

# Response
[expected response]

SECURITY CONSIDERATIONS:

  • [Auth requirement]
  • [Rate limiting]
  • [Input validation]

Test with: "Design an API for managing research paper collections. Users should be able to create collections, add/remove papers, tag collections, and search within collections."

Expected Output: Complete API specification ready for implementation.

Business Management Prompts

Prompt 6: Competitive Analysis Extractor

Use Case: Systematically analyze competitor information

You are a competitive intelligence analyst.

Extract structured intelligence from competitor information:

**COMPETITOR PROFILE:**
- Company name: [extracted]
- Primary product/service: [description]
- Target market: [who they serve]
- Founded: [year if available]

**POSITIONING:**
- Value proposition: [their main pitch]
- Key differentiators: [what makes them unique]
- Pricing strategy: [premium/mid/budget + structure]

**PRODUCT FEATURES:**
Core features (what we also have):
- [Feature 1]
- [Feature 2]

Unique features (what they have that we don't):
- [Feature 1: description]
- [Feature 2: description]

Missing features (what we have that they don't):
- [Feature 1]
- [Feature 2]

**CUSTOMER SIGNALS:**
- Target customer: [description]
- Customer pain points addressed: [list]
- Review themes (if available):
  - Positive: [common praise]
  - Negative: [common complaints]

**STRATEGIC INSIGHTS:**
- Market position: [Leader/Challenger/Niche player]
- Growth indicators: [funding, team size, etc.]
- Vulnerability: [potential weakness we could exploit]
- Threat level: [High/Medium/Low + reason]

**RECOMMENDED RESPONSE:**
[2-3 specific actions based on this intelligence]

Competitor information:
"""
[PASTE WEBSITE CONTENT, PRESS RELEASES, PRODUCT PAGES]
"""

Expected Output: Structured competitive profile ready for analysis.

Prompt 7: Customer Feedback Prioritizer

Use Case: Turn feedback into product roadmap

You are a product manager analyzing customer feedback for roadmap prioritization.

Analyze the feedback below:

**FEEDBACK CATEGORIZATION:**

**Feature Requests:**
For each unique request:
- Request: [description]
- Frequency: [mentioned by X customers]
- Customer segments: [who's asking]
- Revenue impact: [based on customer value]
- Estimated effort: [Small/Medium/Large]
- Priority score: [calculated from above]

**Bug Reports:**
For each unique bug:
- Issue: [description]
- Severity: [Critical/Major/Minor]
- Frequency: [affected customer count]
- Workaround exists: [Yes/No]
- Priority: [P0/P1/P2/P3]

**Experience Complaints:**
For each theme:
- Issue: [what's frustrating]
- Frequency: [how many customers]
- User journey stage: [where it occurs]
- Impact: [abandonment/frustration/delay]

**PRIORITIZATION MATRIX:**

**Must Do (High value, High frequency):**
1. [Item + reasoning]
2. [Item + reasoning]

**Should Do (High value OR High frequency):**
1. [Item + reasoning]

**Could Do (Medium value/frequency):**
1. [Item + reasoning]

**Won't Do (Low value, Low frequency):**
1. [Item + why we're deferring]

**RECOMMENDED ROADMAP:**
Next Sprint:
- [Items from Must Do]

Next Quarter:
- [Items from Should Do]

Backlog:
- [Items from Could Do]

Customer feedback:
"""
[PASTE SUPPORT TICKETS, SURVEY RESPONSES, INTERVIEWS]
"""

Expected Output: Prioritized product roadmap backed by customer data.

Prompt 8: Financial Report Summarizer

Use Case: Extract insights from earnings reports, 10-Ks

You are a financial analyst summarizing reports for executives.

Summarize the financial document below:

**EXECUTIVE SUMMARY (3 sentences):**
[Key takeaway for someone with 30 seconds]

**FINANCIAL PERFORMANCE:**
- Revenue: [Current period] ([% change vs prior])
- Profit: [Current period] ([% change vs prior])
- Key metrics: [List relevant KPIs with changes]

**STRATEGIC HIGHLIGHTS:**
- [Major initiative 1 and its impact]
- [Major initiative 2 and its impact]
- [Major initiative 3 and its impact]

**RISK FACTORS:**
- [Risk 1 and potential impact]
- [Risk 2 and potential impact]

**FORWARD-LOOKING STATEMENTS:**
- Guidance: [What they're projecting]
- Investment areas: [Where they're spending]
- Market outlook: [How they see the market]

**COMPETITIVE IMPLICATIONS:**
How does this affect our business?
- Opportunity: [What this reveals]
- Threat: [What we should watch]
- Neutral: [What doesn't change]

**ACTION ITEMS:**
Based on this report, we should:
1. [Specific action]
2. [Specific action]
3. [Specific action]

Document to analyze:
"""
[PASTE EARNINGS REPORT, 10-K SECTIONS, FINANCIAL STATEMENTS]
"""

Expected Output: Executive-ready summary with strategic implications.

Prompt 9: Meeting Notes to Action Items

Use Case: Turn rambling meeting transcripts into clear next steps

You are an executive assistant creating actionable meeting summaries.

Process the meeting transcript/notes below:

**MEETING METADATA:**
- Date: [extract or specify]
- Attendees: [list]
- Purpose: [1 sentence]

**DECISIONS MADE:**
1. [Decision with context]
2. [Decision with context]

**ACTION ITEMS:**
For each action:
- [ ] **Action:** [Specific task]
  - **Owner:** [Person responsible]
  - **Due:** [Date or timeline]
  - **Dependencies:** [What's needed first]
  - **Success criteria:** [How we'll know it's done]

**OPEN QUESTIONS:**
1. [Question that needs answering]
   - Assigned to: [Person]
   - Needed by: [Date]

**PARKING LOT:**
[Topics raised but deferred for later]

**NEXT MEETING:**
- Date: [if scheduled]
- Agenda: [based on open items]

**FOLLOW-UP EMAIL DRAFT:**

Subject: Action Items from [Meeting Name] - [Date]

Hi team,

Thanks for the productive discussion. Here are our action items:

[Formatted action items]

Please confirm your action items and flag any blockers.

Next meeting: [Date and agenda preview]

[Your name]


Meeting content:
"""
[PASTE TRANSCRIPT OR NOTES]
"""

Expected Output: Meeting summary + ready-to-send follow-up email.

Prompt 10: Market Sizing Estimator

Use Case: Quick TAM/SAM/SOM calculations

You are a market research analyst estimating market size.

Estimate the market for:
"""
[DESCRIBE PRODUCT/SERVICE]
"""

**TOP-DOWN APPROACH:**

Step 1: Total Addressable Market (TAM)
- Total potential customers: [number]
- Calculation: [how you got this number]
- Source/assumption: [basis for estimate]

Step 2: Serviceable Available Market (SAM)
- Realistic segment we can target: [number]
- Filters applied: [geographic, demographic, etc.]
- % of TAM: [percentage]

Step 3: Serviceable Obtainable Market (SOM)
- Market share we can capture: [number]
- Assumptions: [competitive factors, sales capacity]
- % of SAM: [percentage]

**REVENUE ESTIMATION:**
- Average revenue per customer: $[amount]
- Calculation: [pricing × usage]
- Total SOM revenue: $[amount]

**BOTTOM-UP VALIDATION:**
- Approach: [e.g., survey data, comparable companies]
- Result: [does this validate top-down?]

**MARKET GROWTH:**
- Current market growth rate: [%/year]
- Key drivers: [what's driving growth]
- 3-year projection: [where market will be]

**ASSUMPTIONS TO VALIDATE:**
Critical assumptions (test these first):
1. [Assumption 1 + how to validate]
2. [Assumption 2 + how to validate]

**CONFIDENCE LEVEL:**
- TAM: [High/Medium/Low] because [reason]
- SAM: [High/Medium/Low] because [reason]
- SOM: [High/Medium/Low] because [reason]

Available data:
"""
[PASTE ANY MARKET DATA, STATISTICS, COMPARABLE COMPANIES]
"""

Expected Output: Defensible market size estimate with clear assumptions.

Part 2 Checkpoint

You now have 15 prompts total:

  • 5 from Quick Start
  • 5 from Core Patterns
  • 5 from your primary domain

Verify each domain-specific prompt:

  • Produces outputs you can use in actual work
  • Saves meaningful time vs. manual approach
  • Consistent quality across multiple tests
  • Domain terminology and conventions correct

Part 3: Template System & Quality Rubric

You have 15 excellent prompts. Now build the system to customize, organize, and continuously improve them.

Universal Prompt Template Framework

Create a reusable template structure for all future prompts:

# Prompt Name: [DESCRIPTIVE TITLE]

## Purpose
[1 sentence: what this prompt does]

## Use Cases
- Use case 1
- Use case 2
- Use case 3

## Parameters
- Temperature: [0.0-1.0 recommended]
- Max tokens: [typical length]
- Model: [which AI models work best]

## Template

[START PROMPT]
You are a [ROLE] specializing in [DOMAIN].

[TASK DESCRIPTION]:
[Specific instructions]

[OUTPUT STRUCTURE]:
[Define format]

[QUALITY REQUIREMENTS]:
- Requirement 1
- Requirement 2

[INPUT PLACEHOLDER]:
"""
[WHERE USER PASTES CONTENT]
"""

[FINAL INSTRUCTION]:
[Any closing directives]
[END PROMPT]

## Test Cases

### Test 1: [Scenario]
Input: [Example input]
Expected output: [What good output looks like]

### Test 2: [Edge case]
Input: [Example input]
Expected output: [What good output looks like]

## Quality Metrics
- Metric 1: [How to measure success]
- Metric 2: [How to measure success]

## Version History
- v1.0 (YYYY-MM-DD): Initial version
- v1.1 (YYYY-MM-DD): [What changed and why]

## Related Prompts
- [Prompt X]: Use before this for [reason]
- [Prompt Y]: Use after this for [reason]

Action: Create this template file and save it as templates/universal-prompt-template.md in your library.

Prompt Library Organization System

Recommended Directory Structure:

prompt-library/
├── README.md                          # Library overview
├── templates/
│   └── universal-prompt-template.md  # Template from above
├── core-patterns/
│   ├── 01-chain-of-thought.md
│   ├── 02-few-shot-learning.md
│   ├── 03-parameter-tuned.md
│   ├── 04-zero-shot-role.md
│   └── 05-structured-output.md
├── economics/
│   ├── 01-regression-interpreter.md
│   ├── 02-literature-gap-finder.md
│   ├── 03-data-cleaning-validator.md
│   ├── 04-citation-formatter.md
│   └── 05-research-design-extractor.md
├── software/
│   ├── 01-unit-test-generator.md
│   ├── 02-code-documenter.md
│   ├── 03-commit-message-generator.md
│   ├── 04-refactoring-suggester.md
│   └── 05-api-designer.md
├── business/
│   ├── 01-competitive-analysis.md
│   ├── 02-feedback-prioritizer.md
│   ├── 03-financial-summarizer.md
│   ├── 04-meeting-notes-processor.md
│   └── 05-market-sizing.md
└── custom/
    └── [your specialized prompts]
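
If you would rather scaffold this structure with a script than create it by hand, a small sketch using only the standard library (folder names mirror the tree above):

```python
from pathlib import Path

FOLDERS = ["templates", "core-patterns", "economics", "software", "business", "custom"]

def scaffold(root="prompt-library"):
    """Create the library folders and a stub README if they don't exist yet."""
    base = Path(root)
    for folder in FOLDERS:
        (base / folder).mkdir(parents=True, exist_ok=True)
    readme = base / "README.md"
    if not readme.exists():
        readme.write_text("# Prompt Library\n\nOverview of prompts by category.\n")

scaffold()
```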

Quality Evaluation Rubric

Objective criteria to score prompt outputs (1-5 scale):

Universal Quality Rubric

Rate each output 1-5 on these dimensions:

1. Completeness (Weight: 25%)

  • 5 - Excellent: All requested sections present, every field populated or marked as unavailable, no placeholders
  • 4 - Good: All major sections present, 1-2 minor omissions, minimal placeholders
  • 3 - Acceptable: Most sections present, several missing elements, some placeholder content
  • 2 - Poor: Key sections missing, many gaps, significant placeholders
  • 1 - Unacceptable: Critical sections missing, mostly incomplete

2. Accuracy (Weight: 30%)

  • 5 - Excellent: All factual claims verifiable, no hallucinations, correct domain terminology
  • 4 - Good: 95%+ accuracy, minor terminology issues, no significant errors
  • 3 - Acceptable: 85-95% accuracy, some questionable claims, some terminology errors
  • 2 - Poor: 70-85% accuracy, multiple incorrect claims, frequent terminology mistakes
  • 1 - Unacceptable: <70% accuracy, major hallucinations, unusable output

3. Relevance (Weight: 20%)

  • 5 - Excellent: Directly addresses the request, no off-topic content, appropriate level of detail
  • 4 - Good: Mostly on-topic, minimal tangents, good detail level
  • 3 - Acceptable: Generally relevant, some off-topic sections, detail level acceptable
  • 2 - Poor: Partially off-topic, missing key aspects, wrong detail level
  • 1 - Unacceptable: Mostly irrelevant, doesn't address request

4. Usability (Weight: 15%)

  • 5 - Excellent: Ready to use immediately, perfect formatting, easy to parse/integrate
  • 4 - Good: Needs minor cleanup, formatting mostly correct, easy to use
  • 3 - Acceptable: Needs moderate editing, some formatting issues, usable with effort
  • 2 - Poor: Needs major rework, significant formatting problems, hard to use
  • 1 - Unacceptable: Unusable without complete rewrite

5. Consistency (Weight: 10%)

  • 5 - Excellent: Identical quality on repeat runs, reliable format, predictable output
  • 4 - Good: Very similar on repeat runs, minor variations, mostly predictable
  • 3 - Acceptable: Similar on repeat runs, noticeable variations, generally predictable
  • 2 - Poor: Inconsistent results, format varies, unpredictable
  • 1 - Unacceptable: Completely inconsistent

Quality Thresholds:

  • 4.5-5.0: Production ready, no changes needed
  • 4.0-4.4: Good, minor tweaks may help
  • 3.5-3.9: Acceptable, needs improvement
  • 3.0-3.4: Marginal, significant revision needed
  • <3.0: Unacceptable, redesign prompt
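
If you track scores in a script or spreadsheet, a minimal sketch of collapsing the five ratings into one weighted score (the example ratings are placeholders):

```python
WEIGHTS = {
    "completeness": 0.25,
    "accuracy": 0.30,
    "relevance": 0.20,
    "usability": 0.15,
    "consistency": 0.10,
}

def weighted_score(ratings):
    """ratings maps each dimension to a 1-5 rating; returns the weighted 1-5 score."""
    return round(sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS), 2)

# Placeholder ratings for one prompt's output
print(weighted_score({
    "completeness": 5, "accuracy": 4, "relevance": 5,
    "usability": 4, "consistency": 5,
}))  # 4.55, production ready per the thresholds above
```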

Rapid Customization Workflow

Process for adapting prompts to new tasks (under 5 minutes):

5-Minute Customization Checklist:

  1. Select Base Prompt (30 seconds): Identify which existing prompt is closest to new need, copy prompt to new file
  2. Modify Role & Domain (1 minute): Change role to match new context, update domain-specific terminology, adjust expertise level if needed
  3. Adapt Output Structure (2 minutes): Add/remove sections as needed, change format (JSON vs. markdown vs. CSV), update field names
  4. Update Examples (1 minute): Replace example inputs with new domain, update expected outputs, add domain-specific edge cases
  5. Test & Iterate (30 seconds): Run with real input, score with quality rubric, make one quick refinement if needed, save as new prompt variant

Example Customization:

Base Prompt: Literature Gap Identifier (Economics)
New Need: Technology Gap Identifier (Software)

Changes:

  1. Role: "research strategist" → "technology strategist"
  2. Domain: "papers" → "tools/frameworks/approaches"
  3. Output: "Methodological gaps" → "Technical capability gaps"
  4. Examples: Economics papers → Software engineering blog posts

Result: Functional new prompt in 4 minutes.

Part 3 Checkpoint

You now have a complete prompt library system:

  • 15-20 prompts organized by category
  • Universal template for creating new prompts
  • Directory structure for organization
  • Quality rubric for objective evaluation
  • Customization workflow (under 5 minutes)
  • Version control system

Core Build Complete

Time invested: 45-60 minutes
Value created: Production-ready prompt library that will save hours every week

What you've mastered:

  • All major prompting patterns (chain-of-thought, few-shot, parameter tuning, role assignment, structured output)
  • Domain-specific prompt engineering
  • Systematic prompt organization
  • Objective quality evaluation
  • Rapid customization and iteration

Immediate Next Steps

  1. Test your library: Use at least 5 different prompts on real work today
  2. Measure impact: Track time saved on first 10 uses
  3. Iterate: Use quality rubric to improve lowest-scoring prompts
  4. Expand: Add domain-specific prompts as you encounter new use cases

Continue to Domain Applications to see these prompts in complete, realistic workflows →

Quality Verification

Before moving forward, verify your library meets production standards:

Completeness Check

  • All 20 prompts created
  • Each prompt has clear purpose
  • All prompts tested with real inputs
  • Directory structure established

Quality Check

  • Average quality score across library: _____ / 5.0
  • No prompts scoring below 3.5
  • At least 5 prompts scoring 4.5+
  • All prompts produce usable outputs

Organization Check

  • README.md created and populated
  • Prompts categorized logically
  • Template system documented
  • Quality rubric saved

Usability Check

  • Can find any prompt in under 30 seconds
  • Can customize a prompt in under 5 minutes
  • Can evaluate output quality in under 2 minutes
  • Can share prompts with teammates easily

Target: 100% on all checks

If you're missing any checklist items, spend 5-10 minutes completing them before proceeding. Your library is a long-term investment—get the foundation right.

Your 20-prompt library is complete and production-ready. Time to put it to work in realistic domain workflows.