Economics Applications: Research-Specific Prompts
5 specialized prompts for economics research - literature reviews, data analysis, citations, and more
Economics Research: Real-World Applications
Goal: Apply the 5 economics prompts to complete realistic research workflows, demonstrating measurable time savings and quality improvements.
Powerful individual prompts become even more valuable when combined into workflows that solve actual research problems. These scenarios reflect real tasks economics researchers face weekly—literature reviews, data validation, paper analysis, and writing.
Workflow 1: Rapid Literature Review
Real-World Scenario: Starting a research project on remote work and productivity. The goal is to quickly understand what's established, what's debated, and where gaps exist.
Traditional Approach: 3-5 hours to read 10 papers, extract key findings, identify gaps
With Specialized Prompts: 15 minutes
Extract Research Designs (5 minutes)
Use Prompt 10: Abstract → Research Design Extractor on each paper.
Input for Paper 1:
Abstract:
We examine the impact of remote work on employee productivity using a randomized
controlled trial conducted at a Fortune 500 company. 823 employees were randomly
assigned to either work-from-home (treatment) or office-based (control) conditions
for 6 months. Using detailed productivity metrics and time-use data, we find that
remote workers were 13% more productive (p<0.01). Administrative data from 2019-2020.
Data available via company partnership. Analysis code provided in supplementary materials.
Output:
RESEARCH QUESTION:
Does remote work affect employee productivity?
RESEARCH DESIGN:
- Type: Experimental
- Specific method: Randomized controlled trial (RCT)
DATA:
- Sample: Fortune 500 company employees
- Sample size: 823
- Time period: 2019-2020 (6-month treatment)
- Geographic scope: United States (implied)
- Data source: Administrative records, productivity metrics
IDENTIFICATION STRATEGY:
- Treatment: Work-from-home assignment
- Comparison: Office-based work (control group)
- Key assumption: Random assignment ensures treatment/control groups are comparable
OUTCOME MEASURES:
- Primary: Productivity metrics (13% increase)
- Secondary: Time-use data
REPRODUCIBILITY ASSESSMENT:
- Data availability: Restricted (company partnership required)
- Code availability: Mentioned (supplementary materials)
- Preregistration: Unknown
Action: Repeat this for all 10 papers by running the prompt 10 times in batch.
Time saved: From 30+ minutes of manual reading to 5 minutes of systematic extraction.
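If pasting 10 abstracts by hand gets tedious, the extraction step can be scripted. The sketch below is a minimal, hypothetical batching loop using the Anthropic Python SDK; the folder layout (abstracts/*.txt), the shortened prompt wording, and the model string are illustrative assumptions, not part of the prompt library itself.

```python
# Minimal batching sketch: run the "Abstract -> Research Design Extractor"
# prompt over a folder of plain-text abstracts using the Anthropic Python SDK
# (pip install anthropic; ANTHROPIC_API_KEY in the environment). The folder
# layout, shortened prompt wording, and model string are illustrative.
from pathlib import Path

import anthropic

EXTRACTOR_PROMPT = """You extract the research design from an economics abstract.
Return RESEARCH QUESTION, RESEARCH DESIGN, DATA, IDENTIFICATION STRATEGY,
OUTCOME MEASURES, and REPRODUCIBILITY ASSESSMENT as short labeled blocks.

Abstract:
{abstract}"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

extracts = []
for path in sorted(Path("abstracts").glob("*.txt")):  # one abstract per file
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # substitute whichever model you use
        max_tokens=1024,
        messages=[{"role": "user", "content": EXTRACTOR_PROMPT.format(abstract=path.read_text())}],
    )
    extracts.append(f"## {path.stem}\n{response.content[0].text}")

# One combined file, ready to paste into the gap-identifier prompt in Step 2.
Path("research_design_extracts.md").write_text("\n\n".join(extracts))
```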
Identify Literature Gaps (5 minutes)
Use Prompt 7: Literature Gap Identifier on all 10 extracted summaries.
Input:
Papers to analyze:
[Paste all 10 research design extracts from Step 1]
1. Remote Work RCT (Chen et al., 2023): 13% productivity increase, Fortune 500, N=823
2. Hybrid Work Observational (Rodriguez, 2022): Mixed results by occupation, Survey N=2,400
3. WFH in Academia (Johnson et al., 2024): 8% decrease, UK universities, N=156 faculty
[... etc for all 10 papers]
Output:
ESTABLISHED FINDINGS:
What do we now know? (consensus findings across multiple papers)
- Finding 1: Remote work effects vary significantly by occupation type (6 of 10 papers)
- Finding 2: Productivity measurement methodology strongly influences results (5 of 10 papers)
- Finding 3: Short-term effects (<6 months) may differ from long-term (4 of 10 papers)
CONTRADICTIONS:
Where do papers disagree?
- Contradiction 1: Chen (2023) found 13% productivity increase, but Johnson (2024) found 8% decrease
- Possible reasons: Different sectors (corporate vs. academic), measurement approaches (objective metrics vs. self-report), sample characteristics (tech workers vs. faculty)
METHODOLOGICAL GAPS:
What methods haven't been tried?
- Gap 1: Only 2 of 10 studies use experimental designs (RCTs); rest are observational
- Gap 2: No studies exceed 12 months duration; long-term effects unknown
- Gap 3: No studies examine heterogeneous effects by worker characteristics (age, experience, family status)
CONTEXTUAL GAPS:
What populations/regions/time periods are understudied?
- Gap 1: Only 1 study from developing countries; 8 are US/UK-based
- Gap 2: No studies of remote work in manufacturing or service sectors (all knowledge work)
- Gap 3: All studies are pre-2025; post-pandemic norms may differ
THEORETICAL GAPS:
What mechanisms or explanations haven't been explored?
- Gap 1: Little research on why productivity effects vary (commute time savings? Reduced distractions? Something else entirely?)
- Gap 2: No examination of spillover effects (family, health, local economy)
- Gap 3: Team dynamics and collaboration quality understudied (mostly individual-level analysis)
HIGHEST-VALUE RESEARCH QUESTIONS:
Based on these gaps, what 3 studies would most advance the field?
1. Long-term RCT (2+ years) with heterogeneous effect analysis by worker demographics
- Why it matters: Current evidence limited to short-term; policy needs long-run estimates
- Suggested method: Multi-year RCT with rich demographic data and mechanism surveys
2. Developing country context study using administrative productivity data
- Why it matters: Vast majority of workers globally are outside US/UK samples
- Suggested method: Partner with large employer in India/Brazil/Nigeria for quasi-experimental design
3. Mechanism study examining causal pathways (commute savings vs. distraction reduction vs. scheduling flexibility)
- Why it matters: Understanding why effects occur enables better policy design
- Suggested method: Factorial RCT varying specific remote work components, mediation analysis
What This Achieves: Comprehensive understanding of 10 papers in 10 minutes versus 2-3 hours, clear identification of research contribution, and ready-to-use gap analysis for grant proposals or paper introductions.
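If you used the batching sketch above, the same pattern extends to this step: concatenate the saved extracts and send them to the gap-identifier prompt in one call. Again, the prompt wording and file names are illustrative placeholders.

```python
# Continuation of the batching sketch: feed the combined extracts to the
# gap-identifier prompt in a single call. Prompt wording and file names are
# illustrative placeholders.
from pathlib import Path

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

gap_prompt = (
    "Identify established findings, contradictions, methodological gaps, "
    "contextual gaps, theoretical gaps, and the 3 highest-value research "
    "questions across these papers:\n\n"
    + Path("research_design_extracts.md").read_text()
)
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute whichever model you use
    max_tokens=2048,
    messages=[{"role": "user", "content": gap_prompt}],
)
Path("literature_gaps.md").write_text(response.content[0].text)
```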
Format Citations for Writing (5 minutes)
Use Prompt 9: Citation Formatter & Checker to prepare the bibliography.
Input:
Task: Format as APA 7th
Sources to format:
Chen J and Rodriguez M, "Remote Work and Productivity: Evidence from a Randomized Experiment" Journal of Labor Economics 2023 vol 41 issue 2 pages 234-267 DOI: 10.1086/723456
Johnson et al 2024 Work from Home in Academia: productivity effects British Journal of Industrial Relations v62(3) pp445-478
rodriguez 2022 "hybrid work arrangements" survey evidence Management Science vol 40 pages 1234-1256
[... paste remaining 7 papers in rough format]
Output:
FORMATTED CITATIONS:
Chen, J., & Rodriguez, M. (2023). Remote work and productivity: Evidence from a randomized experiment. Journal of Labor Economics, 41(2), 234-267. https://doi.org/10.1086/723456
Johnson, T., Williams, A., & Lee, S. (2024). Work from home in academia: Productivity effects. British Journal of Industrial Relations, 62(3), 445-478.
Rodriguez, M. (2022). Hybrid work arrangements: Survey evidence. Management Science, 40, 1234-1256.
[... remaining citations properly formatted]
FLAGGED ISSUES FOR MANUAL REVIEW:
Paper 3 (Rodriguez, 2022):
- POTENTIAL ISSUE: Volume number 40 seems too low for Management Science (the journal's 2022 volume was in the high 60s)
- RECOMMENDATION: Verify volume/issue number in original source
Paper 5 (Author name unclear):
- MISSING INFORMATION: First author initial not provided in rough citation
- RECOMMENDATION: Check original paper for complete author name
Paper 8 (Journal name abbreviated):
- VALIDATION CHECK: "AER" expanded to "American Economic Review" - verify this is the correct journal
Time saved: From 20-30 minutes of manual formatting to 5 minutes, with automated error detection.
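Before handing rough citations to the formatter, a quick mechanical pre-screen can tell you which entries are missing a year or DOI and will need manual verification anyway. The sketch below is a simple heuristic check, not a substitute for the prompt's own flagging; the two sample citations are copied from the input above.

```python
# Minimal pre-screen of rough citations before running the formatter prompt.
# Heuristic only: it flags entries missing an obvious year or DOI so you know
# what to verify against the original source.
import re

rough_citations = [
    'Chen J and Rodriguez M, "Remote Work and Productivity: Evidence from a '
    'Randomized Experiment" Journal of Labor Economics 2023 vol 41 issue 2 '
    "pages 234-267 DOI: 10.1086/723456",
    'rodriguez 2022 "hybrid work arrangements" survey evidence '
    "Management Science vol 40 pages 1234-1256",
]

YEAR = re.compile(r"\b(19|20)\d{2}\b")
DOI = re.compile(r"\b10\.\d{4,9}/\S+")

for i, citation in enumerate(rough_citations, start=1):
    problems = []
    if not YEAR.search(citation):
        problems.append("no publication year found")
    if not DOI.search(citation):
        problems.append("no DOI found")
    if problems:
        print(f"Citation {i}: verify manually ({', '.join(problems)})")
```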
Workflow 1 Results
Time Investment: 15 minutes total
Traditional Time: 3-5 hours
Time Saved: 165-285 minutes (91-95% reduction)
Quality Improvements:
- ✅ Systematic extraction (no overlooked papers)
- ✅ Structured gap analysis (ready for writing)
- ✅ Properly formatted bibliography (APA compliant)
- ✅ Flagged citation errors before submission
Immediate Use Cases: Writing literature review sections, preparing grant proposals, designing follow-on studies, onboarding to new research areas.
Workflow 2: Data Quality Audit Before Analysis
Real-World Scenario: Received a panel dataset of 5,000 workers across 50 firms over 10 years. Before running regressions, validate data quality to prevent analytical disasters.
Traditional Approach: 45-60 minutes of manual inspection and writing validation rules in Stata/R
With Specialized Prompts: 8 minutes with comprehensive checklist
Quick Data Quality Assessment (8 minutes)
Use Prompt 8: Data Cleaning Validator
Input:
Dataset description:
Panel dataset of worker productivity from 2013-2022. Variables include worker_id,
firm_id, year, age, education, wage, hours_worked, productivity_score, remote_work_dummy.
Data from 5,000 workers across 50 firms. Source: Company HR records merged with
productivity tracking software.
Sample of data:
worker_id,firm_id,year,age,education,wage,hours_worked,productivity_score,remote_work
1001,5,2015,34,Bachelor,65000,40,87.3,0
1001,5,2016,35,Bachelor,68000,40,89.1,0
1001,5,2017,36,Bachelor,72000,42,91.2,1
1002,12,2015,28,Master,78000,45,76.5,0
1002,12,2016,29,Master,NA,40,82.1,0
1002,12,2017,30,Master,85000,40,NA,1
1003,12,2015,-5,PhD,95000,35,94.2,0
1004,5,2015,45,high school,52000,168,45.3,0
1005,8,2015,31,Bachelor,67000,40,88.7,0
1005,8,2015,31,Bachelor,67000,40,88.7,0
Output:
COMPLETENESS CHECK:
- Missing values: wage (2.3%), productivity_score (1.8%), remote_work (0.1%)
- Systematic missingness: Wage missingness concentrated in 2015-2016 (possible system migration issue?)
- Imputation recommendation:
* For wage: Forward-fill within worker_id (use prior year's wage + inflation adjustment)
* For productivity_score: DO NOT impute - may introduce bias. Flag observations and run robustness check excluding them
* For remote_work: Assume 0 (office work) for missing values if pre-2020, manual review for post-2020
VALIDITY CHECK:
- Out-of-range values:
* Age = -5 for worker_id 1003 (impossible - likely data entry error)
* hours_worked = 168 for worker_id 1004 (168 = 24 hours × 7 days, i.e., every hour in the week; likely monthly hours reported instead of weekly, or a data entry error)
* productivity_score should be 0-100 scale - verify max/min values
- Logical inconsistencies:
* Worker 1005 appears twice in 2015 (duplicate row - likely merge error)
* Education should be categorical but appears as text with inconsistent capitalization ("Bachelor" vs "high school")
- Timestamp issues: None detected in sample
CONSISTENCY CHECK:
- Coding inconsistencies:
* Education: "Bachelor" vs "high school" vs "Master" vs "PhD" - inconsistent capitalization
* RECOMMENDATION: Standardize to title case or create numeric codes (1=High School, 2=Bachelor, 3=Master, 4=PhD)
- Unit inconsistencies:
* hours_worked appears to mix reporting units (40-45 are plausible weekly totals; 168 looks like monthly hours or a data entry error)
* RECOMMENDATION: Check firm-level patterns; may need firm-specific conversion
- Format inconsistencies:
* remote_work coded as 0/1 but column header says "remote_work" not "remote_work_dummy" - verify interpretation
STATISTICAL FLAGS:
- Outliers:
* hours_worked = 168 (99.9th percentile)
* Age = -5 (impossible outlier - ERROR)
* wage range needs verification (min/max check against industry standards)
- Unexpected distributions:
* Duplicate row for worker 1005 suggests possible broader duplication issue
* Check for duplicate worker_id × year combinations across full dataset
- Correlation oddities:
* Cannot assess from 10-row sample - recommend correlation matrix on full data
RECOMMENDED CLEANING STEPS:
Priority 1 (must fix before analysis):
1. Remove duplicate row: Drop one of the two identical observations for worker_id=1005, year=2015 (keep the first)
2. Fix impossible age: Investigate worker_id=1003 age=-5 (check source data, likely birth year miscoded)
3. Verify hours_worked units: Check with data provider if some firms report weekly vs daily hours
4. Standardize education coding: Create categorical variable with consistent levels
Priority 2 (should fix):
1. Handle wage missingness: Implement forward-fill within worker + inflation adjustment
2. Flag productivity_score missingness: Create indicator variable for missing productivity (use in robustness)
3. Standardize text fields: Convert education to title case before creating categories
4. Run duplicate check: Verify no other duplicate worker_id × year combinations exist
Priority 3 (nice to have):
1. Create derived variables: tenure = year - first_year_observed
2. Add validation flags: Create indicator variables for imputed/corrected values
3. Generate summary statistics by firm: Check for firm-level data quality patterns
DATA QUALITY SCORE: 68/100
Justification:
- Major issues: Impossible values (-5 age), duplicates, unit inconsistencies reduce score significantly
- Missing data: Moderate (~2%) but non-random patterns concerning
- Fixable problems: Most issues are correctable with Priority 1-2 steps
- After cleaning: Expected score would rise to 85-90/100
What This Achieves: Identified critical errors like negative age and duplicate rows that would invalidate results, prioritized fixes from must-do to nice-to-have, provided specific implementation guidance, and documented data quality issues for the methods section.
Prevented Disasters: Duplicate rows would understate standard errors, the impossible age would distort age-related coefficients, and unit inconsistencies would produce nonsense coefficients for hours worked.
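The Priority 1 steps translate directly into a few lines of pandas. The sketch below runs them on the 10-row sample from the input above; for the real file you would replace the StringIO block with pd.read_csv on your own path (file name hypothetical), and the thresholds (plausible age range, maximum weekly hours) are assumptions to adjust for your setting.

```python
# Priority 1 cleaning checks from the validator output, sketched in pandas.
# Uses the 10-row sample above via StringIO; swap in pd.read_csv("your_file.csv")
# for the real panel (file name hypothetical). Thresholds are assumptions.
from io import StringIO

import pandas as pd

SAMPLE = """worker_id,firm_id,year,age,education,wage,hours_worked,productivity_score,remote_work
1001,5,2015,34,Bachelor,65000,40,87.3,0
1001,5,2016,35,Bachelor,68000,40,89.1,0
1001,5,2017,36,Bachelor,72000,42,91.2,1
1002,12,2015,28,Master,78000,45,76.5,0
1002,12,2016,29,Master,NA,40,82.1,0
1002,12,2017,30,Master,85000,40,NA,1
1003,12,2015,-5,PhD,95000,35,94.2,0
1004,5,2015,45,high school,52000,168,45.3,0
1005,8,2015,31,Bachelor,67000,40,88.7,0
1005,8,2015,31,Bachelor,67000,40,88.7,0
"""

df = pd.read_csv(StringIO(SAMPLE))

# 1. Duplicate worker-year rows: keep the first occurrence, drop the rest.
dups = df.duplicated(subset=["worker_id", "year"], keep="first")
print(f"Dropping {dups.sum()} duplicate worker-year row(s)")
df = df[~dups].copy()

# 2. Impossible ages: flag rather than silently fix, so the source can be checked.
df["age_flag"] = ~df["age"].between(16, 80)
print(df.loc[df["age_flag"], ["worker_id", "year", "age"]])

# 3. Suspicious hours: anything above a plausible weekly maximum.
df["hours_flag"] = df["hours_worked"] > 80
print(df.loc[df["hours_flag"], ["worker_id", "year", "hours_worked"]])

# 4. Standardize education coding into an ordered categorical variable.
levels = ["High School", "Bachelor", "Master", "PhD"]
df["education"] = df["education"].str.strip().str.title().replace({"Phd": "PhD"})
df["education"] = pd.Categorical(df["education"], categories=levels, ordered=True)
print(df["education"].value_counts(dropna=False))
```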
Workflow 2 Results
Time Investment: 8 minutes
Traditional Time: 45-60 minutes
Time Saved: 37-52 minutes (82-87% reduction)
Quality Improvements:
- ✅ Caught critical errors before running regressions
- ✅ Prioritized cleaning steps efficiently
- ✅ Documented quality issues for transparency
- ✅ Prevented downstream analytical errors
Workflow 3: Regression Results Communication
Real-World Scenario: Completed a wage regression and need to explain results to a policy audience with non-technical stakeholders.
Traditional Approach: 20-30 minutes to write a plain-language interpretation
With Specialized Prompts: 5 minutes with policy implications
Interpret Results for Policy Brief (5 minutes)
Use Prompt 6: Regression Results Interpreter
Input:
Interpret the regression results below:
Regression output:
Dependent Variable: log_hourly_wage
Independent Variables:
years_education: 0.089 (SE: 0.012, p<0.001)
experience: 0.034 (SE: 0.008, p<0.01)
experience_squared: -0.0006 (SE: 0.0002, p<0.05)
female: -0.152 (SE: 0.031, p<0.001)
remote_work: 0.071 (SE: 0.028, p<0.05)
R-squared: 0.342
N: 2,456
Controls: industry (15 categories), region (5 categories), year fixed effects
Standard errors: Clustered by firm
Output:
PLAIN-LANGUAGE SUMMARY:
This analysis examines how education, experience, gender, and remote work affect wages, controlling for industry, location, and time trends. Using data from 2,456 workers, we find that education and experience significantly increase wages, women earn less than men even after controlling for qualifications, and remote work is associated with higher wages. The model explains about one-third of wage variation.
KEY FINDINGS:
Variable: Years of Education
- Effect: Each additional year of schooling increases wages by approximately 8.9%
- Significance: p<0.001 (highly statistically significant)
- Practical meaning: A bachelor's degree (4 additional years) over high school corresponds to roughly 0.36 log points, or about 43% higher wages, holding other factors constant
Variable: Experience
- Effect: Each year of experience increases wages by 3.4%, but this effect diminishes over time (experience squared is negative)
- Significance: p<0.01 for experience, p<0.05 for experience squared
- Practical meaning: Early-career wage growth is steeper than mid- or late-career growth. Predicted wages peak at roughly 28 years of experience (the turning point of the quadratic, 0.034 / (2 × 0.0006) ≈ 28), after which additional experience is associated with slightly lower wages.
Variable: Female
- Effect: Women earn approximately 15.2% less than men, all else equal
- Significance: p<0.001 (highly statistically significant)
- Practical meaning: This represents a substantial unexplained gender wage gap even after controlling for education, experience, industry, and location. This gap could reflect discrimination, unmeasured differences in job roles, negotiation patterns, or other factors not captured in the data.
Variable: Remote Work
- Effect: Workers with remote work arrangements earn approximately 7.1% more
- Significance: p<0.05 (statistically significant)
- Practical meaning: Remote work access is associated with higher wages. This could reflect higher-skilled workers having remote options, compensating differentials, or productivity effects. Causality unclear from this observational data.
MODEL QUALITY:
- R-squared: 0.342 (34.2% of wage variation explained)
- This is moderate explanatory power. About 66% of wage variation remains unexplained, likely due to unobserved factors like specific job roles, individual productivity, firm-specific wage policies, and measurement error.
- Sample size: N=2,456 is adequate for detecting effects of this magnitude with reasonable precision
- Key controls: Industry (15 categories), region (5), and year fixed effects account for sectoral differences, geographic cost-of-living, and time trends
LIMITATIONS:
What this model CANNOT tell us:
1. Causality for remote work: Association observed, but we cannot conclude remote work causes higher wages (could be reverse: high-wage workers get remote access)
2. Mechanisms for gender gap: Model shows gap exists but cannot identify whether it's discrimination, occupational sorting, hours flexibility, or other unmeasured factors
3. Long-term effects: Cross-sectional data cannot reveal how wages evolve within individuals over careers
Causality: Can we claim causal effects? Why/why not?
- Education and experience: Likely causal, but could have omitted variable bias (ability, family background)
- Gender: Descriptive gap, not necessarily discrimination (need audit studies or within-job comparisons)
- Remote work: Definitely NOT causal - selection bias concerns (who gets remote access?)
- For causal estimates, would need random assignment (experiment) or credible quasi-experimental design (IV, RDD, DiD)
POLICY IMPLICATIONS:
If this model is credible, what actions might it suggest? (2-3 specific recommendations)
1. Education Investment: With 9% wage returns per year of schooling, policies subsidizing education (tuition assistance, student loan programs) have strong economic justification. Community college and vocational training programs may be particularly cost-effective.
2. Gender Wage Gap Investigation: The persistent 15% gap after controlling for qualifications warrants deeper investigation. Policy responses could include pay transparency requirements, salary negotiation training for women, or stronger enforcement of equal pay laws. However, more research needed to identify mechanisms.
3. Remote Work Access Equity: If remote work genuinely increases productivity (and thus wages), unequal access could exacerbate wage inequality. Policymakers might consider: (a) ensuring remote work opportunities are not concentrated among already high-wage workers, (b) investing in broadband infrastructure to enable remote work in underserved areas, (c) studying whether formalizing remote work rights reduces wage disparities.
What This Achieves: Translated statistical jargon into actionable insights, identified limitations honestly with causal inference concerns, generated specific and defensible policy recommendations, and created ready-to-paste content for policy briefs or presentations.
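Before numbers like these go into a policy brief, it is worth recomputing them from the coefficients rather than trusting the interpretation blindly. The short sketch below converts the log-wage coefficients into exact percent effects and recovers the experience turning point; the coefficient values are taken from the regression output above.

```python
# Sanity-check the headline numbers before they go into a policy brief.
# Coefficients are taken directly from the regression output above.
import math

b_educ = 0.089
b_exp, b_exp2 = 0.034, -0.0006
b_female, b_remote = -0.152, 0.071

def pct(log_points: float) -> float:
    """Exact percent change implied by a coefficient in a log-wage regression."""
    return (math.exp(log_points) - 1) * 100

print(f"One more year of schooling:  {pct(b_educ):+.1f}%")
print(f"Bachelor's vs. high school:  {pct(4 * b_educ):+.1f}%  (4 x 0.089 log points)")
print(f"Female coefficient:          {pct(b_female):+.1f}%")
print(f"Remote work (association):   {pct(b_remote):+.1f}%")

# Turning point of the experience profile: where d(log wage)/d(experience) = 0.
turning_point = -b_exp / (2 * b_exp2)
print(f"Predicted wages peak at about {turning_point:.0f} years of experience")
```

For small coefficients the exact conversion is close to the "coefficient × 100" shorthand, but for larger combined effects (such as four years of schooling) the difference is several percentage points.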
Workflow 3 Results
Time Investment: 5 minutes
Traditional Time: 20-30 minutes
Time Saved: 15-25 minutes (75-83% reduction)
Quality Improvements:
- ✅ Plain-language explanations accessible to non-technical audiences
- ✅ Explicit discussion of causal inference limitations
- ✅ Specific, actionable policy recommendations
- ✅ Transparent about model limitations
Use Cases: Policy briefs for government agencies, executive summaries for research reports, grant applications demonstrating impact, and teaching students how to interpret results.
Cumulative Time Savings: Economics Workflows
Total Time Comparison
| Workflow | Traditional | With Prompts | Time Saved | Savings % |
|---|---|---|---|---|
| Literature Review (10 papers) | 180-300 min | 15 min | 165-285 min | 91-95% |
| Data Quality Audit | 45-60 min | 8 min | 37-52 min | 82-87% |
| Results Interpretation | 20-30 min | 5 min | 15-25 min | 75-83% |
| TOTAL | 245-390 min | 28 min | 217-362 min | 89-93% |
Translation: Tasks that used to take 4-6.5 hours now take 28 minutes.
Before/After Quality Comparison
Before (Manual Approach)
Literature Review: Inconsistent note-taking across papers, easy to miss methodological details when skimming, gap analysis relies on memory or incomplete notes, and citation formatting errors slip through.
Data Cleaning: Ad-hoc checks might miss systematic issues, no prioritization of what to fix first, undocumented cleaning decisions, and easy to overlook subtle inconsistencies.
Results Communication: Technical jargon persists in drafts, causal language often too strong, policy implications vague or missing, and time pressure leads to shortcuts.
After (Prompt-Based Workflow)
Literature Review: Standardized extraction across all papers, systematic methodology documentation, structured gap analysis with evidence, and automated citation formatting with error detection.
Data Cleaning: Comprehensive systematic audit, clear priority levels (P1/P2/P3), documented quality score and rationale, and catches edge cases and subtle patterns.
Results Communication: Plain-language summaries by default, explicit causal inference caveats, specific evidence-based policy recommendations, and consistent quality across all outputs.
Integration with Your Research Workflow
Weekly Research Routine
Monday Morning (30 minutes):
Run literature search on Google Scholar and EconLit, extract 10-15 new papers using Prompt 10, update literature gap analysis using Prompt 7, and add formatted citations using Prompt 9.
Mid-Week Data Work (variable):
Before any new analysis, run Prompt 8 on the dataset, fix Priority 1 issues immediately, and document data quality scores in a methods log.
Friday Afternoon (when writing):
Interpret new regression results using Prompt 6, draft policy implications sections, and update research presentations with plain-language findings.
Monthly Impact
Conservative Estimate:
- 4 literature reviews per month: 4 × 165 min saved = 660 minutes saved
- 2 data quality audits: 2 × 40 min saved = 80 minutes saved
- 8 results interpretations: 8 × 20 min saved = 160 minutes saved
Total: 900 minutes saved per month = 15 hours
Annual Impact: 180 hours saved (equivalent to 4.5 work weeks)
Extending These Workflows
Add Research Question Refinement
Use Chain-of-Thought Reasoning (Core Build Prompt 1) to refine broad research ideas.
Example: "Should I study remote work effects on productivity?" leads to structured analysis identifying specific populations, mechanisms, identification strategies, and feasibility constraints, resulting in a focused research question like "Does remote work affect early-career versus mid-career productivity differently, and what role does managerial oversight play?"
Combine with Regression Design
Use Few-Shot Learning (Core Build Prompt 2) with examples of excellent econometric specifications to generate appropriate control variables, robustness checks to run, and placebo tests for identification strategies.
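A minimal sketch of what that few-shot prompt might look like is below. The two example specifications are generic illustrations of familiar designs, and the final research question is a placeholder; substitute exemplars from your own subfield.

```python
# Skeleton of a few-shot prompt for regression design. The two example
# specifications are generic illustrations of familiar designs; replace them
# with exemplars from your own subfield, and swap in your research question.
FEW_SHOT_SPEC_PROMPT = """You design econometric specifications. Follow the style of the examples.

Example 1
Question: Does the minimum wage affect teen employment?
Specification: County-level difference-in-differences with state-by-year fixed
effects; controls for local labor demand; event-study plots and border-county
comparisons as robustness checks; placebo test on workers far above the minimum.

Example 2
Question: Does class size affect test scores?
Specification: Regression discontinuity around statutory class-size caps;
controls for school demographics; bandwidth and donut-hole robustness checks;
placebo cutoffs as falsification tests.

Now design a specification in the same format.
Question: {research_question}
Specification:"""

prompt = FEW_SHOT_SPEC_PROMPT.format(
    research_question="Does remote work affect early-career productivity?"
)
print(prompt)  # paste into Claude, or send it through the API loop sketched earlier
```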
Scale with Automation
Next step (Course T1.2): Use Claude Projects to process 50+ papers automatically by chaining these prompts.
Success Criteria Checklist
Verify successful completion of the economics research application:
- Workflow 1 Complete: Extracted and analyzed 10 papers in under 20 minutes
- Workflow 2 Complete: Audited dataset quality and generated cleaning checklist
- Workflow 3 Complete: Interpreted regression results for policy audience
- Time Savings Documented: Measured actual time versus traditional approach
- Quality Maintained: Outputs meet publication/presentation standards
- Prompts Customized: At least one economics prompt adapted to specific research area
- Integration Planned: Identified where prompts fit in weekly workflow
If all checked: Successfully deployed the prompt library in production economics research. These workflows will save 15+ hours monthly.
If incomplete: Spend 5-10 more minutes completing the workflow that was skipped. The value is in actually using the prompts on real work, not just reading about them.
What's Next
The same principles demonstrated here apply to software engineering (Course T1.1 Domain Apps - Software) and business management (Course T1.1 Domain Apps - Business).
Immediate Actions:
Apply these exact workflows to current research projects, track time saved on first 5 uses, customize prompts for specific subfields (labor, macro, development), and share successful prompts with research groups.
Next Courses:
- T1.2: Claude Projects for Research - Automate these workflows at scale (50+ papers simultaneously)
- T2.1: Command-Line AI Tools - Run prompts from terminal, integrate with scripts
- T3.2: Paper Processing Pipelines - Build end-to-end research automation
The economics prompt library is production-ready. Use it tomorrow.