Theoretical Framework: Understanding AI Exposure
Task-based vs occupation-based analysis, the three-dimension assessment framework, and task categorization
Theory: Understanding Task-Based AI Exposure
Before diving into implementation, it's worth establishing the theoretical foundation. This isn't academic busywork: understanding the framework will help you make better assessments.
Task-Based vs. Occupation-Based Analysis
Critical Distinction: Tasks vs. Occupations
Occupation-based analysis asks: "Will AI replace accountants?"
Task-based analysis asks: "Which specific tasks that accountants perform can AI do, and which are AI-resistant?"
The difference is profound. An accountant might spend their time on:
- Data entry and reconciliation (90% automatable)
- Tax code interpretation (70% automatable)
- Client consultation on financial planning (30% automatable)
- Audit preparation and compliance (80% automatable)
- Strategic tax optimization requiring judgment (20% automatable)
- Building trust with long-term clients (5% automatable)
An occupation-level analysis would miss that different accountants have different exposure depending on their task mix. A bookkeeper focused on data entry has 85%+ exposure. A CFO advisor focused on strategic consultation has 25% exposure.
Same occupation. Wildly different risk profiles.
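The task-mix idea can be sketched numerically. A person's overall exposure is just their task-level exposures weighted by how they actually spend their time. This is a minimal illustration; the time shares and automatability figures below are assumptions chosen to echo the accountant example, not measured data.

```python
def overall_exposure(task_mix):
    """Time-weighted exposure for one person.

    task_mix: list of (hours_share, automatability) pairs, both in [0, 1].
    hours_share values should sum to 1 across the person's tasks.
    """
    return sum(share * automatable for share, automatable in task_mix)

# Hypothetical task mixes: a bookkeeper's week is dominated by routine,
# highly automatable work; a CFO advisor's by judgment and relationships.
bookkeeper = [(0.75, 0.90), (0.20, 0.85), (0.05, 0.30)]
cfo_advisor = [(0.10, 0.90), (0.40, 0.25), (0.50, 0.12)]

print(f"Bookkeeper exposure:  {overall_exposure(bookkeeper):.0%}")   # 86%
print(f"CFO advisor exposure: {overall_exposure(cfo_advisor):.0%}")  # 25%
```

Same arithmetic, same occupation label, very different results: the weighting by time spent is what makes the analysis task-based rather than occupation-based.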
The Felten-Raj-Seamans Methodology
In their influential research on occupational AI exposure, Edward Felten, Manav Raj, and Robert Seamans developed a methodology for measuring AI exposure that goes beyond simple "can a robot do this?" analysis.
Their framework assesses:
- Technical Feasibility: Can current AI technology perform this task at human-level quality?
- Economic Viability: Is it cost-effective to automate this task with AI?
- Regulatory Constraints: Are there legal or ethical barriers to AI performing this task?
A task must score high on ALL THREE dimensions to be truly at risk.
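One simple way to encode "must score high on ALL THREE" is to let the weakest dimension cap the overall score. This is an illustrative sketch, not the published formula: scoring each dimension in [0, 1] and taking the minimum makes any single low dimension a bottleneck, which matches the examples below.

```python
def exposure_score(technical, economic, regulatory_openness):
    """Overall exposure capped by the weakest of three dimensions.

    All inputs are in [0, 1]. regulatory_openness is 1.0 when there are
    no legal barriers and near 0.0 when AI is effectively prohibited.
    """
    return min(technical, economic, regulatory_openness)

# Radiological screening: technically and economically strong, but
# moderate regulatory constraints set the ceiling.
print(exposure_score(0.9, 0.9, 0.6))   # 0.6

# Courtroom advocacy: low technical feasibility dominates, regardless
# of the economics.
print(exposure_score(0.1, 0.8, 0.05))  # 0.05
```

The min() is the design choice worth noticing: averaging the three dimensions would let two strong scores mask one prohibitive barrier, which is exactly the mistake the framework warns against.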
Example: Radiological Image Analysis
Technical Feasibility: HIGH
AI matches or exceeds human accuracy in identifying patterns, anomalies, and diagnostic markers in medical imaging.
Economic Viability: HIGH
AI systems are significantly cheaper than employing radiologists for initial screening and routine analysis.
Regulatory Constraints: MODERATE
FDA approval required for clinical use, but regulatory pathways exist and approvals are happening.
Overall Exposure: HIGH (75-85%)
Radiological screening tasks face substantial AI automation pressure, though final diagnostic interpretation often requires human oversight.
Example: Legal Advocacy in Court
Technical Feasibility: LOW
AI cannot physically appear in court, lacks real-time human interaction capabilities, and cannot adapt dynamically to judge reactions and jury dynamics.
Economic Viability: N/A
Irrelevant when technical feasibility is low. Even if economically attractive, physical and interactive constraints prevent implementation.
Regulatory Constraints: VERY HIGH
Unauthorized practice of law is prohibited. Bar admission requires human practitioners with ethical accountability.
Overall Exposure: LOW (5-10%)
While AI can assist with legal research and document preparation, actual courtroom advocacy remains firmly in the human domain.
Example: Customer Service Chat Support
Technical Feasibility: VERY HIGH
Modern chatbots and conversational AI handle 80%+ of routine customer queries with human-level or better response quality.
Economic Viability: VERY HIGH
AI customer service costs a fraction of human agents, with 24/7 availability and instant scaling.
Regulatory Constraints: LOW
Few legal barriers to AI handling customer support, beyond bot-disclosure rules in some jurisdictions. Industry widely accepts automated support.
Overall Exposure: VERY HIGH (90-95%)
First-tier customer support faces near-complete automation potential, with human agents reserved for complex escalations.
The Three Dimensions Explained
1. Technical Feasibility
Ask: Can Claude, GPT-4, or a similar AI tool do this task today?
Consider:
- Routine cognitive tasks (data entry, classification, summarization): HIGH feasibility
- Non-routine cognitive tasks (creative problem-solving, strategic judgment): MODERATE feasibility
- Physical tasks requiring manipulation: LOW feasibility (AI doesn't have hands)
- Tasks requiring real-time human presence: LOW feasibility
- Tasks requiring embodied experience: LOW feasibility
Key insight: AI is currently better at cognitive tasks than physical tasks, but it's especially good at routine cognitive tasks that can be described as rules or patterns.
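The feasibility bullets above reduce to a rough decision rule. A hypothetical sketch: "cognitive", "routine", and "requires presence" are judgment calls you make per task, not measurable properties, and the three-level output mirrors the list above.

```python
def technical_feasibility(cognitive, routine, requires_presence=False):
    """Classify a task's AI feasibility from three yes/no judgments."""
    if not cognitive or requires_presence:
        # Physical, embodied, or real-time in-person work: low feasibility.
        return "LOW"
    # Routine cognitive work (rules and patterns) is where AI excels today.
    return "HIGH" if routine else "MODERATE"

print(technical_feasibility(cognitive=True, routine=True))    # data entry: HIGH
print(technical_feasibility(cognitive=True, routine=False))   # strategy: MODERATE
print(technical_feasibility(cognitive=False, routine=True))   # assembly: LOW
```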
2. Economic Viability
Ask: Is it cheaper to use AI than to pay a human?
This is where many people get the analysis wrong. Just because AI can do something doesn't mean it's economically viable to replace humans.
Consider:
- Cost of AI: API fees, infrastructure, maintenance
- Cost of human: Salary, benefits, overhead
- Quality difference: If AI is 80% as good, is that acceptable?
- Integration costs: How much does it cost to actually implement AI for this task?
- Switching costs: Training, change management, disruption
Example: AI can write basic marketing copy. But if you're a senior copywriter, you spend only 20% of your time on the initial draft (easily automated) and 80% on strategic positioning, brand voice refinement, and cross-functional collaboration (hard to automate). The economic case for replacing you is weak because AI only addresses a small fraction of your value.
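The copywriter example can be turned into a back-of-the-envelope viability check. All figures here are made up for illustration: automation only saves money on the fraction of the role it actually addresses, net of running and one-off integration costs.

```python
def annual_savings(human_cost, automatable_share, ai_run_cost,
                   one_off_cost, years=3):
    """Average yearly saving from automating part of a role.

    human_cost: fully loaded annual cost of the human.
    automatable_share: fraction of the role's value AI can replace.
    ai_run_cost: annual API/infrastructure/maintenance cost.
    one_off_cost: integration and switching cost, amortized over `years`.
    """
    saved = human_cost * automatable_share - ai_run_cost
    return saved - one_off_cost / years

# Hypothetical senior copywriter: $120k fully loaded, but only 20% of the
# role (initial drafting) is automatable; $3k/yr to run the AI tooling,
# $30k to integrate it, amortized over 3 years.
print(round(annual_savings(120_000, 0.20, 3_000, 30_000)))  # 11000
```

An $11k/year saving against a $120k role is a weak business case, which is the point of the example: low automatable share makes the economics marginal even when the AI itself is cheap.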
3. Regulatory Constraints
Ask: Are there legal, ethical, or professional barriers to AI doing this?
Consider:
- Licensed professions: Medicine, law, accounting (AI can assist but not replace)
- Fiduciary duties: Tasks requiring human accountability
- Safety-critical systems: Aviation, healthcare, infrastructure
- Privacy regulations: HIPAA, GDPR may restrict AI use
- Professional standards: Bar associations, medical boards
Key insight: Regulatory constraints change over time. What's forbidden today might be allowed in 5 years. But in the short-to-medium term (3-5 years), regulation creates genuine protection for certain roles.
AI Exposure vs. AI Impact
Critical Understanding: High Exposure ≠ Job Loss
High AI exposure doesn't necessarily mean job loss. High exposure means:
- The task can be automated
- You will likely work with AI rather than doing it manually
- Your role will transform
- You need to develop adjacent skills
In many cases, high-exposure tasks become augmented rather than eliminated. Paralegals using AI research tools become 10x more productive. Analysts using AI data processing focus on interpretation rather than data wrangling.
The real risk isn't that AI can do your tasks. The real risk is that you refuse to adapt.
Task Categories: The Four Quadrants
Following the routine/non-routine task framework from labor economics (Autor, Levy, and Murnane), tasks are categorized into four types:
| Category | Definition | AI Exposure | Examples |
|---|---|---|---|
| Routine Cognitive | Follows established rules, patterns, procedures | VERY HIGH (80-95%) | Data entry, scheduling, basic research, document classification |
| Non-Routine Cognitive | Requires creativity, judgment, problem-solving | MODERATE (30-60%) | Strategic planning, creative writing, complex problem-solving |
| Routine Manual | Repetitive physical tasks | LOW (10-30%)* | Assembly line work, packaging, basic cleaning |
| Non-Routine Manual | Physical tasks requiring adaptation | VERY LOW (5-15%)* | Plumbing, electrical work, home healthcare, fine dining service |
*Note: Manual tasks show low AI exposure because current AI is primarily software-based. Robotics will change this over 10-20 years, but that's outside our 3-5 year planning horizon.
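The table reduces to two judgment calls per task: is it cognitive, and is it routine? A minimal mapping of those two answers onto the four quadrants and the exposure ranges above:

```python
# The four-quadrant table as a lookup keyed on (cognitive, routine).
QUADRANTS = {
    (True,  True):  ("Routine Cognitive",     "VERY HIGH (80-95%)"),
    (True,  False): ("Non-Routine Cognitive", "MODERATE (30-60%)"),
    (False, True):  ("Routine Manual",        "LOW (10-30%)"),
    (False, False): ("Non-Routine Manual",    "VERY LOW (5-15%)"),
}

def categorize(cognitive, routine):
    """Return (quadrant name, AI exposure range) for a task."""
    return QUADRANTS[(cognitive, routine)]

print(categorize(cognitive=True, routine=True))    # data entry
print(categorize(cognitive=False, routine=False))  # plumbing
```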
Your Career Moat
If you're a knowledge worker, your routine cognitive tasks are most exposed. Your non-routine cognitive tasks are your moat.
Focus development efforts on tasks requiring creativity, judgment, strategic thinking, and complex problem-solving—the areas where AI augments rather than replaces human capability.