Chapter 5: Advanced Techniques
Explore multi-task automation, error handling, and Claude Code integration
Now that you have the foundation, let's explore advanced patterns.
Pattern 1: Multi-Task Automation
Add multiple tasks to your automation handler:
# In automation-handler.sh, under the "6pm wake" section:
# Task 1: Backup critical files
log_message "[6pm] Running backup..."
rsync -av ~/Documents/ ~/Backups/Documents/ >> "$LOG_FILE" 2>&1
# Task 2: Process research data
log_message "[6pm] Processing research data..."
python3 ~/scripts/process-research.py >> "$LOG_FILE" 2>&1
# Task 3: Generate reports
log_message "[6pm] Generating reports..."
Rscript ~/scripts/generate-reports.R >> "$LOG_FILE" 2>&1
log_message "[6pm] All tasks complete"Pattern 2: Error Handling with Notifications
Add macOS notifications when tasks fail:
# Add to wake-utils.sh:
send_notification() {
local title="$1"
local message="$2"
osascript -e "display notification \"$message\" with title \"$title\""
}
run_task_with_error_handling() {
local task_name="$1"
local task_command="$2"
log_message "[$task_name] Starting..."
if eval "$task_command"; then
log_message "[$task_name] Success"
return 0
else
log_message "[$task_name] FAILED"
send_notification "Automation Error" "$task_name failed"
return 1
fi
}
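# Quick manual test for the notification helper (run from Terminal; the
# source path below is an assumption -- adjust it to wherever wake-utils.sh lives):
#   source ~/automation-projects/claude-pipeline/scripts/wake-utils.sh
#   send_notification "Automation Test" "Notifications are working"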
# Usage in automation-handler.sh:
run_task_with_error_handling "Backup" "rsync -av ~/Documents/ ~/Backups/Documents/"Pattern 3: Conditional Execution
Run tasks only under certain conditions:
# Add to automation-handler.sh:
should_run_backup() {
# Only backup on weekdays
local day=$(date +%u) # 1-7, Monday-Sunday
[ "$day" -le 5 ] # Returns true if weekday
}
should_run_heavy_processing() {
# Only run heavy tasks if plugged in
pmset -g batt | grep -q "AC Power"
}
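# Another condition you might add (a sketch; the df parsing and the 20 GB
# threshold are assumptions -- tune them for your machine):
should_run_large_jobs() {
# Only run if the boot volume has more than 20 GB free (df -g uses 1 GB blocks)
local free_gb=$(df -g / | awk 'NR==2 {print $4}')
[ "$free_gb" -gt 20 ]
}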
# In main execution:
if should_run_backup; then
run_task_with_error_handling "Backup" "rsync -av ~/Documents/ ~/Backups/Documents/"
fi
if should_run_heavy_processing; then
run_task_with_error_handling "ML Training" "python3 ~/ml/train-model.py"
fi
Pattern 4: Dynamic Scheduling
Adjust wake times based on workload:
# Add to automation-handler.sh:
calculate_next_wake_time() {
local pending_tasks=$(count_pending_tasks)
if [ "$pending_tasks" -gt 100 ]; then
# Heavy workload: wake every 2 hours
echo "$(date -v+2H +%H)"
elif [ "$pending_tasks" -gt 10 ]; then
# Medium workload: wake every 4 hours
echo "$(date -v+4H +%H)"
else
# Light workload: wake once daily
echo "18" # 6pm tomorrow
fi
}
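# calculate_next_wake_time assumes a count_pending_tasks helper. A minimal
# sketch, counting files waiting in a queue directory (the queue path is an
# assumption -- point it at wherever your pipeline drops pending work):
count_pending_tasks() {
find "$HOME/automation-projects/claude-pipeline/queue" -type f 2>/dev/null | wc -l | tr -d ' '
}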
# In 6pm wake section:
next_hour=$(calculate_next_wake_time)
schedule_next_wake "$next_hour"
Pattern 5: Integration with Claude Code
Run Claude as part of your automation; this example calls the Anthropic API directly with curl:
# Create claude-automation-task.sh:
#!/bin/bash
PROMPT="Analyze today's research papers and create a summary"
OUTPUT_FILE="$HOME/automation-projects/claude-pipeline/outputs/summary-$(date +%Y%m%d).md"
# Call the Claude Messages API (requires ANTHROPIC_API_KEY in the environment).
# Note: the prompt is embedded directly in the JSON below, so keep it free of
# double quotes and newlines, or build the payload with a JSON-aware tool.
curl https://api.anthropic.com/v1/messages \
-H "Content-Type: application/json" \
-H "x-api-key: $ANTHROPIC_API_KEY" \
-H "anthropic-version: 2023-06-01" \
-d "{
\"model\": \"claude-sonnet-4.5-20250929\",
\"max_tokens\": 4096,
\"messages\": [{
\"role\": \"user\",
\"content\": \"$PROMPT\"
}]
}" > "$OUTPUT_FILE"
echo "Claude automation complete: $OUTPUT_FILE"Add to your automation handler:
# In automation-handler.sh, 6pm section:
log_message "[6pm] Running Claude research automation..."
bash ~/automation-projects/claude-pipeline/scripts/claude-automation-task.sh >> "$LOG_FILE" 2>&1
Pattern 6: Metrics and Optimization
Track automation performance:
# Create metrics-tracker.sh:
#!/bin/bash
METRICS_FILE="$HOME/automation-projects/claude-pipeline/logs/metrics.csv"
# Initialize the CSV header if the file doesn't exist
if [ ! -f "$METRICS_FILE" ]; then
echo "timestamp,task,duration_seconds,status,cpu_avg_pct,mem_avg_pct" > "$METRICS_FILE"
fi
track_task() {
local task_name="$1"
local start_time=$(date +%s)
# System-wide snapshots: total %CPU and %MEM summed across all processes
local start_cpu=$(ps aux | awk '{sum+=$3} END {print sum}')
local start_mem=$(ps aux | awk '{sum+=$4} END {print sum}')
# Run the task
eval "$2"
local status=$?
local end_time=$(date +%s)
local duration=$((end_time - start_time))
local end_cpu=$(ps aux | awk '{sum+=$3} END {print sum}')
local end_mem=$(ps aux | awk '{sum+=$4} END {print sum}')
local avg_cpu=$(awk "BEGIN {print ($start_cpu + $end_cpu) / 2}")
local avg_mem=$(awk "BEGIN {print ($start_mem + $end_mem) / 2}")
# Log to CSV
echo "$(date -Iseconds),$task_name,$duration,$status,$avg_cpu,$avg_mem" >> "$METRICS_FILE"
return $status
}
# Usage:
track_task "Backup" "rsync -av ~/Documents/ ~/Backups/Documents/"
track_task "DataProcessing" "python3 ~/scripts/process.py"Analyze metrics:
# View average task durations
awk -F, 'NR>1 {sum[$2]+=$3; count[$2]++} END {for (task in sum) print task, sum[task]/count[task]}' metrics.csv
# View tasks by failure rate
awk -F, 'NR>1 {total[$2]++; if($4!=0) fail[$2]++} END {for (task in total) print task, (fail[task]+0)/total[task]*100"%"}' metrics.csv