Quick Start: Add Error Handling in 15 Minutes
Get immediate value by adding basic error handling to one script from T2.1
Goal: Transform a basic automation script into a reliable one in just 15 minutes. Add error handling that catches failures, logs them properly, and alerts when something goes wrong. Pick one script from T2.1 and make it production-ready.
Why This Matters
The difference between a script that works once and an automation that runs reliably for months comes down to error handling. Without it, failures happen silently, scripts break without warning, and debugging becomes guesswork. With proper error handling, failures are caught and logged, and you're notified immediately.
Choose Your Script to Upgrade
Select one script from T2.1 Command-Line AI Workflows to enhance. The best candidates are scripts that run frequently or handle important tasks like data collection, API calls, or file processing.
Good choices:
- Daily research paper download script
- Automated data processing pipeline
- API integration workflow
- File sync or backup automation
Quick check: Run the script once to confirm it works. Note what happens when it fails - does it tell you? Does it leave partial results? Does it crash silently?
Add Error Detection
Add basic error detection using exit codes. Every command has an exit code - 0 means success, anything else means failure.
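You can see an exit code directly by printing `$?` right after a command runs. In this minimal sketch, `false` stands in for a failing script:

```bash
#!/bin/sh
# `false` stands in for a failing script. Capture its exit code in a
# variable; 0 means success, non-zero means failure.
code=0
false || code=$?
echo "exit code: $code"   # prints "exit code: 1"
```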
```bash
# Check if command succeeded
if ! command_here; then
    echo "ERROR: Command failed at $(date)" >> error.log
    exit 1
fi
```

Pattern to memorize: The `!` inverts the exit code check. If the command fails (non-zero exit), the error block runs. Always exit with code 1 to signal failure to calling scripts or schedulers.
Apply this pattern to the most critical command in your script - usually the API call, file download, or data processing step that could fail.
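As a concrete sketch, here is the pattern wrapped around a file copy step. The file names are placeholders (not from your script), and the copy fails deliberately so the error branch runs:

```bash
#!/bin/sh
# Sketch of the exit-code pattern around one critical step. The file
# names are illustrative; input.csv does not exist in the temp dir,
# so cp fails and the error branch runs.
workdir=$(mktemp -d)
cd "$workdir" || exit 1

if ! cp input.csv backup.csv 2>/dev/null; then
    echo "ERROR: Copy of input.csv failed at $(date)" >> error.log
    # A real script would `exit 1` here; we continue so the log
    # can be shown below.
fi

cat error.log
```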
Add Basic Logging
Create a simple logging function that timestamps every important action and error.
```bash
log() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> script.log
}

log "INFO: Starting process"
log "ERROR: API request failed"
```

Where to add logs:
- Before and after critical operations (file downloads, API calls, data processing)
- When errors occur (include what failed and why)
- At script start and completion (helps track execution time)
Test it: Run your script and check `script.log` - you should see timestamped entries showing what happened.
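Putting the two pieces together, a sketch of `log()` wrapped around one critical operation might look like this; `process_data` is a hypothetical stand-in for your script's real command:

```bash
#!/bin/sh
# Sketch combining the log() helper with the exit-code check.
# process_data is a placeholder for the real API call or download.
workdir=$(mktemp -d)
cd "$workdir" || exit 1

log() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> script.log
}

process_data() {
    true  # placeholder: replace with the actual work
}

log "INFO: Starting process"
if ! process_data; then
    log "ERROR: process_data failed"
    # A real script would `exit 1` here.
fi
log "INFO: Process complete"
```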
Test Failure Scenarios
Deliberately break your script to verify error handling works.
Test 1: Network failure
```bash
# Disconnect WiFi or use invalid API endpoint
# Expected: Error logged, script exits with code 1
```

Test 2: Missing file

```bash
# Rename or move an input file the script needs
# Expected: Error logged with specific file name
```

Test 3: Invalid credentials

```bash
# Use wrong API key or expired token
# Expected: Auth error logged, script exits cleanly
```

Verification checklist:
- Errors appear in `script.log` with timestamps
- Error messages explain what failed
- Script exits with code 1 (check with `echo $?` after the run)
- No partial results or corrupted data left behind
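One simple way to satisfy the last checklist item is to write results to a temporary file and only move it into place when processing succeeds. This is a sketch with placeholder file names and data, not your script's actual logic:

```bash
#!/bin/sh
# Sketch: write to a temp file, then mv into place only on success.
# mv within the same directory is atomic, so nothing ever sees a
# half-written results.csv. The data here is placeholder content.
workdir=$(mktemp -d)
cd "$workdir" || exit 1

TMP="results.csv.tmp"
OUT="results.csv"

if echo "id,value" > "$TMP" && echo "1,42" >> "$TMP"; then
    mv "$TMP" "$OUT"
else
    rm -f "$TMP"   # discard the partial file on failure
    echo "ERROR: Processing failed at $(date)" >> error.log
    # A real script would `exit 1` here.
fi
```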
What You Just Accomplished
In 15 minutes, the script went from fragile to reliable. Failures are now caught instead of ignored. Every error is logged with context for debugging. Exit codes signal success or failure to LaunchAgents or cron jobs.
Common mistake: Logging too much or too little. Log critical operations and errors, skip verbose debugging output for now. A good rule is to log anything that would help you understand what happened if the script fails at 3am and you're reading logs the next morning.
Next Steps
This basic error handling makes scripts reliable for daily use. To make them production-ready for critical workflows, continue to Core Build where you'll add retry logic, structured logging, alert notifications, and health monitoring. Those additions transform good automation into bulletproof systems that run for months without intervention.
The patterns you just learned - exit code checking, timestamp logging, and failure testing - are the foundation. Everything else builds on these basics.