Python is easy to write, but sloppy error handling can turn a working application into one that silently fails in production. No stack trace, no alert — just a feature that stopped working three days ago and nobody noticed.
This article covers 7 error handling patterns you will actually use at work. Every snippet runs on vanilla Python 3.6+ with no dependencies.
One thing to keep in mind before we start: exception handling is not an afterthought you bolt on at the end. It is part of the design. The best code does not prevent errors from happening — it makes sure errors do not bring the system down.
1. Basic try / except — Start Here
The foundation of all error handling in Python. Catch specific exception types and respond to each one differently.
```python
def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        return "Cannot divide by zero"
    except TypeError:
        return "Please provide numbers"

print(divide(10, 2))    # 5.0
print(divide(10, 0))    # Cannot divide by zero
print(divide(10, "a"))  # Please provide numbers
```
The golden rule: never write a bare except:. Always name the exception type. A bare except catches everything — including KeyboardInterrupt and SystemExit — and makes debugging nearly impossible.
The classic beginner mistake:
```python
# DO NOT do this
except Exception:
    pass
```
This swallows every error silently. When something breaks three weeks later, you will have zero clues about what went wrong. At minimum, log the exception:
```python
except ZeroDivisionError as e:
    logging.error("Division failed: %s", e)
```
Takeaway: Catch exceptions by type. Treat except: pass as a ticking time bomb.
2. finally — Guaranteed Cleanup
When you open a file, a database connection, or a network socket, you need to close it — whether the operation succeeded or not. That is what finally does.
```python
f = None
try:
    f = open("data.txt", "w")
    f.write("hello")
except IOError as e:
    print("Write failed:", e)
finally:
    if f:
        f.close()
    print("Cleanup done")
```
The finally block runs no matter what — even if an exception is raised, even if you return from inside the try block.
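That guarantee is easy to demonstrate: even a `return` inside the `try` block cannot skip `finally`. A minimal sketch (the function and variable names are illustrative):

```python
def read_value(log):
    try:
        return "result"            # return from inside the try block
    finally:
        log.append("cleanup ran")  # still runs before the return completes

events = []
value = read_value(events)
print(value, events)  # result ['cleanup ran']
```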
A common mistake is writing f.close() in the finally block without checking if f was actually created. If the open() call itself fails, f is undefined and you get a NameError on top of your original error.
Real-world horror story: forgetting to close a file handle in a batch job. The file stays locked. Next morning, the scheduled job fails because it cannot write to the same file. Nobody notices until the client calls.
Takeaway: Use finally for cleanup. Always guard against the resource never being created.
3. The with Statement — How Professionals Do It
In practice, you will rarely write finally: f.close() by hand. The with statement handles it automatically — and more cleanly.
```python
try:
    with open("data.txt", "w") as f:
        f.write("hello")
except IOError as e:
    print("Write failed:", e)
```
The with statement guarantees that the file is closed when the block exits, even if an exception occurs. It works with any object that implements the context manager protocol.
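To see what that protocol looks like, here is a minimal, hypothetical context manager (the class name and the recorded events are illustrative): `__enter__` runs on entry, `__exit__` runs on exit no matter what.

```python
class ManagedResource:
    """Illustrative context manager: __enter__/__exit__ are the protocol hooks."""
    def __init__(self):
        self.events = []

    def __enter__(self):
        self.events.append("acquired")
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.events.append("released")  # runs even if the block raised
        return False                    # False = do not suppress exceptions

res = ManagedResource()
with res:
    res.events.append("working")
print(res.events)  # ['acquired', 'working', 'released']
```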
You can open multiple files in one statement:
```python
with open("input.txt") as src, open("output.txt", "w") as dst:
    dst.write(src.read())
```
In code reviews, using open() without with will almost certainly draw a comment. It is that standard.
Takeaway: File handling = with. No exceptions (pun intended).
4. Re-raising Exceptions with raise
Sometimes you need to log an error and let it propagate. A bare raise inside an except block re-raises the original exception with its full traceback intact.
```python
import logging

def process(data):
    try:
        return transform(data)
    except Exception as e:
        logging.error("Processing failed: %s", e)
        raise
```
Without the raise, the function returns None silently, and the caller assumes everything worked. This is one of the most common causes of “the batch job says it succeeded but the data is wrong” incidents.
If you want to wrap the original error with additional context, use exception chaining:
```python
class ProcessingError(Exception):
    pass

try:
    result = transform(data)
except ValueError as e:
    raise ProcessingError("Bad input data") from e
```
The from e preserves the original traceback, so you get the full chain when debugging.
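The chain is also inspectable at runtime via the exception's `__cause__` attribute. A small sketch (the `parse` helper is illustrative):

```python
class ProcessingError(Exception):
    pass

def parse(raw):
    try:
        return int(raw)  # raises ValueError on bad input
    except ValueError as e:
        raise ProcessingError("Bad input data") from e

try:
    parse("not a number")
except ProcessingError as err:
    cause = type(err.__cause__).__name__  # the original exception survives
    print(cause)  # ValueError
```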
Takeaway: Log-and-swallow is a bug factory. If you catch it, either handle it fully or re-raise it.
5. Custom Exceptions — Errors That Mean Something
Built-in exceptions are generic. In a real application, ValueError could mean a hundred different things. Custom exceptions make your code self-documenting.
```python
class ValidationError(Exception):
    """Raised when input data fails validation."""
    pass

class APIError(Exception):
    """Raised when an external API call fails."""
    def __init__(self, status_code, message):
        self.status_code = status_code
        super().__init__(f"{status_code}: {message}")

def validate_age(age):
    if age < 0:
        raise ValidationError("Age cannot be negative")
    return age

try:
    validate_age(-5)
except ValidationError as e:
    print(e)  # Age cannot be negative
```
A few rules of thumb:
• Always inherit from Exception, never from BaseException.
• Give exceptions descriptive names: PaymentDeclinedError beats Error1.
• Do not create dozens of exceptions for a small project — it adds noise without value.
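One pattern worth knowing: give related exceptions a shared base class, so callers can catch the whole family with one handler. A sketch under assumed names (`AppError` and the subclasses are hypothetical, not from the example above):

```python
class AppError(Exception):
    """Hypothetical base class for one application's errors."""

class ValidationError(AppError):
    pass

class APIError(AppError):
    pass

def handle(exc):
    try:
        raise exc
    except AppError as e:  # one handler catches the whole family
        return f"handled: {type(e).__name__}"

print(handle(ValidationError("bad field")))  # handled: ValidationError
print(handle(APIError("timeout")))           # handled: APIError
```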
Takeaway: Custom exceptions are a design tool. Use them to make error handling read like documentation.
6. assert — Development-Only Sanity Checks
assert is a quick way to verify assumptions during development. If the condition is false, it raises AssertionError.
```python
def withdraw(balance, amount):
    assert amount > 0, "Amount must be positive"
    assert balance >= amount, "Insufficient funds"
    return balance - amount

print(withdraw(100, 50))   # 50
print(withdraw(100, 200))  # AssertionError
```
The critical thing about assert: it can be disabled. Running python -O (optimize mode) strips all assert statements. This means you must never use assert for business logic or input validation.
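Conceptually, each assert is a check guarded by the built-in `__debug__` flag, which `python -O` sets to False. A rough sketch of that expansion:

```python
def withdraw(balance, amount):
    # Roughly what `assert amount > 0, "..."` expands to.
    # Under `python -O`, __debug__ is False and the whole check disappears.
    if __debug__:
        if not amount > 0:
            raise AssertionError("Amount must be positive")
    return balance - amount

print(withdraw(100, 30))  # 70
```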
For production validation, use explicit checks:
```python
def withdraw(balance, amount):
    if amount <= 0:
        raise ValueError("Amount must be positive")
    if balance < amount:
        raise ValueError("Insufficient funds")
    return balance - amount
```
Takeaway: Assert is a development tool, not a production guard.
7. Retry Logic — Because Networks Fail
API calls time out. Database connections drop. DNS lookups fail. In any system that talks to external services, retry is not optional.
```python
import time
import random

def call_api():
    if random.random() < 0.7:
        raise ConnectionError("Server unavailable")
    return {"status": "ok"}

max_retries = 5
for attempt in range(max_retries):
    try:
        result = call_api()
        print("Success:", result)
        break
    except ConnectionError as e:
        wait = 2 ** attempt  # exponential backoff
        print(f"Attempt {attempt + 1} failed, retrying in {wait}s...")
        time.sleep(wait)
else:
    print("All retries exhausted")
```
Key points for production retry logic:
• Always set a maximum number of retries. Infinite retry = infinite loop.
• Use exponential backoff (1s, 2s, 4s, 8s…) to avoid hammering the server.
• Only retry on transient errors (timeouts, 503s). Retrying a 400 Bad Request is pointless.
• The for/else construct is perfect here: the else block runs only if we never hit break.
A forgotten break is the #1 cause of accidental infinite loops in retry code. Always double-check.
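The bullet points above can be folded into a small helper that retries only a whitelisted set of transient errors and lets everything else propagate immediately. A sketch (`TRANSIENT` and `with_retry` are illustrative names, not a standard API):

```python
import time

TRANSIENT = (ConnectionError, TimeoutError)  # illustrative "safe to retry" set

def with_retry(fn, max_retries=3, base_delay=1.0):
    """Call fn, retrying only transient errors with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return fn()                      # returning avoids the forgotten-break bug
        except TRANSIENT:
            if attempt == max_retries - 1:
                raise                        # retries exhausted: let it propagate
            time.sleep(base_delay * 2 ** attempt)
        # Anything else (e.g. a ValueError for a 400) propagates immediately.

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = with_retry(flaky, base_delay=0)
print(result)  # ok (after two retried failures)
```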
Takeaway: Any code that hits a network needs retry. Period.
Wrapping Up: Error Handling Is Design
Here is the complete toolkit:
1. try/except — Catch specific exceptions, never bare except.
2. finally — Guaranteed cleanup for resources.
3. with — The Pythonic way to manage resources.
4. raise — Log and re-raise, do not swallow errors.
5. Custom exceptions — Self-documenting error types.
6. assert — Development sanity checks only.
7. Retry — Mandatory for anything involving a network.
The developers who get paged at 3 AM are not the ones who write clever code. They are the ones who forgot to handle the error case. Exception handling is not about preventing errors — it is about making sure errors do not bring the system down.
In every codebase, the code that handles failure is at least as important as the code that handles success. Write accordingly.
