Logging is a crucial aspect of software development. It helps developers trace the execution flow, debug issues, monitor application behavior, and gain insight into performance. Python provides a built-in logging module that offers a flexible framework for logging messages from your program.
To make the most of logging, it is essential to follow best practices. This guide will cover the best practices for using logging in Python.
1. Use the Built-in logging Module
Python’s built-in logging module is highly configurable, flexible, and widely adopted. It is preferred over using print() statements for logging because it provides various features, such as log levels, handlers, formatters, and more.
Example: Basic Logging Setup
import logging
# Set up basic logging configuration
logging.basicConfig(level=logging.INFO)
# Log some messages
logging.debug("This is a debug message")
logging.info("This is an info message")
logging.warning("This is a warning message")
logging.error("This is an error message")
logging.critical("This is a critical message")
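Beyond basicConfig, the module's handlers and formatters can be combined on a named logger. The sketch below is a minimal illustration of that idea; the logger name and format string are placeholders, not part of the original example.
import logging
# Create a named logger instead of using the root logger
logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)
# Attach a console handler with a simple formatter
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s"))
logger.addHandler(handler)
logger.info("Named logger with its own handler and formatter")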
2. Define the Right Log Levels
The logging module provides several log levels, each indicating the severity of the log message. It’s important to define appropriate log levels for different types of messages to ensure clarity and proper filtering.
Log Levels in Python:
DEBUG: Detailed information, typically useful only for diagnosing problems.
INFO: General information about program execution (e.g., startup, processing, completion).
WARNING: Indication that something unexpected occurred, but the program can still run.
ERROR: A more serious problem that prevents part of the program from working correctly.
CRITICAL: A very serious error that may cause the program to stop running.
Best Practice: Use the appropriate log level to match the severity of the message.
- For debugging, use DEBUG.
- For normal operations and progress updates, use INFO.
- For potential issues, use WARNING.
- For critical issues, use ERROR or CRITICAL.
Example:
logging.debug("Detailed debug message")
logging.info("Operation completed successfully")
logging.warning("Unexpected behavior detected, but continuing")
logging.error("Failed to perform action")
logging.critical("System failure, application stopping")
3. Use Configurable Logging
Rather than calling basicConfig separately in every module, it is good practice to configure logging in one central place. This can be done at the start of the application, and it can be configured to log to various destinations (console, files, external services) with different levels and formats.
Best Practice: Configure logging once in your application, and reuse it across your modules.
Example: Logging Configuration File
You can use the logging.config module with a configuration file to set up logging. This provides better control, especially when you want to configure logging for larger applications.
import logging
import logging.config
# Load logging configuration from an INI file
logging.config.fileConfig('logging_config.ini')
logger = logging.getLogger('mylogger')
logger.debug("This is a debug message")
Example of logging_config.ini File:
[loggers]
keys=root,mylogger
[handlers]
keys=consoleHandler,fileHandler
[formatters]
keys=simpleFormatter
[logger_root]
level=INFO
handlers=consoleHandler
[logger_mylogger]
level=DEBUG
handlers=consoleHandler,fileHandler
qualname=mylogger
propagate=0
[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)
[handler_fileHandler]
class=FileHandler
level=DEBUG
formatter=simpleFormatter
args=('app.log', 'a')
[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
datefmt=%Y-%m-%d %H:%M:%S
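If you prefer to keep the configuration in code (or load it from JSON/YAML), logging.config.dictConfig offers an equivalent setup. The sketch below is one way to mirror the INI file above, using the same assumed logger and handler roles:
import logging
import logging.config
LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "simple": {
            "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
            "datefmt": "%Y-%m-%d %H:%M:%S",
        }
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "DEBUG",
            "formatter": "simple",
            "stream": "ext://sys.stdout",
        },
        "file": {
            "class": "logging.FileHandler",
            "level": "DEBUG",
            "formatter": "simple",
            "filename": "app.log",
        },
    },
    "loggers": {
        "mylogger": {
            "level": "DEBUG",
            "handlers": ["console", "file"],
            "propagate": False,
        }
    },
    "root": {"level": "INFO", "handlers": ["console"]},
}
logging.config.dictConfig(LOGGING_CONFIG)
logger = logging.getLogger("mylogger")
logger.debug("This is a debug message")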
4. Log to Files for Long-Term Storage
For production environments, it’s recommended to log messages to files, especially for error handling, auditing, and diagnostics. By logging to files, you can retain logs for future analysis.
Best Practice: Log to files and rotate log files periodically to prevent large log files.
Example: Logging to Files with Rotation
import logging
from logging.handlers import RotatingFileHandler
# Create a rotating file handler
handler = RotatingFileHandler('app.log', maxBytes=2000, backupCount=3)
handler.setLevel(logging.DEBUG)
# Set up formatter
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
# Add handler to the root logger and allow DEBUG messages through
root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
root_logger.addHandler(handler)
# Log messages
logging.debug("Debug message")
logging.info("Info message")
In this example:
maxBytes=2000: Defines the maximum size of the log file (in bytes).
backupCount=3: Keeps 3 backup log files when the log file exceeds the specified size.
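If you would rather rotate by time than by size, the standard library also provides TimedRotatingFileHandler. Here is a minimal sketch; the midnight rotation and 7-day retention are arbitrary choices for illustration:
import logging
from logging.handlers import TimedRotatingFileHandler
# Rotate the log file every night at midnight and keep 7 old files
timed_handler = TimedRotatingFileHandler('app.log', when='midnight', backupCount=7)
timed_handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
logger = logging.getLogger('timed_example')
logger.setLevel(logging.INFO)
logger.addHandler(timed_handler)
logger.info("This message goes to a time-rotated log file")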
5. Avoid Logging Sensitive Information
It’s essential to avoid logging sensitive information, such as passwords, API keys, and personal user data. Exposing such information in logs can lead to security risks.
Best Practice: Mask or avoid logging sensitive data.
Example:
def login(username, password):
    if username == "admin" and password == "secret":
        logging.info("User logged in successfully")
    else:
        logging.warning("Failed login attempt: username %s", username)  # Avoid logging passwords
Here, we log the username but avoid logging the password.
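To actively mask sensitive values rather than just omitting them, one option is to attach a logging.Filter that redacts known patterns before the record is emitted. This is a minimal sketch; the field name and redaction pattern are assumptions for illustration:
import logging
import re
class RedactPasswordsFilter(logging.Filter):
    """Replace anything that looks like 'password=<value>' in the message."""
    def filter(self, record):
        record.msg = re.sub(r"password=\S+", "password=***", str(record.msg))
        return True  # keep the record, just with the sensitive part masked
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("auth")
logger.addFilter(RedactPasswordsFilter())
logger.info("Login attempt: username=admin password=secret")
# Logged as: Login attempt: username=admin password=***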
6. Structured Logging for Machine Readability
Structured logging helps machines and automated systems parse logs easily. This can be achieved by using a consistent format (such as JSON) for log messages.
Best Practice: Use structured logging for easier log parsing and analysis.
Example: JSON-based Structured Logging
import logging
import json
class JSONFormatter(logging.Formatter):
    def format(self, record):
        log_obj = {
            "timestamp": self.formatTime(record, self.datefmt),
            "level": record.levelname,
            "message": record.getMessage(),
            "logger": record.name
        }
        return json.dumps(log_obj)
logger = logging.getLogger('json_logger')
handler = logging.StreamHandler()
# Set custom JSON formatter
handler.setFormatter(JSONFormatter())
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
# Log a message
logger.info("This is a structured log message")
Output:
{"timestamp": "2025-03-10 15:00:00", "level": "INFO", "message": "This is a structured log message", "logger": "json_logger"}
This structured format is machine-readable and can be easily parsed and analyzed by log aggregation systems or external tools.
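A common extension (an assumption here, not something the formatter above already does) is to pass additional context through the extra parameter and include it in the JSON output. This sketch builds on the JSONFormatter, handler, and logger defined above, and the context keys are hypothetical:
# Hypothetical extension of the JSONFormatter above: copy selected
# extra attributes from the record into the JSON object if present.
class ContextJSONFormatter(JSONFormatter):
    def format(self, record):
        log_obj = json.loads(super().format(record))
        for key in ("user_id", "request_id"):  # assumed context keys
            if hasattr(record, key):
                log_obj[key] = getattr(record, key)
        return json.dumps(log_obj)
handler.setFormatter(ContextJSONFormatter())
logger.info("User logged in", extra={"user_id": 42})
# {"timestamp": "...", "level": "INFO", "message": "User logged in", "logger": "json_logger", "user_id": 42}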
7. Use Log Context for Better Traceability
Add context to logs to make them more informative. This could include details such as user IDs, request IDs, or any other relevant information.
Best Practice: Include contextual information in logs to make them more useful for debugging and analysis.
Example:
import logging
logging.basicConfig(level=logging.DEBUG)
# Set up logger
logger = logging.getLogger('myapp')
def process_request(request_id, user_id):
    logger.info(f"Processing request {request_id} for user {user_id}")
    # Your processing logic here
    some_data = {"status": "ok"}  # placeholder for the real request data
    logger.debug(f"Request {request_id} data: {some_data}")
process_request('123', 'user_001')
By logging the request_id and user_id, we can track specific requests and user interactions, which is crucial for troubleshooting and monitoring.
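If you want that context attached automatically instead of repeating it in every message, logging.LoggerAdapter is one standard-library option. The sketch below is a minimal illustration; the request_id key and format string are assumptions:
import logging
logging.basicConfig(format="%(levelname)s [%(request_id)s] %(message)s", level=logging.INFO)
base_logger = logging.getLogger("myapp")
def handle_request(request_id, user_id):
    # Bind the request id once; every log call through the adapter carries it
    log = logging.LoggerAdapter(base_logger, {"request_id": request_id})
    log.info("Processing request for user %s", user_id)
handle_request("123", "user_001")
# INFO [123] Processing request for user user_001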
8. Testing and Debugging with Logs
While debugging, it is easy to add logs to trace the program's flow. For the final version of the application, however, adjust the log level based on the environment (e.g., DEBUG for development and INFO or ERROR for production).
Best Practice: Adjust the logging level based on the environment.
Example: Configuring Logging for Different Environments
import logging
import os
# Get environment (development or production)
env = os.getenv('ENV', 'production')
if env == 'development':
    logging.basicConfig(level=logging.DEBUG)
else:
    logging.basicConfig(level=logging.INFO)
logging.debug("This is a debug message")
logging.info("This is an info message")
In this case, when the environment is set to development, logs will include DEBUG messages, but in production, only INFO and more critical messages will be logged.
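A related pattern (an assumption on my part, not part of the example above) is to read the level name itself from an environment variable such as LOG_LEVEL, so operators can change verbosity without touching code:
import logging
import os
# Map a LOG_LEVEL environment variable (e.g. "DEBUG", "WARNING") to a logging level,
# falling back to INFO if it is unset or unrecognised.
level_name = os.getenv("LOG_LEVEL", "INFO").upper()
level = getattr(logging, level_name, logging.INFO)
logging.basicConfig(level=level)
logging.info("Logging configured at %s level", logging.getLevelName(logging.getLogger().getEffectiveLevel()))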
