    The Complete Guide to Logging for Python Developers


    Table of Contents

    • # Introduction
    • # Setting Up Your First Logger
    • # Understanding Log Levels and When to Use Each
    • # Logging Exceptions Properly
    • # Creating a Reusable Logger Configuration
    • # Structuring Logs with Context
    • # Rotating Log Files to Prevent Disk Space Issues
    • # Logging in Different Environments
    • # Wrapping Up

    # Introduction

     
    Most Python developers treat logging as an afterthought. They scatter print() statements during development, maybe switch to basic logging later, and assume that is enough. Then issues arise in production, and they discover they lack the context needed to diagnose problems efficiently.

    Proper logging techniques give you visibility into application behavior, performance patterns, and error conditions. With the right approach, you can trace user actions, identify bottlenecks, and debug issues without reproducing them locally. Good logging turns debugging from guesswork into systematic problem-solving.

    This article covers essential logging patterns for Python developers. You will learn how to structure log messages for searchability, handle exceptions without losing context, and configure logging for different environments. We will start with the basics and work up to more advanced strategies you can use in projects right away, using only the standard library's logging module.

    You can find the code on GitHub.

     

    # Setting Up Your First Logger

     
    Instead of jumping straight to complex configurations, let us understand what a logger actually does. We will create a basic logger that writes to both the console and a file.
     

    import logging
    
    logger = logging.getLogger('my_app')
    logger.setLevel(logging.DEBUG)
    
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.INFO)
    
    file_handler = logging.FileHandler('app.log')
    file_handler.setLevel(logging.DEBUG)
    
    formatter = logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    )
    console_handler.setFormatter(formatter)
    file_handler.setFormatter(formatter)
    
    logger.addHandler(console_handler)
    logger.addHandler(file_handler)
    
    logger.debug('This is a debug message')
    logger.info('Application started')
    logger.warning('Disk space running low')
    logger.error('Failed to connect to database')
    logger.critical('System shutting down')

     

    Here is what each piece of the code does.

    The getLogger() function creates a named logger instance. Think of it as creating a channel for your logs. The name ‘my_app’ helps you identify where logs come from in larger applications.

    We set the logger level to DEBUG, which means it will process all messages. Then we create two handlers: one for console output and one for file output. Handlers control where logs go.

    The console handler only shows INFO level and above, while the file handler captures everything, including DEBUG messages. This is useful because you want detailed logs in files but cleaner output on screen.

    The formatter determines how your log messages look. The format string uses placeholders like %(asctime)s for the timestamp and %(levelname)s for severity.
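
    The format string supports many more placeholders than the two shown above. As a quick illustration, here is one possible formatter built from other standard LogRecord attributes (the name detailed_formatter is just for this example):

    import logging
    
    # Other useful LogRecord attributes: %(funcName)s is the calling
    # function, %(lineno)d the line number, %(module)s the module name,
    # and %(process)d the OS process ID.
    detailed_formatter = logging.Formatter(
        '%(asctime)s [%(process)d] %(name)s.%(funcName)s:%(lineno)d '
        '%(levelname)s - %(message)s'
    )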

     

    # Understanding Log Levels and When to Use Each

     
    Python’s logging module has five standard levels, and knowing when to use each one is important for useful logs.

    Here is an example:
     

    import logging
    
    logger = logging.getLogger('payment_processor')
    logger.setLevel(logging.DEBUG)
    
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
    logger.addHandler(handler)
    
    def process_payment(user_id, amount):
        logger.debug(f'Starting payment processing for user {user_id}')
    
        if amount <= 0:
            logger.error(f'Invalid payment amount: {amount}')
            return False
    
        logger.info(f'Processing ${amount} payment for user {user_id}')
    
        if amount > 10000:
            logger.warning(f'Large transaction detected: ${amount}')
    
        try:
            # Simulate payment processing
            success = charge_card(user_id, amount)
            if success:
                logger.info(f'Payment successful for user {user_id}')
                return True
            else:
                logger.error(f'Payment failed for user {user_id}')
                return False
        except Exception as e:
            logger.critical(f'Payment system crashed: {e}', exc_info=True)
            return False
    
    def charge_card(user_id, amount):
        # Simulated payment logic
        return True
    
    process_payment(12345, 150.00)
    process_payment(12345, 15000.00)

     

    Let us break down when to use each level:

    • DEBUG is for detailed information useful during development. You would use it for variable values, loop iterations, or step-by-step execution traces. These are usually disabled in production.
    • INFO marks normal operations that you want to record. Starting a server, completing a task, or successful transactions go here. These confirm your application is working as expected.
    • WARNING signals something unexpected but not breaking. This includes low disk space, deprecated API usage, or unusual but handled situations. The application continues running, but someone should investigate.
    • ERROR means something failed but the application can continue. Failed database queries, validation errors, or network timeouts belong here. The specific operation failed, but the app keeps running.
    • CRITICAL indicates serious problems that might cause the application to crash or lose data. Use this sparingly for catastrophic failures that need immediate attention.

    When you run the above code, you will get:
     

    DEBUG: Starting payment processing for user 12345
    INFO: Processing $150.0 payment for user 12345
    INFO: Payment successful for user 12345
    DEBUG: Starting payment processing for user 12345
    INFO: Processing $15000.0 payment for user 12345
    WARNING: Large transaction detected: $15000.0
    INFO: Payment successful for user 12345

     

    Next, let us look at how to log exceptions properly.

     

    # Logging Exceptions Properly

     
    When exceptions occur, you need more than just the error message; you need the full stack trace. Here is how to capture exceptions effectively.
     

    import json
    import logging
    
    logger = logging.getLogger('api_handler')
    logger.setLevel(logging.DEBUG)
    
    handler = logging.FileHandler('errors.log')
    formatter = logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    )
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    
    def fetch_user_data(user_id):
        logger.info(f'Fetching data for user {user_id}')
    
        try:
            # Simulate API call
            response = call_external_api(user_id)
            data = json.loads(response)
            logger.debug(f'Received data: {data}')
            return data
        except json.JSONDecodeError as e:
            logger.error(
                f'Failed to parse JSON for user {user_id}: {e}',
                exc_info=True
            )
            return None
        except ConnectionError as e:
            logger.error(
                f'Network error while fetching user {user_id}',
                exc_info=True
            )
            return None
        except Exception as e:
            logger.critical(
                f'Unexpected error in fetch_user_data: {e}',
                exc_info=True
            )
            raise
    
    def call_external_api(user_id):
        # Simulated API response
        return '{"id": ' + str(user_id) + ', "name": "John"}'
    
    fetch_user_data(123)

     

    The key here is the exc_info=True parameter. This tells the logger to include the full exception traceback in your logs. Without it, you only get the error message, which often is not enough to debug the problem.

    Notice how we catch specific exceptions first, then have a general Exception handler. The specific handlers let us provide context-appropriate error messages. The general handler catches anything unexpected and re-raises it because we do not know how to handle it safely.

    Also notice we log at ERROR for expected exceptions (like network errors) but CRITICAL for unexpected ones. This distinction helps you prioritize when reviewing logs.
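
    A related shorthand from the standard library: calling logger.exception(...) inside an except block logs at ERROR level and attaches the traceback automatically, which is equivalent to logger.error(..., exc_info=True):

    import json
    import logging
    
    logger = logging.getLogger('api_handler')
    
    try:
        json.loads('not valid json')
    except json.JSONDecodeError:
        # Equivalent to logger.error(..., exc_info=True); only valid
        # inside an exception handler, where a traceback is active.
        logger.exception('Failed to parse JSON response')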

     

    # Creating a Reusable Logger Configuration

     
    Copying logger setup code across files is tedious and error-prone. Let us create a configuration function you can import anywhere in your project.
     

    # logger_config.py
    
    import logging
    import os
    from datetime import datetime
    
    
    def setup_logger(name, log_dir="logs", level=logging.INFO):
        """
        Create a configured logger instance
    
        Args:
            name: Logger name (usually __name__ from calling module)
            log_dir: Directory to store log files
            level: Minimum logging level
    
        Returns:
            Configured logger instance
        """
        # Create the logs directory if it doesn't exist
        if not os.path.exists(log_dir):
            os.makedirs(log_dir)
    
        logger = logging.getLogger(name)
    
        # Avoid adding handlers multiple times
        if logger.handlers:
            return logger
    
        logger.setLevel(level)
    
        # Console handler - INFO and above
        console_handler = logging.StreamHandler()
        console_handler.setLevel(logging.INFO)
        console_format = logging.Formatter("%(levelname)s - %(name)s - %(message)s")
        console_handler.setFormatter(console_format)
    
        # File handler - everything
        log_filename = os.path.join(
            log_dir, f"{name.replace('.', '_')}_{datetime.now().strftime('%Y%m%d')}.log"
        )
        file_handler = logging.FileHandler(log_filename)
        file_handler.setLevel(logging.DEBUG)
        file_format = logging.Formatter(
            "%(asctime)s - %(name)s - %(levelname)s - %(funcName)s:%(lineno)d - %(message)s"
        )
        file_handler.setFormatter(file_format)
    
        logger.addHandler(console_handler)
        logger.addHandler(file_handler)
    
        return logger

     

    Now that you have set up logger_config, you can use it in your Python script like so:
     

    from logger_config import setup_logger
    
    logger = setup_logger(__name__)
    
    def calculate_discount(price, discount_percent):
        logger.debug(f'Calculating discount: {price} * {discount_percent}%')
        
        if discount_percent < 0 or discount_percent > 100:
            logger.warning(f'Invalid discount percentage: {discount_percent}')
            discount_percent = max(0, min(100, discount_percent))
        
        discount = price * (discount_percent / 100)
        final_price = price - discount
        
        logger.info(f'Applied {discount_percent}% discount: ${price} -> ${final_price}')
        return final_price
    
    calculate_discount(100, 20)
    calculate_discount(100, 150)

     

    This setup function handles several important things. First, it creates the logs directory if needed, preventing crashes from missing directories.

    The function checks if handlers already exist before adding new ones. Without this check, calling setup_logger multiple times would create duplicate log entries.

    We generate dated log filenames automatically. This prevents log files from growing infinitely and makes it easy to find logs from specific dates.

    The file handler includes more detail than the console handler, including function names and line numbers. This is invaluable when debugging but would clutter console output.

    Using __name__ as the logger name creates a hierarchy that matches your module structure. This lets you control logging for specific parts of your application independently.
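
    For instance, because dots in logger names define parent-child relationships, you can quiet one noisy subsystem without touching the rest. A small sketch, assuming hypothetical modules named myapp.db and myapp.api:

    import logging
    
    # 'myapp.db' and 'myapp.api' are children of 'myapp' because of the
    # dotted names; children inherit their parent's level by default.
    logging.getLogger('myapp').setLevel(logging.INFO)
    
    # Silence only the database module's chatter. 'myapp.api' still
    # inherits INFO from 'myapp'.
    logging.getLogger('myapp.db').setLevel(logging.WARNING)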

     

    # Structuring Logs with Context

     
    Plain text logs are fine for simple applications, but structured logs with context make debugging much easier. Let us add contextual information to our logs.
     

    import json
    import logging
    from datetime import datetime, timezone
    
    class ContextLogger:
        """Logger wrapper that adds contextual information to all log messages"""
    
        def __init__(self, name, context=None):
            self.logger = logging.getLogger(name)
            self.context = context or {}
    
            handler = logging.StreamHandler()
            formatter = logging.Formatter('%(message)s')
            handler.setFormatter(formatter)
            # Avoid adding a duplicate handler when the class is
            # instantiated more than once for the same logger name
            already_added = any(
                isinstance(h, logging.StreamHandler) and h.formatter._fmt == '%(message)s'
                for h in self.logger.handlers
            )
            if not already_added:
                self.logger.addHandler(handler)
            self.logger.setLevel(logging.DEBUG)
    
        def _format_message(self, message, level, extra_context=None):
            """Format message with context as JSON"""
            log_data = {
                'timestamp': datetime.now(timezone.utc).isoformat(),
                'level': level,
                'message': message,
                'context': {**self.context, **(extra_context or {})}
            }
            return json.dumps(log_data)
    
        def debug(self, message, **kwargs):
            self.logger.debug(self._format_message(message, 'DEBUG', kwargs))
    
        def info(self, message, **kwargs):
            self.logger.info(self._format_message(message, 'INFO', kwargs))
    
        def warning(self, message, **kwargs):
            self.logger.warning(self._format_message(message, 'WARNING', kwargs))
    
        def error(self, message, **kwargs):
            self.logger.error(self._format_message(message, 'ERROR', kwargs))

     

    You can use the ContextLogger like so:
     

    def process_order(order_id, user_id):
        logger = ContextLogger(__name__, context={
            'order_id': order_id,
            'user_id': user_id
        })
    
        logger.info('Order processing started')
    
        try:
            items = fetch_order_items(order_id)
            logger.info('Items fetched', item_count=len(items))
    
            total = calculate_total(items)
            logger.info('Total calculated', total=total)
    
            if total > 1000:
                logger.warning('High value order', total=total, flagged=True)
    
            return True
        except Exception as e:
            logger.error('Order processing failed', error=str(e))
            return False
    
    def fetch_order_items(order_id):
        return [{'id': 1, 'price': 50}, {'id': 2, 'price': 75}]
    
    def calculate_total(items):
        return sum(item['price'] for item in items)
    
    process_order('ORD-12345', 'USER-789')

     

    This ContextLogger wrapper does something useful: it automatically includes context in every log message. The order_id and user_id get added to all logs without repeating them in every logging call.

    The JSON format makes these logs easy to parse and search.

    The **kwargs in each logging method lets you add extra context to specific log messages. This combines global context (order_id, user_id) with local context (item_count, total) automatically.

    This pattern is especially useful in web applications where you want request IDs, user IDs, or session IDs in every log message from a request.
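
    If you prefer to stay closer to the standard library, logging.LoggerAdapter provides a similar mechanism: it merges a fixed context dict into every record via the extra parameter. A minimal sketch, with a made-up request_id value standing in for one your framework would supply:

    import logging
    
    # This format expects a request_id field on every record, which the
    # adapter below supplies.
    logging.basicConfig(format='%(message)s [request_id=%(request_id)s]')
    base_logger = logging.getLogger('web_app')
    
    # The adapter injects this dict into every record as extra fields,
    # so the formatter can reference %(request_id)s directly.
    logger = logging.LoggerAdapter(base_logger, {'request_id': 'req-42'})
    logger.warning('Slow response detected')
    # Slow response detected [request_id=req-42]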

     

    # Rotating Log Files to Prevent Disk Space Issues

     
    Log files grow quickly in production. Without rotation, they will eventually fill your disk. Here is how to implement automatic log rotation.
     

    import logging
    from logging.handlers import RotatingFileHandler, TimedRotatingFileHandler
    
    def setup_rotating_logger(name):
        logger = logging.getLogger(name)
        logger.setLevel(logging.DEBUG)
    
        # Size-based rotation: rotate when file reaches 10MB
        size_handler = RotatingFileHandler(
            'app_size_rotation.log',
            maxBytes=10 * 1024 * 1024,  # 10 MB
            backupCount=5  # Keep 5 old files
        )
        size_handler.setLevel(logging.DEBUG)
    
        # Time-based rotation: rotate daily at midnight
        time_handler = TimedRotatingFileHandler(
            'app_time_rotation.log',
            when='midnight',
            interval=1,
            backupCount=7  # Keep 7 days
        )
        time_handler.setLevel(logging.INFO)
    
        formatter = logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        )
        size_handler.setFormatter(formatter)
        time_handler.setFormatter(formatter)
    
        logger.addHandler(size_handler)
        logger.addHandler(time_handler)
    
        return logger
    
    
    logger = setup_rotating_logger('rotating_app')

     

    Now let us generate enough log messages to exercise the rotation:
     

    for i in range(1000):
        logger.info(f'Processing record {i}')
        logger.debug(f'Record {i} details: completed in {i * 0.1}ms')

     

    RotatingFileHandler manages logs based on file size. When the log file reaches 10MB (specified in bytes), it gets renamed to app_size_rotation.log.1, and a new app_size_rotation.log starts. The backupCount of 5 means you will keep 5 old log files before the oldest gets deleted.

    TimedRotatingFileHandler rotates based on time intervals. The ‘midnight’ parameter means it creates a new log file every day at midnight. You could also use ‘H’ for hourly, ‘D’ for daily (every 24 hours from the last rollover, not pinned to midnight), or ‘W0’ for weekly on Monday.

    The interval parameter works with the when parameter. With when='H' and interval=6, logs would rotate every 6 hours.

    These handlers are essential for production environments. Without them, your application could crash when the disk fills up with logs.
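
    To save further disk space, the rotating handlers also expose namer and rotator hooks (part of the standard library since Python 3.3) that let you compress old files during rollover. A sketch along the lines of the logging cookbook's gzip example:

    import gzip
    import os
    import shutil
    from logging.handlers import RotatingFileHandler
    
    def gzip_namer(name):
        # Rotated backups become app.log.1.gz, app.log.2.gz, ...
        return name + '.gz'
    
    def gzip_rotator(source, dest):
        # Called during rollover: compress the closed log file,
        # then remove the uncompressed original.
        with open(source, 'rb') as f_in, gzip.open(dest, 'wb') as f_out:
            shutil.copyfileobj(f_in, f_out)
        os.remove(source)
    
    handler = RotatingFileHandler(
        'app.log', maxBytes=10 * 1024 * 1024, backupCount=5
    )
    handler.namer = gzip_namer
    handler.rotator = gzip_rotator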

     

    # Logging in Different Environments

     
    Your logging needs differ between development, staging, and production. Here is how to configure logging that adapts to each environment.
     

    import logging
    import logging.handlers
    import os
    
    def configure_environment_logger(app_name):
        """Configure logger based on environment"""
        environment = os.getenv('APP_ENV', 'development')
        
        logger = logging.getLogger(app_name)
        
        # Clear existing handlers
        logger.handlers = []
        
        if environment == 'development':
            # Development: verbose console output
            logger.setLevel(logging.DEBUG)
            handler = logging.StreamHandler()
            handler.setLevel(logging.DEBUG)
            formatter = logging.Formatter(
                '%(levelname)s - %(name)s - %(funcName)s:%(lineno)d - %(message)s'
            )
            handler.setFormatter(formatter)
            logger.addHandler(handler)
            
        elif environment == 'staging':
            # Staging: detailed file logs + important console messages
            logger.setLevel(logging.DEBUG)
            
            file_handler = logging.FileHandler('staging.log')
            file_handler.setLevel(logging.DEBUG)
            file_formatter = logging.Formatter(
                '%(asctime)s - %(name)s - %(levelname)s - %(funcName)s - %(message)s'
            )
            file_handler.setFormatter(file_formatter)
            
            console_handler = logging.StreamHandler()
            console_handler.setLevel(logging.WARNING)
            console_formatter = logging.Formatter('%(levelname)s: %(message)s')
            console_handler.setFormatter(console_formatter)
            
            logger.addHandler(file_handler)
            logger.addHandler(console_handler)
            
        elif environment == 'production':
            # Production: structured logs, errors only to console
            logger.setLevel(logging.INFO)
            
            file_handler = logging.handlers.RotatingFileHandler(
                'production.log',
                maxBytes=50 * 1024 * 1024,  # 50 MB
                backupCount=10
            )
            file_handler.setLevel(logging.INFO)
            file_formatter = logging.Formatter(
                '{"timestamp": "%(asctime)s", "level": "%(levelname)s", '
                '"logger": "%(name)s", "message": "%(message)s"}'
            )
            file_handler.setFormatter(file_formatter)
            
            console_handler = logging.StreamHandler()
            console_handler.setLevel(logging.ERROR)
            console_formatter = logging.Formatter('%(levelname)s: %(message)s')
            console_handler.setFormatter(console_formatter)
            
            logger.addHandler(file_handler)
            logger.addHandler(console_handler)
        
        return logger

     

    This environment-based configuration handles each stage differently. Development shows everything on the console with detailed information, including function names and line numbers. This makes debugging fast.

    Staging balances development and production. It writes detailed logs to files for investigation but only shows warnings and errors on the console to avoid noise.

    Production focuses on performance and structure. It only logs INFO level and above to files, uses JSON formatting for easy parsing, and implements log rotation to manage disk space. Console output is limited to errors only.
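
    One caveat with building JSON through a format string: a log message containing a double quote or a newline will break the output's validity. A safer sketch is a small Formatter subclass (here called JsonFormatter, our own helper, not a stdlib class) that serializes each record with json.dumps:

    import json
    import logging
    
    class JsonFormatter(logging.Formatter):
        """Emit each record as a JSON object; json.dumps handles escaping."""
        def format(self, record):
            return json.dumps({
                'timestamp': self.formatTime(record),
                'level': record.levelname,
                'logger': record.name,
                'message': record.getMessage(),
            })
    
    # Usage: file_handler.setFormatter(JsonFormatter())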
     

    # Set environment variable (normally done by deployment system)
    os.environ['APP_ENV'] = 'production'
    
    logger = configure_environment_logger('my_application')
    
    logger.debug('This debug message won\'t appear in production')
    logger.info('User logged in successfully')
    logger.error('Failed to process payment')

     

    The environment is determined by the APP_ENV environment variable, which your deployment system (Docker, Kubernetes, or your cloud platform) typically sets for you.

    Notice how we clear existing handlers before configuration. This prevents duplicate handlers if the function is called multiple times during the application lifecycle.
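
    If the branching logic grows unwieldy, the same per-environment setup can be expressed declaratively with logging.config.dictConfig from the standard library; each environment then becomes a dictionary (or a JSON/YAML file) rather than a code path. A minimal sketch:

    import logging.config
    
    logging.config.dictConfig({
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'simple': {'format': '%(levelname)s: %(message)s'},
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'level': 'INFO',
                'formatter': 'simple',
            },
        },
        'root': {'level': 'DEBUG', 'handlers': ['console']},
    })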

     

    # Wrapping Up

     
    Good logging makes the difference between quickly diagnosing issues and spending hours guessing what went wrong. Start with basic logging using appropriate severity levels, add structured context to make logs searchable, and configure rotation to prevent disk space problems.

    The patterns shown here work for applications of any size. Adopt them incrementally: add structured logging when you need better searchability, and introduce environment-specific configuration when you deploy to production.

    Happy logging!
     
     

    Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she’s working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.


