
loguru — Structured Logging

Add structured, colorized logging to Python apps with loguru. Covers sinks, log levels, file rotation, retention, exception catching, and context binding.



What it is#

Loguru is a structured logging library that replaces Python's standard logging module with a simpler, zero-configuration API. It provides:

  • Colorized output with level labels out of the box
  • File sinks with automatic rotation, retention, and compression
  • logger.catch() decorator for automatic exception catching
  • Context binding with logger.bind()
  • Structured JSON output via serialize=True

Install#

pip install loguru

Quick example#

from loguru import logger

logger.debug("Starting up")
logger.info("Server running on port {port}", port=8080)
logger.warning("Low memory: {mb} MB remaining", mb=128)
logger.error("Connection refused to {host}", host="db.internal")

Output:

2026-04-25 14:30:01.234 | DEBUG    | __main__:<module>:3 - Starting up
2026-04-25 14:30:01.234 | INFO     | __main__:<module>:4 - Server running on port 8080
2026-04-25 14:30:01.234 | WARNING  | __main__:<module>:5 - Low memory: 128 MB remaining
2026-04-25 14:30:01.234 | ERROR    | __main__:<module>:6 - Connection refused to db.internal

[!NOTE] In the terminal, each level is colorized: DEBUG is dim, INFO is white, WARNING is yellow, ERROR is red. The output includes module, function, and line number automatically.

When / why to use it over logging#

| Feature | `logging` | loguru |
| --- | --- | --- |
| Zero-config setup | ❌ (handlers, formatters) | ✅ |
| Colorized output | ❌ | ✅ |
| f-string-style messages | ❌ | ✅ |
| File rotation | Manual setup | `logger.add("app.log", rotation="10 MB")` |
| Exception context | Manual | `logger.catch()` / `logger.exception()` |
| Structured JSON | Manual | `logger.add(..., serialize=True)` |

Common pitfalls#

[!WARNING] Thread-safety — loguru sinks are thread-safe by default, but not multiprocess-safe. If multiple processes write to the same file sink, pass enqueue=True (logger.add("app.log", enqueue=True)) so writes are funneled through a single queue and don't conflict.

[!WARNING] logger.remove() before reconfiguring — loguru starts with a default stderr sink (id 0). If you add your own stderr sink without removing the default, you get duplicate output. Call logger.remove() first.

[!TIP] Use logger.info("value={x}", x=val) (lazy formatting) rather than logger.info(f"value={val}"). Lazy formatting skips string interpolation entirely if the log level is disabled — important for performance in hot paths.

Richer example — file sink with rotation#

import sys
from loguru import logger

# Remove default stderr sink and add custom ones
logger.remove()

# Concise stderr: only INFO and above
logger.add(
    sys.stderr,
    level="INFO",
    format="<green>{time:HH:mm:ss}</green> | <level>{level:<8}</level> | {message}",
    colorize=True,
)

# Verbose file: DEBUG and above, with rotation
logger.add(
    "logs/app.log",
    level="DEBUG",
    rotation="10 MB",       # new file after 10 MB
    retention="7 days",     # delete logs older than 7 days
    compression="zip",      # compress rotated files
    format="{time:YYYY-MM-DD HH:mm:ss} | {level:<8} | {name}:{line} | {message}",
)

logger.debug("Debug detail — goes to file only")
logger.info("Server started")
logger.warning("Disk usage above 80%")

Output (stderr only):

14:30:01 | INFO     | Server started
14:30:01 | WARNING  | Disk usage above 80%

Exception catching#

from loguru import logger

@logger.catch
def divide(a: int, b: int) -> float:
    return a / b

result = divide(10, 0)   # won't crash the program — logs the traceback, returns None

Output:

2026-04-25 14:30:01.234 | ERROR    | __main__:divide:4 - An error has been caught in function 'divide', process 'MainProcess' (1234), thread 'MainThread' (5678):
Traceback (most recent call last):
  ...
ZeroDivisionError: division by zero

Context binding#

from loguru import logger

def handle_request(request_id: str, user_id: int):
    log = logger.bind(request_id=request_id, user_id=user_id)
    log.info("Request received")
    log.info("Processing complete")

handle_request("req-abc123", user_id=42)

Output:

2026-04-25 14:30:01 | INFO | __main__:handle_request:4 - Request received
2026-04-25 14:30:01 | INFO | __main__:handle_request:5 - Processing complete

JSON / structured output#

import sys
from loguru import logger

logger.remove()
logger.add(sys.stdout, serialize=True)   # every line is a JSON object

logger.info("User logged in", user_id=42, action="login")

Output:

{"text": "2026-04-25 14:30:01.234 | INFO | __main__:<module>:5 - User logged in", "record": {"elapsed": ..., "exception": null, "extra": {"user_id": 42, "action": "login"}, "file": ..., "function": "<module>", "level": {"icon": "ℹ️", "name": "INFO", "no": 20}, "line": 5, "message": "User logged in", ...}}

Log levels#

| Level | Method | Default value |
| --- | --- | --- |
| TRACE | `logger.trace()` | 5 |
| DEBUG | `logger.debug()` | 10 |
| INFO | `logger.info()` | 20 |
| SUCCESS | `logger.success()` | 25 |
| WARNING | `logger.warning()` | 30 |
| ERROR | `logger.error()` | 40 |
| CRITICAL | `logger.critical()` | 50 |

[!TIP] logger.success() is a loguru-specific level (between INFO and WARNING) useful for marking successful completion of significant operations.