Logging is really important. Getting logging to work well in Python feels like black magic: there's a ton of configuration across structlog, warnings, stdlib loggers, FastAPI + Celery context, JSON logging in production, etc. that requires lots of fiddling and testing to get right. I finally got this working in my project template and extracted it into a nice package.
Here are the main goals:
- High performance JSON logging in production
- All loggers, even plugin or system loggers, should route through the same formatter
- Structured logging everywhere
- Pytest plugin to easily capture logs and dump to a directory on failure. This is really important for LLMs so they can easily consume logs and context for each test and handle them sequentially.
- Ability to easily set thread-local log context
- Nice log formatters for stack traces, ORM (ActiveModel/SQLModel), etc
- Ability to set log level and output destination (i.e. file path) per logger, for easy development debugging
- If you're using FastAPI, structured logging for access logs
- Improved exception logging with beautiful-traceback
Install the package:

```shell
uv add structlog-config
```

```python
from structlog_config import configure_logger

log = configure_logger()
log.info("the log", key="value")

# named logger, just like stdlib
import structlog

custom_named_logger = structlog.get_logger(logger_name="test")
```

JSON logging can be easily enabled:
```python
from structlog_config import configure_logger

# automatic JSON logging in production
log = configure_logger(json_logger=True)
log.info("user login", user_id="123", action="login")
# Output: {"action":"login","event":"user login","level":"info","timestamp":"2025-09-24T18:03:00Z","user_id":"123"}
```

JSON logs use orjson for performance, include sorted keys and ISO timestamps, and serialize exceptions cleanly.
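As a rough illustration of what the JSON renderer does, here's a simplified sketch using stdlib `json` in place of orjson (this is not the package's implementation, just the shape of the output):

```python
import json
from datetime import datetime, timezone


def render_json(event: str, level: str, **fields) -> str:
    """Simplified sketch of a JSON log renderer: sorted keys, ISO timestamp, compact output."""
    record = {
        "event": event,
        "level": level,
        "timestamp": datetime.now(timezone.utc).isoformat(timespec="seconds"),
        **fields,
    }
    # orjson sorts keys via OPT_SORT_KEYS; stdlib json uses sort_keys=True
    return json.dumps(record, sort_keys=True, separators=(",", ":"))


line = render_json("user login", "info", user_id="123", action="login")
print(line)
```

One JSON object per line is what makes file output newline-delimited JSON, which downstream log shippers can consume directly.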
In complex applications, multiple components might try to configure the logger. You can finalize the configuration to prevent accidental reinitialization without the correct state:
```python
from structlog_config import configure_logger

# initialize and lock the configuration
configure_logger(finalize_configuration=True)

# any subsequent calls will log a warning and return the existing logger
configure_logger(json_logger=True)
```

This package adds support for a custom TRACE logging level (level 5) that's even more verbose than DEBUG.
The TRACE level is automatically set up when you call configure_logger(). You can use it like any other logging level:
```python
import logging

from structlog_config import configure_logger

log = configure_logger()

# using structlog
log.info("This is info")
log.debug("This is debug")
log.trace("This is trace")  # most verbose

# using stdlib logging
logging.trace("Module-level trace message")

logger = logging.getLogger(__name__)
logger.trace("Instance trace message")
```

Set the log level to TRACE using the environment variable:
```shell
LOG_LEVEL=TRACE
```

By default, all stdlib loggers are:
- Given the same global logging level, with some default adjustments for noisy loggers (looking at you, httpx)
- Routed through a structlog formatter (you get structured logging, context, etc. in any stdlib logger calls)
- The root processor is overwritten, so any child loggers created after initialization will use the same formatter
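The custom TRACE level described earlier can be sketched with stdlib logging alone. This is the standard pattern for registering a custom level, simplified for illustration, not the package's exact code:

```python
import logging

TRACE = 5  # below DEBUG (10)
logging.addLevelName(TRACE, "TRACE")


def _trace(self, message, *args, **kwargs):
    # standard pattern for adding a custom-level method to Logger
    if self.isEnabledFor(TRACE):
        self._log(TRACE, message, args, **kwargs)


logging.Logger.trace = _trace
# module-level convenience, mirroring logging.debug / logging.info
logging.trace = lambda msg, *a, **kw: logging.getLogger().trace(msg, *a, **kw)

logging.getLogger("demo").setLevel(TRACE)
logging.getLogger("demo").trace("very verbose detail")
```

The `isEnabledFor` guard keeps disabled trace calls cheap, which matters for a level this verbose.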
You can customize loggers by name (i.e. the name used in logging.getLogger(__name__)) using environment variables.
For example, if you wanted to mimic OPENAI_LOG functionality:
```shell
LOG_LEVEL_OPENAI=DEBUG
LOG_PATH_OPENAI=tmp/openai.log
LOG_LEVEL_HTTPX=DEBUG
LOG_PATH_HTTPX=tmp/openai.log
```
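A rough sketch of how per-logger level env vars like these can be applied with stdlib logging (illustrative only; the package's actual parsing may differ):

```python
import logging
import os


def apply_env_log_levels(environ=os.environ):
    """Apply LOG_LEVEL_<NAME>=<LEVEL> env vars to named stdlib loggers."""
    prefix = "LOG_LEVEL_"
    for key, value in environ.items():
        if key.startswith(prefix):
            logger_name = key[len(prefix):].lower()  # LOG_LEVEL_HTTPX -> httpx
            # setLevel accepts level names like "DEBUG" directly
            logging.getLogger(logger_name).setLevel(value.upper())


apply_env_log_levels({"LOG_LEVEL_HTTPX": "DEBUG"})
```

Because stdlib loggers are process-wide singletons, setting the level by name affects every module that calls logging.getLogger("httpx").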
By default all logs go to stdout.
PYTHON_LOG_PATH supports three destination forms in both JSON and non-JSON modes:
- stdout: the default destination, useful when you want to set it explicitly via environment
- stderr: redirect everything to stderr without custom Python code
- a filesystem path: append logs to a file; in JSON mode this produces newline-delimited JSON
Examples:
```shell
PYTHON_LOG_PATH=stderr
PYTHON_LOG_PATH=stdout
PYTHON_LOG_PATH=tmp/app.log
```

stdout and stderr are treated as reserved keywords for PYTHON_LOG_PATH, not literal filenames.
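Resolving such a value to a writable stream can be sketched like this (illustrative; not the package's exact logic):

```python
import sys
from typing import IO


def resolve_log_destination(value: str) -> IO:
    """Map a PYTHON_LOG_PATH-style value to a stream.

    Reserved keywords win; anything else is treated as a file path,
    opened in append mode so logs accumulate across runs.
    """
    if value == "stdout":
        return sys.stdout
    if value == "stderr":
        return sys.stderr
    return open(value, "a")
```

Checking the reserved keywords before touching the filesystem is what makes `stdout`/`stderr` impossible to use as literal filenames.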
To redirect everything (structlog and stdlib) to stderr with an explicit factory instead:
```python
import sys

import structlog

from structlog_config import configure_logger

log = configure_logger(logger_factory=structlog.PrintLoggerFactory(file=sys.stderr))
```

For JSON mode, pass a BytesLoggerFactory pointing to stderr.buffer. The .buffer is required because orjson serializes to bytes and sys.stderr only accepts str:
```python
log = configure_logger(
    json_logger=True,
    logger_factory=structlog.BytesLoggerFactory(file=sys.stderr.buffer),
)
```

Both structlog and stdlib loggers will write to the same destination.
An explicit logger_factory takes precedence over PYTHON_LOG_PATH.
This package includes several custom formatters that automatically clean up log output:
Automatically formats pathlib.Path and PosixPath objects to show relative paths when possible, removing the wrapper class names:
```python
from pathlib import Path

log.info("Processing file", file_path=Path.cwd() / "data" / "users.csv")
# Output: file_path=data/users.csv (instead of PosixPath('/home/user/data/users.csv'))
```

Formats `whenever` datetime objects without their class wrappers for cleaner output:
```python
from whenever import ZonedDateTime

log.info("Event scheduled", event_time=ZonedDateTime(2025, 11, 2, 0, 0, 0, tz="UTC"))
# Output: event_time=2025-11-02T00:00:00+00:00[UTC]
# Instead of: event_time=ZonedDateTime("2025-11-02T00:00:00+00:00[UTC]")
```

Supports all whenever datetime types: ZonedDateTime, Instant, LocalDateTime, PlainDateTime, etc.
Automatically converts ActiveModel BaseModel instances to their ID representation and TypeID objects to strings:
```python
from activemodel import BaseModel

user = User(id="user_123", name="Alice")
log.info("User action", user=user)
# Output: user_id=user_123 (instead of full object representation)
```

Automatically includes all context data from starlette-context in your logs, useful for request tracing:
```python
# Context data (request_id, correlation_id, etc.) automatically included in all logs
log.info("Processing request")
# Output includes: request_id=abc-123 correlation_id=xyz-789 ...
```

All formatters are optional and automatically enabled when their respective dependencies are installed. They work seamlessly in both development (console) and production (JSON) logging modes.
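In structlog terms, these formatters are processors: plain callables that receive the event dict and rewrite values before rendering. A minimal sketch of a Path-shortening processor (illustrative, not the package's implementation; no structlog import is needed because a processor is just a callable):

```python
from pathlib import Path


def shorten_paths(logger, method_name, event_dict):
    """structlog-style processor: render Path values relative to the cwd when possible."""
    for key, value in event_dict.items():
        if isinstance(value, Path):
            try:
                event_dict[key] = str(value.relative_to(Path.cwd()))
            except ValueError:
                # not under cwd; fall back to the absolute path as a string
                event_dict[key] = str(value)
    return event_dict


event = shorten_paths(None, "info", {"file_path": Path.cwd() / "data" / "users.csv"})
print(event["file_path"])  # data/users.csv on POSIX
```

Because processors compose in a chain, a formatter like this can be slotted in ahead of the final console or JSON renderer without touching either.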
Note: Requires `uv add structlog-config[fastapi]` for FastAPI dependencies.
Structured, simple access log with request timing to replace the default FastAPI access log. Why?
- It's less verbose
- Uses structured logging params instead of string interpolation
- Static asset requests are logged at debug level
Here's how to use it:
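The package's actual wiring isn't reproduced here. As an illustrative sketch of what a structured access log with request timing does, here is a minimal raw-ASGI middleware; all names are hypothetical, and a real integration would route through structlog rather than a plain callable:

```python
import asyncio
import time


def _default_log(event, **fields):
    # stand-in for a structlog call like log.info(event, **fields)
    print(event, fields)


class AccessLogMiddleware:
    """Minimal ASGI middleware sketch: log method, path, status, and request duration."""

    def __init__(self, app, log=_default_log):
        self.app = app
        self.log = log

    async def __call__(self, scope, receive, send):
        if scope["type"] != "http":
            return await self.app(scope, receive, send)

        start = time.perf_counter()
        status = {}

        async def send_wrapper(message):
            # intercept the response start to capture the status code
            if message["type"] == "http.response.start":
                status["code"] = message["status"]
            await send(message)

        await self.app(scope, receive, send_wrapper)
        self.log(
            "http request",
            method=scope["method"],
            path=scope["path"],
            status=status.get("code"),
            duration_ms=round((time.perf_counter() - start) * 1000, 2),
        )


# demo without a web framework: drive the middleware with a toy ASGI app
records = []


async def toy_app(scope, receive, send):
    await send({"type": "http.response.start", "status": 200, "headers": []})
    await send({"type": "http.response.body", "body": b"ok"})


async def demo():
    mw = AccessLogMiddleware(toy_app, log=lambda event, **f: records.append((event, f)))

    async def send(message):
        pass

    async def receive():
        return {"type": "http.request"}

    await mw({"type": "http", "method": "GET", "path": "/"}, receive, send)


asyncio.run(demo())
```

Logging structured fields (method, path, status, duration) instead of interpolating one access-log string is what makes these logs filterable and aggregatable downstream.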
A pytest plugin that captures stdout, stderr, and exceptions from failing tests and writes them to organized output files. This is useful for debugging test failures, especially in CI/CD environments where you need to inspect output after the fact.
- Captures stdout, stderr, and exception tracebacks for failing tests
- Only creates output for failing tests by default (keeps directories clean)
- Separate files for each output type (stdout.txt, stderr.txt, exception.txt)
- Captures all test phases (setup, call, teardown)
- Optional fd-level capture for file descriptor output
Enable the plugin with the --structlog-output flag and -s (to disable pytest's built-in capture):
```shell
pytest --structlog-output=./test-output -s
```

Disable all structlog pytest capture functionality with --no-structlog or explicitly with -p no:structlog_config.
The --structlog-output flag both enables the plugin and specifies where output files should be written.
Add --structlog-persist-all when a test succeeds but you still want to inspect what happened, for example when the test may have passed unexpectedly:
```shell
pytest --structlog-output=./test-output --structlog-persist-all -s
```

This is mainly a local debugging mode. It is usually not a good default for CI or other large test runs, because keeping passing-test artifacts makes the output directories noisier and harder to inspect.
Recommended: Also disable pytest's logging plugin with -p no:logging to avoid duplicate/interfering capture:
```shell
pytest --structlog-output=./test-output -s -p no:logging
```

While the plugin works without this flag, disabling pytest's logging capture ensures cleaner output and avoids potential conflicts between the two capture mechanisms.
Each failing test gets its own directory with separate files by default. When --structlog-persist-all is enabled, passing tests keep the same artifact layout too, so you can inspect a suspicious success after the run.
```
test-output/
  test_module__test_name/
    stdout.txt     # stdout from test (includes setup, call, and teardown phases)
    stderr.txt     # stderr from test (includes setup, call, and teardown phases)
    exception.txt  # exception traceback
```
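The directory name appears to combine the test module and test name. A plausible sketch of that mapping (the helper name and exact sanitization rules are assumptions, not the plugin's real code):

```python
from pathlib import PurePosixPath


def artifact_dir_name(nodeid: str) -> str:
    """Hypothetical mapping of a pytest nodeid to an artifact directory name.

    'tests/test_user.py::test_user_login' -> 'test_user__test_user_login'
    """
    file_part, _, test_part = nodeid.partition("::")
    module = PurePosixPath(file_part).stem  # tests/test_user.py -> test_user
    # flatten class/parametrize separators into filesystem-safe characters
    test_name = test_part.replace("::", "__").replace("/", "_")
    return f"{module}__{test_name}"


print(artifact_dir_name("tests/test_user.py::test_user_login"))  # test_user__test_user_login
```

Deriving a flat, deterministic name per test is what lets an LLM (or a human) walk the output directory and process each failure sequentially.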
The plugin clears the per-test artifact directory before each test runs, so files from previous runs do not linger.
spawn is the default multiprocessing start method on macOS and Windows (and the recommended method on Linux). With spawn, child processes start as fresh Python interpreters and do not inherit any file descriptor redirections from the parent, so there is no transparent way to capture their output from the outside.
Instead, call configure_subprocess_capture() at the top of your subprocess entrypoint. It reads the STRUCTLOG_CAPTURE_DIR env var that the plugin sets automatically per-test and redirects the child's own fds to write there.
The child will create files alongside the normal stdout.txt/stderr.txt:
```
subprocess-<pid>-stdout.txt
subprocess-<pid>-stderr.txt
```
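The underlying technique is standard file-descriptor redirection with os.dup2; a simplified sketch of what a capture helper like this does inside the child process (the filename pattern comes from the text above, everything else is illustrative):

```python
import os
import tempfile


def redirect_stdout_to(path: str) -> int:
    """Redirect this process's fd 1 to a file, as a subprocess capture helper might.

    Returns a saved duplicate of the original fd so it can be restored.
    """
    saved = os.dup(1)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    os.dup2(fd, 1)  # fd 1 now points at the file; visible to C code and children too
    os.close(fd)
    return saved


# demo: capture a low-level write, then restore the original stdout
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, f"subprocess-{os.getpid()}-stdout.txt")
    saved = redirect_stdout_to(target)
    os.write(1, b"hello from fd 1\n")
    os.dup2(saved, 1)  # restore original stdout
    os.close(saved)
    with open(target) as f:
        captured = f.read()
```

Redirecting at the fd level (rather than reassigning sys.stdout) is what captures output from C extensions and grandchild processes as well.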
Example:
```python
# tests/conftest.py
from multiprocessing import Process

import pytest

from structlog_config.pytest_plugin import configure_subprocess_capture


def run_server():
    # must be called before any output; redirects this process's fds to the capture files
    configure_subprocess_capture()
    start_my_server()


@pytest.fixture
def server():
    proc = Process(target=run_server, daemon=True)
    proc.start()
    yield
    proc.terminate()
    proc.join()
```

For a real-world example of this pattern see python-starter-template/tests/integration/server.py.
The parent process does not merge or modify subprocess output files.
When a test fails:
```python
import sys


def test_user_login():
    print("Starting login process")
    print("ERROR: Connection failed", file=sys.stderr)
    assert False, "Login failed"
```

You'll get:

```
test-output/test_user__test_user_login/
  stdout.txt: "Starting login process"
  stderr.txt: "ERROR: Connection failed"
  exception.txt: Full traceback with "AssertionError: Login failed"
```
Optional support for beautiful-traceback (>=0.8.0) provides enhanced exception formatting with improved readability, smart coloring, path aliasing (e.g., <pwd>, <site>), and better alignment. Automatically activates when installed:
```shell
uv add beautiful-traceback --group dev
```

No configuration needed: just install it, and configure_logger() will use it automatically.
Replaces Python's default exception handler to log uncaught exceptions through structlog instead of printing them to stderr. This ensures all exceptions are formatted consistently with your logging configuration and includes support for threading exceptions.
When installed, the hook intercepts both main thread exceptions (sys.excepthook) and thread exceptions (threading.excepthook), preserving standard behavior for KeyboardInterrupt while logging all other uncaught exceptions with full traceback information.
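The mechanism can be sketched with plain stdlib hooks, independent of this package (the helper name here is illustrative):

```python
import sys


def install_logging_excepthook(log):
    """Route uncaught exceptions through a logger callable, preserving KeyboardInterrupt."""
    original = sys.excepthook

    def hook(exc_type, exc_value, exc_tb):
        if issubclass(exc_type, KeyboardInterrupt):
            original(exc_type, exc_value, exc_tb)  # keep standard Ctrl-C behavior
            return
        log("uncaught_exception", exc_info=(exc_type, exc_value, exc_tb))

    sys.excepthook = hook
    return original


# demo: invoke the hook directly instead of actually crashing the interpreter
events = []
previous = install_logging_excepthook(lambda event, **kw: events.append(event))
try:
    raise ValueError("boom")
except ValueError:
    sys.excepthook(*sys.exc_info())
sys.excepthook = previous  # restore the prior hook
```

The package does the equivalent wiring (plus threading.excepthook) when you pass install_exception_hook=True, as shown below.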
```python
from structlog_config import configure_logger

# install exception hook during logger configuration
configure_logger(install_exception_hook=True)

# uncaught exceptions now go through structlog
raise ValueError("This will be logged, not printed to stderr")
```

For threading exceptions, the hook automatically includes thread metadata:
```python
import threading


def worker():
    raise RuntimeError("Error in thread")


thread = threading.Thread(target=worker, name="worker-1")
thread.start()
thread.join()
# Logs: uncaught_exception thread={'name': 'worker-1', 'id': ..., 'is_daemon': False}
```

Often it's helpful to update the logging level within an IPython session. You can do this and make sure all loggers pick up on it:
```python
%env LOG_LEVEL=DEBUG

from structlog_config import configure_logger
configure_logger()
```
General logging:
- https://github.com/replicate/cog/blob/2e57549e18e044982bd100e286a1929f50880383/python/cog/logging.py#L20
- https://github.com/apache/airflow/blob/4280b83977cd5a53c2b24143f3c9a6a63e298acc/task_sdk/src/airflow/sdk/log.py#L187
- https://github.com/kiwicom/structlog-sentry
- https://github.com/jeremyh/datacube-explorer/blob/b289b0cde0973a38a9d50233fe0fff00e8eb2c8e/cubedash/logs.py#L40C21-L40C42
- https://stackoverflow.com/questions/76256249/logging-in-the-open-ai-python-library/78214464#78214464
- https://github.com/openai/openai-python/blob/de7c0e2d9375d042a42e3db6c17e5af9a5701a99/src/openai/_utils/_logs.py#L16
- https://www.python-httpx.org/logging/
FastAPI access logger:
- https://github.com/iloveitaly/fastapi-logger/blob/main/fastapi_structlog/middleware/access_log.py#L70
- https://github.com/fastapiutils/fastapi-utils/blob/master/fastapi_utils/timing.py
- https://pypi.org/project/fastapi-structlog/
- https://pypi.org/project/asgi-correlation-id/
- https://gist.github.com/nymous/f138c7f06062b7c43c060bf03759c29e
- https://github.com/sharu1204/fastapi-structlog/blob/master/app/main.py
This project was created from iloveitaly/python-package-template