The Ultimate Logging Guide for Python Explained with Examples and Best Practices

What will we cover?

At the end of this tutorial we will have covered the following.

  • Why do we need logging?
  • Why not just use print statements?
  • How logging works
  • Adding logging to our project

Why do we need logging?

As a starting developer you focus – and I did too – mostly on getting the program to do the job and less on how it is being done.

Little thought was given to design and best practices.

Over time this changes, and you might wonder why.

Debugging. Extending the code. Adding modules to the ecosystem.

You might only relate to the first one, debugging.

That is, your program is not behaving as intended, and it might be difficult to find the bug. Yes, you add print statements all over the code to figure out what happens. If you are more advanced, you might be using the PyCharm debugger to find it.

Bottom line is, it is difficult. 

When your module becomes part of a bigger ecosystem, the unintended behavior might be difficult to figure out – and you might not know where the error is.

Then adding print statements to all the modules in the ecosystem is not desirable.

While logging will not solve these problems by itself, it is a good tool for tracking them down.

But logging is used for more than debugging.

  • Issue diagnosis. Your service crashes when some user does something. Well, the user might not really remember what was causing it. Then good logging can help you figure out how to replicate the scenario in your development environment.
  • Analytics. Logs can give you information about load on your services, when and what modules are being used the most. This can help you to improve the experience for users.

We already discovered that when you have bugs to catch, you will often insert print statements to see what happens. These print statements need to be removed afterwards.

That might be a lot of work. Especially, if the bug you are hunting might be part of several modules.

We also learned that logging is used for issue diagnosis and analytics.

Yes, you might build your own way of doing issue diagnosis and analytics, and it might work. But if you use a standard module for logging, it will integrate easily with other systems.

Don’t build your own – if there is a good standard way to do it.

This holds for logging. As we will see later – we will make an easy integration of all our logs into Grafana.

When we learn a bit more about logs, you will also realize that logs have different levels. One level is for debugging – the lowest level – where you get a lot of information to help you find the bug. This is equivalent to adding all the print statements – and when you are done, they are effectively removed again. All done by adjusting the log level.
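To make this concrete, here is a small sketch (the logger name 'demo' is arbitrary) showing how adjusting the level switches debug output on and off without touching any of the logging calls:

```python
import io
import logging

# Use a named logger with its own handler so we do not touch global config.
logger = logging.getLogger('demo')
stream = io.StringIO()
logger.addHandler(logging.StreamHandler(stream))

logger.setLevel(logging.DEBUG)    # "print statements" switched on
logger.debug('inspecting the loop variable')

logger.setLevel(logging.WARNING)  # switched off again - no code removed
logger.debug('this one is silenced')

print(stream.getvalue())  # only the first message was emitted
```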

Step 2: How logging works

Logging comes in different levels.

  • DEBUG. Used to diagnose problems.
  • INFO. Confirmation that things are working as expected.
  • WARNING. Something unexpected happened, or an indication of some problem in the near future (but the software is still working as expected).
  • ERROR. The software has not been able to perform some function as expected.
  • CRITICAL. A serious error. Program might not be able to continue running.

See the official docs here.

A simple example of how logging works is given here.

import logging
logging.warning('Watch out!')  # will print a message to the console
logging.info('I told you so')  # will not print anything

This might seem strange, but the default logging level is WARNING, which means that only logging from WARNING and above (ERROR and CRITICAL) will be output.

On the other hand, if logging level is set to DEBUG – then all logging messages will be output.

This can be changed as follows, where we also write the log to a file.

import logging
logging.basicConfig(filename='example.log', encoding='utf-8', level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
logging.error('And non-ASCII stuff, too, like Øresund and Malmö')

This will output all the logging messages to the file example.log.

Step 3: Some good practices with logging

The best advice with logging is, do not add too many logs. At first you might want to have logs all over.

Use them as intended, per the schema below.

Level      When it is used
DEBUG      Detailed information, typically of interest only when diagnosing problems.
INFO       Confirmation that things are working as expected.
WARNING    An indication that something unexpected happened, or indicative of some problem in the near future (e.g. 'disk space low'). The software is still working as expected.
ERROR      Due to a more serious problem, the software has not been able to perform some function.
CRITICAL   A serious error, indicating that the program itself may be unable to continue running.

The second thing to consider is, what to log?

  • When. Always log the time – when you look at logs from different services communicating with each other, the time stamp will help you correlate them. Also, it helps you identify when something happened and whether it correlates with an incident.
  • Where. What application or what file is the log coming from. This is also crucial when you have logs from many modules.
  • Level. Including the level makes it easy to identify WARNINGs or similar.
  • What. Then the actual message – what happened.
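These four items map directly onto attributes in the format string: asctime (when), name (where), levelname (level), and message (what). A minimal sketch capturing the output in memory – the logger name 'checkout' is made up for illustration:

```python
import io
import logging

# when - where - level - what
fmt = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(fmt)

logger = logging.getLogger('checkout')  # hypothetical component name
logger.addHandler(handler)
logger.warning('stock is low')

print(stream.getvalue())
# e.g. 2024-05-01 12:00:00,123 - checkout - WARNING - stock is low
```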

There are different ways to configure the logging.

Common ways include a log configuration file or directly in the code. Here we will keep it simple and do it directly in the code.
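For completeness, the in-code configuration can also be expressed as a dictionary passed to logging.config.dictConfig(); this sketch is roughly equivalent to the basicConfig() call we use later, and the handler and formatter names are arbitrary:

```python
import logging
import logging.config

# Roughly equivalent to basicConfig(level=logging.INFO, format=...).
LOGGING = {
    'version': 1,
    'formatters': {
        'default': {
            'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'default',
        },
    },
    'root': {'level': 'INFO', 'handlers': ['console']},
}

logging.config.dictConfig(LOGGING)
logging.getLogger('app').info('configured via dictConfig')
```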

Step 4: Adding logging to a REST API

In this tutorial we will add logging to the REST API we created here.

You can clone the code from the repository here (see how to do it in the above tutorial) or you can just follow along the code here.

The repository consists of the files:

  • README.md
  • requirements.txt
  • .gitignore
  • server.py
  • make_order.py
  • app/main.py
  • app/routers/order.py

For a description see the above tutorial.

We will set up the logger inside the main file app/main.py.

import logging
from http import HTTPStatus

from fastapi import FastAPI

from .routers import order

logging.basicConfig(encoding='utf-8', level=logging.INFO,
                    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__file__)

app = FastAPI(
    title='Your Fruit Self Service',
    version='1.0.0',
    description='Order your fruits here',
    root_path=''
)
app.include_router(order.router)

@app.get('/', status_code=HTTPStatus.OK)
async def root():
    """
    Endpoint for basic connectivity test.
    """
    logger.info('root called')
    return {'message': 'I am alive'}

The simple way to set up a logger (done only once in the code base) is to call basicConfig(…). Here we have set up encoding, level, and format. The format configures what to log (besides the message). It is a good idea to include the time, name, and level. The time stamp tells you when something happened – a crucial detail when you try to figure out what went wrong and compare logs from various services (which communicate with each other).

The name is the file-name, which tells you where the log originates from. As you will see next, the logger is used in various files from this module.

Finally, see the logger used in the root() endpoint, simply by calling logger.info('root called'). Adding logs like this can seem a bit overkill, but info logs like this can be crucial when you want to check whether a service was running. It is common practice to have an endpoint like this and another service which calls it every minute or so, to check that the service is running. Then you can also check the log for the corresponding entry every minute (or however often you call it).

Now you have the possibility to monitor whether the service is running, and a log which can tell you the history of calls.
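Such a watchdog service could be sketched like this – the URL, port, and interval are assumptions, so adjust them to your deployment, and the loop is commented out so the sketch terminates:

```python
import logging
import urllib.request

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger('watchdog')  # hypothetical service name


def check_once(url='http://localhost:8000/'):
    """Call the health endpoint once and log the outcome."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            logger.info('health check OK: HTTP %s', resp.status)
            return True
    except OSError as exc:
        logger.error('health check failed: %s', exc)
        return False


# In a real watchdog you would loop:
# import time
# while True:
#     check_once()
#     time.sleep(60)
```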

Now let’s explore the file app/routers/order.py.

import logging
from http import HTTPStatus
from typing import Dict

from fastapi import APIRouter

logger = logging.getLogger(__file__)
router = APIRouter(tags=['income'])

@router.post('/order', status_code=HTTPStatus.OK)
async def order_call(order: str) -> Dict[str, str]:
    logger.info(f'Incoming order: {order}')
    return {'Received order': order}

Here you see that the logger is initialized by getting it based on the file name (__file__). Note that this is a different logger than the one in main.py (the file paths differ), but loggers obtained with getLogger() propagate to the root logger by default, so it inherits the configuration we set with basicConfig() in main.py.

Here we changed the print statement to an info-log.

In this project we choose to name the logger after the file (__file__) (notice: there are 2 underscores before and after file, not one long one) – this is easy to start with and pinpoints exactly the file the log comes from.

It is common to use __name__ when building modules, as it keeps a qualified name of the module (e.g. app.routers.order). It has the advantage over __file__ that the output is shorter but just as descriptive, and the dotted names let loggers inherit configuration defined at a higher level in the logger hierarchy.
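The hierarchy point can be demonstrated with a small sketch – the names mirror this project's layout but are purely illustrative:

```python
import logging

# With dotted __name__-style names, child loggers inherit from parents.
parent = logging.getLogger('app')
parent.setLevel(logging.WARNING)

child = logging.getLogger('app.routers.order')
# The child has no level of its own, so it falls back to 'app'.
print(child.getEffectiveLevel() == logging.WARNING)  # True

# A __file__-based name such as 'app/routers/order.py' does not form a
# meaningful parent chain, so settings cannot be shared this way.
```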

Step 5: Run it and recap

To run the server, simply run the server.py.

Then to test it, try to run the make_order.py.

This should create logs in the output of the server.py terminal.

To summarize it all.

  • What are the use cases of logging.
    • Debugging. Finding that nasty bug that is bugging you.
    • Issue diagnosis. When you get an issue and need to replicate it – then logs can help you figure out what happened.
    • Analytics. You want to know how much the modules are used and by whom – logs can be a great way to find out.
  • As a beginner it can be tempting to use print statements – do not fall for that urge.
  • What are the different log-levels and how to use them.
    • Debug, info, warning, error, critical.
  • How to make simple configuration of the logger and set log-levels.
    • A simple log line will have time, file name, level, and message.
  • Then we added some info logs to a REST API.
  • The official Python logging guide is quite good (docs).
  • Here we have covered the basics you need to understand. Specific setups vary from project to project and from organization to organization.

Want to learn more?

Get my book that will teach you everything a modern Python cloud developer needs to master.

Learn how to create REST API microservices that generate metrics that allow you to monitor the health of the service.

What does all that mean? 

Don’t wait, get my book and master it and become one of the sought after Python developers and get your dream job.
