Software development and beyond

Running FastAPI applications in production

There are different ways to run FastAPI applications on production servers. Since I have used the Gunicorn HTTP server for other Python-based applications before, I keep using it with FastAPI too. This is possible thanks to the Uvicorn package, which includes a Gunicorn worker class for running ASGI applications. That means that Gunicorn manages the workers and Uvicorn processes the requests. We can then package everything up as a standard systemd service and serve it behind a reverse proxy like nginx.

If we use Poetry to manage dependencies, we can simply install the gunicorn and uvicorn packages as application dependencies and then run the application in production with poetry run. This way the application starts in its own virtual environment and has all the necessary dependencies available, including the web server.
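With Poetry, adding the two packages and trying out the production command locally might look like this (the worker count and worker class match the systemd configuration below):

```shell
# Add Gunicorn and Uvicorn as application dependencies
poetry add gunicorn uvicorn

# Run the application the same way it will run in production
poetry run gunicorn main:app --workers 2 -k uvicorn.workers.UvicornWorker
```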

The heart of the solution is the systemd service configuration. To configure our FastAPI app as a systemd service, we create an appname.service file at /etc/systemd/system/appname.service:


ExecStart=/usr/local/bin/poetry run gunicorn main:app --workers 2 -k uvicorn.workers.UvicornWorker --bind unix:appname.sock --error-logfile /root/appname/error_log.txt


This configuration runs Gunicorn via poetry run from the WorkingDirectory (the folder with our application files). Poetry has to be installed on the system; /usr/local/bin/poetry is its default path on Fedora. main:app is the application entry point. Let's have a look at the parameters:

Restart=on-failure and RestartSec=5s restart the application service automatically if the application crashes.

It might be a good idea to run the service under a user other than root by specifying a different User=.
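Putting the ExecStart line and the directives just mentioned together, a complete unit file could look like the sketch below. The Description, After= and [Install] sections are my additions, and the WorkingDirectory is inferred from the socket path used later in the article:

```ini
[Unit]
Description=Gunicorn instance serving appname
After=network.target

[Service]
User=root
WorkingDirectory=/root/appname/app
ExecStart=/usr/local/bin/poetry run gunicorn main:app --workers 2 -k uvicorn.workers.UvicornWorker --bind unix:appname.sock --error-logfile /root/appname/error_log.txt
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
```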

When the systemd configuration is saved, it needs to be reloaded with systemctl daemon-reload. After that we can set the service to start automatically after each server restart using enable and start it right away using start, the same way as we would for any other systemd service:

sudo systemctl daemon-reload
sudo systemctl enable appname
sudo systemctl start appname
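Once started, we can check that the service is actually running and inspect its output, for example:

```shell
# Show the current state of the service
sudo systemctl status appname

# Follow the service logs captured by journald
sudo journalctl -u appname -f
```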

If everything went well, we now have a running application with two workers listening on the Unix socket /root/appname/app/appname.sock. Before the socket can be consumed by nginx, it is important to set the correct permissions on it using chmod: we need to make sure that nginx is able to access the socket.
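For example, assuming the nginx workers run as the nginx user (the default on Fedora), the permissions could be set like this:

```shell
# Give the nginx group read/write access to the socket
# (the "nginx" group name is an assumption; check your nginx.conf)
sudo chgrp nginx /root/appname/app/appname.sock
sudo chmod 660 /root/appname/app/appname.sock
```

Note that nginx also needs execute (traverse) permission on every parent directory of the socket, which is worth keeping in mind when the application lives under a restricted directory such as /root.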

Let's finish with a sample nginx configuration to serve the application.

server {
    listen [::]:80;
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;
    access_log /var/log/nginx/appname.access.log;
    error_log /var/log/nginx/appname.error.log;

    ssl_certificate /etc/letsencrypt/live/;
    ssl_certificate_key /etc/letsencrypt/live/;

    include /etc/nginx/conf.d/ssl.conf;

    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;

    location / {
        proxy_pass http://unix:/root/appname/app/appname.sock;
        proxy_connect_timeout       75s;
        proxy_send_timeout          75s;
        proxy_read_timeout          75s;
        send_timeout                75s;
    }
}

In the file we can see:

The first server block redirects all plain HTTP traffic to HTTPS with a permanent (301) redirect.

The second server block terminates TLS with the Let's Encrypt certificate and writes per-application access and error logs.

The add_header directives attach basic security headers to every response.

Finally, the location block proxies all requests to the Unix socket exposed by Gunicorn, with the proxy timeouts raised to 75 seconds.

Once we know how we want to run our FastAPI app in production, we can automate the deployment with Ansible, Fabric or a CI/CD pipeline.

Last updated on 23.10.2020.

devops fastapi python