UWSGI emperor mode & nginx
In this guide, I'll explain how to use UWSGI emperor mode for better handling and performance with multiple apps, along with a configuration system that makes spontaneous deployment quick and simple.
Rationale
Consider a standard, out-of-the-box UWSGI install with some applications defined like so:
- /etc/uwsgi/apps-enabled
  - Service1.ini [ /sites/service1/ ]
  - Service2.ini [ /sites/service2/ ]
  - Service3.ini [ /sites/service3/ ]
We have the apps-enabled folder containing our .ini files, which give UWSGI explicit paths telling it where each plugin/service is located. If you want to add a new service, you need to reload/restart UWSGI, and an error in any of these files can cause the whole process to bail and become unavailable.
UWSGI emperor mode allows services/applications to run as 'vassals': the emperor dynamically monitors a configuration folder [e.g. /etc/uwsgi/vassals ] and reloads vassals as they change or are added. An error in one particular vassal leaves that vassal cursed, but it won't stop well-behaved vassals from operating normally, which is exactly what we want.
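If you'd rather configure the emperor in an ini file than on the command line (we'll do the latter via systemd below), a minimal sketch looks something like this, using the same /etc/uwsgi/vassals path and www-data user as the rest of this guide:
[uwsgi]
; watch this directory and spawn/reload a vassal per config file found in it
emperor = /etc/uwsgi/vassals
; run the emperor and its vassals as an unprivileged user
uid = www-data
gid = www-data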
Let's imagine that we have a lot of API services required for various websites or applications, and that we add and remove them regularly for testing. We could end up with something more like:
- /etc/uwsgi/apps-enabled
  - Service1.ini [ /sites/prod/service1/ ]
  - Service1-uat.ini [ /sites/uat/service1/ ]
  - Service1-dev.ini [ /sites/dev/service1/ ]
  - Service2.ini [ /sites/prod/service2/ ]
  - Service2-uat.ini [ /sites/uat/service2/ ]
  - Service2-dev.ini [ /sites/dev/service2/ ]
  - Service3.ini [ /sites/prod/service3/ ]
  - Service3-uat.ini [ /sites/uat/service3/ ]
  - Service3-dev.ini [ /sites/dev/service3/ ]
That's quite a lot, and that's just a small example. It would be annoying to manually update the paths and configs, and to restart UWSGI every time (potentially causing downtime if a script fails). So let's go with the emperor config, and also devise a way to create arbitrary installations, even of the same app at a different commit/version.
Get into it
Configure the UWSGI daemon
Now, you might have installed UWSGI through apt-get or some other package manager, which probably installed an init script at /etc/init.d/uwsgi. I'm going to suggest we trash this and use a systemd unit instead (e.g. /etc/systemd/system/uwsgi.service), something like or exactly as per the below.
[Unit]
Description=UWSGI application server
After=syslog.target network.target nss-lookup.target

[Service]
Type=simple
ExecStart=/usr/bin/uwsgi --emperor /etc/uwsgi/vassals --uid www-data --gid www-data
# a HUP to the emperor gracefully reloads its vassals
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=5

[Install]
WantedBy=default.target
This is because the default script isn't geared towards using emperor mode with vassals, and quite frankly the systemd unit above is a damn sight simpler to configure or edit. Take it or leave it, but it works better for me. Once that's done, reload systemd and start UWSGI:
systemctl daemon-reload
systemctl start uwsgi
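You'll likely also want the emperor to come back after a reboot, and to be able to tail its logs while testing; assuming the unit file is named uwsgi.service:
systemctl enable uwsgi
journalctl -u uwsgi -f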
Optional NGINX mechanism
For the ultra-lazy (or ultra-smart) developer, here is a way to avoid having to even touch the Nginx config when we add new services. This assumes that whatever we want will live under api.mycompany.com; the hostnames could exist in the nameserver records (e.g. a wildcard for *.api.mycompany.com) or simply be fudged using /etc/hosts.
Here's our config for Nginx:
server {
    listen 80;
    server_name ~^(?<app>[^.]+)\.api\.mycompany\.com$;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/$app.api.sock;
    }
}
Well, isn't that neat? We tell nginx to look for a socket at /tmp/<app>.api.sock for anything that comes in under the domain <app>.api.mycompany.com. The possibilities are limitless.
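As a quick sanity check (service1 here is just a hypothetical vassal, and we fudge DNS locally rather than touching the nameserver):
# point the hostname at this machine for local testing
echo "127.0.0.1 service1.api.mycompany.com" | sudo tee -a /etc/hosts
# nginx captures "service1" from the hostname and proxies to /tmp/service1.api.sock
curl http://service1.api.mycompany.com/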
Configure the UWSGI vassals
We've got Nginx ready to proxy to UWSGI for any service we wish to add arbitrarily, but how do we easily configure UWSGI - or better, have it work things out for itself?
I'm glad you asked, let's consider the following setup:
We store an app.uwsgi.ini file in the root of our application's project/code directory. This tells UWSGI everything it needs to know about how to run the application, whether it requires python3, python2 or anything else. Since the project is designed to run with UWSGI, storing the configuration file there is no big deal (if you used some other service as well, you could store its config there too). This allows different revisions of the project to function as intended; for instance, if version 1.x requires python2 and version 2.0 needs python3, the config file in each revision would call the correct plugin and virtualenv as required.
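To make that concrete, only the interpreter-specific lines of app.uwsgi.ini need to differ between revisions. A hypothetical sketch (the plugin and virtualenv names depend on your uWSGI build and project layout, and %(app_root) is defined in the full vassal config shown below):
; app.uwsgi.ini as committed on the 1.x branch
plugin = python
home = %(app_root)/venv-py2

; app.uwsgi.ini as committed on the 2.0 branch
plugin = python3
home = %(app_root)/venv-py3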
Here is my magic configuration for the UWSGI vassal. It will (even when symlinked) take the real location of the configuration file and use the containing directory as the app root. So even if you've symlinked /sites/service1/app.uwsgi.ini to /etc/uwsgi/vassals/service1.ini, it'll work out that the app lives in /sites/service1 and set the chdir (and venv, where applicable) accordingly, despite the fact that %d (a magic variable in the UWSGI config spec) would resolve to /etc/uwsgi/vassals. This magic is thanks to @(exec://dirname $(realpath %P)).
[uwsgi]
; resolve the real directory of this file, even when it is symlinked
app_root = @(exec://dirname $(realpath %P))
chdir = %(app_root)
plugin = python3
wsgi-file = service.py
; match to the number of cores on the machine
processes = 2
max-requests = 5000
chmod-socket = 666
master = True
vacuum = True
; %n is the config filename without extension, e.g. service1 for service1.ini
socket = /tmp/%n.api.sock
home = %(app_root)/venv-bottle
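For reference, wsgi-file = service.py just needs the module to expose a WSGI callable named application. A minimal Bottle-flavoured sketch (hypothetical; it assumes Bottle is installed in the venv-bottle virtualenv) could be:
# service.py - a minimal, hypothetical WSGI entry point for the vassal above
import bottle
from bottle import route

@route('/ping')
def ping():
    # trivial endpoint for verifying the nginx -> uwsgi -> app chain
    return {'status': 'ok'}

# uWSGI loads this module via wsgi-file and looks for "application"
application = bottle.default_app()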
So what does this vassal config give us? You can place the app anywhere you want, call it service1-temp, service1-beta or whatever, and simply create a symlink in /etc/uwsgi/vassals and it'll launch. Bear in mind that the name of the symlinked configuration under /etc/uwsgi/vassals/ is what names the socket: if you symlink /etc/uwsgi/vassals/service1-beta.ini -> /sites/uat-env/service1/app.uwsgi.ini, then the socket will be created at /tmp/service1-beta.api.sock. (The .api.sock suffix is there to make sure we don't clash with any other .sock files; e.g. if you call your service php5-fpm for some reason and you also have php5-fpm installed and running, that would be disastrous.)
Also recall the NGINX configuration that uses <app_name>.api.mycompany.com: it extracts app_name and looks for the socket /tmp/app_name.api.sock. This system is obviously geared toward running multiple applications under .api.mycompany.com, but it makes it incredibly easy to install a new API (or an environment with a suffixed name) without even touching the NGINX config.
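Putting it all together, deploying a hypothetical UAT build of service1 (the repository URL and paths here are illustrative) boils down to:
# grab the code wherever suits
git clone https://github.com/mycompany/service1.git /sites/uat-env/service1
# build the virtualenv the bundled config expects (venv-bottle in our example)
python3 -m venv /sites/uat-env/service1/venv-bottle
/sites/uat-env/service1/venv-bottle/bin/pip install bottle
# expose it to the emperor; the symlink name decides the socket and the hostname
ln -s /sites/uat-env/service1/app.uwsgi.ini /etc/uwsgi/vassals/service1-beta.ini
# the emperor spawns the vassal, and nginx now serves
# service1-beta.api.mycompany.com via /tmp/service1-beta.api.sock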
Conclusion
Well, that really wasn't too hard. Now that we've got an easy way to deploy applications from anywhere simply by symlinking the bundled .ini for UWSGI, we can easily spin up new endpoints without risking UWSGI grinding to a halt because of one bad application config.