Best practices for deploying Elixir apps
By DevOps on Tue 31 December 2019
Figuring out how to deploy your Elixir app can be confusing, as the process is a bit different from other languages. It's mature and well thought out, though, and once you get the hang of it, it's quite nice.
Our mix_deploy and mix_systemd libraries help automate the process. This working example puts the pieces together to get you started quickly. This post gives more background and links to advanced topics.
Summary
Big picture, we deploy Erlang "releases" using systemd for process supervision. We run on cloud instances or dedicated servers. We either build and deploy on the same server, or build on a continuous integration server and deploy using Ansible or AWS CodeDeploy.
We make healthcare and financial apps, so we are paranoid about security. We run apps that get large amounts of traffic, so we are careful about performance. And we deploy to the cloud, so the apps need to be stateless, dynamically scaled under the control of a system like AWS CodeDeploy.
Locking dependency versions
The process starts in your dev environment. When you run `mix deps.get`, mix fetches the dependencies listed in `mix.exs`, but they are normally only loosely specified, e.g. `{:plug_cowboy, "~> 2.0"}` will actually install the latest compatible version, 2.6.3. Mix records the specific versions that it fetched in the `mix.lock` file.
Later, on the build machine, mix uses the specific package version or git reference in the lock file to build the release.
This makes a release completely predictable and reproducible. It does not depend on the versions of libraries installed on the server, and one app doesn't affect another. It's like Ruby's `Gemfile.lock` or Node's `package-lock.json` files. This locking happens automatically as part of the standard mix process; just make sure you check the `mix.lock` file into source control.
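For illustration (the package, versions, and checksum placeholder are examples, not from this project), the loose requirement in `mix.exs` and the pinned entry mix writes to `mix.lock` look roughly like:

```elixir
# mix.exs — a loose requirement: any 2.x release at or above 2.0
defp deps do
  [
    {:plug_cowboy, "~> 2.0"}
  ]
end

# mix.lock — the exact version mix fetched, reused on the build machine
# (checksums elided for readability):
#   "plug_cowboy": {:hex, :plug_cowboy, "2.6.3", ...},
```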
Managing Erlang and Elixir versions
For simple deployments, we can install Erlang and Elixir from binary packages. Instead of using the packages that come with the OS, which are generally out of date, use the packages from Erlang Solutions.
One disadvantage of OS packages is that only one version can be installed at a time. If different projects need different versions, then we have a conflict. Similarly, when we upgrade Erlang or Elixir, we need to first test the code with the new version, moving it through dev and test environments, then putting it into production. If anything goes wrong, we need to be able to roll back quickly. To support this, we need to precisely specify runtime versions and keep multiple versions installed so we can switch between them.
When building a release for production, Elixir is just another library dependency as far as Erlang is concerned. We can also package the Erlang virtual machine inside the release, so it's not necessary to install Erlang on the prod machine globally at all. Just install the release and it includes the matching VM.
That lets us upgrade production systems with no drama. We have apps which have been running continuously for years on clusters of servers, upgrading through multiple Elixir and Erlang versions with no downtime.
ASDF manages multiple versions of Erlang, Elixir and Node.js. It is a language-independent equivalent to tools like Ruby's RVM or rbenv.
The `.tool-versions` file in the project root specifies the versions to use:
erlang 22.2
elixir 1.9.4
nodejs 10.15.3
ASDF looks at the `.tool-versions` file and automatically sets the path to point to the correct version. The build script for the project runs `asdf install` to install the matching Erlang, Elixir and Node.js versions.
See Using ASDF with Elixir and Phoenix for details.
Building and testing
We normally develop on macOS and deploy to Linux. The Erlang VM mostly isolates us from the operating system, and mix manages library dependencies tightly, so we don't find it necessary to use Docker or Vagrant. It is necessary, however, to build the release with an Erlang VM executable that matches your target system. You can't just build the release on macOS and use it on a Linux server.
For simple projects, we build on the same server that runs the app: check out the code from git, build a release, then deploy it locally running under systemd.
In larger projects, a CI/CD server checks out the code, runs tests, then builds a release. We then deploy to the cloud using AWS CodeDeploy or deploy the release using Ansible.
Like your dev machine, the build server runs ASDF. When it makes a build, it automatically uses the versions of Erlang and Elixir specified in the `.tool-versions` file, which is in sync with the code. These build scripts handle the setup and build process.
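A minimal build script along these lines might look like the following sketch; the `MIX_ENV`, asset steps and comments are assumptions, not the actual scripts linked above:

```shell
#!/bin/sh
set -e

# Install the Erlang/Elixir/Node.js versions pinned in .tool-versions
asdf install

export MIX_ENV=prod

# Fetch the locked dependency versions and compile
mix deps.get --only prod
mix compile

# For a Phoenix app, build and digest front-end assets here, e.g.:
#   (cd assets && npm install && npm run deploy)
#   mix phx.digest

# Assemble the release tarball
mix release
```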
Erlang releases
The most important part of the deployment process is using Erlang "releases". A release combines the Erlang VM, your application, and the libraries it depends on into a tarball, which you deploy as a unit. The release has a script to start the app, launched and supervised by the OS init system (e.g. systemd). If it dies, the system restarts it.
Releases handle a lot of the details you need to run things reliably in production, e.g.:
- Packaging
- Configuration
- Running migrations
- Getting a console on a running app
- Upgrades
Building releases
Since Elixir 1.9, mix has built-in support for creating releases. For earlier versions, use the Distillery library.
Configure your release in `mix.exs`:
def project do
[
app: :foo,
releases: [
prod: [
include_executables_for: [:unix],
steps: [:assemble, :tar]
]
]
]
end
`rel/vm.args.eex` sets Erlang VM startup arguments. We normally tune it to increase the number of TCP ports for high volume apps.
Generate a template in your project under `rel`:
mix release.init
Edit it as needed, then build the release:
MIX_ENV=prod mix release
This creates a tarball with everything you need to deploy:
_build/prod/foo-0.1.0.tar.gz
Running database migrations
In the deployed system, we don't have mix. The release command script allows us to call an Elixir function to run migrations from a release.
/srv/foo/current/bin/foo eval "Foo.Release.migrate"
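The post calls `Foo.Release.migrate` but doesn't show it; a minimal module following the common Ecto release pattern (assuming the app is named `:foo` and configures its repos under `:ecto_repos`) looks like:

```elixir
defmodule Foo.Release do
  # Tasks that run from a release, where mix is not available.
  # A sketch of the standard Ecto pattern; @app must match your app name.
  @app :foo

  def migrate do
    load_app()

    for repo <- repos() do
      {:ok, _, _} =
        Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end

  defp repos do
    Application.fetch_env!(@app, :ecto_repos)
  end

  defp load_app do
    Application.load(@app)
  end
end
```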
Configuration
There are four different kinds of things that we may want to configure:
- Static information about application layout, e.g. file paths. This is the same for all machines in an environment, e.g. staging or prod.
- Information specific to the environment, e.g. the hostname of the db server.
- Secrets such as db passwords, API keys or TLS keys.
- Dynamic information such as the IP address of the server or of other machines in the cluster.
Elixir has a couple of mechanisms for storing configuration. When you compile the release, it converts Elixir-format config files like `config/prod.exs` into an initial application environment (`sys.config`) that is read by `Application.get_env/3`.
That's fine for simple, relatively static apps. It's better to keep secrets separate from the release, though.
Elixir 1.9 releases support dynamic configuration at runtime. You can run the Elixir file `config/releases.exs` when the release boots, or use the shell script `rel/env.sh.eex` to set environment vars. With these you can theoretically do anything. In practice, however, it can be more convenient and secure to process the config outside of the app. That's where mix_systemd and mix_deploy come in.
Environment vars
The simplest way to configure your app is via OS environment variables.
You can set them via the systemd supervisor or container runtime.
Your application then calls `System.get_env/1` in `config/releases.exs` or at application startup. Note that these environment vars are read at runtime, not when building your app.
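As a sketch (env var names and defaults are assumptions, not from this post), a `config/releases.exs` that reads environment vars at boot might look like:

```elixir
# config/releases.exs — evaluated at runtime, when the release boots.
import Config

config :foo, Foo.Repo,
  # Fail fast at startup if a required secret is missing
  url: System.get_env("DATABASE_URL") || raise("DATABASE_URL not set"),
  # Env vars are strings, so convert explicitly
  pool_size: String.to_integer(System.get_env("POOL_SIZE") || "10")

config :foo, FooWeb.Endpoint,
  secret_key_base: System.fetch_env!("SECRET_KEY_BASE")
```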
mix_systemd supports reading environment vars from files, e.g.:
/srv/foo/etc/environment
/etc/foo/environment
/run/foo/environment
This lets you set config defaults in the release, then override them in the environment or at runtime.
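Each environment file is plain `NAME=value` lines in systemd `EnvironmentFile` format; the values here are placeholders:

```shell
# /etc/foo/environment — overrides defaults shipped in the release
DATABASE_URL=ecto://foo_prod:Sekrit!@db.foo.local/foo_prod
POOL_SIZE=15
RELEASE_TMP=/run/foo
```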
Config providers
At a certain point, making everything into an environment var becomes annoying. It's verbose and vars are simple strings, so you have to encode values safely and convert them back to lists, integers or atoms.
Config providers load data on startup, merging it with the default application environment before starting the VM. This lets us keep secrets outside of the release file and change settings depending on where the app is running. We also keep secrets out of the build environment, e.g. a shared CI system.
They support standard formats like TOML:
[foo."Foo.Repo"]
url = "ecto://foo_prod:Sekrit!@db.foo.local/foo_prod"
pool_size = 15
[foo."FooWeb.Endpoint"]
secret_key_base = "EOdJB1T39E5Cdeebyc8naNrOO4HBoyfdzkDy2I8Cxiq4mLvIQ/0tK12AK1ahrV4y"
Add the TOML config provider to `mix.exs`:
defp releases do
[
foo: [
include_executables_for: [:unix],
config_providers: [
{TomlConfigProvider, path: "/etc/foo/config.toml"}
],
steps: [:assemble, :tar]
]
]
end
The startup scripts read the initial application environment compiled into the release, parse the config file, merge the values, write the result to a temp file, then start the VM. Because of that, they need a writable directory. That is configured using the `RELEASE_TMP` environment var, which you can set to the app's `runtime_dir`, e.g. `/run/foo`.
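For example, `rel/env.sh.eex` can export it; a minimal sketch:

```shell
# rel/env.sh.eex — sourced by the release start script before boot
export RELEASE_TMP=/run/foo
```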
Copying files
This config file approach is simple but effective. The question is how to get the environment files onto the server. When deploying a simple app on the same server, we can just copy the `prod.secret.exs` or environment file to `/etc/foo`.
When deploying to dedicated servers, we can generate the config file using Ansible and push it to the server.
In cloud environments, we may run from a read-only image, e.g. an Amazon AMI, which gets configured at startup based on the environment by copying the config from an S3 bucket. See `deploy-sync-config-s3` in mix_deploy.
Config servers and vaults
You can also store config params in an external configuration system and read them at runtime. An example is AWS Systems Manager Parameter Store.
Set a parameter using the AWS CLI:
aws ssm put-parameter --name '/foo/prod/db/password' --type SecureString --value 'Sekrit!'
While it's possible to read params in `config/releases.exs`, it's tedious. Better is to grab all of them at once and write them to a file, then read it in with a config provider like aws_ssm_provider.
Application initialization
Instead of doing a lot of work in your `config/releases.exs` file, keep it focused on getting the data. Handle application config in your `Application.start/2` or `Supervisor.init/1`. This leverages the supervision structure of OTP, allowing components to fail and be restarted with the right configuration.
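A sketch of that pattern, with hypothetical module and child names (not from the post): config is resolved when the supervision tree starts, so a restarted child picks up fresh configuration.

```elixir
defmodule Foo.Application do
  # Sketch: resolve runtime config at supervision-tree startup,
  # rather than doing all the work in config/releases.exs.
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      Foo.Repo,
      # Pass config read from the application environment; if this child
      # crashes, the supervisor restarts it with freshly resolved config.
      {Foo.Cache, Application.get_env(:foo, Foo.Cache, [])}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: Foo.Supervisor)
  end
end
```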
Supervising your app
In the Erlang OTP framework, we use supervisors to start and stop processes, restarting them in case of problems. It's turtles all the way down: you need a supervisor to make sure your Erlang VM is running, restarting it if there is a problem.
Ignore the haters, systemd is the best supervisor we have right now, and all the Linux distros are standardizing on it. We might as well take advantage of it. Systemd handles all the things that "well behaved" daemons need to do. Instead of scripts, it has declarative config that handles standard situations. It sets up the environment, handles logging and controls permissions.
mix_systemd generates a systemd unit file for your app and mix_deploy generates the scripts it needs to start and configure it.
Permissions and directories
For security, following the principle of least privilege, we limit the app to only what it really needs to do its job. If the app is compromised, the attacker can only do what the app can do.
We use one OS user (`deploy`) to upload the release files, and another (e.g. `foo`) to run the app. This means that the app only needs read-only access to its own source code and config. The app user account does not need permission to restart the app; that's handled by the deploy user or systemd.
We make use of systemd features and cloud services. Instead of writing our own log files, we send them to journald, which sends them to CloudWatch Logs or ELK. When running in the cloud, the app should be stateless. Instead of putting files on the disk, it keeps state in an RDS database and uses S3 for file storage.
The result is that many apps can run without needing write access to anything on the disk, improving security.
Deploying the app
So now we have a release tarball and some config files; time to put them on a server. There are a few options:
- Build and deploy to the same server
- Build with CodeBuild and deploy with CodeDeploy doc and example
- Build on a build server and deploy using Ansible
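For the first, simplest option, installing the tarball can be as little as unpacking it into a versioned directory and switching a `current` symlink; the paths and service name here are illustrative:

```shell
# Sketch: install release 0.1.0 and switch the app over to it
sudo mkdir -p /srv/foo/releases/0.1.0
sudo tar -xzf _build/prod/foo-0.1.0.tar.gz -C /srv/foo/releases/0.1.0
sudo ln -sfn /srv/foo/releases/0.1.0 /srv/foo/current

# Restart the app under systemd supervision
sudo systemctl restart foo
```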
Connecting to the outside world
The app isn't much use if we can't talk to it. There are two options for how to receive traffic, direct or via a proxy.
You can serve your Phoenix app behind Nginx, but listening directly gives you lower latency and lower overall complexity. Erlang can handle lots of load with no problems. For example, Heroku's routing layer is based on Erlang. We have apps that handle a billion requests a day, including DDoS attacks. You can handle 3000 requests per second on a simple $5/month Digital Ocean droplet.
In a modern cloud app running behind a load balancer, listening on port 4000 is fine; just tell the load balancer to use that port. For a freestanding app, we need to listen on port 80 and/or port 443 for SSL. We normally redirect traffic from port 80 to 4000 in the firewall using iptables.
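That redirect can be done with a NAT rule along these lines (ports as described above, so the app need not run as root to bind port 80):

```shell
# Redirect incoming TCP traffic on port 80 to port 4000
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 4000
```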
You may need to set some HTTP options that Nginx was dealing with, e.g.:
config :foo, FooWeb.Endpoint,
http: [
compress: true,
protocol_options: [max_keepalive: 5_000_000]
],
Things you probably don't need right now
While they are cool, you don't initially need to worry about:
- Hot code updates
- Distributed Erlang
Additional topics
- Deploying an Elixir app to Digital Ocean with mix_deploy
- Serving Phoenix static assets from a CDN
- Deploying Elixir apps without sudo
- Benchmarking Phoenix on Digital Ocean
- Improving app security with the principle of least privilege
- Presentation on Elixir performance
- Incrementally migrating a legacy app to Phoenix
- Secure web applications with GraphQL and Elixir