Best practices for deploying Elixir apps

By Jake Morrison in DevOps on Mon 17 June 2019

Figuring out how to deploy your Elixir app can be confusing, as it's a bit different from other languages. The deployment tooling is mature and well thought out, though. Once you get the hang of it, it's quite nice.

Our mix_deploy and mix_systemd libraries help automate the process. This working example puts the pieces together to get you started quickly. This post gives more background and links to advanced topics.

Summary

Big picture, we deploy Erlang "releases" using systemd for process supervision. We run in cloud or dedicated server instances. We build and deploy on the same server or build on a continuous integration server and deploy using Ansible or AWS CodeDeploy.

We make healthcare and financial apps, so we are paranoid about security. We run apps that get large amounts of traffic, so we are careful about performance. And we deploy to the cloud, so the apps need to be stateless, dynamically scaled under the control of a system like AWS CodeDeploy.

Locking dependency versions

The process starts in your dev environment. When you run mix deps.get, mix fetches the dependencies listed in mix.exs. These are normally only loosely specified, e.g. {:plug_cowboy, "~> 2.0"} will actually install the latest compatible version, e.g. 2.6.3.

Mix records the specific versions that it fetched in the mix.lock file. Later, on the build machine, mix uses the specific package version or git reference in the lock file to build the release.

This makes a release completely predictable and reproducible. It does not depend on the version of libraries installed on the server, and one app doesn't affect another. It's like Ruby's Gemfile.lock or Node's package-lock.json files. This locking happens automatically as part of the standard mix process, just make sure you check the mix.lock file into source control.
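For illustration, a mix.lock entry records the exact version, checksum, and transitive dependencies of each package (the checksum and dependency constraints shown here are truncated and illustrative):

```elixir
%{
  "plug_cowboy": {:hex, :plug_cowboy, "2.6.3", "d95c30fc...", [:mix],
    [{:cowboy, "~> 2.5", [hex: :cowboy, repo: "hexpm", optional: false]},
     {:plug, "~> 1.7", [hex: :plug, repo: "hexpm", optional: false]}], "hexpm"}
}
```

On the build machine, mix installs exactly these versions rather than resolving the loose constraints in mix.exs again.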

Managing Erlang and Elixir versions

For simple deployments, we can install Erlang and Elixir from binary packages. Instead of using the packages that come with the OS, which are generally out of date, use the packages from Erlang Solutions.

One disadvantage of OS packages is that only one version can be installed at a time. If different projects need different versions, then we have a conflict. Similarly, when we upgrade Erlang or Elixir, we need to first test the code with the new version, moving it through dev and test environments, then putting it into production. If anything goes wrong, we need to be able to roll back quickly. To support this, we need to precisely specify runtime versions and keep multiple versions installed so we can switch between them.

When building a release for production, Elixir is just another library dependency. We can also package the Erlang virtual machine inside the release, so it's not necessary to install Erlang on the prod machine globally at all. Just install the release and it includes the matching VM.

That lets us upgrade production systems with no drama. We have apps which have been running continuously for years on clusters of servers, upgrading through multiple Elixir and Erlang versions with no downtime.

ASDF manages multiple versions of Erlang, Elixir and Node.js. It is a language-independent equivalent to tools like Ruby's RVM or rbenv.

The .tool-versions file in the project root specifies the versions to use:

erlang 21.3
elixir 1.8.1
nodejs 10.15.3

ASDF looks at the .tool-versions file and automatically sets the path to point to the correct version. The build script for the project runs asdf install to install the matching Erlang, Elixir and Node.js versions.

See Using ASDF with Elixir and Phoenix for details.

Building and testing

We normally develop on macOS and deploy to Linux. The Erlang VM mostly isolates us from the operating system, and mix manages library dependencies tightly, so we don't find it necessary to use Docker or Vagrant. It is necessary, however, to build the release with an Erlang VM binary that matches your target system. You can't just build the release on macOS and use it on a Linux server.

For simple projects, we build on the same server that runs the app: check out the code from git, build a release, then deploy it locally running under systemd.

In larger projects, a CI/CD server checks out the code, runs tests, then builds a release. We then deploy to the cloud using AWS CodeDeploy or deploy the release using Ansible.

Like your dev machine, the build server runs ASDF. When it makes a build, it automatically uses the versions of Erlang and Elixir specified in the .tool-versions file. These build scripts handle the setup and build process.
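A minimal build script might look like the following (a sketch only; the app name and flags are illustrative, and the mix_deploy scripts handle more, e.g. compiling assets):

```shell
#!/bin/sh
set -e

# Install the Erlang, Elixir, and Node.js versions pinned in .tool-versions
asdf install

# Fetch locked dependencies and build the release
mix deps.get --only prod
MIX_ENV=prod mix compile
MIX_ENV=prod mix release
```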

Erlang releases

The most important part of the deployment process is using Erlang "releases". A release combines the Erlang VM, your application, and the libraries it depends on into a tarball, which you deploy as a unit. The release has a script to start the app, launched and supervised by the OS init system (e.g. systemd). If it dies, the system restarts it.

Releases handle a lot of the details you need to run things reliably in production, e.g.:

  • Packaging
  • Configuration
  • Running migrations
  • Getting a console on a running app
  • Upgrades
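For example, with a mix release named prod installed under /srv/foo/current (paths assumed here; Distillery uses slightly different command names, e.g. remote_console):

```shell
# Attach an IEx console to the running node
/srv/foo/current/bin/prod remote

# Evaluate a one-off expression in a clean VM (useful for migrations)
/srv/foo/current/bin/prod eval "IO.puts(:ok)"
```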

Building releases

Since Elixir 1.9, mix has built-in support for creating releases. For earlier versions, use the Distillery library.

Configure your release in mix.exs.

def project do
  [
    app: :mix_deploy_example,
    releases: [
      prod: [
        include_executables_for: [:unix],
        steps: [:assemble, :tar]
      ]
    ]
  ]
end

rel/vm.args.eex sets Erlang VM startup arguments. We normally tune it to increase the maximum number of ports and processes for high-volume apps. Generate templates in your project under rel by running mix release.init, then edit them as needed.

Next, build the release:

MIX_ENV=prod mix release

This creates a tarball with everything you need to deploy. With mix releases, the :tar step writes it under _build/prod/; Distillery creates its tarball at:

_build/prod/rel/foo/releases/0.1.0/foo.tar.gz

Running database migrations

You can configure the release to run ecto migrations.
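Since mix is not available in a release, the common pattern is a small module you invoke with bin/prod eval. This sketch assumes the app from the mix.exs example above and ecto_sql 3.1+ (which provides Ecto.Migrator.with_repo):

```elixir
defmodule MixDeployExample.Release do
  @moduledoc "Production tasks, run via `bin/prod eval` where mix is unavailable."

  @app :mix_deploy_example

  def migrate do
    # Load the app so we can read its config without starting it
    Application.load(@app)

    for repo <- repos() do
      {:ok, _, _} =
        Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end

  defp repos do
    Application.fetch_env!(@app, :ecto_repos)
  end
end
```

Then run migrations on the server with:

```shell
bin/prod eval "MixDeployExample.Release.migrate()"
```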

Configuration

Configuration for an Elixir application can be thought of in three parts:

Build

When mix builds a release, it converts the Elixir config file config/prod.exs into an Erlang term format file sys.config and packages it with the release. Applications can then read parameters using Application.get_env/3.
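For example (the app name and key are assumptions), a value set at build time in config/prod.exs is later read from the application environment:

```elixir
# config/prod.exs -- compiled into sys.config at build time
config :foo, :upload_dir, "/srv/foo/uploads"

# anywhere in the app, with "/tmp" as the fallback default:
upload_dir = Application.get_env(:foo, :upload_dir, "/tmp")
```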

Build settings should be used for things that don't change, e.g. file paths. We should ideally be able to run the same release build in our staging and prod environments, testing what we will run in production.

Secrets, per-environment and per-server

These settings depend on the environment the application is running in, e.g. the hostname of the db server and secrets like the db password, API keys.

"Config providers" load data on startup, merging it with the default application environment before starting the VM. This lets us keep secrets outside of the release file and change settings depending on where the app is running. We also keep secrets out of the build environment, e.g. a shared CI system.

Runtime

These settings are dynamic and may change every time the application starts. For example, if we are running in an AWS auto scaling group, the IP address of the server may change every time it starts.

For deployment we are mainly concerned with runtime settings that need to be set before the VM starts up. The rest we can read later from Elixir code.

Distillery configuration providers

The Distillery Mix.Config provider reads files in Elixir config format. Instead of including your prod.secret.exs file in prod.exs, you can copy it to the server separately, and it will be read at startup. In rel/config.exs set:

set config_providers: [
  {Mix.Releases.Config.Providers.Elixir, ["/etc/foo/config.exs"]}
]

Since config.exs is an Elixir script, you can also run arbitrary Elixir code, loading config from an external source like AWS Parameter Store or etcd. The code runs in a relatively limited environment, however, and you can run into bootstrapping problems where you need config to be able to find your config server.

A better approach is to do the minimum config in startup scripts, then handle application config in your Application.start/2 or Supervisor.init/1. This leverages the supervision structure of OTP, allowing components to fail and be restarted with the right configuration.
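A sketch of this approach (Foo.Settings is a hypothetical worker; the supervision structure is the point, not the names):

```elixir
defmodule Foo.Application do
  use Application

  def start(_type, _args) do
    children = [
      # Fetches its own settings from e.g. AWS Parameter Store on init.
      # If the config source is down, only this subtree crashes and is
      # restarted by the supervisor, with backoff, until it succeeds.
      {Foo.Settings, []},
      FooWeb.Endpoint
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: Foo.Supervisor)
  end
end
```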

If, like me, the idea of having config files running arbitrary code triggers your paranoia, then you can store your secrets in TOML format and read them with the TOML configuration provider.

Add the TOML library to your deps in mix.exs:

{:toml, "~> 0.5.2"}

In rel/config.exs set:

set config_providers: [
  {Toml.Provider, [path: "/etc/foo/config.toml"]},
]

Then the config file looks like:

[foo."Foo.Repo"]
username = "app_prod"
password = "CHANGEME"
database = "app_prod"
ssl = true
pool_size = 15

[foo."FooWeb.Endpoint"]
secret_key_base = "CHANGEME2"

This config file approach is simple but effective. When deploying a simple app on the same server, we can just copy prod.secret.exs to /etc. When deploying to dedicated servers, we generate the config file using Ansible and push it to the server. When deploying cloud apps to an autoscaling group, we put the config file in S3 and copy it to the instance on startup. See deploy-sync-config-s3 in mix_deploy.

Environment vars

If you are a fan of The 12-Factor App or Heroku, you can put your config in environment vars. In addition to the app config file, the mix_systemd unit attempts to load environment vars from the following files:

/srv/foo/current/etc/environment
/srv/foo/etc/environment
/etc/foo/environment
/run/foo/runtime-environment

This lets you set config defaults in the release, then override them in the environment or at runtime. Environment vars have their limitations, though. They are best used for short simple strings.
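For example, an environment file might look like this (keys and values are illustrative), with the app reading them via System.get_env/1:

```shell
# /etc/foo/environment
DATABASE_URL="ecto://app_prod:CHANGEME@db.internal/app_prod"
SECRET_KEY_BASE="CHANGEME2"
PORT="4000"
```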

Supervising your app

In the Erlang OTP framework, we use supervisors to start and stop processes, restarting them in case of problems. It's turtles all the way down: you need a supervisor to make sure your Erlang VM is running, restarting it if there is a problem.

Ignore the haters, systemd is the best supervisor we have right now, and all the Linux distros are standardizing on it. We might as well take advantage of it. Systemd handles all the things that "well behaved" daemons need to do. Instead of scripts, it has declarative config that handles standard situations. It sets up the environment, handles logging and controls permissions.

mix_systemd and mix_deploy generate a systemd unit file for your app and the scripts it needs to start and configure the app.
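As a rough sketch of what such a unit looks like (the generated unit has more settings; names and paths here match the examples above and are assumptions):

```ini
# /etc/systemd/system/foo.service
[Unit]
Description=foo
After=network.target

[Service]
Type=simple
User=foo
Group=foo
WorkingDirectory=/srv/foo/current
ExecStart=/srv/foo/current/bin/prod start
Restart=always
RestartSec=5
# "-" means it's OK if the file doesn't exist
EnvironmentFile=-/etc/foo/environment

[Install]
WantedBy=multi-user.target
```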

Permissions and directories

For security, following the principle of least privilege, we limit the app to only what it really needs to do its job. If the app is compromised, the attacker can only do what the app can do.

We use one OS user (deploy) to upload the release files, and another (e.g. foo) to run the app. This means that the app only needs read-only access to its own source code and config. The app user account does not need permissions to restart the app, that's handled by the deploy user or systemd.
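A sketch of that setup, assuming the paths used elsewhere in this post:

```shell
# deploy owns the release files; foo only runs the app
sudo useradd deploy
sudo useradd foo
sudo mkdir -p /srv/foo
sudo chown -R deploy:foo /srv/foo
# group (the app) gets read-only access, others get none
sudo chmod -R g+rX,g-w,o-rwx /srv/foo
```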

We make use of systemd features and cloud services. Instead of writing our own log files, we send them to journald, which sends them to CloudWatch Logs or ELK. When running in the cloud, the app should be stateless. Instead of putting files on the disk, it keeps state in an RDS database and uses S3 for file storage.

The result is that apps may be able to run without needing write access to anything, which improves security.

Deploying the app

So now we have a release tarball and some config files, time to put them on a server.

Connecting to the outside world

The app isn't much use if we can't talk to it. There are two options for how to receive traffic, direct or via a proxy.

You can serve your Phoenix app behind Nginx, but listening directly gives you lower latency and lower overall complexity. Erlang can handle heavy load with no problems. For example, Heroku's routing layer is based on Erlang. We have apps that handle a billion requests a day, including DDoS attacks. You can handle 3000 requests per second on a simple $5/month Digital Ocean droplet.

For a modern cloud app running behind a load balancer, listening on port 4000 is fine. Just tell the load balancer to use that port. For a freestanding app, we need to listen on port 80 and/or port 443 for SSL. We normally redirect traffic from port 80 to 4000 in the firewall using iptables.
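The redirect is a single nat-table rule (IPv4 shown; a matching ip6tables rule is needed for IPv6):

```shell
# Redirect incoming traffic on port 80 to the app listening on 4000
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 4000
```

This lets the app run as an unprivileged user, since binding ports below 1024 normally requires root.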

You may need to set some HTTP options that Nginx was previously handling, e.g.:

config :foo, FooWeb.Endpoint,
  http: [
    compress: true,
    protocol_options: [max_keepalive: 5_000_000]
  ]

Things you probably don't need right now

While they are cool, you don't initially need to worry about:

  • Hot code updates
  • Distributed Erlang

Additional topics