Serving your Phoenix app with Nginx
By DevOps on Wed 16 May 2018
It's common to run web apps behind a reverse proxy such as Nginx or HAProxy. Nginx listens on port 80, then forwards traffic to the app on another port, e.g. 4000.
Following is an example nginx.conf config:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
worker_rlimit_nofile 65536;

events {
    worker_connections 65536;
    use epoll;
    multi_accept on;
}

http {
    real_ip_header X-Forwarded-For;
    set_real_ip_from 0.0.0.0/0;
    server_tokens off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] $host "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" $request_time';
    access_log /var/log/nginx/access.log main;

    limit_req_zone $binary_remote_addr zone=foo:10m rate=1r/s;
    limit_req_status 429;

    include /etc/nginx/conf.d/*.conf;
}
Here is a vhost config for the app, e.g. /etc/nginx/conf.d/foo.conf:
server {
    listen 80 default_server;
    # server_name example.com;
    root /opt/foo/current/priv/static;
    access_log /var/log/nginx/foo.access.log main;
    error_log /var/log/nginx/foo.error.log;

    location / {
        index index.html;
        # First attempt to serve the request as a static file, then fall back to the app
        try_files $uri @app;
        # expires max;
        # access_log off;
    }

    location @app {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Referer $http_referer;
        proxy_set_header User-Agent $http_user_agent;
        limit_req zone=foo burst=5 nodelay;
        proxy_pass http://127.0.0.1:4000;
    }
}
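After changing the config, you can check the syntax and reload Nginx without dropping connections:
nginx -t
systemctl reload nginx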
Proxy settings
The main setting that does the forwarding is proxy_pass.
You can set additional options depending on usage. If it's an API endpoint, for example, you can tune various buffers and timeouts in the location @app block to give better response times than the defaults, which are aimed at more generic web serving:
proxy_intercept_errors on;       # let Nginx error_page handle upstream errors
proxy_buffering on;              # buffer responses from the app
proxy_buffer_size 128k;          # buffer for the first part of the response (headers)
proxy_buffers 256 16k;           # buffers for the response body, per connection
proxy_busy_buffers_size 256k;    # cap on buffers busy sending to the client
proxy_temp_file_write_size 256k; # write size when spooling to a temp file
proxy_max_temp_file_size 0;      # 0 disables buffering responses to disk
proxy_read_timeout 300;          # seconds between reads from the app before timing out
High load
Once you start pushing Nginx hard, you will see issues. One of the first problems is OS limits on the number of open sockets. The symptom is that clients see a 5-second delay in responses (or a 503 error), but the app logs look fine, with responses in milliseconds.
What is happening is that the client talks to Nginx, then Nginx talks to your app. When there are not enough file handles available to open a connection to the app, Nginx queues the request.
The default limit is often 1024 open files, which is pitifully small. You will need to raise it at each step in the chain, e.g. the systemd unit file, Nginx, and the Erlang VM.
Create /etc/systemd/system/nginx.service.d/override.conf with the following contents:
[Service]
LimitNOFILE=65536
Then reload systemd:
systemctl daemon-reload
In the Nginx config file, set worker_rlimit_nofile to a value less than or equal to LimitNOFILE:
worker_rlimit_nofile 65536;
Then restart Nginx:
systemctl restart nginx
You can verify that the limits have been increased for the process by running:
cat /proc/<nginx-pid>/limits
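The same applies to your app: the Erlang VM is bound by the file-descriptor limit of its own process. A sketch of a matching override, assuming the app runs under a hypothetical foo.service systemd unit:
# /etc/systemd/system/foo.service.d/override.conf (hypothetical unit name)
[Service]
LimitNOFILE=65536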
Running out of TCP ports
After that, you may run out of TCP ports. In TCP/IP, a connection is defined by the combination of source IP + source port + destination IP + destination port. In this proxy situation, everything but the source port is fixed: 127.0.0.1 + random port + 127.0.0.1 + 4000. There are only 64K ports, and the TCP/IP stack won't reuse a port for 2 x the maximum segment lifetime, which is 2 minutes by default.
Doing the math:
- 60000 ports / 120 sec = 500 requests per sec
You can tune the global kernel config to reduce the maximum segment lifetime, e.g. in /etc/sysctl.conf:
# Decrease the time default value for tcp_fin_timeout connection
net.ipv4.tcp_fin_timeout = 15
# Reuse sockets in TIME_WAIT state faster
net.ipv4.tcp_tw_reuse = 1
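Apply the settings without rebooting by running:
sysctl -p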
The HTTP client may keep the connection open, assuming that there will be another request. Depending on your use case (e.g. an API endpoint), that may not be needed. Shut the connection down immediately by sending a "Connection: close" HTTP header. This is particularly useful under abuse, e.g. DDoS attacks.
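One way to do this in Nginx is with keepalive_timeout, a sketch of which follows: setting it to 0 disables keep-alive, so every response from this location carries a "Connection: close" header.
location @app {
    # Disable keep-alive; responses will include "Connection: close"
    keepalive_timeout 0;
    proxy_pass http://127.0.0.1:4000;
}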
See rate limiting Nginx requests for details about the rate-limiting config in this example.
Nginx also has some complex behavior when it runs into errors while proxying.
It can be hard to figure out what is going on, as you get little visibility. The Nginx business model is to hide the detailed proxy metrics unless you buy their Nginx Plus product, which costs thousands of dollars per server per year. A dedicated proxy server such as HAProxy gives more visibility into and control over the process.
Listening directly
At a certain point, you wonder what value you are getting from the local proxy. If you are only running a single app on your instance, as is common in cloud deployments, you can listen for HTTP traffic directly in Phoenix. That gives you lower latency and lower overall complexity. This works fine: we have Phoenix applications which handle billions of requests a day on the internet, resisting regular DDoS attacks with no problems.
In order to listen on a TCP port less than 1024, such as the standard port 80, an app needs to be running as root (or have the CAP_NET_BIND_SERVICE capability).
Running as root increases the chance of security problems: if the application has a vulnerability, the attacker can do anything on the system. One solution is to run the application on a normal port such as 4000 and redirect traffic from port 80 to 4000 in the firewall using iptables, as sketched below.
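A minimal sketch of such a redirect, using the port numbers from the example above (run as root):
# Redirect incoming TCP traffic on port 80 to the app on port 4000
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4000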
Running multiple applications together
One place where running Nginx in front of the application makes sense
is when you are using Nginx to glue together multiple apps, e.g. using Phoenix to
improve performance of a Rails app. The first step is configuring Nginx to
route certain URL prefixes to Phoenix, e.g. http://api.example.com/
or /api
.
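A sketch of such prefix routing in the vhost, assuming the Rails app listens on port 3000 and Phoenix on port 4000 (both ports are assumptions):
location /api {
    proxy_set_header Host $host;
    proxy_pass http://127.0.0.1:4000; # Phoenix
}
location / {
    proxy_set_header Host $host;
    proxy_pass http://127.0.0.1:3000; # Rails
}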
Beyond that, we need to integrate the applications, e.g. sharing a login session between Phoenix and Rails. This depends on the specific authentication frameworks used by each app.
We can also implement the UI and navigation on Phoenix to match a Rails app, allowing users to seamlessly work between both apps. The only thing the user will notice is that the Phoenix pages are 10x faster :-)
See this blog post on migrating legacy apps or this presentation for details.
Another option is to have Phoenix handle the routing, e.g. with the Terraform Elixir library.