.----------------------------------------------.
| Cluster |
| .----------. |
| .---------------->| Frontend | |
.------. | / HTTP: / '----------' |
|.------. | / |
'|.------. | .-------. |
'| User |---------------->| Proxy | |
'------' HTTPS: /* | '-------' |
| \ |
| \ .---------. |
| '---------------->| Backend |. |
| HTTP: /api/* '---------'|. |
| '---------'| |
| '---------' |
| | |
| | |
| v |
| .----------. |
| | Database | |
| '----------' |
| |
'----------------------------------------------'
See src/ for reference.
If you're using NixOS, you already have Nix installed. If not, head to nixos.org/nix and follow the installation instructions:
$ curl https://nixos.org/nix/install | sh
Verify your installation with nix-env --version:
$ nix-env --version
nix-env (Nix) 1.11.8
We'll use Nix to install NixOps, which will do the work of bootstrapping, configuring, and managing the servers.
Install NixOps with nix-env:
$ nix-env -i nixops
nixos.org/nix/manual/#ch-expression-language
Nix is a dynamically-typed, pure, lazy, functional language.
Fire up the Nix REPL, and let's explore the basics.
$ nix-repl
Arithmetic:
nix-repl> 6 * 7
42
String concatenation:
nix-repl> "Hello, " + "world!"
"Hello, world!"
Multi-line strings:
nix-repl> ''
Line one
Line two
''
"Line one\nLine two\n"
Lists use bracket-and-space syntax:
nix-repl> [ 1 2 3 4 5 ]
[ 1 2 3 4 5 ]
Lists are heterogeneous:
nix-repl> [ "one" 2 /var/log [ 4 5 ] ]
[ "one" 2 /var/log [ ... ] ]
Sets are collections of key/value pairs:
nix-repl> { x = 6; y = 7; }
{ x = 6; y = 7; }
Fields can be referenced by dot-notation:
nix-repl> { x = 6;
y = 7;
}.x
6
A function argument is followed by a colon:
nix-repl> square = x: x * x
nix-repl> square 7
49
Functions can be curried:
nix-repl> times = x: y: x * y
nix-repl> times 6 7
42
A function can take a set of inputs:
nix-repl> times = { x, y }: x * y
nix-repl> times { x = 6; y = 7; }
42
As arguments, sets can have extra values:
nix-repl> times { x = 6; y = 7; z = 8; }
error: anonymous function at (string):1:2 called with
unexpected argument ‘z’, at (string):1:1
nix-repl> times = { x, y, ... }: x * y
nix-repl> times { x = 6; y = 7; z = 8; }
42
As arguments, sets can provide default values:
nix-repl> times = { x ? 6, y }: x * y
nix-repl> times { y = 7; }
42
Let expressions bind local variables:
nix-repl> let x = 6; y = 7; in x * y
42
nix-repl> let
times = x: y: x * y;
in
times 6 7
42
Functions can be recursive:
nix-repl> fact =
let
fact' =
x:
if (x == 0) then 1
else x * fact' (x - 1);
in
fact'
nix-repl> fact 7
5040
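Nix is lazy; unused bindings are never evaluated, so the abort here never fires:
nix-repl> let boom = abort "boom"; in 6 * 7
42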
nixos.org/nix/manual/#sec-language-operators
Lists can be concatenated:
nix-repl> [ 1 2 ] ++ [ 3 4 ]
[ 1 2 3 4 ]
Sets can be combined:
nix-repl> { x = 6; } // { y = 7; }
{ x = 6; y = 7; }
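When keys collide, the right-hand set wins:
nix-repl> { x = 1; y = 2; } // { y = 7; }
{ x = 1; y = 7; }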
nixos.org/nix/manual/#ssec-builtins
Mapping over a list:
nix-repl> map (x: x * x) [ 1 2 3 4 5 ]
[ 1 4 9 16 25 ]
Combining several sets:
nix-repl> let
f = x: y: x // y;
z = {};
xs = [ { x = 6; } { y = 7; } ];
in
builtins.foldl' f z xs
{ x = 6; y = 7; }
The same pattern defines Fibonacci:
nix-repl> fib =
let
fib' =
n:
if (n == 1 || n == 2) then 1
else fib' (n - 1) + fib' (n - 2);
in
fib'
nix-repl> fib 10
55
nix-repl> fibs = xs: map fib xs
nix-repl> fibs [ 1 2 3 ]
[ 1 1 2 ]
nix-repl> range = (import <nixpkgs> {}).lib.range
nix-repl> fibs (range 1 10)
[ 1 1 2 3 5 8 13 21 34 55 ]
Nix is more than just an expression language; above all, it's a package manager.
$ nix-env -qa | grep fdupes
fdupes-20150902
-q (short for --query) displays information about store paths.
-a (short for --available) causes installable packages to be listed.
-P (short for --attr-path) causes the attribute path to be listed.
$ which fdupes
which: no fdupes in ...
$ nix-shell -p fdupes
[nix-shell:~]$ which fdupes
/nix/store/dcy0a8nmmvrbz18ld9vgy5gdrfgpcx9q-fdupes-20150902/bin/fdupes
[nix-shell:~]$ exit
exit
$ which fdupes
which: no fdupes in ...
-p (short for --packages) includes the specified packages in the environment.
$ nix-env -i fdupes
installing ‘fdupes-20150902’
building path(s) ‘/nix/store/6arm74dmk009v45si3alwkymbqxnhj70-user-environment’
created 2 symlinks in user environment
$ which fdupes
/home/student1/.nix-profile/bin/fdupes
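To undo the install, nix-env -e removes the package from the profile (output from a typical run):
$ nix-env -e fdupes
uninstalling ‘fdupes-20150902’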
Let's build and package a C program using Nix.
hello-world.c:
#include <stdio.h>

int main() {
  printf("Hello, world!\n");
}
hello-world.nix:
{ pkgs ? import <nixpkgs> {} }:
let
src = ./hello-world.c;
in
"hello-world" { buildInputs = [ pkgs.gcc ]; } ''
pkgs.runCommand mkdir -pv $out/bin
gcc ${src} -o $out/bin/hello-world
''
$ nix-build hello-world.nix
$ ./result/bin/hello-world
Hello, world!
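The result symlink points into the Nix store (store hash elided here):
$ readlink result
/nix/store/...-hello-world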
If we omit the hello-world.nix argument, nix-build expects the build expression in a file named default.nix.
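For example, renaming the file lets us drop the argument:
$ mv hello-world.nix default.nix
$ nix-build
$ ./result/bin/hello-world
Hello, world!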
For the backend, let's make a simple JSON API in Haskell.
hello-api.hs:
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE DeriveGeneric #-}
import Data.Aeson (ToJSON)
import GHC.Generics (Generic)
import Web.Scotty (get, json, scotty)
import Network.HostName (getHostName)
data Greeting = Greeting { greeting :: String
                         , hostname :: String
                         } deriving (Show, Generic)

instance ToJSON Greeting

main :: IO ()
main = do
  hostname <- getHostName
  scotty 3000 $ do
    get "/greeting" $ do
      json $ Greeting "Hello" hostname
To try it out, build dependencies can be made available using nix-shell:
$ nix-shell -p "
haskellPackages.ghcWithPackages
(pkgs: [ pkgs.scotty pkgs.aeson pkgs.hostname ])
"
[nix-shell]$ ghc hello-api.hs
Let's capture this as a Nix derivation.
hello-api.nix:
{ pkgs ? import <nixpkgs> {} }:
let
paths = pkgs: [ pkgs.scotty pkgs.aeson pkgs.hostname ];
ghc = pkgs.haskellPackages.ghcWithPackages paths;
src = ./.;
in
"hello-api" { buildInputs = [ ghc ]; } ''
pkgs.runCommand mkdir -pv $out/bin
TMP=`mktemp -d`
ghc -odir $TMP \
-hidir $TMP \
-O2 ${src}/hello-api.hs \
-o $out/bin/hello-api
''
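As with hello-world, nix-build produces a result symlink we can try locally; the hostname in the response will be your own machine's:
$ nix-build hello-api.nix
$ ./result/bin/hello-api &
$ curl localhost:3000/greeting
{"hostname":"...","greeting":"Hello"}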
Learn more about derivations in the Nix manual.
With the backend service ready to go, let's get a taste of NixOS by defining the server configuration for it.
This will be the first node in the cluster:
.----------------------------------------------.
| Cluster |
.------. | |
|.------. | |
'|.------. | .---------. |
'| User |------------------------------------------->| Backend | |
'------' HTTP: /* | '---------' |
| |
'----------------------------------------------'
A basic Nix expression tells NixOS how to retrieve, extract, build, and run the backend.
backend.nix:
{ pkgs ? import <nixpkgs> {} }:
let
helloApi = import ./hello-api.nix {};
backendPlan =
{ resources, pkgs, lib, nodes, ...}:
{
networking.firewall.allowedTCPPorts = [ 22 3000 ];
systemd.services.backend = {
description = "hello-api";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
WorkingDirectory = "${helloApi}";
ExecStart = "${helloApi}/bin/hello-api";
Restart = "always";
};
};
};
in
{
backend = backendPlan;
}
With the backend instance defined, let's configure the cluster.
The cluster can run either as a set of VirtualBox machines or a set of EC2 instances.
To deploy anything to EC2, we'll need AWS credentials set as environment variables:
$ export AWS_ACCESS_KEY_ID=...
$ export AWS_SECRET_ACCESS_KEY=...
A basic Nix expression tells NixOps how to set this up.
target.nix:
let
vbox =
{ config, pkgs, ... }:
{
deployment.targetEnv = "virtualbox";
deployment.virtualbox.memorySize = 512;
deployment.virtualbox.headless = true;
};
ec2 =
{ resources, pkgs, lib, nodes, ...}:
{
deployment.targetEnv = "ec2";
deployment.ec2.region = "us-west-1";
deployment.ec2.instanceType = "t2.nano";
deployment.ec2.keyPair = resources.ec2KeyPairs.nixed;
};
target = ec2;
in
{
network.description = "nixed";
network.enableRollback = true;
resources.ec2KeyPairs.nixed.region = "us-west-1";
backend = target;
}
We can easily switch between VirtualBox and EC2 by setting target to either vbox or ec2.
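For example, to use VirtualBox, change the one line in target.nix:
target = vbox;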
Define the deployment with nixops create:
$ nixops create -d nixed target.nix backend.nix
List deployments with nixops list:
$ nixops list
+--------------------------------------+-------+------------+
| UUID | Name | # Machines |
+--------------------------------------+-------+------------+
| e78fa6a3-33ff-11e7-88bb-0242616f3769 | nixed | 0 |
+--------------------------------------+-------+------------+
Get deployment details with nixops info:
$ nixops info -d nixed
+---------+---------------+-------------------------+------------+
| Name | Status | Type | IP address |
+---------+---------------+-------------------------+------------+
| nixed | Missing / New | ec2-keypair [us-west-1] | |
| backend | Missing / New | ec2 [us-west-1] | |
+---------+---------------+-------------------------+------------+
Launch it with nixops deploy:
$ nixops deploy -d nixed
backend> activation finished successfully
nixed..> deployment finished successfully
Running nixops deploy is idempotent; it will deploy the instances, bring them up to date, or leave them alone, depending on their state relative to the configuration in the .nix files.
We can find the instance's IP address with nixops info, and test it with curl:
$ nixops info -d nixed
+---------+-----------------+---------------------------+---------------+
| Name | Status | Type | IP address |
+---------+-----------------+---------------------------+---------------+
| backend | Up / Up-to-date | ec2 [us-west-1c; t2.nano] | 52.53.170.188 |
| nixed | Up / Up-to-date | ec2-keypair [us-west-1] | |
+---------+-----------------+---------------------------+---------------+
$ curl 52.53.170.188:3000/greeting
{"hostname":"backend","greeting":"Hello"}
Let's build an AJAX frontend to consume the backend service.
index.html:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>nixed</title>
<script type="text/javascript">
function run() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/api/greeting', true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      var res = JSON.parse(xhr.responseText);
      document.getElementById('greeting').innerHTML =
        res.greeting + ' from ' +
        '<tt>' + res.hostname + '</tt>!';
    }
  };
  xhr.send();
}
</script>
</head>
<body onload="run()">
<div id="greeting"></div>
</body>
</html>
hello-static.nix:
{ pkgs ? import <nixpkgs> {} }:
let
src = ./.;
in
"hello-static" { buildInputs = [ ]; } ''
pkgs.runCommand mkdir -p $out
cp ${src}/index.html $out/index.html
''
This will be the second node in the cluster:
.----------------------------------------------.
| Cluster |
| .----------. |
| .---------------->| Frontend | |
.------. | / '----------' |
|.------. | / |
'|.------. | / |
'| User |======================: |
'------' HTTP: /* | \ |
| \ |
| \ .---------. |
| '---------------->| Backend | |
| '---------' |
| |
'----------------------------------------------'
Now that we have a basic frontend, we'll need to define the server configuration for it.
frontend.nix:
{ pkgs ? import <nixpkgs> {} }:
let
helloStatic = import ./hello-static.nix {};
in
{
frontend =
{ resources, pkgs, lib, nodes, config, ...}:
{
networking.firewall.allowedTCPPorts = [ 22 80 ];
services.nginx.enable = true;
services.nginx.httpConfig = ''
server {
listen 80;
location / {
root ${helloStatic};
index index.html;
}
}
'';
};
}
Tell NixOps about the new .nix file with nixops modify:
$ nixops modify -d nixed target.nix backend.nix frontend.nix
We also need to update target.nix to include frontend = target:
let
...
in
{
backend = target;
frontend = target;
}
Deploy the frontend by re-running nixops deploy:
$ nixops deploy -d nixed
backend.> activation finished successfully
frontend> activation finished successfully
nixed> deployment finished successfully
This won't work quite yet.
Let's grab the frontend IP address, and try it out.
$ nixops info -d nixed
+----------+-----------------+---------------------------+---------------+
| Name | Status | Type | IP address |
+----------+-----------------+---------------------------+---------------+
| backend | Up / Up-to-date | ec2 [us-west-1c; t2.nano] | 52.53.170.188 |
| frontend | Up / Up-to-date | ec2 [us-west-1c; t2.nano] | 54.153.67.212 |
| nixed | Up / Up-to-date | ec2-keypair [us-west-1] | |
+----------+-----------------+---------------------------+---------------+
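Requesting the API path on the frontend shows the problem; nginx serves its default 404 page (trimmed):
$ curl 54.153.67.212/api/greeting
<html>
<head><title>404 Not Found</title></head>
...
</html>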
The /api/greeting endpoint doesn't exist on the frontend server. We'll need to proxy this request to the /greeting endpoint on the backend server.
This will be the third node in the cluster:
.----------------------------------------------.
| Cluster |
| .----------. |
| .---------------->| Frontend | |
.------. | / HTTP: / '----------' |
|.------. | / |
'|.------. | .-------. |
'| User |---------------->| Proxy | |
'------' HTTP: /* | '-------' |
| \ |
| \ .---------. |
| '---------------->| Backend | |
| HTTP: /api/* '---------' |
| |
'----------------------------------------------'
We'll use an nginx-based proxy:
nginx-conf.nix:
{ nodes }:
''
upstream frontend {
server ${nodes.frontend.config.networking.privateIPv4}:80;
}
upstream backend {
server ${nodes.backend.config.networking.privateIPv4}:3000;
}
server {
listen 80;
location /api {
rewrite /api$ /api/ redirect;
rewrite /api/(.*) /$1 break;
proxy_pass http://backend;
}
location / {
proxy_pass http://frontend;
}
}
''
proxy.nix:
{
proxy =
{ resources, pkgs, lib, nodes, config, ...}:
{
networking.firewall.allowedTCPPorts = [ 22 80 443 ];
services.nginx.enable = true;
services.nginx.httpConfig =
import ./nginx-conf.nix { inherit nodes; };
};
}
Add proxy = target to target.nix:
let
...
in
{
backend = target;
frontend = target;
proxy = target;
}
Add this configuration to the deployment with nixops modify:
$ nixops modify -d nixed target.nix backend.nix frontend.nix proxy.nix
Redeploy it with nixops deploy:
$ nixops deploy -d nixed
frontend> activation finished successfully
proxy...> activation finished successfully
backend.> activation finished successfully
nixed> deployment finished successfully
Grab the proxy's IP address with nixops info, then take it for a spin with a browser:
$ nixops info -d nixed
+----------+-----------------+---------------------------+----------------+
| Name | Status | Type | IP address |
+----------+-----------------+---------------------------+----------------+
| backend | Up / Up-to-date | ec2 [us-west-1c; t2.nano] | 52.53.170.188 |
| frontend | Up / Up-to-date | ec2 [us-west-1c; t2.nano] | 54.153.67.212 |
| proxy | Up / Up-to-date | ec2 [us-west-1c; t2.nano] | 54.215.156.84 |
| nixed | Up / Up-to-date | ec2-keypair [us-west-1] | |
+----------+-----------------+---------------------------+----------------+
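We can also exercise the proxied API path from the command line, using the proxy's IP from the table above:
$ curl 54.215.156.84/api/greeting
{"hostname":"backend","greeting":"Hello"}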
Let's add a DNS entry to make the cluster accessible via a hostname.
proxy.nix:
{
proxy =
{ resources, pkgs, lib, nodes, config, ...}:
{
deployment.route53.hostName =
builtins.getEnv "USER" + ".infunstructure.com";
...
Deploy it again:
$ nixops deploy -d nixed
proxy...> sending Route53 DNS...
frontend> activation finished successfully
proxy...> activation finished successfully
backend.> activation finished successfully
nixed> deployment finished successfully
Point a browser at the hostname.
This will duplicate the backend node in the cluster:
.----------------------------------------------.
| Cluster |
| .----------. |
| .---------------->| Frontend | |
.------. | / HTTP: / '----------' |
|.------. | / |
'|.------. | .-------. |
'| User |---------------->| Proxy | |
'------' HTTP: /* | '-------' |
| \ |
| \ .---------. |
| '---------------->| Backend |. |
| HTTP: /api/* '---------'|. |
| '---------'| |
| '---------' |
| |
'----------------------------------------------'
We'll redefine the backend configuration to define two more backends, named backend2 and backend3.
backend.nix:
let
...
in
{
backend = backendPlan;
backend2 = backendPlan;
backend3 = backendPlan;
}
target.nix:
{
...
backend = target;
backend2 = target;
backend3 = target;
frontend = target;
proxy = target;
}
In nginx-conf.nix, reference these instances in the backend upstream definition (nginx balances across them round-robin by default):
upstream backend {
server ${nodes.backend.config.networking.privateIPv4}:3000;
server ${nodes.backend2.config.networking.privateIPv4}:3000;
server ${nodes.backend3.config.networking.privateIPv4}:3000;
}
Deploy the new configuration with nixops deploy:
$ nixops deploy -d nixed
frontend> activation finished successfully
proxy...> activation finished successfully
backend2> activation finished successfully
backend.> activation finished successfully
backend3> activation finished successfully
nixed> deployment finished successfully
Refresh it a few times in the browser; the hostname in the greeting rotates across the backends.
Deployments can be reviewed with list-generations:
$ nixops list-generations -d nixed
1 2017-05-25 20:07:00
2 2017-05-25 20:10:38
3 2017-05-25 20:13:33
4 2017-05-25 20:15:47
5 2017-05-25 20:17:39 (current)
An older deployment can be re-applied with rollback:
$ nixops rollback -d nixed 4
Notice how the second and third backends are now gone.
Similarly, a newer deployment can be re-applied with rollback:
$ nixops rollback -d nixed 5
All three backends are back again.
This step requires a public IP address, which can be cumbersome with VirtualBox. We'll just cover EC2 here.
This will encrypt the channel between the user and the proxy:
.----------------------------------------------.
| Cluster |
| .----------. |
| .---------------->| Frontend | |
.------. | / HTTP: / '----------' |
|.------. | / |
'|.------. | .-------. |
'| User |---------------->| Proxy | |
'------' HTTPS: /* | '-------' |
| \ |
| \ .---------. |
| '---------------->| Backend |. |
| HTTP: /api/* '---------'|. |
| '---------'| |
| '---------' |
| |
'----------------------------------------------'
proxy.nix:
let
hostname = builtins.getEnv "USER" + ".infunstructure.com";
in
{
proxy =
{ resources, pkgs, lib, nodes, config, ...}:
{
security.acme.preliminarySelfsigned = true;
security.acme.certs."${hostname}" = {
webroot = "/var/www/challenges";
email = "webmaster@${hostname}";
user = "nginx";
group = "nginx";
postRun = ''
systemctl restart nginx.service
'';
};
## A place to put challenges
system.activationScripts.nginx = {
text = ''
mkdir -p /var/www/challenges
chown nginx.nginx /var/www /var/www/challenges
'';
deps = [];
};
deployment.route53.hostName = hostname;
networking.firewall.allowedTCPPorts = [ 22 80 443 ];
services.nginx.enable = true;
services.nginx.httpConfig =
import ./nginx-conf.nix { inherit nodes config hostname; };
};
}
nginx-conf.nix:
{ nodes, config, hostname }:
''
upstream frontend {
server ${nodes.frontend.config.networking.privateIPv4}:80;
}
upstream backend {
server ${nodes.backend.config.networking.privateIPv4}:3000;
server ${nodes.backend2.config.networking.privateIPv4}:3000;
server ${nodes.backend3.config.networking.privateIPv4}:3000;
}
server {
listen 80;
listen [::]:80;
server_name ${hostname};
server_tokens off;
location /.well-known/acme-challenge {
root /var/www/challenges;
}
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name ${hostname};
server_tokens off;
gzip on;
gzip_vary on;
gzip_types text/plain text/css application/json application/x-javascript
text/xml application/xml application/xml+rss text/javascript;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;
ssl_certificate
${config.security.acme.directory}/${hostname}/fullchain.pem;
ssl_certificate_key
${config.security.acme.directory}/${hostname}/key.pem;
access_log /var/log/nginx-access.log;
location /api {
rewrite /api$ /api/ redirect;
rewrite /api/(.*) /$1 break;
proxy_pass http://backend;
}
location / {
proxy_pass http://frontend;
}
}
''
$ nixops deploy -d nixed
Try it out with https in place of http:
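For example, with curl (your hostname will differ, and any of the three backends may answer):
$ curl https://student1.infunstructure.com/api/greeting
{"hostname":"backend2","greeting":"Hello"}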
This step uses RDS to provision a database. We won't cover VirtualBox targets here.
.----------------------------------------------.
| Cluster |
| .----------. |
| .---------------->| Frontend | |
.------. | / HTTP: / '----------' |
|.------. | / |
'|.------. | .-------. |
'| User |---------------->| Proxy | |
'------' HTTPS: /* | '-------' |
| \ |
| \ .---------. |
| '---------------->| Backend |. |
| HTTP: /api/* '---------'|. |
| '---------'| |
| '---------' |
| | |
| | |
| v |
| .----------. |
| | Database | |
| '----------' |
| |
'----------------------------------------------'
target.nix:
{
  resources.rdsDbInstances.nixed = {
    region = "us-west-1";
    id = "nixed";
    instanceClass = "db.t2.nano";
    allocatedStorage = 5;
    masterUsername = "master";
    masterPassword = "master";
    port = 5432;
    engine = "postgres";
    dbName = "nixed";
  };
}
backend.nix:
{
  ...
  systemd.services.backend.environment = {
    DB_URI = resources.rdsDbInstances.nixed.endpoint;
  };
}
$ nixops deploy -d nixed
It sometimes takes upwards of twenty minutes for AWS to provision a new RDS instance. Go brew a pot of coffee.