Functional Infrastructure

May 26, 2017

What we will build

                       .----------------------------------------------.
                       | Cluster                                      |
                       |                              .----------.    |
                       |            .---------------->| Frontend |    |
.------.               |           /   HTTP: /        '----------'    |
|.------.              |          /                                   |
'|.------.             |   .-------.                                  |
 '| User |---------------->| Proxy |                                  |
  '------'  HTTPS: /*  |   '-------'                                  |
                       |          \                                   |
                       |           \                  .---------.     |
                       |            '---------------->| Backend |.    |
                       |               HTTP: /api/*   '---------'|.   |
                       |                               '---------'|   |
                       |                                '---------'   |
                       |                                    |         |
                       |                                    |         |
                       |                                    v         |
                       |                               .----------.   |
                       |                               | Database |   |
                       |                               '----------'   |
                       |                                              |
                       '----------------------------------------------'

What we will write

See src/ for reference.

References we will use

Outline

Setup

Nix

If you're using NixOS, you already have Nix installed. If not, head to nixos.org/nix and follow the installation instructions:

$ curl https://nixos.org/nix/install | sh

Verify your installation with nix-env --version:

$ nix-env --version
nix-env (Nix) 1.11.8

NixOps

We'll use Nix to install NixOps, which will do the work of bootstrapping, configuring, and managing the servers.

Install NixOps with nix-env:

$ nix-env -i nixops

Nix, the expression language

nixos.org/nix/manual/#ch-expression-language

Nix is a dynamically-typed, pure, lazy, functional language.

Fire up the Nix REPL, and let's explore the basics.

$ nix-repl

Simple values

Arithmetic:

nix-repl> 6 * 7
42

String concatenation:

nix-repl> "Hello, " + "world!"
"Hello, world!"

Multi-line strings:

nix-repl> ''
          Line one
          Line two
          ''
"Line one\nLine two\n"

Lists

Lists use bracket-and-space syntax:

nix-repl> [ 1 2 3 4 5 ]
[ 1 2 3 4 5 ]

Lists are heterogeneous:

nix-repl> [ "one" 2 /var/log [ 4 5 ] ]
[ "one" 2 /var/log [ ... ] ]

Sets

Sets are collections of key/value pairs:

nix-repl> { x = 6; y = 7; }
{ x = 6; y = 7; }

Fields can be referenced by dot-notation:

nix-repl> { x = 6;
            y = 7;
          }.x
6

Functions

A function argument is followed by a colon:

nix-repl> square = x: x * x

nix-repl> square 7
49

Functions can be curried:

nix-repl> times = x: y: x * y

nix-repl> times 6 7
42

A function can take a set of inputs:

nix-repl> times = { x, y }: x * y

nix-repl> times { x = 6; y = 7; }
42

By default, extra attributes in a set argument are an error; adding an ellipsis allows them:

nix-repl> times { x = 6; y = 7; z = 8; }
error: anonymous function at (string):1:2 called with
       unexpected argument ‘z’, at (string):1:1

nix-repl> times = { x, y, ... }: x * y

nix-repl> times { x = 6; y = 7; z = 8; }
42

Set arguments can declare default values for their attributes:

nix-repl> times = { x ? 6, y }: x * y

nix-repl> times { y = 7; }
42

Let expressions

nix-repl> let x = 6; y = 7; in x * y
42
nix-repl> let
            times = x: y: x * y;
          in
            times 6 7
42

Recursion

nix-repl> fact =
            let
              fact' =
                x:
                  if (x == 0) then 1
                  else x * fact' (x - 1);
            in
              fact'

nix-repl> fact 7
5040

Operators

nixos.org/nix/manual/#sec-language-operators

Lists can be concatenated:

nix-repl> [ 1 2 ] ++ [ 3 4 ]
[ 1 2 3 4 ]

Sets can be combined:

nix-repl> { x = 6; } // { y = 7; }
{ x = 6; y = 7; }
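
The inherit keyword, which shows up later in the proxy configuration, copies existing bindings into a set by name:

```nix
nix-repl> let x = 6; y = 7; in { inherit x y; }
{ x = 6; y = 7; }
```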

Standard library

nixos.org/nix/manual/#ssec-builtins

Mapping over a list:

nix-repl> map (x: x * x) [ 1 2 3 4 5 ]
[ 1 4 9 16 25 ]

Combining several sets:

nix-repl> let
            f = x: y: x // y;
            z = {};
            xs = [ { x = 6; } { y = 7; } ];
          in
            builtins.foldl' f z xs
{ x = 6; y = 7; }
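
A couple more builtins in the same spirit:

```nix
nix-repl> builtins.attrNames { x = 6; y = 7; }
[ "x" "y" ]

nix-repl> builtins.filter (x: x > 2) [ 1 2 3 4 ]
[ 3 4 ]
```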

Coding challenges

  1. Write a function to compute a given Fibonacci number
  2. Write a function to compute many given Fibonacci numbers

Example solutions

nix-repl> fib =
            let
              fib' =
                n:
                  if (n == 1 || n == 2) then 1
                  else fib' (n - 1) + fib' (n - 2);
            in
              fib'

nix-repl> fib 10
55
nix-repl> fibs = xs: map fib xs

nix-repl> fibs [ 1 2 3 ]
[ 1 1 2 ]

nix-repl> range = (import <nixpkgs> {}).lib.range

nix-repl> fibs (range 1 10)
[ 1 1 2 3 5 8 13 21 34 55 ]

Nix, the package manager

Nix is more than just an expression language; it is, first and foremost, a package manager.

Searching available packages

$ nix-env -qa | grep fdupes
fdupes-20150902

Temporarily using packages

$ which fdupes
which: no fdupes in ...

$ nix-shell -p fdupes

[nix-shell:~]$ which fdupes
/nix/store/dcy0a8nmmvrbz18ld9vgy5gdrfgpcx9q-fdupes-20150902/bin/fdupes

[nix-shell:~]$ exit
exit

$ which fdupes
which: no fdupes in ...

Installing packages

$ nix-env -i fdupes
installing ‘fdupes-20150902’
building path(s) ‘/nix/store/6arm74dmk009v45si3alwkymbqxnhj70-user-environment’
created 2 symlinks in user environment

$ which fdupes
/home/student1/.nix-profile/bin/fdupes
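
Removing a package is just as easy with nix-env -e, and because profiles are versioned, nix-env --rollback undoes the change:

```shell
$ nix-env -e fdupes

$ which fdupes
which: no fdupes in ...
```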

Building packages

Let's build and package a C program using Nix.

hello-world.c:

#include <stdio.h>

int main(void) {
  printf("Hello, world!\n");
  return 0;
}

hello-world.nix:

{ pkgs ? import <nixpkgs> {} }:
let
  src = ./hello-world.c;
in
  pkgs.runCommand "hello-world" { buildInputs = [ pkgs.gcc ]; } ''
    mkdir -pv $out/bin
    gcc ${src} -o $out/bin/hello-world
  ''

$ nix-build hello-world.nix
$ ./result/bin/hello-world
Hello, world!

If we omit the hello-world.nix argument, nix-build expects the build expression in a file named default.nix.
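
For example, with the file simply renamed:

```shell
$ mv hello-world.nix default.nix
$ nix-build
$ ./result/bin/hello-world
Hello, world!
```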

Backend service

For the backend, let's make a simple JSON API in Haskell.

hello-api.hs:

{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE DeriveGeneric #-}

import Data.Aeson (ToJSON)
import GHC.Generics (Generic)
import Web.Scotty (get, json, scotty)
import Network.HostName (getHostName)

data Greeting = Greeting { greeting :: String
                         , hostname :: String
                         } deriving (Show, Generic)

instance ToJSON Greeting

main :: IO ()
main = do
  hostname <- getHostName
  scotty 3000 $ do
    get "/greeting" $ do
      json $ Greeting "Hello" hostname

To try it out, make the build dependencies available with nix-shell:

$ nix-shell -p "
    haskellPackages.ghcWithPackages
    (pkgs: [ pkgs.scotty pkgs.aeson pkgs.hostname ])
  "

[nix-shell]$ ghc hello-api.hs
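
Still inside the shell, we can launch the compiled binary and poke it with curl (the hostname field will vary by machine):

```shell
[nix-shell]$ ./hello-api &
[nix-shell]$ curl localhost:3000/greeting
{"hostname":"...","greeting":"Hello"}
```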

Let's capture this as a Nix derivation.

hello-api.nix:

{ pkgs ? import <nixpkgs> {} }:
let
  paths = pkgs: [ pkgs.scotty pkgs.aeson pkgs.hostname ];
  ghc = pkgs.haskellPackages.ghcWithPackages paths;
  src = ./.;
in
  pkgs.runCommand "hello-api" { buildInputs = [ ghc ]; } ''
    mkdir -pv $out/bin
    TMP=`mktemp -d`
    ghc -odir $TMP \
        -hidir $TMP \
        -O2 ${src}/hello-api.hs \
        -o $out/bin/hello-api
  ''
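
Build and run it the same way as the hello-world package:

```shell
$ nix-build hello-api.nix
$ ./result/bin/hello-api
```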

Learn more about derivations:

Backend server

With the backend service ready to go, let's get a taste of NixOS by defining the server configuration for it.

This will be the first node in the cluster:

                       .----------------------------------------------.
                       | Cluster                                      |
.------.               |                                              |
|.------.              |                                              |
'|.------.             |                              .---------.     |
 '| User |------------------------------------------->| Backend |     |
  '------'   HTTP: /*  |                              '---------'     |
                       |                                              |
                       '----------------------------------------------'

NixOS configuration

A basic Nix expression tells NixOS how to retrieve, extract, build, and run the backend.

backend.nix:

{ pkgs ? import <nixpkgs> {} }:
let
  helloApi = import ./hello-api.nix {};
  backendPlan =
    { resources, pkgs, lib, nodes, ...}:
    {
      networking.firewall.allowedTCPPorts = [ 22 3000 ];
      systemd.services.backend = {
        description = "hello-api";
        after = [ "network.target" ];
        wantedBy = [ "multi-user.target" ];
        serviceConfig = {
          WorkingDirectory = "${helloApi}";
          ExecStart = "${helloApi}/bin/hello-api";
          Restart = "always";
        };
      };
    };
in
  {
    backend = backendPlan;
  }

Cluster configuration

With the backend instance defined, let's configure the cluster.

The cluster can run either as a set of VirtualBox machines or a set of EC2 instances.

To deploy anything to EC2, we'll need AWS credentials set as environment variables:

$ export AWS_ACCESS_KEY_ID=...
$ export AWS_SECRET_ACCESS_KEY=...

Target environment

A basic Nix expression tells NixOps how to set this up.

target.nix:

let
  vbox =
    { config, pkgs, ... }:
    {
      deployment.targetEnv = "virtualbox";
      deployment.virtualbox.memorySize = 512;
      deployment.virtualbox.headless = true;
    };
  ec2 =
    { resources, pkgs, lib, nodes, ...}:
    {
      deployment.targetEnv = "ec2";
      deployment.ec2.region = "us-west-1";
      deployment.ec2.instanceType = "t2.nano";
      deployment.ec2.keyPair = resources.ec2KeyPairs.nixed;
    };
  target = ec2;
in
  {
    network.description = "nixed";
    network.enableRollback = true;

    resources.ec2KeyPairs.nixed.region = "us-west-1";

    backend = target;
  }

We can easily switch between VirtualBox and EC2 by setting target to either vbox or ec2.
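
For example, pointing the whole cluster at VirtualBox is a one-line change in target.nix:

```nix
  target = vbox;
```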

Deployment

Define the deployment with nixops create:

$ nixops create -d nixed target.nix backend.nix

List deployments with nixops list:

$ nixops list
+--------------------------------------+-------+------------+
| UUID                                 | Name  | # Machines |
+--------------------------------------+-------+------------+
| e78fa6a3-33ff-11e7-88bb-0242616f3769 | nixed |          0 |
+--------------------------------------+-------+------------+

Get deployment details with nixops info:

$ nixops info -d nixed
+---------+---------------+-------------------------+------------+
| Name    |     Status    | Type                    | IP address |
+---------+---------------+-------------------------+------------+
| nixed   | Missing / New | ec2-keypair [us-west-1] |            |
| backend | Missing / New | ec2 [us-west-1]         |            |
+---------+---------------+-------------------------+------------+

Launch it with nixops deploy:

$ nixops deploy -d nixed
backend> activation finished successfully
nixed..> deployment finished successfully

Running nixops deploy is idempotent; it will create the instances, bring them up to date, or leave them alone, depending on their state relative to the configuration in the .nix files.

We can find the instance's IP address with nixops info, and test it with curl:

$ nixops info -d nixed
+---------+-----------------+---------------------------+---------------+
| Name    |      Status     | Type                      | IP address    |
+---------+-----------------+---------------------------+---------------+
| backend | Up / Up-to-date | ec2 [us-west-1c; t2.nano] | 52.53.170.188 |
| nixed   | Up / Up-to-date | ec2-keypair [us-west-1]   |               |
+---------+-----------------+---------------------------+---------------+

$ curl 52.53.170.188:3000/greeting
{"hostname":"backend","greeting":"Hello"}

Static HTML

Let's build an AJAX frontend to consume the backend service.

index.html:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>nixed</title>
    <script type="text/javascript">
      function run() {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/api/greeting', true);
        xhr.onreadystatechange =
          function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
              var res = JSON.parse(xhr.responseText);
              document.getElementById('greeting').innerHTML =
                res.greeting + ' from ' +
                '<tt>' + res.hostname + '</tt>!';
            }
          };
        xhr.send();
      }
    </script>
  </head>
  <body onload="run()">
    <div id="greeting"></div>
  </body>
</html>

hello-static.nix:

{ pkgs ? import <nixpkgs> {} }:
let
  src = ./.;
in
  pkgs.runCommand "hello-static" { buildInputs = [ ]; } ''
    mkdir -p $out
    cp ${src}/index.html $out/index.html
  ''
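
As before, nix-build produces a result symlink containing the page:

```shell
$ nix-build hello-static.nix
$ ls result/
index.html
```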

Frontend server

This will be the second node in the cluster:

                       .----------------------------------------------.
                       | Cluster                                      |
                       |                              .----------.    |
                       |            .---------------->| Frontend |    |
.------.               |           /                  '----------'    |
|.------.              |          /                                   |
'|.------.             |         /                                    |
 '| User |======================:                                     |
  '------'   HTTP: /*  |         \                                    |
                       |          \                                   |
                       |           \                  .---------.     |
                       |            '---------------->| Backend |     |
                       |                              '---------'     |
                       |                                              |
                       '----------------------------------------------'

NixOS configuration

Now that we have a basic frontend, we'll need to define the server configuration for it.

frontend.nix:

{ pkgs ? import <nixpkgs> {} }:
let
  helloStatic = import ./hello-static.nix {};
in
  {
    frontend =
      { resources, pkgs, lib, nodes, config, ...}:
      {
        networking.firewall.allowedTCPPorts = [ 22 80 ];
        services.nginx.enable = true;
        services.nginx.httpConfig = ''
          server {
              listen 80;
              location / {
                root ${helloStatic};
                index index.html;
              }
          }
        '';
      };
  }

Deployment

Tell NixOps about the new .nix file with nixops modify:

$ nixops modify -d nixed target.nix backend.nix frontend.nix

We also need to update target.nix to include frontend = target:

let
  ...
in
  {
    backend = target;
    frontend = target;
  }

Deploy the frontend by re-running nixops deploy:

$ nixops deploy -d nixed
backend.> activation finished successfully
frontend> activation finished successfully
nixed> deployment finished successfully

Almost there

This won't work quite yet.

Let's grab the frontend IP address, and try it out.

$ nixops info -d nixed
+----------+-----------------+---------------------------+---------------+
| Name     |      Status     | Type                      | IP address    |
+----------+-----------------+---------------------------+---------------+
| backend  | Up / Up-to-date | ec2 [us-west-1c; t2.nano] | 52.53.170.188 |
| frontend | Up / Up-to-date | ec2 [us-west-1c; t2.nano] | 54.153.67.212 |
| nixed    | Up / Up-to-date | ec2-keypair [us-west-1]   |               |
+----------+-----------------+---------------------------+---------------+

The /api/greeting endpoint doesn't exist on the frontend server.

We'll need to proxy this request to the /greeting endpoint on the backend server.

Proxy server

This will be the third node in the cluster:

                       .----------------------------------------------.
                       | Cluster                                      |
                       |                              .----------.    |
                       |            .---------------->| Frontend |    |
.------.               |           /   HTTP: /        '----------'    |
|.------.              |          /                                   |
'|.------.             |   .-------.                                  |
 '| User |---------------->| Proxy |                                  |
  '------'   HTTP: /*  |   '-------'                                  |
                       |          \                                   |
                       |           \                  .---------.     |
                       |            '---------------->| Backend |     |
                       |               HTTP: /api/*   '---------'     |
                       |                                              |
                       '----------------------------------------------'

Nginx configuration

We'll use an nginx-based proxy:

nginx-conf.nix:

{ nodes }:
''
upstream frontend {
  server ${nodes.frontend.config.networking.privateIPv4}:80;
}
upstream backend {
  server ${nodes.backend.config.networking.privateIPv4}:3000;
}
server {
  listen 80;
  location /api {
    rewrite /api$ /api/ redirect;
    rewrite /api/(.*) /$1 break;
    proxy_pass http://backend;
  }
  location / {
    proxy_pass http://frontend;
  }
}
''

NixOS configuration

proxy.nix:

{
  proxy =
    { resources, pkgs, lib, nodes, config, ...}:
    {
      networking.firewall.allowedTCPPorts = [ 22 80 443 ];
      services.nginx.enable = true;
      services.nginx.httpConfig =
        import ./nginx-conf.nix { inherit nodes; };
    };
}

Add proxy = target to target.nix:

let
  ...
in
{
  backend = target;
  frontend = target;
  proxy = target;
}

Deployment

Add this configuration to the deployment with nixops modify:

$ nixops modify -d nixed target.nix backend.nix frontend.nix proxy.nix

Redeploy it with nixops deploy:

$ nixops deploy -d nixed
frontend> activation finished successfully
proxy...> activation finished successfully
backend.> activation finished successfully
nixed> deployment finished successfully

Deployment

Grab the proxy's IP address with nixops info, then take it for a spin with a browser:

$ nixops info -d nixed
+----------+-----------------+---------------------------+----------------+
| Name     |      Status     | Type                      | IP address     |
+----------+-----------------+---------------------------+----------------+
| backend  | Up / Up-to-date | ec2 [us-west-1c; t2.nano] | 52.53.170.188  |
| frontend | Up / Up-to-date | ec2 [us-west-1c; t2.nano] | 54.153.67.212  |
| proxy    | Up / Up-to-date | ec2 [us-west-1c; t2.nano] | 54.215.156.84  |
| nixed    | Up / Up-to-date | ec2-keypair [us-west-1]   |                |
+----------+-----------------+---------------------------+----------------+

DNS support

Let's add a DNS entry to make the cluster accessible via a hostname.

proxy.nix:

{
  proxy =
    { resources, pkgs, lib, nodes, config, ...}:
    {
      deployment.route53.hostName =
        builtins.getEnv "USER" + ".infunstructure.com";
      ...

Deploy it again:

$ nixops deploy -d nixed
proxy...> sending Route53 DNS...
frontend> activation finished successfully
proxy...> activation finished successfully
backend.> activation finished successfully
nixed> deployment finished successfully

Point a browser at the hostname.

More backend servers

This will duplicate the backend node in the cluster:

                       .----------------------------------------------.
                       | Cluster                                      |
                       |                              .----------.    |
                       |            .---------------->| Frontend |    |
.------.               |           /   HTTP: /        '----------'    |
|.------.              |          /                                   |
'|.------.             |   .-------.                                  |
 '| User |---------------->| Proxy |                                  |
  '------'   HTTP: /*  |   '-------'                                  |
                       |          \                                   |
                       |           \                  .---------.     |
                       |            '---------------->| Backend |.    |
                       |               HTTP: /api/*   '---------'|.   |
                       |                               '---------'|   |
                       |                                '---------'   |
                       |                                              |
                       '----------------------------------------------'

NixOS configuration

We'll extend the backend configuration with two more backends, named backend2 and backend3.

backend.nix:

let
  ...
in
{
  backend = backendPlan;
  backend2 = backendPlan;
  backend3 = backendPlan;
}

target.nix:

{
  ...
  backend = target;
  backend2 = target;
  backend3 = target;
  frontend = target;
  proxy = target;
}

In nginx-conf.nix, reference these instances in the backend upstream definition:

upstream backend {
  server ${nodes.backend.config.networking.privateIPv4}:3000;
  server ${nodes.backend2.config.networking.privateIPv4}:3000;
  server ${nodes.backend3.config.networking.privateIPv4}:3000;
}

Deployment

Deploy the new configuration with nixops deploy:

$ nixops deploy -d nixed
frontend> activation finished successfully
proxy...> activation finished successfully
backend2> activation finished successfully
backend.> activation finished successfully
backend3> activation finished successfully
nixed> deployment finished successfully

Refresh it a few times in the browser; nginx's default round-robin balancing rotates the hostname shown in the greeting.

Rolling it back (and forward again)

Deployments can be reviewed with list-generations:

$ nixops list-generations -d nixed
   1   2017-05-25 20:07:00
   2   2017-05-25 20:10:38
   3   2017-05-25 20:13:33
   4   2017-05-25 20:15:47
   5   2017-05-25 20:17:39   (current)

An older deployment can be re-applied with rollback:

$ nixops rollback -d nixed 4

Notice how the second and third backends are now gone.

Similarly, a newer deployment can be re-applied with rollback:

$ nixops rollback -d nixed 5

All three backends are back again.

Encrypted Web traffic

This step requires a public IP address, which can be cumbersome with VirtualBox. We'll just cover EC2 here.

This will encrypt the channel between the user and the proxy:

                       .----------------------------------------------.
                       | Cluster                                      |
                       |                              .----------.    |
                       |            .---------------->| Frontend |    |
.------.               |           /   HTTP: /        '----------'    |
|.------.              |          /                                   |
'|.------.             |   .-------.                                  |
 '| User |---------------->| Proxy |                                  |
  '------'  HTTPS: /*  |   '-------'                                  |
                       |          \                                   |
                       |           \                  .---------.     |
                       |            '---------------->| Backend |.    |
                       |               HTTP: /api/*   '---------'|.   |
                       |                               '---------'|   |
                       |                                '---------'   |
                       |                                              |
                       '----------------------------------------------'

Nginx configuration

proxy.nix:

let
  hostname = builtins.getEnv "USER" + ".infunstructure.com";
in
  {
    proxy =
      { resources, pkgs, lib, nodes, config, ...}:
      {
        security.acme.preliminarySelfsigned = true;
        security.acme.certs."${hostname}" = {
          webroot = "/var/www/challenges";
          email = "webmaster@${hostname}";
          user = "nginx";
          group = "nginx";
          postRun = ''
            systemctl restart nginx.service
          '';
        };

        ## A place to put challenges
        system.activationScripts.nginx = {
          text = ''
            mkdir -p /var/www/challenges
            chown nginx.nginx /var/www /var/www/challenges
          '';
          deps = [];
        };

        deployment.route53.hostName =
          builtins.getEnv "USER" + ".infunstructure.com";
        networking.firewall.allowedTCPPorts = [ 22 80 443 ];
        services.nginx.enable = true;
        services.nginx.httpConfig =
          import ./nginx-conf.nix { inherit nodes config hostname; };
      };
  }

nginx-conf.nix:

{ nodes, config, hostname }:
''
upstream frontend {
  server ${nodes.frontend.config.networking.privateIPv4}:80;
}

upstream backend {
  server ${nodes.backend.config.networking.privateIPv4}:3000;
  server ${nodes.backend2.config.networking.privateIPv4}:3000;
  server ${nodes.backend3.config.networking.privateIPv4}:3000;
}

server {
  listen 80;
  listen [::]:80;
  server_name ${hostname};
  server_tokens off;

  location /.well-known/acme-challenge {
    root /var/www/challenges;
  }

  location / {
    return 301 https://$host$request_uri;
  }
}

server {
  listen 443 ssl;
  server_name ${hostname};
  server_tokens off;

  gzip on;
  gzip_vary on;
  gzip_types text/plain text/css application/json application/x-javascript
             text/xml application/xml application/xml+rss text/javascript;

  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
  ssl_prefer_server_ciphers on;

  ssl_certificate
    ${config.security.acme.directory}/${hostname}/fullchain.pem;

  ssl_certificate_key
    ${config.security.acme.directory}/${hostname}/key.pem;

  access_log /var/log/nginx-access.log;

  location /api {
    rewrite /api$ /api/ redirect;
    rewrite /api/(.*) /$1 break;
    proxy_pass http://backend;
  }

  location / {
    proxy_pass http://frontend;
  }
}
''

Deployment

$ nixops deploy -d nixed

Try it out with https in place of http.

RDS instance

This step uses RDS to provision a database. We won't cover VirtualBox targets here.

                       .----------------------------------------------.
                       | Cluster                                      |
                       |                              .----------.    |
                       |            .---------------->| Frontend |    |
.------.               |           /   HTTP: /        '----------'    |
|.------.              |          /                                   |
'|.------.             |   .-------.                                  |
 '| User |---------------->| Proxy |                                  |
  '------'  HTTPS: /*  |   '-------'                                  |
                       |          \                                   |
                       |           \                  .---------.     |
                       |            '---------------->| Backend |.    |
                       |               HTTP: /api/*   '---------'|.   |
                       |                               '---------'|   |
                       |                                '---------'   |
                       |                                    |         |
                       |                                    |         |
                       |                                    v         |
                       |                               .----------.   |
                       |                               | Database |   |
                       |                               '----------'   |
                       |                                              |
                       '----------------------------------------------'

NixOps configuration

target.nix:

resources.rdsDbInstances.nixed = {
  region = "us-west-1";
  id = "nixed";
  instanceClass = "db.t2.nano";
  allocatedStorage = 5;
  masterUsername = "master";
  masterPassword = "master";
  port = 5432;
  engine = "postgres";
  dbName = "nixed";
};

backend.nix:

environment = {
  DB_URI = resources.rdsDbInstances.nixed.endpoint;
};
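
To show where that fragment lands: environment belongs on the systemd service defined earlier, and resources is already among the module's arguments:

```nix
systemd.services.backend = {
  description = "hello-api";
  after = [ "network.target" ];
  wantedBy = [ "multi-user.target" ];
  environment = {
    DB_URI = resources.rdsDbInstances.nixed.endpoint;
  };
  serviceConfig = {
    WorkingDirectory = "${helloApi}";
    ExecStart = "${helloApi}/bin/hello-api";
    Restart = "always";
  };
};
```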

Deployment

$ nixops deploy -d nixed

It sometimes takes upwards of twenty minutes for AWS to provision a new RDS instance. Go brew a pot of coffee.