NixOS Planet

November 15, 2016

Joachim Schiele



paul and me visited the augsburger openlab again!



  • package
    • ✔ build with ant
  • ✔ Initialize package tests


  • Quassel + qt4 doesn't support postgresql as database backend
    • ✔ Add an option to the quassel service to allow the qt5 version
  • "nixify" postfix configuration


paul, michael & qknight

  • ✔ started nextcloud packaging
  • ✔ leaps: packaged with tests:
  • ✔ made work on nixos,
  • fixed email system so that qknight can use thunderbird with STARTTLS and submission

    submissionOptions = {
      "smtpd_tls_security_level" = "encrypt";
      "smtpd_sasl_auth_enable" = "yes";
      "smtpd_client_restrictions" = "permit_sasl_authenticated,reject";
      "smtpd_sasl_type" = "dovecot";
      "smtpd_sasl_path" = "private/auth";
    };
  • LXC: Unprivileged container with NixOS as guest and as host:
    • ✔ the LXC container is started as root and spawns the LXC as user 100000, which is unprivileged on the host
    • ✔ shared read only store with the host
    • ✔ container can be built and updated on the host with nix-env


this sprint was awesome. we got so many things to work!

by qknight at November 15, 2016 12:10 PM

November 07, 2016

Anders Papitto

Scripting pulseaudio, bluetooth, jack

Posted on November 7, 2016

I’ve just leveled up my audio configuration, and there’s precious little information out there on how to script against pulseaudio and bluetooth and jack, so I’ll document it a bit.

Future versions of all snippets will live in my nixos configuration repo.

Here are some things that I have working:

  • I can run jack alongside pulseaudio. youtube videos can play at the same time as audio software.
  • I can play audio through bluetooth headphones. I can even do this simultaneously with running jack - though, only pulseaudio outputs can be sent to the bluetooth headphones.

And that’s basically it. But it’s tricky to set up for the first time. So, let’s look at the implementation.

system configuration

First of all, I’ve scrapped the default /etc/pulse/, and written my own. It’s largely the same - I went through the default and copied in each line, except that I skipped all the modules that have to do with restoring streams and devices, etc. It’s quite annoying when pulseaudio has persistent state and tries to do “smart” things - I would rather my scripts be in full control. Note the presence of some jackdbus and bluetooth modules.

In that file, I also have some pam limits configured, as well as some kernel modules and /etc/jackdrc. Those precede my most recent pass of configuration, so I’m not sure if they’re actually necessary.

Assuming that you can get jack and/or bluetooth to work the first time, the interesting bits are the scripts that make them not a pain to deal with. I have two main scripts - switch-to-jack, switch-to-bluetooth. I also have a mostly-unused switch-to-stereo for completeness.

switch-to-jack looks like this

set -x
until pacmd list-sinks | egrep -q 'jack_out'; do
    jack_control start
done
pactl set-sink-volume jack_out 50%
pacmd set-default-sink jack_out
for index in $(pacmd list-sink-inputs | grep index | awk '{ print $2 }'); do
    pacmd move-sink-input $index jack_out
done

It starts jack with jack_control. This will cause jack to take exclusive control of the sound card. pulseaudio will notice this and add a sink and source for jack - however because all the ‘smart’ modules were removed, it won’t redirect any active streams to jack. They will just freeze, because they no longer have access to the sound card.

Then, I set the volume to something low enough to be safe - disabling the ‘smarts’ means that the jack device always gets added at 100% volume, which generally is too loud. After doing that, it’s safe to tell future audio to default to playing over jack, and then to move all existing streams there.
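Since pacmd has no "move all streams" command, the final loop scrapes stream indices out of `pacmd list-sink-inputs`. A minimal Python sketch of that parsing step (the sample output format is an assumption based on typical pacmd output, not taken from the post):

```python
import re

def sink_input_indices(pacmd_output: str) -> list[int]:
    """Extract sink-input indices from `pacmd list-sink-inputs` output."""
    return [int(m) for m in re.findall(r"^\s*index:\s*(\d+)", pacmd_output, re.M)]

# Invented sample resembling pacmd's output format.
sample = """2 sink input(s) available.
    index: 7
        driver: <protocol-native.c>
    index: 12
        driver: <protocol-native.c>
"""
print(sink_input_indices(sample))  # → [7, 12]
```

Each returned index can then be fed to `pacmd move-sink-input <index> <sink>`, which is exactly what the shell loop does with grep/awk.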

switch-to-bluetooth is similar


set -x

DEVICE=$(bluetoothctl <<< devices | egrep '^Device' | awk '{ print $2 }')
until bluetoothctl <<< show | grep -q 'Powered: yes'; do
    bluetoothctl <<< 'power on'
done
until pacmd list-sinks | egrep -q 'name:.*bluez_sink'; do
    bluetoothctl <<< "connect $DEVICE"
done

TARGET_CARD=$(pacmd list-cards | grep 'name:' | egrep -o 'bluez.*[^>]')
TARGET_SINK=$(pacmd list-sinks | grep 'name:' | egrep -o 'bluez.*[^>]')

until pacmd list-cards | egrep -q 'active profile: <a2dp_sink>'; do
    pacmd set-card-profile $TARGET_CARD a2dp_sink
done

pactl set-sink-volume $TARGET_SINK 30%
pacmd set-default-sink $TARGET_SINK
for index in $(pacmd list-sink-inputs | grep index | awk '{ print $2 }'); do
    pacmd move-sink-input $index $TARGET_SINK
done

A couple of differences are:

  • we make sure bluetooth is on
  • we have to connect to the specific device. If I had more than one set of bluetooth headphones this logic might be more complicated.
  • I have to always tell pulseaudio to use a2dp instead of hsp/hfp, because again I disabled the ‘remembering’ modules in pulseaudio.

Also note that this is the ‘steady-state’ implementation. The first time you want to connect a particular bluetooth device, you have to go through a little dance that looks something like this:

$ bluetoothctl
[NEW] Controller 60:57:18:9B:AB:71 BlueZ 5.40 [default]

[bluetooth]# agent on
[bluetooth]# Agent registered

[bluetooth]# discoverable on
[bluetooth]# Changing discoverable on succeeded
[CHG] Controller 60:57:18:9B:AB:71 Discoverable: yes

[bluetooth]# scan on
[bluetooth]# Discovery started
[CHG] Controller 60:57:18:9B:AB:71 Discovering: yes
[NEW] Device E8:07:BF:00:14:14 Mixcder ShareMe 7

[bluetooth]# [CHG] Device E8:07:BF:00:14:14 RSSI: -52

[bluetooth]# scan off
[bluetooth]# [CHG] Device E8:07:BF:00:14:14 RSSI is nil
Discovery stopped
[CHG] Controller 60:57:18:9B:AB:71 Discovering: no

[bluetooth]# devices
Device E8:07:BF:00:14:14 Mixcder ShareMe 7
[bluetooth]# pair E8:07:BF:00:14:14
Attempting to pair with E8:07:BF:00:14:14
[CHG] Device E8:07:BF:00:14:14 Connected: yes

[Mixcder ShareMe 7]# [CHG] Device E8:07:BF:00:14:14 Modalias: bluetooth:v0094p5081d0101
[CHG] Device E8:07:BF:00:14:14 UUIDs: 00001108-0000-1000-8000-00805f9b34fb
[CHG] Device E8:07:BF:00:14:14 UUIDs: 0000110b-0000-1000-8000-00805f9b34fb
[CHG] Device E8:07:BF:00:14:14 UUIDs: 0000110c-0000-1000-8000-00805f9b34fb
[CHG] Device E8:07:BF:00:14:14 UUIDs: 0000110e-0000-1000-8000-00805f9b34fb
[CHG] Device E8:07:BF:00:14:14 UUIDs: 0000111e-0000-1000-8000-00805f9b34fb
[CHG] Device E8:07:BF:00:14:14 UUIDs: 00001200-0000-1000-8000-00805f9b34fb
[CHG] Device E8:07:BF:00:14:14 Paired: yes
Pairing successful

[CHG] Device E8:07:BF:00:14:14 Connected: no

[bluetooth]# connect E8:07:BF:00:14:14
Attempting to connect to E8:07:BF:00:14:14
[CHG] Device E8:07:BF:00:14:14 Connected: yes
Connection successful

[Mixcder ShareMe 7]# exit
[Mixcder ShareMe 7]# Agent unregistered
[DEL] Controller 60:57:18:9B:AB:71 BlueZ 5.40 [default]

Abandoning ship

Laptop suspend causes bluetooth to disconnect. With the pulseaudio ‘smarts’ disabled, as far as I’m aware the bluetooth-connected streams will just die. To work around this, I switch everything to jack right before shutting down with a systemd service.
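Such a pre-suspend hook can be expressed as a small systemd unit; a hedged sketch (unit name and script path are assumptions, not the author's actual service):

```ini
[Unit]
Description=Move audio streams off bluetooth before suspend
Before=sleep.target

[Service]
Type=oneshot
ExecStart=/run/current-system/sw/bin/switch-to-jack

[Install]
WantedBy=sleep.target
```

`WantedBy=sleep.target` plus `Before=sleep.target` makes systemd run the script before the machine actually suspends.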

Other tips

Given the command line tooling, it’s not really necessary to use any guis. However, I found it convenient to use pavucontrol while setting all this up just to keep an eye on what was active and which streams were going where.

Note that I set the volume with pactl, and most other things use pacmd. I’m not sure what the exact differences between these tools are, but pacmd doesn’t support setting a percent-based volume.

November 07, 2016 12:00 AM

November 06, 2016

Joachim Schiele

pulseaudio tcp streaming


this is a simple setup for streaming pulseaudio streams over the network.


hardware.pulseaudio = {
  enable = true;
  tcp.enable = true;
  tcp.anonymousClients.allowedIpRanges = [ "" ];
  zeroconf.publish.enable = true;
};



to use the new setup simply play some music and in pavucontrol you can select a different output device for the listed stream.

by qknight at November 06, 2016 12:35 AM

November 05, 2016

Joachim Schiele



a few months back i've replaced the odroid XU4 with this APU 2c4 board.

installing nixos

first have a look into the apu2 manual.

since there is no VGA/DVI output but only a RS232 serial interface we need to use that:

  1. serial cable

    for simplicity i soldered one myself, the pins are:

    pin 2 to pin 3
    pin 3 to pin 2
    pin 5 to pin 5 (GND)

    i've been using this with a USB-to-RS232 converter

    # lsusb
    Bus 003 Device 003: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port
  2. connecting via serial console:

    picocom /dev/ttyUSB0 -b 115200
  3. nixos boot cd

    download the nixos-minimal-16.03.714.69420c5-x86_64-linux.iso and use unetbootin to deploy it to a USB stick. afterwards mount the first partition of the USB stick and append this to the kernel command line in the syslinux.cfg file:


    info: using the serial console you can see the GRUB output, see the kernel's output after boot and finally get a shell.

  4. booting from the USB stick

    the apu 2c4 features coreboot and the process is straightforward, just hit F10 and select the USB stick

  5. nixos installation

    basically follow the nixos manual

    info: but don't forget to include this line in configuration.nix:

    boot.kernelParams = [ "console=ttyS0,115200n8" ];


# Edit this configuration file to define what should be installed on
# your system.  Help is available in the configuration.nix(5) man page
# and in the NixOS manual (accessible by running ‘nixos-help’).

{ config, pkgs, ... }:

let
  pw = import ./passwords.nix;
  # setfacl -R -m u:joachim:rwx /backup
in
{
  imports =
    [ # Include the results of the hardware scan.
      ./hardware-configuration.nix
    ];

  # Use the GRUB 2 boot loader.
  boot.loader.grub.enable = true;
  boot.loader.grub.version = 2;
  # Define on which hard drive you want to install Grub.
  boot.loader.grub.device = "/dev/sda";

  boot.kernelParams = [ "console=ttyS0,115200n8" ];

  networking = {
    hostName = "apu-nixi"; # Define your hostname.
    bridges.br0.interfaces = [ "enp1s0" "wlp4s0" ];
    firewall = {
      enable = true;
      allowPing = true;
      allowedTCPPorts = [ 22 ];
      #allowedUDPPorts = [ 5353 ];
    };
  };

  # networking.wireless.enable = true;  # Enables wireless support via wpa_supplicant.

  # Select internationalisation properties.
  i18n = {
    consoleFont = "Lat2-Terminus16";
    consoleKeyMap = "us";
    defaultLocale = "en_US.UTF-8";
  };

  security.sudo.enable = true;

  programs.zsh.enable = true;
  users.defaultUserShell = "/run/current-system/sw/bin/zsh";

  services.nscd.enable = true;
  services.ntp.enable = true;
  services.klogd.enable = true;
  services.nixosManual.enable = false; # slows down nixos-rebuilds, also requires nixpkgs.config.allowUnfree here..?
  services.xserver.enable = false;

  services.cron = {
    enable = true;
    mailto = "";
    systemCronJobs = [
      "0 0,8,16 * * * joachim cd /backup/; ./"
      #*     *     *   *    *            command to be executed
      #-     -     -   -    -
      #|     |     |   |    |
      #|     |     |   |    +----- day of week (0 - 6) (Sunday=0)
      #|     |     |   +------- month (1 - 12)
      #|     |     +--------- day of month (1 - 31)
      #|     +----------- hour (0 - 23)
      #+------------- min (0 - 59)
    ];
  };

  # Set your time zone.
  # time.timeZone = "Europe/Amsterdam";
  time.timeZone = "Europe/Berlin";

  # List packages installed in system profile. To search by name, run:
  environment.systemPackages = with pkgs; [
  ];

  # Enable the OpenSSH daemon.
  services.openssh = {
    enable = true;
    permitRootLogin = "without-password";
  }; = [ "sys-subsystem-net-devices-wlp4s0.device" ];

  services.hostapd = {
    enable = true;
    wpaPassphrase = pw.wpaPassphrase;
    interface = "wlp4s0";
  };

  # Define a user account. Don't forget to set a password with ‘passwd’.
  users.extraUsers.joachim = {
    isNormalUser = true;
    uid = 1000;
  };

  # The NixOS release to be compatible with for stateful data such as databases.
  system.stateVersion = "16.09";
}
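The cron entry above ("0 0,8,16 * * *") fires at 00:00, 08:00 and 16:00, as the column comments explain. A small illustrative sketch (not part of the original setup) of expanding such a comma-separated cron field into concrete values:

```python
def expand_field(field: str, lo: int, hi: int) -> list[int]:
    """Expand a cron field like '0,8,16' or '*' into the concrete values it matches."""
    if field == "*":
        return list(range(lo, hi + 1))
    return sorted(int(part) for part in field.split(","))

# Hour field of "0 0,8,16 * * *": runs at 00:00, 08:00 and 16:00.
print(expand_field("0,8,16", 0, 23))  # → [0, 8, 16]
```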

WD passport USB 3.0 bug

with a WD passport USB 3.0 disk i can't boot the system since i hit this bug.

SeaBIOS (version ?-20160311_005214-0c3a223c2ee6)
XHCI init on dev 00:10.0: regs @ 0xfea22000, 4 ports, 32 slots, 32 byte contexts
XHCI    extcap 0x1 @ fea22500
XHCI    protocol USB  3.00, 2 ports (offset 1), def 0
XHCI    protocol USB  2.00, 2 ports (offset 3), def 10
XHCI    extcap 0xa @ fea22540
Found 2 serial ports
ATA controller 1 at 4010/4020/0 (irq 0 dev 88)
EHCI init on dev 00:13.0 (regs=0xfea25420)
ATA controller 2 at 4018/4024/0 (irq 0 dev 88)
Searching bootorder for: /pci@i0cf8/*@14,7
Searching bootorder for: /rom@img/memtest
Searching bootorder for: /rom@img/setup
ata0-0: KINGSTON SMS200S360G ATA-8 Hard-Disk (57241 MiBytes)
Searching bootorder for: /pci@i0cf8/*@11/drive@0/disk@0
XHCI port #3: 0x002202a0, powered, pls 5, speed 0 [ - ]
XHCI port #1: 0x00021203, powered, enabled, pls 0, speed 4 [Super]
Searching bootorder for: /pci@i0cf8/usb@10/storage@1/*@0/*@0,0
Searching bootorder for: /pci@i0cf8/usb@10/usb-*@1
USB MSC vendor='WD' product='My Passport 0827' rev='1012' type=0 removable=0
call16 with invalid stack
PCEngines apu2
coreboot build 20160311


how to recover a bricked BIOS (after flashing)? on the APU1 the flash chip was SPI and there's a header, so some wires plus a ch341a programmer should do it.


the APU i'm using also has a Mini-PCIe wireless card built in, and you can choose from these two cards:

the access point works nicely with my android devices as well as my linux laptops.

buy the APU

if you want to buy an APU, buy the APU bundle.


the APU is running NixOS and is very stable and fast while using little energy. would use/buy again!

by qknight at November 05, 2016 02:35 PM

November 04, 2016

Edward Tjörnhammar

Plant Based UML Wiki

Turn Gitit into a UML renderer

November 04, 2016 12:00 AM

October 26, 2016

Sander van der Burg

Push and pull deployment of Nix packages

In earlier blog posts, I have covered various general concepts of the Nix package manager. For example, I have written three blog posts explaining Nix from a system administrator, programming language, and a sales perspective.

Furthermore, I have written a number of blog posts demonstrating how Nix can be used, such as managing a set of private packages, packaging binary-only software, and Nix's declarative nature -- the Nix package manager (as well as other tools in the Nix project) is driven by declarative specifications describing the structure of the system to deploy (e.g. the packages and their dependencies). From such a specification, Nix carries out all required deployment activities, including building the packages from their sources, distributing packages from the producer to the consumer site, installation and uninstallation.

In addition to the execution of these activities, Nix provides a number of powerful non-functional properties, such as strong guarantees that dependency specifications are complete, that package builds can be reproduced, and that upgrades are non-destructive, atomic, and can always be rolled back.

Although many of these blog posts cover the activities that the Nix package manager typically carries out (e.g. the three explanation recipes cover building and the blog post explaining the user environment concepts discusses installing, uninstalling and garbage collection), I have not elaborated much about the distribution mechanisms.

A while ago, I noticed that people were using one of my earlier blog posts as a reference. Despite being able to set up a collection of private Nix expressions, there were still some open questions left, such as fully setting up a private package repository, including the distribution of package builds.

In this blog post, I will explain Nix's distribution concepts and show how they can be applied to private package builds. As with my earlier blog post on managing private packages, these steps should be relatively easy to repeat.

Source and transparent binary deployments

As explained in my earlier blog post on packaging private software, Nix is in principle a source-based package manager. Nix packages are driven by Nix expressions describing how to build packages from source code and all its build-time dependencies, for example:

{ stdenv, fetchurl
, pkgconfig, perl, glib, gpm, slang, zip, unzip, file, gettext
, libX11, libICE, e2fsprogs
}:

stdenv.mkDerivation {
  name = "mc-4.8.12";

  src = fetchurl {
    url =;
    sha256 = "15lkwcis0labshq9k8c2fqdwv8az2c87qpdqwp5p31s8gb1gqm0h";
  };

  buildInputs = [ pkgconfig perl glib gpm slang zip unzip file gettext
                  libX11 libICE e2fsprogs ];

  meta = {
    description = "File Manager and User Shell for the GNU Project";
    homepage =;
    license = "GPLv2+";
    maintainers = [ stdenv.lib.maintainers.sander ];
  };
}

The above expression (mc.nix) describes how to build Midnight Commander from source code and its dependencies, such as pkgconfig, perl and glib. Because the expression does not specify any build procedure, the Nix builder environment falls back to the standard GNU Autotools build procedure, which typically consists of the following build steps: ./configure; make; make install.

Besides describing how to build a package, we must also compose a package by providing it the right versions or variants of the dependencies that it requires. Composition is typically done in a second expression referring to the former:

{ nixpkgs ? <nixpkgs>
, system ? builtins.currentSystem
}:

let
  pkgs = import nixpkgs { inherit system; };

  callPackage = pkgs.lib.callPackageWith (pkgs // pkgs.xlibs // self);

  self = rec {
    mc = callPackage ./mc.nix { };

    # Other package imports
  };
in
self
In the above expression (default.nix), the mc attribute imports our former expression and provides the build-time dependencies as function parameters. The dependencies originate from the Nixpkgs collection.

To build the Nix expression shown above, it typically suffices to run:

$ nix-build -A mc

The result of a Nix build is a Nix store path in which the build result is stored.

As may be noticed, the prefix of the package name (jal99995sk6rixym4gfwcagmdiqrwv9a) is a SHA256 hash code that has been derived from all involved build dependencies, such as the source tarball, build-time dependencies and build scripts. Changing any of these dependencies (such as the version of the Midnight Commander) triggers a rebuild and yields a different hash code. Because hash codes ensure that the Nix store paths to packages will be unique, we can safely store multiple versions and variants of the same packages next to each other.
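The principle behind that hash prefix (hash over all build inputs; any change yields a different store path) can be illustrated with a toy sketch. This is an illustration only, not Nix's actual derivation hashing, and the input strings are invented:

```python
import hashlib

def toy_store_hash(*inputs: str) -> str:
    """Hash all build inputs together; changing any one changes the result."""
    h = hashlib.sha256()
    for item in inputs:
        h.update(item.encode())
    return h.hexdigest()[:32]

a = toy_store_hash("mc-4.8.12.tar.gz", "glib-2.40", "./configure; make; make install")
b = toy_store_hash("mc-4.8.13.tar.gz", "glib-2.40", "./configure; make; make install")
print(a != b)  # bumping the source tarball yields a different prefix
```

Because distinct inputs map to distinct prefixes, multiple versions and variants can coexist in the store without collisions.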

In addition to executing builds, Nix takes many precautions to ensure purity. For example, package builds are carried out in isolated environments in which only the specified dependencies can be found. Moreover, Nix uses all kinds of techniques to make builds more deterministic, such as resetting the timestamps of all files to UNIX time 1, making build outputs read-only, etc.

The combination of unique hash codes and pure builds results in a property called transparent binary deployments -- a package with an identical hash prefix results in a (nearly) bit-identical build regardless of the machine on which the build was performed. If we want to deploy a package with a certain hash prefix that already exists on a trustable remote machine, we can transfer the package instead of building it again.

Distribution models

The Nix package manager (as well as its sub projects) support two kinds of distribution models -- push and pull deployment.

Push deployment

Push deployment is IMO conceptually the simplest, but at the same time, infrequently used on package management level and not very well-known. The idea of push deployment is that you take an existing package build on your machine (the producer) and transfer it elsewhere, including all its required dependencies.

With Nix this can be easily accomplished with the nix-copy-closure command, for example:

$ nix-copy-closure --to \

The above command serializes the Midnight Commander store path including all its dependencies, transfers them to the provided target machine through SSH, and then de-serializes and imports the store paths into the remote Nix store.

An implication of push deployment is that the producer requires authority over the consumer machine. Moreover, nix-copy-closure can transfer store paths from one machine to another, but does not execute any additional deployment steps, such as the "installation" of packages (in Nix packages become available to end-users by composing a Nix user environment that is in the user's PATH).

Pull deployment

With pull deployment the consumer machine is in control, instead of the producer machine. As a result, the producer does not require any authority over another machine.

As with push deployment, we can also use the nix-copy-closure command for pull deployment:

$ nix-copy-closure --from \

The above command invocation is similar to the previous example, but instead copies a closure of the Midnight Commander from the producer machine.

Aside from nix-copy-closure, Nix offers another pull deployment mechanism that is more powerful and more commonly used, namely: channels. This is what people typically use when installing "end-user" packages with Nix.

The idea of channels is that they are remote HTTP/HTTPS URLs to which you can subscribe. They provide a set of Nix expressions and a binary cache of substitutes:

$ nix-channel --add

The above command adds the NixOS unstable channel to the list of channels. Running the following command:

$ nix-channel --update

fetches or updates the collection of Nix expressions from the channels, allowing us to install any package that it provides. By default, the NIX_PATH environment variable is configured in such a way that it refers to the expressions obtained from channels:

$ echo $NIX_PATH

With a preconfigured channel, we can install any package we want including prebuilt substitutes, by running:

$ nix-env -i mc

The above command installs the Midnight Commander from the set of Nix expressions from the channel and automatically fetches the substitutes of all involved dependencies. After the installation succeeds, we can start it by running:

$ mc

The push deployment mechanisms of nix-copy-closure and pull deployment mechanisms of the channels can also be combined. When running the following command:

$ nix-copy-closure --to \
/nix/store/jal99995sk6rixym4gfwcagmdiqrwv9a-mc-4.8.12 \

The consumer machine first attempts to pull the substitutes of the dependency closure from the binary caches, and the producer then pushes the missing packages to the consumer machine. This approach is particularly useful if the connection between the producer and the consumer is slow, but the connection between the consumer and the binary cache is fast.

Setting up a private binary cache

With the concepts of push and pull deployment explained, you may wonder how these mechanisms can be applied to your own private set of Nix packages.

Fortunately, with nix-copy-closure no additional work is required as it works on any store path, regardless of how it is produced. However, when it is desired to set up your own private binary cache, some additional work is required.

As channels/binary caches use HTTP as a transport protocol, we need to set up a web server with a document root folder serving static files. In NixOS, we can easily configure an Apache HTTP server instance by adding the following lines to the NixOS configuration (/etc/nixos/configuration.nix):

services.httpd = {
  enable = true;
  adminAddr = "";
  hostName = "producer";
  documentRoot = "/var/www";
};

With the nix-push command we can generate a binary cache for our private package set (that includes the Midnight Commander and all its dependencies):

$ nix-push --dest /var/www $(nix-build default.nix)

As may be observed by inspecting the document root folder, we now have a set of compressed NAR files representing the serialized forms of all packages involved and narinfo files capturing a package's metadata:

$ ls /var/www
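A narinfo file is a plain "Key: value" text file; a minimal sketch of reading one (the field names follow the common narinfo layout, and the sample values below are invented except for the store path shown earlier):

```python
def parse_narinfo(text: str) -> dict[str, str]:
    """Parse the simple 'Key: value' lines of a .narinfo file."""
    fields = {}
    for line in text.splitlines():
        if ": " in line:
            key, value = line.split(": ", 1)
            fields[key] = value
    return fields

sample = """StorePath: /nix/store/jal99995sk6rixym4gfwcagmdiqrwv9a-mc-4.8.12
URL: nar/0f2gs4...nar.xz
Compression: xz
NarSize: 1560016
"""
info = parse_narinfo(sample)
print(info["Compression"])  # → xz
```

This metadata is what lets a consumer decide which NAR file to download and how to decompress it.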

In addition to the binary cache, we also need to make the corresponding Nix expressions available. This can be done by simply creating a tarball out of the private Nix expressions and publishing the corresponding file through the web server:

$ tar cfvz /var/www/custompkgs.tar.gz custompkgs

On the consumer machine, we need to configure the binary cache by adding the following property to /etc/nix/nix.conf:

binary-caches =

In NixOS, this property can be set by adding the following property to /etc/nixos/configuration.nix:

nix.binaryCaches = [ "" ];
nix.requireSignedBinaryCaches = false;

Additionally, we have to configure the NIX_PATH environment variable to refer to our tarball of Nix expressions:

$ export NIX_PATH=custompkgs=$NIX_PATH

Now, when we run the following command-line instruction:

$ nix-env -f '<custompkgs>' -iA mc
downloading ‘’... [0/0 KiB, 0.0 KiB/s]
unpacking ‘’...
installing ‘mc-4.8.12’

We can install our custom Midnight Commander package by pulling the package from our own custom HTTP server without explicitly obtaining the set of Nix expressions or building it from source code.


In this blog post, I have explained Nix's push and pull deployment concepts and shown how we can use them for a set of private packages, including setting up a private binary cache. The basic idea of binary cache distribution is quite simple: create a tarball of your private Nix expressions, construct a binary cache with nix-push and publish the files with an HTTP server.

In real-life production scenarios, there are typically more aspects you need to take into consideration beyond the details mentioned in this blog post.

For example, to make binary caches safe and trustable, it is also recommended to use HTTPS instead of plain HTTP connections. Moreover, you may want to sign the substitutes with a cryptographic key. The manual page of nix-push provides more details on how to set this up.

An inconvenient aspect of the binary cache generation approach shown in this blog post (in addition to the fact that we need to set up an HTTP server) is that the approach is static -- whenever we have a new version of a package built, we need to regenerate the binary cache and the package set.

To alleviate these inconveniences, there is also a utility called nix-serve that spawns a standalone web server generating substitutes on the fly.

Moreover, the newest version of Nix also provides a so-called binary cache Nix store. When Nix performs operations on the Nix store, it basically talks to a module with a standardized interface. When using the binary cache store module (instead of the standalone or remote Nix store plugin), Nix automatically generates NAR files for any package that gets imported into the Nix store, for example after the successful completion of a build. Besides an ordinary binary cache store plugin, there is also a plugin capable of uploading substitutes directly to an Amazon AWS S3 bucket.

Apart from the Nix package manager, other Nix-related projects also use Nix's distribution facilities to some extent. Hydra, the Nix-based continuous integration server, supports pull deployment as it can dynamically generate channels from jobsets. Users can subscribe to these channels to install the bleeding-edge builds of a project.

NixOps, a tool that deploys networks of NixOS configurations and automatically instantiates VMs in the cloud, as well as Disnix, a tool that deploys service-oriented systems (distributed systems that can be decomposed into "distributable units", a.k.a. services), both use push deployment -- packages are distributed from a coordinator machine that has authority over a collection of target machines.

Concluding remarks

After writing this blog post and some thinking, I have become quite curious to see what a pull deployment variant of Disnix (and maybe NixOps) would look like.

Although Disnix and NixOps suit all my needs at the moment, I can imagine that when we apply the same concepts in large organizations with multiple distributed teams, it can no longer be considered practical to work with a centralized deployment approach that requires authority over all build artefacts and the entire production environment.

by Sander van der Burg ( at October 26, 2016 09:38 PM

Joachim Schiele



inspired by the shackspace's grafana usage for moisture/temperature monitoring i wanted to use grafana myself. since i'm also active in the fablab neckar-alb, where we have a nice project to monitor, called cycle-logistics tübingen, this seemed to be a good opportunity to apply this toolchain.

we are interested in the voting behaviour:

this blog posting documents all steps needed to rebuild the setup so you can leverage this toolchain for your own projects!

here is a screenshot of how it looks:


below you can find a detailed listing and discussion of the single programs used. the source code can be found on github, except for the nixos specific parts, which are listed below exclusively.


selenium is used to visit the voting page, to parse the DOM tree and to export the data as data.json.

to execute it one needs a python environment with two additional libraries. nix-shell along with collect_data-environment.nix is used to create that environment on the fly.

#! /usr/bin/env nix-shell
#! nix-shell collect_data-environment.nix --command 'python3'

from selenium import webdriver
from selenium import selenium
from selenium.webdriver.common.keys import Keys

from selenium.webdriver.common.by import By
from import WebDriverWait # available since 2.4.0
from import expected_conditions as EC # available since 2.26.0
from pyvirtualdisplay import Display

import sys
import os

display = Display(visible=0, size=(800, 600))

from distutils.version import LooseVersion, StrictVersion
if LooseVersion(webdriver.__version__) < LooseVersion("2.51"):
    sys.exit("error: version of selenium ("
        + str(LooseVersion(webdriver.__version__))
        + ") is too old, needs 2.51 at least")

ff = webdriver.Firefox()

st = ""

v = ff.execute_script("""
var t = document.getElementById('profile').childNodes;
var ret = []
for (var i = 0; i < t.length; i++) {
  if ('id' in t[i]) {
    if(t[i].id.includes('profil-')) {
      var myID = t[i].id.replace("profil-","");
      var myVotes = t[i].getElementsByClassName('profile-txt-stimmen')[0].innerHTML;
      var myTitle = t[i].getElementsByClassName('archive-untertitel')[0].innerHTML;
      var myVerein = t[i].getElementsByClassName('archive-titel')[0].innerHTML;
      //console.log(myID,myVerein, myTitle, myVotes)
      var r = new Object();
      r.Id = parseInt(myID);
      r.Votes = parseInt(myVotes);
      r.Verein = myVerein;
      r.Title = myTitle;
      ret.push(r);
    }
  }
}
var date = new Date();
var exp = {};
exp.Date = date;
exp.Data = ret;
var j = JSON.stringify(exp,  null, "\t");
return j;
""")

print (v)
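The script prints a JSON document of the shape {Date, Data: [{Id, Votes, Verein, Title}]}. A small sketch (sample data invented) of consuming such a data.json downstream:

```python
import json

def total_votes(doc: str) -> int:
    """Sum the Votes field across all entries in an exported data.json."""
    parsed = json.loads(doc)
    return sum(entry["Votes"] for entry in parsed["Data"])

# Invented sample matching the exporter's JSON shape.
sample = json.dumps({
    "Date": "2016-10-26T20:00:00.000Z",
    "Data": [
        {"Id": 1, "Votes": 120, "Verein": "Verein A", "Title": "Project A"},
        {"Id": 2, "Votes": 80, "Verein": "Verein B", "Title": "Project B"},
    ],
})
print(total_votes(sample))  # → 200
```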



with import <nixpkgs> { };

let
  pkgs1 = import (pkgs.fetchFromGitHub {
    owner = "qknight";
    repo = "nixpkgs";
    rev = "a1dd8b2a5b035b758f23584dbf212dfbf3bff67d";
    sha256 = "1zn9znsjg6hw99mshs0yjpcnh9cf2h0y5fw37hj6pfzvvxfrfp9j";
  }) {};
in

pkgs1.python3Packages.buildPythonPackage rec {
  name = "crawler";
  version = "0.0.1";

  buildInputs = [ pkgs.firefox xorg.xorgserver ];

  propagatedBuildInputs = with pkgs1.python3Packages; [
    virtual-display selenium
  ];
}

info: in the above environment, two different versions of nixpkgs are mixed, which is a nix speciality.

virtual-display and selenium come from an older nixpkgs checkout, called pkgs1, while firefox is the one coming with the nixos operating system, called pkgs.

golang import

the go based importer is very simple and basically follows the example from the influxdb client code base.

if you want to build the inject_into_fluxdb binary you can simply use nix-shell and inside that shell type go build. you have to put that binary into the right place, which is /var/lib/crawler/, manually, since this was only a prototype.

warning use nixos-rebuild switch with the nixos specific changes from below first so that the nixos system will create the user/group and directory (crawler/crawler and /var/lib/crawler). and when you deploy stuff into that directory, make sure you use chown crawler:crawler . -R in that directory.


package main

import (
	"encoding/json"
	"io/ioutil"
	"log"
	"strconv"
	"time"
	//   "os"

	client ""
	""
)

type rec struct {
	Id     int
	Votes  int
	Verein string
	Title  string
}

type json_message struct {
	Date string
	Data []rec
}

const (
	MyDB     = "square_holes"
	username = "bubba"
	password = "bumblebeetuna"
)

func main() {

	f, err2 := ioutil.ReadFile("data.json")
	if err2 != nil {
		log.Fatalln("Error: ", err2)
	}

	var l json_message
	err2 = json.Unmarshal(f, &l)
	if err2 != nil {
		log.Fatalln("Error: ", err2)
	}

	// Make client
	c, err := client.NewHTTPClient(client.HTTPConfig{
		Addr:     "http://localhost:8086",
		Username: username,
		Password: password,
	})
	if err != nil {
		log.Fatalln("Error: ", err)
	}

	// Create a new point batch
	bp, err := client.NewBatchPoints(client.BatchPointsConfig{
		Database:  MyDB,
		Precision: "s",
	})
	if err != nil {
		log.Fatalln("Error: ", err)
	}

	layout := "2006-01-02T15:04:05.000Z"
	t, err3 := time.Parse(layout, l.Date)
	if err3 != nil {
		log.Fatalln("Error: ", err3)
	}

	for _, r := range l.Data {
		pt, err := client.NewPoint("frei", map[string]string{"Id": strconv.Itoa(r.Id)}, structs.Map(r), t.Local())
		if err != nil {
			log.Fatalln("Error: ", err)
		}
		bp.AddPoint(pt)
	}

	// Write the batch
	if err := c.Write(bp); err != nil {
		log.Fatalln("Error: ", err)
	}
}

go2nix in version 1.1.1 has been used to generate the default.nix and deps.nix automatically. this is also the reason for the weird directory naming inside the git repo.

warning: there are two different implementations of a go to nix dependency converter and both are called go2nix. i was using the one from kamilchm; the other never worked for me.

# This file was generated by go2nix.
[
  {
    goPackagePath = "";
    fetch = {
      type = "git";
      url = "";
      rev = "dc3312cb1a4513a366c4c9e622ad55c32df12ed3";
      sha256 = "0wgm6shjf6pzapqphs576dv7rnajgv580rlp0n08zbg6fxf544cd";
    };
  }
  {
    goPackagePath = "";
    fetch = {
      type = "git";
      url = "";
      rev = "6fa145943a9723f9660586450f4cdcf72a801816";
      sha256 = "14ggx1als2hz0227xlps8klhn5s478kczqx6i6l66pxidmqz1d61";
    };
  }
]

info: go2nix generates a default.nix which is basically a drop-in when used in nixpkgs but i wanted to use it with nix-shell so a few lines needed changes. just be aware of that!

{ pkgs ? import <nixpkgs> {} }:

let
  stdenv = pkgs.stdenv;
  buildGoPackage = pkgs.buildGoPackage;
  fetchgit = pkgs.fetchgit;
  fetchhg = pkgs.fetchhg;
  fetchbzr = pkgs.fetchbzr;
  fetchsvn = pkgs.fetchsvn;
in

buildGoPackage rec {
  name = "crawler-${version}";
  version = "20161024-${stdenv.lib.strings.substring 0 7 rev}";
  rev = "6159f49025fd5500e5c2cf8ceeca4295e72c1de5";

  goPackagePath = "fooooooooooo";

  src = ./.;

  goDeps = ./deps.nix;

  meta = { };
}


info: in the configuration.nix excerpt apache, which is called httpd in nixos, is used as a reverse proxy. you don't have to follow that example but it is a nice setup once one gets it working.

  imports =
    [ # Include the results of the hardware scan.
  services.grafana = {
    security = {
    users = {
      allowSignUp = false;
      allowOrgCreate = true;
    analytics.reporting.enable = false;


  services.influxdb = {
    enable = true;
  services.httpd = {
    enable = true;
    enablePHP = true;
    logPerVirtualHost = true;
    hostName = "";
    extraModules = [
      { name = "php7"; path = "${pkgs.php}/modules/"; }
      { name = "deflate"; path = "${pkgs.apacheHttpd}/modules/"; }
      { name = "proxy_wstunnel"; path = "${pkgs.apacheHttpd}/modules/"; }
 virtualHosts =
      # (https)
        hostName = "";
        serverAliases = [ "" "" ];

        documentRoot = "/www/";
        enableSSL = true;
        sslServerCert = "/ssl/acme/";
        sslServerKey = "/ssl/acme/";
        extraConfig = ''
          Alias /.well-known/acme-challenge /var/www/challenges/
          <Directory "/var/www/challenges/">
            Options -Indexes
            AllowOverride None
            Order allow,deny
            Allow from all
            Require all granted
          </Directory>
          RedirectMatch ^/$ /main/
          #Alias /main /www/
          <Directory "/www/">
            Options -Indexes
            AllowOverride None
            Require all granted
          </Directory>

          SetOutputFilter DEFLATE

          <Directory "/www/">
            Options -Indexes
            AllowOverride None
            Order allow,deny
            Allow from all
          </Directory>
          # prevent a forward proxy! 
          ProxyRequests off

          # User-Agent / browser identification is used from the original client
          ProxyVia Off
          ProxyPreserveHost On

          RewriteEngine On
          RewriteRule ^/grafana$ /grafana/ [R]

          <Proxy *>
          Order deny,allow
          Allow from all
          </Proxy>

          ProxyPass /grafana/ retry=0
          ProxyPassReverse /grafana/


simply put crawler.nix into /etc/nixos and reference it from configuration.nix using the imports directive.

{ config, pkgs, lib, ... } @ args:

#with lib;

let
  stateDir = "/var/lib/crawler/";
in {
  config = {
    users = {
      users.crawler= {
        #note this is a hack since this is not commited to the nixpkgs
        uid             = 2147483647;
        description     = "crawler server user";
        group           = "crawler";
        home            = stateDir;
        createHome      = true;

      groups.crawler= {
        #note this is a hack since this is not commited to the nixpkgs
        gid = 2147483648;
      };
    }; = {
      script = ''
          source /etc/profile
          export HOME=${stateDir}
          ${stateDir}/ > ${stateDir}/data.json
          cd ${stateDir}
      '';
      serviceConfig = {
        Nice = 19;
        IOSchedulingClass = "idle";
        PrivateTmp = "yes";
        NoNewPrivileges = "yes";
        ReadWriteDirectories = stateDir;
        WorkingDirectory = stateDir;
      };
    };

    systemd.timers.crawler = { 
      description = "crawler service";
      partOf      = [ "crawler.service" ];
      wantedBy    = [ "" ];
      timerConfig = {
        OnCalendar = "*:0/30";
        Persistent = true;
      };
    };
  };
}

info: note the timerConfig.OnCalendar setting which starts the crawling every 30 minutes.
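
for illustration, the *:0/30 spec can be expanded by hand; this little sketch does not use systemd itself, it just mirrors how a start/step minute spec unfolds within one hour:

```python
def expand_minute_spec(start, step, minutes_per_hour=60):
    """Expand a systemd-style minute spec like 0/30 into concrete minutes."""
    return list(range(start, minutes_per_hour, step))

# "*:0/30" -> the timer fires at minute 0 and minute 30 of every hour
print(expand_minute_spec(0, 30))  # [0, 30]
```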


[root@nixcloud:/var/lib/crawler]# ls -lathr
total 5.6M
drwxr-xr-x 21 root    root    4.0K Oct 24 18:09 ..
drwx------  3 crawler crawler 4.0K Oct 24 23:24 .cache
drwx------  3 crawler crawler 4.0K Oct 24 23:24 .dbus
drwxr-xr-x  2 crawler crawler 4.0K Oct 24 23:24 Desktop
drwx------  4 crawler crawler 4.0K Oct 24 23:24 .mozilla
drwxr-xr-x  3 crawler crawler 4.0K Oct 25 13:37
-rw-r--r--  1 crawler crawler  490 Oct 25 13:37 collect_data-environment.nix
-rwxr-xr-x  1 crawler crawler 1.8K Oct 25 18:15
drwx------  8 crawler crawler 4.0K Oct 25 18:15 .
drwxr-xr-x  8 crawler crawler 4.0K Oct 25 18:15 .git
-rwxr-xr-x  1 crawler crawler 5.6M Oct 25 18:16 inject_into_fluxdb
-rw-r--r--  1 crawler crawler 5.1K Oct 27 12:30 data.json


this is an example of the data.json which is generated by selenium. with jq, a very nice tool for processing json in a shell, one can experiment with the values.

{
  "Date": "2016-10-27T11:00:55.123Z",
  "Data": [
    {
      "Id": 338,
      "Votes": 2252,
      "Verein": "Ziegenprojekt am Jusi und Florian",
      "Title": "Schwäbischer Albverein Kohlberg/Kappishäuseren"
    },
    {
      "Id": 215,
      "Votes": 2220,
      "Verein": "„Karl, der Käfer, wurde nicht gefragt …“ – ein Baumprojekt",
      "Title": "Waldkindergarten Schurwaldspatzen e.V."
    },
    {
      "Id": 194,
      "Votes": 34,
      "Verein": "Plankton: Das wilde Treiben im Baggersee!",
      "Title": "Tübinger Mikroskopische Gesellschaft e.V. (Tümpelgruppe)"
    }
  ]
}

jq usage example

cat data.json | jq '.Data[0]'
{
  "Id": 338,
  "Votes": 2252,
  "Verein": "Ziegenprojekt am Jusi und Florian",
  "Title": "Schwäbischer Albverein Kohlberg/Kappishäuseren"
}
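
the same lookup jq does can be reproduced in a few lines of Python, which is handy when the data needs further processing. the records below are the ones from the data.json example above:

```python
import json

# same records as in the data.json example above
data = json.loads('''
{
  "Date": "2016-10-27T11:00:55.123Z",
  "Data": [
    {"Id": 338, "Votes": 2252, "Verein": "Ziegenprojekt am Jusi und Florian",
     "Title": "Schwäbischer Albverein Kohlberg/Kappishäuseren"},
    {"Id": 215, "Votes": 2220, "Verein": "„Karl, der Käfer, wurde nicht gefragt …“ – ein Baumprojekt",
     "Title": "Waldkindergarten Schurwaldspatzen e.V."},
    {"Id": 194, "Votes": 34, "Verein": "Plankton: Das wilde Treiben im Baggersee!",
     "Title": "Tübinger Mikroskopische Gesellschaft e.V. (Tümpelgruppe)"}
  ]
}
''')

# equivalent of jq '.Data[0]'
print(data["Data"][0]["Id"])  # 338

# going further than the jq one-liner: the record with the most votes
top = max(data["Data"], key=lambda r: r["Votes"])
print(top["Verein"])  # Ziegenprojekt am Jusi und Florian
```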

grafana setup

first you need to add an influxdb data source:

based on that you need to configure the graph to use the influxdb source:


hope you enjoyed reading this and if you have further questions, drop an email to:



by qknight at October 26, 2016 04:35 PM

October 16, 2016

Munich NixOS Meetup

Nix(OS) Introduction & Hackathon

Munich NixOS Meetup

Curry Club Augsburg & OpenLab Augsburg present: the second NixOS hackathon  //Augsburg  

We’ll have three days of hacking on all things Nix.

On Saturday there will be an extensive introduction to Nix(OS) for beginners. With NixOS install party!

If you want to present the project you are working on, we are going to have a time of 10–20 minute presentations on Friday (or alternatively Sunday).

Food & drinks are available, whether there will be sponsoring for catering will be clear in a few days.

Please RSVP so we can estimate the numbers.

Augsburg - Germany

Friday, November 4 at 11:00 AM


October 16, 2016 04:32 PM

October 09, 2016

Rok Garbas

Updating your Nix sources

It feels a bit tedious that we (package maintainers of nixpkgs) are still updating versions of packages manually. Especially in such a small community, we should be really careful where we use our resources.

In this blog post I will take you on a tour of things I learned in a project I am working on with Release Engineering team at Mozilla, mozilla-releng/services, and how we are continuously updating our nix sources.

Extending nixpkgs

Many times I was asked how to work efficiently with upstream nixpkgs and how to work with a private nix package set.

For each project I usually create a private set of nix expressions. This could look like:

{ pkgs ? import <nixpkgs> {}
}:
let
  custom_pkgs = {
    inherit pkgs;
    packageA = ...;
    packageB = ...;
  };
in custom_pkgs

Our nix expression set is called custom_pkgs and it is a function which takes nixpkgs as an argument.

Pinning nixpkgs

In the past I already wrote about why pinning nixpkgs matters. To recap: We always want to have a stable development environment. Importing <nixpkgs> would depend on the system's nixpkgs, which is different from machine to machine. To pin down nixpkgs let us change the previous example:

let
  _pkgs = import <nixpkgs> {};
in
{ pkgs ? import (_pkgs.fetchFromGitHub { owner = "NixOS";
                                         repo = "nixpkgs-channels";
                                         rev = "...";
                                         sha256 = "...";
                                       }) {}
}:
let
  custom_pkgs = {
    inherit pkgs;
    packageA = ...;
    packageB = ...;
  };
in custom_pkgs

We still depend on the system's <nixpkgs>, but only to provide us with the _pkgs.fetchFromGitHub function. We then pin the NixOS/nixpkgs-channels repository to a specific revision. I chose the nixpkgs-channels repository, since that means I will also get binaries and won't have to recompile too often.

Update runner script

mozilla-releng/services's update runner script can be found in nix/update.nix. What this script does is check which packages have an update attribute, then loop over them and execute every update script. A minimal example would look like:

let
  _pkgs = import <nixpkgs> {};
in
{ pkgs ? import (_pkgs.fetchFromGitHub { owner = "NixOS";
                                         repo = "nixpkgs-channels";
                                         rev = "...";
                                         sha256 = "...";
                                       }) {}
}:
let
  packagesWith = name: attrs: ...; # function which searches attrs values
                                   # and checks for name attribute.
  custom_pkgs = import ./default.nix { inherit pkgs; };
  packages = packagesWith "update" custom_pkgs;
in pkgs.stdenv.mkDerivation {
  name = "update-releng-services";
  buildCommand = ''
    echo "+--------------------------------------------------------+"
    echo "| Not possible to update repositories using \`nix-build\`. |"
    echo "|         Please run \`nix-shell update.nix\`.             |"
    echo "+--------------------------------------------------------+"
    exit 1
  '';
  shellHook = ''
    export HOME=$PWD
    echo "Updating packages ..."
    ${builtins.concatStringsSep "\n\n" (
        map (pkg: ''echo ' - ${(builtins.parseDrvName}';
                  '') packages
    )}
    echo ""
    echo "Packages updated!"
  '';
}
If the above script is run with nix-build it will raise an error, saying it can only be run with nix-shell. Update scripts need access to the internet, and this is the reason we must run them with nix-shell.

When we run nix-shell update.nix we can see that the packagesWith function currently does not find any package with an update attribute, since we did not define any.

Update script

Let's first create an update script for tracking the nixos-unstable branch of the NixOS/nixpkgs-channels repository on GitHub. An implementation of such a script can be found here: updateFromGitHub.

Now let's use the updateFromGitHub function and add an update attribute to the nixpkgs attribute in our custom_pkgs.

pkgs = pkgs // {
   update = releng_pkgs.lib.updateFromGitHub {
     owner = "garbas";
     repo = "nixpkgs";
     branch = "python-srcs";
     path = "nix/nixpkgs.json";
   };
};

let
  _pkgs = import <nixpkgs> {};
in
{ pkgs ? import (_pkgs.fetchFromGitHub
                    (_pkgs.lib.importJSON ./nixpkgs.json)) {}
}:
let
  updateFromGitHub = ...;
  custom_pkgs = {
    pkgs = pkgs // {
      update = updateFromGitHub {
        owner = "NixOS";
        repo = "nixpkgs-channels";
        branch = "nixos-unstable";
        path = "nixpkgs.json";
      };
    };
    packageA = ...;
    packageB = ...;
  };
in custom_pkgs

As you can see above, our nixpkgs update script stores owner, repo, rev and sha256 in a nixpkgs.json file, in a format which we also read and pass to fetchFromGitHub when initializing custom_pkgs.
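
such a pin file is plain JSON with exactly the fields fetchFromGitHub needs, so any language can produce it. a hedged Python sketch (the helper name write_pin is invented for illustration, and the rev/sha256 values below are placeholders):

```python
import json
import os
import tempfile

def write_pin(path, owner, repo, rev, sha256):
    # The pin file holds exactly the four fields that
    # fetchFromGitHub consumes via _pkgs.lib.importJSON.
    pin = {"owner": owner, "repo": repo, "rev": rev, "sha256": sha256}
    with open(path, "w") as f:
        json.dump(pin, f, indent=2)
    return pin

path = os.path.join(tempfile.gettempdir(), "nixpkgs.json")
write_pin(path, "NixOS", "nixpkgs-channels",
          "0000000000000000000000000000000000000000", "...")

with open(path) as f:
    loaded = json.load(f)
print(sorted(loaded.keys()))  # ['owner', 'repo', 'rev', 'sha256']
```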

An example of an update script for a python project, which uses pypi2nix, would look like this:

packageA = pkgs.stdenv.mkDerivation {
  passthru.update = writeScript "update-${name}" ''
    pushd src/packageA
    ${pkgs.pypi2nix}/bin/pypi2nix -v \
     -V 3.5 \
     -E "postgresql libffi openssl" \
     -r requirements.txt \
     -r requirements-dev.txt
  '';
};

Automation with Travis CI

Now that we can manually run the update script, we need to run it on a daily basis. Luckily for us Travis supports this.

  1. Enable Travis for your project.
  2. Ask Travis for cron support.
  3. Create .travis.yml such as:
language: nix
- if [ "$TRAVIS_EVENT_TYPE" == "cron" ]; then
    nix-shell update.nix --pure;
- nix-build default.nix
- eval "$(ssh-agent -s)"
- chmod 600 $TRAVIS_BUILD_DIR/deploy_rsa
- ssh-add $TRAVIS_BUILD_DIR/deploy_rsa
- if [[ -n `git diff --exit-code` ]]; then
    git config 'travis';
    git config 'you@email';
    git stash
    git checkout -b result-$TRAVIS_BRANCH origin/$TRAVIS_BRANCH
    git pull
    git stash apply
    git add .
    git commit -m "Travis update [skip ci]";
    git push<owner>/<repo>.git HEAD:$TRAVIS_BRANCH;
  4. Create deploy_rsa using the following commands:
% ssh-keygen -t rsa -b 4096 -C 'travis/<owner>/<repo>' -f ./deploy_rsa
% travis encrypt-file deploy_rsa --add
  5. Allow pushing to your repository.

Add content of to Authorized keys for your github repository and make sure it has Push permission.

  6. Commit everything and watch updates coming in.
% git add deploy_rsa.enc .travis
% git commit -m "Travis will make us all lazy."
% git push


It does take quite some time to set up all of this, but the benefits can be seen within a week once the first (auto-)commits are coming in.

It is important to look at your project as a living system where the latest version should be automatically picked, something we are used to from traditional package managers. With Nix it is not possible to depend on future releases.

With the above setup we can get the best of both worlds. Despite constantly updating to latest versions, we don't update versions if the build (including tests) is not passing. This way we have the flexibility of traditional package managers and the robustness of Nix.

Best of both worlds.

With a large number of packages or projects a setup like this will save us a lot of time. Now imagine how much time we could save if we had a similar setup for nixpkgs.

Let me know what you think. Nix out.

by Rok Garbas at October 09, 2016 10:00 PM

October 05, 2016

Rok Garbas

SystemD Conference 2016

I realized I have never really sat down and learned systemd. I was mostly exposed to it via NixOS when writing NixOS modules and the time came to dig a bit deeper. Systemd man pages are an invaluable source of information. Initially they might be a bit overwhelming, but soon you start to appreciate the long explanations.


journalctl became one of the tools I can not really work without. To be worry free when it comes to logging, and to not have to be an awk/grep wizard when it comes to accessing logs, is something everybody new to systemd will appreciate.

Some (hopefully) useful commands I wrote down during Lennart's journald presentation:

  • journalctl -r - show logs in reverse order
  • journalctl -b - show logs since last boot
  • journalctl -k - show kernel logs
  • journalctl -p warning - show logs with warning priority
  • journalctl -p error - show logs with error priority
  • journalctl --since=2016-08-01 - show logs since
  • journalctl --until=2016-08-03 - show logs until
  • journalctl --until=today - show logs until midnight today
  • journalctl --since=yesterday - show logs since yesterday midnight
  • journalctl --since=-2week - show logs for last 2 weeks
  • journalctl -u <unit-name> - show logs of certain unit
  • journalctl /dev/sda - show kernel message of device
  • journalctl -o json - show logs in json format

And you can mix more or less all of the above options.
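
the flags compose because journalctl simply ANDs its filters together. a tiny Python sketch (purely illustrative, it does not run journalctl; the helper name is invented) that builds such a combined command line:

```python
def journalctl_cmd(unit=None, priority=None, since=None, reverse=False):
    """Compose a journalctl argument vector from a few common filters."""
    argv = ["journalctl"]
    if reverse:
        argv.append("-r")          # newest entries first
    if priority:
        argv += ["-p", priority]   # e.g. "warning" or "error"
    if unit:
        argv += ["-u", unit]       # restrict to one unit
    if since:
        argv.append("--since=" + since)
    return argv

print(journalctl_cmd(unit="nginx.service", priority="error", since="yesterday"))
```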

To serve journald events over the network there is the systemd-journal-gatewayd service, which is disabled by default.

Every log message can now provide a MESSAGE_ID (generated via journalctl --new-id128) to be able to use the journal's catalog capabilities. If this gains traction, it could really help to have better documentation with every error that shows up in the journal.

Currently we can limit globally how much space logs will take. An option to have this per service would be very nice and might happen in the future.

Containers (systend-nspawn)

Containers were the hot topic at systemd conference. Not so much about their current usage, but more about looking into the future and how containers might look.

Apart from the great talks, I was not aware that there are commands to pull / import / export containers from the command line:

% machinectl pull-tar <url>
% machinectl pull-raw <url>

% machinectl import-tar <name> <file>
% machinectl import-raw <name> <file>

% machinectl export-tar <name> <file>
% machinectl export-raw <name> <file>

Not something revolutionary, but these commands can be useful to quickly share images with coworkers.

Lennart also presented another tool called mkosi which is aiming to build legacy-free OS images. You can consider it a replacement for debootstrap and dnf. I wonder if this is a place where Nix could be used, since we already have a way of creating images of non-NixOS distributions.


All the negativity that some people have towards systemd had me convinced that systemd is one big monolith and that it can not be stripped down to the bare minimum. Last year and this year showed us that systemd on embedded devices is growing, and while there still are problems, those problems are possible to solve.

Listening to all the talks about embedded systems got me thinking about what kind of build systems they use and whether Nix could be a possible solution. Knowing the strong points of Nix (the build tool) - reproducibility, composability, etc. - these must be features that would get some developers in the embedded world interested. But to unlock this space for Nix, basic support for ARM should be there, maybe just in the form of a binary channel for a small subset of packages.


One of the pain points of my current setup (no desktop manager like kde/gnome/xfce, only the i3 window manager) is that I have to manually mount every USB stick. I was not aware that systemd could also handle this.

/dev/sdb1   /mnt/usb     auto     noauto,x-systemd.automount     0 2

Looks like the systemd community is really tackling the hard problems. While there is networkd which manages networking (there was also a talk about it: Lennart Poettering, Tom Gundersen: What you didn't know about networkd), there is also work in progress to have a utility to manage wireless networks, Marcel Holtmann: New Wireless Daemon for Linux.

I think the last area where Linux feels painful would be printing. systemd-printing anybody? :)

Configuration Management

Systemd unit files allow us to specify services in a declarative way. With systemd growing in scope we are more and more seeing a need for a better way of managing services. We have seen two talks about configuration management.

We have seen a very nice presentation of an even more awesome tool called Mgmt (James (purpleidea) Shubin: Next Generation Config Mgmt). While still a bit unpolished, you can easily see it being superior to many existing configuration tools out there.

NixOS took a completely different approach to configuration management. And I had the honor to speak about it (NixOS - Reproducible Linux Distribution built around systemd). I tried my best to explain the core principles of how Nix works, but sadly I was short on time to show all the demos. Still, I have learned a few things when presenting Nix/NixOS:

  • Don't mention functional, ignore it
  • There is no Nix Expression Language, but a JSON-like syntax
  • Not everybody sees the benefits of reproducibility in first 5 seconds, show examples
  • Nix is not for everybody and for every use case


I had a wonderful 3 days, and I hope to see you all next year. Especially I would like to thank the Kinvolk crew who organized the conference. Also we must not forget the amazing CCC VOC team for all the videos.

The new NixOS release is here, what are you waiting for! :)

by Rok Garbas at October 05, 2016 10:00 PM

October 04, 2016

Anders Papitto

transient global environments - the third path

Posted on October 4, 2016
Tags: nixos

There’s two broad paths to managing contexts and dependencies for projects, which cover most approaches that people use. However, there’s significant downsides to both, and it turns out that using Nix enables a third alternative which is very appealing.

Global Installation

A common, well known approach is to install things globally. This usually means using the system package manager or equivalent (e.g. apt-get, brew). In the simple cases, this works really well - there’s minimal overhead to get started. The well known downside is that when you have multiple projects, they can start to interfere with each other - for example if you work on two codebases that require different versions of gcc to build, you’re pretty much out of luck because the system will only support one global installation of gcc.

With Nix/NixOS, this would mean installing all your dependencies via nix-env -i, or by adding them all to environment.systemPackages.

Local Installation

This is generally the alternative to global installations. Part of the system is sandboxed off using some tool, and installations only exist within the sandbox. Some such tools are virtualenv, docker, stack, nix-shell. Another variant is manually installing things to weird locations - e.g. having one version of gcc in /usr/bin and then putting another one in /opt.

With Nix/NixOS, this would mean using nix-shell to set up isolated environments.

This works pretty well (and this approach is pretty commonly regarded as a best practice), but one pretty big downside is that it generally requires lots of fiddling with tools. For example, if you want to debug your program with gdb, and it’s running inside a docker container, you may have to map PIDs back and forth across the PID namespace. Or, if you want emacs to be able to run your python linter, you may have to set up a bunch of emacs variables so that emacs knows that it must invoke nix-shell /path/to/project.nix --run python, instead of just running python directly.

Transient Global Installation

A key point is that (at least for me), even when I have multiple projects, I’m only actively working on one at any given moment. So, I’m fine with a global install of my current project, as long as it’s trivial to get it out of the way when it’s time for me to switch focus. This way, I can avoid all the complexities of local installations - e.g. python will always just be python, but it will mean something different depending on which project I have open.

As a concrete example, let’s say that I have two projects, A and B.

A depends on (i.e. these need to be installed while I work on project A) python 3.4 and wget.

B depends on python 3.5 and curl

I can’t satisfy this by just installing everything globally - the two versions of python will conflict.

The local installation solution might look like this

$ cat project-a/default.nix
let pkgs = import <nixpkgs> { };
in pkgs.stdenv.mkDerivation {
  name = "env-project-a";
  buildInputs = with pkgs; [ python34 wget ];
}
$ cat project-b/default.nix
let pkgs = import <nixpkgs> { };
in pkgs.stdenv.mkDerivation {
  name = "env-project-b";
  buildInputs = with pkgs; [ python35 curl ];
}

They don’t interfere, but to interact with either project I have to work inside nix-shell project-a/default.nix or nix-shell project-b/default.nix.

The equivalent transient global installation would look like this

$ cat project-a/project.nix
let pkgs = import <nixpkgs> { };
in pkgs.buildEnv {
  name = "env-project-a";
  paths = with pkgs; [ python34 wget ];
}
$ cat project-b/project.nix
let pkgs = import <nixpkgs> { };
in pkgs.buildEnv {
  name = "env-project-b";
  paths = with pkgs; [ python35 curl ];
}

To work on A, I run

$ nix-env -f project-a/project.nix -i

to switch to B, I run

$ nix-env -e env-project-a
$ nix-env -f project-b/project.nix -i

I no longer have to be inside a nix-shell to interact with the current project - all the dependencies are immediately available everywhere, including other terminals and editors and other programs.

Ideally the couple commands to activate a project can be wrapped up in a nice script, along with some logic to ensure only one is active at a time, and maybe some nice rofi-based project-switching menu.
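
that switching logic could start as small as this Python sketch. the env- prefix and project.nix paths follow the examples above; the function itself is invented for illustration and only returns the nix-env invocations rather than running them:

```python
def switch_commands(active, target):
    """Return the nix-env invocations needed to move from one
    transient global environment to another."""
    cmds = []
    if active and active != target:
        # uninstall the currently active project's environment
        cmds.append(["nix-env", "-e", "env-" + active])
    if active != target:
        # install the target project's environment
        cmds.append(["nix-env", "-f", target + "/project.nix", "-i"])
    return cmds

# switching from project-a to project-b
for cmd in switch_commands("project-a", "project-b"):
    print(" ".join(cmd))
```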

October 04, 2016 12:00 AM

September 26, 2016

Sander van der Burg

Simulating NPM global package installations in Nix builds (or: building Grunt projects with the Nix package manager)

A while ago, I "rebranded" my second re-engineered version of npm2nix into node2nix and officially released it as such. My main two reasons for giving the tool a different name are that node2nix is neither a fork nor a continuation of npm2nix, but a tool that is written from scratch (though it incorporates some of npm2nix's ideas and concepts, including most of its dependencies).

Furthermore, it approaches the expression generation problem in a fundamentally different way -- whereas npm2nix generates derivations for each package in a dependency tree and composes symlinks to their Nix store paths to allow a package to find its dependencies, node2nix deploys an entire dependency tree in one derivation so that it can more accurately mimic NPM's behaviour including flat-module installations (at the expense of losing the ability to share dependencies among packages and projects).

Because node2nix is conceptually different, I have decided to rename the project so that it can be used alongside the original npm2nix tool that still implements the old generation concepts.

Besides officially releasing node2nix, I have recently extended its feature set with a new concept for a recurring class of NPM development projects.

Global NPM development dependencies

As described in earlier blog posts, node2nix (as well as npm2nix) generate Nix expressions from a set of third party NPM packages (obtained from external sources, such as the NPM registry) or a development project's package.json file.

Although node2nix works fine for most of my development projects, I have noticed that for a recurring class of projects, the auto generation approach is too limited -- some NPM projects may require the presence of globally installed packages and must run additional build steps in order to be deployed properly. A prominent example of such a category of projects are Grunt projects.

Grunt advertises itself as a "The JavaScript Task Runner" and can be used to run all kinds of things, such as code generation, linting, minification etc. The tasks that Grunt carries out are implemented as plugins that must be deployed as a project's development dependencies with the NPM package manager.

(As a sidenote: it is debatable whether Grunt is a tool that NPM developers should use, as NPM itself can also carry out build steps through its script directive, but that discussion is beyond the scope of this blog post).

A Grunt workflow typically looks as follows. Consider an example project, with the following Gruntfile.js:

module.exports = function(grunt) {

  grunt.initConfig({
    jshint: {
      files: ['Gruntfile.js', 'src/**/*.js'],
      options: {
        globals: {
          jQuery: true
        }
      }
    },
    watch: {
      files: ['<%= jshint.files %>'],
      tasks: ['jshint']
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-watch');

  grunt.registerTask('default', ['jshint']);
};


The above Gruntfile defines a configuration that iterates over all JavaScript files (*.js files) in the src/ directory and invokes jshint to check for potential errors and code smells.

To deploy the development project, we first have to globally install the grunt-cli command-line utility:

$ npm install -g grunt-cli
$ which grunt

To be able to carry out the steps, we must update a project's package.json file to have grunt and all its required Grunt plugins as development dependencies:

{
  "name": "grunt-test",
  "version": "0.0.1",
  "private": "true",
  "devDependencies": {
    "grunt": "*",
    "grunt-contrib-jshint": "*",
    "grunt-contrib-watch": "*"
  }
}

Then we must install the development dependencies with the NPM package manager:

$ npm install

And finally, we can run the grunt command-line utility to execute our tasks:

$ grunt

"Global dependencies" in Nix

Contrary to NPM, Nix does not support "global dependencies". As a matter of fact, it takes all kinds of precautions to prevent global dependencies to influence the builds that it performs, such as storing all packages in isolation in a so-called Nix store (e.g. /nix/store--grunt-1.0.1 as opposed to storing files in global directories, such as /usr/lib), initially clearing all environment variables (e.g. PATH) and setting these to the Nix store paths of the provided dependencies to allow packages to find them, running builds in chroot environments, etc.

These precautions are taken for a very good reason: purity -- each Nix package stored in the Nix store has a hash prefix that is computed from all involved build-time dependencies.

With pure builds, we know that (for example) if we encounter a build performed on one machine with a specific hash code and a build with an identical hash code on another machine, their build results are identical as well (with some caveats, but in general there are no observable side effects). Pure package builds are a crucial ingredient to make deployments of systems reliable and reproducible.
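
the role of the hash prefix can be illustrated with a toy model. this is deliberately not Nix's real algorithm (which hashes derivations), it only demonstrates the principle that identical build-time inputs yield identical hashes on any machine:

```python
import hashlib

def store_hash(name, version, inputs):
    """Illustration only: derive a short hash from a package's name,
    version and the identities of its build-time inputs."""
    h = hashlib.sha256()
    h.update(name.encode())
    h.update(version.encode())
    for dep in sorted(inputs):  # sort so input order does not matter
        h.update(dep.encode())
    return h.hexdigest()[:32]

# the same inputs, listed in a different order, on "two machines"
a = store_hash("grunt", "1.0.1", ["nodejs-6.9", "stdenv-x"])
b = store_hash("grunt", "1.0.1", ["stdenv-x", "nodejs-6.9"])
print(a == b)  # True: identical inputs -> identical hash
```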

In Nix, we must always be explicit about the dependencies of a build. When a dependency is unspecified (something that commonly happens with global dependencies), a build will typically fail because it cannot be (implicitly) found. Similarly, when a build has dependencies on packages that would normally have to be installed globally (e.g. non-NPM dependencies), we must now explicitly provide them as build inputs.

The problem with node2nix is that it automatically generates Nix expressions and that global dependencies cannot be detected automatically, because they are not specified anywhere in a package.json configuration file.

To cope with this limitation, the generated Nix expressions are made overridable, so that any missing dependency can be provided manually. For example, we may want to deploy an NPM package named floomatic from the following JSON file (node-packages.json):


We can generate Nix expressions from the above specification, by running:

$ node2nix -i node-packages.json

One of floomatic's dependencies is an NPM package named: native-diff-match-patch that requires the Qt 4.x library and pkgconfig. These two packages are non-NPM package dependencies left undetected by the node2nix generator. In conventional Linux distributions, these packages typically reside in global directories, such as /usr/lib, and can still be implicitly found.

By creating an override expression (named: override.nix), we can inject these missing (global) dependencies ourselves:

{pkgs ? import &lt;nixpkgs&gt; {
    inherit system;
  }, system ? builtins.currentSystem}:

let
  nodePackages = import ./default.nix {
    inherit pkgs system;
  };
in
nodePackages // {
  floomatic = nodePackages.floomatic.override (oldAttrs: {
    buildInputs = oldAttrs.buildInputs ++ [
      pkgs.pkgconfig
      pkgs.qt4
    ];
  });
}

With the override expression shown above, we can correctly deploy the floomatic package, by running:

$ nix-build override.nix -A floomatic

Providing supplemental NPM packages to an NPM development project

Similar to non-NPM dependencies, we also need to supply the grunt-cli as an additional dependency to allow a Grunt project build to succeed in a Nix build environment. What makes this process difficult is that grunt-cli is also an NPM package. As a consequence, we need to generate a second set of Nix expressions and propagate their generated package configurations as parameters to the former expression. Although it was already possible to do this, because the Nix language is flexible enough, the process is quite complex, hacky and inconvenient.

In the latest node2nix version, I have automated this workflow -- when generating expressions for a development project, it is now also possible to provide a supplemental package specification. For example, for our trivial Grunt project, we can create the following supplemental JSON file (supplement.json) that provides the grunt-cli:
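For our example, the supplemental specification presumably only needs to list the grunt-cli package (supplement.json):

```json
[
  "grunt-cli"
]
```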


We can generate Nix expressions for the development project and supplemental package set, by running:

$ node2nix -d -i package.json --supplement-input supplement.json

Besides providing the grunt-cli as an additional dependency, we also need to run grunt after obtaining all NPM dependencies. With the following wrapper expression (override.nix), we can run the Grunt task runner after all NPM packages have been successfully deployed:

{ pkgs ? import &lt;nixpkgs&gt; {}
, system ? builtins.currentSystem
}:

let
  nodePackages = import ./default.nix {
    inherit pkgs system;
  };
in
nodePackages // {
  package = nodePackages.package.override {
    postInstall = "grunt";
  };
}

As may be observed in the expression shown above, the postInstall hook is responsible for invoking the grunt command.

With the following command-line instruction, we can use the Nix package manager to deploy our Grunt project:

$ nix-build override.nix -A package

Conclusion


In this blog post, I have explained a recurring limitation of node2nix that makes it difficult to deploy projects having dependencies on NPM packages that (in conventional Linux distributions) are typically installed in global file system locations, such as grunt-cli. Furthermore, I have described a new node2nix feature that provides a solution to this problem.

In addition to Grunt projects, the solution described in this blog post is relevant for other tools as well, such as ESLint.

All features described in this blog post are part of the latest node2nix release (version 1.1.0) that can be obtained from the NPM registry and the Nixpkgs collection.

Besides a new release, node2nix is now also used to generate the expressions for the set of NPM packages included with the development and upcoming 16.09 versions of Nixpkgs.

by Sander van der Burg at September 26, 2016 10:39 PM

September 22, 2016

Flying Circus

DevOps Autumn Sprint Halle 28th-30th September 2016

Next week our Autumn 2016 Sprint starts and we really look forward to welcoming our guests. We are in the midst of preparation and hope the weather plays along. All details around the sprint can be found on Meetup. Interesting topics are on the agenda, such as backy, batou, NixOS and more – there is an Etherpad to gather them.

If you want to contribute but can't make it in person, consider joining us remotely. Just let us know in advance (send us a short message or poke us on twitter @flyingcircusio).

by Andrea at September 22, 2016 02:37 PM

August 22, 2016

Sander van der Burg

An extended self-adaptive deployment framework for service-oriented systems

Five years ago, while I was still in academia, I built an extension framework around Disnix (named: Dynamic Disnix) that enables self-adaptive redeployment of service-oriented systems. It was an interesting application as it demonstrated the full potential of service-oriented systems having their deployment processes automated with Disnix.

Moreover, the corresponding research paper was accepted for presentation at the SEAMS 2011 symposium (co-located with ICSE 2011) in Honolulu (Hawaii), which was (obviously!) a nice place to visit. :-)

Disnix's development was progressing at a very low pace for a while after I left academia, but since the end of 2014 I made some significant improvements. In contrast to the basic toolset, I did not improve Dynamic Disnix -- apart from the addition of a port assigner tool, I only kept the implementation in sync with Disnix's API changes to prevent it from breaking.

Recently, I have used Dynamic Disnix to give a couple of demos. As a result, I have improved some of its aspects a bit. For example, some basic documentation has been added. Furthermore, I have extended the framework's architecture to take a couple of new deployment planning aspects into account.


For readers unfamiliar with Disnix: the primary purpose of the basic Disnix toolset is executing deployment processes of service-oriented systems. Deployments are driven by three kinds of declarative specifications:

  • The services model captures the services (distributed units of deployments) of which a system consists, their build/configuration properties and their inter-dependencies (dependencies on other services that may have to be reached through a network link).
  • The infrastructure model describes the target machines where services can be deployed to and their characteristics.
  • The distribution model maps services in the services model to machines in the infrastructure model.
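
For example, a distribution model is a Nix expression that maps service names to lists of targets defined in the infrastructure model (the service and machine names below are purely illustrative):

```nix
{infrastructure}:

{
  # each service (from the services model) is mapped to one or more
  # machines (from the infrastructure model); names are illustrative
  StaffTracker = [ infrastructure.test1 ];
  StaffService = [ infrastructure.test1 infrastructure.test2 ];
}
```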

By writing instances of the above specifications and running disnix-env:

$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix

Disnix executes all activities to get the system deployed, such as building their services from source code, distributing them to the target machines in the network and activating them. Changing any of these models and running disnix-env again causes the system to be upgraded. In case of an upgrade, Disnix will only execute the required activities making the process more efficient than deploying a system from scratch.

"Static" Disnix

So, what makes Disnix's deployment approach static? When looking at software systems from a very abstract point of view, they are supposed to meet a collection of functional and non-functional requirements. A change in a network of machines affects the ability for a service-oriented system to meet them, as the services of which these systems consist are typically distributed.

If a system relies on a critical component that has only one instance deployed and the machine that hosts it crashes, the functional requirements can no longer be met. However, even if we have multiple instances of the same components giving better guarantees that no functional requirements will be broken, important non-functional requirements may be affected, such as the responsiveness of a system.

We may also want to optimize a system's non-functional properties, such as its responsiveness, by adding more machines to the network that offer more system resources, or by changing the configuration of an existing machine, e.g. upgrading the amount of available RAM.

The basic Disnix toolset is considered static, because all these events require manual modifications to the Disnix models for redeployment, so that a system can meet its requirements under the changed conditions.

For simple systems, manual reconfiguration is still doable, but with one hundred services, one hundred machines or a high frequency of events (or a combination of the three), it becomes too complex and time consuming.

For example, when a machine has been added or removed, we must rewrite the distribution model in such a way that all services are deployed to at least one machine and that none of them are mapped to machines that are not capable or allowed to host them. Furthermore, with microservices (one of their traits is that they typically embed HTTP servers), we must typically bind them to unique TCP ports that do not conflict with system services or other services deployed by Disnix. None of these configuration aspects are trivial for large service-oriented systems.

Dynamic Disnix

Dynamic Disnix extends Disnix's architecture with additional models and tools to cope with the dynamism of service-oriented systems. In the latest version, I have extended its architecture (which has been based on the old architecture described in the SEAMS 2011 paper and corresponding blog post):

The above diagram shows the structure of the dydisnix-self-adapt tool. The ovals denote command-line utilities, the rectangles denote files and the arrows denote files as inputs or outputs. As with the basic Disnix toolset, dydisnix-self-adapt is composed of command-line utilities each being responsible for executing an individual deployment activity:

  • On the top right, the infrastructure generator is shown that captures the configurations of the machines in the network and generates an infrastructure model from it. Currently, two different kinds of generators can be used: disnix-capture-infra (included with the basic toolset) that uses a bootstrap infrastructure model with connectivity settings, or dydisnix-geninfra-avahi that uses multicast DNS (through Avahi) to retrieve the machines' properties.
  • dydisnix-augment-infra is responsible for augmenting the generated infrastructure model with additional settings, such as passwords. It is typically undesired to automatically publish privacy-sensitive settings over a network using insecure connection protocols.
  • disnix-snapshot can be optionally used to preemptively capture the state of all stateful services (services with property: deployState = true; in the services model) so that the state of these services can be restored if a machine crashes or disappears. This tool is new in the extended architecture.
  • dydisnix-gendist generates a mapping of services to machines based on technical and non-functional properties defined in the services and infrastructure models.
  • dydisnix-port-assign assigns unique TCP port numbers to previously undeployed services and retains assigned TCP ports in a previous deployment for optimization purposes. This tool is new in the extended architecture.
  • disnix-env redeploys the system with the (statically) provided services model and the dynamically generated infrastructure and distribution models.

An example usage scenario

When a system has been configured to be (statically) deployed with Disnix (such as the infamous StaffTracker example cases that come in several variants), we need to add a few additional deployment specifications to make it dynamically deployable.

Auto discovering the infrastructure model

First, we must configure the machines in such a way that they publish their own configurations. The basic toolset comes with a primitive solution called: disnix-capture-infra that does not require any additional configuration -- it consults the Disnix service that is installed on every target machine.

By providing a simple bootstrap infrastructure model (e.g. infrastructure-bootstrap.nix) that only provides connectivity settings:

{
  test1.properties.hostname = "test1";
  test2.properties.hostname = "test2";
}

and running disnix-capture-infra, we can obtain the machines' configuration properties:

$ disnix-capture-infra infrastructure-bootstrap.nix

By setting the following environment variable, we can configure Dynamic Disnix to use the above command to capture the machines' infrastructure properties:

$ export DYDISNIX_GENINFRA="disnix-capture-infra infrastructure-bootstrap.nix"

Alternatively, there is the Dynamic Disnix Avahi publisher that is more powerful, but at the same time much more experimental and unstable than disnix-capture-infra.

When using Avahi, each machine uses multicast DNS (mDNS) to publish their own configuration properties. As a result, no bootstrap infrastructure model is needed. Simply gathering the data published by the machines on the same subnet suffices.

When using NixOS on a target machine, the Avahi publisher can be enabled by cloning the dydisnix-avahi Git repository and adding the following lines to /etc/nixos/configuration.nix:

imports = [ /home/sander/dydisnix/dydisnix-module.nix ];
services.dydisnixAvahiTest.enable = true;

To allow the coordinator machine to capture the configurations that the target machines publish, we must enable the Avahi system service. In NixOS, this can be done by adding the following lines to /etc/nixos/configuration.nix:

services.avahi.enable = true;

When running the following command-line instruction, the machines' configurations can be captured:

$ dydisnix-geninfra-avahi

Likewise, when setting the following environment variable:

$ export DYDISNIX_GENINFRA=dydisnix-geninfra-avahi

Dynamic Disnix uses the Avahi-discovery service to obtain an infrastructure model.

Writing an augmentation model

The Java version of StaffTracker, for example, uses MySQL to store data. Typically, it is undesired to publish the authentication credentials over the network (in particular with mDNS, which is quite insecure). We can add these properties to the captured infrastructure model with the following augmentation model (augment.nix):

{infrastructure, lib}:

lib.mapAttrs (targetName: target:
  target // (if target ? containers && target.containers ? mysql-database then {
    containers = target.containers // {
      mysql-database = target.containers.mysql-database // {
        mysqlUsername = "root";
        mysqlPassword = "secret";
      };
    };
  } else {})
) infrastructure

The above model implements a very simple password policy, by iterating over each target machine in the discovered infrastructure model and adding the same mysqlUsername and mysqlPassword property when it encounters a MySQL container service.

Mapping services to machines

In addition to a services model and a dynamically generated (and optionally augmented) infrastructure model, we must map each service to a machine in the network using a configured strategy. A strategy can be programmed in a QoS model, such as:

{ services
, infrastructure
, initialDistribution
, previousDistribution
, filters
, lib
}:

let
  distribution1 = filters.mapAttrOnList {
    inherit services infrastructure;
    distribution = initialDistribution;
    serviceProperty = "type";
    targetPropertyList = "supportedTypes";
  };

  distribution2 = filters.divideRoundRobin {
    distribution = distribution1;
  };
in
distribution2

The above QoS model implements the following policy:

  • First, it takes the initialDistribution model that is a cartesian product of all services and machines. It filters the machines on the relationship between the type attribute and the list of supportedTypes. This ensures that services will only be mapped to machines that can host them. For example, a MySQL database should only be deployed to a machine that has a MySQL DBMS installed.
  • Second, it divides the services over the candidate machines using the round robin strategy. That is, it divides services over the candidate target machines in equal proportions and in circular order.

Dynamically deploying a system

With the services model, augmentation model and QoS model, we can dynamically deploy the StaffTracker system (without manually specifying the target machines and their properties, and how to map the services to machines):

$ dydisnix-env -s services.nix -a augment.nix -q qos.nix

The Node.js variant of the StaffTracker example requires unique TCP ports for each web service and web application. By providing the --ports parameter we can include a port assignment specification that is internally managed by dydisnix-port-assign:

$ dydisnix-env -s services.nix -a augment.nix -q qos.nix --ports ports.nix

When providing the --ports parameter, the specification gets automatically updated when ports need to be reassigned.

Making a system self-adaptable from a deployment perspective

With dydisnix-self-adapt we can make a service-oriented system self-adaptable from a deployment perspective -- this tool continuously monitors the network for changes, and runs a redeployment when a change has been detected:

$ dydisnix-self-adapt -s services.nix -a augment.nix -q qos.nix

For example, when shutting down a machine in the network, you will notice that Dynamic Disnix automatically generates a new distribution and redeploys the system to get the missing services back.

Likewise, by adding the ports parameter, you can include port assignments as part of the deployment process:

$ dydisnix-self-adapt -s services.nix -a augment.nix -q qos.nix --ports ports.nix

By adding the --snapshot parameter, we can preemptively capture the state of all stateful services (services annotated with deployState = true; in the services model), such as the databases in which the records are stored. If a machine hosting databases disappears, Disnix can restore the state of the databases elsewhere.

$ dydisnix-self-adapt -s services.nix -a augment.nix -q qos.nix --snapshot

Keep in mind that this feature uses Disnix's snapshotting facilities, which may not be the best solution to manage state, in particular with large databases.

Conclusion


In this blog post, I have described an extended architecture of Dynamic Disnix. In comparison to the previous version, a port assigner has been added that automatically provides unique port numbers to services, and the disnix-snapshot utility that can preemptively capture the state of services, so that they can be restored if a machine disappears from the network.

Despite the fact that Dynamic Disnix has gained some basic documentation and other usability improvements, it remains a very experimental prototype that should not be used for any production purposes. In contrast to the basic toolset, I have only used it for testing/demo purposes and I still have no real-life production experience with it. :-)

Moreover, I still have no plans to officially release it yet, as many aspects still need to be improved/optimized. For now, you have to obtain the Dynamic Disnix source code from Github and use the included release.nix expression to install it. Furthermore, you probably need a lot of courage. :-)

Finally, I have extended the Java and Node.js versions of the StaffTracker example as well as the virtual hosts example with simple augmentation and QoS models.

by Sander van der Burg at August 22, 2016 09:46 PM

August 10, 2016

Joachim Schiele



managing a 'call for papers' can be a lot of work. the tuebix cfp-software was created following the KISS principle.


we held a linuxtag at the university of tübingen called tuebix and we had a talk about nixos and a workshop about nixops.



the cfp-software backend is written in golang. the frontend was done in materializecss.

the workflow:

  • user fills in the form fields and gets instant feedback thanks to javascript checks
  • after 'submit' it generates a json document and sends it via email to a mailing list
  • the mailing list is monitored manually and people are contacted afterwards

after the cfp is over, one can use jq to process the data for creating a schedule.
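a sketch of such a jq invocation (the field names in the sample data are assumptions, not taken from the real cfp submissions):

```shell
# sample submissions, roughly as the cfp form would generate them
# (field names are assumptions; adapt them to the actual json documents)
cat > submissions.json <<'EOF'
[
  { "speaker": "alice", "title": "nixos intro" },
  { "speaker": "bob", "title": "nixops workshop" }
]
EOF

# one schedule line per talk: "speaker: title"
jq -r '.[] | "\(.speaker): \(.title)"' submissions.json
```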


security-wise it would be good to create a custom user for hosting, which was not done here.


source /etc/profile
cd /home/joachim/cfp
nix-shell --command "while true; do go run server.go ; done"

systemd.services.cfp = {
  wantedBy = [ "multi-user.target" ];
  after = [ "network.target" ];
  serviceConfig = {
    #Type = "forking";
    User = "joachim";
    ExecStart = ''/home/joachim/'';
    ExecStop = '''';
  };
};

reverse proxy

# (https)
  hostName = "";
  serverAliases = [ "" "" ];

  documentRoot = "/www/";
  enableSSL = true;
  sslServerCert = "/ssl/";
  sslServerKey = "/ssl/";
  sslServerChain = "/ssl/";

  extraConfig = ''
    RewriteRule ^/cfp$ /cfp/ [R]
    ProxyPass /cfp/ retry=0
    ProxyPassReverse /cfp/


using nix-shell it was easy to develop the software and to deploy it to the server. all dependencies are contained.

for further questions drop me an email:

by qknight at August 10, 2016 04:35 PM