NixOS Planet

March 19, 2018

Munich NixOS Meetup

NixOS for Beginners (NixOS für Einsteiger)


- Profpatsch: Nix(OS): Package-management done right (German)

Augsburg - Germany

Monday, March 19 at 7:00 PM


March 19, 2018 11:37 AM

March 18, 2018

Joachim Schiele



in the nix-language-atlas series i want to discuss how well programming languages i’m familiar with integrate with nix. today, let’s revisit emscripten, as there have also been improvements since i wrote about it last time.

projects we have done:

what’s new

  • emscripten toolchain:
    • refactored to force a common revision (for example 1.37.16), called emscriptenVersion, on emscripten, emscripten-fastcomp and emscripten-fastcomp-clang
    • added a unit test in emscripten to verify a small part of the toolchain
    • initial emscripten documentation in nixpkgs (not on yet)
  • added 2 more unit tests & repaired all builds

for details see the PR

want to give this a shot from nixos?

git clone
cd nixpkgs
git checkout f41a3e7d7d327ea66459d17bfbe4a751b2496cb1
nix-env -f default.nix -I nixpkgs=. -iA emscriptenPackages
installing ‘emscripten-json-c-0.13’
installing ‘emscripten-libxml2-2.9.7’
installing ‘emscripten-xmlmirror’
installing ‘emscripten-zlib-1.2.11’
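
want to try just a single package instead of installing the whole set? building one attribute should also work (the attribute name here is assumed from the install output above):

nix-build -I nixpkgs=. -A emscriptenPackages.zlib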

dynamic/static libraries

in the above toolchain we are using libraries in .so format, not the .a format, and in the end we link them together using emcc. this has some advantages:

  • building .so files is best practice on linux
  • and thus easy to do
  • license-wise smart: some packages can’t legally be statically linked, IIRC


nix emscripten toolchain is now supported from:

especially the microsoft WSL release with the creators update opens up a very interesting audience, as it makes it so easy to use nix on windows. no mingw, no cygwin! it took me 10 minutes to install nix on windows!

for testing i’ve been compiling this very toolchain on my windows 10 computer. IO is slow but it works and it is easy to deploy.


i’ve stumbled on these questions:

How do I change the currently active SDK version?

How do I build multiple projects with different SDK versions in parallel?

How do I use Emscripten SDK with a custom version of python, java, node.js or some other tool?

these are very interesting questions, and they all become easy once one uses this nixpkgs-based toolchain as pointed out. i’ve been using the emsdk in the past, but now that we have the ‘bits’ automated in nixpkgs i’m happy to not have to work statefully anymore!


during my last two days of work on the toolchain update, i had these ideas and motivations for the future:

  • more documentation:
    • nix-shell -A usage
    • cross platform nix usage
    • section how to (fast) write good and lasting unit tests
    • how to use older nixpkgs vs. newer ones (slightly modify a very old project for instance which has a bug)
  • packaging some common targets, what would these be?
  • package the caching of emscripten in ~/.emscripten properly, so that build artefacts can be reused across builds (saving time) and the HOME=$TMPDIR requirement (ugly) can be removed

      DEBUG:root:adding object /home/joachim/.emscripten_cache/asmjs/dlmalloc.bc to link
      DEBUG:root:adding object /home/joachim/.emscripten_cache/asmjs/libc.bc to link
  • update nix-instantiate, which we use in the ‘tour of nix’, to a more recent version and also package it into the toolchain as an example
  • hydra builds
  • be more transparent on the license, ideally we could generate a list of licenses in the final folder

@nixos community: i’d love to get some feedback on this, so send me an email if you have some interesting input.


thanks to kripken (emscripten author) for his help! i’d love to put more effort into this, i think it’s really worth it so if you have any funding for general development or want to have something special realized, let me know!

another interesting project i’ve learned about lately would be from

by qknight at March 18, 2018 05:35 PM

February 25, 2018

Sander van der Burg

A more realistic public Disnix example

It has almost been ten years ago when I started developing Disnix -- February 2008 marked the start of my master's thesis internship at Philips Research that resulted in the first prototype version.

Originally, Disnix was specifically developed for one use case only -- a medical service-oriented system called the "Service Development Support System" (SDS2) that can be used for asset tracking and utilisation analysis for medical devices in a hospital environment. More information about this case study can be found in my master's thesis, some of my research papers and my PhD thesis (all of them can be found on my publications page).

Many developments have happened since the realization of the first prototype -- its feature set has been extended considerably, its architecture has been overhauled several times and the code has evolved significantly. Most notably, I have been maintaining a production system for over three years with it.

In all these years, there is always one recurring question that I regularly receive from various kinds of people:

Why should I use Disnix and why would it be useful?

The answer is that Disnix becomes useful when you have a system that can be decomposed into distributable services, such as web services, RESTful services, web applications or processes.

In addition to the fact that Disnix automates its deployment and offers a number of powerful quality properties (e.g. non-destructive upgrades for the static parts of a system), it also helps componentized systems in reaching their full potential -- for example, when services can be built, deployed, and managed individually you can scale a system up and down (e.g. by distributing services to dedicated machines or consolidating all services on a single machine) and you can respond more flexibly to events (e.g. by redeploying services when we encounter a crashing machine).

Although the answer may sound simple, service-oriented systems are complicated -- besides facing all kinds of deployment complexities, properly dividing a system into distributable components is also quite challenging. For all the systems I have seen in the last decade, the requirements and their modularization strategies were all quite different from each other. I have also seen a number of systems for which decomposing into services did not work and unnecessary complexities were introduced.

Moreover, it is hard to find representative public examples that people can use as a reference. I was fortunate that I had access to an industrial case study during my research. Nonetheless, I was suffering from many difficulties because of the lack of any meaningful public case studies. As a countermeasure, I developed a collection of example cases in addition to SDS2, but because of their over-simplicity, proving my point often remained hard.

Roughly half a year ago, I have released most parts of my ancient web framework that I used to actively develop before I started doing research in software deployment and I created a couple of example applications for it.

Although my web framework development predates my deployment research, I was already using it to implement information systems that followed some modularity principles that are beneficial when using Disnix as a deployment system.

Recently, I have extended my web framework's example applications repository (providing a homework assistant, CMS, photo gallery and literature survey assistant) to become another public Disnix example case following the same modularity principles I used for the information systems I used to implement at that time.

Creating a componentized web information system

As mentioned earlier in this blog post, I have already implemented a (fairly simple) componentized web information system before I started working on Disnix using my ancient custom made web framework. The "componentization process" (a term that I had neither learned about yet nor something I was consciously implementing at that time) was partially driven by evolution and partially by non-functional requirements.

Originally, the system started out as just one single web application for one specific purpose and consisted of only two components -- a MySQL database responsible for storing the data and web front-end implemented in PHP, which is quite a common separation pattern for PHP applications.

Later, I was asked to implement another PHP application with similar functionality. Initially, I wrote the application from scratch without any reuse in mind, but at some point I made two important decisions:

  • I decided to keep the databases of each application separate as opposed to integrating all the tables into one single database. My main motivating factor was that I wanted to prevent another developer's wrong decisions from messing up the other application. Moreover, I realized that other systems did not have to know about data that was specific to one application domain.
  • In addition to domain-specific data, I noticed that both databases also stored the same kind of data, namely: user accounts -- both systems had a user account system to allow users to change the data. This also gave me no reason to integrate both databases into one database. Instead, I created a separate user database and authentication system (as a library API) that was shared among both applications.

After completing the two web applications, I had to implement more functionality. I decided to keep all of these new features for these new problem domains in separate applications with separate databases. The only thing they had in common was a shared user authentication system.

At some point I ended up having many sub applications. As a result, I needed a portal application that redirected users to these sub applications. Essentially, what I implemented became a system of systems.

Deployment with Disnix

The "architectural decisions" that I described earlier resulted in a system composed of several kinds of components:

  • Domain-specific web applications exposing functionality that logically belongs together.
  • Domain-specific databases storing tables that are strongly correlated.
  • A shared user database.
  • A portal application redirecting users to the domain-specific web applications.

The above listed components can be distributed over multiple machines in a network, because they connect to each other through network links (e.g. connecting to a MySQL database can be done with a TCP connection and connecting to a domain specific web application can be done through HTTP). As a result, they can also be modeled as services that can be deployed with Disnix.
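
To give an impression of what such a services model looks like, the following is a strongly simplified, illustrative sketch (the names and attributes are not literal copies of the actual expressions, which can be found in the example repository):

{distribution, invDistribution, system, pkgs}:

let
  customPkgs = import ../top-level/all-packages.nix { inherit system pkgs; };
in
rec {
  # a domain-specific MySQL database
  cmsdb = {
    name = "cmsdb";
    pkg = customPkgs.cmsdb;
    type = "mysql-database";
  };

  # the shared user database
  usersdb = {
    name = "usersdb";
    pkg = customPkgs.usersdb;
    type = "mysql-database";
  };

  # a domain-specific web application depending on the above databases
  cms = {
    name = "cms";
    pkg = customPkgs.cms;
    dependsOn = {
      inherit usersdb cmsdb;
    };
    type = "apache-webapplication";
  };

  # ... the remaining databases, web applications and the portal follow the same pattern
}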

To replicate the same patterns for demo purposes, I integrated my framework's example applications into a similar system of sub systems. We can deploy the corresponding example system to one single target machine with Disnix, by running:

$ disnixos-env -s services.nix \
-n network-single.nix \
-d distribution-single.nix --use-nixops

The entire system gets deployed to a single machine because of the distribution model (distribution.nix) that maps all services to one target machine:


{infrastructure}:

{
  usersdb = [ infrastructure.test1 ];
  cmsdb = [ infrastructure.test1 ];
  cmsgallerydb = [ infrastructure.test1 ];
  homeworkdb = [ infrastructure.test1 ];
  literaturedb = [ infrastructure.test1 ];
  portaldb = [ infrastructure.test1 ];

  cms = [ infrastructure.test1 ];
  cmsgallery = [ infrastructure.test1 ];
  homework = [ infrastructure.test1 ];
  literature = [ infrastructure.test1 ];
  users = [ infrastructure.test1 ];
  portal = [ infrastructure.test1 ];
}

The resulting deployment architecture looks as follows:

The above visualization of the deployment architecture shows the following aspects:

  • The surrounding light grey colored box denotes a target machine. In this particular example, we only have one single target machine where services are deployed to.
  • The dark grey colored boxes correspond to container environments. For our example system, we have two of them: mysql-database corresponding to a MySQL DBMS server and apache-webapplication corresponding to an Apache HTTP server.
  • The ovals denote services corresponding to MySQL databases and web applications.
  • The arrows denote inter-dependency links that correspond to network connections. As explained in my previous blog post, solid arrows are dependencies with a strict ordering requirement while dashed arrows are dependencies without an ordering requirement.

Some people may argue that it is not really beneficial to deploy such a system with Disnix -- with NixOps you can define a machine configuration having a MySQL DBMS server and an Apache HTTP server with the corresponding databases and web application components. With Disnix, you must first ensure that the machines and the MySQL and Apache HTTP servers are configured by other means (which could, for example, be done with NixOps), and only then can you deploy the system's components with Disnix.

In a single machine deployment scenario, it may indeed not be that beneficial. However, what you get in addition to automated deployment is also more flexibility. Since Disnix manages the services directly, as opposed to entire machine configurations as a whole, you can respond better to events by redeploying the system.

For example, when the amount of visitors keeps growing, you may run into the problem that a single server can no longer handle all the traffic. In such cases, you can easily add another machine to the network and adjust the distribution model to move (for example) the databases to another machine:


{infrastructure}:

{
  usersdb = [ infrastructure.test2 ];
  cmsdb = [ infrastructure.test2 ];
  cmsgallerydb = [ infrastructure.test2 ];
  homeworkdb = [ infrastructure.test2 ];
  literaturedb = [ infrastructure.test2 ];
  portaldb = [ infrastructure.test2 ];

  cms = [ infrastructure.test1 ];
  cmsgallery = [ infrastructure.test1 ];
  homework = [ infrastructure.test1 ];
  literature = [ infrastructure.test1 ];
  users = [ infrastructure.test1 ];
  portal = [ infrastructure.test1 ];
}

By redeploying the system, we can take advantage of the additional system resources that the new machine provides:

$ disnixos-env -s services.nix \
-n network-separate.nix \
-d distribution-separate.nix --use-nixops

resulting in the following deployment architecture:

Likewise, countless other deployment strategies are possible to meet all kinds of non-functional requirements. For example, we can also distribute bundles of domain-specific application and database pairs over two machines:

$ disnixos-env -s services.nix \
-n network-bundles.nix \
-d distribution-bundles.nix --use-nixops

resulting in the following deployment architecture:

This approach is even more scalable than simply offloading the databases to another server.

In addition to scalability, there are countless other reasons to pick a certain distribution strategy. You could also, for example, distribute redundant instances of databases and applications as a failover to improve availability, or improve security by deploying the databases with privacy-sensitive data to a machine with restrictive network access.

State management

When updating the deployment of a system with Disnix (such as moving a database from one machine to another), there is a recurring limitation that you may frequently run into -- like Nix, Disnix only manages the static parts of the system, but not any state. This means that a service's deployment can be reproduced elsewhere, but data, such as the content of a database, is not migrated.

For example, the sub system of example applications stores two kinds of data -- records in the MySQL database and files, such as images uploaded in the photo gallery or PDF files uploaded to the literature application. When moving these applications around, the data is not migrated.

As a possible solution, Disnix also provides simple state management facilities. When enabled, Disnix will take snapshots of the databases and filesets on the source machines, transfer the snapshots to the target machines, and finally restore the snapshots when a service moves from one machine to another in the distribution model.

State management can be enabled globally by passing the --deploy-state parameter to disnix-env (or by annotating the services with deployState = true; in the services model):

$ disnixos-env -s services.nix \
-n network-bundles.nix \
-d distribution-bundles.nix --use-nixops --deploy-state

We can also directly use the state management system, e.g. for backup purposes. When running the following command:

$ disnix-snapshot

Disnix takes snapshots of all databases and web application state (e.g. the images in the photo gallery and uploaded PDF files) and transfers them to the coordinator machine. With the dysnomia-snapshots tool we can inspect the snapshot store:

$ dysnomia-snapshots --query-all

and with some shell scripting, the actual contents of the snapshot store:

$ find $(dysnomia-snapshots --resolve $(dysnomia-snapshots --query-all)) -type f

The above output shows that for each MySQL database, we store a compressed SQL dump of the database and for each stateful web application, a compressed tarball of state files.


In this blog post, I have described a more realistic public Disnix example that is inspired by my web framework developments from a long time ago. Aside from automating a system's deployment, the purpose of this blog post is to describe how a system can be decomposed into distributable services that can be deployed with Disnix. Implementing such a system is far from trivial and driven by various kinds of design decisions.


The example web application system can be obtained from my GitHub page. The Disnix deployment expressions can be found in the deployment/ sub folder.

In addition, I have created a Dysnomia module named: fileset that can capture the state files of web applications in a compressed tarball.

After the recent developments the Disnix toolset has reached a new stable point. As a result, I have decided to release Disnix 0.8. Consult the Disnix homepage for more information!

by Sander van der Burg at February 25, 2018 09:59 PM

February 21, 2018

Joachim Schiele



in the nix-language-atlas series i want to discuss how well programming languages i’m familiar with integrate with nix. let’s revisit javascript, as there have been major improvements since i wrote about it last time. we won’t look into the emscripten toolchain.

DSL-PM in general

are ‘domain-specific language package managers’ (DSL-PM) such as npm or yarn a good thing?

integrate DSL-PM(s) into nix/nixpkgs

DSL-PM properties nix requires:

  • reproducible dependency calculations/downloads
  • reproducible configuration, build & installation into the store of each dependency
  • reproducible configuration, build & installation into the store of main target

DSL-PM evolution (javascript)

note: read this if you are interested in npm/yarn differences. it seems that yarn forced many of its concepts into npm.

yarn2nix workflow

let’s see a simple example how to integrate a yarn based project into your nix codebase:

  1. we clone an example:

    git clone
    cd example-yarn-package
    mkdir bin
    touch bin/myapp
    chmod u+x bin/myapp
  2. let’s add a default.nix

    { pkgs ? import <nixpkgs> {} }:
    let
      yarn2nixSrc = pkgs.fetchFromGitHub {
        owner = "moretea";
        repo = "yarn2nix";
        rev = "0472167f2fa329ee4673cedec79a659d23b02e06";
        sha256 = "10gmyrz07y4baq1a1rkip6h2k4fy9is6sjv487fndbc931lwmdaf";
      };
      yarn2nixRepo = pkgs.callPackage yarn2nixSrc {};
      inherit (yarn2nixRepo) mkYarnPackage;
    in
    mkYarnPackage {
      src = ./.;
    }

    and add contents to bin/myapp

    #!/usr/bin/env node
    'use strict';
    // For package dependency demonstration purposes only
    var multiply = require('lodash/multiply');
    console.log("h" + multiply(2,0.5) + ", it works!")

    add a “bin” to package.json, see this patch:
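
    purely for illustration, the relevant part of package.json could look roughly like this (a sketch; the actual patch may differ):

    "bin": {
      "myapp": "bin/myapp"
    },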

  3. now build the source

    nix-build -Q
    building path(s) ‘/nix/store/gkdb8hsykw3idp4xpv6by618wad5s052-offline’
    building path(s) ‘/nix/store/b6kiqlcaacxrlq4nnjgk69mwjnrlvkpv-yarn2nix-modules-0.1.0’
    yarn config v1.3.2
    success Set "yarn-offline-mirror" to "/nix/store/gkdb8hsykw3idp4xpv6by618wad5s052-offline".
    Done in 0.10s.
    yarn install v1.3.2
    [1/4] Resolving packages...
    [2/4] Fetching packages...
    [3/4] Linking dependencies...
    [4/4] Building fresh packages...
    warning Ignored scripts due to flag.
    Done in 4.99s.
    <skipped many lines>
    building path(s) ‘/nix/store/7yg1mah963w35dhxjcylw5q0r62bkk83-yarn.nix’
    these derivations will be built:
    <skipped many lines>
    building path(s) ‘/nix/store/chsx89ifr7dk7mc00d6789indh0rn41x-to-fast-properties-1.0.2.tgz’
    building path(s) ‘/nix/store/sqpsd9y77kkq1vvfpfcfzk9q347dwg14-walker-1.0.7.tgz’
    building path(s) ‘/nix/store/j7fp6iz0wj6dcp8ibsqyph2vy4p007wk-watch-0.10.0.tgz’
    building path(s) ‘/nix/store/nv5p8ijp15mc2kbpp26mvd8pn7vqlwyy-webidl-conversions-3.0.1.tgz’
    building path(s) ‘/nix/store/a3amdrmhzrvd3v1lsy1ih5rfpdamyj46-whatwg-url-3.0.0.tgz’
    building path(s) ‘/nix/store/vhwym48w1pvgzli9i57fnzqw0pjp3a83-window-size-0.2.0.tgz’
    building path(s) ‘/nix/store/2dg59ikzfrdwbcpkwg2xwjxs9rhpilb8-worker-farm-1.3.1.tgz’
    building path(s) ‘/nix/store/497p7kg9iyb09ah0vvqx88dpq3bn843p-yargs-5.0.0.tgz’
    building path(s) ‘/nix/store/w3lmwwq4cac653nhajnvyl1k2vhivaws-yargs-parser-3.2.0.tgz’
    building path(s) ‘/nix/store/ywn6ysmnvrkyzjwpdiwr4iivk9z9kx2y-offline’
    building path(s) ‘/nix/store/pnbb86igrhhfw3bdv9kcqnyz4hxgfkmd-example-yarn-package-modules-1.0.0’
    building path(s) ‘/nix/store/r9s6cmq5ik47c0s1hd48xz3r6pixmm1r-example-yarn-package-1.0.0’

    note: the last output line of nix-build is what we are interested in!

  4. checking the result

    ls -lathr result/node_modules/
    <skipped many lines>
    lrwxrwxrwx 1 root root  102  1. Jan 1970  align-text -> /nix/store/pnbb86igrhhfw3bdv9kcqnyz4hxgfkmd-example-yarn-package-modules-1.0.0/node_modules/align-text
    lrwxrwxrwx 1 root root  105  1. Jan 1970  acorn-globals -> /nix/store/pnbb86igrhhfw3bdv9kcqnyz4hxgfkmd-example-yarn-package-modules-1.0.0/node_modules/acorn-globals
    lrwxrwxrwx 1 root root   97  1. Jan 1970  acorn -> /nix/store/pnbb86igrhhfw3bdv9kcqnyz4hxgfkmd-example-yarn-package-modules-1.0.0/node_modules/acorn
    lrwxrwxrwx 1 root root   98  1. Jan 1970  abbrev -> /nix/store/pnbb86igrhhfw3bdv9kcqnyz4hxgfkmd-example-yarn-package-modules-1.0.0/node_modules/abbrev
    lrwxrwxrwx 1 root root   96  1. Jan 1970  abab -> /nix/store/pnbb86igrhhfw3bdv9kcqnyz4hxgfkmd-example-yarn-package-modules-1.0.0/node_modules/abab
  5. execute the binary

    ./result/bin/myapp
    h1, it works!

using nix-build we declaratively built the node package!

yarn2nix internals

yarn (imperative)

  1. yarn -> read yarn.lock
  2. make / build all the downloads
  3. come up with node_modules folder
  4. yarn install

    note: in general we skip step (4.)

yarn (declarative)

  1. yarn2nix -> read yarn.lock
  2. translate each dependency into a mkDerivation

    each is a mkDerivation residing in /nix/store/…

  3. evaluate each mkDerivation, gain store path
  4. call yarn with the list of all mkDerivation(s), yarn comes up with node_modules

    a single mkDerivation also residing in /nix/store/… but encapsulating other /nix/store entries

  5. final mkDerivation uses this node_modules and creates the store path we are interested in


  • take notice, and i can’t stress this enough: we map yarn.lock entries into single pkgs.stdenv.mkDerivation calls (by calling mkYarnPackage), but later pass all of these into yarn, which creates the node_modules contents from them. this is exactly how a DSL-PM must be designed to be easily integrated
  • yarn.lock obsoletes the requirement of manually creating a dependency file per project:
    • shown in the example above where i only added a default.nix
    • i wish dep’s Gopkg.lock, used with go, would also contain a hash already, but they are using GIT and we have to call nix-prefetch-git to generate a sha256 hash manually
  • yarn2nix might not be as advanced as node2nix, but we use it for all our projects. yarn/yarn2nix is really fast, in dependency calculation and deployment, compared to what we had before
  • i wish that npm’s sha1 would be replaced by something more recent, but in comparison to dep they at least have a hash of some sort
  • yarn2nix is not a part of nixpkgs yet, see

by qknight at February 21, 2018 12:35 PM

February 11, 2018

Sander van der Burg

Deploying systems with circular dependencies using Disnix

Some time ago, during my PhD thesis defence, one of my committee members asked me how I would deploy systems with Disnix in which services have circular dependencies.

It was an interesting question because Disnix defines dependencies between services (that typically involve network connections) as inter-dependencies that have two properties:

  • They allow services to find services they depend on by providing their connection properties
  • They ensure that any inter-dependency is activated before the service itself, so that no failures will occur because of missing dependencies -- in Disnix, a service is either available or unavailable, but never in a broken state due to missing inter-dependencies at runtime.

In a system with circular dependencies, the ordering property is problematic -- it is impossible to activate one dependency before another without having broken connections between them.

During the defence, I had to admit that I have never deployed such systems with Disnix before, but that there were a couple of possible solutions to cope with such constraints. For example, you can propagate properties of the distribution model directly to a service, as opposed to declaring circular inter-dependencies. Then the ordering requirement is not enforced.

I also explained that systems should not have any hard cyclic requirements on other services, but instead compose their (potential bidirectional) communication channels at runtime. Furthermore, I explained that circular dependencies are bad from a reuse perspective -- when two services mutually depend on each other, then they should ideally be one service.

Although the answer sufficed (e.g. it provided the answer that it was possible), the solution basically relies on unconventional usage of the deployment tool. Recently, as a personal exercise, I have decided to dig up this question again and explore the possibilities of deploying systems with circular dependencies.

Chord: a peer-to-peer distributed hash table

When thinking of an example system that has a circular dependency structure, the first thing that came up in my mind is Chord: a peer-to-peer distributed hash table (a copy of the research paper written by Stoica et al can be found here). Interesting fact is that I had to implement it many years ago in the lab course of the distributed algorithms course taught by another member of my PhD thesis committee.

A Chord network has circular runtime dependencies because it has a ring structure -- in a network that has more than one node, each node has a successor and predecessor link, in which no node has the same predecessor or successor and the last successor link refers to the first node:

The Chord nodes (shown in the figure above) constitute a distributed peer-to-peer hash table. In addition to the fact that it can store key and value pairs (all kinds of objects), it also distributes the data over the nodes in the network.

Moreover, its operations are decentralized -- for example, when it is desired to search for an object or to store new objects in the hash table, it is possible to consult any node in the network. The system will redirect the caller to the appropriate node that should host the data.

Various kinds of implementations exist of the Chord protocol. The official reference implementation is a filesystem abstraction layer built on top of it. I experimented with the Java-based OpenChord implementation that is capable of storing arbitrary serializable Java objects.

More details about the implementation details of Chord operations can be found in the research paper.

Deploying a Chord network

One of the challenges I faced during the lab course is that I had to deploy a test Chord network with a small collection of nodes. At that time, I had no proper deployment automation. I ended up writing a bash shell script that spawned a collection of processes in parallel.

Because deployment was complicated, I never tried more complex scenarios than running a small collection of processes on a single machine. Since the lab course did not require anything more than that, I never tried, for example, any real network communication deployments in which I had to distribute Chord nodes over multiple computer systems. The latter would have introduced even more complexity to the deployment process.

Deploying a Chord network basically works as follows:

  • First, we must deploy an initial node that has no connection to a predecessor or successor node.
  • Then for each additional node, we call the join operation to attach it to the network. As explained earlier, a Chord hash-table is decentralized and as a result, we can consult any node we want in the network for the join process. The join and stabilization procedures decide which predecessor and successor a new node actually gets.

There are various strategies to join additional nodes to the network, but what I ended up doing is using the initial node as a bootstrap node -- all successive nodes simply join to the bootstrap node and the network stabilizes to become a ring.

(As a sidenote: you could argue whether this is a good process, since the introduction of a central bootstrap node during the deployment process violates the peer-to-peer constraint, but that is a different story. Obviously, you could also think of other bootstrap strategies, but that is beyond the scope of this blog post).

Automating a Chord network deployment with Disnix

To experiment with a Chord network, I have decided to create a simple server process (using the OpenChord API) whose only responsibility is to store data. It can optionally join another node in the network and it has a command-line interface allowing me to conveniently specify the connection parameters.

The deployment strategy using the initial node as a bootstrap node can be easily automated with Disnix. In the Disnix services model, we can define the bootstrap node as follows:

ChordBootstrapNode = rec {
  name = "ChordBootstrapNode";
  pkg = customPkgs.ChordBootstrapNode { inherit port; };
  port = 8001;
  portAssign = "private";
  type = "process";
};

The above service configuration corresponds to a process that binds the service to a provided TCP port.

Each successive node can be defined as a service that has an inter-dependency on the bootstrap node:

ChordNode1 = rec {
  name = "ChordNode1";
  pkg = customPkgs.ChordNode { inherit port; };
  port = 8002;
  portAssign = "private";
  type = "process";
  dependsOn = {
    inherit ChordBootstrapNode;
  };
};

As can be seen in the above Nix expression, the dependsOn attribute specifies that the node has an inter-dependency on the bootstrap node. The inter-dependency declaration provides the connection settings of the bootstrap node to the command-line utility that spawns the service and ensures that the bootstrap node is deployed first.

By providing an infrastructure model containing a number of machines and writing a distribution model that maps the node to the machine, such as:


{infrastructure}:

{
  ChordBootstrapNode = [ infrastructure.test1 ];
  ChordNode1 = [ infrastructure.test1 ];
  ChordNode2 = [ infrastructure.test2 ];
  ChordNode3 = [ infrastructure.test2 ];
}

we can deploy a Chord network consisting of 4 nodes distributed over two machines by running:

$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix

This is the resulting deployment architecture of the Chord network that gets deployed:

In the above picture, the light grey colored boxes denote machines, the dark grey colored boxes container environments, the ovals services and the arrows inter-dependency relationships.

By running the OpenChord console, we can join any of our nodes in the network, such as the third node deployed to machine test2:

$ /nix/var/nix/profiles/disnix/default/bin/openchord-console
> joinN -port 9000 -bootstrap test2:8001
Trying to join chord network with boostrap URL ocsocket://test2:8001/
URL of created chord node ocsocket://

we can check the references that the console node has:

> refsN
Node: C1 F0 42 95 , ocsocket://
Finger table:
59 E4 86 AC , ocsocket://test2:8001/ (0-159)
Successor List:
59 E4 86 AC , ocsocket://test2:8001/
64 F1 96 B9 , ocsocket://test1:8001/
Predecessor: 9C 51 42 1F , ocsocket://test2:8002/

As may be observed in the output above, our predecessor is the node 3 deployed to machine test2 and our successors are node 3 deployed to machine test2 and node 1 deployed to machine test1.

We can also insert and retrieve the data we want:

> insertN -key test -value test
> entriesN
key = A9 4A 8F E5 , value = [( key = A9 4A 8F E5 , value = test)]

Defining services with circular dependencies in Disnix

As shown in the previous paragraph, the ring structure of a Chord hash table is constructed at runtime. As a result, Disnix does not need to manage any circular dependencies. Instead, it only has to know the dependencies of the bootstrap phase which are not cyclic at all.

I was also curious whether I could modify Disnix to properly define circular dependencies, without any workarounds such as directly propagating properties from the distribution model. As explained in the introduction, inter-dependencies have two properties, of which the second one is problematic: the ordering constraint.

To cope with the problematic ordering property, I have introduced a new property in the services model called: connectsTo allowing users to specify inter-dependencies for which the ordering does not matter. The connectsTo property makes it possible for services to define mutual dependencies on each other.

As an example case, I have extended the Disnix composition examples (a set of trivial examples implementing "Hello world" testcases) with a cyclic example case. In this new sub example, I have created a web application that both contains a server returning the "Hello world!" string and a client displaying the string. The result would be the following screen:

(Does it look cool? :p)

A web application instance is capable of connecting to another web service to obtain the "Hello world!" message to display. We can compose two web application instances that refer to each other to accomplish this.

The corresponding services model looks as follows:

{distribution, invDistribution, system, pkgs}:

let
  customPkgs = import ../top-level/all-packages.nix {
    inherit system pkgs;
  };
in
rec {
  HelloWorldCycle1 = {
    name = "HelloWorldCycle1";
    pkg = customPkgs.HelloWorldCycle;
    connectsTo = {
      # Depends on the other cyclic service
      HelloWorldCycle = HelloWorldCycle2;
    };
    type = "tomcat-webapplication";
  };

  HelloWorldCycle2 = {
    name = "HelloWorldCycle2";
    pkg = customPkgs.HelloWorldCycle;
    connectsTo = {
      # Depends on the other cyclic service
      HelloWorldCycle = HelloWorldCycle1;
    };
    type = "tomcat-webapplication";
  };
}

As may be observed in the above code fragment, the first service has a dependency on the second, while the second also has a dependency on the first. They are allowed to refer to each other because the connectsTo property disregards ordering.

By mapping the services to a network of machines that have Apache Tomcat hosted:


{infrastructure}:

{
  HelloWorldCycle1 = [ infrastructure.test1 ];
  HelloWorldCycle2 = [ infrastructure.test2 ];
}

and deploying the system:

$ disnix-env -s services-cyclic.nix \
-i infrastructure.nix \
-d distribution-cyclic.nix

We end up with a deployment architecture of two services having cyclic dependencies:

To produce the above visualization, I have extended the disnix-visualize tool with support for the connectsTo property that displays inter-dependencies as dashed arrows (as opposed to solid arrows that denote ordinary inter-dependencies).

In addition to the option to specify circular dependencies, the connectsTo property has another interesting use case -- when services have inter-dependencies that may be broken, we can optimize the duration of upgrade processes.

Normally, when a service gets upgraded, all its inter-dependent services will be reactivated. This is an implication of Disnix's strictness -- a service is either available or unavailable, but never broken because of missing inter-dependencies.

However, all these extra reactivations can make the upgrade phase quite expensive. If a link is non-critical and it is permitted to be down for a short while, then redeployments can be made faster.


In this blog post, I have described two deployment experiments with Disnix involving systems that have circular dependencies -- a Chord-based distributed hash table (that constructs a ring structure at runtime) and a trivial toy example system in which two services have mutual dependencies on each other.


The newly introduced connectsTo property is part of the development version of Disnix and will become available in the next release.

The composition example and newly created Chord example can be found on my GitHub page.

by Sander van der Burg at February 11, 2018 11:31 PM

January 31, 2018

Sander van der Burg

Diagnosing problems and running maintenance tasks in a network with services deployed by Disnix

I have been maintaining a production system with Disnix for quite some time. Although deployment works quite conveniently for me (I may probably be a bit biased, since I created Disnix :-) ), you cannot get around unforeseen incidents and problems, such as:

  • Crashing processes due to bugs or excessive load.
  • Database problems, such as inconsistencies in the data.

Errors in distributed systems are typically much more difficult to debug than single machine system failures. For example, tracing the origins of an error in distributed systems is generally hard -- one service's fault may be caused by a message propagated by another service residing on a different machine in the network.

But even if you know the origins of an error (e.g. you can clearly observe that a web application is crashing or that a database connection fails), you may face other kinds of challenges:

  • You have to figure out to which machine in the network a service has been deployed.
  • You have to connect to the machine, e.g. through an SSH connection, to run debugging tasks.
  • You have to know the configuration properties of a service to diagnose it -- in Disnix, as explained in earlier blog posts, services can take any form -- they can be web services, but also web applications, databases and processes.

Because of these challenges, diagnosing errors and running maintenance tasks in a system deployed by Disnix is always unnecessarily time-consuming and inconvenient.

To alleviate this burden, I have developed a small tool and extension that establishes remote shell connections with environments providing all relevant configuration properties. Furthermore, the tool gives suggestions to the end-user explaining what kinds of maintenance tasks he could carry out.

The shell activity of Dysnomia

As explained in previous Disnix-related blog posts, Disnix carries out all activities to deploy a service-oriented system to a network of machines (i.e. to bring it in a running state), such as building services from source code, distributing their intra-dependency closures to the target machines, and activating or deactivating every service.

For the build and distribution activities, Disnix uses, as its name implies, the Nix package manager because it offers a number of powerful properties, such as strong reproducibility guarantees and atomic upgrades and rollbacks.

For the remaining activities that Nix does not support, e.g. activating or deactivating services, Disnix uses a companion tool called Dysnomia. Because services in a Disnix context could take any form, there is no generic means to activate or deactivate them -- for this reason, Dysnomia provides a plugin system with modules that carry out specific activities for a specific service type.

One of the plugins that Dysnomia provides is the deployment of MySQL databases to a MySQL DBMS server. Dysnomia deployment activities are driven by two kinds of configuration specifications. A component configuration defines the properties of a deployable unit, such as a MySQL database:

create table author
( AUTHOR_ID  INTEGER      NOT NULL,
  FirstName  VARCHAR(255) NOT NULL,
  PRIMARY KEY(AUTHOR_ID)
);

create table books
( BOOK_ID    INTEGER      NOT NULL,
  Title      VARCHAR(255) NOT NULL,
  AUTHOR_ID  INTEGER      NOT NULL,
  PRIMARY KEY(BOOK_ID),
  FOREIGN KEY(AUTHOR_ID) references author(AUTHOR_ID) on update cascade on delete cascade
);

The above configuration is a MySQL script (~/testdb) that creates the database schema consisting of two tables.

The container configuration captures properties of the environment in which the component should be hosted, which is, in this particular case, a MySQL DBMS server:
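
A minimal sketch of what such a container configuration could look like is shown below (the credential values are placeholders):

type=mysql-database
mysqlUsername=root
mysqlPassword=secret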


The above container configuration (~/mysql-production) defines the type, stating that the mysql-database plugin must be used, and provides the authentication credentials required to connect to the DBMS server.

The Dysnomia plugin for MySQL implements various kinds of deployment activities for MySQL databases. For example, the activation activity is implemented as follows:


case "$1" in
# Initalize the given schema if the database does not exists
if [ "$(echo "show databases" | @mysql@ --user=$mysqlUsername --password=$mysqlPassword -N | grep -x $componentName)" = "" ]
( echo "create database $componentName;"
echo "use $componentName;"

if [ -d $2/mysql-databases ]
cat $2/mysql-databases/*.sql
) | @mysql@ $socketArg --user=$mysqlUsername --password=$mysqlPassword -N


The above code fragment checks whether a database with the given schema exists and if it does not, it will create it by running the database initialization script provided by the component configuration. As may also be observed, the above activity uses the container properties (such as the authentication credentials) as environment variables.

Dysnomia activities can be executed by invoking the dysnomia command-line tool. For example, the following command will activate the MySQL database in the MySQL database server:

$ dysnomia --operation activate \
--component ~/testdb --container ~/mysql-production

To make the execution of arbitrary tasks more convenient, I have created a new Dysnomia option called: shell. The shell operation is basically an activity that does not execute anything, but instead spawns a shell session that provides the container configuration properties as environment variables.

Moreover, the shell activity of a Dysnomia plugin typically displays suggestions for shell commands that the user may want to carry out.

For example, when we run the following command:

$ dysnomia --shell \
--component ~/testdb --container ~/mysql-production

Dysnomia spawns a shell session that shows the following:

This is a shell session that can be used to control the 'staff' MySQL database.

Module specific environment variables:
mysqlUsername Username of the account that has the privileges to administer
the database
mysqlPassword Password of the above account
mysqlSocket Path to the UNIX domain socket that is used to connect to the
server (optional)

Some useful commands:
/nix/store/h0kcf5g2ssyancr9m2i8sr09b3wq2zy0-mariadb-10.1.28/bin/mysql --user=$mysqlUsername --password=$mysqlPassword staff Start a MySQL interactive terminal

General environment variables:
this_dysnomia_module Path to the Dysnomia module
this_component Path to the mutable component
this_container Path to the container configuration file


By executing the command-line suggestion shown above in the above shell session, we get a MySQL interactive terminal allowing us to execute arbitrary SQL commands. It saves us the burden of looking up all the MySQL configuration properties, such as the authentication credentials and the database name.

The Dysnomia shell feature is heavily inspired by nix-shell that works in quite a similar way -- it will take the build dependencies of a package build as inputs (which typically manifest themselves as environment variables) and fetch the sources, but it will not execute the package build procedure. Instead, it spawns an interactive shell session allowing the user to execute arbitrary build tasks. This Nix feature is particularly useful for development projects.
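
For example, the following standard nix-shell invocation (using the hello package purely as an illustration) spawns such a session, in which the individual build phases, such as unpackPhase and buildPhase, can be executed manually:

$ nix-shell '<nixpkgs>' -A hello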

Diagnosing services with Disnix

In addition to extending Dysnomia with the shell feature, I have also extended Disnix to make this feature available in a distributed context.

The following command can be executed to spawn a shell for a particular service of the ridiculous staff tracker example (that happens to be a MySQL database):

$ disnix-diagnose -S staff
[test2]: Connecting to service: /nix/store/yazjd3hcb9ds160cq03z66y5crbxiwq0-staff deployed to container: mysql-database
This is a shell session that can be used to control the 'staff' MySQL database.

Module specific environment variables:
mysqlUsername Username of the account that has the privileges to administer
the database
mysqlPassword Password of the above account
mysqlSocket Path to the UNIX domain socket that is used to connect to the
server (optional)

Some useful commands:
/nix/store/h0kcf5g2ssyancr9m2i8sr09b3wq2zy0-mariadb-10.1.28/bin/mysql --user=$mysqlUsername --password=$mysqlPassword staff Start a MySQL interactive terminal

General environment variables:
this_dysnomia_module Path to the Dysnomia module
this_component Path to the mutable component
this_container Path to the container configuration file


The above command-line instruction will lookup the location of the staff database in the configuration of the system that is currently deployed, connects to it (typically through SSH) and spawns a Dysnomia shell for the given service type.

In addition to an interactive shell, you can also directly run shell commands. For example, the following command will query all the staff records:

$ disnix-diagnose -S staff \
--command 'echo "select * from staff" | mysql -u $mysqlUsername -p $mysqlPassword staff'

In most cases, only one instance of a service exists, but Disnix can also deploy redundant instances of the same service. For example, we may want to deploy two redundant instances of the web application front end in the distribution.nix configuration file:

stafftracker = [ infrastructure.test1 infrastructure.test2 ];

When trying to spawn a Dysnomia shell, the tool returns an error because it does not know which instance to connect to:

$ disnix-diagnose -S stafftracker
Multiple mappings found! Please specify a --target and, optionally, a
--container parameter! Alternatively, you can execute commands for all possible
service mappings by providing a --command parameter.

This service has been mapped to:

container: apache-webapplication, target: test1
container: apache-webapplication, target: test2

In this case, we must refine our query with a --target parameter. For example, the following command connects to the web front-end on the test1 machine:

$ disnix-diagnose -S stafftracker --target test1

It is still possible to execute remote shell commands for redundantly deployed services. For example, the following command gets executed twice, because we have two instances deployed:

$ disnix-diagnose -S stafftracker \
--command 'echo I will see this message two times!'

In some cases, you may want to execute other kinds of maintenance tasks or you simply want to know where a particular service resides. This can be done by running the following command:

$ disnix-diagnose -S stafftracker --show-mappings
This service has been mapped to:

container: apache-webapplication, target: test1
container: apache-webapplication, target: test2


In this blog post, I have described a new feature of Dysnomia and Disnix that spawns interactive shell sessions making problem solving and maintenance tasks more convenient.

disnix-diagnose and the shell extension are part of the development versions of Disnix and Dysnomia and will become available in the next release.

by Sander van der Burg at January 31, 2018 10:55 PM

January 08, 2018

Sander van der Burg

Syntax highlighting Nix expressions in mcedit

The year 2017 has passed and 2018 has now started. For quite a few people, this is a good moment for reflection (as I have done in my previous blog post) and to think about new year's resolutions. New year's resolutions are typically about adopting good new habits and rejecting old bad ones.

Orthodox file managers

One of my unconventional habits is that I like orthodox file managers and that I extensively use them. Orthodox file managers have a number of interesting properties:

  • They typically display textual lists of files, as opposed to icons or thumbnails.
  • They typically have two panels for displaying files: one source and one destination panel.
  • They may also have a third panel (typically placed underneath the source and destination panels) that serves as a command-line prompt.

The first orthodox file manager I ever used was DirectoryOpus on the Commodore Amiga. For nearly all operating systems and desktop environments that I have touched ever since, I have been using some kind of orthodox file manager, such as:

Over the years, I have received many questions from various kinds of people -- they typically ask me what is so appealing about using such a "weird program" and why I have never considered switching to a more "traditional way" of working, because "that would be more efficient".

Aside from the fact that it may probably be mostly inertia, my motivating factors are the following:

  • Lists of files allow me to see more relevant and interesting details. In many traditional file managers, much of the screen space is wasted by icons and the spacing between them. Furthermore, traditional file managers typically hide properties of files that I want to know about, such as a file's size or modification timestamp.
  • Some file operations involve a source and destination, such as copying or moving files. In an orthodox file manager, these operations can be executed much more intuitively IMO because there is always a source and destination panel present. When I am using a traditional file manager, I typically have to interrupt my workflow to open a second destination window, and use it to browse to my target location.
  • All the orthodox file managers I have mentioned, implement virtual file system support allowing me to browse compressed archives and remote network locations as if they were directories.

    Nowadays, VFS support is not exclusive to orthodox file managers anymore, but it has existed in orthodox file managers for much longer.

    Moreover, I consider the VFS properties of orthodox file managers to be much more powerful. For example, the Windows file explorer can browse Zip archives, but Total Commander also has first class support for many more kinds of archives, such as RAR, ACE, LhA, 7-zip and tarballs, and can be easily extended to support many other kinds of file systems by an add-on system.
  • They have very powerful search properties. For example, searching for a collection of files having certain kinds of text patterns can be done quite conveniently.

    As with VFS support, this feature is not exclusive to orthodox file managers, but I have noticed that their search functions are still considerably more powerful than most traditional file managers.

From all the orthodox file managers listed above, Midnight Commander is the one I have been using the longest -- it was one of the first programs I used when I started using Linux (in 1999) and I have been using it ever since.

Midnight Commander also includes a text editor named: mcedit that integrates nicely with the search function. Although I have experience with half a dozen editors (such as vim and various IDEs, such as Eclipse and Netbeans), I have been using mcedit, mostly for editing configuration files, shell scripts and simple programs.

Syntax highlighting in mcedit

Earlier in the introduction I mentioned: "new year's resolutions", which may probably suggest that I intend to quit using orthodox file managers and an unconventional editor, such as mcedit. Actually, this is not something I am planning to :-).

In addition to Midnight Commander and mcedit, I have also been using another unconventional program for quite some time, namely: the Nix package manager since late 2007.

What I noticed is that, despite being primitive, mcedit has reasonable syntax highlighting support for a variety of programming languages. Unfortunately, what I still miss is support for the Nix expression language -- the DSL that is used to specify package builds and system configurations.

For quite some time, editing Nix expressions was a primitive process for me. To improve my unconventional way of working a bit, I have decided to address this shortcoming in my Christmas break by creating a Nix syntax configuration file for mcedit.

Implementing a syntax configuration for the Nix expression language

mcedit provides syntax highlighting (the format is described in the manual page) for a number of programming languages. The syntax highlighting configurations seem to follow similar conventions, probably because of the fact that programming languages influence each other a lot.

As with many programming languages, the Nix expression language has its own influences as well, such as Haskell, C, bash, JavaScript (more specifically: the JSON subset) and Perl.

I have decided to adopt similar syntax highlighting conventions in the Nix expression syntax configuration. I started by examining Nix's lexer module (src/libexpr/lexer.l):

  • First, I took the keywords and operators, and configured the syntax highlighter to color them yellow. Yellow keywords is a convention that other syntax highlighting configurations also seem to follow.
  • Then I implemented support for single line and multi-line comments. The context directive turned out to be very helpful -- it makes it possible to color all characters between a start and stop token. Comments in mcedit are typically brown.
  • The next step was the numbers. Unfortunately, the syntax highlighter does not have full support for regular expressions. For example, you cannot specify character ranges, such as [0-9]+. Instead you must enumerate all characters one by one:

    keyword whole \[0123456789\]

    Floating point numbers were a bit trickier to support, but fortunately I could steal them from the JavaScript syntax highlighter, since the formatting Nix uses is exactly the same.
  • Strings were also relatively simple to implement (with the exception of anti-quotations) by using the context directive. I have configured the syntax highlighter to color them green, similar to other programming languages.
  • The Nix expression language also supports objects of the URL or path type. Since there is no other language that I am aware of that has a similar property, I have decided to color them white, with the exception of system paths -- system paths look very similar to the C preprocessor's #include path arguments, so I have decided to color them red, similar to the C syntax highlighter.

    To properly support paths, I implemented an approximation of the regular expression used in Nix's lexer. Without full regular expression support, it is extremely difficult to make a direct translation, but for all my use cases it seems to work fine.

After configuring the above properties, I noticed that there were still some bits missing. The next step was opening the parser configuration (src/libexpr/parser.y) and look for any missing characters.

I discovered that there were still separators that I needed to add (e.g. parenthesis, brackets, semi-colons etc.). I have configured the syntax highlighter to color them bright cyan, with the exception of semi-colons -- I colored them purple, similar to the C and JavaScript syntax highlighter.

I also added syntax highlighting for the builtin functions (e.g. derivation, map and toString) so that they appear in cyan. This convention is similar to bash's syntax highlighting.
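
To give an impression of the format, the fragment below is an abbreviated, purely illustrative sketch of such a syntax configuration (it is not the actual file, which can be obtained from my GitHub page):

# keywords of the Nix expression language
context default
    keyword whole let yellow
    keyword whole in yellow
    keyword whole rec yellow
    keyword whole inherit yellow
    keyword whole with yellow
    keyword whole import cyan

# single-line and multi-line comments
context # \n brown
context exclusive /\* \*/ brown

# strings
context " " green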

The implementation process of the Nix syntax configuration was generally straightforward, except for one thing -- anti-quotations. Because we only have a primitive lexer and no parser, it is impossible to have a configuration that covers all possibilities. For example, anti-quotations in strings that embed strings cannot be properly supported. I ended up with an implementation that only works for simple cases (e.g. a reference to an identifier or a file).


The syntax highlighter works quite well for the majority of expressions in the Nix packages collection, such as the expression for the Disnix package and the top-level expression that contains the package compositions. Also, most Hydra release.nix configurations, such as the one used for node2nix, seem to work well.


The Nix syntax configuration can be obtained from my GitHub page. It can be used by installing it in a user's personal configuration directory, or by deploying a patched version of Midnight Commander. More details can be found in the README.

by Sander van der Burg at January 08, 2018 10:44 PM

December 19, 2017

Sander van der Burg

Bypassing NPM's content addressable cache in Nix deployments and generating expressions from lock files

Roughly half a year ago, Node.js version 8 was released, which also includes a major NPM package manager update (version 5). NPM version 5 has a number of substantial changes over the previous version, such as:

  • It uses package lock files that pinpoint the resolved versions of all dependencies and transitive dependencies. When a project with a bundled package-lock.json file is deployed, NPM will use the pinpointed versions of the packages that are in the lock file making it possible to exactly reproduce a deployment elsewhere. When a project without a lock file is deployed for the first time, NPM will generate a lock file.
  • It has a content-addressable cache that optimizes package retrieval processes and allows fully offline package installations.
  • It uses SHA-512 hashing (as opposed to the significantly weakened SHA-1) for packages published in the NPM registry.

Although these features offer significant benefits over previous versions, e.g. NPM deployments are now much faster, more secure and more reliable, they also come with a big drawback -- they break the integration with the Nix package manager in node2nix. Solving these problems was much harder than I initially anticipated.

In this blog post, I will explain how I have adjusted the generation procedure to cope with NPM's new conflicting features. Moreover, I have extended node2nix with the ability to generate Nix expressions from package-lock.json files.

Lock files

One of the major new features in NPM 5.0 is the lock file (the idea itself is not so new since NPM-inspired solutions such as yarn and the PHP-based composer have already supported them for quite some time).

A major drawback of NPM's dependency management is that version specifiers are nominal. They can refer to specific versions of packages in the NPM registry, but also to version ranges, or external artifacts such as Git repositories. The latter category of version specifiers affects reproducibility -- for example, the version range specifier >= 1.0.0 may refer to version 1.0.0 today and to version 1.0.1 tomorrow making it extremely hard to reproduce a deployment elsewhere.
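
As a small illustration of this effect (this snippet is not part of any NPM tooling and the version lists are made up), the semver library that NPM uses for version ranges can be asked which version a range resolves to. As soon as the registry gains a newer matching release, the same range yields a different result:

var semver = require('semver');

// yesterday the registry only provided 1.0.0, today it also provides 1.0.1:
// the same range specifier now resolves to a different version
console.log(semver.maxSatisfying([ "1.0.0" ], ">= 1.0.0"));          // 1.0.0
console.log(semver.maxSatisfying([ "1.0.0", "1.0.1" ], ">= 1.0.0")); // 1.0.1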

In a development project, it is still possible to control the versions of dependencies by using a package.json configuration that only refers to exact versions. However, for transitive dependencies that may still have loose version specifiers there is very little control.

To solve this reproducibility problem, a package-lock.json file can be used -- a package lock file pinpoints the resolved versions of all dependencies and transitive dependencies making it possible to reproduce the exact same deployment elsewhere.

For example, for the NiJS package with the following package.json configuration:

"name": "nijs",
"version": "0.0.25",
"description": "An internal DSL for the Nix package manager in JavaScript",
"bin": {
"nijs-build": "./bin/nijs-build.js",
"nijs-execute": "./bin/nijs-execute.js"
"main": "./lib/nijs",
"dependencies": {
"optparse": ">= 1.0.3",
"slasp": "0.0.4"
"devDependencies": {
"jsdoc": "*"

NPM may produce the following partial package-lock.json file:

"name": "nijs",
"version": "0.0.25",
"lockfileVersion": 1,
"requires": true,
"dependencies": {
"optparse": {
"version": "1.0.5",
"resolved": "",
"integrity": "sha1-dedallBmEescZbqJAY/wipgeLBY="
"requizzle": {
"version": "0.2.1",
"resolved": "",
"integrity": "sha1-aUPDUwxNmn5G8c3dUcFY/GcM294=",
"dev": true,
"requires": {
"underscore": "1.6.0"
"dependencies": {
"underscore": {
"version": "1.6.0",
"resolved": "",
"integrity": "sha1-izixDKze9jM3uLJOT/htRa6lKag=",
"dev": true

The above lock file pinpoints all dependencies and development dependencies including transitive dependencies to exact versions, including the locations where they can be obtained from and integrity hash codes that can be used to validate them.

The lock file can also be used to derive the entire structure of the node_modules/ folder in which all dependencies are stored. The top level dependencies property captures all packages that reside in the project's node_modules/ folder. The dependencies property of each dependency captures all packages that reside in a dependency's node_modules/ folder.
If NPM 5.0 is used and no package-lock.json is present in a project, it will automatically generate one.
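
To make that structure a bit more tangible, the following fragment is a minimal sketch (it is not part of node2nix) that derives the node_modules/ layout from a version 1 lock file by recursively following the dependencies properties:

var lock = require('./package-lock.json');

function printTree(dependencies, baseDir) {
    if(!dependencies) return;

    for(var name in dependencies) {
        var entry = dependencies[name];
        var dir = baseDir + "/node_modules/" + name;
        console.log(dir + " (" + entry.version + ")");
        printTree(entry.dependencies, dir); // a dependency's own node_modules/ folder
    }
}

printTree(lock.dependencies, ".");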

Substituting dependencies

As mentioned in an earlier blog post, the most important technique to make Nix-NPM integration work is by substituting NPM's dependency management activities that conflict with Nix's dependency management -- Nix is much more strict with handling dependencies (e.g. it uses hash codes derived from the build inputs to identify a package as opposed to a name and version number).

Furthermore, in Nix build environments network access is restricted to prevent unknown artifacts from influencing the outcome of a build. Only so-called fixed output derivations, whose output hashes should be known in advance (so that Nix can verify their integrity), are allowed to obtain artifacts from external sources.

To substitute NPM's dependency management, populating the node_modules/ folder ourselves with all required dependencies and substituting certain version specifiers, such as Git URLs, used to suffice. Unfortunately, with the newest NPM this substitution process no longer works. When running the following command in a Nix builder environment:

$ npm --offline install ...

The NPM package manager is forced to work in offline mode consulting its content-addressable cache for the retrieval of external artifacts. If NPM needs to consult an external resource, it throws an error.

Despite the fact that all dependencies are present in the node_modules/ folder, deployment fails with the following error message:

npm ERR! request to failed: cache mode is 'only-if-cached' but no cached response available.

At first sight, the error message suggests that NPM always requires the dependencies to reside in the content-addressable cache to prevent it from downloading it from external sites. However, when we use NPM outside a Nix builder environment, wipe the cache, and perform an offline installation, it does seem to work properly:

$ npm install
$ rm -rf ~/.npm/_cacache
$ npm --offline install

Further experimentation reveals that NPM augments the package.json configuration files of all dependencies with additional metadata that are prefixed by an underscore (_):

"_from": "optparse@>= 1.0.3",
"_id": "optparse@1.0.5",
"_inBundle": false,
"_integrity": "sha1-dedallBmEescZbqJAY/wipgeLBY=",
"_location": "/optparse",
"_phantomChildren": {},
"_requested": {
"type": "range",
"registry": true,
"raw": "optparse@>= 1.0.3",
"name": "optparse",
"escapedName": "optparse",
"rawSpec": ">= 1.0.3",
"saveSpec": null,
"fetchSpec": ">= 1.0.3"
"_requiredBy": [
"_resolved": "",
"_shasum": "75e75a96506611eb1c65ba89018ff08a981e2c16",
"_spec": "optparse@>= 1.0.3",
"_where": "/home/sander/teststuff/nijs",
"name": "optparse",
"version": "1.0.5",

It turns out that when the _integrity property in a package.json configuration matches the integrity field of the dependency in the lock file, NPM will not attempt to reinstall it.

To summarize, the problem can be solved in Nix builder environments by running a script that augments the package.json configuration files with _integrity fields with the values from the package-lock.json file.

For Git repository dependency specifiers, there seems to be an additional requirement -- it also seems to require the _resolved field to be set to the URL of the repository.
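
The fragment below is a simplified sketch of such a script (the actual node2nix implementation differs in its details): it recursively visits every dependency in the node_modules/ hierarchy and copies the integrity and resolved fields from the lock file into the corresponding package.json configuration:

var fs = require('fs');
var path = require('path');

var lock = JSON.parse(fs.readFileSync("package-lock.json", "utf8"));

function augment(dependencies, baseDir) {
    if(!dependencies) return;

    for(var name in dependencies) {
        var entry = dependencies[name];
        var pkgDir = path.join(baseDir, "node_modules", name);
        var pkgJSONPath = path.join(pkgDir, "package.json");

        if(fs.existsSync(pkgJSONPath)) {
            var pkg = JSON.parse(fs.readFileSync(pkgJSONPath, "utf8"));
            pkg._integrity = entry.integrity; // makes NPM skip the re-installation
            pkg._resolved = entry.resolved;   // additionally required for Git dependencies
            fs.writeFileSync(pkgJSONPath, JSON.stringify(pkg, null, 2));
        }

        augment(entry.dependencies, pkgDir); // descend into nested node_modules/ folders
    }
}

augment(lock.dependencies, ".");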

Reconstructing package lock files

The fact that we have discovered how to bypass the cache in a Nix builder environment makes it possible to fix the integration with the latest NPM. However, one of the limitations of this approach is that it only works for projects that have a package-lock.json file included.

Since lock files are still a relatively new concept, many NPM projects (in particular older projects that are not frequently updated) may not have a lock file included. As a result, their deployments will still fail.

Fortunately, we can reconstruct a minimal lock file from the project's package.json configuration and compose dependencies objects by traversing the package.json configurations inside the node_modules/ directory hierarchy.

The only attributes that cannot be immediately derived are the integrity fields containing the hashes that are used for validation. It seems that we can bypass the integrity check by providing a dummy hash, such as:

integrity: "sha1-000000000000000000000000000=",

NPM does not seem to object when it encounters these dummy hashes allowing us to deploy projects with a reconstructed package-lock.json file. The solution is a very ugly hack, but it seems to work.
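
The reconstruction could, for example, proceed along the following lines. This is again a simplified sketch rather than the builder's actual code: it walks an already populated node_modules/ hierarchy and emits a lock file in which every entry carries the dummy integrity hash shown above:

var fs = require('fs');
var path = require('path');

var dummyIntegrity = "sha1-000000000000000000000000000=";

function collectDependencies(baseDir) {
    var modulesDir = path.join(baseDir, "node_modules");
    if(!fs.existsSync(modulesDir)) return undefined;

    var dependencies = {};

    fs.readdirSync(modulesDir).forEach(function(name) {
        var pkgJSONPath = path.join(modulesDir, name, "package.json");
        if(!fs.existsSync(pkgJSONPath)) return; // skip entries such as .bin

        var pkg = JSON.parse(fs.readFileSync(pkgJSONPath, "utf8"));
        dependencies[name] = {
            version: pkg.version,
            integrity: dummyIntegrity,
            dependencies: collectDependencies(path.join(modulesDir, name))
        };
    });

    return dependencies;
}

var pkg = JSON.parse(fs.readFileSync("package.json", "utf8"));

fs.writeFileSync("package-lock.json", JSON.stringify({
    name:,
    version: pkg.version,
    lockfileVersion: 1,
    requires: true,
    dependencies: collectDependencies(".")
}, null, 2));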

Generating Nix expressions from lock files

As explained earlier, lock files pinpoint the exact versions of all dependencies and transitive dependencies and describe the structure of the entire dependency graph.

Instead of simulating NPM's dependency resolution algorithm, we can also use the data provided by the lock files to generate Nix expressions. Lock files appear to contain most of the data we need -- the URLs/locations of the external artifacts and integrity hashes that we can use for validation.

Using lock files for generation offers the following advantages:

  • We no longer need to simulate NPM's dependency resolution algorithm. Despite my best efforts and fairly good results, it is hard to truly make it 100% identical to NPM's. When using a lock file, the dependency graph is already given, making deployment results much more accurate.
  • We no longer need to consult external resources to resolve versions and compute hashes making the generation process much faster. The only exception seems to be Git repositories -- Nix needs to know the output hash of the clone whereas for NPM the revision hash suffices. When we encounter a Git dependency, we still need to download it and compute the output hash.

Another minor technical challenge is posed by the integrity hashes -- in NPM lock files integrity hashes are in base-64 notation, whereas Nix uses hexadecimal notation or its own custom base-32 notation. We need to convert the NPM integrity hashes to a notation that Nix understands.
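
Such a conversion only takes a few lines of JavaScript. The helper name below is made up and the snippet is only a sketch: it splits an SRI-style integrity string into its algorithm and base-64 parts and decodes the latter into hexadecimal notation:

// "sha1-<base64>" or "sha512-<base64>" -> { algorithm: ..., hash: "<hex>" }
function convertIntegrity(integrity) {
    var separator = integrity.indexOf("-");
    var algorithm = integrity.substring(0, separator);
    var base64Hash = integrity.substring(separator + 1);

    return {
        algorithm: algorithm,
        hash: Buffer.from(base64Hash, "base64").toString("hex")
    };
}

console.log(convertIntegrity("sha1-dedallBmEescZbqJAY/wipgeLBY="));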

Unfortunately, lock files can only be used in development projects. It appears that packages that are installed directly from the NPM registry, e.g. end-user packages that are installed globally through npm install -g, never include a package lock file. (It even seems that the NPM registry blacklists the lock files when publishing a package in the registry).

For this reason, we still need to keep our own implementation of the dependency resolution algorithm.


By adding a script that augments the dependencies' package.json configuration files with _integrity fields and by optionally reconstructing a package-lock.json file, NPM integration with Nix has been restored.

Using the new NPM 5.x features is straight forward. The following command can be used to generate Nix expressions for a development project with a lock file:

$ node2nix -8 -l package-lock.json

The above command will directly generate Nix expressions from the package lock file, resulting in a much faster generation process.

When a development project does not ship with a lock file, you can use the following command-line instruction:

$ node2nix -8

The generator will use its own implementation of NPM's dependency resolution algorithm. When deploying the package, the builder will reconstruct a dummy lock file to allow the deployment to succeed.

In addition to development projects, it is also possible to install end-user software, by providing a JSON file (e.g. pkgs.json) that defines an array of dependency specifiers:

, { "node2nix": "1.5.0" }

A Node.js 8 compatible expression can be generated as follows:

$ node2nix -8 -i pkgs.json


The approach described in this blog post is not the first attempt to fix NPM 5.x integration. In my first attempt, I tried populating NPM's content-addressable cache in the Nix builder environment with artifacts that were obtained by the Nix package manager and forcing NPM to work in offline mode.

NPM exposes its download and cache-related functionality as a set of reusable APIs. For downloading packages from the NPM registry, pacote can be used. For downloading external artifacts through the HTTP protocol make-fetch-happen can be used. Both APIs are built on top of the content-addressable cache that can be controlled through the lower-level cacache API.

The real difficulty is that neither the high-level NPM APIs nor the npm cache command-line instruction work with local directories or local files -- they will only add artifacts to the cache if they come from a remote location. I have partially built my own API on top of cacache to populate the NPM cache with locally stored artifacts pretending that they were fetched from a remote location.

Although I had some basic functionality supported, it turned out to be much more complicated and time consuming to get all functionality implemented.

Furthermore, the NPM authors never promised that these APIs are stable, so the implementation may break at some point in time. As a result, I have decided to look for another approach.


I just released node2nix version 1.5.0 with NPM 5.x support. It can be obtained from the NPM registry, Github, or directly from the Nixpkgs repository.

by Sander van der Burg at December 19, 2017 09:26 PM

December 08, 2017

Sander van der Burg

Controlling a Hydra server from a Node.js application

In a number of earlier blog posts, I have elaborated on the development of custom configuration management solutions that use Nix to carry out a variety (or all) of their deployment tasks.

For example, an interesting consideration is that when a different programming language than Nix's expression language is used, an internal DSL can be used to make integration with the Nix expression language more convenient and safe.

Another problem that might surface when developing a custom solution is the scale at which builds are carried out. When it is desired to carry out builds on a large scale, additional concerns must be addressed beyond the solutions that the Nix package manager provides, such as managing a build queue, timing out builds that appear to be stuck, and sending notifications in case of a success or failure.

In such cases, it may be very tempting to address these concerns in your own custom build solution. However, some of these concerns are quite difficult to implement, such as build queue management.

There is already a Nix project that solves many of these problems, namely: Hydra, the Nix-based continuous integration service. One of Hydra's lesser known features is that it also exposes many of its operations through a REST API. Apart from a very small section in the Hydra manual, the API is not very well documented and -- as a result -- a bit cumbersome to use.

In this blog post, I will describe a recently developed Node.js package that exposes most of Hydra's functionality with an API that is documented and convenient to use. It can be used to conveniently integrate a Node.js application with Hydra's build management services. To demonstrate its usefulness, I have developed a simple command-line utility that can be used to remotely control a Hydra instance.

Finding relevant API calls

The Hydra API is not extensively documented. The manual has a small section titled: "Using the external API" that demonstrates some basic usage scenarios, such as querying data in JSON format.

Querying data is basically a nearly identical operation to opening the Hydra web front-end in a web browser. The only difference is that the GET request should include a header field stating that the output format should be displayed in JSON format.

For example, the following request fetches an overview of projects in JSON format (as opposed to HTML which is the default):

$ curl -H "Accept: application/json"

In addition to querying data in JSON format, Hydra supports many additional REST operations, such as creating, updating or deleting projects and jobsets. Unfortunately, these operations are not documented anywhere in the manual.

By analyzing the Hydra source code and running the following script I could (sort of) semi-automatically derive the REST operations that are interesting to invoke:

$ cd hydra/src/lib/Hydra/Controller
$ find . -type f | while read i
do
    echo "# $i"
    grep -E '^sub [a-zA-Z0-9]+[ ]?\:' $i
done

Running the script shows me the following (partial) output:

# ./
sub projectChain :Chained('/') :PathPart('project') :CaptureArgs(1) {
sub project :Chained('projectChain') :PathPart('') :Args(0) :ActionClass('REST::ForBrowsers') { }
sub edit : Chained('projectChain') PathPart Args(0) {
sub create : Path('/create-project') {

The web front-end component of Hydra uses Catalyst, a Perl-based MVC framework. One of the conventions that Catalyst uses is that every request invokes an annotated Perl subroutine.

The above script scans the controller modules, and shows for each module annotated subroutines that may be of interest. In particular, the subroutines with a :ActionClass('REST::ForBrowsers') annotation are relevant -- they encapsulate create, read, update and delete operations for various kinds of data.

For example, the project subroutine shown above supports the following REST operations:

sub project_GET {

sub project_PUT {

sub project_DELETE {

The above operations will query the properties of a project (GET), create a new or update an existing project (PUT) or delete a project (DELETE).

With the manual, the extracted information and reading the source code a bit (which is unavoidable since many details are missing such as formal function parameters), I was able to develop a client API supporting a substantial amount of Hydra features.

API usage

Using the client API is relatively straightforward. The general idea is to instantiate the HydraConnector prototype to connect to a Hydra server, such as a local test instance:

var HydraConnector = require('hydra-connector').HydraConnector;

var hydraConnector = new HydraConnector("http://localhost");

and then invoke any of its supported operations:

hydraConnector.queryProjects(function(err, projects) {
    if(err) {
        console.log("Some error occurred: "+err);
    } else {
        for(var i = 0; i < projects.length; i++) {
            var project = projects[i];
            console.log("Project: ";
        }
    }
});

The above code fragment fetches all projects and displays their names.

Write operations require a user to be logged in with the appropriate permissions. By invoking the login() method, we can authenticate ourselves:

hydraConnector.login("admin", "myverysecretpassword", function(err) {
if(err) {
console.log("Login succeeded!");
} else {
console.log("Some error occurred: "+err);

Besides logging in, the client API also implements a logout() operation to relinquish write operation rights.

As explained in an earlier blog post, write operations require user authentication but all data is publicly readable. When it is desired to restrict read access, Hydra can be placed behind a reverse proxy that requires HTTP basic authentication.

The client API also supports HTTP basic authentication to support this usage scenario:

var hydraConnector = new HydraConnector("http://localhost",
"12345"); // HTTP basic credentials

Using the command-line client

To demonstrate the usefulness of the API, I have created a utility that serves as a command-line equivalent of Hydra's web front-end. The following command shows an overview of projects:

$ hydra-connect --url --projects

The command-line utility also provides suggestions for additional command-line instructions that could be relevant, such as querying more detailed information or modifying data.

The following command shows the properties of an individual project:

$ hydra-connect --url --project disnix

By default, the command-line utility will show a somewhat readable textual representation of the data. It is also possible to display the "raw" JSON data of any request for debugging purposes:

$ hydra-connect --url --project disnix --json

We can also use the command-line utility to create or edit data, such as a project:

$ hydra-connect --url --login
$ export HYDRA_SESSION=...
$ hydra-connect --url --project disnix --modify

The application will show command prompts asking the user to provide all the relevant project properties.

When changes have been triggered, the active builds in the queue can be inspected by running:

$ hydra-connect --url --status

Many other operations are supported. For example, you can inspect the properties of an individual build:

$ hydra-connect --url --build 65100054

And request various kinds of build related artifacts, such as the raw build log of a build:

$ hydra-connect --url --build 65100054 --raw-log

or download one of its build products:

$ hydra-connect --url --build 65100054 \
--build-product 4 > /home/sander/Download/disnix-0.7.2.tar.gz


In this blog post, I have shown the node-hydra-connector package that can be used to remotely control a Hydra server from a Node.js application. The package can be obtained from the NPM registry and the corresponding GitHub repository.

by Sander van der Burg at December 08, 2017 09:32 PM

November 28, 2017

Joachim Schiele



many programming languages have testing frameworks and sometimes they are even used. in nixpkgs i see them a lot, most often in libraries written in perl, python and go.

this posting is about how we adapted this concept into nix. AFAIK this hasn’t been done before so it might be worth sharing.

how testing works in perl/python/go

the tests are executed during the build of the software and most often implemented in libraries.

sometimes there is doCheck = false; as the tests fail for some reason. often these tests are impure and the build environment can't perform the required actions, for instance visiting some remote website during build time, which is not possible in a nix-build phase.

testing systems i’m referring to:

python example

av = buildPythonPackage rec {
  name = "av-${version}";
  version = "0.2.4";

  src = pkgs.fetchurl {
    url = "mirror://pypi/a/av/${name}.tar.gz";
    sha256 = "bdc7e2e213cb9041d9c5c0497e6f8c47e84f89f1f2673a46d891cca0fb0d19a0";
  };

  buildInputs = (with self; [ nose pillow numpy ])
    ++ (with pkgs; [ ffmpeg_2 git libav pkgconfig ]);

  # Because of
  doCheck = false;

  meta = {
    description = "Pythonic bindings for FFmpeg/Libav";
    homepage =;
    license = licenses.bsd2;
  };
};

example taken from

perl example

ack = buildPerlPackage rec {
  name = "ack-2.16";
  src = fetchurl {
    url = "mirror://cpan/authors/id/P/PE/PETDANCE/${name}.tar.gz";
    sha256 = "0ifbmbfvagfi76i7vjpggs2hrbqqisd14f5zizan6cbdn8dl5z2g";
  };
  outputs = ["out" "man"];
  # use gnused so that the preCheck command passes
  buildInputs = stdenv.lib.optional stdenv.isDarwin gnused;
  propagatedBuildInputs = [ FileNext ];
  meta = with stdenv.lib; {
    description = "A grep-like tool tailored to working with large trees of source code";
    homepage    =;
    license     = licenses.artistic2;
    maintainers = with maintainers; [ lovek323 ];
    platforms   = platforms.unix;
  };
  # tests fails on nixos and hydra because of different purity issues
  doCheck = false;
};

example taken from

go example

terraform_0_9 = generic {
  version = "0.9.11";
  sha256 = "045zcpd4g9c52ynhgh3213p422ahds63mzhmd2iwcmj88g8i1w6x";
  # checks are failing again
  doCheck = false;
};

example taken from

nixos testing

first, let’s have a look at how testing in nixos is done traditionally. these tests use KVM, spawn one or several virtual machines which interact with each other over the network. they are called explicitly: nix-build -A leaps release.nix.

the leaps test:

import ./make-test.nix ({ pkgs,  ... }:

{
  name = "leaps";
  meta = with pkgs.stdenv.lib.maintainers; {
    maintainers = [ qknight ];
  };

  nodes =
    {
      client = { };

      server =
        { services.leaps = {
            enable = true;
            port = 6666;
            path = "/leaps/";
          };
          networking.firewall.enable = false;
        };
    };

  testScript =
    ''
      $client->succeed("${pkgs.curl}/bin/curl http://server:6666/leaps/ | grep -i 'leaps'");
    '';
})

nixos tests are described in the manual nicely. the source of the tests are at

nixcloud testing

at nixcloud we also implement tests as we have them in nixos (described above). some tests can be found here: but, and that is why this posting was written, we also have a different kind of tests, which are similar to those in perl, python and go explained previously.

test usage

we basically extended the nixos module system:

{ config, pkgs, lib, ... } @ args:

{
  options = {
    nixcloud.reverse-proxy = {
      enable = mkEnableOption "reverse-proxy";
    };
  };
  config = {
    nixcloud.tests.wanted = [ ./test.nix ];
  };
}

see complete usage implementation

note: the nixcloud.reverse-proxy module is similar to nixos modules such as services.openssh

if the nixcloud.reverse-proxy module is used and one does a nixos-rebuild switch, it evaluates the ./test.nix during build.


and the implementation of the test is here


  • tests are located near the implementation of the service and not in a completely different directory

  • a user can’t forget to run the test since they are implicit

  • every time a developer changes the implementation, nixcloud.webservices for instance, it is unit tested. this ensures that users don’t forget to run the unit test before they commit.

  • these test builds are cached locally and are executed only once. this is similar to packages from nixpkgs: if a dependency or the source code is changed, the test is also rerun.

  • oh, and hydra can evaluate the tests as well


  • a major drawback is that if you don’t have KVM support it still spawns the tests and the emulation will be very slow.

    this is true for:

    • remote machines which were already virtualized (and don’t support nested virtualization)
    • if you execute nixos inside virtualbox
    • if you are using nix from inside mac os x (darwin) or basically from any linux other than nixos


i would love to see this feature coming to nixos/nixpkgs also! oh and thanks to aszlig!


by qknight at November 28, 2017 12:35 PM

November 18, 2017

Joachim Schiele



on the quest to make nix the no. 1 deployment tool for webservices, we are happy to announce the release of:

  • nixcloud.webservices
  • nixcloud.reverse-proxy

all can be found online at:

git clone

or visit:


what is the nixcloud? paul and i (joachim) see the nixcloud as an extension to nixpkgs focusing on the deployment of webservices. we abstract installation, monitoring, DNS management and most of all ‘state management’.


nixcloud.webservices is an extension to nixpkgs for packaging webservices (apache, nginx) or language specific implementations as go, rust, perl and so on.

nixcloud.webservices.leaps.myservice = {
  enable = true;
  proxyOptions = {
    port   = 50000;
    path   = "/foo";
    domain = "";
  };
};

it comes with so many features and improvements that you are better off reading the documentation here:


this component makes it easy to mix several webservices, like multiple webservers as apache or nginx, into one or several domains. it also abstracts TLS using ACME.

nixcloud.reverse-proxy = {
  enable = true;
  extendEtcHosts = true;
  extraMappings = [
    {
      domain = "";
      path = "/";
      http = {
        mode = "on";
        record = ''
          rewrite ^(.*)$ permanent;
        '';
      };
      https = {
        mode = "on";
        basicAuth."joachim" = "foo";
        record = ''
          rewrite ^(.*)$ permanent;
        '';
      };
    }
  ];
};

manual here:

the idea is to make email deployment accessible to the masses by providing an easy abstraction, especially for webhosting.

configuration example

nixcloud.email = {
  enable = true;
  domains = [ "" ""  ];
  ipAddress = "";
  ip6Address = "afe2::2";
  hostname = "";
  users = [
    # kdkdkdkdkdkd -> feed that into -> doveadm pw -s sha256-crypt
    { name = "js2"; domain = ""; password = "{PLAIN}foobar1234"; aliases = [ "" ]; }
    { name = "paul"; domain = ""; password = "{PLAIN}supersupergeheim"; }
    { name = "catchall"; domain = ""; password = "{PLAIN}foobar1234"; aliases = [ "" ]; }
    #{ name = "js2"; domain = ""; password = "{PLAIN}supersupergeheim"; }
    #{ name = "quotatest"; domain = ""; password = "{PLAIN}supersupergeheim"; quota = "11M"; aliases = [ "" "" ]; }
  ];
};


by qknight at November 18, 2017 12:35 PM

November 07, 2017

Flying Circus

NixOS: The DOs and DON’Ts of nixpkgs overlays

One presentation at NixCon 2017 that especially drew my attention was Nicolas Pierron‘s talk about Nixpkgs overlays (video, slides). I’d like to give a quick summary here for future reference. All the credits go to Nicolas, of course.

What are overlays?

Overlays are the new standard mechanism to customize nixpkgs. They replace constructs like packageOverrides and overridePackages.

How do I use overlays?

Put your overlays into ~/.config/nixpkgs/overlays. All overlays share a basic structure:

self: super: {
  …
}


  • self is the result of the fix point calculation. Use it to access packages which could be modified somewhere else in the overlay stack.
  • super is one overlay down in the stack (and base nixpkgs for the first overlay). Use it to access the package recipes you want to customize, and for library functions.

Good examples

Add default proxy to Google Chrome

self: super: {
  google-chrome = {
    commandLineArgs = … ;
  };
}

From Alexandre Peyroux.

Fudge Nix store location

self: super: {
  nix = super.nix.override {
    storeDir = "${<nix-dir>}/store";
    stateDir = "${<nix-dir>}/var";
  };
}

From Yann Hodique.

Add custom package

self: super: {
  VidyoDesktop = super.callPackage ./pkgs/VidyoDesktop { };
}

From Mozilla.

Bad examples

Recursive attrset

self: super: rec {
  fakeClosure = self.writeText "fake-closure.nix" ''
    …
  '';
  fakeConfig = self.writeText "fake-config.nix" ''
    (import ${fakeClosure} {}).config.nixpkgs.config
    …
  '';
}

Two issues:

  • Use super to access library functions.
  • Overlays should not be recursive. Use self.

Surplus arguments

{ python }:
self: super: {
  "execnet" =
    python.overrideDerivation super."execnet" (old: {
      buildInputs = old.buildInputs ++ [ self."setuptools-scm" ];
    });
}

The issue:

  • Overlays should depend just on self and super in order to be composable.

Awkward nixpkgs import

{ pkgs ? import <nixpkgs> {} }:
let
  projectOverlay = self: super: {
    customPythonPackages =
      (import ./requirements.nix { inherit pkgs; }).packages;
  };
in
import pkgs.path {
  overlays = [ projectOverlay ];
}

Two issues:

  • Unnecessary double-import of nixpkgs. This might break cross-system builds.
  • requirements.nix should use self not pkgs.

Improved version


{ pkgsPath ? <nixpkgs> }:
import pkgsPath {
  overlays = [ (import ./default.nix) ];
}


self: super: {
  customPythonPackages =
    (import ./requirements.nix { inherit self; }).packages;
}

Incorrectly referenced dependencies

self: super:
let inherit (super) callPackage;
in {
  radare2 = callPackage ./pkgs/radare2 {
    inherit (super.gnome2) vte;
    lua = super.lua5;
  };
}

The issue:

  • Other packages should be taken from self not super. This way they
    can be overridden by other overlays.

Overridden attrset

self: super: {
  lib = {
    firefoxVersion = … ;
  };
  latest = {
    firefox-nightly-bin = … ;
    firefox-beta-bin = … ;
    firefox-bin = … ;
    firefox-esr-bin = … ;
  };
}

The issue:

  • Other attributes present in lib and latest from down the overlay stack are lost.

Improved version

Always extend attrsets in overlays:

self: super: {
  lib = (super.lib or {}) // {
    firefoxVersion = … ;
  };
  latest = (super.latest or {}) // {
    …
  };
}


I hope this condensed guide helps you to write better overlays. For in-depth discussion, please go watch Nicolas’ talk and read the Nixpkgs manual. Many thanks to Nicolas for putting this together!

Cover image: ⓒ 2009 studio tdes / Flickr / CC-BY-2.0

by Christian Kauhaus at November 07, 2017 09:44 AM

November 03, 2017

Sander van der Burg

Creating custom object transformations with NiJS and PNDP

In a number of earlier blog posts, I have described two kinds of internal DSLs for Nix -- NiJS is a JavaScript-based internal DSL and PNDP is a PHP-based internal DSL.

These internal DSLs have a variety of application areas. Most of them are simply just experiments, but the most serious application area is code generation.

Using an internal DSL for generation has a number of advantages over string generation that is more commonly used. For example, when composing strings containing Nix expressions, we must make sure that any variable in the host language that we append to a generated expression is properly escaped to prevent code injection attacks.

Furthermore, we also have to take care of the indentation if we want to output Nix expression code that should be readable. Finally, string manipulation itself is not a very intuitive activity as it makes it very hard to read what the generated code would look like.

Translating host language objects to the Nix expression language

A very important feature of both internal DSLs is that they can literally translate some language constructs from the host language (JavaScript or PHP) to the Nix expression because they have (nearly) an identical meaning. For example, the following JavaScript code fragment:

var nijs = require('nijs');

var expr = {
    hello: "Hello",
    name: {
        firstName: "Sander",
        lastName: "van der Burg"
    },
    numbers: [ 1, 2, 3, 4, 5 ]
};

var output = nijs.jsToNix(expr, true);

will output the following Nix expression:

hello = "Hello",
name = {
firstName = "Sander";
lastName = "van der Burg";
numbers = [

In the above example, strings will be translated to strings (and quotes will be escaped if necessary), objects to attribute sets, and the array of numbers to a list of numbers. Furthermore, the generated code is also pretty printed so that attribute set and list members have 2 spaces of indentation.

Similarly, in PHP we can compose the following code fragment to get an identical Nix output:

use PNDP\NixGenerator;

$expr = array(
    "hello" => "Hello",
    "name" => array(
        "firstName" => "Sander",
        "lastName" => "van der Burg"
    ),
    "numbers" => array(1, 2, 3, 4, 5)
);

$output = NixGenerator::phpToNix($expr, true);

The PHP generator uses a number of clever tricks to determine whether an array is associative or sequential -- the former gets translated into a Nix attribute set while the latter gets translated into a list.

There are objects in the Nix expression language for which no equivalent exists in the host language. For example, Nix also allows you to define objects of a 'URL' and 'file' type. Neither JavaScript nor PHP have a direct equivalent. Moreover, it may be desired to generate other kinds of language constructs, such as function declarations and function invocations.

To still generate these kinds of objects, you must compose an abstract syntax tree from objects that inherit from the NixObject prototype or class. For example, we can define a function invocation to fetchurl {} in Nixpkgs as follows in JavaScript:

var expr = new nijs.NixFunInvocation({
    funExpr: new nijs.NixExpression("fetchurl"),
    paramExpr: {
        url: new nijs.NixURL("mirror://gnu/hello/hello-2.10.tar.gz"),
        sha256: "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i"
    }
});

and in PHP as follows:

use PNDP\AST\NixExpression;
use PNDP\AST\NixFunInvocation;
use PNDP\AST\NixURL;

$expr = new NixFunInvocation(new NixExpression("fetchurl"), array(
    "url" => new NixURL("mirror://gnu/hello/hello-2.10.tar.gz"),
    "sha256" => "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i"
));

Both of the objects in the above code fragments translate to the following Nix expression:

fetchurl {
  url = mirror://gnu/hello/hello-2.10.tar.gz;
  sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
}

Transforming custom object structures into Nix expressions

The earlier described use cases are basically one-on-one translations from the host language (JavaScript or PHP) to the guest language (Nix). In some cases, literal translations do not make sense -- for example, it may be possible that we already have an application with an existing data model from which we want to derive deployments that should be carried out with Nix.

In the latest versions of NiJS and PNDP, it is also possible to specify how to transform custom object structures into a Nix expression. This can be done by inheriting from the NixASTNode class or prototype and overriding the toNixAST() method.

For example, we may have a system already providing a representation of a file that should be downloaded from an external source:

function HelloSourceModel() {
    this.src = "mirror://gnu/hello/hello-2.10.tar.gz";
    this.sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
}

The above module defines a constructor function composing an object that refers to the GNU Hello package provided by a GNU mirror site.

A direct translation of an object constructed by the above function to the Nix expression language does not provide anything meaningful -- it can, for example, not be used to let Nix fetch the package from the mirror site.

We can inherit from NixASTNode and implement our own custom toNixAST() function to provide a more meaningful Nix translation:

var nijs = require('nijs');
var inherit = require('nijs/lib/ast/util/inherit.js').inherit;

/* HelloSourceModel inherits from NixASTNode */
inherit(nijs.NixASTNode, HelloSourceModel);

/**
 * @see NixASTNode#toNixAST
 */
HelloSourceModel.prototype.toNixAST = function() {
    return this.args.fetchurl()({
        url: new nijs.NixURL(this.src),
        sha256: this.sha256
    });
};

The toNixAST() function shown above composes an abstract syntax tree (AST) for a function invocation to fetchurl {} in the Nix expression language with the url and sha256 properties as parameters.

An object that inherits from the NixASTNode prototype also indirectly inherits from NixObject. This means that we can directly attach such an object to any other AST object. The generator uses the underlying toNixAST() function to automatically convert it to its AST representation:

var helloSource = new HelloSourceModel();
var output = nijs.jsToNix(helloSource, true);

In the above code fragment, we directly pass the constructed HelloSourceModel object instance to the generator. The output will be the following Nix expression:

fetchurl {
  url = mirror://gnu/hello/hello-2.10.tar.gz;
  sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
}

In some cases, it may not be possible to inherit from NixASTNode, for example, when the object already inherits from another prototype or class that is beyond the user's control.

It is also possible to use the NixASTNode constructor function as an adapter. For example, we can take any object with a toNixAST() function:

var helloSourceWrapper = {
    toNixAST: function() {
        return new nijs.NixFunInvocation({
            funExpr: new nijs.NixExpression("fetchurl"),
            paramExpr: {
                url: new nijs.NixURL(this.src),
                sha256: this.sha256
            }
        });
    }
};

By wrapping the helloSourceWrapper object in the NixASTNode constructor, we can convert it to an object that is an instance of NixASTNode:

new nijs.NixASTNode(helloSourceWrapper)

In PHP, we can change any class into a NixASTNode by implementing the NixASTConvertable interface:

use PNDP\AST\NixASTConvertable;
use PNDP\AST\NixURL;

class HelloSourceModel implements NixASTConvertable
{
    /**
     * @see NixASTConvertable::toNixAST()
     */
    public function toNixAST()
    {
        return $this->args->fetchurl(array(
            "url" => new NixURL($this->src),
            "sha256" => $this->sha256
        ));
    }
}

By passing an object that implements the NixASTConvertable interface to the NixASTNode constructor, it can be converted:

new NixASTNode(new HelloSourceModel())

Motivating use case: the node2nix and composer2nix generators

My main motivation to use custom transformations is to improve the quality of the node2nix and composer2nix generators -- the former converts NPM package configurations to Nix expressions and the latter converts PHP composer package configurations to Nix expressions.

Although NiJS and PNDP provide a number of powerful properties to improve the code generation steps of these tools, e.g. I no longer have to think much about escaping strings or pretty printing, there are still many organizational coding issues left. For example, the code that parses the configurations, fetches the external sources, and generates the code are mixed. As a consequence, the code is very hard to read, update, maintain and to ensure its correctness.

The new transformation facilities allow me to separate concerns much better. For example, both generators now have a data model that reflects the NPM and composer problem domain. For example, I could compose the following (simplified) class diagram for node2nix's problem domain:

A crucial part of node2nix's generator is the package class shown on the top left on the diagram. A package requires zero or more packages as dependencies and may provide zero or more packages in the node_modules/ folder residing in the package's base directory.

For readers not familiar with NPM's dependency management: every package can install its dependencies privately in a node_modules/ folder residing in the same base directory. The CommonJS module system ensures that every file is considered to be a unique module that should not interfere with other modules. Sharing is accomplished by putting a dependency in a node_modules/ folder of an enclosing parent package.

NPM 2.x always installs a package dependency privately unless a parent package exists that can provide a conforming version. NPM 3.x (and later) will also move a package into the node_modules/ folder hierarchy as high as possible to prevent too many layers of nested node_modules/ folders (this is particularly a problem on Windows). The class structure in the above diagram reflects this kind of dependency organisation.

In addition to a package dependency graph, we also need to obtain package metadata and compute their output hashes. NPM packages originate from various kinds of sources, such as the NPM registry, Git repositories, HTTP sites and local directories on the filesystem.

To optimize the process and support sharing of common sources among packages, we can use a source cache that memorizes all unique source references.
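
The sketch below illustrates the idea behind such a cache. The class and method names are made up and do not correspond to node2nix's actual code: a source object is only constructed the first time a particular reference is encountered, and the same object is shared on every subsequent request:

function SourceCache() {
    this.sources = {};
}

// 'reference' could be a registry URL, a Git URL or a local path
SourceCache.prototype.fetchSource = function(reference, constructSource) {
    if(this.sources[reference] === undefined) {
        this.sources[reference] = constructSource(reference); // fetch and memorize it
    }
    return this.sources[reference]; // reuse the shared source object
};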

The Package::resolveDependencies() method sets the generation process in motion -- it will construct the dependency graph replicating NPM's dependency resolution algorithm as faithfully as possible, and resolves all the dependencies' (and transitive dependencies) metadata.

After resolving all dependencies and their metadata, we must generate the output Nix expressions. One Nix expression is copied (the build infrastructure) and two are generated -- a composition expression and a package or collection expression.

We can also compose a class diagram for the generation infrastructure:

In the above class diagram, every generated expression is represented by a class inheriting from NixASTNode. We can also reuse some classes from the domain model as constituents for the generated expressions, by also inheriting from NixASTNode and overriding the toNixAST() method:

  • The source objects can be translated into sub expressions that invoke fetchurl {} and fetchgit {}.
  • The sources cache can be translated into an attribute set exposing all sources that are used as dependencies for packages.
  • A package instance can be converted into a function invocation to nodeenv.buildNodePackage {} that, in addition to configuring build properties, binds the required dependencies to the sources in the sources cache attribute set.

By decomposing the expression into objects and combining the objects' AST representations, we can nicely modularize the generation process.

For composer2nix, we can also compose a class diagram for its domain -- the generation process:

The above class diagram has many similarities, but also some major differences compared to node2nix. composer provides so-called lock files that pinpoint the exact versions of all dependencies and transitive dependencies. As a result, we do not need to replicate composer's dependency resolution algorithm.

Instead, the generation process is driven by the ComposerConfig class that encapsulates the properties of the composer.json and composer.lock files of a package. From a composer configuration, the generator constructs a package object that refers to the package we intend to deploy and populates a source cache with source objects that come from various sources, such as Git, Mercurial and Subversion repositories, Zip files, and directories residing on the local filesystem.

For the generation process, we can adopt a similar strategy that exposes the generated Nix expressions as classes and uses some classes of the domain model as constituents for the generation process:


In this blog post, I have described a new feature for the NiJS and PNDP frameworks, making it possible to implement custom transformations. Some of its benefits are that it allows an existing object model to be reused and concerns in an application can be separated much more conveniently.

These facilities are not only useful for the improvement of the architecture of the node2nix and composer2nix generators -- at the company I work for (Conference Compass), we developed our own domain-specific configuration management tool.

Despite the fact that it uses several tools from the Nix project to carry out deployments, it uses a domain model that is not Nix-specific at all. Instead, it uses terminology and an organization that reflects company processes and systems.

For example, we use a backend-for-frontend organization that provides a backend for each mobile application that we ship. We call these backends configurators. Optionally, every configurator can import data from various external sources that we call channels. The tool's data model reflects this kind of organization, and generates Nix expressions that contain all relevant implementation details, if necessary.

Finally, the fact that I modified node2nix to have a much cleaner architecture has another reason beyond quality improvement. Currently, NPM version 5.x (that comes with Node.js 8.x) is still unsupported. To make it work with Nix, we require a slightly different generation process and a completely different builder environment. The new architecture allows me to reuse the common parts much more conveniently. More details about the new NPM support will follow (hopefully) soon.


I have released new versions of NiJS and PNDP that have the custom transformation facilities included.

Furthermore, I have decided to release new versions for node2nix and composer2nix that use the new generation facilities in their architecture. The improved architecture revealed a very uncommon but nasty bug with bundled dependencies in node2nix, that is now solved.

by Sander van der Burg at November 03, 2017 10:43 PM

October 04, 2017

Flying Circus

Announcing fc-userscan

NixOS manages dependencies in a very strict way—sometimes too strict? Here at Flying Circus, many users prefer to compile custom applications in home directories. They link them against libraries they have previously installed with nix-env. This works well… until something is updated! On the next change anywhere down the dependency chain, libraries get new hashes in the Nix store, the garbage collector removes old versions, and user applications break until recompiled.

In this blog post, I would like to introduce fc-userscan. This little tool scans (home) directories recursively for Nix store references and registers them as per-user roots with the garbage collector. This way, dependencies will be protected even if they cease to be referenced from “official” Nix roots like the current-system profile or a user’s local Nix profile. After registering formerly unmanaged references with fc-userscan, one can fearlessly run updates and garbage collection.

This problem would not be there if everyone would just write Nix expressions for everything. This is, for various reasons, not realistic. Some users find it hard to accommodate with Nix concepts (“I just want to compile my software and don’t care about your arcane Linux distro!”). Others use deploy tools like zc.buildout which are explicitly designed for in-place updates and don’t really distinguish compile time from run time. With fc-userscan, you can get both: Nix expressions for superb reproducibility, and an acceptable degree of reliance for impurely compiled applications.

fc-userscan is published as open source software under a 3-clause BSD license. Get the Rust source code or the latest pre-compiled binaries for x86_64. Pull requests are welcome.

Example: How to get stale Nix store references

Imagine we have Python installed in our Nix user profile. When creating a Python virtual environment, we get unmanaged references into the Nix store:

$ nix-env -iA nixpkgs.python35
$ ~/.nix-profile/bin/pyvenv venv
$ ls -og venv/bin/python3.5
lrwxrwxrwx 1 71 Sep 29 11:19 venv/bin/python3.5 -> /nix/store/8lh4pxhhi4cx00pp1zxpz9pqyy44kjm6-python3-3.5.4/bin/python3.5

The garbage collector does not know anything about this reference. This is OK as long as nothing gets deleted from the Nix store. But eventually channel updates arrive. After a seemingly innocent run of nixos-rebuild or nix-env --upgrade, chances are high that the Python package’s hash has changed. The next run of nix-collect-garbage will trash the Python binary still referenced from our virtualenv.

Although we use Python for illustration, the problem is universal. You can easily run into similar trouble with C programs, for example.

Protecting unmanaged references

We run fc-userscan to detect and register Nix store references:

$ fc-userscan -v venv
fc-userscan: Scouting venv
2 references in /nix/var/nix/gcroots/profiles/per-user/ck/home/ck/venv
Processed 741 files (8 MB read) in 0.011 s
fc-userscan: Finished venv

Now the exact Python interpreter version is registered as a garbage collection root:

$ ls -og /nix/var/nix/gcroots/profiles/per-user/ck/home/ck/venv
lrwxrwxrwx 1 57 Sep 29 11:46 8lh4pxhhi4cx00pp1zxpz9pqyy44kjm6 -> /nix/store/8lh4pxhhi4cx00pp1zxpz9pqyy44kjm6-python3-3.5.4

The Python interpreter and all its dependencies are now protected. The file layout in the per-user directories is organized in such a way that it is possible to re-scan arbitrary directories and always get correct results.

Feature overview

Include/exclude lists

To reduce system load, fc-userscan can be instructed to skip files that are unlikely to contain store references. Specify include/exclude globs from

  1. a default ignore file ~/.userscan-ignore if existent (in gitignore(5) format)
  2. a custom ignore file with --exclude-from (in gitignore(5) format)
  3. the command line with --exclude, --include.

A related option is --quickcheck which causes fc-userscan to stop scanning a file when there is no Nix store reference in the first n bytes of that file.


fc-userscan is able to preserve scan results between runs to avoid re-scanning unchanged files. Simply add --cache FILE. It uses the ctime inode attribute to determine if a file has been changed or not. The cache is stored as compressed messagepack file so that it does not take much space even on large installations.

Unzip on the fly

Some runnable application artefacts are stored as ZIP files and may contain Nix store references. The most prominent instances are compressed Python eggs and JAR files. fc-userscan has support to transparently decompress files that match glob patterns passed with --unzip. Note that scanning inside compressed ZIP archives is significantly slower than scanning regular files.

List mode

It is possible to scan directory hierarchies and just print all found references to stdout instead of registering them with --list. The output format is script friendly.

These options and many more are explained in the fc-userscan(1) man page which ships with the release.

Interaction with nix-collect-garbage

How does fc-userscan interact with nix-collect-garbage now? We at Flying Circus run fc-userscan for all users and start nix-collect-garbage directly afterwards. fc-userscan exits unsuccessfully if there was a problem (e.g., unreadable files). We recommend not to run nix-collect-garbage if fc-userscan found a problem. Otherwise, not all relevant files might have been checked for Nix store references and nix-collect-garbage might delete too much.

Here is an example script which scans all users’ home directories:

while read user home; do
  sudo -u $user -H -- \
    fc-userscan -v 2 \
    --exclude-from /etc/userscan.exclude \
    --cache $home/.cache/fc-userscan.cache \
    --unzip '*.egg' \
    $home || failed=1
done < <(getent passwd | awk -F: '$4 >= 100 { print $1 " " $6 }')

if (( failed )); then
  echo "ERROR: fc-userscan failed (see above)"
  exit 1
fi
nix-collect-garbage --delete-older-than 3d


While in theory it would be better to drive all software installations on a NixOS system via derivations, fc-userscan brings us flexibility to protect impurely installed applications.

Cover image © 2007 by benfrantzdale CC-BY-SA 2.0

by Christian Kauhaus at October 04, 2017 12:32 PM

October 02, 2017

Sander van der Burg

Deploying PHP composer packages with the Nix package manager

In two earlier blog posts, I have described various pieces of my custom web framework that I used to actively develop many years ago. The framework is quite modular -- every concern, such as layout management, data management, the editor, and the gallery, are separated into packages that can be deployed independently, so that web applications only have to include what they actually need.

Although modularity is quite useful for a variety of reasons, the framework did not start out as being modular in the beginning -- when I just started developing web applications in PHP, I did not reuse anything at all. Slowly, I discovered similarities between my projects and started sharing snippets of common functionality between them. Gradually, I learned that keeping these common aspects up to date became a burden. As a result, I developed a "common framework" that I reused among all my PHP projects.

Having a common framework for my web application projects reduced the amount of required maintenance, but introduced a new drawback -- its size kept growing and growing. As a result, many simple web applications that only required a small subset of the framework's functionality still had to embed the entire framework, making them unnecessarily big.

Today, a bit of extra PHP code is not so much of a problem, but around the time I was still actively developing web applications, many shared web hosting providers only offered a small amount of storage capacity, typically just a few megabytes.

To cope with the growing size of the framework, I decided to modularize the code by separating the framework's concerns into packages that can be deployed independently. I "invented" my own conventions to integrate the framework packages into web applications:

  • In the base directory of the web application project, I create a lib/ directory that contains symlinks to the framework packages.
  • In every PHP script that displays a page (typically only index.php), I configure the include path to refer to the packages' content in the lib/ folder, such as:


  • Each PHP module is responsible for loading the desired classes or utility functions from the framework packages. As a result, I ended up writing a substantial amount of require() statements, such as:


After my (approximately) 8 years of absence from the PHP domain, I discovered that a tool has been developed to support convenient construction of modular PHP applications: composer. Composer is heavily inspired by the NPM package manager, which is the de facto package delivery mechanism for Node.js applications.

In the last couple of months (it progresses quite slowly as it is a non-urgent side project), I have decided to get rid of my custom modularity conventions in my framework packages, and to adopt composer instead.

Composer is a useful deployment tool, but its scope is limited to PHP applications only. As frequent readers may probably already know, I use Nix-based solutions to deploy entire software systems (that are also composed of non-PHP packages) from a single declarative specification.

To be able to include PHP composer packages in a Nix deployment process, I have developed a generator named: composer2nix that can be used to generate Nix deployment expressions from composer configuration files.

In this blog post, I will explain the concepts of composer2nix and show how it can be used.

Using composer

Using composer is generally quite straightforward. In the most common usage scenario, there is typically a PHP project (often a web application) that requires a number of dependencies. By changing the current working folder to the project directory, and running:

$ composer install

Composer will obtain all required dependencies and store them in the vendor/ sub directory.

The vendor/ folder follows a very specific organisation:

$ find vendor/ -maxdepth 2 -type d
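A partial, illustrative listing for the composer2nix project could look like this (the exact contents depend on the resolved dependencies):

vendor/
vendor/bin
vendor/composer
vendor/justinrainbow
vendor/justinrainbow/json-schema
vendor/phpdocumentor
vendor/phpdocumentor/phpdocumentor
vendor/seld
vendor/seld/jsonlint
vendor/svanderburg
vendor/svanderburg/pndp
...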

The vendor/ folder structure (mostly) consists of two levels: the outer directory defines the namespace of the packages and the inner directory the package names.

There are a couple of folders deviating from this convention -- most notably, the vendor/composer directory, which is used by composer to track package installations:

$ ls vendor/composer
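Its contents typically consist of the generated class loader and a number of autoload maps, for example:

ClassLoader.php        autoload_namespaces.php  autoload_real.php
LICENSE                autoload_psr4.php        autoload_static.php
autoload_classmap.php  installed.json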

In addition to obtaining packages and storing them in the vendor/ folder, composer also generates autoload scripts (as shown above) that can be used to automatically make code units (typically classes) provided by the packages available for use in the project. Adding the following statement to one of your project's PHP scripts:
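With the conventional project layout, that statement refers to the generated autoload script in the vendor/ folder:

<?php
require_once("vendor/autoload.php");
?>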


suffices to load the functionality exposed by the packages that composer installs.

Composer can be used to install both runtime and development dependencies. Many development dependencies (such as phpunit or phpdocumentor) provide command-line utilities to carry out tasks. Composer packages can also declare which executables they provide. Composer automatically generates symlinks for all provided executables in the: vendor/bin folder:

$ ls -l vendor/bin/
lrwxrwxrwx 1 sander users 29 Sep 26 11:49 jsonlint -> ../seld/jsonlint/bin/jsonlint
lrwxrwxrwx 1 sander users 41 Sep 26 11:49 phpdoc -> ../phpdocumentor/phpdocumentor/bin/phpdoc
lrwxrwxrwx 1 sander users 45 Sep 26 11:49 phpdoc.php -> ../phpdocumentor/phpdocumentor/bin/phpdoc.php
lrwxrwxrwx 1 sander users 34 Sep 26 11:49 pndp-build -> ../svanderburg/pndp/bin/pndp-build
lrwxrwxrwx 1 sander users 46 Sep 26 11:49 validate-json -> ../justinrainbow/json-schema/bin/validate-json

For example, you can run the following command-line instruction from the base directory of a project to generate API documentation:

$ vendor/bin/phpdocumentor -d src -t out

In some cases (although the composer documentation often discourages this), you may want to install end-user packages globally. They can be installed into the global composer configuration directory by running:

$ composer global require phpunit/phpunit

After installing a package globally (and adding: $HOME/.config/composer/vendor/bin directory to the PATH environment variable), we should be able to run:

$ phpunit --help

The composer configuration

The deployment operations that composer carries out are driven by a configuration file named: composer.json. An example of such a configuration file could be:

"name": "svanderburg/composer2nix",
"description": "Generate Nix expressions to build PHP composer packages",
"type": "library",
"license": "MIT",
"authors": [
"name": "Sander van der Burg",
"email": "",
"homepage": ""

"require": {
"svanderburg/pndp": "0.0.1"
"require-dev": {
"phpdocumentor/phpdocumentor": "2.9.x"

"autoload": {
"psr-4": { "Composer2Nix\\": "src/Composer2Nix" }

"bin": [ "bin/composer2nix" ]

The above configuration file declares the following configuration properties:

  • A number of meta attributes, such as the package name, description, license and authors.
  • The package type. The type: library indicates that this project is a library that can be used in another project.
  • The project's runtime (require) and development (require-dev) dependencies. In a dependency object, the keys refer to the package names and the values to version specifications that can be either:
    • A semver compatible version specifier that can be an exact version (e.g. 0.0.1), wildcard (e.g. 1.0.x), or version range (e.g. >= 1.0.0).
    • A version alias that directly (or indirectly) resolves to a branch in the VCS repository of the dependency. For example, the dev-master version specifier refers to the current master branch of the Git repository of the package.
  • The autoloader configuration. In the above example, we configure the autoloader to load all classes belonging to the Composer2Nix namespace, from the src/Composer2Nix sub directory.

By default, composer obtains all packages from the Packagist repository. However, it is also possible to consult other kinds of repositories, such as external HTTP sites or VCS repositories of various kinds (including Git, Mercurial and Subversion).

External repositories can be specified by adding a 'repositories' object to the composer configuration:

"name": "svanderburg/composer2nix",
"description": "Generate Nix expressions to build PHP composer packages",
"type": "library",
"license": "MIT",
"authors": [
"name": "Sander van der Burg",
"email": "",
"homepage": ""
"repositories": [
"type": "vcs",
"url": ""

"require": {
"svanderburg/pndp": "dev-master"
"require-dev": {
"phpdocumentor/phpdocumentor": "2.9.x"

"autoload": {
"psr-4": { "Composer2Nix\\": "src/Composer2Nix" }

"bin": [ "bin/composer2nix" ]

In the above example, we have defined PNDP's GitHub repository as an external repository and changed the version specifier of svanderburg/pndp to use the latest Git master branch.

Composer uses a version resolution strategy that parses composer configuration files and branch names in all repositories to figure out where a version can be obtained from, and takes the first option that matches the dependency specification. Packagist is consulted last, making it possible for the user to override dependencies.

Pinpointing dependency versions

The version specifiers of dependencies in a composer.json configuration file are nominal and have some drawbacks when it comes to reproducibility -- for example, the version specifier: >= 1.0.1 may resolve to version 1.0.2 today and to 1.0.3 tomorrow, making it very difficult to exactly reproduce a deployment elsewhere at a later point in time.

Although direct dependencies can be easily controlled by the user, it is quite difficult to control the version resolution of the transitive dependencies. To cope with this problem, composer generates a lock file (composer.lock) that pinpoints the exact dependency versions (including all transitive dependencies) the first time it is invoked (or when composer update is called):

"_readme": [
"This file locks the dependencies of your project to a known state",
"Read more about it at",
"This file is @generated automatically"
"content-hash": "ca5ed9191c272685068c66b76ed1bae8",
"packages": [
"name": "svanderburg/pndp",
"version": "v0.0.1",
"source": {
"type": "git",
"url": "",
"reference": "99b0904e0f2efb35b8f012892912e0d171e9c2da"
"dist": {
"type": "zip",
"url": "",
"reference": "99b0904e0f2efb35b8f012892912e0d171e9c2da",
"shasum": ""
"bin": [

By bundling the composer.lock file with the package, it becomes possible to reproduce a deployment elsewhere with the exact same package versions.

The Nix package manager

Nix is a package manager whose main purpose is to build all kinds of software packages from source code, such as GNU Autotools, CMake, Perl's MakeMaker, Apache Ant, and Python projects.

Nix's main purpose is not to be a build tool (it can actually also be used for building projects, but this application area is still highly experimental). Instead, Nix manages dependencies and complements existing build tools by providing dedicated build environments that make deployments reliable and reproducible, for example by clearing all environment variables, making files read-only after the package has been built, restricting network access and resetting the files' timestamps to 1.

Most importantly, in these dedicated environments Nix ensures that only the specified dependencies can be found. This may sound inconvenient at first, but this property exists for a good reason: if a package unknowingly depends on another package, then it may work on the machine where it has been built, but fail on another machine because this unknown dependency is missing. By building a package in a pure environment in which all dependencies are known, we eliminate this problem.

To provide stricter purity guarantees, Nix isolates packages by storing them in a so-called "Nix store" (that typically resides in: /nix/store) in which every directory entry corresponds to a package. Every path in the Nix store is prefixed by a hash code, such as:
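/nix/store/5vyssyqvbirdihqrpqhbkq138ax64bjy-gnumake-4.2.1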


The hash is derived from all build-time dependencies to build the package.

Because every package is stored in its own path and variants of packages never share the same name because of the hash prefix, it becomes harder for builds to accidentally succeed because of undeclared dependencies. Dependencies can only be found if the environment has been configured in such a way that the Nix store paths to the packages are known, for example, by configuring environment variables, such as: export PATH=/nix/store/5vyssyqvbirdihqrpqhbkq138ax64bjy-gnumake-4.2.1/bin.

The Nix expression language and build environment abstractions have all kinds of facilities to make the configuration of dependencies convenient.

Integrating composer deployments into Nix builder environments

Invoking composer in a Nix builder environment introduces an additional challenge -- composer is not only a tool that does build management (e.g. it can execute script directives that can carry out arbitrary build steps), but also dependency management. The latter property conflicts with the Nix package manager.

In a Nix builder environment, network access is typically restricted, because it affects reproducibility (although it is still possible to hack around this restriction) -- when downloading a file from an external site it is not known in advance what you will get. An unknown artifact influences the outcome of a package build in unpredictable ways.

Network access in Nix build environments is only permitted in so-called fixed output derivations. For a fixed output derivation, the output hash must be known in advance so that Nix can verify whether we have obtained the artifact we want.

The solution to coping with a conflicting dependency manager is to substitute it -- we must let Nix obtain the dependencies and force the tool to only execute its build management tasks.

We can populate the vendor/ folder ourselves. As explained earlier, the composer.lock file stores the exact versions of pinpointed dependencies including all transitive dependencies. For example, when a project declares svanderburg/pndp version 0.0.1 as a dependency, it may translate to the following entry in the composer.lock file:

"packages": [
"name": "svanderburg/pndp",
"version": "v0.0.1",
"source": {
"type": "git",
"url": "",
"reference": "99b0904e0f2efb35b8f012892912e0d171e9c2da"
"dist": {
"type": "zip",
"url": "",
"reference": "99b0904e0f2efb35b8f012892912e0d171e9c2da",
"shasum": ""

As can be seen in the code fragment above, the dependency translates to two kinds of pinpointed source objects -- a source reference to a specific revision in a Git repository and a dist reference to a zipball containing a snapshot of the given Git revision.

The reason why every dependency translates to two kinds of objects is that composer supports two kinds of installation modes: source (to obtain a dependency directly from a VCS) and dist (to obtain a dependency from a zipball).

We can translate the 'dist' reference into the following Nix function invocation:

"svanderburg/pndp" = {
targetDir = "";
src = composerEnv.buildZipPackage {
name = "svanderburg-pndp-99b0904e0f2efb35b8f012892912e0d171e9c2da";
src = fetchurl {
url =;
sha256 = "19l7i7adp76bjf32x9a2ykm0r5cgcmi4wf4cm4127miy3yhs0n4y";

and the 'source' reference to the following Nix function invocation:

"svanderburg/pndp" = {
targetDir = "";
src = fetchgit {
name = "svanderburg-pndp-99b0904e0f2efb35b8f012892912e0d171e9c2da";
url = "";
rev = "99b0904e0f2efb35b8f012892912e0d171e9c2da";
sha256 = "15i311dc0123v3ppa69f49ssnlyzizaafzxxr50crdfrm8g6i4kh";

(As a sidenote: we need the targetDir property to provide compatibility with the deprecated PSR-0 autoloading standard. Old autoload packages can be stored in a sub folder of a package residing in the vendor/ structure.)

To generate the above function invocations, we need more than just the properties provided by the composer.lock file. Since download functions in Nix are fixed output derivations, we must compute the output hashes of the downloads by invoking a Nix prefetch script, such as nix-prefetch-url or nix-prefetch-git. The composer2nix generator will automatically invoke the appropriate prefetch script to augment the generated expressions with output hashes.
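To obtain such a hash manually, one could run the corresponding prefetch tool; for example (the repository URL is a placeholder):

$ nix-prefetch-git <repository-url> 99b0904e0f2efb35b8f012892912e0d171e9c2da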

To ensure maximum compatibility with composer's behaviour, the dependencies obtained by Nix must be copied into the vendor/ folder. In theory, symlinking would be more space efficient, but experiments have shown that some packages (such as phpunit) may attempt to load the project's autoload script, e.g. by invoking:
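The problematic pattern is a relative require that climbs out of the package directory to reach the project's autoload script, roughly:

<?php
// From a file inside vendor/<namespace>/<package>/, two levels up
// normally resolves to the vendor/ folder containing autoload.php.
require_once(__DIR__ . "/../../autoload.php");
?>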


The above require invocation does not work if the dependency is a symlink -- the require path resolves to a path in the Nix store (e.g. /nix/store/...). The parent's parent path corresponds to /nix where no autoload script is stored. (As a sidenote: I have decided to still provide symlinking as an option for deployment scenarios where this is not an issue).

After some experimentation, I discovered that composer uses the following file to track which packages have been installed: vendor/composer/installed.json. Its contents appear to be quite similar to the composer.lock file:

"name": "svanderburg/pndp",
"version": "v0.0.1",
"version_normalized": "",
"source": {
"type": "git",
"url": "",
"reference": "99b0904e0f2efb35b8f012892912e0d171e9c2da"
"dist": {
"type": "zip",
"url": "",
"reference": "99b0904e0f2efb35b8f012892912e0d171e9c2da",
"shasum": ""

Reconstructing the above file can be done by merging the contents of the packages and packages-dev objects in the composer.lock file.
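A minimal sketch of such a reconstruction (not necessarily how composer2nix implements it) could look like this:

<?php
// Merge the pinpointed runtime and development packages from composer.lock
// and write the result to the file composer uses to track installations.
$lock = json_decode(file_get_contents("composer.lock"), true);
$installed = array_merge($lock["packages"], $lock["packages-dev"]);
file_put_contents("vendor/composer/installed.json",
    json_encode($installed, JSON_PRETTY_PRINT));
?>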

Another missing piece in the puzzle is the autoload scripts. We can force composer to dump the autoload script, by running:

$ composer dump-autoload --optimize

The above command generates an optimized autoloader script. A non-optimized autoload script dynamically inspects the contents of the package folders to load modules. This is convenient in the development stage of a project, in which the files continuously change, but in production environments this introduces quite a bit of load time overhead.

Since packages in the Nix store can never change after they have been built, it makes no sense to generate a non-optimized autoloader script.

Finally, the last remaining practical issue is PHP packages that provide command-line utilities. Most executables have the following shebang line:

#!/usr/bin/env php

To ensure that these CLI tools work in Nix builder environments, the above shebang must be substituted with a reference to the PHP executable that resides in the Nix store.
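After substitution, such a shebang refers to the PHP interpreter in the Nix store, for example (the hash and version are illustrative):

#!/nix/store/<hash>-php-7.1.11/bin/php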

After carrying out the above described steps, running the following command:

$ composer install --optimize-autoloader

is just a formality -- it will not download or change anything.

Use cases

composer2nix has a variety of use cases. The most obvious one is to use it to package a web application project with Nix instead of composer. Running the following command generates Nix expressions from the composer configuration files:

$ composer2nix

By running the following command, we can use Nix to obtain the dependencies and generate a package with a vendor/ folder:

$ nix-build
$ ls result/
index.php vendor/

In addition to web applications, we can also deploy command-line utility projects implemented in PHP. For these kinds of projects, it makes more sense to generate a bin/ sub folder in which the executables can be found.

For example, for the composer2nix project, we can generate a CLI-specific expression by adding the --executable parameter:

$ composer2nix --executable

We can install the composer2nix executable in our Nix profile by running:

$ nix-env -f default.nix -i

and then invoke composer2nix as follows:

$ composer2nix --help

We can also deploy third party command-line utilities directly from the Packagist repository:

$ composer2nix -p phpunit/phpunit
$ nix-env -f default.nix -iA phpunit-phpunit
$ phpunit --version

The most powerful application is not the integration with Nix itself, but the integration with other Nix projects. For example, we can define a NixOS configuration running an Apache HTTP server instance with PHP and our example web application:

{pkgs, config, ...}:

let
  myexampleapp = import /home/sander/myexampleapp {
    inherit pkgs;
  };
in
{
  services.httpd = {
    enable = true;
    adminAddr = "admin@localhost";
    extraModules = [
      { name = "php7"; path = "${pkgs.php}/modules/"; }
    ];
    documentRoot = myexampleapp;
  };
}


We can deploy the above NixOS configuration as follows:

$ nixos-rebuild switch

By running only one simple command-line instruction, we have a running system with the Apache webserver serving our web application.


In addition to composer2nix, I have also been responsible for developing node2nix, a tool that generates Nix expressions from NPM package configurations. Because composer is heavily inspired by NPM, we see many similarities in the architecture of both generators. For example, both generate the same kinds of expressions (a builder environment, a packages expression and a composition expression), have a similar separation of concerns, and both use an internal DSL for generating Nix expressions (NiJS and PNDP).

There are also a number of conceptual differences -- dependencies in NPM can be private to a package or shared among multiple packages. In composer, all dependencies in a project are shared.

The reason why NPM's dependency management is more powerful is that Node.js uses the CommonJS module system. CommonJS considers each file to be a unique module. This, for example, makes it possible for one module to load a version of a package from a certain filesystem location and another version of the same package from another filesystem location within the same project.

By contrast, in PHP, isolation is accomplished by the namespace declarations in each file. Namespaces cannot be dynamically altered in such a way that multiple versions of a package can safely coexist in one project. Furthermore, the vendor/ directory structure only allows one variant of a package to be stored.

The fact that composer's dependency management is less powerful makes constructing a generator much more straightforward than it is for NPM.

Another feature that composer has supported for quite some time, and that NPM has only gained very recently, is pinpointing/locking dependencies. When generating Nix expressions from NPM package configurations, we must replicate NPM's dependency resolution algorithm. In composer, we can simply take whatever the composer.lock file provides. The lock file saves us from having to replicate the dependency lookup process, making the generation process considerably easier.


My implementation is not the first attempt that tries to integrate composer with Nix. After a few days of developing, I discovered another attempt that seems to be in a very early development stage. I did not try or use this version.


composer2nix can be obtained from Packagist and my GitHub page.

Despite the fact that I did quite a bit of research, composer2nix should still be considered a prototype. One of its known limitations is that it does not support Fossil repositories yet.

by Sander van der Burg at October 02, 2017 10:15 PM