NixOS Planet

June 25, 2020

Tweag I/O

Nix Flakes, Part 2: Evaluation caching

How Nix flakes enable caching of evaluation results of Nix expressions.

June 25, 2020 12:00 AM

Automatic Resource Optimization

As of today, the service will automatically select resources (CPU count and memory amount) for builds submitted to it. Based on historic build data, it calculates a resource allocation that makes your build as performant as possible while wasting minimal CPU time. This means users get faster and cheaper builds, and it also takes away the user's burden of figuring out what resource settings to use for each individual build.

Previously, all builds were assigned 4 CPUs unless the user configured resource selection differently. However, configuring different resource settings for individual builds was difficult, since Nix has no notion of such settings. Additionally, it is really tricky to know whether a build will gain anything from being allocated many CPUs, or whether the extra CPUs just make the build more expensive. It generally requires the user to try out the build with different settings, which is time-consuming for a single build and almost insurmountable for a large set of builds with different characteristics.

Now, each individual build will be analyzed and can be assigned between 1 and 16 CPUs, depending on how well the build utilizes multiple CPUs. The memory allocation will be adapted to minimize the amount of unused memory.

The automatic resource optimization has been tested both internally and by a selected number of beta users, and the results have been very positive so far. We’re happy to make this feature available to all users, since it aligns perfectly with the service’s core idea of being simple, cost-effective and performant.

How Does it Work?

The automatic resource optimization works in two steps:

  1. When a Nix derivation is submitted, we look for similar derivations that have been built before. A heuristic approach is used: derivations are compared based on package names and version numbers. This approach can be improved in the future by looking at more parts of the derivations, such as dependencies and build scripts.

  2. A number of the most recent, most similar derivations are selected. We then analyze the build data of those derivations. Since we have developed a secure sandbox specifically for running Nix builds, we’re also able to collect a lot of data about the builds. One metric that is collected is CPU utilization, and that lets us make predictions about how well a build would scale, performance-wise, if it was given more CPUs.

    We also look at metrics about the historic memory usage, and make sure the new build is allocated enough memory.
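The two steps above can be sketched in a few lines of Python. This is purely an illustrative reconstruction of the idea, not the service's actual implementation: the function names, the 70% utilization threshold, and the shape of the historical data are all invented.

```python
def parse_name(drv_name):
    """Split a derivation name like 'hello-2.10' into ('hello', '2.10')."""
    base, _, version = drv_name.rpartition("-")
    return (base, version) if base else (drv_name, "")

def select_cpus(history, new_drv, max_cpus=16):
    """history: list of (drv_name, cpus_used, avg_cpu_utilization) tuples
    from previous builds. Returns a CPU count for the new derivation."""
    pkg, _ = parse_name(new_drv)
    # Step 1: find similar derivations by comparing package names.
    similar = [h for h in history if parse_name(h[0])[0] == pkg]
    if not similar:
        return 4  # fall back to the old fixed default
    # Step 2: pick the highest CPU count whose historical utilization
    # stayed high -- allocating more CPUs would mostly be wasted.
    good = [cpus for (_, cpus, util) in similar if util >= 0.7]
    return min(max(good, default=1), max_cpus)

history = [("hello-2.9", 2, 0.9), ("hello-2.9", 4, 0.5)]
print(select_cpus(history, "hello-2.10"))  # -> 2
```

The same shape of analysis applies to memory: take the peak memory usage of similar historical builds and allocate just enough headroom above it.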


June 18, 2020

Tweag I/O

Long-term reproducibility with Nix and Software Heritage

How Nix is collaborating with Software Heritage for long-term software reproducibility.

June 18, 2020 12:00 AM

June 17, 2020


Windows-on-NixOS, part 2: Make it go fast!

This is part 2 of a series of blog posts explaining how we took an existing Windows installation on hardware and moved it into a VM running on top of NixOS. Previously, we discussed how we performed the actual storage migration. In this post, we'll cover the various performance optimisations we tried, what worked, and what didn't work. GPU passthrough: since the machine is, amongst other things, used for gaming, graphics performance is critical.

June 17, 2020 09:00 AM

June 11, 2020

Sander van der Burg

Using Disnix as a simple and minimalistic dependency-based process manager

In my previous blog post, I demonstrated that I can deploy an entire service-oriented system locally with Disnix, without needing to obtain any external physical or virtual machines (or even Linux containers).

The fact that I could do this with relative ease is a benefit of my experimental process manager-agnostic deployment framework, developed earlier, which allows you to target a variety of process management solutions with the same declarative deployment specifications.

Most notably, the fact that the framework can work with processes that daemonize, and can let foreground processes automatically daemonize, makes it very convenient to do local unprivileged user deployments.

To refresh your memory: a process that daemonizes spawns another process that keeps running in the background, while the invoking process terminates once initialization is done. Since there is no way for the caller to know the PID of the daemon process, daemons typically follow the convention of writing a PID file (containing the daemon's process ID) to disk, so that the daemon can eventually be reliably terminated.
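The PID file convention can be illustrated with a minimal shell sketch. The background `sleep` stands in for a real daemon process, and the PID file path is invented for the example:

```shell
#!/bin/sh
# Minimal sketch of the PID file convention: spawn a background
# process, record its PID, and later terminate it via the PID file.
pidFile=/tmp/mydaemon.pid

sleep 60 &                  # stand-in for a spawned daemon process
echo $! > "$pidFile"        # the convention: persist the daemon's PID

# Later, a caller that only knows the PID file can reliably stop it:
kill "$(cat "$pidFile")"
rm -f "$pidFile"
echo "daemon terminated"
```

A real daemon writes the PID file itself from inside the spawned process; the sketch only shows why the file is needed, namely that `$!` is unavailable to any later caller.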

In addition to spawning a daemon process that remains in the background, services should also implement a number of steps to make themselves well-behaved, such as resetting signal handlers, clearing privacy-sensitive environment variables, and dropping privileges.

In earlier blog posts, I argued that managing foreground processes with a process manager is typically more reliable (e.g. a PID of a foreground process is always known to be right).

On the other hand, processes that daemonize also have certain advantages:

  • They are self contained -- they do not rely on any external services to operate. This makes it very easy to run a collection of processes for local experimentation.
  • They have a standard means to notify the caller that the service is ready: by convention, the executable that spawns the daemon process only terminates when the daemon has been successfully initialized. By contrast, foreground processes that are managed by systemd should invoke the non-standard sd_notify() function to notify systemd that they are ready.

Although these concepts are nice, properly daemonizing a process is the responsibility of the service implementer -- as a consequence, there is no guarantee that all services properly implement all the steps that make a daemon well-behaved.

Since the management of daemons is straightforward and self-contained, since the Nix expression language provides all kinds of advantages over data-oriented configuration languages (e.g. JSON or YAML), and since Disnix has a flexible deployment model that works with a dependency graph and a plugin system that can activate and deactivate all kinds of components, I realized that I could integrate these facilities to make my own simple dependency-based process manager.

In this blog post, I will describe how this process management approach works.

Specifying a process configuration

A simple Nix expression capturing a daemon deployment configuration might look as follows:

{writeTextFile, mydaemon}:

writeTextFile {
  name = "mydaemon";
  text = ''
    process=${mydaemon}/bin/mydaemon
    pidFile=/var/run/mydaemon.pid
  '';
  destination = "/etc/dysnomia/process";
}

The above Nix expression generates a textual configuration file:

  • The process field specifies the path to the executable to start (which in turn spawns a daemon process that keeps running in the background).
  • The pidFile field indicates the location of the PID file containing the process ID of the daemon process, so that it can be reliably terminated.

Most common system services (e.g. the Apache HTTP server, MySQL and PostgreSQL) can daemonize on their own and follow the same conventions. As a result, the deployment system can save you some configuration work by providing reasonable default values:

  • If no pidFile is provided, then the deployment system assumes that the daemon generates a PID file with the same name as the executable, residing in the directory commonly used for storing PID files: /var/run.
  • If a package provides only a single executable in the bin/ sub folder, then it is also not required to specify a process.

The fact that the configuration system provides reasonable defaults means that for trivial services we do not have to specify any configuration properties at all -- simply providing a single executable in the package's bin/ sub folder suffices.

Do these simple configuration facilities really suffice to manage all kinds of system services? The answer is most likely no, because we may also want to manage processes that cannot daemonize on their own, or we may need to initialize some state first before the service can be used.

To provide these additional facilities, we can create a wrapper script around the executable and refer to it in the process field of the deployment specification.

The following Nix expression generates a deployment configuration for a service that requires state and only runs as a foreground process:

{stdenv, writeTextFile, writeScript, daemon, myForegroundService}:

let
  myForegroundServiceWrapper = writeScript {
    name = "myforegroundservice-wrapper";
    text = ''
      #! ${stdenv.shell} -e

      mkdir -p /var/lib/myservice
      # hypothetical PID file path and executable name
      exec ${daemon}/bin/daemon -U -F /var/run/mydaemon.pid -- \
        ${myForegroundService}/bin/myforegroundservice
    '';
  };
in
writeTextFile {
  name = "mydaemon";
  text = ''
    process=${myForegroundServiceWrapper}
    pidFile=/var/run/mydaemon.pid
  '';
  destination = "/etc/dysnomia/process";
}

As you may observe by looking at the Nix expression shown above, the Nix expression generates a wrapper script that does the following:

  • First, it creates the required state directory: /var/lib/myservice so that the service can work properly.
  • Then it invokes libslack's daemon command to automatically daemonize the service. The daemon command will automatically store a PID file containing the daemon's process ID, so that the configuration system knows how to terminate it. The value of the -F parameter passed to the daemon executable and the pidFile configuration property are the same.

Typically, in deployment systems that use a data-driven configuration language (such as YAML or JSON) obtaining a wrapped executable is a burden, but in the Nix expression language this is quite convenient -- the language allows you to automatically build packages and other static artifacts such as configuration files and scripts, and pass their corresponding Nix store paths as parameters to configuration files.

The combination of wrapper scripts and a simple configuration file suffices to manage all kinds of services, but it is fairly low-level -- to automate the deployment process of a system service, you basically need to re-implement the same kinds of configuration properties all over again.

In the Nix process management framework, I have developed a high-level abstraction function for creating managed processes that can target all kinds of process managers:

{createManagedProcess, runtimeDir}:
{port}:

let
  webapp = import ../../webapp;
in
createManagedProcess rec {
  name = "webapp";
  description = "Simple web application";

  # This expression can both run in foreground or daemon mode.
  # The process manager can pick which mode it prefers.
  process = "${webapp}/bin/webapp";
  daemonArgs = [ "-D" ];

  environment = {
    PORT = port;
    PID_FILE = "${runtimeDir}/${name}.pid";
  };
}

The above Nix expression is a constructor function that generates a configuration for a web application process (with an embedded HTTP server) that returns a static HTML page.

The createManagedProcess abstraction function can be used to generate configuration artifacts for systemd, supervisord, and launchd, as well as various kinds of scripts, such as sysvinit scripts and BSD rc scripts.

I can also easily adjust the generator infrastructure to generate the configuration files shown earlier (capturing the path of an executable and a PID file) with a wrapper script.

Managing daemons with Disnix

As explained in earlier blog posts about Disnix, services in a Disnix deployment model are abstract representations of basically any kind of deployment unit.

Every service is annotated with a type field. Disnix consults a plugin system named Dysnomia to invoke the corresponding plugin that can manage the lifecycle of that service, e.g. by activating or deactivating it.

Implementing a Dysnomia module for directly managing daemons is quite straightforward -- as an activation step, I just have to start the process defined in the configuration file (or the single executable that resides in the bin/ sub folder of the package).

As a deactivation step (whose purpose is to stop a process), I simply need to send a TERM signal to the PID in the PID file, by running:

$ kill $(cat $pidFile)
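The activation/deactivation pair can be sketched as a small dispatch script. This is a hypothetical illustration of the idea, not the real Dysnomia module interface (which supports more activities and receives the service configuration as arguments); the PID file path and the background `sleep` are invented stand-ins:

```shell
#!/bin/sh
# Sketch of an activity-dispatching module: activate starts the process
# and records its PID; deactivate kills the PID from the PID file.
pidFile=/tmp/sketch-service.pid

process_module() {
  case "$1" in
    activate)
      sleep 60 &                 # stand-in for the configured process
      echo $! > "$pidFile"
      echo "activated"
      ;;
    deactivate)
      kill "$(cat "$pidFile")"   # send TERM to the recorded PID
      rm -f "$pidFile"
      echo "deactivated"
      ;;
  esac
}

process_module activate
process_module deactivate
```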

Translation to a Disnix deployment specification

The last remaining bits of the puzzle are process dependency management and the translation to a Disnix services model, so that Disnix can carry out the deployment.

Deployments managed by the Nix process management framework are driven by so-called processes models that capture the properties of running process instances, such as:

{ pkgs ? import <nixpkgs> { inherit system; }
, system ? builtins.currentSystem
, stateDir ? "/var"
, runtimeDir ? "${stateDir}/run"
, logDir ? "${stateDir}/log"
, cacheDir ? "${stateDir}/cache"
, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")
, forceDisableUserChange ? false
, processManager ? "disnix"
}:

let
  constructors = import ./constructors.nix {
    inherit pkgs stateDir runtimeDir logDir tmpDir forceDisableUserChange processManager;
  };
in
rec {
  webapp = rec {
    port = 5000;
    dnsName = "webapp.local";

    pkg = constructors.webapp {
      inherit port;
    };
  };

  nginxReverseProxy = rec {
    port = 8080;

    pkg = constructors.nginxReverseProxyHostBased {
      webapps = [ webapp ];
      inherit port;
    } {};
  };
}

The above Nix expression is a simple example of a processes model defining two running processes:

  • The webapp process is the web application process described earlier that runs an embedded HTTP server and serves a static HTML page.
  • The nginxReverseProxy is an Nginx web server that acts as a reverse proxy for the webapp process. To make this service work properly, it needs to be activated after the webapp process. To ensure the right activation order, webapp is passed as a process dependency to the nginxReverseProxyHostBased constructor function.

As explained in previous blog posts, Disnix deployments are driven by three kinds of deployment specifications: a services model that captures the service components of which a system consists, an infrastructure model that captures all available target machines and their configuration properties and a distribution model that maps services in the services model to machines in the infrastructure model.

The processes model and Disnix services model are quite similar -- the latter is actually a superset of the processes model.

We can translate process instances to Disnix services in a straight forward manner. For example, the nginxReverseProxy process can be translated into the following Disnix service configuration:

nginxReverseProxy = rec {
  name = "nginxReverseProxy";
  port = 8080;

  pkg = constructors.nginxReverseProxyHostBased {
    webapps = [ webapp ];
    inherit port;
  } {};

  activatesAfter = {
    inherit webapp;
  };

  type = "process";
};

In the above specification, the process configuration has been augmented with the following properties:

  • A name property because this is a mandatory field for every service.
  • In the process management framework, all process instances are managed by the same process manager, but in Disnix services can have all kinds of shapes and forms and require a plugin to manage their life-cycles.

    To allow Disnix to manage daemons, we specify the type property to refer to our process Dysnomia module that starts and terminates a daemon from a simple textual specification.
  • The process dependencies are translated to Disnix inter-dependencies by using the activatesAfter property.

    In Disnix, inter-dependency parameters serve two purposes -- they provide the inter-dependent services with configuration parameters and they ensure the correct activation ordering.

    The activatesAfter parameter disregards the first purpose (propagating configuration parameters), because we are already using the process management framework's own convention for propagating process dependencies.

A services model alone does not suffice for Disnix to carry out the deployment of processes. Since we are only interested in local deployment, we can just provide an infrastructure model with only a localhost target and a distribution model that maps all services to localhost.

To accomplish this, we can use the same principles for local deployments described in the previous blog post.

An example deployment scenario

I have added a new tool called nixproc-disnix-switch to the Nix process management framework that automatically converts processes models into Disnix deployment models and invokes Disnix to locally deploy a system.

The following command will carry out the complete deployment of our webapp example system, shown earlier, using Disnix as a simple dependency-based process manager:

$ nixproc-disnix-switch --state-dir /home/sander/var \
--force-disable-user-change processes.nix

In addition to using Disnix for deploying processes, we can also use its other features. For example, another application of Disnix I typically find useful is the deployment visualization tool.

We can also use Disnix to generate a DOT graph from the deployment architecture of the currently deployed system and generate an image from it:

$ disnix-visualize > out.dot
$ dot -Tpng out.dot > out.png

Resulting in the following diagram:

In the first blog post that I wrote about the Nix process management framework (in which I explored a functional discipline using sysvinit-scripts as a basis), I was using hand-drawn diagrams to illustrate deployments.

With the Disnix backend, I can use Disnix's visualization tool to automatically generate these diagrams.


In this blog post, I have shown that by implementing a few very simple concepts, we can use Disnix as a process management backend for the experimental Nix-based process management framework.

Although it was fun to develop a simple process management solution, my goal is not to compete with existing process management solutions (such as systemd, launchd or supervisord) -- this solution is primarily designed for simple use cases and local experimentation.

For production deployments, you probably still want to use a more sophisticated solution. For example, in production scenarios you also want to check the status of running processes and send them reload instructions. These are features that the Disnix backend does not support.

The Nix process management framework supports a variety of process managers, but none of them can be universally used on all platforms that Disnix can run on. For example, the sysvinit-script module works conveniently for local deployments but is restricted to Linux only. Likewise, the bsdrc-script module only works on FreeBSD (and theoretically on NetBSD and OpenBSD). supervisord works on most UNIX-like systems, but is not self-contained -- processes rely on the availability of the supervisord service to run.

This Disnix-based process management solution is simple and portable to all UNIX-like systems that Disnix has been tested on.

The process module described in this blog post is a replacement for the process module that already exists in the current release of Dysnomia. The reason why I want it to be replaced is that Dysnomia now provides better alternatives to the old process module.

For example, when it is desired to have your process managed by systemd, then the new systemd-unit module should be used, which is more reliable, supports many more features, and has a simpler implementation.

Furthermore, I made a couple of mistakes in the past. The old process module was originally implemented as a simple module that would start a foreground process in the background, by using the nohup command. At the time I developed that module, I did not know much about developing daemons, nor about the additional steps daemons need to carry out to make themselves well-behaving.

nohup is not a proper solution for daemonizing foreground processes, such as critical system services -- a process might inherit privacy-sensitive environment variables, it does not change the current working directory to the root folder (keeping external drives mounted), and it could behave unpredictably if signal handlers have been changed from their default behaviour.

At some point, I came to believe that it is more reliable to use a process manager to manage the lifecycle of a process, and adjusted the process module to do that. Originally I used Upstart for this purpose, and later I switched to systemd, with sysvinit scripts (and the direct approach with nohup) as alternative implementations.

Basically, the process module provided three kinds of implementations, none of which offered an optimal deployment experience.

I made a similar mistake with Dysnomia's wrapper module. Originally, its only purpose was to delegate the execution of deployment activities to a wrapper script included with the component that needs to be deployed. Because I was using this script mostly to deploy daemons, I have also adjusted the wrapper module to use an external process manager to manage the lifecycle of the daemon that the wrapper script might spawn.

Because of these mistakes and poor separation of functionality, I have decided to deprecate the old process and wrapper modules. Since they are frequently used and I do not want to break compatibility with old deployments, they can still be used if Dysnomia is configured in legacy mode, which is the default setting for the time being.

When using the old modules, Dysnomia will display a warning message explaining that you should migrate to the better alternatives.


The process Dysnomia module described in this blog post is part of the current development version of Dysnomia and will become available in the next release.

The Nix process management framework (which is still a highly experimental prototype) includes the disnix backend (described in this blog post), allowing you to automatically translate a processes model into Disnix deployment models and use Disnix to deploy a system.

by Sander van der Burg ( at June 11, 2020 06:15 PM

May 26, 2020

Sander van der Burg

Deploying heterogeneous service-oriented systems locally with Disnix

In the previous blog post, I have shown a new useful application area built on top of the combination of my experimental Nix-based process management framework and Disnix.

Both of these underlying solutions have a number of similarities -- as their names obviously suggest, they both strongly depend on the Nix package manager to deploy all their package dependencies and static configuration artifacts, such as configuration files.

Furthermore, they are both driven by models written in the Nix expression language to automate the deployment processes of entire systems.

These models are built on a number of simple conventions that are frequently used in the Nix packages repository:

  • All units of which a system consists are defined as Nix expressions declaring a function. Each function parameter refers to a dependency or configuration property required to construct the unit from its sources.
  • To compose a particular variant of a unit, we must invoke the function that builds and configures the unit with parameters providing the dependencies and configuration properties that the unit needs.
  • To make all units conveniently accessible from a single location, the content of the configured units is typically blended into a symlink tree called a Nix profile.
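The first two conventions above can be sketched with a small hypothetical unit expression; the package name, URL, hash, and dependency are all invented for illustration:

```nix
# A unit is a function whose parameters are the dependencies and
# configuration properties needed to construct it from source.
{stdenv, fetchurl, openssl}:

stdenv.mkDerivation {
  name = "myunit-1.0";
  src = fetchurl {
    url = "http://example.org/myunit-1.0.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";  # placeholder
  };
  buildInputs = [ openssl ];  # dependency supplied via the parameter
}
```

A particular variant of the unit is then composed by invoking this function with concrete arguments -- the pattern that Nixpkgs' callPackage convention automates.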

Besides these commonalities, their main difference is that the process management framework is specifically designed as a solution for systems that are composed out of running processes (i.e. daemons in UNIX terminology).

This framework makes it possible to construct multiple instances of running processes, isolate their resources (by avoiding conflicting resource configuration properties), and manage running process with a variety of process management solutions, such as sysvinit scripts, BSD rc scripts, systemd, launchd and supervisord.

The process management framework is quite useful for single machine deployments and local experimentation, but it does not do any distributed deployment or heterogeneous service deployment -- it cannot (at least not conveniently) deploy units that are not daemons, such as databases, Java web applications deployed to a Servlet container, or PHP applications deployed to a PHP-enabled web server.

Disnix is a solution to automate the deployment processes of service-oriented systems -- distributed systems that are composed of components, using a variety of technologies, into a network of machines.

To accomplish full automation, Disnix integrates and combines a number of activities and tools, such as Nix for package management and Dysnomia for state management (Dysnomia takes care of the activation, deactivation steps for services, and can optionally manage snapshots and restores of state). Dysnomia provides a plugin system that makes it possible to manage a variety of component types, including processes and databases.

Disnix and Dysnomia can also include the features of the Nix process management framework for the deployment of services that are running processes, if desired.

The scope of Disnix is quite broad in comparison to the process management framework, but it can also be used to automate all kinds of sub problems. For example, it can also be used as a remote package deployment solution to build and deploy packages in a network of heterogeneous machines (e.g. Linux and macOS).

After comparing the properties of both deployment solutions, I have identified another interesting sub use case for Disnix -- deploying heterogeneous service-oriented systems (that are composed out of components using a variety of technologies) locally for experimentation purposes.

In this blog post, I will describe how Disnix can be used for local deployments.

Motivating example: deploying a Java-based web application and web service system

One of the examples I have shown in the previous blog post is an over-engineered Java-based web application and web service system whose only purpose is to display the string: "Hello world!".

The "Hello" string is returned by the HelloService and consumed by another service called HelloWorldService that composes the sentence "Hello world!" from the first message. The HelloWorld web application is the front-end responsible for displaying the sentence to the end user.

When deploying the system to a single target machine, it could have the following deployment architecture:

In the architecture diagram shown above, ovals denote services, arrows denote inter-dependency relationships (requiring that a service is activated before another), the dark grey boxes denote container environments, and the light grey box denotes a machine (there is only one machine in the above example).

As you may notice, only one service in the diagram shown above is a daemon, namely Apache Tomcat (simpleAppservingTomcat) that can be managed by the experimental Nix process management framework.

The remainder of the services have a different kind of form -- the web application front-end (HelloWorld) is a Java web application that is embedded in Catalina, the Servlet container that comes with Apache Tomcat. The web services are Axis2 archives that are deployed to the Axis2 container (that in turn is a web application managed by Apache Tomcat).

In the previous blog post, I have shown that we can deploy and distribute these services over a small network of machines.

It is also possible to completely deploy this system locally, without any external physical or virtual machines, and network connectivity.

Configuring the client interface for local deployment

To execute deployment tasks remotely, Disnix invokes an external process that is called a client interface. By default, Disnix uses the disnix-ssh-client that remotely executes commands via SSH and transfers data via SCP.

It is also possible to use alternative client interfaces so that different communication protocols and methods can be used. For example, there is also an external package that provides a SOAP client disnix-soap-client and a NixOps client (disnix-nixops-client).

Communication with a local Disnix service instance can also be done with a client interface. For example, configuring the following environment variable:

$ export DISNIX_CLIENT_INTERFACE=disnix-client

instructs the Disnix tools to use the D-Bus client to communicate with a local Disnix service instance.

It is also possible to bypass the local Disnix service and directly execute all deployment activities with the following interface:

$ export DISNIX_CLIENT_INTERFACE=disnix-runactivity

The disnix-runactivity client interface is particularly useful for single-user/unprivileged user deployments. With the D-Bus client, you need a Disnix D-Bus daemon running in the background that authorizes the user to execute deployments. With disnix-runactivity, nothing is required beyond a single-user Nix installation.

Deploying the example system locally

As explained in earlier blog posts about Disnix, deployments are driven by three kinds of deployment specifications: a services model capturing all the services of which a system consists and how they depend on each other, an infrastructure model capturing all available target machines and their relevant configuration properties (including so-called container services that can host application services), and a distribution model mapping services in the services model to target machines in the infrastructure model (and to container services that a machine may provide).

Normally, Disnix deploys services to remote machines defined in the infrastructure model. For local deployments, we simply need to provide an infrastructure model with only one entry:

{
  localhost.properties.hostname = "localhost";
}

In the distribution model, we must map all services to the localhost target:


{infrastructure}:

{
  simpleAppservingTomcat = [ infrastructure.localhost ];
  axis2 = [ infrastructure.localhost ];

  HelloService = [ infrastructure.localhost ];
  HelloWorldService = [ infrastructure.localhost ];
  HelloWorld = [ infrastructure.localhost ];
}

With the above infrastructure and distribution model that facilitates local deployment, and the services model of the example system shown above, we can deploy the entire system on our local machine:

$ disnix-env -s services.nix -i infrastructure-local.nix -d distribution-local.nix

Deploying the example system locally as an unprivileged user

The deployment scenario shown earlier supports local deployment, but still requires super-user privileges. For example, to deploy Apache Tomcat, we must have write access to the state directory: /var to configure Apache Tomcat's state and deploy the Java web application archives. An unprivileged user typically lacks the permissions to perform modifications in the /var directory.

One of the key features of the Nix process management framework is that it makes all state directories configurable. State directories can be changed in such a way that unprivileged users can also deploy services (e.g. by changing the state directory to a sub folder in the user's home directory).

Disnix service models can also define these process management configuration parameters:

{ pkgs, system, distribution, invDistribution
, stateDir ? "/var"
, runtimeDir ? "${stateDir}/run"
, logDir ? "${stateDir}/log"
, cacheDir ? "${stateDir}/cache"
, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")
, forceDisableUserChange ? false
, processManager ? "systemd"
}:

let
  processType =
    if processManager == null then "managed-process"
    else if processManager == "sysvinit" then "sysvinit-script"
    else if processManager == "systemd" then "systemd-unit"
    else if processManager == "supervisord" then "supervisord-program"
    else if processManager == "bsdrc" then "bsdrc-script"
    else if processManager == "cygrunsrv" then "cygrunsrv-service"
    else if processManager == "launchd" then "launchd-daemon"
    else throw "Unknown process manager: ${processManager}";

  constructors = import ../../../nix-processmgmt/examples/service-containers-agnostic/constructors.nix {
    inherit pkgs stateDir runtimeDir logDir cacheDir tmpDir forceDisableUserChange processManager;
  };

  customPkgs = import ../top-level/all-packages.nix {
    inherit system pkgs stateDir;
  };
in
rec {
  simpleAppservingTomcat = constructors.simpleAppservingTomcat {
    httpPort = 8080;
    type = processType;
  };
}

The above Nix expression shows a partial Nix services model for the Java example system. The first four function parameters: pkgs, system, distribution, and invDistribution are standard Disnix service model parameters.

The remaining parameters are specific to the process management framework -- they allow you to change the state directories, to force-disable user changing (useful for unprivileged user deployments), and to select the process manager that should be used for daemons.

I have added a new command-line parameter (--extra-params) to the Disnix tools that can be used to propagate values for these additional parameters.

With the following command-line instruction, we change the base directory of the state directories to the user's home directory, force disable user changing (only a privileged user can do this), and change the process manager to sysvinit scripts:

$ disnix-env -s services.nix -i infrastructure-local.nix -d distribution-local.nix \
  --extra-params '{
    stateDir = "/home/sander/var";
    processManager = "sysvinit";
    forceDisableUserChange = true;
  }'

With the above command, we can deploy the example system completely as an unprivileged user, without requiring a system-level service manager such as systemd to manage Apache Tomcat.

Working with predeployed container services

In our examples so far, we have deployed systems that are entirely self-contained. However, it is also possible to deploy services to container services that have been deployed by other means. For example, you can install Apache Tomcat with your host system's distribution and use Dysnomia to integrate with it.

To allow Disnix to deploy services to these containers, we need an infrastructure model that captures their properties. We can automatically generate such an infrastructure model from the Dysnomia container configuration files, by running:

$ disnix-capture-infra infrastructure.nix > infrastructure-captured.nix

and using the captured infrastructure model to locally deploy the system:

$ disnix-env -s services.nix -i infrastructure-captured.nix -d distribution-local.nix

Undeploying a system

For local experimentation, it is quite common that you want to completely undeploy the system as soon as you no longer need it. Normally, this is done by writing an empty distribution model and redeploying the system with it, but that is still a bit of a hassle.

In the latest development version of Disnix, an undeploy can be done with the following command-line instruction:

$ disnix-env --undeploy -i infrastructure.nix


The --extra-params and --undeploy Disnix command-line options are part of the current development version of Disnix and will become available in the next release.

by Sander van der Burg at May 26, 2020 09:49 PM

May 25, 2020

Tweag I/O

Nix Flakes, Part 1: An introduction and tutorial

An introduction to Nix flakes and a tutorial on how to use them.

May 25, 2020 12:00 AM

April 30, 2020

Sander van der Burg

Deploying container and application services with Disnix

As described in many previous blog posts, Disnix's purpose is to deploy service-oriented systems -- systems that can be decomposed into inter-connected service components, such as databases, web services, web applications and processes -- to networks of machines.

To use Disnix effectively, two requirements must be met:

  • A system must be decomposed into independently deployable services, and these services must be packaged with Nix.
  • Services may require other services that provide environments with essential facilities to run them. In Disnix terminology, these environments are called containers. For example, to host a MySQL database, Disnix requires a MySQL DBMS as a container; to run a Java web application archive, it requires a Java Servlet container, such as Apache Tomcat; and to run a daemon, it requires a process manager, such as systemd, launchd or supervisord.

Disnix was originally designed to only deploy the (functional) application components (called services in Disnix terminology) of which a service-oriented system consists, but it was not designed to handle the deployment of any underlying container services.

In my PhD thesis, I called Disnix's problem domain service deployment. Another problem domain that I identified was infrastructure deployment that concerns the deployment of machine configurations, including container services.

The fact that these problem domains are separated means that, if we want to fully deploy a service-oriented system from scratch, we basically need to do infrastructure deployment first, e.g. install a collection of machines with system software and these container services, such as MySQL and Apache Tomcat, and once that is done, we can use these machines as deployment targets for Disnix.

There are a variety of solutions available to automate infrastructure deployment. Most notably, NixOps can be used to automatically deploy networks of NixOS configurations, and (if desired) automatically instantiate virtual machines in a cloud/IaaS environment, such as Amazon EC2.

Although combining NixOps for infrastructure deployment with Disnix for service deployment works great in many scenarios, there are still a number of concerns that are not adequately addressed:

  • Infrastructure and service deployment are still two (somewhat) separated processes. Although I have developed an extension toolset (called DisnixOS) to combine Disnix with the deployment concepts of NixOS and NixOps, we still need to run two kinds of deployment procedures. Ideally, it would be nice to fully automate the entire deployment process with only one command.
  • Although NixOS (and NixOps that extends NixOS' concepts to networks of machines and the cloud) do a great job in fully automating the deployments of machines, we can only reap their benefits if we can permit ourselves to use NixOS, which is a particular Linux distribution flavour -- sometimes you may need to deploy services to conventional Linux distributions, or to different kinds of operating systems (after all, one of the reasons to use service-oriented systems is to be able to use a diverse set of technologies).

    The Nix package manager also works on operating systems other than Linux, such as macOS, but there is no Nix-based deployment automation solution that can universally deploy infrastructure components to other operating systems (the only other infrastructure deployment solution that provides functionality similar to NixOS is the nix-darwin repository, which can only be used on macOS).
  • The NixOS module system does not facilitate the deployment of multiple instances of infrastructure components. Although this is probably a very uncommon use case, it may be desirable, for example, to run two MySQL DBMS services on one machine and use both of them as Disnix deployment targets for databases.

In a Disnix-context, services have no specific meaning or shape and can basically represent anything -- a satellite tool providing a plugin system (called Dysnomia) takes care of most of their deployment steps, such as their activation and deactivation.

A couple of years ago, I demonstrated with a proof-of-concept implementation that we can use Disnix and Dysnomia's features to deploy infrastructure components. This deployment approach is also capable of deploying multiple instances of container services to one machine.

Recently, I have revisited that idea again and extended it so that we can now deploy a service-oriented system including most underlying container services with a single command-line instruction.

About infrastructure deployment solutions

As described in the introduction, Disnix's purpose is service deployment and not infrastructure deployment. In the past, I have been using a variety of solutions to manage the underlying infrastructure of service-oriented systems:

  • In the very beginning, while working on my master thesis internship (in which I built the first prototype version of Disnix), there was not much automation at all -- for most of my testing activities I manually created VirtualBox virtual machines and manually installed NixOS on them, with all essential container services, such as Apache Tomcat and MySQL, because these were the container services that my target system required.

    Even after some decent Nix-based automated solutions appeared, I still ended up doing manual deployments for non-NixOS machines. For example, I still remember the steps I had to perform to prepare myself for the demo I gave at NixCon 2015, in which I configured a small heterogeneous network consisting of an Ubuntu, NixOS, and Windows machine. It took me many hours of preparation time to get the demo right.
  • Some time later, for a research paper about declarative deployment and testing, we have developed a tool called nixos-deploy-network that deploys NixOS configurations in a network of machines and is driven by a networked NixOS configuration file.
  • Around the same time, I developed a similar tool called disnixos-deploy-network that uses Disnix's deployment mechanisms to remotely deploy a network of NixOS configurations. It was primarily developed to show that Disnix's plugin system, Dysnomia, could also treat entire NixOS configurations as services.
  • When NixOps appeared (initially it was called Charon), I have also created facilities in the DisnixOS toolset to integrate with it -- for example DisnixOS can automatically convert a NixOps configuration to a Disnix infrastructure model.
  • And finally, I have created a proof of concept implementation that shows that Disnix can also treat every container service as a Disnix service and deploy it.

The idea behind the last approach is that we deploy two systems in sequential order with Disnix -- the former consisting of the container services and the latter of the application services.

For example, if we want to deploy a system that consists of a number of Java web applications and MySQL databases, such as the infamous Disnix StaffTracker example application (Java version), then we must first deploy a system with Disnix that provides the containers: the MySQL DBMS and Apache Tomcat:

$ disnix-env -s services-containers.nix \
-i infrastructure-bare.nix \
-d distribution-containers.nix \
--profile containers

As described in earlier blog posts about Disnix, deployments are driven by three configuration files -- the services model captures all distributable components of which the system consists (called services in a Disnix-context), the infrastructure model captures all target machines in the network and their relevant properties, and the distribution model specifies the mappings of services in the services model to the target machines (and container services already available on the machines in the network).

All the container services in the services model provided above refer to systemd services that, in addition to running Apache Tomcat and MySQL, also do the following:

  • They bundle a Dysnomia plugin that can be used to manage the life-cycles of Java web applications and MySQL databases.
  • They bundle a Dysnomia container configuration file capturing all relevant container configuration properties, such as the MySQL TCP port the daemon listens to, and the Tomcat web application deployment directory.
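To make the second point concrete, a Dysnomia container configuration file is simply a set of key=value pairs. The sketch below illustrates this with standard shell tools; the file location and the catalinaBaseDir property are illustrative assumptions (the article only shows tomcatPort):

```shell
# Sketch of a Dysnomia container configuration file (key=value pairs).
# The path and the catalinaBaseDir property are illustrative assumptions.
mkdir -p /tmp/dysnomia-example/containers
cat > /tmp/dysnomia-example/containers/tomcat-webapplication <<EOF
tomcatPort=8080
catalinaBaseDir=/var/tomcat/webapps
EOF

# Tools such as disnix-capture-infra can read these properties back:
. /tmp/dysnomia-example/containers/tomcat-webapplication
echo "Tomcat listens on TCP port $tomcatPort"
```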

For example, the Nix expression that configures Apache Tomcat has roughly the following structure:

{stdenv, dysnomia, httpPort, catalinaBaseDir, instanceSuffix ? ""}:

stdenv.mkDerivation {
  name = "simpleAppservingTomcat";

  # ... building and configuring Apache Tomcat is omitted here ...

  postInstall = ''
    # Add Dysnomia container configuration file for a Tomcat web application
    mkdir -p $out/etc/dysnomia/containers
    cat > $out/etc/dysnomia/containers/tomcat-webapplication${instanceSuffix} <<EOF
    tomcatPort=${toString httpPort}
    EOF

    # Copy the Dysnomia module that manages an Apache Tomcat web application
    mkdir -p $out/libexec/dysnomia
    ln -s ${dysnomia}/libexec/dysnomia/tomcat-webapplication $out/libexec/dysnomia
  '';
}

First, the Nix expression builds and configures Apache Tomcat (this is left out of the example to keep it short). After Apache Tomcat has been built and configured, the Nix expression generates the container configuration file and symlinks the tomcat-webapplication Dysnomia module from the Dysnomia toolset.

The disnix-env command-line instruction shown earlier deploys container services to target machines in the network, using a bare infrastructure model that does not provide any container services except the init system (which is systemd on NixOS). The --profile parameter specifies a Disnix profile to tell the tool that we are deploying a different kind of system than the default.

If the command above succeeds, then we have all required container services at our disposal. The deployment architecture of the resulting system may look as follows:

In the above diagram, the light grey colored boxes correspond to machines in a network, the dark grey boxes to container environments, and white ovals to services.

As you may observe, we have deployed three services -- to the test1 machine we have deployed an Apache Tomcat service (that itself is managed by systemd), and to the test2 machine we have deployed both Apache Tomcat and the MySQL server (both their lifecycles are managed with systemd).

We can run the following command to generate a new infrastructure model that provides the properties of these newly deployed container services:

$ disnix-capture-infra infrastructure-bare.nix > infrastructure.nix

As shown earlier, the retrieved infrastructure model provides all relevant configuration properties of the MySQL and Apache Tomcat containers that we have just deployed, because they expose their configuration properties via container configuration files.
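For illustration, the captured infrastructure model could look roughly as follows. This is a sketch based on Disnix's infrastructure model conventions and the deployment architecture described above (test1 hosting Apache Tomcat, test2 hosting Apache Tomcat and MySQL); the exact attribute names and values, such as mysqlPort, are assumptions:

```nix
{
  test1 = {
    properties = {
      hostname = "test1";
    };
    containers = {
      tomcat-webapplication = {
        tomcatPort = "8080";
      };
    };
  };

  test2 = {
    properties = {
      hostname = "test2";
    };
    containers = {
      tomcat-webapplication = {
        tomcatPort = "8080";
      };
      mysql-database = {
        mysqlPort = "3306";
      };
    };
  };
}
```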

By using the retrieved infrastructure model and running the following command, we can deploy our web application and database components:

$ disnix-env -s services.nix \
-i infrastructure.nix \
-d distribution.nix \
--profile services

In the above command-line invocation, the services model contains all application components, and the distribution model maps these application components to the corresponding target machines and their containers.

As with the previous disnix-env command invocation, we provide a --profile parameter to tell Disnix that we are deploying a different system. If we would use the same profile parameter as in the previous example, then Disnix would undeploy the container services and try to upgrade the system with the application services, which will obviously fail.

If the above command succeeds, then we have successfully deployed both the container and application services that our example system requires, resulting in a fully functional and activated system with a deployment architecture that may have the following structure:

As you may observe by looking at the diagram above, we have deployed a system that consists of a number of MySQL databases, Java web services and Java web applications.

The diagram uses the same notational conventions used in the previous diagram. The arrows denote inter-dependency relationships, telling Disnix that one service depends on another, and that dependency should be deployed first.
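As a sketch of how such an arrow is expressed in a services model (the service and package names come from the StaffTracker example; the exact attribute values here are assumptions), an inter-dependency is declared with the dependsOn attribute:

```nix
# Hypothetical services model fragment: StaffTracker depends on two web
# services, so Disnix deploys and activates those services first.
StaffTracker = {
  name = "StaffTracker";
  pkg = customPkgs.StaffTracker;
  dependsOn = {
    inherit GeolocationService RoomService;
  };
  type = "tomcat-webapplication";
};
```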

Exposing services as containers

The Disnix service container deployment approach that I just described works, but it is not an integrated solution -- it has a limitation that is comparable to the infrastructure and services deployment separation that I have explained earlier. It requires you to run two deployments: one for the containers and one for the services.

In the blog post that I wrote a couple of years ago, I also explained that in order to fully automate the entire process with a single command, this might eventually lead to "a layered deployment approach" -- the idea was to combine several system deployment processes into one. For example, you might want to deploy a service manager in the first layer, the container services for application components in the second, and in the third the application components themselves.

I also argued that it is probably not worth spending a lot of effort in automating multiple deployment layers -- for nearly all systems that I deployed, there were only two "layers" that I needed to keep track of: the infrastructure layer providing container services, and the service layer providing the application services. NixOps sufficed as a solution to automate the infrastructure parts for most of my use cases, except for deployment to non-NixOS machines and deploying multiple instances of container services, which is a very uncommon use case.

However, I got inspired to revisit this problem after I completed the work described in my previous blog post, in which I created a process manager-agnostic service management framework that works with a variety of process managers on a variety of operating systems.

Combining this framework with Disnix makes it possible to also easily deploy container services (most of them are daemons) to non-NixOS machines, including non-Linux machines such as macOS and FreeBSD, from the same declarative specifications.

Moreover, this framework also provides facilities to easily deploy multiple instances of the same service to the same machine.

Revisiting this problem also made me think about the "layered approach" again, and after some thinking I have dropped the idea. The problem of using layers is that:

  • We need to develop another tool that integrates the deployment processes of all layers into one. In addition to the fact that we need to implement more automation, this introduces many additional technical challenges -- for example, if we want to deploy three layers and the deployment of the second fails, how are we going to do a rollback?
  • A layered approach is somewhat "imperative" -- each layer deploys services that include Dysnomia modules and Dysnomia container configuration files. The Disnix service on each target machine performs a lookup in the Nix profile that contains all packages of the containers layer to find the required Dysnomia modules and container configuration files.

    Essentially, Dysnomia modules and container configurations are stored in a global namespace. This means the order in which the deployment of the layers is executed is important and that each layer can imperatively modify the behaviour of each Dysnomia module.
  • Because we need to deploy the system on a layer-by-layer basis, we cannot, for example, deploy services in different layers that have no dependencies on each other in parallel, making the deployment process slower than it could be.

After some thinking, I came up with a much simpler approach -- I have introduced a new concept to the Disnix services model that makes it possible to annotate services with a specification of the container services that they provide. This information can be used by application services that need to be deployed to such a container service.

For example, we can annotate the Apache Tomcat service in the Disnix services model as follows:

{ pkgs, system, distribution, invDistribution
, stateDir ? "/var"
, runtimeDir ? "${stateDir}/run"
, logDir ? "${stateDir}/log"
, cacheDir ? "${stateDir}/cache"
, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")
, forceDisableUserChange ? false
, processManager ? "systemd"
}:

let
  constructors = import ../../../nix-processmgmt/examples/services-agnostic/constructors.nix {
    inherit pkgs stateDir runtimeDir logDir cacheDir tmpDir forceDisableUserChange processManager;
  };

  customPkgs = import ../top-level/all-packages.nix {
    inherit system pkgs stateDir;
  };
in
rec {
  simpleAppservingTomcat = rec {
    name = "simpleAppservingTomcat";
    pkg = constructors.simpleAppservingTomcat {
      inherit httpPort;
      commonLibs = [ "${pkgs.mysql_jdbc}/share/java/mysql-connector-java.jar" ];
    };
    httpPort = 8080;
    catalinaBaseDir = "/var/tomcat/webapps";
    type = "systemd-unit";
    providesContainers = {
      tomcat-webapplication = {
        httpPort = 8080;
        catalinaBaseDir = "/var/tomcat/webapps";
      };
    };
  };

  GeolocationService = {
    name = "GeolocationService";
    pkg = customPkgs.GeolocationService;
    dependsOn = {};
    type = "tomcat-webapplication";
  };

  # remaining services omitted
}

In the above example, the simpleAppservingTomcat service refers to an Apache Tomcat server that serves Java web applications for one particular virtual host. The providesContainers property tells Disnix that the service is a container provider, providing a container named: tomcat-webapplication with the following properties:

  • For HTTP traffic, Apache Tomcat should listen on TCP port 8080
  • The Java web application archives (WAR files) should be deployed to the Catalina Servlet container. By copying the WAR files to the /var/tomcat/webapps directory, they should be automatically hot-deployed.

The other service in the services model (GeolocationService) is a Java web application that should be deployed to an Apache Tomcat container service.

If in a Disnix distribution model, we map the Apache Tomcat service (simpleAppservingTomcat) and the Java web application (GeolocationService) to the same machine:


{infrastructure}:

{
  simpleAppservingTomcat = [ infrastructure.test1 ];
  GeolocationService = [ infrastructure.test1 ];
}

Disnix will automatically search for a suitable container service provider for each service.

In the above scenario, Disnix knows that simpleAppservingTomcat provides a tomcat-webapplication container. GeolocationService uses the type tomcat-webapplication, indicating that it needs to be deployed to an Apache Tomcat Servlet container.

Because these services have been deployed to the same machine, Disnix will make sure that Apache Tomcat gets activated before GeolocationService, and it uses the Dysnomia module bundled with simpleAppservingTomcat to handle the deployment of the Java web application.

Furthermore, the properties that simpleAppservingTomcat exposes in the providesContainers attribute set are automatically propagated as container parameters to the GeolocationService Nix expression, so that it knows where the WAR file should be copied for automatic hot-deployment.
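For example, the build expression of GeolocationService might receive these container properties roughly as follows. This is a hypothetical sketch; the actual parameters and build steps of the example package may differ:

```nix
# Hypothetical sketch: Disnix passes the container properties that
# simpleAppservingTomcat exposes (such as catalinaBaseDir) as parameters.
{stdenv, catalinaBaseDir}:

stdenv.mkDerivation {
  name = "GeolocationService";
  src = ./GeolocationService.war;
  dontUnpack = true;
  installPhase = ''
    # Stage the WAR file so that the Dysnomia module can hot-deploy it
    # to ${catalinaBaseDir} on activation
    mkdir -p $out/webapps
    cp $src $out/webapps/GeolocationService.war
  '';
}
```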

If Disnix does not detect a service that provides a required container on the same machine, then it falls back to its original behaviour -- it automatically propagates the properties of a container in the infrastructure model, and assumes that the container service has already been deployed by an infrastructure deployment solution.


The notation used for the simpleAppservingTomcat service (shown earlier) refers to an attribute set. An attribute set also makes it possible to specify multiple container instances. However, it is far more common that we only need one single container instance.

Moreover, there is some redundancy -- we need to specify certain properties in two places: some properties belong both to the service itself and to the container properties that we want to propagate to the services that require them.

We can also use a shorter notation to expose only one single container:

simpleAppservingTomcat = rec {
  name = "simpleAppservingTomcat";
  pkg = constructors.simpleAppservingTomcat {
    inherit httpPort;
    commonLibs = [ "${pkgs.mysql_jdbc}/share/java/mysql-connector-java.jar" ];
  };
  httpPort = 8080;
  catalinaBaseDir = "/var/tomcat/webapps";
  type = "systemd-unit";
  providesContainer = "tomcat-webapplication";
};

In the above example, we have rewritten the service configuration of simpleAppservingTomcat to use the providesContainer attribute, which refers to a string. This shorter notation automatically exposes all non-reserved service properties as container properties.

For our example above, this means that it will automatically expose httpPort and catalinaBaseDir and ignore the remaining properties -- the remaining properties have a specific purpose for the Disnix deployment system.

Although the notation above simplifies things considerably, the example still contains a bit of redundancy -- some of the container properties that we want to expose to application services also need to be propagated to the constructor function, requiring us to specify the same properties twice.

We can eliminate this redundancy by encapsulating the creation of the service properties attribute set in a constructor function. With a constructor function, we can simply write:

simpleAppservingTomcat = constructors.simpleAppservingTomcat {
  httpPort = 8080;
  commonLibs = [ "${pkgs.mysql_jdbc}/share/java/mysql-connector-java.jar" ];
  type = "systemd-unit";
};

Example: deploying container and application services as one system

By applying the techniques described in the previous section to the StaffTracker example (e.g. distributing a simpleAppservingTomcat and mysql to the same machines that host Java web applications and MySQL databases), we can deploy the StaffTracker system including all its required container services with a single command-line instruction:

$ disnix-env -s services-with-containers.nix \
-i infrastructure-bare.nix \
-d distribution-with-containers.nix

The corresponding deployment architecture visualization may look as follows:

As you may notice, the above diagram looks very similar to the previously shown deployment architecture diagram of the services layer.

What has been added are the container services -- the ovals with the double borders denote services that are also container providers. The labels describe both the name of the service and the containers that it provides (behind the arrow ->).

Furthermore, all the services that are hosted inside a particular container environment (e.g. tomcat-webapplication) have a local inter-dependency on the corresponding container provider service (e.g. simpleAppservingTomcat), causing Disnix to activate Apache Tomcat before the web applications that are hosted inside it.

Another thing you might notice is that we have not completely eliminated the dependency on an infrastructure deployment solution -- the MySQL DBMS and Apache Tomcat services are deployed as systemd units, requiring the presence of systemd on the target system. systemd should be provided as part of the target Linux distribution, and cannot be managed by Disnix because it runs as PID 1.

Example: deploying multiple container service instances and application services

One of my motivating reasons to use Disnix as a deployment solution for container services is to be able to deploy multiple instances of them to the same machine. This can also be done in a combined container and application services deployment approach.

To allow, for example, two instances of Apache Tomcat to co-exist on one machine, we must configure them in such a way that their resources do not conflict:

{ pkgs, system, distribution, invDistribution
, stateDir ? "/var"
, runtimeDir ? "${stateDir}/run"
, logDir ? "${stateDir}/log"
, cacheDir ? "${stateDir}/cache"
, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")
, forceDisableUserChange ? false
, processManager ? "systemd"
}:

let
  constructors = import ../../../nix-processmgmt/examples/service-containers-agnostic/constructors.nix {
    inherit pkgs stateDir runtimeDir logDir cacheDir tmpDir forceDisableUserChange processManager;
  };
in
rec {
  simpleAppservingTomcat-primary = constructors.simpleAppservingTomcat {
    instanceSuffix = "-primary";
    httpPort = 8080;
    httpsPort = 8443;
    serverPort = 8005;
    ajpPort = 8009;
    commonLibs = [ "${pkgs.mysql_jdbc}/share/java/mysql-connector-java.jar" ];
    type = "systemd-unit";
  };

  simpleAppservingTomcat-secondary = constructors.simpleAppservingTomcat {
    instanceSuffix = "-secondary";
    httpPort = 8081;
    httpsPort = 8444;
    serverPort = 8006;
    ajpPort = 8010;
    commonLibs = [ "${pkgs.mysql_jdbc}/share/java/mysql-connector-java.jar" ];
    type = "systemd-unit";
  };

  # remaining services omitted
}


The above partial services model defines two Apache Tomcat instances that have been configured to listen on different TCP ports (for example, the primary Tomcat instance listens for HTTP traffic on port 8080, whereas the secondary instance listens on port 8081) and serve web applications from different deployment directories. Because their properties do not conflict, they can co-exist on the same machine.

With the following distribution model, we can deploy multiple container providers to the same machine and distribute application services to them:


{infrastructure}:

{
  # Container providers

  mysql-primary = [ infrastructure.test1 ];
  mysql-secondary = [ infrastructure.test1 ];
  simpleAppservingTomcat-primary = [ infrastructure.test2 ];
  simpleAppservingTomcat-secondary = [ infrastructure.test2 ];

  # Application components

  GeolocationService = {
    targets = [
      { target = infrastructure.test2;
        container = "tomcat-webapplication-primary";
      }
    ];
  };

  RoomService = {
    targets = [
      { target = infrastructure.test2;
        container = "tomcat-webapplication-secondary";
      }
    ];
  };

  StaffTracker = {
    targets = [
      { target = infrastructure.test2;
        container = "tomcat-webapplication-secondary";
      }
    ];
  };

  staff = {
    targets = [
      { target = infrastructure.test1;
        container = "mysql-database-secondary";
      }
    ];
  };

  zipcodes = {
    targets = [
      { target = infrastructure.test1;
        container = "mysql-database-primary";
      }
    ];
  };
}

The first four mappings in the distribution model shown above distribute the container providers. As you may notice, we distribute two MySQL instances that should co-exist on machine test1 and two Apache Tomcat instances that should co-exist on machine test2.

In the remainder of the distribution model, we map Java web applications and MySQL databases to these container providers. As explained in the previous blog post about deploying multiple container service instances, if no container is specified in the distribution model, Disnix will automatically map the service to the container that has the same name as the service's type.

In the above example, we have two instances of each container service with a different name. As a result, we need to use the more verbose notation for distribution mappings to instruct Disnix to which container provider we want to deploy the service.
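To summarize, the two notations compare as follows (fragment adapted from the distribution model above; the two forms are alternatives, not meant to appear together):

```nix
# Shorthand: Disnix auto-maps the service to the container that is named
# after the service's type (e.g. mysql-database).
zipcodes = [ infrastructure.test1 ];

# Verbose: explicitly select a container provider, which is required when
# multiple instances with different names co-exist on the same machine.
zipcodes = {
  targets = [
    { target = infrastructure.test1;
      container = "mysql-database-primary";
    }
  ];
};
```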

Deploying the system with the following command-line instruction:

$ disnix-env -s services-with-multicontainers.nix \
-i infrastructure-bare.nix \
-d distribution-with-multicontainers.nix

results in a running system that may have the following deployment architecture:

As you may notice, we have MySQL databases and Java web applications distributed over multiple container providers residing on the same machine. All services belong to the same system, deployed by a single Disnix command.

A more extreme example: multiple process managers

By exposing services as container providers in Disnix, my original requirements were met. Because the facilities are very flexible, I also discovered that there is much more I could do.

For example, on more primitive systems that do not have systemd, I can extend the services and distribution models in such a way that supervisord is deployed first as a process manager (as a sysvinit script that does not require any process manager service), then use supervisord to manage MySQL and Apache Tomcat, and finally use the Dysnomia plugin system to deploy the databases and Java web applications to these container services managed by supervisord:

As you may notice, the deployment architecture above looks similar to the first combined deployment example, with supervisord added as an extra container provider service.
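In the services model, this could be expressed roughly as follows. This is a hypothetical sketch: the constructor name and its parameters are assumptions based on the framework's conventions, not code from the article:

```nix
# Hypothetical sketch: supervisord is itself deployed as a sysvinit script
# and acts as the container provider for the other daemons.
supervisord = rec {
  name = "supervisord";
  pkg = constructors.supervisord {
    inherit inetHTTPServerPort;
  };
  inetHTTPServerPort = 9001;
  type = "sysvinit-script";
  providesContainer = "supervisord-program";
};

# Apache Tomcat is now managed by supervisord instead of systemd.
simpleAppservingTomcat = constructors.simpleAppservingTomcat {
  httpPort = 8080;
  type = "supervisord-program";
};
```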

More efficient reuse: expose any kind of service as container provider

In addition to managed processes (which the MySQL DBMS and Apache Tomcat services are), any kind of Disnix service can act as a container provider.

An example of such a non-process managed container provider could be Apache Axis2. In the StaffTracker example, all data access is provided by web services. These web services are implemented as Java web applications (WAR files) embedding an Apache Axis2 container that embeds an Axis2 Application Archive (AAR file) providing the web service implementation.

Every web application that is a web service includes its own implementation of Apache Axis2.

It is also possible to deploy a single Axis2 web application to Apache Tomcat, and treat each Axis2 Application Archive as a separate deployment unit using the axis2-webservice identifier as a container provider for any service of the type: axis2-webservice:

{ pkgs, system, distribution, invDistribution
, stateDir ? "/var"
, runtimeDir ? "${stateDir}/run"
, logDir ? "${stateDir}/log"
, cacheDir ? "${stateDir}/cache"
, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")
, forceDisableUserChange ? false
, processManager ? "systemd"
}:

let
  constructors = import ../../../nix-processmgmt/examples/service-containers-agnostic/constructors.nix {
    inherit pkgs stateDir runtimeDir logDir cacheDir tmpDir forceDisableUserChange processManager;
  };

  customPkgs = import ../top-level/all-packages.nix {
    inherit system pkgs stateDir;
  };
in
rec {
  ### Container providers

  simpleAppservingTomcat = constructors.simpleAppservingTomcat {
    httpPort = 8080;
    commonLibs = [ "${pkgs.mysql_jdbc}/share/java/mysql-connector-java.jar" ];
    type = "systemd-unit";
  };

  axis2 = customPkgs.axis2 {};

  ### Web services

  HelloService = {
    name = "HelloService";
    pkg = customPkgs.HelloService;
    dependsOn = {};
    type = "axis2-webservice";
  };

  HelloWorldService = {
    name = "HelloWorldService";
    pkg = customPkgs.HelloWorldService;
    dependsOn = {
      inherit HelloService;
    };
    type = "axis2-webservice";
  };
}

In the above partial services model, we have defined two container providers:

  • simpleAppservingTomcat, which provides a Servlet container in which Java web applications (WAR files) can be hosted.
  • axis2, a Java web application that acts as a container provider for Axis2 web services.

The remaining services are Axis2 web services that can be embedded inside the shared Axis2 container.

If we deploy the above example system, e.g.:

$ disnix-env -s services-optimised.nix \
-i infrastructure-bare.nix \
-d distribution-optimised.nix

the result may be the following deployment architecture:

As may be observed in the above architecture diagram, the web services deployed to the test2 machine use a shared Axis2 container, which is embedded as a Java web application inside Apache Tomcat.

The above system achieves a far better degree of reuse, because it does not include a redundant copy of Apache Axis2 for each web service.

Although it is possible to have a deployment architecture with a shared Axis2 container, this shared approach is not always desirable. For example, database connections managed by Apache Tomcat are shared between all web services embedded in an Axis2 container, which is not always acceptable from a security point of view.

Moreover, an unstable web service embedded in an Axis2 container might tear the whole container down, causing the other web services to crash as well. Still, the deployment system does not make it difficult to use a shared approach when it is desired.


With this new feature addition to Disnix, which can expose services as container providers, it becomes possible to deploy both container services and application services as one integrated system.

Furthermore, it also makes it possible to:

  • Deploy multiple instances of container services and deploy services to them.
  • For process-based container services, we can combine this with the process manager-agnostic framework described in the previous blog post, so that they can be used with any process manager on any operating system that it supports.

The fact that Disnix can now also deploy containers does not mean that it no longer relies on external infrastructure deployment solutions. For example, you still need target machines at your disposal that have Nix and Disnix installed and are remotely connectable, e.g. through SSH. For this, you still require an external infrastructure deployment solution, such as NixOps.

Furthermore, not all container services can be managed by Disnix. For example, systemd, which runs as the system's PID 1, cannot be deployed by Disnix. Instead, it must already be provided by the target system's Linux distribution (in NixOS' case it is deployed by Nix, but it is not managed by Disnix).

And there may also be other reasons why you may still want to use separated deployment processes for container and service deployment. For example, you may want to deploy to container services that cannot be managed by Nix/Disnix, or you may work in an organization in which two different teams take care of the infrastructure and the services.


The new features described in this blog post are part of the current development versions of Dysnomia and Disnix that can be obtained from my GitHub page. These features will become generally available in the next release.

Moreover, I have extended all my public Disnix examples with container deployment support (including the Java-based StaffTracker and composition examples shown in this blog post). These changes currently reside in the servicesascontainers Git branches.

The nix-processmgmt repository contains shared constructor functions for all kinds of system services, e.g. MySQL, Apache HTTP server, PostgreSQL and Apache Tomcat. These functions can be reused amongst all kinds of Disnix projects.

by Sander van der Burg at April 30, 2020 08:39 PM

April 23, 2020

Craige McWhirter

Building Daedalus Flight on NixOS

NixOS Daedalus Gears by Craige McWhirter

Daedalus Flight was recently released and this is how you can build and run this version of Deadalus on NixOS.

If you want to speed the build process up, you can add the IOHK Nix cache to your own NixOS configuration:


nix.binaryCaches = [
  # the IOHK binary cache URL
];
nix.binaryCachePublicKeys = [
  # the IOHK binary cache's public key
];

If you haven't already, you can clone the Daedalus repo and specifically the 1.0.0 tagged commit:

$ git clone --branch 1.0.0

Once you've cloned the repo and checked you're on the 1.0.0 tagged commit, you can build Daedalus flight with the following command:

$ nix build -f . daedalus --argstr cluster mainnet_flight

Once the build completes, you're ready to launch Daedalus Flight:

$ ./result/bin/daedalus

To verify that you have in fact built Daedalus Flight, first head to the Daedalus menu, then About Daedalus. You should see a title such as "DAEDALUS 1.0.0". The second check is to press [Ctrl]+d to access Daedalus Diagnostics; your Daedalus state directory should have mainnet_flight at the end of the path.

If you've got these, give yourself a pat on the back and grab yourself a refreshing bevvy while you wait for blocks to sync.

Daedalus FC1 screenshot

by Craige McWhirter at April 23, 2020 11:28 PM

April 18, 2020

Binary Cache Support

Up until now, the service has not supported directly fetching build dependencies from binary caches, such as Cachix. Instead, all build dependencies have been uploaded from the user’s local machine to the service the first time they were needed.

Today, this bottleneck has been removed: the service can now fetch build dependencies directly from binary caches, without taxing users’ upload bandwidth.

By default, the official Nix binary cache is added to all accounts, but a user can freely decide which caches should be queried for build dependencies (including Cachix caches).

An additional benefit of the new support for binary caches is that users who trust the same binary caches automatically share build dependencies from those caches. This means that if one user’s build has triggered a download from a cache, the next user who needs the same build dependency doesn’t have to spend time downloading it again.

For more information on how to use binary caches with the service, see the documentation.

April 18, 2020 12:00 AM

April 13, 2020

Graham Christensen

Erase your darlings

I erase my systems at every boot.

Over time, a system collects state on its root partition. This state lives in assorted directories like /etc and /var, and represents every under-documented or out-of-order step in bringing up the services.

“Right, run myapp-init.”

These small, inconsequential “oh, oops” steps are the pieces that get lost and don’t appear in your runbooks.

“Just download ca-certificates to … to fix …”

Each of these quick fixes leaves you doomed to repeat history in three years when you’re finally doing that dreaded RHEL 7 to RHEL 8 upgrade.

“Oh, touch /etc/ipsec.secrets or the l2tp tunnel won’t work.”

Immutable infrastructure gets us so close

Immutable infrastructure is a wonderfully effective method of eliminating so many of these forgotten steps. Leaning in to the pain by deleting and replacing your servers on a weekly or monthly basis means you are constantly testing and exercising your automation and runbooks.

The nugget here is the regular and indiscriminate removal of system state. Destroying the whole server doesn’t leave you much room to forget the little tweaks you made along the way.

These techniques work great when you meet two requirements:

  • you can provision and destroy servers with an API call
  • the servers aren’t inherently stateful

Long running servers

There are lots of cases in which immutable infrastructure doesn’t work, and the dirty secret is those servers need good tools the most.

Long-running servers cause long outages. Their runbooks are outdated and incomplete. They accrete tweaks and turn into an ossified, brittle snowflake — except its arms are load-bearing.

Let’s bring the ideas of immutable infrastructure to these systems too. Whether this system is embedded in a stadium’s jumbotron, in a datacenter, or under your desk, we can keep the state under control.

FHS isn’t enough

The hard part about applying immutable techniques to long running servers is knowing exactly where your application state ends and the operating system, software, and configuration begin.

This is hard because legacy operating systems and the Filesystem Hierarchy Standard poorly separate these areas of concern. For example, /var/lib is for state information, but how much of this do you actually care about tracking? What did you configure in /etc on purpose?

The answer is probably not a lot.

You may not care, but all of this accumulation of junk is a tarpit. Everything becomes harder: replicating production, testing changes, undoing mistakes.

New computer smell

Getting a new computer is this moment of cleanliness. The keycaps don’t have oils on them, the screen is perfect, and the hard drive is fresh and unspoiled — for about an hour or so.

Let’s get back to that.

How is this possible?

NixOS can boot with only two directories: /boot, and /nix.

/nix contains read-only system configurations, which are specified by your configuration.nix and are built and tracked as system generations. These never change. Once the files are created in /nix, the only way to change the config’s contents is to build a new system configuration with the contents you want.

Any configuration or files created on the drive outside of /nix is state and cruft. We can lose everything outside of /nix and /boot and have a healthy system. My technique is to explicitly opt in and choose which state is important, and only keep that.

How this is possible comes down to the boot sequence.

For NixOS, the bootloader follows the same basic steps as a standard Linux distribution: the kernel starts with an initial ramdisk, and the initial ramdisk mounts the system disks.

And here is where the similarities end.

NixOS’s early startup

NixOS configures the bootloader to pass some extra information: a specific system configuration. This is the secret to NixOS’s bootloader rollbacks, and also the key to erasing our disk on each boot. The parameter is named systemConfig.

On every startup the very early boot stage knows what the system’s configuration should be: the entire system configuration is stored in the read-only /nix/store, and the directory passed through systemConfig has a reference to the config. Early boot then manipulates /etc and /run to match the chosen setup. Usually this involves swapping out a few symlinks.

If /etc simply doesn’t exist, however, early boot creates /etc and moves on as if it were any other boot. It also creates /var, /dev, /home, and any other core directories that must be present.

Simply speaking, an empty / is not surprising to NixOS. In fact, the NixOS netboot, EC2, and installation media all start out this way.

Opting out

Before we can opt in to saving data, we must opt out of saving data by default. I do this by setting up my filesystem in a way that lets me easily and safely erase the unwanted data, while preserving the data I do want to keep.

My preferred method for this is using a ZFS dataset and rolling it back to a blank snapshot before it is mounted. A partition of any other filesystem would work just as well too, running mkfs at boot, or something similar. If you have a lot of RAM, you could skip the erase step and make / a tmpfs.
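For the tmpfs variant, the root filesystem entry in configuration.nix might look roughly like this (a sketch, not taken from the post; the size and mode options are assumptions you would tune for your machine):

  fileSystems."/" = {
    device = "none";
    fsType = "tmpfs";
    options = [ "defaults" "size=2G" "mode=755" ];
  };

With this in place there is nothing to erase at boot: the root filesystem simply starts empty every time.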

Opting out with ZFS

When installing NixOS, I partition my disk with two partitions, one for the boot partition, and another for a ZFS pool. Then I create and mount a few datasets.

My root dataset:

# zfs create -p -o mountpoint=legacy rpool/local/root

Before I even mount it, I create a snapshot while it is totally blank:

# zfs snapshot rpool/local/root@blank

And then mount it:

# mount -t zfs rpool/local/root /mnt

Then I mount the partition I created for the /boot:

# mkdir /mnt/boot
# mount /dev/the-boot-partition /mnt/boot

Create and mount a dataset for /nix:

# zfs create -p -o mountpoint=legacy rpool/local/nix
# mkdir /mnt/nix
# mount -t zfs rpool/local/nix /mnt/nix

And a dataset for /home:

# zfs create -p -o mountpoint=legacy rpool/safe/home
# mkdir /mnt/home
# mount -t zfs rpool/safe/home /mnt/home

And finally, a dataset explicitly for state I want to persist between boots:

# zfs create -p -o mountpoint=legacy rpool/safe/persist
# mkdir /mnt/persist
# mount -t zfs rpool/safe/persist /mnt/persist

Note: in my systems, datasets under rpool/local are never backed up, and datasets under rpool/safe are.
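The mounts above then appear in hardware-configuration.nix as explicit legacy mount points, roughly like this (a sketch based on the datasets created above; the /boot device name and fsType are placeholders):

  fileSystems."/" = { device = "rpool/local/root"; fsType = "zfs"; };
  fileSystems."/nix" = { device = "rpool/local/nix"; fsType = "zfs"; };
  fileSystems."/home" = { device = "rpool/safe/home"; fsType = "zfs"; };
  fileSystems."/persist" = { device = "rpool/safe/persist"; fsType = "zfs"; };
  fileSystems."/boot" = { device = "/dev/the-boot-partition"; fsType = "vfat"; };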

And now safely erasing the root dataset on each boot is very easy: after devices are made available, roll back to the blank snapshot:

  boot.initrd.postDeviceCommands = lib.mkAfter ''
    zfs rollback -r rpool/local/root@blank
  '';

I then finish the installation as normal. If all goes well, your next boot will start with an empty root partition but otherwise be configured exactly as you specified.

Opting in

Now that I’m keeping no state, it is time to specify what I do want to keep. My choices here are different based on the role of the system: a laptop has different state than a server.

Here are some different pieces of state and how I preserve them. These examples largely use reconfiguration or symlinks, but using ZFS datasets and mount points would work too.

Wireguard private keys

Create a directory under /persist for the key:

# mkdir -p /persist/etc/wireguard/

And use Nix’s wireguard module to generate the key there:

  networking.wireguard.interfaces.wg0 = {
    generatePrivateKeyFile = true;
    privateKeyFile = "/persist/etc/wireguard/wg0";
  };

NetworkManager connections

Create a directory under /persist, mirroring the /etc structure:

# mkdir -p /persist/etc/NetworkManager/system-connections

And use Nix’s etc module to set up the symlink:

  environment.etc."NetworkManager/system-connections" = {
    source = "/persist/etc/NetworkManager/system-connections/";
  };

Bluetooth devices

Create a directory under /persist, mirroring the /var structure:

# mkdir -p /persist/var/lib/bluetooth

And then use systemd’s tmpfiles.d rules to create a symlink from /var/lib/bluetooth to my persisted directory:

  systemd.tmpfiles.rules = [
    "L /var/lib/bluetooth - - - - /persist/var/lib/bluetooth"
  ];

SSH host keys

Create a directory under /persist, mirroring the /etc structure:

# mkdir -p /persist/etc/ssh

And use Nix’s openssh module to create and use the keys in that directory:

  services.openssh = {
    enable = true;
    hostKeys = [
      {
        path = "/persist/ssh/ssh_host_ed25519_key";
        type = "ed25519";
      }
      {
        path = "/persist/ssh/ssh_host_rsa_key";
        type = "rsa";
        bits = 4096;
      }
    ];
  };

ACME certificates

Create a directory under /persist, mirroring the /var structure:

# mkdir -p /persist/var/lib/acme

And then use systemd’s tmpfiles.d rules to create a symlink from /var/lib/acme to my persisted directory:

  systemd.tmpfiles.rules = [
    "L /var/lib/acme - - - - /persist/var/lib/acme"
  ];

Answering the question “what am I about to lose?”

I found this process a bit scary for the first few weeks: was I losing important data each reboot? No, I wasn’t.

If you’re worried and want to know what state you’ll lose on the next boot, you can list the files on your root filesystem and see if you’re missing something important:

# tree -x /
├── bin
│   └── sh -> /nix/store/97zzcs494vn5k2yw-dash-
├── boot
├── dev
├── etc
│   ├── asound.conf -> /etc/static/asound.conf
... snip ...

ZFS can give you a similar answer:

# zfs diff rpool/local/root@blank
M	/
+	/nix
+	/etc
+	/root
+	/var/lib/is-nix-channel-up-to-date
+	/etc/pki/fwupd
+	/etc/pki/fwupd-metadata
... snip ...

Your stateless future

You may bump into new state you meant to be preserving. When I’m adding new services, I think about the state they write and whether I care about it or not. If I do, I find a way to redirect their state to /persist.

Take care to reboot these machines on a somewhat regular basis. It will keep things agile, proving that your system state is tracked correctly.

This technique has given me the “new computer smell” on every boot without the datacenter full of hardware, and even on systems that do carry important state. I have deployed this strategy to systems in the large and small: build farm servers, database servers, my NAS and home server, my raspberry pi garage door opener, and laptops.

NixOS enables powerful new deployment models in so many ways, allowing for systems of all shapes and sizes to be managed properly and consistently. I think this model of ephemeral roots is yet another example of this flexibility and power. I would like to see this partitioning scheme become a reference architecture and take us out of this eternal tarpit of legacy.

April 13, 2020 12:00 AM

April 11, 2020

Graham Christensen

ZFS Datasets for NixOS

The outdated and historical nature of the Filesystem Hierarchy Standard means traditional Linux distributions have to go to great lengths to separate “user data” from “system data.”

NixOS’s filesystem architecture does cleanly separate user data from system data, and has a much easier job to do.

Traditional Linuxes

Because FHS mixes these two concerns across the entire hierarchy, splitting these concerns requires identifying every point across dozens of directories where the data is the system’s or the user’s. When adding ZFS to the mix, the installers typically have to create over a dozen datasets to accomplish this.

For example, Ubuntu’s upcoming ZFS support creates 16 datasets:

├── ROOT
│   └── ubuntu_lwmk7c
│       ├── log
│       ├── mail
│       ├── snap
│       ├── spool
│       ├── srv
│       ├── usr
│       │   └── local
│       ├── var
│       │   ├── games
│       │   └── lib
│       │       ├── AccountServices
│       │       ├── apt
│       │       ├── dpkg
│       │       └── NetworkManager
│       └── www

Going through the great pains of separating this data comes with significant advantages: a recursive snapshot at any point in the tree will create an atomic, point-in-time snapshot of every dataset below.

This means in order to create a consistent snapshot of the system data, an administrator would only need to take a recursive snapshot at ROOT. The same is true for user data: take a recursive snapshot of USERDATA and all user data is saved.


Because Nix stores all of its build products in /nix/store, NixOS doesn’t mingle these two concerns. NixOS’s runtime system, installed packages, and rollback targets are all stored in /nix.

User data is not.

This removes the entire complicated tree of datasets to facilitate FHS, and leaves us with only a few needed datasets.


Design for the atomic, recursive snapshots when laying out the datasets.

In particular, I don’t back up the /nix directory. This entire directory can always be rebuilt later from the system’s configuration.nix, and isn’t worth the space.

One way to model this might be splitting up the data into three top-level datasets:

├── local
│   └── nix
├── system
│   └── root
└── user
    └── home

In tank/local, I would store datasets that should almost never be snapshotted or backed up. tank/system would store data that I would want periodic snapshots for. Most importantly, tank/user would contain data I want regular snapshots and backups for, with a long retention policy.

From here, you could add a ZFS dataset per user:

├── local
│   └── nix
├── system
│   └── root
└── user
    └── home
        ├── grahamc
        └── gustav

Or a separate dataset for /var:

├── local
│   └── nix
├── system
│   ├── var
│   └── root
└── user

Importantly, this gives you three buckets for independent and regular snapshots.

The important part is having /nix under its own top-level dataset. This makes it a “cousin” to the data you do want backup coverage on, making it easier to take deep, recursive snapshots atomically.
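For example, a consistent, atomic backup of all user data then comes down to a single recursive snapshot (the pool and snapshot names are illustrative):

  # zfs snapshot -r tank/user@2020-04-11

The same command aimed at tank/system captures all system state in one step, while tank/local stays out of every backup.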


  • Enable compression with compression=on. Specifying on instead of lz4 or another specific algorithm will always pick the best available compression algorithm.
  • The dataset containing journald’s logs (where /var lives) should have xattr=sa and acltype=posixacl set to allow regular users to read their journal.
  • Nix doesn’t use atime, so atime=off on the /nix dataset is fine.
  • NixOS requires (as of 2020-04-11) mountpoint=legacy for all datasets. NixOS does not yet have tooling to require implicitly created ZFS mounts to settle before booting, and mountpoint=legacy plus explicit mount points in hardware-configuration.nix will ensure all your datasets are mounted at the right time.
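Combining the properties above, creating the datasets might look like this (a sketch, with pool and dataset names following the example layout, not exact commands from the post):

  # zfs create -o mountpoint=legacy -o compression=on -o atime=off tank/local/nix
  # zfs create -o mountpoint=legacy -o compression=on -o xattr=sa -o acltype=posixacl tank/system/var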

I don’t know how to pick ashift, and usually just allow ZFS to guess on my behalf.


I only create two partitions:

  1. /boot formatted vfat for EFI, or ext4 for BIOS
  2. The ZFS dataset partition.

There are spooky articles saying to only give ZFS entire disks. The truth is, you shouldn’t split a disk into two active partitions. Splitting the disk this way is just fine, since /boot is rarely read or written.

Note: If you do partition the disk, make sure you set the disk’s scheduler to none. ZFS takes this step automatically if it does control the entire disk.

On NixOS, you can set your scheduler to none via:

{ boot.kernelParams = [ "elevator=none" ]; }

Clean isolation

NixOS’s clean separation of concerns reduces the amount of complexity we need to track when considering and planning our datasets. This gives us flexibility later, and enables some superpowers like erasing my computer on every boot, which I’ll write about on Monday.

April 11, 2020 12:00 AM

March 27, 2020

New Resources

On the support side of the service, two new resources have been published:

  •, collecting all available documentation for users.

  • The feedback repository on GitHub, providing a way to report issues or ask questions related to the service.

These resources are mainly useful for beta users, but they are open to anyone. And anyone is of course welcome to request a free beta account for evaluating the service, by just sending me an email.

March 27, 2020 12:00 AM

March 23, 2020

Matthew Bauer

Announcing Nixiosk

Today I’m announcing a project I’ve been working on for the last few weeks. I’m calling it Nixiosk, which is kind of a smashing together of the words NixOS and Kiosk. The idea is to have an easy way to make locked-down, declarative systems.

My main application of this is my two Raspberry Pi systems that I own. Quite a few people have installed NixOS on these systems, but usually they start from some prebuilt image. A major goal of this project is to make it easy to build these images yourself. For this to work, I’ve had to make lots of changes to the NixOS cross-compilation ecosystem, but the results seem to be very positive. I also want the system to be locked down so that no user can log in directly on the machine. Instead, all administration is done on a remote machine, and deployed through SSH and Nix remote builders.

Right now, I have RetroArch (a frontend for a bunch of emulators) on my Raspberry Pi 4, and Epiphany (a web browser) on my Raspberry Pi 0. Both systems seem to be working pretty well.


1 Deploying

1.1 Install Nix

If you haven’t already, you need to install Nix. This can be done through the installer:

$ bash <(curl -L

1.2 Cache

To speed things up, you should setup a binary cache for nixiosk. This can be done easily through Cachix. First, install Cachix:

$ nix-env -iA cachix -f

Then, use the nixiosk cache:

$ cachix use nixiosk

1.3 Configuration

To make things simple, Nixiosk just reads from an ad-hoc JSON file that describes the hardware plus some other customizations. It looks like this:

{
    "hostName": "nixiosk",
    "hardware": "raspberryPi4",
    "authorizedKeys": [],
    "program": {
        "package": "epiphany",
        "executable": "/bin/epiphany",
        "args": [""]
    },
    "networks": {
        "my-router": "0000000000000000000000000000000000000000000000000000000000000000"
    },
    "locale": {
        "timeZone": "America/New_York",
        "regDom": "US",
        "lang": "en_US.UTF-8"
    },
    "localSystem": {
        "system": "x86_64-linux",
        "sshUser": "me",
        "hostName": "my-laptop-host"
    }
}

Here’s a basic idea of what each of these fields do:

  • hostName: Name of the host to use. If mDNS is configured on your network, this can be used to identify the IP address of the device via “<hostName>.local”.
  • hardware: A string describing what hardware we are using. Valid values currently are “raspberryPi0”, “raspberryPi1”, “raspberryPi2”, “raspberryPi3”, “raspberryPi4”.
  • authorizedKeys: A list of SSH public keys that are authorized to make changes to your device. Note this is required because no passwords will be set for this system.
  • program: What to do in the kiosk. This should be a Nixpkgs attribute (package), an executable in that package, and a list of args.
  • networks: This is a name/value pairing of SSIDs to PSK passphrases. This can be found with the wpa_passphrase(8) command from wpa_supplicant.
  • locale: This provides some information of what localizations to use. You can set regulation domain, language, time zone via “regDom”, “lang”, and “timeZone”. If unspecified, defaults to US / English / New York.
  • localSystem: Information on system to use for remote builder. Optional.

1.4 Initial deployment

The deployment is pretty easy provided you have Nix installed. Here are some steps:

$ git clone
$ cd nixiosk/
$ cp nixiosk.json.sample nixiosk.json

Now you need to make some changes to nixiosk.json to reflect what you want your system to do. The important ones are ‘authorizedKeys’ and ‘networks’ so that your systems can startup and you can connect to it.

If you have an SSH key setup, you can get its value with:

$ cat $HOME/.ssh/
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC050iPG8ckY/dj2O3ol20G2lTdr7ERFz4LD3R4yqoT5W0THjNFdCqavvduCIAtF1Xx/OmTISblnGKf10rYLNzDdyMMFy7tUSiC7/T37EW0s+EFGhS9yOcjCVvHYwgnGZCF4ec33toE8Htq2UKBVgtE0PMwPAyCGYhFxFLYN8J8/xnMNGqNE6iTGbK5qb4yg3rwyrKMXLNGVNsPVcMfdyk3xqUilDp4U7HHQpqX0wKrUvrBZ87LnO9z3X/QIRVQhS5GqnIjRYe4L9yxZtTjW5HdwIq1jcvZc/1Uu7bkMh3gkCwbrpmudSGpdUlyEreaHOJf3XH4psr6IMGVJvxnGiV9 mbauer@dellbook

which will give you a line for “authorizedKeys” like:

"authorizedKeys": ["ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC050iPG8ckY/dj2O3ol20G2lTdr7ERFz4LD3R4yqoT5W0THjNFdCqavvduCIAtF1Xx/OmTISblnGKf10rYLNzDdyMMFy7tUSiC7/T37EW0s+EFGhS9yOcjCVvHYwgnGZCF4ec33toE8Htq2UKBVgtE0PMwPAyCGYhFxFLYN8J8/xnMNGqNE6iTGbK5qb4yg3rwyrKMXLNGVNsPVcMfdyk3xqUilDp4U7HHQpqX0wKrUvrBZ87LnO9z3X/QIRVQhS5GqnIjRYe4L9yxZtTjW5HdwIq1jcvZc/1Uu7bkMh3gkCwbrpmudSGpdUlyEreaHOJf3XH4psr6IMGVJvxnGiV9 mbauer@dellbook"],

and you can get a PSK value for your WiFi network with:

$ nix run nixpkgs.wpa_supplicant -c wpa_passphrase my-network

so your .json file looks like:

"networks": {
  "my-network": "17e76a6490ac112dbeba996caa7cd1387c6ebf6ce721ef704f92b681bb2e9000"
}

Now, after inserting your Raspberry Pi SD card into the primary slot, you can deploy to it with:

$ ./ /dev/mmcblk0

You can now eject your SD card and insert it into your Raspberry Pi. It will boot immediately to an Epiphany browser, loading the configured start page.

Troubleshooting steps can be found in the README.

1.5 Redeployments

You can pretty easily make changes to a running system given you have SSH access. This is as easy as cloning the running config:

$ git clone ssh://root@nixiosk.local/etc/nixos/configuration.git nixiosk-configuration
$ cd nixiosk-configuration

Then, make some changes in your repo. After you’re done, you can just run ‘git push’ to redeploy.

$ git add .
$ git commit
$ git push

You’ll see the NixOS switch-to-configuration log in your command output. If all is successful, the system should immediately reflect your changes. If not, the output of Git should explain what went wrong.

Note that some versions of the Raspberry Pi, like the 0 and the 1, are not big enough to redeploy the whole system. You will probably need to set up remote builders. This is described in the README.

2 Technology

Here are some of the pieces that make the Kiosk system possible:

  • Cage / Wayland: Cage is a Wayland compositor that allows only one application to display at a time. This makes the system a true Kiosk.
  • NixOS - A Linux distro built on top of functional package management.
  • Basalt: A tool to manage NixOS directly from Git. This allows doing push-to-deploy directly to NixOS.
  • Plymouth: Nice graphical boot animations. Right now, it uses the NixOS logo but in the future this should be configurable so that you can include your own branding.
  • OpenSSH: Since no direct login is available, SSH is required for remote administration.
  • Avahi: Configures mDNS registration for the system, allowing you to remember host names instead of IP addresses.

I would also like to include some more tools to make administration easier:

  • ddclient / miniupnp: Allow registering external IP address with a DNS provider. This would enable administration outside of the device’s immediate network.

3 Project

You can try it out right now if you have a Raspberry Pi system. Other hardware is probably not too hard to support, but may require tweaking. Issues and pull requests are welcomed on the project page.

March 23, 2020 12:00 AM

March 18, 2020


Proposal for improving Nix error messages

I’m lucky to be in touch with a lot of people that use Nix day to day. One of the annoyances that pops up most frequently with those starting out with Nix is confusing error messages. Since the Nix community has previously stepped up and successfully funded the removal of Perl to reduce barriers for source code contributions, I think we ought to do the same for removing barriers when using Nix.

by Domen Kožar at March 18, 2020 08:00 AM