NixOS Planet

February 08, 2019

Matthew Bauer

Call for proofreaders and beta testers for 19.03

This was originally published on Discourse. I am putting it here for posterity reasons.

We get lots of contributors in Nixpkgs and NixOS who modify our source code. They are the most common type of contribution we receive. But, there is actually a great need for other types of contributions that don’t involve programming at all! For the benefit of new users, I am going to outline how you can easily contribute to the community and help make 19.03 the best NixOS release yet.

1 Proofreading

We have two different manuals in the NixOS/nixpkgs repo. One is for Nixpkgs, our collection of software packages, and the other is for NixOS, our Linux distribution. Proofreading these manuals is important in helping new users learn how our software works.

When you find an issue, you can do one of two things. The first and most encouraged is to open a PR on GitHub fixing the documentation. Both manuals are written in DocBook; the source for the Nixpkgs manual lives under doc/ and the source for the NixOS manual under nixos/doc/manual/ in the NixOS/nixpkgs repository.

GitHub allows you to edit these files directly on the web. You can also always use your own Git client. For reference on writing in DocBook, I recommend reading through the official DocBook documentation.

An alternative if you are unable to fix the documentation yourself is to open an issue. We use the same issue tracker for any issues with Nixpkgs/NixOS; it can be accessed through GitHub Issues. Please be sure to provide a link to where in the manual the issue is, as well as what is incorrect or otherwise confusing.

2 Beta testing

An alternative to proofreading is beta testing. There are a number of ways to do this, but I would suggest using VirtualBox. Some information on installing VirtualBox can be found online, but you should just need to set this NixOS option:

virtualisation.virtualbox.host.enable = true;

and add your user to the vboxusers group:

users.users.<user>.extraGroups = [ "vboxusers" ];

then rebuild your NixOS machine (sudo nixos-rebuild switch), and run this command to start VirtualBox:

$ VirtualBox
Other distros have their own ways of installing VirtualBox, see Download VirtualBox for more info.

You can download an unstable NixOS .ova file directly here. (WARNING: this will be a large file, a little below 1GB).

Once downloaded, you can import this .ova file directly into VirtualBox using “File” -> “Import Appliance…”. Select the .ova file downloaded from above and click through a series of Next dialogs, using the provided defaults. After this, you can boot your NixOS machine by selecting it from the list on the left and clicking “Start”.

The next step is to just play around with the NixOS machine and try to break it! You can report any issues you find on the GitHub Issues tracker. We use the same issue tracker for both NixOS and Nixpkgs. Just try to make your issues as easy to reproduce as possible. Be specific on where the problem is and how someone else could recreate the problem for themselves.

February 08, 2019 12:00 AM

December 25, 2018

Ollie Charles

Solving Planning Problems with Fast Downward and Haskell

In this post I’ll demonstrate my new fast-downward library and show how it can be used to solve planning problems. The name comes from the use of the backend solver - Fast Downward. But what’s a planning problem?

Roughly speaking, planning problems are a subclass of AI problems where we need to work out a plan that moves us from an initial state to some goal state. Typically, we have:

  • A known starting state - information about the world we know to be true right now.
  • A set of possible effects - deterministic ways we can change the world.
  • A goal state that we wish to reach.

With this, we need to find a plan:

  • A solution to a planning problem is a plan - a totally ordered sequence of steps that converge the starting state into the goal state.

Planning problems are essentially state space search problems, and crop up in all sorts of places. The common examples are that of moving a robot around, planning logistics problems, and so on, but they can be used for plenty more! For example, the Beam library uses state space search to work out how to converge a database from one state to another (automatic migrations) by adding/removing columns.

State space search is an intuitive approach - simply build a graph where nodes are states and edges are state transitions (effects), and find a path (possibly shortest) that gets you from the starting state to a state that satisfies some predicates. However, naive enumeration of all states rapidly grinds to a halt. Forming optimal plans (least cost, least steps, etc) is an extremely difficult problem, and there is a lot of literature on the topic (see ICAPS - the International Conference on Automated Planning and Scheduling and recent International Planning Competitions for an idea of the state of the art). The fast-downward library uses the state of the art Fast Downward solver and provides a small DSL to interface to it with Haskell.
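To make the naive approach concrete, here is a minimal breadth-first state-space search in plain Haskell - a self-contained sketch, entirely independent of the fast-downward API (bfsPlan and the toy step function below are invented for this illustration):

```haskell
import qualified Data.Set as Set

-- Naive state-space search: nodes are states, edges are labelled state
-- transitions produced by `step`. Breadth-first order means the first
-- path found to a goal state is also a shortest one.
bfsPlan :: Ord s => (s -> [(String, s)]) -> (s -> Bool) -> s -> Maybe [String]
bfsPlan step goal start = go (Set.singleton start) [(start, [])]
  where
    go _ [] = Nothing
    go seen ((s, path) : rest)
      | goal s = Just (reverse path)
      | otherwise =
          let nexts = [ (s', a : path)
                      | (a, s') <- step s
                      , not (Set.member s' seen)
                      ]
              seen' = foldr (Set.insert . fst) seen nexts
          in go seen' (rest ++ nexts)
```

For instance, with two "effects" over integers - increment and double - bfsPlan (\n -> [("inc", n + 1), ("double", n * 2)]) (== 5) 1 finds the three-step plan inc, double, inc. The point of the naive version is that it enumerates every reachable state; planners like Fast Downward exist precisely to avoid that blow-up using heuristics.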

In this post, we’ll look at using fast-downward in the context of solving a small planning problem - moving balls between rooms via a robot. This post is literate Haskell, here’s the context we’ll be working in:
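From the appendix at the end of the post, the module header and imports are:

```haskell
{-# language DisambiguateRecordFields #-}

module FastDownward.Examples.Gripper where

import Control.Monad
import qualified FastDownward.Exec as Exec
import FastDownward.Problem
```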

If you’d rather see the Haskell in its entirety without comments, simply head to the end of this post.

Modelling The Problem

Defining the Domain

As mentioned, in this example, we’ll consider the problem of transporting balls between rooms via a robot. The robot has two grippers and can move between rooms. Each gripper can hold zero or one balls. Our initial state is that everything is in room A, and our goal is to move all balls to room B.

First, we’ll introduce some domain specific types and functions to help model the problem. The fast-downward DSL can work with any type that is an instance of Ord.

A ball is modelled by its current location. As this changes over time, it is a Var - a state variable.

A gripper is modelled by its state - whether or not it’s holding a ball.

Finally, we’ll introduce a type of all possible actions that can be taken:
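From the appendix, the domain types are:

```haskell
data Room = RoomA | RoomB
  deriving (Eq, Ord, Show)

adjacent :: Room -> Room
adjacent RoomA = RoomB
adjacent RoomB = RoomA

data BallLocation = InRoom Room | InGripper
  deriving (Eq, Ord, Show)

data GripperState = Empty | HoldingBall
  deriving (Eq, Ord, Show)

type Ball = Var BallLocation

type Gripper = Var GripperState

data Action = PickUpBall | SwitchRooms | DropBall
  deriving (Show)
```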

With this, we can now begin modelling the specific instance of the problem. We do this by working in the Problem monad, which lets us introduce variables (Vars) and specify their initial state.

Setting the Initial State

First, we introduce a state variable for each of the 4 balls. As in the problem description, all balls are initially in room A.

Next, introduce a variable for the room the robot is in - which also begins in room A.

We also introduce variables to track the state of each gripper.
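From the appendix, the opening of the problem computation introduces these state variables:

```haskell
problem :: Problem (Maybe [Action])
problem = do
  balls <- replicateM 4 (newVar (InRoom RoomA))
  robotLocation <- newVar RoomA
  grippers <- replicateM 2 (newVar Empty)
```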

This is sufficient to model our problem. Next, we’ll define some effects to change the state of the world.

Defining Effects

Effects are computations in the Effect monad - a monad that allows us to read and write to variables, and also fail (via MonadPlus). We could define these effects as top-level definitions (which might be better if we were writing a library), but here I’ll just define them inline so they can easily access the above state variables.

Effects may be used at any time by the solver. Indeed, that’s what solving planning problems is all about! The hard part is choosing effects intelligently, rather than blindly trying everything. Fortunately, you don’t need to worry about that - Fast Downward will take care of that for you!

Picking Up Balls

The first effect takes a ball and a gripper, and attempts to pick up that ball with that gripper.
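From the appendix (robotLocation is the robot's state variable introduced earlier):

```haskell
pickUpBallWithGripper :: Ball -> Gripper -> Effect Action
pickUpBallWithGripper b gripper = do
  Empty <- readVar gripper

  robotRoom <- readVar robotLocation
  ballLocation <- readVar b
  guard (ballLocation == InRoom robotRoom)

  writeVar b InGripper
  writeVar gripper HoldingBall

  return PickUpBall
```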

  1. First we check that the gripper is empty. This can be done concisely by using an incomplete pattern match. do notation desugars incomplete pattern matches to a call to fail, which in the Effect monad simply means “this effect can’t currently be used”.

  2. Next, we check where the ball and robot are, and make sure they are both in the same room.

  3. Here we couldn’t choose a particular pattern match to use, because picking up a ball should be possible in either room. Instead, we simply observe the location of both the ball and the robot, and use an equality test with guard to make sure they match.

  4. If we got this far then we can pick up the ball. The act of picking up the ball is to say that the ball is now in a gripper, and that the gripper is now holding a ball.

  5. Finally, we return some domain specific information to use if the solver chooses this effect. This has no impact on the final plan, but it’s information we can use to execute the plan in the real world (e.g., sending actual commands to the robot).

Moving Between Rooms

This effect moves the robot to the room adjacent to its current location.
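From the appendix:

```haskell
moveRobotToAdjacentRoom :: Effect Action
moveRobotToAdjacentRoom = do
  modifyVar robotLocation adjacent
  return SwitchRooms
```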

This is an “unconditional” effect as we don’t have any explicit guards or pattern matches. We simply flip the current location by an adjacency function.

Again, we finish by returning some information to use when this effect is chosen.

Dropping Balls

Finally, we have an effect to drop a ball from a gripper.
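From the appendix:

```haskell
dropBall :: Ball -> Gripper -> Effect Action
dropBall b gripper = do
  HoldingBall <- readVar gripper
  InGripper <- readVar b

  robotRoom <- readVar robotLocation

  writeVar gripper Empty

  writeVar b (InRoom robotRoom)

  return DropBall
```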

  1. First we check that the given gripper is holding a ball, and the given ball is in a gripper.

  2. If we got here then those assumptions hold. We’ll update the location of the ball to be the location of the robot, so first read out the robot’s location.

  3. Empty the gripper

  4. Move the ball.

  5. And we’re done! We’ll just return a tag to indicate that this effect was chosen.

Solving Problems

With our problem modelled, we can now attempt to solve it. We invoke solve with a particular search engine (in this case A* with landmark counting heuristics). We give the solver two bits of information:

  1. A list of all effects - all possible actions the solver can use. These are precisely the effects we defined above, but instantiated for all balls and grippers.
  2. A goal state. Here we’re using a list comprehension which enumerates all balls, adding the condition that the ball location must be InRoom RoomB.
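Concretely, the call looks like this in the appendix (cfg is the search-engine configuration given at the end of the post):

```haskell
solve
  cfg
  ( [ pickUpBallWithGripper b g | b <- balls, g <- grippers ]
      ++ [ dropBall b g | b <- balls, g <- grippers ]
      ++ [ moveRobotToAdjacentRoom ]
  )
  [ b ?= InRoom RoomB | b <- balls ]
```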

So far we’ve been working in the Problem monad. We can escape this monad by using runProblem :: Problem a -> IO a. In our case, a is Maybe [Action], so running the problem might give us a plan (courtesy of solve). If it did, we’ll print the plan.

fast-downward allows you to extract a totally ordered plan from a solution, but can also provide a partiallyOrderedPlan. This type of plan is a graph (partial order) rather than a list (total order), and attempts to recover some concurrency. For example, if two effects do not interact with each other, they will be scheduled in parallel.

Well, Did it Work?!

All that’s left is to run the problem!

> main
Found a plan!
1: PickUpBall
2: PickUpBall
3: SwitchRooms
4: DropBall
5: DropBall
6: SwitchRooms
7: PickUpBall
8: PickUpBall
9: SwitchRooms
10: DropBall
11: DropBall

Woohoo! Not bad for 0.02 secs, too :)

Behind The Scenes

It might be interesting to some readers to understand what’s going on behind the scenes. Fast Downward is a C++ program, yet somehow it seems to be running Haskell code with nothing but an Ord instance - there are no marshalling types involved!

First, let’s understand the input to Fast Downward. Fast Downward requires an encoding in its own SAS format. This format has a list of variables, where each variable contains a list of values. The contents of the values aren’t actually used by the solver; rather, it just works with indices into the list of values for a variable. This observation means we can just invent values on the Haskell side and carefully manage the mapping of indices back and forth.

Next, Fast Downward needs a list of operators which are ground instantiations of our effects above. Ground instantiations of operators mention exact values of variables. Returning to our gripper example, pickUpBallWithGripper b gripper actually produces 2 operators - one for each room. However, we didn’t have to be this specific in the Haskell code, so how are we going to recover this information?

fast-downward actually performs expansion on the given effects to find out all possible ways they could be called, by non-deterministically evaluating them to find a fixed point.

A small example can be seen in the moveRobotToAdjacentRoom Effect. This will actually produce two operators - one to move from room A to room B, and one to move from room B to room A. The body of this Effect is (once we inline the definition of modifyVar)
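A sketch of the inlined body (modifyVar v f desugars into a readVar followed by a writeVar; currentRoom is just an illustrative name):

```haskell
moveRobotToAdjacentRoom = do
  currentRoom <- readVar robotLocation
  writeVar robotLocation (adjacent currentRoom)
  return SwitchRooms
```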

Initially, we only know that robotLocation can take the value RoomA, as that is what the variable was initialised with. So we pass this in, and see what the rest of the computation produces. This means we evaluate adjacent RoomA to yield RoomB, and write RoomB into robotLocation. We’re done for the first pass through this effect, but we gained new information - namely that robotLocation might at some point contain RoomB. Knowing this, we then rerun the effect, but the first readVar gives us two paths:
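Sketching the two branches of this second pass as comments:

```haskell
-- readVar robotLocation yields RoomA:
--   writeVar robotLocation (adjacent RoomA)   -- writes RoomB, already known
-- readVar robotLocation yields RoomB:
--   writeVar robotLocation (adjacent RoomB)   -- writes RoomA, a new assignment
```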

This shows us that robotLocation might also be set to RoomA. However, we already knew this, so at this point we’ve reached a fixed point.

In practice, this process is run over all Effects at the same time because they may interact - a change in one Effect might cause new paths to be found in another Effect. However, because fast-downward only works with finite domain representations, this algorithm always terminates. Unfortunately, I have no way of enforcing this that I can see, which means a user could infinitely loop this normalisation process by writing modifyVar v succ, which would produce an infinite number of variable assignments.


CircuitHub are using this in production (and I mean real, physical production!) to coordinate activities in its factories. By using AI, we have a declarative interface to the production process – rather than saying what steps are to be performed, we can instead say what state we want to end up in and we can trust the planner to find a suitable way to make it so.

Haskell really shines here, giving a very powerful way to present problems to the solver. The industry standard is PDDL, a Lisp-like language that I’ve found in practice is less than ideal to actually encode problems. By using Haskell, we:

  • Can easily feed the results of the planner into a scheduler to execute the plan, with no messy marshalling.
  • Use well known means of abstraction to organise the problem. For example, in the above we use Haskell as a type of macro language – using do notation to help us succinctly formulate the problem.
  • Abstract out the details of planning problems so the rest of the team can focus on the domain specific details – i.e., what options are available to the solver, and the domain specific constraints they are subject to.

fast-downward is available on Hackage now, and I’d like to express a huge thank you to CircuitHub for giving me the time to explore this large space and to refine my work into the best solution I could think of. This work is the result of numerous iterations, but I think it was worth the wait!

Appendix: Code Without Comments

Here is the complete example, as a single Haskell block:

{-# language DisambiguateRecordFields #-}

module FastDownward.Examples.Gripper where

import Control.Monad
import qualified FastDownward.Exec as Exec
import FastDownward.Problem

data Room = RoomA | RoomB
  deriving (Eq, Ord, Show)

adjacent :: Room -> Room
adjacent RoomA = RoomB
adjacent RoomB = RoomA

data BallLocation = InRoom Room | InGripper
  deriving (Eq, Ord, Show)

data GripperState = Empty | HoldingBall
  deriving (Eq, Ord, Show)

type Ball = Var BallLocation

type Gripper = Var GripperState

data Action = PickUpBall | SwitchRooms | DropBall
  deriving (Show)

problem :: Problem (Maybe [Action])
problem = do
  balls <- replicateM 4 (newVar (InRoom RoomA))
  robotLocation <- newVar RoomA
  grippers <- replicateM 2 (newVar Empty)

  let
    pickUpBallWithGripper :: Ball -> Gripper -> Effect Action
    pickUpBallWithGripper b gripper = do
      Empty <- readVar gripper
      robotRoom <- readVar robotLocation
      ballLocation <- readVar b
      guard (ballLocation == InRoom robotRoom)
      writeVar b InGripper
      writeVar gripper HoldingBall
      return PickUpBall

    moveRobotToAdjacentRoom :: Effect Action
    moveRobotToAdjacentRoom = do
      modifyVar robotLocation adjacent
      return SwitchRooms

    dropBall :: Ball -> Gripper -> Effect Action
    dropBall b gripper = do
      HoldingBall <- readVar gripper
      InGripper <- readVar b
      robotRoom <- readVar robotLocation
      writeVar b (InRoom robotRoom)
      writeVar gripper Empty
      return DropBall

  res <-
    solve
      cfg
      ( [ pickUpBallWithGripper b g | b <- balls, g <- grippers ]
          ++ [ dropBall b g | b <- balls, g <- grippers ]
          ++ [ moveRobotToAdjacentRoom ]
      )
      [ b ?= InRoom RoomB | b <- balls ]

  return $ case res of
    Solved solution -> Just (totallyOrderedPlan solution)
    _ -> Nothing

main :: IO ()
main = do
  plan <- runProblem problem
  case plan of
    Nothing ->
      putStrLn "Couldn't find a plan!"

    Just steps -> do
      putStrLn "Found a plan!"
      zipWithM_ (\i step -> putStrLn $ show i ++ ": " ++ show step) [1::Int ..] steps

cfg :: Exec.SearchEngine
cfg =
  Exec.AStar Exec.AStarConfiguration
    { evaluator =
        Exec.LMCount Exec.LMCountConfiguration
          { lmFactory =
              Exec.LMExhaust Exec.LMExhaustConfiguration
                { reasonableOrders = False
                , onlyCausalLandmarks = False
                , disjunctiveLandmarks = True
                , conjunctiveLandmarks = True
                , noOrders = False
                }
          , admissible = False
          , optimal = False
          , pref = True
          , alm = True
          , lpSolver = Exec.CPLEX
          , transform = Exec.NoTransform
          , cacheEstimates = True
          }
    , lazyEvaluator = Nothing
    , pruning = Exec.Null
    , costType = Exec.Normal
    , bound = Nothing
    , maxTime = Nothing
    }

by Oliver Charles at December 25, 2018 12:00 AM

November 26, 2018

Matthew Bauer

Subjective ranking of build systems

1 My subjective ranking of build systems

Very few of us are happy with our choices of build systems. There are a lot out there and none feel quite right to many people. I wanted to offer my personal opinions on build systems. Every build system is “bad” in its own way. But some are much worse than others.

As maintainers of Nixpkgs, we have to deal with all of them. I’ve avoided build systems that are language-specific: those build systems are usually the only choice for your language, so ranking them would inevitably include opinions on the language itself. So, I’ve included in this list only language-neutral build systems. In addition, I’ve filtered out any build systems that are not included in Nixpkgs. This perspective prioritizes features that make your project easiest to package in cross-platform ways. It’s very subjective, so I only speak for myself here.

I separate two kinds of software used for packages. One is the “meta” build system that provides an abstract interface to create build rules. The other is the build runner that will run the rules. Most meta build systems support targeting multiple backends.

1.1 What makes a good build system?

Some criteria I have for these build systems.

  • Good defaults built in. By default, packages should support specifying “prefix” and “destination directory”.
  • Works with widely available software. Being able to generate Makefiles is a big bonus. Everyone has access to make - not everyone has Ninja. This is often needed for bootstrapping.
  • Supports cross compilation concepts. A good separation between build time and runtime is a must-have! In addition, you should be able to set build, host, and target from the command line. This makes things much easier for packaging and bootstrapping.
  • Detection of dependencies reuses existing solutions. pkg-config provides an easy way to detect the absolute paths of dependencies. No need to reinvent the wheel here.
  • The fewer dependencies, the better! Requiring a Java or Python runtime means it takes longer to rebuild the world. They introduce bottlenecks where every package needs to wait for these runtimes to be built before we can start running things in parallel.

1.2 Ranking of meta build systems from bad to worse

  1. GNU Autotools (Automake & autoconf)
  2. Meson
  3. CMake
  4. gyp
  5. qmake
  6. imake
  7. premake

GNU Autotools comes in at the top. It has the best support for cross compilation of any meta build system, and it has been around long enough that the classic “./configure && make && make install” steps just work. Because the configure script is just a simple bash script, packages don’t have to depend directly on GNU Autotools at build time. This is a big plus in bootstrapping software. I think Meson has made a lot of progress in improving its cross compilation support. It’s not quite there in my opinion, as it requires you to create cross tool files instead of using command line arguments.

1.3 Ranking of build runners from bad to worse

  1. GNU Make
  2. Ninja
  3. Bazel
  4. SCons

GNU Make is still the top choice in my personal opinion. It has been around for a while, Makefiles are widely understood, and GNU Make is included everywhere. In addition, the Makefile build rule format is easy to parallelize. Ninja still requires Python to build itself; this adds to the Nixpkgs bottleneck because Python is not included in the bootstrap tools. While there are some speedups in Ninja, they don’t appear to be significant enough to be worth switching at this time. At the same time, Ninja is still a perfectly good choice if you value performance over legacy support.

1.4 Conclusion

In Nixpkgs, we have made an attempt to support whatever build system you are using. But, some are definitely better than others.

My main goal here is to try to get software authors to think more critically about what build system they are using. In my opinion, it is better to use well known software over more obscure systems. These shouldn’t be taken as a universal truth. Everyone has their own wants and needs. But, if your build system comes in at the bottom of this list, you might want to consider switching to something else!

November 26, 2018 12:00 AM

November 08, 2018


The nixops defaults module

Avoiding code repetition in a nixops deployment

As with most configuration management tools, there are some options in nixops that need to be defined for virtually any machine in a deployment. These global options tend to be abstracted in a common base profile that is simply included at the top of a node configuration. This base profile can be used for including default packages, services or machine configuration usually needed on all machines—like networking debug tools and admin users with access to the whole network.

November 08, 2018 02:00 PM

October 30, 2018

Sander van der Burg

Auto patching prebuilt binary software packages for deployment with the Nix package manager

As explained in many previous blog posts, most of the quality properties of the Nix package manager (such as reliable deployment) stem from the fact that all packages are stored in a so-called Nix store, in which every package resides in its own isolated folder with a hash prefix that is derived from all build inputs (such as: /nix/store/gf00m2nz8079di7ihc6fj75v5jbh8p8v-zlib-1.2.11).

This unorthodox naming convention makes it possible to safely store multiple versions and variants of the same package next to each other.

Although isolating packages in the Nix store provides all kinds of benefits, it also has a big drawback -- common components, such as shared libraries, can no longer be found in their "usual locations", such as /lib.

For packages that are built from source with the Nix package manager this is typically not a problem:

  • The Nix expression language computes the Nix store paths for the required packages. By simply referring to the variable that contains the build result, you can obtain the Nix store path of the package, without having to remember them yourself.
  • Nix statically binds shared libraries to ELF binaries by modifying the binary's RPATH field. As a result, binaries no longer rely on the presence of their library dependencies in global locations (such as /lib), but use the libraries stored in isolation in the Nix store.
  • The GNU linker (the ld command) has been wrapped to transparently add the paths of all the library packages to the RPATH field of the ELF binary, whenever a dynamic library is provided.

As a result, you can build most packages from source code by simply executing their standardized build procedures in a Nix builder environment, such as: ./configure --prefix=$out; make; make install.

When it is desired to deploy prebuilt binary packages with Nix then you may probably run into various kinds of challenges:

  • ELF executables require the presence of an ELF interpreter in /lib/ld-linux.so.2 (on x86) and /lib64/ld-linux-x86-64.so.2 (on x86-64), which is impure and does not exist in NixOS.
  • ELF binaries produced by conventional means typically have no RPATH configured. As a result, they expect libraries to be present in global namespaces, such as /lib. Since these directories do not exist in NixOS, an executable will typically fail to work.

To make prebuilt binaries work in NixOS, there are basically two solutions -- it is possible to compose so-called FHS user environments from a set of Nix packages in which shared components can be found in their "usual locations". The drawback is that it requires special privileges and additional work to compose such environments.

The preferred solution is to patch prebuilt ELF binaries with patchelf (e.g. appending the library dependencies to the RPATH of the executable) so that their dependencies are loaded from the Nix store. I wrote a guide that demonstrates how to do this for a number of relatively simple packages.

Although it is possible to patch prebuilt ELF binaries to make them work from the Nix store, such a process is typically tedious and time consuming -- you must dissect a package, search for all relevant ELF binaries, figure out which libraries a binary requires, find the corresponding packages that provide them and then update the deployment instructions to patch the ELF binaries.

For small projects, a manual binary patching process is still somewhat manageable, but for a complex project such as the Android SDK, that provides a large collection of plugins containing a mix of many 32-bit and 64-bit executables, manual patching is quite laborious, in particular when it is desired to keep all plugins up to date -- plugin packages are updated quite frequently, forcing the packager to re-examine all binaries over and over again.

To make the Android SDK patching process easier, I wrote a small tool that can mostly automate it. The tool can also be used for other kinds of binary packages.

Automatic searching for library locations

In order to make ELF binaries work, they must be patched in such a way that they use an ELF interpreter from the Nix store and their RPATH fields should contain all paths to the libraries that they require.

We can gather a list of required libraries for an executable, by running:

$ patchelf --print-needed ./zipmix

Instead of manually patching the executable with this provided information, we can also create a function that searches for the corresponding libraries in a list of search paths. The tool could take the first path that provides the required libraries.

For example, by setting the following colon-separated search path environment variable:

$ export libs=/nix/store/7y10kn6791h88vmykdrddb178pjid5bv-glibc-2.27/lib:/nix/store/xh42vn6irgl1cwhyzyq1a0jyd9aiwqnf-zlib-1.2.11/lib

The tool can then automatically discover that the path /nix/store/7y10kn6791h88vmykdrddb178pjid5bv-glibc-2.27/lib provides the required libraries.

We can also run into situations in which we cannot find any valid path to a required library -- in such cases, we can throw an error and notify the user.

It is also possible to extend the searching approach to the ELF interpreter. The following command provides the path to the required ELF interpreter:

$ patchelf --print-interpreter ./zipmix

We can search in the list of library packages for the ELF interpreter as well so that we no longer have to explicitly specify it.

Dealing with multiple architectures

Another problem with the Android SDK is that plugin packages may provide both x86 and x86-64 binaries. You cannot link libraries compiled for x86 against an x86-64 executable and vice versa. This restriction could introduce a new kind of risk in the automatic patching process.

Fortunately, it is also possible to figure out for what kind of architecture a binary was compiled:

$ readelf -h ./zipmix
ELF Header:
Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
Class: ELF64
Data: 2's complement, little endian
Version: 1 (current)
OS/ABI: UNIX - System V
ABI Version: 0
Type: EXEC (Executable file)
Machine: Advanced Micro Devices X86-64

The above command-line instruction shows that we have a 64-bit binary (Class: ELF64) compiled for the x86-64 architecture (Machine: Advanced Micro Devices X86-64).

I have also added a check that ensures that the tool will only add a library path to the RPATH if the architecture of the library is compatible with the binary. As a result, it is not possible to accidentally link a library with an incompatible architecture to a binary.

Patching collections of binaries

Another inconvenience is the fact that Android SDK plugins typically provide more than one binary that needs to be patched. We can also recursively search an entire directory for ELF binaries:

$ autopatchelf ./bin

The above command-line instruction recursively searches for binaries in the bin/ sub directory and automatically patches them.

Sometimes recursively patching executables in a directory hierarchy could have undesired side effects. For example, the Android SDK also provides emulators having their own set of ELF binaries that need to run in the emulator. Patching these binaries typically breaks the software running in the emulator. We can also disable recursion if this is desired:

$ autopatchelf --no-recurse ./bin

or revert to patching individual executables:

$ autopatchelf ./zipmix

The result

Having most aspects of the binary patching process automated results in a substantial reduction in code size for the Nix expressions that need to deploy prebuilt packages.

In my previous blog post, I have shown two example cases for which I manually derived the patchelf instructions that I need to run. By using the autopatchelf tool I can significantly decrease the size of the corresponding Nix expressions.

For example, the following expression deploys kzipmix:

{stdenv, fetchurl, autopatchelf, glibc}:

stdenv.mkDerivation {
  name = "kzipmix-20150319";
  src = fetchurl {
    url =;
    sha256 = "0fv3zxhmwc3p34larp2d6rwmf4cxxwi71nif4qm96firawzzsf94";
  };
  buildInputs = [ autopatchelf ];
  libs = stdenv.lib.makeLibraryPath [ glibc ];
  installPhase = ''
    ${if stdenv.system == "i686-linux" then "cd i686"
      else if stdenv.system == "x86_64-linux" then "cd x86_64"
      else throw "Unsupported system architecture: ${stdenv.system}"}
    mkdir -p $out/bin
    cp zipmix kzip $out/bin
    autopatchelf $out/bin
  '';
}

In the expression shown above, it suffices to simply move the executable to $out/bin and running autopatchelf.

I have also shown a more complicated example demonstrating how to patch the Quake 4 demo. I can significantly reduce the amount of code by substituting all the patchelf instructions by a single autopatchelf invocation:

{stdenv, fetchurl, autopatchelf, glibc, SDL, xlibs}:

stdenv.mkDerivation {
  name = "quake4-demo-1.0";
  src = fetchurl {
    url =;
    sha256 = "0wxw2iw84x92qxjbl2kp5rn52p6k8kr67p4qrimlkl9dna69xrk9";
  };
  buildInputs = [ autopatchelf ];
  libs = stdenv.lib.makeLibraryPath [ glibc SDL xlibs.libX11 xlibs.libXext ];

  buildCommand = ''
    # Extract files from the installer
    cp $src
    bash ./ --noexec --keep

    # Move extracted files into the Nix store
    mkdir -p $out/libexec
    mv quake4-linux-1.0-demo $out/libexec
    cd $out/libexec/quake4-linux-1.0-demo

    # Remove obsolete setup files
    rm -rf

    # Patch ELF binaries
    autopatchelf .

    # Remove that conflicts with Mesa3D's
    rm ./bin/Linux/x86/

    # Create wrappers for the executables and ensure that they are executable
    for i in q4ded quake4
    do
      mkdir -p $out/bin
      cat > $out/bin/$i <<EOF
#! ${} -e
cd $out/libexec/quake4-linux-1.0-demo
./bin/Linux/x86/$i.x86 "\$@"
EOF
      chmod +x $out/libexec/quake4-linux-1.0-demo/bin/Linux/x86/$i.x86
      chmod +x $out/bin/$i
    done
  '';
}

For the Android SDK, the code size reduction is even more substantial. The following Nix expression is used to patch the Android build-tools plugin package:

{deployAndroidPackage, lib, package, os, autopatchelf, makeWrapper, pkgs, pkgs_i686}:

deployAndroidPackage {
  inherit package os;
  buildInputs = [ autopatchelf makeWrapper ];

  libs_x86_64 = lib.optionalString (os == "linux")
    (lib.makeLibraryPath [ pkgs.glibc pkgs.zlib pkgs.ncurses5 ]);
  libs_i386 = lib.optionalString (os == "linux")
    (lib.makeLibraryPath [ pkgs_i686.glibc pkgs_i686.zlib pkgs_i686.ncurses5 ]);

  patchInstructions = ''
    ${lib.optionalString (os == "linux") ''
      export libs_i386=$packageBaseDir/lib:$libs_i386
      export libs_x86_64=$packageBaseDir/lib64:$libs_x86_64
      autopatchelf $packageBaseDir/lib64 libs_x86_64 --no-recurse
      autopatchelf $packageBaseDir libs_i386 --no-recurse
    ''}

    wrapProgram $PWD/mainDexClasses \
      --prefix PATH : ${pkgs.jdk8}/bin
  '';
  noAuditTmpdir = true;
}

The above expression specifies the library search paths per architecture, for x86 (i386) and x86_64, and automatically patches the binaries in the lib64/ subfolder and the base directory. The autopatchelf tool ensures that no library of an incompatible architecture gets linked to a binary.


The automated patching approach described in this blog post is not entirely a new idea -- in Nixpkgs, Aszlig Neusepoff created an autopatchelf hook that is integrated into the fixup phase of the stdenv.mkDerivation {} function. It shares a number of similar features -- it accepts a list of library packages (the runtimeDependencies environment variable) and automatically adds the provided runtime dependencies to the RPATH of all binaries in all the output folders.

There are also a number of differences -- my approach provides an autopatchelf command-line tool that can be invoked in any stage of a build process and provides full control over the patching process. It can also be used outside a Nix builder environment, which is useful for experimentation purposes. This increased level of flexibility is required for more complex prebuilt binary packages, such as the Android SDK and its plugins -- for some plugins, you cannot generalize the patching process and you typically require more control.

It also offers better support to cope with repositories providing binaries of multiple architectures -- while the Nixpkgs version has a check that prevents incompatible libraries from being linked, it does not allow you to have fine grained control over library paths to consider for each architecture.

Another difference between my implementation and the autopatchelf hook is that it works with colon-separated library paths instead of whitespace-delimited Nix store paths. The autopatchelf hook assumes that a dependency (by convention) stores all shared libraries in the lib/ subfolder.

My implementation works with arbitrary library paths and arbitrary environment variables that you can specify as parameters. To patch certain kinds of Android plugins, you must be able to refer to libraries that reside in unconventional locations in the same package. You can even use the LD_LIBRARY_PATH environment variable (typically used to dynamically load libraries from a set of locations) in conjunction with autopatchelf to make dynamic library references static.
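The core idea of resolving needed libraries against a colon-separated search path can be sketched as follows (illustrative Python; the function, directory layout, and library names are hypothetical, not autopatchelf's actual code):

```python
def resolve_rpath(needed_libs, search_path, available):
    """Collect the directories from a colon-separated search path that
    provide any of the libraries an ELF binary needs (a sketch only)."""
    # available: directory -> set of library file names, a stand-in for
    # the real filesystem lookup the tool would perform
    rpath = []
    for directory in search_path.split(":"):
        provided = available.get(directory, set())
        if any(lib in provided for lib in needed_libs) and directory not in rpath:
            rpath.append(directory)
    return ":".join(rpath)

# Hypothetical package layout: one conventional lib/ directory and one
# unconventional directory inside the same package
available = {
    "/nix/store/abc-glibc/lib": {""},
    "/pkg/unconventional": {""},
}
rpath = resolve_rpath(
    ["", ""],
    "/nix/store/abc-glibc/lib:/pkg/unconventional:/does/not/exist",
    available,
)
```

Directories that provide none of the needed libraries (or do not exist) simply do not end up in the resulting RPATH.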

There is also a use case that the autopatchelf command-line tool does not support -- the autopatchelf hook can also be used for source compiled projects whose executables may need to dynamically load dependencies via the dlopen() function call.

Dynamically loaded libraries are not known at link time (because they are not provided to the Nix-wrapped ld command), and as a result, they are not added to the RPATH of an executable. The Nixpkgs autopatchelf hook allows you to easily supplement the library paths of these dynamically loaded libraries after the build process completes.


The autopatchelf command-line tool can be found in the nix-patchtools repository. The goal of this repository is to provide a collection of tools that help make the patching process of complex prebuilt packages more convenient. In the future, I may identify more patterns and provide additional tooling to automate them.

autopatchelf is prominently used in my refactored version of the Android SDK to automatically patch all ELF binaries. I intend to integrate this new Android SDK implementation into Nixpkgs soon.

Follow up

UPDATE: In the meantime, I have been working with Aszlig, the author of the autopatchelf hook, to get the functionality I need for auto patching the Android SDK integrated in Nixpkgs.

The result is that the Nixpkgs version now implements a number of similar features and is also capable of patching the Android SDK. The build-tools expression shown earlier is now implemented as follows:

{deployAndroidPackage, lib, package, os, autoPatchelfHook, makeWrapper, pkgs, pkgs_i686}:

deployAndroidPackage {
  inherit package os;
  buildInputs = [ autoPatchelfHook makeWrapper ]
    ++ lib.optionals (os == "linux") [ pkgs.glibc pkgs.zlib pkgs.ncurses5 pkgs_i686.glibc pkgs_i686.zlib pkgs_i686.ncurses5 ];

  patchInstructions = ''
    ${lib.optionalString (os == "linux") ''
      addAutoPatchelfSearchPath $packageBaseDir/lib
      addAutoPatchelfSearchPath $packageBaseDir/lib64
      autoPatchelf --no-recurse $packageBaseDir/lib64
      autoPatchelf --no-recurse $packageBaseDir
    ''}

    wrapProgram $PWD/mainDexClasses \
      --prefix PATH : ${pkgs.jdk8}/bin
  '';
  noAuditTmpdir = true;
}

In the above expression we do the following:

  • By adding the autoPatchelfHook package as a buildInput, we can invoke the autoPatchelf function in the builder environment and use it in any phase of the build process. To prevent the fixup hook from doing any work (its generic patching process makes the wrong assumptions for the Android SDK), the deployAndroidPackage function propagates the dontAutoPatchelf = true; parameter to the generic builder so that this fixup step is skipped.
  • The autopatchelf hook uses the packages that are specified as buildInputs to find the libraries it needs, whereas my implementation uses libs, libs_i386 or libs_x86_64 (or any other environment variable that is specified as a command-line parameter). It is robust enough to skip incompatible libraries, e.g. x86 libraries for x86-64 executables.
  • My implementation works with colon-separated library paths, whereas the autoPatchelf hook works with Nix store paths and assumes that each dependency provides its libraries in a lib/ subfolder. As a result, I no longer use the lib.makeLibraryPath function.
  • In some cases, we also want the autopatchelf hook to inspect non-standardized directories, such as uncommon directories in the same package. To make this work, we can add additional paths to the search cache by invoking the addAutoPatchelfSearchPath function.

by Sander van der Burg ( at October 30, 2018 10:57 PM

October 26, 2018


The NixOps resources.machines option

NixOps provides the evaluated node configurations of a deployment network in the resources.machines attribute set. Using this information, one can easily implement machine configurations that are generated from options in an existing network. For example, a reverse proxy that automatically proxies to all other webservers in the network (one which could handle TLS termination for all of them) can be generated without having to manually define individual virtual hosts.

October 26, 2018 11:00 AM

October 01, 2018

Graham Christensen

Optimising Docker Layers for Better Caching with Nix

Nix users value its isolated, repeatable builds and simple sharing of development environments. Nix makes it easy to go back in time and rebuild software from years ago without issue.

At the same time, the value of the container ecosystem is huge. Tying in to the schedulers, orchestration, and monitoring is very valuable.

Nix has been able to generate Docker images for several years now; however, the typical approach to layering with Nix is to generate one fat image with all of the dependencies. This fat image offers no sharing and is slow to build, upload, and download.

In this post I talk about how I fixed this problem and use Nix to automatically create multi-layered Docker images, allowing a high degree of caching between images.

Docker uses layers

Docker’s use of layering is well known, and its benefits are undeniable: sharing a “base” system is a simple abstraction which allows extending a well known image with your own code.

A Docker image is a sequence of layers, where each member is a filesystem diff, adding and removing files from its parent member:

Efficient layering is hard because there are no rules

When there are no restrictions on what a command will do, the only way to fully capture its effects is to snapshot the full filesystem.

Most package managers will write files to shared global directories like /usr, /bin, and /etc.

This means that the only way to represent the changes between installing package A and installing package B is to take a full snapshot of the filesystem.

As a user you might manually create rules to improve the behavior of the cache: add your code towards the end of a Dockerfile, or install common libraries in a single RUN instruction, even if you don’t want them all.

These rules make sense: If a Dockerfile adds code and then installs packages, Docker can’t cache the installation because it can’t know that the package installation isn’t influenced by the code addition. Docker also can’t know that installing package A has nothing to do with package B and the changes are separately cachable.

With restrictions, we can make better optimisations

Nix does have rules.

The most important and relevant rule when considering distribution and Docker layers is:

A package build can’t write to arbitrary places on the disk.

A build can only write to a specific directory known as $out, like /nix/store/ibfx7ryqnqf01qfzj4v7qhzhkd2v9mm7-file-5.34. When you add a new package to your system, you know it didn’t modify /etc or /bin.

How does file find its dependencies? It doesn’t – they are hard-coded:

$ ldd /nix/store/ibfx7ryqnqf01qfzj4v7qhzhkd2v9mm7-file-5.34/bin/file

This provides great, cache-friendly properties:

  1. You know exactly what path changed when you added file.
  2. You know exactly what paths file depends on.
  3. Once a path is created, it will never change again.

Think graphs, not layers

If you consider the properties Nix provides, you can see it already constructs a graph internally to represent software and its dependencies: it natively has a better understanding of the software than Docker is interested in.

Specifically, Nix uses a Directed Acyclic Graph to store build output, where each node is a specific, unique, and immutable path in /nix/store:

Or to use a real example, Nix itself can render a graph of a package’s dependencies:

Flattening Graphs to Layers

In a naive world we can simply walk the tree and create a layer out of each path:

and this image is valid: if you pulled any of these layers, you would automatically get all the layers below it, resulting in a complete set of dependencies.

Things get a bit more complicated for a wider graph: how do you flatten something like Bash?

If we had to flatten this to an ordered sequence, obviously bash-interactive-4.4-p23 is at the top, but does readline-7.0p5 come next? Why not bash-4.4p23?

It turns out we don’t have to solve this problem exactly, because I lied about how Docker represents layers.

How Docker really represents an Image

Docker’s layers are content addressable and aren’t required to explicitly reference a parent layer. This means a layer for readline-7.0p5 doesn’t have to mention that it has any relationship to ncurses-6.1 or glibc-2.27 at all.

Instead each image has a manifest which defines the order:

  "Layers": [

If you have only built Docker images using a Dockerfile, then you would expect the way we flatten our graph to be critically important. If we sometimes picked readline-7.0p5 to come first and other times picked bash-4.4p23 then we may never make cache hits.

However since the Image defines the order, we don’t have to solve this impossible problem: we can order the layers in any way we want and the layer cache will always hit.
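This can be made concrete with a small Python sketch. The layer contents below are hypothetical; only the content-addressing idea matters:

```python
import hashlib

def digest(layer_bytes):
    # Docker layers are content-addressable: identical contents always
    # produce the same digest, independent of manifest position
    return "sha256:" + hashlib.sha256(layer_bytes).hexdigest()

readline = b"readline-7.0p5 filesystem contents"
bash = b"bash-4.4p23 filesystem contents"

# Two images that flatten the graph in different orders
image_a = [digest(readline), digest(bash)]
image_b = [digest(bash), digest(readline)]

# The layer store is keyed by digest, so every layer of one image is a
# cache hit for the other, regardless of the ordering in each manifest
assert set(image_a) == set(image_b)
```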

Docker doesn’t support an infinite number of layers

Docker has a limit of 125 layers, but big packages with lots of dependencies can easily have more than 125 store paths.

It is important that we still successfully build an image if we go over this limit, but what do we do with the extra layers?

For brevity, let's pretend Docker only lets you have four layers and we want to fit five. Of the Bash example, which two paths do we combine into one layer?

  • bash-interactive-4.4-p23
  • bash-4.4p23
  • readline-7.0p5
  • ncurses-6.1
  • glibc-2.27

Smushing Layers

I decided the best solution is to combine the layers which are least likely to be cache hits with other software. Making the most low-level, fundamental paths into separate layers means my next image will most likely share some of those layers too.

Ideally it would end up with at least glibc and ncurses in separate layers. Visually, it is hard to tell whether readline or bash-4.4p23 would be better served as an individual layer. One of them should be, certainly.

My actual solution

My prioritization algorithm is a simple graph-based popularity contest. The idea is to weight each node more heavily the deeper it is and the more references it has.

Starting with the dependency graph of Bash from before,

we first duplicate nodes in the graph so each node is only pointed to once:

we then replace each leaf node with a counter, starting at 1:

each node whose children are all counters is then combined with its children, and the children's counters are summed and incremented:

we then repeat the process:

we repeat this process until there is only one node:

and finally we sort the paths in each popularity bucket by name to ensure the list is consistently generated to get the paths ordered by cachability:

  • glibc-2.27: 10
  • ncurses-6.1: 4
  • bash-4.4-p23: 2
  • readline-7.0p5: 2
  • bash-interactive-4.4-p23: 1
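The popularity contest can be sketched in Python. The dependency edges below are hypothetical, chosen to be consistent with the counts above (the real graph comes from Nix); the weight of a node is 1 plus the weights of all its (deliberately duplicated) dependents:

```python
def popularity(dependents, node):
    # Weight = 1 + the weights of every dependent; nodes reachable via
    # multiple paths are counted once per path, mirroring the node
    # duplication step described in the article
    return 1 + sum(popularity(dependents, d) for d in dependents[node])

# Hypothetical edges (dependent -> dependencies) matching the article
deps = {
    "bash-interactive-4.4-p23": ["bash-4.4-p23", "readline-7.0p5",
                                 "ncurses-6.1", "glibc-2.27"],
    "readline-7.0p5": ["ncurses-6.1", "glibc-2.27"],
    "bash-4.4-p23": ["glibc-2.27"],
    "ncurses-6.1": ["glibc-2.27"],
    "glibc-2.27": [],
}

# Reverse the edges: dependency -> direct dependents
dependents = {n: [] for n in deps}
for n, ds in deps.items():
    for d in ds:
        dependents[d].append(n)

weights = {n: popularity(dependents, n) for n in deps}
# glibc-2.27: 10, ncurses-6.1: 4, bash-4.4-p23: 2,
# readline-7.0p5: 2, bash-interactive-4.4-p23: 1
```

Foundational paths accumulate the largest weights because every dependent (and every dependent of a dependent) contributes to them.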

This solution has properly put foundational paths which are most commonly referred to at the top, improving its chances of cache hit. The algorithm has also put the likely-to-change application right at the bottom in case the last layers need to be combined.

Let’s consider a much larger image. In this image, we set the maximum number of layers to 120, but the image has 200 store paths. Under this design the 119 most fundamental store paths will have their own layers, and we store the remaining 81 paths together in the 120th layer.
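The budgeting logic can be sketched in a few lines of Python (a hypothetical helper illustrating the split, not the actual buildLayeredImage implementation):

```python
def plan_layers(paths_by_popularity, max_layers):
    # The most popular paths each get their own layer; whatever exceeds
    # the budget is combined into the final layer
    solo = paths_by_popularity[: max_layers - 1]
    rest = paths_by_popularity[max_layers - 1 :]
    return [[p] for p in solo] + ([rest] if rest else [])

# 200 store paths with a budget of 120 layers: 119 single-path layers
# plus one layer holding the remaining 81 paths
layers = plan_layers(["path-%d" % i for i in range(200)], 120)
```

With fewer paths than the budget, every path simply keeps its own layer.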

With this new approach of automatically layering store paths I can now generate images with very efficient caching between different images.

Consider a practical example: a PHP application with a MySQL database.

First we build a MySQL image:

# mysql.nix
let
  pkgs = import (builtins.fetchTarball {
    url = "";
    sha256 = "05a3jjcqvcrylyy8gc79hlcp9ik9ljdbwf78hymi5b12zj2vyfh6";
  }) {};
in pkgs.dockerTools.buildLayeredImage {
  name = "mysql";
  tag = "latest";
  config.Cmd = [ "${pkgs.mysql}/bin/mysqld" ];
  maxLayers = 120;
}

$ nix-build ./mysql.nix
$ docker load < ./result

Then we build a PHP image:

# php.nix
let
  pkgs = import (builtins.fetchTarball {
    url = "";
    sha256 = "05a3jjcqvcrylyy8gc79hlcp9ik9ljdbwf78hymi5b12zj2vyfh6";
  }) {};
in pkgs.dockerTools.buildLayeredImage {
  name = "grahamc/php";
  tag = "latest";
  config.Cmd = [ "${pkgs.php}/bin/php" ];
  maxLayers = 120;
}

$ nix-build ./php.nix
$ docker load < ./result

and export the two image layers:

$ docker inspect mysql | jq -r '.[] | .RootFS.Layers | .[]' | sort > mysql
$ docker inspect php | jq -r '.[] | .RootFS.Layers | .[]' | sort > php

and look at this, the PHP and MySQL images share twenty layers:

$ comm -1 -2 php mysql

Where before you wouldn’t bother trying to have your application and database images share layers, with Nix the layer sharing is completely automatic.

The automatic splitting and prioritization has improved image push and fetch times by an order of magnitude. Having multiple layers allows Docker to request more than one at a time.

Thank you Target for having sponsored this work with Tweag in NixOS/nixpkgs#47411.

October 01, 2018 12:00 AM

September 21, 2018

Sander van der Burg

Creating Nix build function abstractions for pluggable SDKs

Two months ago, I decomposed the stdenv.mkDerivation {} function abstraction in the Nix packages collection that is basically the de-facto way in the Nix expression language to build software packages from source.

I identified some of its major concerns and developed my own implementation that is composed of layers in which each layer gradually adds a responsibility until it has most of the features that the upstream version also has.

In addition to providing a better separation of concerns, I also identified a pattern that I repeatedly use to create these abstraction layers:

{stdenv, foo, bar}:
{name, buildInputs ? [], ...}@args:

let
  extraArgs = removeAttrs args [ "name" "buildInputs" ];
in
stdenv.someBuildFunction ({
  name = "mypackage-" + name;
  buildInputs = [ foo bar ] ++ buildInputs;
} // extraArgs)

Build function abstractions that follow this pattern (as outlined in the code fragment shown above) have the following properties:

  • The outer function header (first line) specifies all common build-time dependencies required to build a project. For example, if we want to build a function abstraction for Python projects, then python is such a common build-time dependency.
  • The inner function header specifies all relevant build parameters and accepts an arbitrary number of arguments. Some arguments have a specific purpose for the kind of software project that we want to build (e.g. name and buildInputs) while other arguments can be passed verbatim to the build function abstraction that we use as a basis.
  • In the body, we invoke a function abstraction (quite frequently stdenv.mkDerivation {}) that builds the project. We use the build parameters that have a specific meaning to configure specialized build properties and pass all remaining, non-conflicting build parameters verbatim to the build function that we use as a basis.

    A subset of these arguments have no specific meaning and are simply exposed as environment variables in the builder environment.

    Because some parameters are already being used for a specific purpose and others may be incompatible with the build function that we invoke in the body, we compose a variable named extraArgs from which the conflicting arguments are removed.

Aside from having a function that is tailored towards the needs of building a specific software project (such as a Python project), using this pattern provides the following additional benefits:

  • A build procedure is extendable/tweakable -- we can adjust the build procedure by adding or changing the build phases, and tweak them by providing build hooks (that execute arbitrary command-line instructions before or after the execution of a phase). This is particularly useful to build additional abstractions around it for more specialized deployment procedures.
  • Because an arbitrary number of arguments can be propagated (that can be exposed as environment variables in the build environment), we have more configuration flexibility.

The original objective of using this pattern is to create an abstraction function for GNU Make/GNU Autotools projects. However, this pattern can also be useful to create custom abstractions for other kinds of software projects, such as Python, Perl, Node.js etc. projects, that also have (mostly) standardized build procedures.

After completing the blog post about layered build function abstractions, I have been improving the Nix packages/projects that I maintain. In the process, I also identified a new kind of packaging scenario that is not yet covered by the pattern shown above.

Deploying SDKs

In the Nix packages collection, most build-time dependencies are fully functional software packages. Notable exceptions are so-called SDKs, such as the Android SDK -- the Android SDK "package" is only a minimal set of utilities (such as a plugin manager, AVD manager and monitor).

In order to build Android projects from source code and manage Android app installations, you need to install a variety of plugins, such as build-tools, platform-tools, platform SDKs and emulators.

Installing all plugins is typically a much too costly operation -- it requires you to download many gigabytes of data. In most cases, you only want to install a very small subset of them.

I have developed a function abstraction that makes it possible to deploy the Android SDK with a desired set of plugins, such as:

with import <nixpkgs> {};

let
  androidComposition = androidenv.composeAndroidPackages {
    toolsVersion = "25.2.5";
    platformToolsVersion = "27.0.1";
    buildToolsVersions = [ "27.0.3" ];
    includeEmulator = true;
    emulatorVersion = "27.2.0";
  };
in
androidComposition.androidsdk

When building the above expression (default.nix) with the following command-line instruction:

$ nix-build

We get an Android SDK installation with tools version 25.2.5, platform-tools version 27.0.1, one instance of the build-tools (version 27.0.3), and an emulator of version 27.2.0. The Nix package manager downloads the required plugins automatically.

Writing build function abstractions for SDKs

If you want to create function abstractions for software projects that depend on an SDK, you not only have to execute a build procedure, but you must also compose the SDK in such a way that all plugins are installed that a project requires. If any of the mandatory plugins are missing, the build will most likely fail.

As a result, the function interface must also provide parameters that allow you to configure the plugins in addition to the build parameters.

A very straightforward approach is to write a function whose interface contains both the plugin and build parameters, and propagates each of the required parameters to the SDK composition function. However, manually writing this mapping has a number of drawbacks -- it duplicates functionality of the SDK composition function, it is tedious to write, and it is very difficult to keep consistent when the SDK's functionality changes.

As a solution, I have extended the previously shown pattern with support for SDK deployments:

{composeMySDK, stdenv}:
{foo, bar, ...}@args:

let
  mySDKFormalArgs = builtins.functionArgs composeMySDK;
  mySDKArgs = builtins.intersectAttrs mySDKFormalArgs args;
  mySDK = composeMySDK mySDKArgs;
  extraArgs = removeAttrs args ([ "foo" "bar" ]
    ++ builtins.attrNames mySDKFormalArgs);
in
stdenv.mkDerivation ({
  buildInputs = [ mySDK ];
  buildPhase = ''
  '';
} // extraArgs)

In the above code fragment, we have added the following steps:

  • First, we dynamically extract the formal arguments of the function that composes the SDK (mySDKFormalArgs).
  • Then, we compute the intersection of the formal arguments of the composition function and the actual arguments from the build function arguments set (args). The resulting attribute set (mySDKArgs) contains the actual arguments we need to propagate to the SDK composition function.
  • The next step is to deploy the SDK with all its plugins by propagating the SDK arguments set as function parameters to the SDK composition function (mySDK).
  • Finally, we remove the arguments that we have passed to the SDK composition function from the extra arguments set (extraArgs), because these parameters have no specific meaning for the build procedure.
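The same argument-splitting trick can be illustrated in Python (an analogy using inspect, not the actual Nix implementation; all names below are hypothetical):

```python
import inspect

def compose_sdk(toolsVersion="25.2.5", platformVersions=()):
    # Stand-in for the SDK composition function
    return f"sdk-{toolsVersion}-{','.join(map(str, platformVersions))}"

def build_app(**args):
    # Formal arguments of the composition function (builtins.functionArgs)
    formal = set(inspect.signature(compose_sdk).parameters)
    # Intersection with the actual arguments (builtins.intersectAttrs)
    sdk_args = {k: v for k, v in args.items() if k in formal}
    sdk = compose_sdk(**sdk_args)
    # The remaining arguments go to the generic builder (removeAttrs)
    build_args = {k: v for k, v in args.items() if k not in formal}
    return {"sdk": sdk, **build_args}

result = build_app(name="MyFirstApp", toolsVersion="25.2.5",
                   platformVersions=[16])
```

Because the formal arguments are discovered dynamically, adding a parameter to compose_sdk requires no change to build_app, which is exactly the property the Nix pattern provides.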

With this pattern, the build abstraction function evolves automatically with the SDK composition function without requiring me to make any additional changes.

To build an Android project from source code, I can write an expression such as:


with import <nixpkgs> {};

androidenv.buildApp {
  # Build parameters
  name = "MyFirstApp";
  src = ../../src/myfirstapp;
  antFlags = "-Dtarget=android-16";

  # SDK composition parameters
  platformVersions = [ 16 ];
  toolsVersion = "25.2.5";
  platformToolsVersion = "27.0.1";
  buildToolsVersions = [ "27.0.3" ];
}

The expression shown above has the following properties:

  • The above function invocation propagates three build parameters: name referring to the name of the Nix package, src referring to a filesystem location that contains the source code of an Android project, and antFlags that contains command-line arguments that are passed to the Apache Ant build tool.
  • It propagates four SDK composition parameters: platformVersions referring to the platform SDKs that must be installed, toolsVersion to the version of the tools package, platformToolsVersion to the version of the platform-tools package, and buildToolsVersions to the build-tools packages.

By evaluating the above function invocation, the Android SDK with the plugins will be composed, and the corresponding SDK will be passed as a build input to the builder environment.

In the build environment, Apache Ant gets invoked to build the project from source code. The androidenv.buildApp implementation dynamically propagates the SDK composition parameters to the androidenv.composeAndroidPackages function.


The extended build function abstraction pattern described in this blog post is among the structural improvements I have been implementing in the mobile app building infrastructure in Nixpkgs. Currently, it is used in standalone test versions of the Nix android build environment, iOS build environment and Titanium build environment.

The build function abstraction for the Titanium SDK (a JavaScript-based cross-platform development framework that can produce Android, iOS, and several other kinds of applications from the same codebase) automatically composes both Xcode wrappers and Android SDKs to make the builds work.

The test repositories can be found on my GitHub page and the changes live in the nextgen branches. At some point, they will be reintegrated into the upstream Nixpkgs repository.

Besides mobile app development SDKs, this pattern is generic enough to be applied to other kinds of projects as well.

by Sander van der Burg ( at September 21, 2018 10:30 PM

September 11, 2018


Building Customised NixOS Images

To set up a NixOS system, you usually boot into a live NixOS system and install it onto a local disk as outlined in the manual. You can then modify the system configuration to tailor it to your needs. The build system Hydra builds live images like ISO images, container tarballs or AMIs based on their definition in nixpkgs. These images are made available for download on the official website.

September 11, 2018 06:00 PM

September 10, 2018

Munich NixOS Meetup

NixOS Munich Community Meetup


There are no talks or specific topics planned for now.

If you are interested in giving a talk, feel free to ask us, otherwise we will have a casual discussion session, help each other on issues and hack away at bugs we find or features we'd like!

Anyone is welcome, no matter if you haven't started using Nix(OS), yet or are using it in production!

Drinks and food will be provided.

München - Germany

Thursday, September 20 at 7:00 PM


September 10, 2018 02:53 PM

September 04, 2018

Domen Kozar

Recent Cachix downtime

Cachix - Nix binary cache as a service was down:

  • On Aug 22nd from 16:55 until 18:55 UTC (120 minutes)
  • On Aug 23rd from 20:01 until 20:09 UTC (8 minutes)

On the 22nd there was no action from my side; the service recovered by itself. I did have monitoring configured and I received email alerts, but I did not notice them.

I have spent most of the 23rd gathering data and evidence on what went wrong. Just before monitoring stopped receiving data at 16:58 UTC, white-box system monitoring revealed:

  • Outgoing bandwidth skyrocketed to 23MB/s
  • Resident memory went through the roof to ~90%

On the 23rd I immediately saw that the service was down and rebooted the machine.

I have spent a significant amount of time trying to determine whether a specific request caused this; it seems likely that it was simply an overload, although I have not proven this theory.

Countermeasures taken

a) The server side is implemented in GHC Haskell, so I have enabled -O2. Although the GHC wiki page on performance says it is indistinguishable from -O1, in the last week I have seen approximately a 10% reduction in resident memory and, most importantly, fewer memory spikes. Again, no hard evidence; time will tell.

b) Most importantly, production now runs with the GHCRTS='-M2G' flag, limiting the overall heap to 2G of memory, so we do not depend on the Linux OOM killer to handle out-of-memory situations. It is not entirely clear to me why the machine was unresponsive for two hours, since the OOM killer should have kicked in, but during that period not a single monitoring datapoint was sent.

c) I have configured EKG to send GC stats to Datadog, so if it happens again, that should provide better insight into what is going on with memory consumption.

Countermeasures to be taken

1) Use a service like Pagerduty to be alerted immediately on the phone

2) Upgrade Datadog agent to version 6, which allows more precise per process monitoring

So far I am quite happy with how Haskell works in production. I have taken the Well-Typed training on GHC performance, and if this turns out to be a space leak, I am confident that I will find it.

The only thing that saddens me, coming from Python, is that GHC has poor profiling options for long-running programs. Compiling with profiling support significantly slows down the program. There is unmerged work to make the GHC eventlog useful for such cases, but the state of this work is unclear.

Looking forward

So there it is: the first operational issue with Cachix. Despite this issue, I am happy to have made choices that allow me to respond quickly to the needs of the Nix community, yet still allow me to further improve and stabilize the code with confidence as the product matures.

Speaking of maturing the product, I will share another announcement soon!

by Domen Kožar at September 04, 2018 09:00 AM

August 02, 2018

Sander van der Burg

Automating Mendix application deployments with Nix

As explained in a previous blog post, Mendix is a low-code development platform -- the general idea behind low-code application development is that instead of writing (textual) code, you model an application, such as the data structures and the corresponding views. One of the benefits of Mendix is that it makes you more productive as a developer, for certain classes of applications.

Although low-code development is conceptually different from more "traditional" development approaches (that require you to write code), there is one aspect the Mendix application lifecycle has in common with them: eventually, you will have to deploy your app to an environment that makes your application available to end users.

For users of the Mendix cloud portal, deploying an application is quite convenient: with just a few simple mouse clicks your application gets deployed to a test, acceptance or production environment.

However, managing on-premise application deployments, or actually managing applications in the cloud yourself, is anything but a simple job. There are all kinds of challenges you need to cope with, such as:

  • Making sure that all dependencies of an app are present, such as a database for storage.
  • Executing all relevant deployment activities to make an app available for use.
  • Upgrading is risky and difficult -- it may break the application and introduce downtime.

There are a variety of deployment solutions available to manage deployment processes. However, no solution is perfect -- every tool has its strengths and weaknesses and no tool is a perfect fit. As a result, we still have to develop custom solutions that automate missing parts in a deployment process and we have many kinds of additional complexities that we need to cope with.

Recently, I investigated whether it would be possible to deploy Mendix applications with my favourite class of deployment utilities, those from the Nix project, and I gave an introduction to the Nix project to the R&D department at Mendix.

Using tools from the Nix project

For readers not familiar with Nix: the tools in the Nix project solve many configuration management problems in their own unique way. The basis of all the tools is the Nix package manager that borrows concepts from purely functional programming languages, such as Haskell.

To summarize Nix in just a few sentences: deploying a package with Nix is the same thing as invoking a pure function that constructs a package from source code and its build-time dependencies (that are provided as function parameters). To accomplish purity, Nix composes so-called "pure build environments", in which various restrictions are imposed on the build script to ensure that the outcome will be (almost) identical if a package is built with the same build inputs.

The purely functional deployment model has all kinds of benefits -- for example, it provides very strong dependency completeness and reproducibility guarantees, and all kinds of optimizations (e.g. a package that has been deployed before does not have to be built again, packages that have no dependency on each other can be built in parallel, builds can be downloaded from a remote location or delegated to another machine).

Another important property that all tools in the Nix project have in common is declarative deployment -- instead of describing the deployment activities that need to be carried out, you describe the structure of your system that want to deploy, e.g. the packages, a system configuration, or a network of machines/services. The deployment tools infer the activities that need to be carried out to get the system deployed.

Automating Mendix application deployments with Nix

As an experiment, I investigated how Mendix application deployments could fit in Nix's vision of declarative deployment -- the objective is to take a Mendix project created by the modeler (essentially the "source code" form of an application), write a declarative deployment specification for it, and use the tools from the Nix project to get a machine running with all required components to make the app run.

To bring a Mendix application in a running state, we require the following ingredients:

  • We must obtain the Mendix runtime that interprets the Mendix models. Packaging the Mendix runtime in Nix is fairly straightforward -- simply unzipping the distribution, moving the package contents into the Nix store, and adding a wrapper script that launches the runtime suffices.
  • We must produce a Mendix Deployment Archive (MDA file): a Zip container with all artifacts that the runtime requires to run a Mendix app. An MDA file can be produced from a Mendix project by invoking the MxBuild tool. Since MxBuild is required for this, I had to package it as well. Packaging mxbuild is a bit trickier, as it requires mono and Node.js.

Building an MDA file with Nix

The most interesting part is writing a new function abstraction for building MDA files with Nix -- in a Nix builder environment, (almost) any build tool can be used albeit with restrictions that are imposed on them to make builds more pure.

We can also create a function abstraction that invokes mxbuild in a Nix builder environment:

{stdenv, mxbuild, jdk, nodejs}:
{name, mendixVersion, looseVersionCheck ? false, buildInputs ? [], ...}@args:

let
  mxbuildPkg = mxbuild."${mendixVersion}";
  extraArgs = removeAttrs args [ "buildInputs" ];
in
stdenv.mkDerivation ({
  buildInputs = [ mxbuildPkg nodejs ] ++ buildInputs;
  installPhase = ''
    mkdir -p $out
    mxbuild --target=package \
      --output=$out/${name}.mda \
      --java-home ${jdk} \
      --java-exe-path ${jdk}/bin/java \
      ${stdenv.lib.optionalString looseVersionCheck "--loose-version-check"} \
      "$(echo *.mpr)"
    mkdir -p $out/nix-support
    echo "file binary-dist \"$(echo $out/*.mda)\"" > $out/nix-support/hydra-build-products
  '';
} // extraArgs)

The above expression is a function that composes another function that takes common Mendix parameters -- the application name, the version of MxBuild that we want, and whether we want to use a strict or loose version check (it is possible to compile a project developed for a different version of Mendix, if desired).

In the body, we create an output directory in the Nix store, we invoke mxbuild to compile the MDA app and put it in the Nix store, and we generate a configuration file that exposes the MDA file as a build product when Hydra, the Nix-based continuous integration service, is used.

With the build function shown in the code fragment above, we can write a Nix expression for a Mendix project:

{ pkgs ? import <nixpkgs> { inherit system; }
, system ? builtins.currentSystem
}:

let
  mendixPkgs = import ./nixpkgs-mendix/top-level/all-packages.nix {
    inherit pkgs system;
  };
in
mendixPkgs.packageMendixApp {
  name = "conferenceschedule";
  src = /home/sander/SharedWindowsFolder/ConferenceSchedule-main;
  mendixVersion = "7.13.1";
}
The above expression (conferenceschedule.nix) can be used to build an MDA file for a project named: conferenceschedule, residing in the /home/sander/SharedWindowsFolder/ConferenceSchedule-main directory using Mendix version 7.13.1.

By running the following command-line instruction, we can use Nix to build our MDA:

$ nix-build conferenceschedule.nix
$ ls /nix/store/nbaa7fnzi0xw9nkf27mixyr9awnbj16i-conferenceschedule
conferenceschedule.mda nix-support

In addition to building an MDA, Nix will also download the dependencies: the Mendix runtime and MxBuild, if they have not been installed yet.

Running a Mendix application

Producing an MDA file is an important ingredient in the deployment lifecycle of a Mendix application, but it is not entirely what we want -- what we really want is a running system. To get a running system, additional steps are required beyond producing an MDA:

  • We must unzip the MDA file into a directory with write permissions.
  • We must create writable state sub directories, e.g. data/tmp, data/files.
  • Before starting the runtime, we must configure the admin interface, which is used to send instructions to the runtime, by setting the following environment variables:

    $ export M2EE_ADMIN_PORT=9000
    $ export M2EE_ADMIN_PASS=secret
  • Finally, we must communicate over the admin interface to configure, initialize the database and start the app:

    curlCmd="curl -X POST http://localhost:$M2EE_ADMIN_PORT \
    -H 'Content-Type: application/json' \
    -H 'X-M2EE-Authentication: $(echo -n "$M2EE_ADMIN_PASS" | base64)' \
    -H 'Connection: close'"
    $curlCmd -d '{ "action": "update_appcontainer_configuration", "params": { "runtime_port": 8080 } }'
    $curlCmd -d '{ "action": "update_configuration", "params": { "DatabaseType": "HSQLDB", "DatabaseName": "myappdb", "DTAPMode": "D" } }'
    $curlCmd -d '{ "action": "execute_ddl_commands" }'
    $curlCmd -d '{ "action": "start" }'

These deployment steps cannot be executed by Nix, because Nix's purpose is to manage packages, but not the state of a running process. To automate these remaining parts, we generate scripts that execute the above listed steps.
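
The X-M2EE-Authentication header in the curl commands above is simply the Base64 encoding of the admin password. As a quick sanity check, we can reproduce that encoding step in isolation with plain shell:

```shell
# The admin API authenticates with a Base64-encoded password header.
# Compute the header value for the example password used above.
M2EE_ADMIN_PASS=secret
auth_header=$(printf '%s' "$M2EE_ADMIN_PASS" | base64)
echo "$auth_header"   # prints: c2VjcmV0
```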

NixOS integration

NixOS is a Linux distribution that extends Nix's deployment facilities to complete systems. Aside from using the Nix package manager to deploy all packages including the Linux kernel, NixOS' main objective is to deploy an entire system from a single declarative specification capturing the structure of an entire system.

NixOS uses systemd for managing system services. The systemd configuration files are generated by the Nix package manager. We can integrate our Mendix activation scripts with a generated systemd job to fully automate the deployment of a Mendix application.

{pkgs, ...}:

{
  systemd.services.mendixappcontainer =
    let
      runScripts = ...;
      appContainerConfigJSON = ...;
      configJSON = ...;
    in {
      enable = true;
      description = "My Mendix App";
      wantedBy = [ "multi-user.target" ];
      environment = {
        M2EE_ADMIN_PASS = "secret";
        M2EE_ADMIN_PORT = "9000";
        MENDIX_STATE_DIR = "/home/mendix";
      };
      serviceConfig = {
        ExecStartPre = "${runScripts}/bin/undeploy-app";
        ExecStart = "${runScripts}/bin/start-appcontainer";
        ExecStartPost = "${runScripts}/bin/configure-appcontainer ${appContainerConfigJSON} ${configJSON}";
      };
    };
}

The partial NixOS configuration shown above defines a systemd job that runs three scripts (as shown in the last three lines):

  • The undeploy-app script removes all non-state artefacts from the working directory.
  • The start-appcontainer script starts the Mendix runtime.
  • The configure-appcontainer script configures the runtime, such as the embedded Jetty server and the database, and starts the application.
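
To make the separation between state and non-state artefacts concrete, here is a hypothetical sketch of what an undeploy step amounts to, simulated in a throwaway directory (the directory layout is illustrative, not the actual generated script):

```shell
# Simulate a deployed app: unpacked MDA artefacts live next to
# writable state directories inside the state dir.
MENDIX_STATE_DIR=$(mktemp -d)
mkdir -p "$MENDIX_STATE_DIR/app/model" "$MENDIX_STATE_DIR/data/files"
touch "$MENDIX_STATE_DIR/data/files/upload.bin"

# Undeploy: remove only the unpacked application artefacts,
# leaving the state (data/) intact for the next deployment.
rm -rf "$MENDIX_STATE_DIR/app"
remaining=$(ls "$MENDIX_STATE_DIR")
echo "$remaining"   # prints: data
```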

Writing a systemd job (as shown above) is a bit cumbersome. To make it more convenient to use, I captured all Mendix runtime functionality in a NixOS module, with an interface exposing all relevant configuration properties.

By importing the Mendix NixOS module into a NixOS configuration, we can conveniently define a machine configuration that runs our Mendix application:

{pkgs, ...}:

{
  require = [ ../nixpkgs-mendix/nixos/modules/mendixappcontainer.nix ];

  services = {
    openssh.enable = true;

    mendixAppContainer = {
      enable = true;
      adminPassword = "secret";
      databaseType = "HSQLDB";
      databaseName = "myappdb";
      DTAPMode = "D";
      app = import ../../conferenceschedule.nix {
        inherit pkgs;
        inherit (pkgs.stdenv) system;
      };
    };
  };

  networking.firewall.allowedTCPPorts = [ 8080 ];
}

In the above configuration, the mendixAppContainer captures all the properties of the Mendix application that we want to run:

  • The password for communicating over the admin interface.
  • The type of database we want to use (in this particular case an in memory HSQLDB instance) and the name of the database.
  • Whether we want to use the application in development (D), test (T), acceptance (A) or production (P) mode.
  • A reference to the MDA that we want to deploy (deployed by a Nix expression that invokes the Mendix build function abstraction shown earlier).

By writing a NixOS configuration file, storing it in /etc/nixos/configuration.nix and running the following command-line instruction:

$ nixos-rebuild switch

a complete system that runs our Mendix application gets deployed by the Nix package manager.

For production use, HSQLDB and directly exposing the embedded Jetty HTTP server are not recommended -- instead, a more sophisticated database, such as PostgreSQL, should be used. For serving HTTP requests, it is recommended to use nginx as a reverse proxy and use it to serve static data and provide caching.

It is also possible to extend the above configuration with a PostgreSQL and nginx system service. The NixOS module system can be used to retrieve the properties from the Mendix app container to make the configuration process more convenient.


Conclusion

In this blog post, I have investigated how Mendix applications can be deployed by using tools from the Nix project. This resulted in the following deployment functionality:

  • A Nix function that can be used to compile an MDA file from a Mendix project.
  • Generated scripts that configure and launch the runtime and the application.
  • A NixOS module that can be used to deploy a running Mendix app as part of a NixOS machine configuration.

Future work

Currently, only single machine deployments are possible. It may also be desirable to connect a Mendix application to a database that is stored on a remote machine. Furthermore, we may also want to deploy multiple Mendix applications to multiple machines in a network. With Disnix, it is possible to automate such scenarios.


Availability

The Nix function abstractions and NixOS module can be obtained from the Mendix GitHub page and used under the terms and conditions of the Apache Software License version 2.0.


The work described in this blog post is the result of the so-called "crafting days", in which Mendix supports its employees to experiment completely freely two full days a month.

Furthermore, I have given a presentation about the functionality described in this blog post and an introduction to the Nix project:

and I have also written an introduction-oriented article about it on the Mendix blog.

by Sander van der Burg at August 02, 2018 09:51 PM

Graham Christensen

an EPYC NixOS build farm

EPYC vs m1.xlarge.x86

Nix is a powerful package manager for Linux and other Unix systems that makes package management reliable and reproducible. It provides atomic upgrades and rollbacks, side-by-side installation of multiple versions of a package, multi-user package management and easy setup of build environments.

The Nix community has collected and curated build instructions (expressions) for many thousands of packages in the Nix package collection, Nixpkgs. The collection is a large GitHub repository, which receives over a thousand pull requests each month. Some of these changes can cause all of the packages to be rebuilt.

To test changes to Nixpkgs and release updates for Nix and NixOS, we necessarily created our own build infrastructure. This allows us to give better quality guarantees to our users.

The NixOS infrastructure team runs several types of servers: VMs on AWS, bare metal, macOS systems, among others. We build thousands of packages a day, sometimes reaching many tens of thousands per day.

Some of our builds depend on unique features like KVM which are only available by using bare metal servers, and all of them benefit from numerous, powerful cores.

For over a year now, Packet (where you can natively deploy NixOS, by the way!) has been generously providing bare metal hardware build resources for the NixOS build farm, and together we were curious how the new EPYC from AMD would compare to the hardware we were already using.

For this benchmark we are comparing Packet’s m1.xlarge.x86 against Packet’s first EPYC machine, c2.medium.x86. Hydra already runs a m1.xlarge.x86 build machine, so the comparison will be helpful in deciding if we should replace it with EPYC hardware.

AMD EPYC has the chance to reduce our hardware footprint, reduce our need for our AWS scale-out, and improve our turnaround time for time-sensitive security patches.

System Comparison:

  m1.xlarge.x86 c2.medium.x86 (EPYC)
NixOS Version 18.03.132610.49a6964a425 18.03.132610.49a6964a425
Cost per Hour $1.70/hr $1.00/hr
CPU 24 Physical Cores @ 2.2 GHz 24 Physical Cores @ 2.2 GHz
  2 x Xeon® E5-2650 v4 1 x AMD EPYC™ 7401P
RAM 256 GB of ECC RAM 64 GB of ECC RAM

Benchmark Methods

All of these tests were run on a fresh server running NixOS 18.03 and building Nixpkgs revision 49a6964a425.

For each test, I ran each build five times on each machine with --check which forces a local rebuild:

checkM() {
  nix-build . -A "$1"
  for i in $(seq 1 5); do
    rm result
    /run/current-system/sw/bin/time -ao "./time-$1" \
      nix-build . -A "$1" --check
  done
}

Benchmark Results

Kernel Builds

NixOS builds a generously featureful kernel by default, and the build can take some time. However, the compilation is easy to spread across multiple cores. In what we will further see is a theme, the EPYC beat the Intel CPU by about five minutes, or about a 35% speed-up.

nix-build '<nixpkgs>' -A linuxPackages.kernel

Trial m1.xlarge.x86 (seconds) EPYC (seconds) Speed-up
1 823.77 535.30 35.02%
2 821.27 536.94 34.62%
3 824.92 538.45 34.73%
4 827.74 537.79 35.03%
5 827.37 539.98 34.74%
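
The speed-up column in these tables is computed as (baseline - EPYC) / baseline; for example, for trial 1 of the kernel build:

```shell
# Speed-up formula used in the tables: (baseline - epyc) / baseline * 100.
# Trial 1 of the kernel build: 823.77 s (m1.xlarge.x86) vs 535.30 s (EPYC).
speedup=$(awk 'BEGIN { printf "%.2f%%", (823.77 - 535.30) / 823.77 * 100 }')
echo "$speedup"   # prints: 35.02%
```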

NixOS Tests

The NixOS release process is novel in that package updates happen automatically as an entire cohesive set. We of course test that specific software compiles properly, but we are also able to perform fully automatic integration tests. We test that the operating system can boot, desktop environments work, and even run server tests like validating that our MySQL server replication is still working. These tests happen automatically on every release and are a unique benefit of the Nix build system being applied to an operating system.

These tests use QEMU and KVM, and spawn one or more virtual machines running NixOS.

Plasma5 and KDE

The Plasma5 test looks like the following:

  1. First launches a NixOS VM configured to run the Plasma5 desktop manager.

  2. Uses Optical Character Recognition to wait until it sees the system is ready for Alice to log in and then types in her password:
    $machine->waitForText(qr/Alice Foobar/);
  3. Waits for the Desktop to be showing, which reliably indicates she has finished logging in:
    $machine->waitForWindow("^Desktop ");
  4. Launches Dolphin, Konsole, and Settings and waits for each window to appear before continuing:
    $machine->execute("su - alice -c 'dolphin &'");
    $machine->waitForWindow(" Dolphin");
    $machine->execute("su - alice -c 'konsole &'");
    $machine->execute("su - alice -c 'systemsettings5 &'");
  5. If all of these work correctly, the VM shuts down and the tests pass.

Better than the kernel builds, here we're pushing nearly a 40% improvement.

nix-build '<nixpkgs/nixos/tests/plasma5.nix>'

Trial m1.xlarge.x86 (seconds) EPYC (seconds) Speed-up
1 185.73 115.23 37.96%
2 189.53 116.11 38.74%
3 191.88 115.18 39.97%
4 189.38 116.05 38.72%
5 188.98 115.54 38.86%

MySQL Replication

The MySQL replication test launches a MySQL master and two slaves, and runs some basic replication tests. NixOS allows you to define replication as part of the regular configuration management, so I will start by showing the machine configuration of a slave:

      services.mysql.replication.role = "slave";
      services.mysql.replication.serverId = 2;
      services.mysql.replication.masterHost = "master";
      services.mysql.replication.masterUser = "replicate";
      services.mysql.replication.masterPassword = "secret";
  1. This test starts by starting the $master and waiting for MySQL to be healthy:
  2. Continues to start $slave1 and $slave2 and wait for them to be up:
  3. It then validates some of the scratch data loaded in to the $master has replicated properly to $slave2:
         echo 'use testdb; select * from tests' \
             | mysql -u root -N | grep 4
  4. Then shuts down $slave2:
    $slave2->succeed("systemctl stop mysql");
  5. Writes some data to the $master:
         echo 'insert into testdb.tests values (123, 456);' \
             | mysql -u root -N
  6. Starts $slave2, and verifies the queries properly replicated from the $master to the slave:
    $slave2->succeed("systemctl start mysql");
         echo 'select * from testdb.tests where Id = 123;' \
             | mysql -u root -N | grep 456

Due to the multiple-VM nature of the test and the increased coordination between the nodes, we saw a roughly 30% speed-up.

nix-build '<nixpkgs/nixos/tests/mysql-replication.nix>'

Trial m1.xlarge.x86 (seconds) EPYC (seconds) Speed-up
1 42.32 29.94 29.25%
2 43.46 29.48 32.17%
3 42.43 29.83 29.70%
4 43.27 29.66 31.45%
5 42.07 29.37 30.19%


BitTorrent

The BitTorrent test follows the same pattern of starting and stopping a NixOS VM, but this test takes it a step further and uses four VMs which talk to each other. I could do a whole post just on this test, but in short, it involves:

  • a machine serving as the tracker, named $tracker.
  • a client machine, $client1
  • another client machine, $client2
  • a router which facilitates some of the incredible things this test is actually doing.

I’m going to gloss over the details here, but:

  1. The $tracker starts seeding a file:
    $tracker->succeed("opentracker -p 6969 &");
    my $pid = $tracker->succeed("transmission-cli \
         /tmp/test.torrent -M -w /tmp/data");
  2. $client1 fetches the file from the tracker:
    $client1->succeed("transmission-cli \
         http://tracker/test.torrent -w /tmp &");
  3. Kills the seeding process on tracker so now only $client1 is able to serve the file:
    $tracker->succeed("kill -9 $pid");
  4. $client2 fetches the file from $client1:
    $client2->succeed("transmission-cli \
         http://tracker/test.torrent -M -w /tmp &");

If both $client1 and $client2 receive the file intact, the test passes.

This test sees a much lower performance improvement, largely due to the networked coordination across four VMs.

nix-build '<nixpkgs/nixos/tests/bittorrent.nix>'

Trial m1.xlarge.x86 (seconds) EPYC (seconds) Speed-up
1 54.22 45.37 16.32%
2 54.40 45.51 16.34%
3 54.57 45.34 16.91%
4 55.31 45.32 18.06%
5 56.07 45.45 18.94%

The remarkable part I left out is that $client1 uses UPnP to open a port on the router's firewall, which $client2 then uses to fetch the file from $client1.

Standard Environment

Our standard build environment, stdenv, is at the deepest part of the build graph, and almost nothing can be built until after it is completed. stdenv is like build-essential on Ubuntu.

This is an important part of the performance story for us: Nix builds are represented as a tree, and Nix schedules as many parallel builds as possible, as long as each build's dependencies have finished building. Single-core performance is the primary factor impacting how long these deep, serial builds take. Shaving off even a few minutes means our entire build cluster gets to share the work sooner.

For this reason, the stdenv test is the one exception to the methodology. I wanted to test a full build from bootstrap to a working standard build environment. To force this, I changed the very root build causing everything “beneath” it to require a rebuild by applying the following patch:

diff --git a/pkgs/stdenv/linux/default.nix b/pkgs/stdenv/linux/default.nix
index 63b4c8ecc24..1cd27f216f9 100644
--- a/pkgs/stdenv/linux/default.nix
+++ b/pkgs/stdenv/linux/default.nix
@@ -37,6 +37,7 @@ let

   commonPreHook =
       ${if system == "x86_64-linux" then "NIX_LIB64_IN_SELF_RPATH=1" else ""}

The impact on build time here is stunning and makes an enormous difference: almost a full 20 minutes shaved off the bootstrapping time.

nix-build '<nixpkgs>' -A stdenv

Trial m1.xlarge.x86 (seconds) EPYC (seconds) Speed-up
1 2,984.24 1,803.40 39.57%
2 2,976.10 1,808.97 39.22%
3 2,990.66 1,808.21 39.54%
4 2,999.36 1,808.30 39.71%
5 2,988.46 1,818.84 39.14%


This EPYC machine has made a remarkable improvement in our build times and is helping the NixOS community push timely security updates and software updates to users and businesses alike. We look forward to expanding our footprint to keep up with the incredible growth of the Nix project.

Thank you to Packet for providing this hardware free of charge for this test through their EPYC Challenge.

and thank you Gustav, Daiderd, Andi, and zimbatm for their help with this article

August 02, 2018 12:00 AM

July 26, 2018

Sander van der Burg

Layered build function abstractions for building Nix packages

I have shown quite a few Nix expression examples on my blog. When it is desired to write a Nix expression for a package, it is a common habit to invoke the stdenv.mkDerivation {} function, or functions that are abstractions built around it.

For example, if we want to build a package, such as the trivial GNU Hello package, we can write the following expression:

with import <nixpkgs> {};

stdenv.mkDerivation {
  name = "hello-2.10";

  src = fetchurl {
    url = mirror://gnu/hello/hello-2.10.tar.gz;
    sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
  };

  meta = {
    description = "A program that produces a familiar, friendly greeting";
    longDescription = ''
      GNU Hello is a program that prints "Hello, world!" when you run it.
      It is fully customizable.
    '';
    homepage = http://www.gnu.org/software/hello/;
    license = "GPLv3+";
  };
}

and build it with the Nix package manager as follows:

$ nix-build

The above code fragment probably does not look too complicated, and is quite easy to repeat for building other kinds of GNU Autotools/GNU Make-based packages. However, stdenv.mkDerivation {} is a big/complex function abstraction that has many responsibilities.

Its most important responsibility is to compose a so-called pure build environment, in which various restrictions are imposed on the build scripts to provide better guarantees that builds are pure (meaning: that they always produce the same (nearly) bit-identical result if the dependencies are the same), such as:

  • Build scripts can only write to designated output directories and temp directories. They are restricted from writing to any other file system location.
  • All environment variables are cleared and some of them are set to default or dummy values, such as search path environment variables (e.g. PATH).
  • All build results are made immutable by removing the write permission bits and their timestamps are reset to one second after the epoch.
  • Builds are run as unprivileged users.
  • Optionally, builds run in a chroot environment and use namespaces to restrict access to the host filesystem and the network as much as possible.

In addition to purity, the stdenv.mkDerivation {} function has many additional responsibilities. For example, it also implements a generic builder that is clever enough to build a GNU Autotools/GNU Make project without specifying any build instructions.

For example, the above Nix expression for GNU Hello does not specify any build instructions. The generic builder automatically unpacks the tarball, enters the resulting directory and invokes ./configure --prefix=$out; make; make install with the appropriate parameters.

Because stdenv.mkDerivation {} has many responsibilities and nearly all packages in Nixpkgs depend on it, its implementation is very complex (e.g. thousands of lines of code) and hard to change.

As a personal exercise, I have developed a function abstraction with similar functionality from scratch. My implementation can be decomposed into layers in which every abstraction layer gradually adds additional responsibilities.

Writing "raw" derivations

stdenv.mkDerivation is a function abstraction, not a feature of the Nix expression language. To compose "pure" build environments, stdenv.mkDerivation invokes a Nix expression language construct -- the derivation {} builtin.

(As a sidenote: derivation is strictly speaking not a builtin, but an abstraction built around the derivationStrict builtin, but this is something internal to the Nix package manager. It does not matter for the scope of this blog post).

Despite the fact that this low level function is not commonly used, it is also possible to directly invoke it and compose low-level "raw" derivations to build packages. For example, we can write the following Nix expression (default.nix):

derivation {
  name = "test";
  builder = ./builder.sh;
  system = "x86_64-linux";
  person = "Sander";
}

The above expression invokes the derivation builtin function that composes a "pure" build environment:

  • The name attribute specifies the name of the package, that should appear in the resulting Nix store path.
  • The builder attribute specifies the executable that should be run inside the pure build environment.
  • The system attribute is used to tell Nix that this build should be carried out for x86-64 Linux systems. When Nix is unable to build the package for the requested system architecture, it can also delegate a build to a remote machine that is capable.
  • All attributes (including the attributes described earlier) are converted to environment variables (e.g. strings, numbers and URLs are converted to strings and the boolean value: 'true' is converted to '1') and can be used by the builder process for a variety of reasons.

We can implement the builder process (the build script) as follows:

#!/bin/sh -e

echo "Hello $person" > $out

The above script generates a greeting message for the provided person (exposed as an environment variable by Nix) and writes it to the Nix store (the output path is provided by the out environment variable).
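
Outside of Nix, we can mimic what happens here: in a real build, Nix sets $person (from the derivation attribute) and $out (the output path in the Nix store); below we fake both, as a sketch, to show what the builder produces:

```shell
# Simulate the pure build environment: Nix would provide $person and
# $out; we substitute a throwaway file for the Nix store path.
out=$(mktemp)
person="Sander"

echo "Hello $person" > $out

greeting=$(cat "$out")
echo "$greeting"   # prints: Hello Sander
```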

We can evaluate the Nix expression (and generate the output file with the Hello greeting) by running:

$ nix-build
$ cat result
Hello Sander

The return value of the derivation {} function is a bit confusing. At first sight, it appears to be a string corresponding to the output path in the Nix store. However, some investigation with the nix repl tool reveals that it is much more than that:

$ nix repl
Welcome to Nix version 2.0.4. Type :? for help.

when importing the derivation:

nix-repl> test = import ./default.nix

and describing the result:

nix-repl> :t test
a set

we will see that the result is actually an attribute set, not a string. By requesting the attribute names, we will see the following attributes:

nix-repl> builtins.attrNames test
[ "all" "builder" "drvAttrs" "drvPath" "name" "out" "outPath" "outputName" "person" "system" "type" ]

It appears that the resulting attribute set has the same attributes as the parameters that we passed to derivation, augmented by the following additional attributes:

  • The type attribute that refers to the string: "derivation".
  • The drvAttrs attribute refers to an attribute set containing the original parameters passed to derivation {}.
  • drvPath and outPath refer to the Nix store paths of the store derivation file and output of the build. A side effect of requesting these members is that the expression gets evaluated or built.
  • The out attribute is a reference to the derivation producing the out result, all is a list of derivations of all outputs produced (Nix derivations can also produce multiple output paths in the Nix store).
  • In case there are multiple outputs, the outputName determines the name of the output path that is the default.

Providing basic dependencies

Although we can use the low-level derivation {} function to produce a very simple output file in the Nix store, it is not very useful on its own.

One important limitation is that we only have a (Bourne-compatible) shell (/bin/sh), but no other packages in the "pure" build environment. Nix prevents unspecified dependencies from being found to make builds more pure.

Since a pure build environment is almost entirely empty (with the exception of the shell), the amount of things we can do in an environment created by derivation {} is very limited -- most of the commands that build scripts run are provided by executables belonging to external packages, e.g. commands such as cat, ls (GNU Coreutils), grep (GNU Grep) or make (GNU Make) and should be added to the PATH search environment variable in the build environment.

We may also want to configure additional environment variables to make builds more pure -- for example, on Linux systems, we want to set the TZ (timezone) environment variable to UTC to prevent error messages, such as: "Local time zone must be set--see zic manual page".

To make the execution of more complex build scripts more convenient, we can create a setup script that we include in every build script. It adds basic utilities to the PATH search environment variable, configures these additional environment variables, and sets the SHELL environment variable to the bash shell residing in the Nix store. We can create a package named stdenv that provides such a setup script:

{bash, basePackages, system}:

let
  shell = "${bash}/bin/sh";
in
derivation {
  name = "stdenv";
  inherit shell basePackages system;
  builder = shell;
  args = [ "-e" ./builder.sh ];
}

The builder script of the stdenv package can be implemented as follows:

set -e

# Setup PATH for base packages
for i in $basePackages
do
    basePackagesPath="$basePackagesPath${basePackagesPath:+:}$i/bin"
done

export PATH="$basePackagesPath"

# Create setup script
mkdir $out
cat > $out/setup <<EOF
export SHELL=$shell
export PATH="$basePackagesPath"
EOF

# Allow the user to install stdenv using nix-env and get the packages
# in stdenv.
mkdir $out/nix-support
echo "$basePackages" > $out/nix-support/propagated-user-env-packages

The above script adds all base packages (GNU Coreutils, Findutils, Diffutils, sed, grep, gawk and bash) to the PATH of the builder and creates a script in $out/setup that exports the PATH environment variable and the location of the bash shell.

We can use the stdenv (providing this setup script) as a dependency for building a package, such as:


{stdenv}:

derivation {
  name = "hello";
  inherit stdenv;
  builder = ./;
  system = "x86_64-linux";
}

In the corresponding builder script, we include the setup script at the start and then, for example, invoke various external commands to generate a shell script that prints "Hello":

#!/bin/sh -e
source $stdenv/setup

mkdir -p $out/bin

cat > $out/bin/hello <<EOF
#!$SHELL -e

echo "Hello"
EOF

chmod +x $out/bin/hello

The above script works because the setup script adds GNU Coreutils (that includes cat, mkdir and chmod) to the PATH of the builder.

Writing more simple derivations

Using a setup script makes writing build scripts somewhat practical, but there are still a number of inconveniences we have to cope with.

The first inconvenience is the system parameter -- in most cases, we want to build a package for the same architecture as the host system's architecture and preferably we want the same architecture for all other packages that we intend to deploy.

Another issue is the shell. In sandbox-enabled Nix installations, /bin/sh is a minimal Bourne-compatible shell provided by Busybox; in non-sandboxed installations, it is a reference to the host system's shell. The latter case could be considered an impurity, because we do not know what kind of shell (e.g. bash, dash, ash?) or which version of a shell (e.g. 3.2.57, 4.3.30?) we are using. Ideally, we want to use a shell that is provided as a Nix package in the Nix store, because that version is pure.

(As a sidenote: in Nixpkgs, we use the bash shell to run build commands, but this is not a strict requirement. For example, GNU Guix (a package manager that uses several components of the Nix package manager) uses Guile as both host and guest language. In theory, we could also launch a different kind of interpreter than bash.)

The third issue is the meta parameter -- for every package, it is possible to specify meta-data, such as a description, license and homepage reference, as an attribute set. Unfortunately, attribute sets cannot be converted to environment variables. To deal with this problem, the meta attribute needs to be removed before we invoke derivation {} and be re-added to the returned attribute set. (In my opinion, this is ideally something the Nix package manager could solve by itself.)

We can hide all these inconveniences by creating a simple abstraction function that I will call: stdenv.simpleDerivation that can be implemented as follows:

{stdenv, system, shell}:
{builder, ...}@args:

let
  extraArgs = removeAttrs args [ "builder" "meta" ];

  buildResult = derivation ({
    inherit system stdenv;
    builder = shell; # Make bash the default builder
    args = [ "-e" builder ]; # Pass builder executable as parameter to bash
    setupSimpleDerivation = ./;
  } // extraArgs);
in
buildResult //
# Re-add the meta attribute to the resulting attribute set
(if args ? meta then { inherit (args) meta; } else {})

The above Nix expression basically removes the meta argument, invokes the derivation {} function, sets the system parameter, uses bash as the builder, and passes the builder executable as an argument to bash. After building the package, the meta attribute gets re-added to the result.

With this abstraction, we can reduce the complexity of the previously shown Nix expression to something very simple:


{stdenv}:

stdenv.simpleDerivation {
  name = "hello";
  builder = ./;
  meta = {
    description = "This is a simple testcase";
  };
}

The function abstraction is also sophisticated enough to build something more complex, such as GNU Hello. We can write the following Nix expression that passes all dependencies that it requires as function parameters:

{stdenv, fetchurl, gnumake, gnutar, gzip, gcc, binutils}:

stdenv.simpleDerivation {
  name = "hello-2.10";
  src = fetchurl {
    url = mirror://gnu/hello/hello-2.10.tar.gz;
    sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
  };
  inherit stdenv gnumake gnutar gzip gcc binutils;
  builder = ./;
}
We can use the following builder script to build GNU Hello:

source $setupSimpleDerivation

export PATH=$PATH:$gnumake/bin:$gnutar/bin:$gzip/bin:$gcc/bin:$binutils/bin

tar xfv $src
cd hello-2.10
./configure --prefix=$out
make install

The above script imports a setup script configuring basic dependencies, then extends the PATH environment variable with additional dependencies, and then executes the commands to build GNU Hello -- unpacking the tarball, running the configure script, building the project, and installing the package.

The run command abstraction

We can still improve a bit upon the function abstraction shown previously -- one particular inconvenience that remains is that you have to write two files to get a package built -- a Nix expression that composes the build environment and a builder script that carries out the build steps.

Another repetitive task is configuring search path environment variables (e.g. PATH, PYTHONPATH, CLASSPATH etc.) to point to the appropriate directories in the Nix store. As may be noticed by looking at the code of the previous builder script, this process is tedious.

To address these inconveniences, I have created another abstraction function called: stdenv.runCommand that extends the previous abstraction function -- when no builder parameter has been provided, this function executes a generic builder that will evaluate the buildCommand environment variable containing a string with shell commands to execute. This feature allows us to rewrite the first example (that generates a shell script) to one file:


{stdenv}:

stdenv.runCommand {
  name = "hello";
  buildCommand = ''
    mkdir -p $out/bin
    cat > $out/bin/hello <<EOF
#! ${} -e

echo "Test"
EOF
    chmod +x $out/bin/hello
  '';
}

Another feature of the stdenv.runCommand abstraction is to provide a generic mechanism to configure build-time dependencies -- all build-time dependencies that a package needs can be provided as a list of buildInputs. The generic builder carries out all necessary build steps to make them available. For example, when a package provides a bin/ sub folder, then it will be automatically added to the PATH environment variable.
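A minimal sketch of how such a generic builder could process build inputs may look as follows -- the function name and the nix-support/setup-hook location are illustrative assumptions, not the exact implementation:

```shell
# Sketch: add each build input's bin/ sub folder to PATH, and source a
# bundled setup hook when one is present.
addInputsToEnv() {
    for pkg in $buildInputs; do
        if [ -d "$pkg/bin" ]; then
            PATH="$PATH:$pkg/bin"
        fi
        if [ -f "$pkg/nix-support/setup-hook" ]; then
            # Let the dependency configure the build environment further
            . "$pkg/nix-support/setup-hook"
        fi
    done
    export PATH
}
```

The key design point is that the builder does not need to know anything about the packages themselves -- each input either follows a convention (a bin/ sub folder) or brings its own configuration logic.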

Every package can bundle a script that modifies the build environment so that it knows how dependencies for this package can be configured. For example, the following partial expression represents the Perl package that bundles a setup script:

{stdenv, ...}:

stdenv.mkDerivation {
  name = "perl";
  setupHook = ./;
}

The setup hook can automatically configure the PERL5LIB search path environment variable for all packages that provide Perl modules:

addToSearchPath PERL5LIB $1/lib/perl5/site_perl


When we add perl as a build input to a package, then its setup hook configures the generic builder in such a way that the PERL5LIB environment variable is automatically configured when we provide a Perl module as a build input.
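For illustration, an addToSearchPath helper could be implemented roughly as follows -- a sketch in POSIX shell, not the literal Nixpkgs helper:

```shell
# Sketch: append a directory to a (possibly empty) search path variable,
# but only when the directory actually exists.
addToSearchPath() {
    varName="$1"
    dir="$2"

    [ -d "$dir" ] || return 0   # silently skip non-existent directories

    eval "current=\${$varName:-}"
    if [ -z "$current" ]; then
        eval "export $varName=\"\$dir\""
    else
        eval "export $varName=\"\$current:\$dir\""
    fi
}
```

Because the helper checks for the directory's existence, the Perl setup hook shown earlier can be invoked unconditionally for every build input, and only inputs that actually provide Perl modules end up in PERL5LIB.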

We can also more conveniently build GNU Hello, by using the buildInputs parameter:

{stdenv, fetchurl, gnumake, gnutar, gzip, gcc, binutils}:

stdenv.runCommand {
  name = "hello-2.10";
  src = fetchurl {
    url = mirror://gnu/hello/hello-2.10.tar.gz;
    sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
  };
  buildInputs = [ gnumake gnutar gzip gcc binutils ];
  buildCommand = ''
    tar xfv $src
    cd hello-2.10
    ./configure --prefix=$out
    make install
  '';
}

Compared to the previous GNU Hello example, this Nix expression is much simpler and more intuitive to write.

The run phases abstraction

We can improve the ease of use for build processes even further. GNU Hello, like many other GNU packages and much other system software for Linux, is based on GNU Autotools/GNU Make and follows similar conventions, including the build commands you need to carry out. Likewise, many other software projects use standardized build tools that follow conventions.

As a result, when you have to maintain a collection of packages, you probably end up writing the same kinds of build instructions over and over again.

To alleviate this problem, I have created another abstraction layer named: stdenv.runPhases, making it possible to define and execute phases in a specific order. Every phase has a pre and post hook (a script that executes before and after each phase) and can be disabled or re-enabled with a do* or dont* flag.
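The hook and flag mechanism could be sketched as a generic phase runner along these lines (variable names such as preBuild, dontBuild and buildPhase are assumptions for illustration):

```shell
# Sketch: run one phase, honoring its dont* flag and pre/post hooks.
runPhase() {
    phase="$1"
    # Capitalize the first letter, e.g. build -> Build
    cap=$(printf '%s' "$phase" | awk '{ print toupper(substr($0,1,1)) substr($0,2) }')

    # Skip the phase entirely when its dont* flag is set
    eval "skip=\${dont$cap:-}"
    [ -n "$skip" ] && return 0

    # Run the pre hook, the phase body, and the post hook, if defined
    eval "cmd=\${pre$cap:-}";       [ -n "$cmd" ] && eval "$cmd"
    eval "cmd=\${${phase}Phase:-}"; [ -n "$cmd" ] && eval "$cmd"
    eval "cmd=\${post$cap:-}";      [ -n "$cmd" ] && eval "$cmd"
    return 0
}

# Example: a build phase with a pre hook, followed by an install phase
phases="build install"
preBuild='echo "pre-build hook"'
buildPhase='echo "building"'
installPhase='echo "installing"'

for phase in $phases; do
    runPhase "$phase"
done
```

Because the runner derives all hook and flag names from the phase name, a custom phase automatically gets pre/post hooks and a dont* flag without any extra code.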

With this abstraction function, we can divide builds into phases, such as:


{stdenv}:

stdenv.runPhases {
  name = "hello";
  phases = [ "build" "install" ];
  buildPhase = ''
    cat > hello <<EOF
#! ${} -e
echo "Hello"
EOF
    chmod +x hello
  '';
  installPhase = ''
    mkdir -p $out/bin
    mv hello $out/bin
  '';
}

The above Nix expression executes a build and install phase. In the build phase, we construct a script that echoes "Hello", and in the install phase we move the script into the Nix store and we make it executable.

In addition to environment variables, it is also possible to define the phases as shell functions. For example, we can also use a builder script:


{stdenv}:

stdenv.runPhases {
  name = "hello2";
  builder = ./;
}

and define the phases in the builder script:

source $setupRunPhases

phases="build install"

buildPhase()
{
    cat > hello <<EOF
#! $SHELL -e
echo "Hello"
EOF
    chmod +x hello
}

installPhase()
{
    mkdir -p $out/bin
    mv hello $out/bin
}


Another feature of this abstraction is that we can also define exitHook and failureHook parameters that will be executed when the build succeeds or fails, respectively.

In the next sections, I will show abstractions built on top of stdenv.runPhases that can be used to hide implementation details of common build procedures.

The generic build abstraction

For many build procedures, we need to carry out the same build steps, such as: unpacking the source archives, applying patches, and stripping debug symbols from the resulting ELF executables.

I have created another build function abstraction named: stdenv.genericBuild that implements a number of common build phases:

  • The unpack phase generically unpacks the provided sources, makes their contents writable and enters the source directory. The unpack command is determined by the unpack hook that each potential unpacker provides -- for example, the GNU tar package includes a setup hook that untars the file if it looks like a tarball or compressed tarball:

    case "$1" in
        *.tar|*.tar.*|*.tgz|*.tbz2|*.txz)
            tar xfv "$1"
            ;;
        *)
            return 1
            ;;
    esac

  • The patch phase applies any patches provided by the patches parameter, uncompressing them when necessary. The uncompress file operation also works with setup hooks -- uncompressor packages (such as gzip and bzip2) provide a setup hook that uncompresses the file if it is of the right filetype.
  • The strip phase processes all sub directories containing ELF binaries (e.g. bin/ and lib/) and strips their debugging symbols. This reduces the size of the binaries and removes non-deterministic timestamps.
  • The patchShebangs phase processes all scripts with a shebang line and changes it to correspond to a path in the Nix store.
  • The compressManPages phase compresses all manual pages with gzip.

We can also add GNU patch as a base package for this abstraction function, since it is required to execute the patch phase. As a result, it does not need to be specified as a build dependency for each package.
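A hook-driven decompression mechanism could be sketched as follows -- the registration function and hook names are hypothetical:

```shell
# Sketch: every uncompressor package registers a hook; the first hook
# that recognizes the file wins.
uncompressHooks=""

registerUncompressHook() {
    uncompressHooks="$uncompressHooks $1"
}

# A gzip package's setup hook could register something like this:
gzipUncompressHook() {
    case "$1" in
        *.gz) gzip -cd "$1" ;;
        *)    return 1 ;;
    esac
}
registerUncompressHook gzipUncompressHook

uncompressFile() {
    for hook in $uncompressHooks; do
        if "$hook" "$1"; then
            return 0
        fi
    done
    cat "$1"   # no hook recognized the file: pass it through unmodified
}
```

With this scheme, the patch phase only ever calls uncompressFile; teaching it about a new compression format is a matter of adding the corresponding package (and its hook) as a build input.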

This function abstraction alone is not very useful, but it captures the aspects that are common to most build tools, such as GNU Make, CMake or SCons projects.

I can reduce the size of the previously shown GNU Hello example Nix expression to the following:

{stdenv, fetchurl, gnumake, gnutar, gzip, gcc, binutils}:

stdenv.genericBuild {
  name = "hello-2.10";
  src = fetchurl {
    url = mirror://gnu/hello/hello-2.10.tar.gz;
    sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
  };
  buildInputs = [ gnumake gnutar gzip gcc binutils ];
  buildCommandPhase = ''
    ./configure --prefix=$out
    make install
  '';
}

In the above expression, I no longer have to specify how to unpack the downloaded GNU Hello source tarball.

GNU Make/GNU Autotools abstraction

We can extend the previous function abstraction even further with phases that automate a complete GNU Make/GNU Autotools based workflow. This abstraction is what we can call stdenv.mkDerivation and is comparable in terms of features with the implementation in Nixpkgs.

We can adjust the phases to include a configure, build, check and install phase. The configure phase checks whether a configure script exists and executes it. The build, check and install phases will execute: make, make check and make install with appropriate parameters.
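As a sketch, these phases could be defined as shell functions along the following lines (the configureFlags, makeFlags and checkTarget parameters are assumptions for illustration):

```shell
# Sketch of the four GNU Make/GNU Autotools phases as shell functions.
configurePhase() {
    # Only run ./configure when the project actually provides one
    if [ -x ./configure ]; then
        ./configure --prefix="$out" $configureFlags
    fi
}

buildPhase() {
    make $makeFlags
}

checkPhase() {
    make ${checkTarget:-check} $makeFlags
}

installPhase() {
    make install $makeFlags
}
```

Because the configure phase is conditional, the same phases also work for plain GNU Make projects that ship no configure script.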

We can also add common packages that we need to build these projects as base packages so that they no longer have to be provided as a build input: GNU Tar, gzip, bzip2, xz, GNU Make, Binutils and GCC.

With these additional phases and base packages, we can reduce the GNU Hello example to the following expression:

{stdenv, fetchurl}:

stdenv.mkDerivation {
  name = "hello-2.10";
  src = fetchurl {
    url = mirror://gnu/hello/hello-2.10.tar.gz;
    sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
  };
}

The above Nix expression does not contain any installation instructions -- the generic builder is able to figure out all steps on its own.

Composing custom function abstractions

I have shown several build abstraction layers implementing most features that are in the Nixpkgs version of stdenv.mkDerivation. Aside from clarity, another objective of splitting this function in layers is to make the composition of custom build abstractions more convenient.

For example, we can implement the trivial builder named: writeText whose only responsibility is to write a text file into the Nix store, by extending stdenv.runCommand. This abstraction suffices because writeText does not require any build tools, such as GNU Make and GCC, and it also does not need any generic build procedure executing phases:


{stdenv}:

{ name # the name of the derivation
, text
, executable ? false # run chmod +x ?
, destination ? "" # relative path appended to $out eg "/bin/foo"
, checkPhase ? "" # syntax checks, e.g. for scripts
}:

stdenv.runCommand {
  inherit name text executable;
  passAsFile = [ "text" ];

  # Pointless to do this on a remote machine.
  preferLocalBuild = true;
  allowSubstitutes = false;

  buildCommand = ''
    target=$out${destination}
    mkdir -p "$(dirname "$target")"

    if [ -e "$textPath" ]
    then
        mv "$textPath" "$target"
    else
        echo -n "$text" > "$target"
    fi

    ${checkPhase}

    [ "$executable" = "1" ] && chmod +x "$target" || true
  '';
}

We can also make a builder for Perl packages by extending stdenv.mkDerivation -- Perl packages also use GNU Make as a build system. The only difference is the configuration step -- it runs Perl's MakeMaker script to generate the Makefile. We can simply replace the GNU Autotools configuration phase with an implementation that invokes MakeMaker.

When developing custom abstractions, I basically follow this pattern:

{stdenv, foo, bar}:
{name, buildInputs ? [], ...}@args:

let
  extraArgs = removeAttrs args [ "name" "buildInputs" ];
in
stdenv.someBuildFunction ({
  name = "mypackage-" + name;
  buildInputs = [ foo bar ] ++ buildInputs;
} // extraArgs)

  • A build function is a nested function in which the first line is a function header that captures the common build-time dependencies required to build a package. For example, when we want to build Perl packages, then perl is such a common dependency.
  • The second line is the inner function header that captures the parameters that should be passed to the build function. The ... notation allows an arbitrary number of parameters. The parameters in the { } block (name, buildInputs) have a specific use in the body of the function. The remaining parameters are non-essential -- they are used as environment variables in the builder environment or they can be propagated to other functions.
  • We compose an extraArgs variable that contains all non-essential arguments that we can propagate to the build function. Basically, all function arguments that are used in the body need to be removed, as do function arguments that are attribute sets, because they cannot be converted to environment variables.
  • In the body of the function, we set up important aspects of the build environment, such as the mandatory build parameters, and we propagate the remaining function arguments to the builder abstraction function.

Following this pattern also ensures that the builder is flexible enough to be extended and modified. For example, by extending a function that is based on stdenv.runPhases the builder can be extended with custom phases and build hooks.


Conclusion

In this blog post, I have derived my own reimplementation of Nixpkgs's stdenv.mkDerivation function, which consists of the following layers, each gradually adding functionality to the "raw" derivation {} builtin:

  1. "Raw" derivations
  2. The setup script ($stdenv/setup)
  3. Simple derivation (stdenv.simpleDerivation)
  4. The run command abstraction (stdenv.runCommand)
  5. The run phases abstraction (stdenv.runPhases)
  6. The generic build abstraction (stdenv.genericBuild)
  7. The GNU Make/GNU Autotools abstraction (stdenv.mkDerivation)

The features that the resulting stdenv.mkDerivation provides are very similar to the Nixpkgs version, but not entirely identical. Most notably, cross compiling support is completely absent.

From this experience, I have a number of improvement suggestions that we may want to implement in the Nixpkgs version to improve the quality and clarity of the generic builder infrastructure:

  • We could also split the implementation of stdenv.mkDerivation and the corresponding script into layered sub functions. Currently, the script is huge (over 1200 LOC) and has many responsibilities (perhaps too many). By splitting the build abstraction functions and their corresponding setup scripts, we can separate concerns better and reduce the size of the script so that it becomes more readable and more maintainable.
  • In the Nixpkgs implementation, the phases that the generic builder executes are built specifically for GNU Make/GNU Autotools. Furthermore, the invocation of pre and post hooks and the do and dont flags are all hand-coded for every phase (there is no generic mechanism that deals with them). As a result, when you define a new custom phase, you need to reimplement the same aspects over and over again. In my implementation, you only have to define phases -- the generic builder automatically executes the corresponding pre and post hooks and evaluates the do and dont flags.
  • In the Nixpkgs implementation there is no uncompressHook -- as a result, the decompression of patch files is completely hand-coded for every uncompressor, e.g. gzip, bzip2, xz etc. In my implementation, we can delegate this responsibility to any potential uncompressor package.
  • In my implementation, I turned some of the phases of the generic builder into command-line tools that can be invoked outside the build environment (e.g. patch-shebangs, compress-man). This makes it easier to experiment with these tools and to make adjustments.

The biggest benefit of having separated concerns is flexibility when composing custom abstractions -- for example, the writeText function in Nixpkgs is built on top of stdenv.mkDerivation, which includes GNU Make and GCC as dependencies, even though writeText does not actually need them. As a result, when one of these packages gets updated, all generated text files need to be rebuilt as well, despite there being no real dependency on them. When using a more minimalistic function, such as stdenv.runCommand, this problem goes away.


I have created a new GitHub repository called: nix-lowlevel-experiments. It contains the implementation of all function abstractions described in this blog post, including some test cases that demonstrate how these functions can be used.

In the future, I will probably experiment with other low level Nix concepts and add them to this repository as well.

by Sander van der Burg at July 26, 2018 09:58 PM

July 25, 2018

Matthew Bauer

Beginner’s guide to cross compilation in Nixpkgs

1 What is cross compilation?

First, compilation refers to converting human-readable source code into computer-readable object code. Usually the computer you are building the code for is the same as the computer you are running on [1]. In cross compilation, however, that is not the case. We can build code for any computer that our compiler supports!

Cross-compilation is not a new idea at all. GCC and Autoconf are ancient tools that we use internally in Nixpkgs. But, getting those ideas to work well with Nix’s functional dependency model has taken years and years of work from the Nix community. We are finally to the point where an end user can easily start cross compiling things themselves.

2 Unstable channel

If you do not have Nix installed yet, installation instructions are available online. The rest of the guide will assume that Nix is already installed.

Much work has gone into bringing cross compilation support to Nixpkgs. While Nixpkgs has had some support for cross compiling for a while, recent changes have made cross compilation much easier and more elegant. These changes will be required for this guide. We plan to have a stable version of this ready for 18.09. Before that though, you will need to use the unstable channel. Things can be built with the unstable channel fairly easily with Nix 2.0. For instance, to build the hello program,

nix build -f channel:nixos-unstable hello

The rest of the guide will use nixos-unstable as the channel. However, once 18.09 is released, you should be able to also use the stable channel.

3 Building things

One of the important principles of cross compilation in Nixpkgs is handling native and cross compilation identically. This means that it should be possible to cross-compile any package in Nixpkgs with little to no modification at all. If your derivation specifies its dependencies correctly, Nix/Nixpkgs can figure out how to build it.

So now it’s time to show what we can do with the Nixpkgs cross compilation framework [2]. I’ve compiled a short list of cross package sets along with their corresponding attribute names.

  • Raspberry Pi (pkgsCross.raspberryPi)
  • x86_64 Musl (pkgsCross.musl64)
  • Android (pkgsCross.aarch64-android-prebuilt)
  • iPhone (pkgsCross.iphone64)
  • Windows (pkgsCross.mingwW64)

So, if you are familiar with Nixpkgs, you would know that if you wanted to build Emacs for your native computer you can just run,

$ nix build -f channel:nixos-unstable pkgs.emacs

Likewise, if you wanted to build Emacs for a Raspberry Pi, you can just run,

$ nix build -f channel:nixos-unstable pkgsCross.raspberryPi.emacs

The built package will be in the same ARM machine code used by the Raspberry Pi. The important thing to notice here is that we have the power to build any package in Nixpkgs for any of the platforms listed above. Of course, many of these will have issues due to not being portable, but with time we can make both Nixpkgs & the free software world better at handling cross compilation. Any of the software listed in ‘nix search’ should be possible to cross compile through the pkgsCross attribute.

Some more examples of things that I have worked on,

  1. Windows

    $ nix build -f channel:nixos-unstable pkgsCross.mingw32.hello
    $ nix run -f channel:nixos-unstable wine -c ./result/bin/hello.exe
    Hello, world!
  2. Android

    $ nix build -f channel:nixos-unstable \
  3. iPhone [3]

    $ nix build -f channel:nixos-unstable \

Notice that the pkgsCross attribute is just sugar over a more powerful & composable interface to Nixpkgs. This can be specified from the command line with,

$ nix build -f channel:nixos-unstable \
      --arg crossSystem '{ config = "<arch>-<vendor>-<kernel>-<environment>"; }'

For instance you may want to cross-compile Firefox for ARM64 Linux. This is as easy as [4]:

$ nix build -f channel:nixos-unstable \
      --arg crossSystem '{ config = "aarch64-unknown-linux-gnu"; }'

You can be much more specific with what you want through crossSystem. Many more combinations are possible, but they all revolve around that four-part config string. It corresponds to <arch>-<vendor>-<kernel>-<environment> and is commonly called the LLVM triple [5]. The LLVM triple has become the standard way to specify systems across many free software toolchains including GCC, Binutils, Clang, libffi, etc. There is more information that can be specified in crossSystem & localSystem within Nixpkgs but this is not covered here as it is heavily dependent on the specific toolchain being used.
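Splitting such a config string into its four parts is straightforward; the triple below is just an example value:

```shell
# Split a config string of the form <arch>-<vendor>-<kernel>-<environment>
config="aarch64-unknown-linux-gnu"

IFS=- read -r arch vendor kernel environment <<EOF
$config
EOF

echo "arch:        $arch"
echo "vendor:      $vendor"
echo "kernel:      $kernel"
echo "environment: $environment"
```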

4 When things break

While the fundamentals of cross compiling in Nixpkgs are very good, individual packages will sometimes be broken. This is sometimes because the package definition in Nixpkgs is incorrect. There are some common mistakes that occur that I want to cover here. First, the difference between ‘build-time’ vs ‘runtime’ dependencies [6].

  • build-time dependencies: tools that will be run on the computer doing the cross compiling
  • runtime dependencies: libraries and tools that will run on the computer we are targeting.

In Nixpkgs, build-time dependencies should be put in nativeBuildInputs. Runtime dependencies should be put in buildInputs. Currently, this distinction has no effect on native compilation but it is crucial for correct cross-compilation. There are proposals in Nixpkgs to enforce this distinction even on native builds, but this is yet to be agreed on [7].

Sometimes your package will pull in a dependency indirectly, so that the dependency is not listed in buildInputs or nativeBuildInputs. This breaks the package splicing that goes on behind the scenes to pick the correct package set for each package. To fix it, you will have to splice the package yourself. This is fairly straightforward. For example, let’s say that your package depends on the pkgs.git git executable being available through the GIT_CMD variable, which means it is not listed in nativeBuildInputs. In this case, you should instead refer to git as pkgs.buildPackages.git. This will pick up the build package set instead of the target package set.

There are a few more things that can go wrong within Nixpkgs. If you need to conditionally do something only when cross compiling (say a configure flag like --enable-cross-compilation), you should use stdenv.hostPlatform != stdenv.buildPlatform. If you want to check, for instance, that the platform you are building for is a Windows computer, just use stdenv.hostPlatform.isWindows, in the same way that you can also check for Linux with stdenv.hostPlatform.isLinux. These cases are often necessary, but remember they should only be used when absolutely needed. The more code we share between platforms, the more code is tested.

Sometimes packages are just not written in a cross-friendly way. This will usually happen just because the software author has not thought of how to handle cross compilation [8]. We want to work with software authors to make this process easier & contribute to the portability of free software. This takes time but we are definitely making progress. Contributions are always encouraged to the Nixpkgs repo.

5 Further reading

The concepts introduced here are also available in the Nixpkgs manual. These are the relevant sections/chapters:

GNU Automake also has a section on build vs. host vs. target. This will help clarify some of the naming conventions in Nixpkgs:



[1] This is referred to as native compilation.

[2] All examples are provided by the file lib/systems/examples.nix in Nixpkgs.

[3] Cross-compilation to iPhone, unfortunately, requires that you download the unfree XCode environment. This is a consequence of Apple’s choices regarding what toolchains they allow.

[4] In fact, each of these corresponds to a value for crossSystem listed in lib/systems/examples.nix.

[5] Of course there are 4 of them, so LLVM quadruple seems like a better name.

[6] Like a few other parts of this article, this is somewhat of a simplification. There are many other types of dependencies but they all revolve around the build-time vs runtime distinction.

[7] See strictDeps in pkgs/stdenv/generic/

[8] Or even worse, they have thought about cross-compilation, but embraced many anti-patterns that break with Nixpkgs’ cross-compilation framework.

July 25, 2018 12:00 AM