NixOS Planet

March 07, 2019

Hercules Labs

Announcing Cachix private binary caches and 0.2.0 release

In March 2018 we set out on a mission to streamline Nix usage in teams. Today we are shipping Nix private binary cache support in Cachix.

You can now share an unlimited number of binary caches in your group of developers, protected from public use with just a few clicks.

Authorization is based on GitHub organizations/teams (if this is a blocker for you, let us know what you need).

To get started, head over to https://cachix.org and choose a plan that suits your private binary cache needs:

Create Nix private binary cache

At the same time, the cachix 0.2.0 CLI is out with major improvements for NixOS usage. If you run the following as root, you’ll get:

$ cachix use hie-nix
Cachix configuration written to /etc/nixos/cachix.nix.
Binary cache hie-nix configuration written to /etc/nixos/cachix/hie-nix.nix.

To start using cachix add the following to your /etc/nixos/configuration.nix:

    imports = [ ./cachix.nix ];

Then run:

    $ nixos-rebuild switch

Thank you for your feedback in the poll answers. It’s clear what we should do next:

  1. Multiple signing keys (key rotation, multiple trusted users, …)

  2. Search over binary cache contents

  3. Documentation

Happy cache sharing!

– Domen

March 07, 2019 12:00 AM

February 27, 2019

Sander van der Burg

Generating functional architecture documentation from Disnix service models

In my previous blog post, I have described a minimalistic architecture documentation approach for service-oriented systems based on my earlier experiences with setting up basic configuration management repositories. I used this approach to construct a documentation catalog for the platform I have been developing at Mendix.

I also explained my motivation -- it improves developer effectiveness, team consensus and the on-boarding of new team members. Moreover, it is a crucial ingredient in improving the quality of a system.

Although we are quite happy with the documentation, my biggest inconvenience is that I had to derive it entirely by hand -- I consulted various kinds of sources, but since existing documentation and information provided by people may be incomplete or inconsistent, I considered the source code and deployment configuration files the ultimate source of truth, because no matter how elegantly a diagram is drawn, it is useless if it does not match the actual implementation.

Because a manual documentation process is very costly and time consuming, a more ideal situation would be to have an automated approach that derives architecture documentation from deployment specifications.

Since I am developing a deployment framework for service-oriented systems myself (Disnix), I have decided to extend it with a generator that can derive architecture diagrams and supplemental descriptions from the deployment models using the conventions I have described in my previous blog post.

Visualizing deployment architectures in Disnix


As explained in my previous blog post, the notation that I used for the diagrams was not something I invented from scratch, but something I borrowed from Disnix.

Disnix has had a feature for quite some time that can visualize deployment architectures -- a deployment architecture refers to a description that shows how the functional parts (the services/components) are mapped to physical resources (e.g. machines/containers) in a network.

For example, after deploying a service-oriented system, such as my example web application system, by running:


$ disnix-env -s services.nix -i infrastructure.nix \
-d distribution-bundles.nix

You can visualize the corresponding deployment architecture of the system, by running:


$ disnix-visualize > out.dot

The above command-line instruction generates a directed graph in the DOT language. The resulting dot file can be converted into a displayable image (such as a PNG or SVG file) by running:


$ dot -Tpng out.dot > out.png

Resulting in a diagram of the deployment architecture that may look as follows:


The above diagram uses the following notation:

  • The light grey boxes denote machines in a network. In the above deployment scenario, we have two of them.
  • The ovals denote services (more specifically: in a Disnix-context, they reflect any kind of distributable deployment unit). Services can have almost any shape, such as web services, web applications, and databases. Disnix uses a plugin system called Dysnomia to make sure that the appropriate deployment steps are carried out for a particular type of service.
  • The arrows denote inter-dependencies. When a service points to another service, it means that the latter is an inter-dependency of the former. Inter-dependency relationships ensure that the dependent service gets all the configuration properties it needs to reach the dependency, and that the deployment system deploys the inter-dependencies of a service first.

    In some cases, enforcing the right activation order may be expensive. It is also possible to drop the ordering requirement, as denoted by the dashed arrows. This is acceptable for redirects from the portal application, but not acceptable for database connections.
  • The dark grey boxes denote containers. Containers can be any kind of runtime environment that hosts zero or more distributable deployment units. For example, the container service of a MySQL database is a MySQL DBMS, whereas the container service of a Java web application archive can be a Java Servlet container, such as Apache Tomcat.

Visualizing the functional architecture of service-oriented systems


The services of which a service-oriented system is composed are flexible -- they can be deployed to various kinds of environments, such as a test environment, a second fail-over production environment or a local machine.

Because services can be deployed to a variety of targets, it may also be desired to get an architectural view of the functional parts only.

I created a new tool called dydisnix-visualize-services that can be used to generate functional architecture diagrams by visualizing the services in the Disnix services model:


The above diagram is a visual representation of the services model of the example web application system, using a similar notation as the deployment architecture without showing any environment characteristics:

  • Ovals denote services and arrows denote inter-dependency relationships.
  • Every service is annotated with its type, so that it becomes clear what kind of shape a service has and what kind of deployment procedures need to be carried out.

Despite the fact that the above diagram is focused on the functional parts, it may still look quite detailed, even from a functional point of view.

Essentially, the architecture of my example web application system is a "system of sub systems" -- each sub system provides an isolated piece of functionality consisting of a database backend and web application front-end bundle. The portal sub system is the entry point and responsible for guiding the users to the sub systems implementing the functionality that they want to use.

It is also possible to annotate services in the Disnix services model with a group and description property:


{distribution, invDistribution, pkgs, system}:

let
  customPkgs = import ../top-level/all-packages.nix {
    inherit pkgs system;
  };

  groups = {
    homework = "Homework";
    literature = "Literature";
    ...
  };
in
{
  homeworkdb = {
    name = "homeworkdb";
    pkg = customPkgs.homeworkdb;
    type = "mysql-database";
    group = groups.homework;
    description = "Database backend of the Homework subsystem";
  };

  homework = {
    name = "homework";
    pkg = customPkgs.homework;
    dependsOn = {
      inherit usersdb homeworkdb;
    };
    type = "apache-webapplication";
    appName = "Homework";
    group = groups.homework;
    description = "Front-end of the Homework subsystem";
  };

  ...
}

In the above services model, I have grouped every database and web application front-end bundle in a group that represents a sub system (such as Homework). By adding the --group-subservices parameter to the dydisnix-visualize-services command invocation, we can simplify the diagram to only show the sub systems and how these sub systems are inter-connected:


$ dydisnix-visualize-services -s services.nix -f png \
--group-subservices

resulting in the following functional architecture diagram:


As may be observed in the picture above, all services have been grouped. The service groups are denoted by ovals with dashed borders.

We can also query sub architecture diagrams of every group/sub system. For example, the following command generates a sub architecture diagram for the Homework group:


$ dydisnix-visualize-services -s services.nix -f png \
--group Homework --group-subservices

resulting in the following diagram:


The above diagram only shows the services in the Homework group and their context -- i.e. non-transitive dependencies and services that have a dependency on any service in the requested group.

Services that exactly fit the group or any of its parent groups will be displayed verbatim (e.g. the homework database back-end and front-end). The other services will be categorized into the lowest common sub group (the Users and Portal sub systems).

For more complex architectures consisting of many layers, you will probably want to generate all available architecture diagrams in one command invocation. For this purpose, the visualization tool can also be run in batch mode, in which it recursively generates diagrams for the top-level architecture and every possible sub group and stores them in a specified output folder:


$ dydisnix-visualize-services --batch -s services.nix -f svg \
--output-dir out

Generating supplemental documentation


Another thing I have explained in my previous blog post is that providing diagrams is useful, but they cannot clear up all confusion -- you also need to document and clarify additional details, such as the purposes of the services.

It is also possible to generate a documentation page for each group showing a table of services with their descriptions and types:

The following command generates a documentation page for the Homework group:


$ dydisnix-document-services -s services.nix --group Homework

It is also possible to adjust the generation process by providing a documentation configuration file (by using the --docs parameter):


$ dydisnix-document-services -s services.nix --docs docs.nix \
--group Homework

There are a variety of settings that can be provided in a documentation configuration file:


{
  groups = {
    Homework = "Homework subsystem";
    Literature = "Literature subsystem";
    ...
  };

  fields = [ "description" "type" ];

  descriptions = {
    type = "Type";
    description = "Description";
  };
}

The above configuration file specifies the following properties:

  • The descriptions for every group.
  • Which fields should be displayed in the overview table. It is possible to display any property of a service.
  • A description of every field in the services model.

Like the visualization tool, the documentation tool can also be used in batch mode to generate pages for all possible groups and sub groups.

Generating a documentation catalog


In addition to generating architecture diagrams and descriptions, it is also possible to combine both tools to automatically generate a complete documentation catalog for a service-oriented system, such as the web application example system:


$ dydisnix-generate-services-docs -s services.nix --docs docs.nix \
-f svg --output-dir out

By opening the entry page in the output folder, you will get an overview of the top-level architecture, with a description of the groups.


By clicking on a group hyperlink, you can inspect the sub architecture of the corresponding group, such as the 'Homework' sub system:


The above page displays the sub architecture diagram of the 'Homework' subsystem and a description of all services belonging to that group.

Another particularly interesting aspect is the 'Portal' sub system:


The portal's purpose is to redirect users to functionality provided by the other sub systems. The above architecture diagram displays all the sub systems in grouped form to illustrate that there is a dependency relationship, but without revealing all their internal details that clutters the diagram with unnecessary implementation details.

Other features


The tools support more use cases than those described in this blog post -- it is also possible, for example, to create arbitrary layers of sub groups by using the '/' character as a delimiter in the group identifier. I also used the company platform as an example case, which can be decomposed into four layers.
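For example, a hypothetical annotation (following the services model conventions shown earlier; the parent group name is made up) with two layers of groups could look like this:

homeworkdb = {
  name = "homeworkdb";
  pkg = customPkgs.homeworkdb;
  type = "mysql-database";
  group = "Subsystems/Homework"; # two layers: Subsystems -> Homework
  description = "Database backend of the Homework subsystem";
};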

Availability


The tools described in this blog post are part of the latest development version of Dynamic Disnix -- a very experimental extension framework built on top of Disnix that can be used to make service-oriented systems self-adaptive by redeploying their services in case of events.

The reason why I have added these tools to Dynamic Disnix (and not the core Disnix toolset) is because the extension toolset has an infrastructure to parse and reflect over individual Disnix models.

Although I promised to make an official release of Dynamic Disnix a very long time ago, this still has not happened yet. However, the documentation feature is a compelling reason to stabilize the code and make the framework more usable.

by Sander van der Burg (noreply@blogger.com) at February 27, 2019 11:09 PM

February 24, 2019

Matthew Bauer

Static Nix: a command-line swiss army knife

Nix is an extremely useful package manager. But, not all systems have it installed. Without root privileges, you cannot create the /nix directory required for it to work.

With static linking, and some new features added in Nix 2.0, you can fairly easily use the Nix package manager in these unprivileged contexts1. To make this even easier, I am publishing a prebuilt x86_64 binary on my personal website. It will reside permanently at https://matthewbauer.us/nix (5M download).

1 Trying it out

You can use it like this,

$ curl https://matthewbauer.us/nix | sh -s run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable hello -c hello
Hello World!

You can use any package provided by Nixpkgs (using the attribute name). This gives you a swiss army knife of command line tools. I have compiled some cool commands to try out. There are examples of various tools, games and demos that you can use through Nix, without installing anything! Everything is put into temporary directories2.

1.1 Dev tools

$ nix=$(mktemp); \
  curl https://matthewbauer.us/nix > $nix && \
  chmod +x $nix && \
  $nix run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable \
  bashInteractive curl git htop imagemagick file findutils jq nix openssh pandoc

1.2 Emacs

$ nix=$(mktemp); \
  curl https://matthewbauer.us/nix > $nix && \
  chmod +x $nix && \
  $nix run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable \
  emacs -c emacs

1.3 File manager

$ nix=$(mktemp); \
  curl https://matthewbauer.us/nix > $nix && \
  chmod +x $nix && \
  $nix run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable \
  ranger -c ranger

1.4 Fire

$ curl https://matthewbauer.us/nix | \
  sh -s run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable \
  aalib -c aafire

1.5 Fortune

$ curl https://matthewbauer.us/nix | \
  sh -s run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable \
  bash cowsay fortune -c sh -c 'cowsay $(fortune)'

1.6 Nethack

$ nix=$(mktemp); \
  curl https://matthewbauer.us/nix > $nix && \
  chmod +x $nix && \
  $nix run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable \
  nethack -c nethack

1.7 Weather

$ curl https://matthewbauer.us/nix | \
  sh -s run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable \
  bash curl cowsay -c sh -c 'cowsay $(curl wttr.in/?format=3)'

1.8 World map

$ curl https://matthewbauer.us/nix | \
  sh -s run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable \
  bash coreutils curl libcaca ncurses -c bash -c \
  'img=$(mktemp ${TMPDIR:-/tmp}/XXX.jpg); \
  curl -k https://www.cia.gov/library/publications/the-world-factbook/attachments/images/large/world-physical.jpg > $img \
  && img2txt -W $(tput cols) -f utf8 $img'

1.9 Youtube

$ curl https://matthewbauer.us/nix | \
  sh -s run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable \
  bash youtube-dl mplayer -c sh -c \
  'mplayer -vo caca $(youtube-dl --no-check-certificate -g https://www.youtube.com/watch?v=dQw4w9WgXcQ)'

1.10 And more…

Lots more cool things are possible. Look through the packages provided by Nixpkgs if you need inspiration.

2 Avoid installing and extracting each time

This method of using Nix has some upfront cost, because https://matthewbauer.us/nix must be downloaded each time and the embedded .tar.gz file extracted. If you want Nix to stay around permanently, you have to apply a few tricks. The total install size is about 11M. Using this method, you will reduce startup time and keep Nix in your path at each login.

I have two ways of doing this. The “easy” way is to just run this script.

$ curl https://matthewbauer.us/nix.sh | sh

The other is the “safe” way and involves running some commands in order. These are the same commands run by the script, but this lets you audit everything being done line by line.

$ t=$(mktemp -d)
$ curl https://matthewbauer.us/nix > $t/nix.sh
$ pushd $t
$ sh nix.sh --extract
$ popd
$ mkdir -p $HOME/bin/ $HOME/share/nix/corepkgs/
$ mv $t/dat/nix $HOME/bin/
$ mv $t/dat/share/nix/corepkgs/* $HOME/share/nix/corepkgs/
$ echo export 'PATH=$HOME/bin:$PATH' >> $HOME/.profile
$ echo export 'NIX_DATA_DIR=$HOME/share' >> $HOME/.profile
$ source $HOME/.profile
$ rm -rf $t

You can now run the Nix commands above as you need to, and it will be available on each login. Remember to always add the arguments -f channel:nixpkgs-unstable and --store $HOME/.cache/nix/store, otherwise Nix will be confused about how to handle the missing /nix/store and other environment variables.
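As a small convenience, you could wrap those arguments in a shell alias (a sketch under the assumptions of the setup above; the alias name is made up, and you would add it to your $HOME/.profile to keep it around):

$ alias nix-portable='nix run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable'
$ nix-portable hello -c hello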

3 Build it yourself

This is certainly a security vulnerability, so you may want to build static Nix for yourself from my pull request. Of course you can’t build static Nix without Nix, so this would need to be done from a system that has Nix installed. You can build it yourself, provided you have git and nix installed, like this,

$ git clone https://github.com/matthewbauer/nixpkgs.git
$ cd nixpkgs
$ git checkout static-nix
$ nix-build -A pkgsStatic.nix

Then, copy it to your machine without Nix installed (provided you have ssh installed), like this,

$ scp ./result/bin/nix your-machine:
$ ssh your-machine
$ ./nix ...

Footnotes:

1

Note that you will need to be able to set up a private namespace. This is enabled by default on Linux, but some distros have specifically disabled it. See this issue for more discussion.

2

While ideally we would not need temporary directories at all, some of these commands require one. This is because they check whether they are in a pipe and refuse to run if so. Your temporary directory should be cleaned each time you reboot anyway. The Nix packages will be installed in $HOME/.cache/nix/store but they can be removed at any time.

February 24, 2019 12:00 AM

February 08, 2019

Matthew Bauer

Call for proofreaders and beta testers for 19.03

This was originally published on Discourse. I am putting it here for posterity reasons.

We get lots of contributors in Nixpkgs and NixOS who modify our source code. They are the most common type of contribution we receive. But, there is actually a great need for other types of contributions that don’t involve programming at all! For the benefit of new users, I am going to outline how you can easily contribute to the community and help make 19.03 the best NixOS release yet.

1 Proofreading

We have two different manuals in the NixOS/nixpkgs repo. One is for Nixpkgs, the set of all software. And the other is for NixOS, our Linux distro. Proofreading these manuals is important in helping new users learn about how our software works.

When you find an issue, you can do one of two things. The first and most encouraged is to open a PR on GitHub fixing the documentation. Both manuals are written in docbook. You can see the source for each here:

GitHub allows you to edit these files directly on the web. You can also always use your own Git client. For reference on writing in DocBook, I recommend reading through docbook.rocks.

An alternative, if you are unable to fix the documentation yourself, is to open an issue. We use the same issue tracker for any issues with Nixpkgs/NixOS; it can be accessed through GitHub Issues. Please be sure to provide a link to where in the manual the issue is as well as what is incorrect or otherwise confusing.

2 Beta testing

An alternative to proofreading is beta testing. There are a number of ways to do this, but I would suggest using VirtualBox. Some information on installing VirtualBox can be found online, but you should just need to set these NixOS options:

virtualisation.virtualbox.host.enable = true;

and add your user to the vboxusers group:

users.users.<user>.extraGroups = [ "vboxusers" ];

then rebuild your NixOS machine (sudo nixos-rebuild switch), and run this command to start virtualbox:

VirtualBox

Other distros have their own ways of installing VirtualBox, see Download VirtualBox for more info.

You can download an unstable NixOS .ova file directly here. (WARNING: this will be a large file, a little below 1GB).

Once downloaded, you can import this .ova file directly into VirtualBox using “File” -> “Import Appliance…”. Select the .ova file downloaded from above and click through a series of Next dialogs, using the provided defaults. After this, you can boot your NixOS machine by selecting it from the list on the left and clicking “Start”.

The next step is to just play around with the NixOS machine and try to break it! You can report any issues you find on the GitHub Issues tracker. We use the same issue tracker for both NixOS and Nixpkgs. Just try to make your issues as easy to reproduce as possible. Be specific on where the problem is and how someone else could recreate the problem for themselves.

February 08, 2019 12:00 AM

January 03, 2019

Hercules Labs

elm2nix 0.1

Our frontend is written in Elm and our deployments are done with Nix.

There are many benefits to using Nix for packaging: reproducible installation with binaries, being able to diff for changes, rollback support, and more, all with a single generic tool.

We benefit most once the whole deployment is Nixified, which is what we did for our frontend last December.

History

Back in November 2016 I wrote down some ideas on how elm2nix could work. In December 2017 the first prototype of elm2nix was born, but it required a fork of the Elm compiler to gather your project’s dependencies. Elm 0.19 came out with pinning for all dependencies, making it feasible for other packaging software to build Elm projects.

Installation and usage

Today we’re releasing elm2nix 0.1 and it’s already available in nixpkgs unstable and stable channels. The easiest way to install it is:

# install Nix
$ curl https://nixos.org/nix/install | sh

# activate Nix environment
$ source ~/.nix-profile/etc/profile.d/nix.sh

# install elm2nix binary
$ nix-env -i elm2nix

Assuming your current directory contains an Elm 0.19 project, here are three commands to get the project building with Nix.

First, we need to generate a Nix file with metadata about dependencies:

$ elm2nix convert > elm-srcs.nix

Second, since Elm 0.19 packaging maintains a snapshot of http://package.elm-lang.org, we generate one for the build:

$ elm2nix snapshot > versions.dat

And last, we generate a Nix expression from a template, which ties everything together:

$ elm2nix init > default.nix

By default, the template will use just plain elm-make to compile your project.

To build using Nix and see the generated output in Chromium:

$ nix-build
$ chromium ./result/Main.html

What could be improved in Elm

You may notice that the elm2nix convert output includes sha256 hashes. Nix will require hashes for anything that’s fetched from the internet. The Elm package index does provide an endpoint with hashes, but then Nix needs to know the hash of the endpoint response.

To address this issue it would be ideal if there were an elm.lock file or similar, with all dependencies pinned including their hashes - then Nix would have everything available locally. Package managers for various languages are slowly moving towards outputting a metadata file that describes the whole build process. This can be considered an API between build planning and the actual builder.
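Such a file does not exist today; purely as an illustration of the idea, a hypothetical elm.lock could look roughly like this (the package names are real, the hash values are placeholders):

{
  "elm/core": {
    "version": "1.0.0",
    "sha256": "<hash of the elm/core 1.0.0 tarball>"
  },
  "elm/html": {
    "version": "1.0.0",
    "sha256": "<hash of the elm/html 1.0.0 tarball>"
  }
}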

Another minor issue comes from versions.dat. Ideally, instead of committing a few megabytes of serialized JSON to the git repository, one would be able to point to a URL that serves the binary file pinned at a specific time - allowing it to always be verifiable with a hash known upfront.

What could be improved in Nix

The Nix expression generated by elm2nix init could be upstreamed to nixpkgs or another Nix repository. This would allow for a smaller footprint in an application and stable documentation.

The default expression might not be enough for everyone, as you can use Parcel, Webpack or any other asset management tool for building the project. There’s room for all common environments.

Closing thoughts

Stay tuned for another post on how to develop Elm applications with Parcel and Nix.

Since you’re here, we’re building next-generation CI and binary caching services, take a look at Hercules CI and Cachix.

– Domen

January 03, 2019 12:00 AM

December 25, 2018

Ollie Charles

Solving Planning Problems with Fast Downward and Haskell

In this post I’ll demonstrate my new fast-downward library and show how it can be used to solve planning problems. The name comes from the use of the backend solver - Fast Downward. But what’s a planning problem?

Roughly speaking, planning problems are a subclass of AI problems where we need to work out a plan that moves us from an initial state to some goal state. Typically, we have:

  • A known starting state - information about the world we know to be true right now.
  • A set of possible effects - deterministic ways we can change the world.
  • A goal state that we wish to reach.

With this, we need to find a plan:

  • A solution to a planning problem is a plan - a totally ordered sequence of steps that converge the starting state into the goal state.

Planning problems are essentially state space search problems, and crop up in all sorts of places. The common examples are that of moving a robot around, planning logistics problems, and so on, but they can be used for plenty more! For example, the Beam library uses state space search to work out how to converge a database from one state to another (automatic migrations) by adding/removing columns.

State space search is an intuitive approach - simply build a graph where nodes are states and edges are state transitions (effects), and find a path (possibly shortest) that gets you from the starting state to a state that satisfies some predicates. However, naive enumeration of all states rapidly grinds to a halt. Forming optimal plans (least cost, least steps, etc) is an extremely difficult problem, and there is a lot of literature on the topic (see ICAPS - the International Conference on Automated Planning and Scheduling and recent International Planning Competitions for an idea of the state of the art). The fast-downward library uses the state of the art Fast Downward solver and provides a small DSL to interface to it with Haskell.

In this post, we’ll look at using fast-downward in the context of solving a small planning problem - moving balls between rooms via a robot. This post is literate Haskell, here’s the context we’ll be working in:

If you’d rather see the Haskell in its entirety without comments, simply head to the end of this post.

Modelling The Problem

Defining the Domain

As mentioned, in this example, we’ll consider the problem of transporting balls between rooms via a robot. The robot has two grippers and can move between rooms. Each gripper can hold zero or one balls. Our initial state is that everything is in room A, and our goal is to move all balls to room B.

First, we’ll introduce some domain specific types and functions to help model the problem. The fast-downward DSL can work with any type that is an instance of Ord.

A ball in our model is modelled by its current location. As this changes over time, it is a Var - a state variable.

A gripper in our model is modelled by its state - whether or not it’s holding a ball.

Finally, we’ll introduce a type of all possible actions that can be taken:
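These are the corresponding definitions, reproduced from the appendix listing: the rooms and their adjacency function, the ball and gripper state types, and the Action type.

data Room = RoomA | RoomB
  deriving (Eq, Ord, Show)

adjacent :: Room -> Room
adjacent RoomA = RoomB
adjacent RoomB = RoomA

data BallLocation = InRoom Room | InGripper
  deriving (Eq, Ord, Show)

data GripperState = Empty | HoldingBall
  deriving (Eq, Ord, Show)

type Ball = Var BallLocation

type Gripper = Var GripperState

data Action = PickUpBall | SwitchRooms | DropBall
  deriving (Show)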

With this, we can now begin modelling the specific instance of the problem. We do this by working in the Problem monad, which lets us introduce variables (Vars) and specify their initial state.

Setting the Initial State

First, we introduce a state variable for each of the 4 balls. As in the problem description, all balls are initially in room A.

Next, introduce a variable for the room the robot is in - which also begins in room A.

We also introduce variables to track the state of each gripper.
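Concretely, the opening lines of problem in the appendix listing introduce these variables:

problem :: Problem (Maybe [Action])
problem = do
  balls <- replicateM 4 (newVar (InRoom RoomA))
  robotLocation <- newVar RoomA
  grippers <- replicateM 2 (newVar Empty)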

This is sufficient to model our problem. Next, we’ll define some effects to change the state of the world.

Defining Effects

Effects are computations in the Effect monad - a monad that allows us to read and write to variables, and also fail (via MonadPlus). We could define these effects as top-level definitions (which might be better if we were writing a library), but here I’ll just define them inline so they can easily access the above state variables.

Effects may be used at any time by the solver. Indeed, that’s what solving planning problems is all about! The hard part is choosing effects intelligently, rather than blindly trying everything. Fortunately, you don’t need to worry about that - Fast Downward will take care of that for you!

Picking Up Balls

The first effect takes a ball and a gripper, and attempts to pick up that ball with that gripper.
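Here is the effect as it appears in the appendix listing, with comments added to match the numbered steps below:

pickUpBallWithGripper :: Ball -> Gripper -> Effect Action
pickUpBallWithGripper b gripper = do
  Empty <- readVar gripper                 -- (1) the gripper must be empty

  robotRoom <- readVar robotLocation       -- (2) observe the robot's room...
  ballLocation <- readVar b                --     ...and the ball's location
  guard (ballLocation == InRoom robotRoom) -- (3) they must match

  writeVar b InGripper                     -- (4) the ball is now in a gripper...
  writeVar gripper HoldingBall             --     ...and the gripper holds a ball

  return PickUpBall                        -- (5) domain specific information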

  1. First we check that the gripper is empty. This can be done concisely by using an incomplete pattern match. do notation desugars incomplete pattern matches to a call to fail, which in the Effect monad simply means “this effect can’t currently be used”.

  2. Next, we check where the ball and robot are, and make sure they are both in the same room.

  3. Here we couldn’t choose a particular pattern match to use, because picking up a ball should be possible in either room. Instead, we simply observe the location of both the ball and the robot, and use an equality test with guard to make sure they match.

  4. If we got this far then we can pick up the ball. The act of picking up the ball is to say that the ball is now in a gripper, and that the gripper is now holding a ball.

  5. Finally, we return some domain specific information to use if the solver chooses this effect. This has no impact on the final plan, but it’s information we can use to execute the plan in the real world (e.g., sending actual commands to the robot).

Moving Between Rooms

This effect moves the robot to the room adjacent to its current location.
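Reproduced from the appendix listing, the whole effect is just:

moveRobotToAdjacentRoom :: Effect Action
moveRobotToAdjacentRoom = do
  modifyVar robotLocation adjacent
  return SwitchRooms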

This is an “unconditional” effect as we don’t have any explicit guards or pattern matches. We simply flip the current location by an adjacency function.

Again, we finish by returning some information to use when this effect is chosen.

Dropping Balls

Finally, we have an effect to drop a ball from a gripper.
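As given in the appendix listing, with comments matching the numbered steps below:

dropBall :: Ball -> Gripper -> Effect Action
dropBall b gripper = do
  HoldingBall <- readVar gripper      -- (1) the gripper holds a ball...
  InGripper <- readVar b              --     ...and the ball is in a gripper

  robotRoom <- readVar robotLocation  -- (2) read the robot's location
  writeVar b (InRoom robotRoom)       -- (4) move the ball

  writeVar gripper Empty              -- (3) empty the gripper

  return DropBall                     -- (5) the tag for this effect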

  1. First we check that the given gripper is holding a ball, and the given ball is in a gripper.

  2. If we got here then those assumptions hold. We’ll update the location of the ball to be the location of the robot, so first read out the robot’s location.

  3. Empty the gripper

  4. Move the ball.

  5. And we’re done! We’ll just return a tag to indicate that this effect was chosen.

Solving Problems

With our problem modelled, we can now attempt to solve it. We invoke solve with a particular search engine (in this case A* with landmark counting heuristics). We give the solver two bits of information (the full invocation appears after this list):

  1. A list of all effects - all possible actions the solver can use. These are precisely the effects we defined above, but instantiated for all balls and grippers.
  2. A goal state. Here we’re using a list comprehension which enumerates all balls, adding the condition that the ball location must be InRoom RoomB.
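Concretely, the call from the appendix listing looks like this (cfg is the A* search engine configuration with landmark counting, defined at the end of the appendix):

  solve
    cfg
    ( [ pickUpBallWithGripper b g | b <- balls, g <- grippers ]
        ++ [ dropBall b g | b <- balls, g <- grippers ]
        ++ [ moveRobotToAdjacentRoom ]
    )
    [ b ?= InRoom RoomB | b <- balls ]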

So far we’ve been working in the Problem monad. We can escape this monad by using runProblem :: Problem a -> IO a. In our case, a is SolveResult Action, so running the problem might give us a plan (courtesy of solve). If it did, we’ll print the plan.
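In the appendix listing, problem is written to return a Maybe [Action] directly, and main runs it and prints each step:

main :: IO ()
main = do
  plan <- runProblem problem
  case plan of
    Nothing ->
      putStrLn "Couldn't find a plan!"

    Just steps -> do
      putStrLn "Found a plan!"
      zipWithM_ (\i step -> putStrLn $ show i ++ ": " ++ show step) [1::Int ..] steps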

fast-downward allows you to extract a totally ordered plan from a solution, but can also provide a partiallyOrderedPlan. This type of plan is a graph (partial order) rather than a list (total order), and attempts to recover some concurrency. For example, if two effects do not interact with each other, they will be scheduled in parallel.

Well, Did it Work?!

All that’s left is to run the problem!

> main
Found a plan!
1: PickUpBall
2: PickUpBall
3: SwitchRooms
4: DropBall
5: DropBall
6: SwitchRooms
7: PickUpBall
8: PickUpBall
9: SwitchRooms
10: DropBall
11: DropBall

Woohoo! Not bad for 0.02 secs, too :)

Behind The Scenes

It might be interesting to some readers to understand what’s going on behind the scenes. Fast Downward is a C++ program, yet somehow it seems to be running Haskell code with nothing but an Ord instance - there are no marshalling types involved!

First, let’s understand the input to Fast Downward. Fast Downward requires an encoding in its own SAS format. This format has a list of variables, where each variable contains a list of values. The contents of the values aren’t actually used by the solver; rather, it just works with indices into the list of values for a variable. This observation means we can just invent values on the Haskell side and carefully manage the mapping of indices back and forth.

Next, Fast Downward needs a list of operators which are ground instantiations of our effects above. Ground instantiations of operators mention exact values of variables. Recounting our gripper example, pickUpBallWithGripper b gripper actually produces 2 operators - one for each room. However, we didn’t have to be this specific in the Haskell code, so how are we going to recover this information?

fast-downward actually performs expansion on the given effects to find out all possible ways they could be called, by non-deterministically evaluating them to find a fixed point.

A small example can be seen in the moveRobotToAdjacentRoom Effect. This will actually produce two operators - one to move from room A to room B, and one to move from room B to room A. The body of this Effect is (once we inline the definition of modifyVar):
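Assuming modifyVar v f is equivalent to readVar v >>= writeVar v . f, the inlined body looks roughly like this:

moveRobotToAdjacentRoom = do
  room <- readVar robotLocation
  writeVar robotLocation (adjacent room)
  return SwitchRooms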

Initially, we only know that robotLocation can take the value RoomA, as that is what the variable was initialised with. So we pass this in, and see what the rest of the computation produces. This means we evaluate adjacent RoomA to yield RoomB, and write RoomB into robotLocation. We’re done for the first pass through this effect, but we gained new information - namely that robotLocation might at some point contain RoomB. Knowing this, we then rerun the effect, but the first readVar gives us two paths:

This shows us that robotLocation might also be set to RoomA. However, we already knew this, so at this point we’ve reached a fixed point.

In practice, this process is run over all Effects at the same time because they may interact - a change in one Effect might cause new paths to be found in another Effect. However, because fast-downward only works with finite domain representations, this algorithm always terminates. Unfortunately, I have no way of enforcing this that I can see, which means a user could infinitely loop this normalisation process by writing modifyVar v succ, which would produce an infinite number of variable assignments.

Conclusion

CircuitHub are using this in production (and I mean real, physical production!) to coordinate activities in its factories. By using AI, we have a declarative interface to the production process – rather than saying what steps are to be performed, we can instead say what state we want to end up in and we can trust the planner to find a suitable way to make it so.

Haskell really shines here, giving a very powerful way to present problems to the solver. The industry standard is PDDL, a Lisp-like language that I’ve found in practice is less than ideal to actually encode problems. By using Haskell, we:

  • Can easily feed the results of the planner into a scheduler to execute the plan, with no messy marshalling.
  • Use well known means of abstraction to organise the problem. For example, in the above we use Haskell as a type of macro language – using do notation to help us succinctly formulate the problem.
  • Abstract out the details of planning problems so the rest of the team can focus on the domain specific details – i.e., what options are available to the solver, and the domain specific constraints they are subject to.

fast-downward is available on Hackage now, and I’d like to express a huge thank you to CircuitHub for giving me the time to explore this large space and to refine my work into the best solution I could think of. This work is the result of numerous iterations, but I think it was worth the wait!

Appendix: Code Without Comments

Here is the complete example, as a single Haskell block:

{-# language DisambiguateRecordFields #-}

module FastDownward.Examples.Gripper where

import Control.Monad
import qualified FastDownward.Exec as Exec
import FastDownward.Problem


data Room = RoomA | RoomB
  deriving (Eq, Ord, Show)


adjacent :: Room -> Room
adjacent RoomA = RoomB
adjacent RoomB = RoomA


data BallLocation = InRoom Room | InGripper
  deriving (Eq, Ord, Show)


data GripperState = Empty | HoldingBall
  deriving (Eq, Ord, Show)


type Ball = Var BallLocation


type Gripper = Var GripperState

  
data Action = PickUpBall | SwitchRooms | DropBall
  deriving (Show)


problem :: Problem (Maybe [Action])
problem = do
  balls <- replicateM 4 (newVar (InRoom RoomA))
  robotLocation <- newVar RoomA
  grippers <- replicateM 2 (newVar Empty)

  let
    pickUpBallWithGripper :: Ball -> Gripper -> Effect Action
    pickUpBallWithGripper b gripper = do
      Empty <- readVar gripper
  
      robotRoom <- readVar robotLocation
      ballLocation <- readVar b
      guard (ballLocation == InRoom robotRoom)
  
      writeVar b InGripper
      writeVar gripper HoldingBall
  
      return PickUpBall


    moveRobotToAdjacentRoom :: Effect Action
    moveRobotToAdjacentRoom = do
      modifyVar robotLocation adjacent
      return SwitchRooms


    dropBall :: Ball -> Gripper -> Effect Action
    dropBall b gripper = do
      HoldingBall <- readVar gripper
      InGripper <- readVar b
  
      robotRoom <- readVar robotLocation
      writeVar b (InRoom robotRoom)
  
      writeVar gripper Empty
  
      return DropBall

  
  solve
    cfg
    ( [ pickUpBallWithGripper b g | b <- balls, g <- grippers ]
        ++ [ dropBall b g | b <- balls, g <- grippers ]
        ++ [ moveRobotToAdjacentRoom ]
    )
    [ b ?= InRoom RoomB | b <- balls ]

  
main :: IO ()
main = do
  plan <- runProblem problem
  case plan of
    Nothing ->
      putStrLn "Couldn't find a plan!"

    Just steps -> do
      putStrLn "Found a plan!"
      zipWithM_ (\i step -> putStrLn $ show i ++ ": " ++ show step) [1::Int ..] steps


cfg :: Exec.SearchEngine
cfg =
  Exec.AStar Exec.AStarConfiguration
    { evaluator =
        Exec.LMCount Exec.LMCountConfiguration
          { lmFactory =
              Exec.LMExhaust Exec.LMExhaustConfiguration
                { reasonableOrders = False
                , onlyCausalLandmarks = False
                , disjunctiveLandmarks = True
                , conjunctiveLandmarks = True
                , noOrders = False
                }
          , admissible = False
          , optimal = False
          , pref = True
          , alm = True
          , lpSolver = Exec.CPLEX
          , transform = Exec.NoTransform
          , cacheEstimates = True
          }
    , lazyEvaluator = Nothing
    , pruning = Exec.Null
    , costType = Exec.Normal
    , bound = Nothing
    , maxTime = Nothing
    }

by Oliver Charles at December 25, 2018 12:00 AM

December 18, 2018

Hercules Labs

Hercules CI Development Update

We’ve been making good progress since our October 25th NixCon demo.

Some of the things we’ve built and worked on since NixCon:

  • Realise the derivations and show their status
  • Minimal build logs
  • Keeping track of agent state
  • GitHub build statuses
  • Improved agent logging
  • Work on Cachix private caches
  • Incorporating
  • Plenty of small fixes, improvements and some open source work

Here you can see attributes being streamed as they are evaluated and CI immediately starts to build each attribute and shows cached derivation statuses:

evaluating

Since we started dogfooding a few weeks ago, we’ve been getting valuable insights. There’s plenty of things to do and bugs to fix. Once we’re happy with the user experience for the minimal workflow, we’ll contact email subscribers and start handing out early access.

If you’d like to be part of early adopters or just be notified of development, make sure to subscribe to Hercules CI.

– Robert

December 18, 2018 12:00 AM

November 26, 2018

Matthew Bauer

Subjective ranking of build systems

1 My subjective ranking of build systems

Very few of us are happy with our choices of build systems. There are a lot out there and none feel quite right to many people. I wanted to offer my personal opinions on build systems. Every build system is “bad” in its own way. But some are much worse than others.

As a maintainer for Nixpkgs, we have to deal with lots of different build systems. I’ve avoided build systems that are language-specific. Those build systems are usually the only choice for your language, so ranking them would inevitably include opinions on the language itself. So, I’ve included in this list only language-neutral build systems. In addition, I’ve filtered out any build systems that are not included in Nixpkgs. This perspective is going to prioritize features that make your project easiest to package in cross-platform ways. It’s very subjective, so I only speak for myself here.

I separate two kinds of software used for packages. One is the “meta” build system that provides an abstract interface to create build rules. The other is the build runner that will run the rules. Most meta build systems support targeting multiple backends.

1.1 What makes a good build system?

Some criteria I have for these build systems.

  • Good defaults builtin. By default, packages should support specifying “prefix” and “destination directory”.
  • Works with widely available software. Being able to generate Makefiles is a big bonus. Everyone has access to make - not everyone has Ninja. This is often needed for bootstrapping.
  • Supports cross compilation concepts. A good separation between buildtime and runtime is a must have! In addition you should be able to set build, host, and target from the command line. This makes things much easier for packaging and bootstrapping.
  • Detection of dependencies reuses existing solutions. pkg-config provides an easy way to discover the absolute directories where dependencies are installed (see the example after this list). No need to reinvent the wheel here.
  • The fewer dependencies the better! Requiring a Java or Python runtime means it takes longer to rebuild the world. They introduce bottlenecks where every package needs to wait for these runtimes to be built before we can start running things in parallel.
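For example, assuming libpng is installed and registered with pkg-config, the compiler and linker flags (and therefore the absolute install locations) can be discovered without any custom detection logic:

$ pkg-config --cflags libpng
$ pkg-config --libs libpng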

1.2 Ranking of meta build systems from bad to worse

  1. GNU Autotools (Automake & autoconf)
  2. Meson
  3. CMake
  4. gyp
  5. qmake
  6. imake
  7. premake

GNU Autotools comes in at the top. It has the best support for cross compilation of any meta build system. It has been around for a while, which means that the classic “./configure && make && make install” works. Because the configure script is just a simple bash script, packages don’t have to depend directly on GNU Autotools at build time. This is a big plus in bootstrapping software. I think Meson has made a lot of progress in improving its cross compilation support. It’s not quite there in my opinion, as it requires you to create cross tool files instead of using command line arguments.

1.3 Ranking of build runners from bad to worse

  1. GNU Make
  2. Ninja
  3. Bazel
  4. boost.build
  5. SCons

GNU Make is still the top choice in my personal opinion. It has been around for a while, but Makefiles are widely understood and GNU Make is included everywhere. In addition, the Makefile build rule format is easy to parallelize. Ninja still requires Python to build itself. This adds to the Nixpkgs bottleneck because Python is not included in the bootstrap tools. While there are some speedups in Ninja, they don’t appear to be significant enough to be worth switching at this time. At the same time, Ninja is still a perfectly good choice if you value performance over legacy support.

1.4 Conclusion

In Nixpkgs, we have made an attempt to support whatever build system you are using. But, some are definitely better than others.

My main goal here is to try to get software authors to think more critically about what build system they are using. In my opinion, it is better to use well known software over more obscure systems. These shouldn’t be taken as a universal truth. Everyone has their own wants and needs. But, if your build system comes in at the bottom of this list, you might want to consider switching to something else!

November 26, 2018 12:00 AM

November 21, 2018

Hercules Labs

Using Elm and Parcel for zero configuration web asset management

At some point our frontend had ~200 lines of Webpack configuration, and it was going to grow by at least another 200 lines for a simple change: having two top-level HTML outputs. My mind went “what happened to convention over configuration in the frontend sphere?” I discovered Parcel.

In this short post I’ll show you how to get started with Parcel and Elm to achieve zero configuration for managing your assets.

With very little configuration you get:

  • live Elm reloading
  • ~100ms rebuilds with a warm cache (measured using ~1500 loc)
  • --debug when using a server (this change hasn’t been released yet)
  • --optimize when using a build
  • importing an asset (from node_modules or relatively) just does the right thing
  • bundling and minification of assets

Bootstrapping

Using yarn or npm (btw, pnpm is currently the most sane Node package manager):

mkdir parcel-elm-demo && cd parcel-elm-demo
yarn add parcel-bundler elm
yarn run elm init

We’ll create a minimal Elm program to start with in src/Main.elm:

import Html exposing (..)

main = text "Hello world"

Next we create a top-level asset that Parcel will build, index.html:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
  </head>
  <body>
    <div id="main"></div>
    <script src="./index.js"></script>
  </body>
</html>

Note how we’re including the script via a relative path to a JavaScript file. The general idea in Parcel is that you import files of any asset type, and Parcel will know how to process and bundle them.
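For example, assuming a styles.css file sits next to index.js, importing it from JavaScript is all Parcel needs to process and bundle it:

// index.js
import './styles.css'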

The gist of glueing Elm and Parcel together is in index.js:

import { Elm } from './src/Main.elm'

Elm.Main.init({
  node: document.getElementById('main'),
});

At the import, Parcel will detect that it’s an Elm asset, so it will use Elm to compile it and then export the Elm JavaScript object, which we use to initialize Elm in the DOM.

Running Parcel

There are two main operations in Parcel. First, you have a development mode via live-reloading server:

$ yarn run parcel index.html
yarn run v1.9.4
$ /home/ielectric/dev/parcel-demo/node_modules/.bin/parcel index.html
Server running at http://localhost:1234
✨  Built in 456ms.

Open http://localhost:1234 and modify src/Main.elm. You should see on the terminal

✨  Built in 123ms.

and your page will be refreshed automatically.

To produce a final bundle for production, you use the parcel build command:

$ yarn run parcel build index.html
yarn run v1.9.4
$ /home/ielectric/dev/parcel-demo/node_modules/.bin/parcel build index.html
✨  Built in 2.07s.

dist/parcel-demo.bc110dbc.js     7.93 KB    1.55s
dist/parcel-demo.4f74f229.map      373 B      3ms
dist/index.html                    288 B    442ms
Done in 2.87s.

Your minified, bundled Elm project is now ready to be deployed.

Parcel isn’t perfect (yet!)

There are a few issues still around.

That’s it. Last but not least, we’re building next-generation CI and binary caching services, take a look at Hercules CI and Cachix.

– Domen

November 21, 2018 12:00 AM

November 08, 2018

Mayflower

The nixops defaults module

Avoiding code repetition in a nixops deployment

As with most configuration management tools, there are some options in nixops that need to be defined for virtually any machine in a deployment. These global options tend to be abstracted in a common base profile that is simply included at the top of a node configuration. This base profile can be used for including default packages, services or machine configuration usually needed on all machines—like networking debug tools and admin users with access to the whole network.

November 08, 2018 02:00 PM

October 30, 2018

Sander van der Burg

Auto patching prebuilt binary software packages for deployment with the Nix package manager

As explained in many previous blog posts, most of the quality properties of the Nix package manager (such as reliable deployment) stem from the fact that all packages are stored in a so-called Nix store, in which every package resides in its own isolated folder with a hash prefix that is derived from all build inputs (such as: /nix/store/gf00m2nz8079di7ihc6fj75v5jbh8p8v-zlib-1.2.11).

This unorthodox naming convention makes it possible to safely store multiple versions and variants of the same package next to each other.

Although isolating packages in the Nix store provides all kinds of benefits, it also has a big drawback -- common components, such as shared libraries, can no longer be found in their "usual locations", such as /lib.

For packages that are built from source with the Nix package manager this is typically not a problem:

  • The Nix expression language computes the Nix store paths for the required packages. By simply referring to the variable that contains the build result, you can obtain the Nix store path of the package, without having to remember them yourself.
  • Nix statically binds shared libraries to ELF binaries by modifying the binary's RPATH field. As a result, binaries no longer rely on the presence of their library dependencies in global locations (such as /lib), but use the libraries stored in isolation in the Nix store.
  • The GNU linker (the ld command) has been wrapped to transparently add the paths of all the library packages to the RPATH field of the ELF binary, whenever a dynamic library is provided.

As a result, you can build most packages from source code by simply executing their standardized build procedures in a Nix builder environment, such as: ./configure --prefix=$out; make; make install.

When it is desired to deploy prebuilt binary packages with Nix then you may probably run into various kinds of challenges:

  • ELF executables require the presence of an ELF interpreter in /lib/ld-linux.so.2 (on x86) and /lib/ld-linux-x86-64.so.2 (on x86-64), which is impure and does not exist in NixOS.
  • ELF binaries produced by conventional means typically have no RPATH configured. As a result, they expect libraries to be present in global namespaces, such as /lib. Since these directories do not exist in NixOS an executable will typically fail to work.

To make prebuilt binaries work in NixOS, there are basically two solutions -- it is possible to compose so-called FHS user environments from a set of Nix packages in which shared components can be found in their "usual locations". The drawback is that it requires special privileges and additional work to compose such environments.

The preferred solution is to patch prebuilt ELF binaries with patchelf (e.g. appending the library dependencies to the RPATH of the executable) so that their dependencies are loaded from the Nix store. I wrote a guide that demonstrates how to do this for a number of relatively simple packages.

Although it is possible to patch prebuilt ELF binaries to make them work from the Nix store, such a process is typically tedious and time consuming -- you must dissect a package, search for all relevant ELF binaries, figure out which libraries a binary requires, find the corresponding packages that provide them and then update the deployment instructions to patch the ELF binaries.

For small projects, a manual binary patching process is still somewhat manageable, but for a complex project such as the Android SDK, which provides a large collection of plugins containing a mix of many 32-bit and 64-bit executables, manual patching is quite laborious, in particular when it is desired to keep all plugins up to date -- plugin packages are updated quite frequently, forcing the packager to re-examine all binaries over and over again.

To make the Android SDK patching process easier, I wrote a small tool that can mostly automate it. The tool can also be used for other kinds of binary packages.

Automatic searching for library locations


In order to make ELF binaries work, they must be patched in such a way that they use an ELF interpreter from the Nix store and their RPATH fields should contain all paths to the libraries that they require.

We can gather a list of required libraries for an executable, by running:


$ patchelf --print-needed ./zipmix
libm.so.6
libc.so.6

Instead of manually patching the executable with this provided information, we can also create a function that searches for the corresponding libraries in a list of search paths. The tool could take the first path that provides the required libraries.

For example, by setting the following colon-separated search environment variable:


$ export libs=/nix/store/7y10kn6791h88vmykdrddb178pjid5bv-glibc-2.27/lib:/nix/store/xh42vn6irgl1cwhyzyq1a0jyd9aiwqnf-zlib-1.2.11/lib

The tool can automatically discover that the path: /nix/store/7y10kn6791h88vmykdrddb178pjid5bv-glibc-2.27/lib provides both libm.so.6 and libc.so.6.

We can also run into situations in which we cannot find any valid path to a required library -- in such cases, we can throw an error and notify the user.
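A minimal shell sketch of this search idea (not the actual implementation of the tool) could look like this:

# Return the first directory in the colon-separated $libs that provides
# the requested library, or fail with an error message.
findLibrary() {
    local name="$1"
    local IFS=':'
    for dir in $libs
    do
        if [ -f "$dir/$name" ]
        then
            echo "$dir"
            return 0
        fi
    done
    echo "Cannot find a library path providing: $name" >&2
    return 1
}

# Resolve the libraries reported by: patchelf --print-needed ./zipmix
findLibrary libm.so.6
findLibrary libc.so.6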

It is also possible to extend the searching approach to the ELF interpreter. The following command provides the path to the required ELF interpreter:


$ patchelf --print-interpreter ./zipmix
/lib64/ld-linux-x86-64.so.2

We can search in the list of library packages for the ELF interpreter as well so that we no longer have to explicitly specify it.

Dealing with multiple architectures


Another problem with the Android SDK is that plugin packages may provide both x86 and x86-64 binaries. You cannot link libraries compiled for x86 against an x86-64 executable and vice versa. This restriction could introduce a new kind of risk in the automatic patching process.

Fortunately, it is also possible to figure out for what kind of architecture a binary was compiled:


$ readelf -h ./zipmix
ELF Header:
Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
Class: ELF64
Data: 2's complement, little endian
Version: 1 (current)
OS/ABI: UNIX - System V
ABI Version: 0
Type: EXEC (Executable file)
Machine: Advanced Micro Devices X86-64

The above command-line instruction shows that we have a 64-bit binary (Class: ELF64) compiled for the x86-64 architecture (Machine: Advanced Micro Devices X86-64).

I have also added a check that ensures that the tool will only add a library path to the RPATH if the architecture of the library is compatible with the binary. As a result, it is not possible to accidentally link a library with an incompatible architecture to a binary.
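A rough sketch of that check (again, not the tool's actual code) could compare the ELF machine field that readelf reports for the binary and for a candidate library:

# Print the ELF machine type of a binary or library
machineOf() {
    readelf -h "$1" | sed -n 's/^ *Machine: *//p'
}

# $candidate is one of the library directories from the search path above
if [ "$(machineOf ./zipmix)" = "$(machineOf "$candidate/libm.so.6")" ]
then
    echo "$candidate is compatible and may be added to the RPATH"
fi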

Patching collections of binaries


Another inconvenience is the fact that Android SDK plugins typically provide more than one binary that needs to be patched. We can also recursively search an entire directory for ELF binaries:


$ autopatchelf ./bin

The above command-line instruction recursively searches for binaries in the bin/ sub directory and automatically patches them.

Recursively patching executables in a directory hierarchy can sometimes have undesired side effects. For example, the Android SDK also provides emulators that ship their own set of ELF binaries intended to run inside the emulated system. Patching these binaries typically breaks the software running in the emulator. We can also disable recursion if this is desired:


$ autopatchelf --no-recurse ./bin

or revert to patching individual executables:


$ autopatchelf ./zipmix

The result


Having most aspects of the binary patching process automated results in a substantial reduction in code size for the Nix expressions that need to deploy prebuilt packages.

In my previous blog post, I have shown two example cases for which I manually derived the patchelf instructions that I needed to run. By using the autopatchelf tool I can significantly decrease the size of the corresponding Nix expressions.

For example, the following expression deploys kzipmix:


{stdenv, fetchurl, autopatchelf, glibc}:

stdenv.mkDerivation {
  name = "kzipmix-20150319";
  src = fetchurl {
    url = http://static.jonof.id.au/dl/kenutils/kzipmix-20150319-linux.tar.gz;
    sha256 = "0fv3zxhmwc3p34larp2d6rwmf4cxxwi71nif4qm96firawzzsf94";
  };
  buildInputs = [ autopatchelf ];
  libs = stdenv.lib.makeLibraryPath [ glibc ];
  installPhase = ''
    ${if stdenv.system == "i686-linux" then "cd i686"
      else if stdenv.system == "x86_64-linux" then "cd x86_64"
      else throw "Unsupported system architecture: ${stdenv.system}"}
    mkdir -p $out/bin
    cp zipmix kzip $out/bin
    autopatchelf $out/bin
  '';
}

In the expression shown above, it suffices to move the executables to $out/bin and run autopatchelf.

I have also shown a more complicated example demonstrating how to patch the Quake 4 demo. I can significantly reduce the amount of code by replacing all the patchelf instructions with a single autopatchelf invocation:


{stdenv, fetchurl, autopatchelf, glibc, SDL, xlibs}:

stdenv.mkDerivation {
  name = "quake4-demo-1.0";
  src = fetchurl {
    url = ftp://ftp.idsoftware.com/idstuff/quake4/demo/quake4-linux-1.0-demo.x86.run;
    sha256 = "0wxw2iw84x92qxjbl2kp5rn52p6k8kr67p4qrimlkl9dna69xrk9";
  };
  buildInputs = [ autopatchelf ];
  libs = stdenv.lib.makeLibraryPath [ glibc SDL xlibs.libX11 xlibs.libXext ];

  buildCommand = ''
    # Extract files from the installer
    cp $src quake4-linux-1.0-demo.x86.run
    bash ./quake4-linux-1.0-demo.x86.run --noexec --keep
    # Move extracted files into the Nix store
    mkdir -p $out/libexec
    mv quake4-linux-1.0-demo $out/libexec
    cd $out/libexec/quake4-linux-1.0-demo
    # Remove obsolete setup files
    rm -rf setup.data
    # Patch ELF binaries
    autopatchelf .
    # Remove libgcc_s.so.1 that conflicts with Mesa3D's libGL.so
    rm ./bin/Linux/x86/libgcc_s.so.1
    # Create wrappers for the executables and ensure that they are executable
    for i in q4ded quake4
    do
        mkdir -p $out/bin
        cat > $out/bin/$i <<EOF
    #! ${stdenv.shell} -e
    cd $out/libexec/quake4-linux-1.0-demo
    ./bin/Linux/x86/$i.x86 "\$@"
    EOF
        chmod +x $out/libexec/quake4-linux-1.0-demo/bin/Linux/x86/$i.x86
        chmod +x $out/bin/$i
    done
  '';
}

For the Android SDK, the reduction in code size is even more substantial. The following Nix expression is used to patch the Android build-tools plugin package:


{deployAndroidPackage, lib, package, os, autopatchelf, makeWrapper, pkgs, pkgs_i686}:

deployAndroidPackage {
  inherit package os;
  buildInputs = [ autopatchelf makeWrapper ];

  libs_x86_64 = lib.optionalString (os == "linux")
    (lib.makeLibraryPath [ pkgs.glibc pkgs.zlib pkgs.ncurses5 ]);
  libs_i386 = lib.optionalString (os == "linux")
    (lib.makeLibraryPath [ pkgs_i686.glibc pkgs_i686.zlib pkgs_i686.ncurses5 ]);

  patchInstructions = ''
    ${lib.optionalString (os == "linux") ''
      export libs_i386=$packageBaseDir/lib:$libs_i386
      export libs_x86_64=$packageBaseDir/lib64:$libs_x86_64
      autopatchelf $packageBaseDir/lib64 libs --no-recurse
      autopatchelf $packageBaseDir libs --no-recurse
    ''}

    wrapProgram $PWD/mainDexClasses \
      --prefix PATH : ${pkgs.jdk8}/bin
  '';
  noAuditTmpdir = true;
}

The above expression specifies the library search paths per architecture for x86 (i386) and x86_64, and automatically patches the binaries in the lib64/ sub folder and the package's base directory. The autopatchelf tool ensures that no library of an incompatible architecture gets linked to a binary.

Discussion


The automated patching approach described in this blog post is not entirely a new idea -- in Nixpkgs, Aszlig Neusepoff created an autopatchelf hook that is integrated into the fixup phase of the stdenv.mkDerivation {} function. It shares a number of similar features -- it accepts a list of library packages (the runtimeDependencies environment variable) and automatically adds the provided runtime dependencies to the RPATH of all binaries in all the output folders.

There are also a number of differences -- my approach provides an autopatchelf command-line tool that can be invoked in any stage of a build process and provides full control over the patching process. It can also be used outside a Nix builder environment, which is useful for experimentation purposes. This increased level of flexibility is required for more complex prebuilt binary packages, such as the Android SDK and its plugins -- for some plugins, you cannot generalize the patching process and you typically require more control.

It also offers better support for repositories that provide binaries for multiple architectures -- while the Nixpkgs version has a check that prevents incompatible libraries from being linked, it does not give you fine-grained control over which library paths to consider for each architecture.

Another difference between my implementation and the autopatchelf hook is that it works with colon-separated library paths instead of whitespace-delimited Nix store paths. The autopatchelf hook makes the assumption that a dependency (by convention) stores all shared libraries in the lib/ sub folder.

My implementation works with arbitrary library paths and arbitrary environment variables that you can specify as a parameter. To patch certain kinds of Android plugins, you must be able to refer to libraries that reside in unconventional locations in the same package. You can even use the LD_LIBRARY_PATH environment variable (that is typically used to dynamically load libraries from a set of locations) in conjunction with autopatchelf to make dynamic library references static.
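
For example, suppose a prebuilt package ships a start script that sets LD_LIBRARY_PATH to a bundled vendor/lib directory at run time (a hypothetical layout). Assuming the invocation convention from the build-tools expression shown earlier, in which the name of the environment variable is passed as the second argument, these dynamic references can be made static as follows:


$ export LD_LIBRARY_PATH=$PWD/vendor/lib:/nix/store/7y10kn6791h88vmykdrddb178pjid5bv-glibc-2.27/lib
$ autopatchelf ./bin LD_LIBRARY_PATH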

There is also a use case that the autopatchelf command-line tool does not support -- the autopatchelf hook can also be used for source-compiled projects whose executables may need to dynamically load dependencies via the dlopen() function call.

Dynamically loaded libraries are not known at link time (because they are not provided to the Nix-wrapped ld command), and as a result, they are not added to the RPATH of an executable. The Nixpkgs autopatchelf hook allows you to easily supplement the library paths of these dynamically loaded libraries after the build process completes.

Availability


The autopatchelf command-line tool can be found in the nix-patchtools repository. The goal of this repository is to provide a collection of tools that help make the patching process of complex prebuilt packages more convenient. In the future, I may identify more patterns and provide additional tooling to automate them.

autopatchelf is prominently used in my refactored version of the Android SDK to automatically patch all ELF binaries. I intend to integrate this new Android SDK implementation into Nixpkgs soon.

Follow up


UPDATE: In the meantime, I have been working with Aszlig, the author of the autopatchelf hook, to get the functionality I need for auto patching the Android SDK integrated in Nixpkgs.

The result is that the Nixpkgs version now implements a number of similar features and is also capable of patching the Android SDK. The build-tools expression shown earlier is now implemented as follows:


{deployAndroidPackage, lib, package, os, autoPatchelfHook, makeWrapper, pkgs, pkgs_i686}:

deployAndroidPackage {
  inherit package os;
  buildInputs = [ autoPatchelfHook makeWrapper ]
    ++ lib.optionals (os == "linux") [ pkgs.glibc pkgs.zlib pkgs.ncurses5 pkgs_i686.glibc pkgs_i686.zlib pkgs_i686.ncurses5 ];

  patchInstructions = ''
    ${lib.optionalString (os == "linux") ''
      addAutoPatchelfSearchPath $packageBaseDir/lib
      addAutoPatchelfSearchPath $packageBaseDir/lib64
      autoPatchelf --no-recurse $packageBaseDir/lib64
      autoPatchelf --no-recurse $packageBaseDir
    ''}

    wrapProgram $PWD/mainDexClasses \
      --prefix PATH : ${pkgs.jdk8}/bin
  '';
  noAuditTmpdir = true;
}

In the above expression we do the following:

  • By adding the autoPatchelfHook package as a buildInput, we can invoke the autoPatchelf function in the builder environment and use it in any phase of a build process. To prevent the fixup hook from doing any work (that generalizes the patch process and makes the wrong assumptions for the Android SDK), the deployAndroidPackage function propagates the dontAutoPatchelf = true; parameter to the generic builder so that this fixup step will be skipped.
  • The autopatchelf hook uses the packages that are specified as buildInputs to find the libraries it needs, whereas my implementation uses libs, libs_i386 or libs_x86_64 (or any other environment variable that is specified as a command-line parameter). It is robust enough to skip incompatible libraries, e.g. x86 libraries for x86-64 executables.
  • My implementation works with colon-separated library paths, whereas the autopatchelf hook works with Nix store paths, making the assumption that there is a lib/ sub folder in which the libraries it needs can be found. As a result, I no longer use the lib.makeLibraryPath function.
  • In some cases, we also want the autopatchelf hook to inspect non-standardized directories, such as uncommon directories in the same package. To make this work, we can add additional paths to the search cache by invoking the addAutoPatchelfSearchPath function.

by Sander van der Burg (noreply@blogger.com) at October 30, 2018 10:57 PM

October 26, 2018

Mayflower

The NixOps resources.machines option

NixOps provides the evaluated node configurations of a deployment network in the resources.machines attribute set. Using this information, one can easily implement machine configurations that are generated from options in an existing network. For example, a reverse proxy that automatically proxies to all other webservers in the network (and could handle TLS termination for all of them) can be generated without having to manually define individual virtual hosts.

October 26, 2018 11:00 AM

October 01, 2018

Graham Christensen

Optimising Docker Layers for Better Caching with Nix

Nix users value its isolated, repeatable builds and simple sharing of development environments. Nix makes it easy to go back in time and rebuild software from years ago without issue.

At the same time, the value of the container ecosystem is huge. Tying in to the schedulers, orchestration, and monitoring is very valuable.

Nix has been able to generate Docker images for several years now; however, the typical approach to layering with Nix is to generate one fat image with all of the dependencies. This fat image offers no sharing and is slow to build, upload, and download.

In this post I talk about how I fix this problem and use Nix to automatically create multi-layered Docker images, allowing a high amount of caching between images.

Docker uses layers

Docker’s use of layering is well known, and its benefits are undeniable: sharing a “base” system is a simple abstraction which allows extending a well known image with your own code.

A Docker image is a sequence of layers, where each member is a filesystem diff, adding and removing files from its parent member:

Efficient layering is hard because there are no rules

When there are no restrictions on what a command will do, the only way to fully capture its effects is to snapshot the full filesystem.

Most package managers will write files to shared global directories like /usr, /bin, and /etc.

This means that the only way to represent the changes between installing package A and installing package B is to take a full snapshot of the filesystem.

As a user you might manually create rules to improve the behavior of the cache: add your code towards the end of a Dockerfile, or install common libraries in a single RUN instruction, even if you don’t want them all.

These rules make sense: If a Dockerfile adds code and then installs packages, Docker can’t cache the installation because it can’t know that the package installation isn’t influenced by the code addition. Docker also can’t know that installing package A has nothing to do with package B and the changes are separately cachable.

With restrictions, we can make better optimisations

Nix does have rules.

The most important and relevant rule when considering distribution and Docker layers is:

A package build can’t write to arbitrary places on the disk.

A build can only write to a specific directory known as $out, like /nix/store/ibfx7ryqnqf01qfzj4v7qhzhkd2v9mm7-file-5.34. When you add a new package to your system, you know it didn’t modify /etc or /bin.

How does file find its dependencies? It doesn’t – they are hard-coded:

$ ldd /nix/store/ibfx7ryqnqf01qfzj4v7qhzhkd2v9mm7-file-5.34/bin/file
	linux-vdso.so.1
	/nix/store/ibfx7ryqnqf01qfzj4v7qhzhkd2v9mm7-file-5.34/lib/libmagic.so.1
	/nix/store/bv6znzsv2qkbcwwa251dx7n5dshz3nr3-zlib-1.2.11/lib/libz.so.1
	/nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/lib/libc.so.6
	/nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/lib/ld-linux-x86-64.so.2

This provides great, cache-friendly properties:

  1. You know exactly what path changed when you added file.
  2. You know exactly what paths file depends on.
  3. Once a path is created, it will never change again.
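
Property 2 can be observed directly from the Nix store. For example (a usage sketch, with the output omitted), querying the references of the file package lists exactly the store paths it depends on, including the glibc and zlib paths that appear in the ldd output above:

$ nix-store --query --references /nix/store/ibfx7ryqnqf01qfzj4v7qhzhkd2v9mm7-file-5.34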

Think graphs, not layers

If you consider the properties Nix provides, you can see it already constructs a graph internally to represent software and its dependencies: it natively has a better understanding of the software than Docker is interested in.

Specifically, Nix uses a Directed Acyclic Graph to store build output, where each node is a specific, unique, and immutable path in /nix/store:

Or to use a real example, Nix itself can render a graph of a package’s dependencies:

Flattening Graphs to Layers

In a naive world we can simply walk the tree and create a layer out of each path:

and this image is valid: if you pulled any of these layers, you would automatically get all the layers below it, resulting in a complete set of dependencies.

Things get a bit more complicated for a wider graph. How do you flatten something like Bash:

If we had to flatten this to an ordered sequence, obviously bash-interactive-4.4-p23 is at the top, but does readline-7.0p5 come next? Why not bash-4.4p23?

It turns out we don’t have to solve this problem exactly, because I lied about how Docker represents layers.

How Docker really represents an Image

Docker’s layers are content addressable and aren’t required to explicitly reference a parent layer. This means a layer for readline-7.0p5 doesn’t have to mention that it has any relationship to ncurses-6.1 or glibc-2.27 at all.

Instead each image has a manifest which defines the order:

{
  "Layers": [
    "bash-interactive-4.4-p23",
    "bash-4.4p23",
    "readline-7.0p5",
     ...
  ]
}

If you have only built Docker images using a Dockerfile, then you would expect the way we flatten our graph to be critically important. If we sometimes picked readline-7.0p5 to come first and other times picked bash-4.4p23 then we may never make cache hits.

However since the Image defines the order, we don’t have to solve this impossible problem: we can order the layers in any way we want and the layer cache will always hit.

Docker doesn’t support an infinite number of layers

Docker has a limit of 125 layers, but big packages with lots of dependencies can easily have more than 125 store paths.

It is important that we still successfully build an image if we go over this limit, but what do we do with the extra layers?

In the interest of brevity, let’s pretend Docker only lets you have four layers, and we want to fit five. Out of the Bash example, which two paths do we combine into one layer?

  • bash-interactive-4.4-p23
  • bash-4.4p23
  • readline-7.0p5
  • ncurses-6.1
  • glibc-2.27

Smushing Layers

I decided the best solution is to combine the paths that are least likely to be a cache hit with other software. Picking the most low-level, fundamental paths and giving each of them a separate layer means my next image will most likely share some of those layers too.

Ideally it would end up with at least glibc and ncurses in separate layers. Visually, it is hard to tell whether readline or bash-4.4p23 would be better served as an individual layer. One of them should be, certainly.

My actual solution

My prioritization algorithm is a simple graph-based popularity contest. The idea is to weight each node more heavily the deeper it is and the more references it has.

Starting with the dependency graph of Bash from before,

we first duplicate nodes in the graph so each node is only pointed to once:

we then replace each leaf node with a counter, starting at 1:

each node whose only children are counters is then combined with its children, the children’s counters summed and then incremented:

we then repeat the process:

we repeat this process until there is only one node:

and finally we sort the paths in each popularity bucket by name, so the list is generated consistently, and we obtain the paths ordered by cachability:

  • glibc-2.27: 10
  • ncurses-6.1: 4
  • bash-4.4-p23: 2
  • readline-7.0p5: 2
  • bash-interactive-4.4-p23: 1

This solution properly puts the foundational paths that are most commonly referred to at the top, improving their chances of a cache hit. The algorithm has also put the likely-to-change application right at the bottom, in case the last layers need to be combined.

Let’s consider a much larger image. In this image, we set the maximum number of layers to 120, but the image has 200 store paths. Under this design the 119 most fundamental store paths will have their own layers, and we store the remaining 81 paths together in the 120th layer.


With this new approach of automatically layering store paths I can now generate images with very efficient caching between different images.

Consider a practical example: a PHP application with a MySQL database.

First we build a MySQL image:

# mysql.nix
let
  pkgs = import (builtins.fetchTarball {
    url = "https://github.com/grahamc/nixpkgs/archive/layered-docker-images.tar.gz";
    sha256 = "05a3jjcqvcrylyy8gc79hlcp9ik9ljdbwf78hymi5b12zj2vyfh6";
  }) {};
in pkgs.dockerTools.buildLayeredImage {
  name = "mysql";
  tag = "latest";
  config.Cmd = [ "${pkgs.mysql}/bin/mysqld" ];
  maxLayers = 120;
}
$ nix-build ./mysql.nix
$ docker load < ./result
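
As a quick sanity check (a usage sketch, assuming jq is available, just like in the inspection commands further below), we can count how many layers the loaded image ended up with:

$ docker inspect mysql | jq '.[0].RootFS.Layers | length'

The reported count should stay within the configured maxLayers value.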

Then we build a PHP image:

# php.nix
let
  pkgs = import (builtins.fetchTarball {
    url = "https://github.com/grahamc/nixpkgs/archive/layered-docker-images.tar.gz";
    sha256 = "05a3jjcqvcrylyy8gc79hlcp9ik9ljdbwf78hymi5b12zj2vyfh6";
  }) {};
in pkgs.dockerTools.buildLayeredImage {
  name = "grahamc/php";
  tag = "latest";
  config.Cmd = [ "${pkgs.php}/bin/php" ];
  maxLayers = 120;
}
$ nix-build ./php.nix
$ docker load < ./result

and export the layer lists of the two images:

$ docker inspect mysql | jq -r '.[] | .RootFS.Layers | .[]' | sort > mysql
$ docker inspect php | jq -r '.[] | .RootFS.Layers | .[]' | sort > php

and look at this: the PHP and MySQL images share twenty layers:

$ comm -1 -2 php mysql
sha256:0296e7b7d4b7d593fc533f06c5cc22f56c99c8ab0ed4301a6c7829ce8b18c6fa
sha256:1114fa18ba6be7e5db7728ae747a6bf0aab48fedf3ed9e95aff1f9ce51903698
sha256:181ca82fe4d920b46488d29b507d432dd121d9b2842cf8c720c0939e03e4d6a0
sha256:190b2db337109cb9692cd004aeed4b06fef0c10c700a9ba7939d73e697e3bbcc
sha256:1c4afa44a489abf7c7fe8fa642647a8a2786bf7581486e6a308e1078484784e6
sha256:424ad95540cb66adb9e16d768ccc7010923a9ca09dda1f56e4a08804c2de12e6
sha256:537b4796037a46b64fa39c42f642925704abbaab475e5c72a8fc15c258dc1e85
sha256:69bf797b6b6763938b98139814d8be884693806e0e5a50ac4fbbf11eb45f4f27
sha256:6c7f6a555dee6eebfc5497e84b1cacb4fff035e2101d8421768c0c2db54e8da2
sha256:7dcff293e9f411c63d6f1365795a89ac0058995de0f192ad9fb103ab56533ed3
sha256:86b1c4990ca1c80ad5dc0977fe37ab0f80e0f95e03fe79769b451d97e9f7a8f3
sha256:9d8278ca2542af2ff9cf2d8de439758f6b3bbb84bbe6b7a44edf8080b73d2949
sha256:a5c8632ba0135b465956281008a4f9c2263232d92c020b56e1d839aaa4b74834
sha256:b9fe88bf1364613ec01968480d7cb305d69e3de78bb4a56e3448298ffcc25139
sha256:c0c9c2eaa522dd31901c49a40577a68e3ae02cc75226a248813134046b299099
sha256:c2f4e79836f999ff389b82b8636b807c2baa5b702509b2991e651508519857d5
sha256:cdfb1dfb3ca2f8df9e87e5fa33b91a402a8b4ad0ede164dbdf4c25aded618ed3
sha256:ffd8e3d222cb85f642677642037a0e7886a565babdc0e229cc83147895a8ed2c
sha256:ffecd238fa95b110a1b5f71034b2bd358358758abb52fd098241200d94111979

Where before you wouldn’t bother trying to have your application and database images share layers, with Nix the layer sharing is completely automatic.

The automatic splitting and prioritization has improved image push and fetch times by an order of magnitude. Having multiple layers allows Docker to request more than one at a time.

Thank you Target for having sponsored this work with Tweag in NixOS/nixpkgs#47411.

October 01, 2018 12:00 AM

September 21, 2018

Sander van der Burg

Creating Nix build function abstractions for pluggable SDKs

Two months ago, I decomposed the stdenv.mkDerivation {} function abstraction in the Nix packages collection, which is basically the de facto way in the Nix expression language to build software packages from source.

I identified some of its major concerns and developed my own implementation that is composed of layers in which each layer gradually adds a responsibility until it has most of the features that the upstream version also has.

In addition to providing a better separation of concerns, I also identified a pattern that I repeatedly use to create these abstraction layers:


{stdenv, foo, bar}:
{name, buildInputs ? [], ...}@args:

let
  extraArgs = removeAttrs args [ "name" "buildInputs" ];
in
stdenv.someBuildFunction ({
  name = "mypackage-"+name;
  buildInputs = [ foo bar ] ++ buildInputs;
} // extraArgs)

Build function abstractions that follow this pattern (as outlined in the code fragment shown above) have the following properties:

  • The outer function header (first line) specifies all common build-time dependencies required to build a project. For example, if we want to build a function abstraction for Python projects, then python is such a common build-time dependency.
  • The inner function header specifies all relevant build parameters and accepts an arbitrary number of arguments. Some arguments have a specific purpose for the kind of software project that we want to build (e.g. name and buildInputs) while other arguments can be passed verbatim to the build function abstraction that we use as a basis.
  • In the body, we invoke a function abstraction (quite frequently stdenv.mkDerivation {}) that builds the project. We use the build parameters that have a specific meaning to configure specialized build properties, and we pass all remaining, non-conflicting build parameters verbatim to the build function that we use as a basis.

    A subset of these arguments have no specific meaning and are simply exposed as environment variables in the builder environment.

    Because some parameters are already being used for a specific purpose and others may be incompatible with the build function that we invoke in the body, we compose a variable named extraArgs in which we remove the conflicting arguments.

Aside from having a function that is tailored towards the needs of building a specific software project (such as a Python project), using this pattern provides the following additional benefits:

  • A build procedure is extendable/tweakable -- we can adjust the build procedure by adding or changing the build phases, and tweak them by providing build hooks (that execute arbitrary command-line instructions before or after the execution of a phase). This is particularly useful to build additional abstractions around it for more specialized deployment procedures.
  • Because an arbitrary number of arguments can be propagated (that can be exposed as environment variables in the build environment), we have more configuration flexibility.

The original objective of using this pattern is to create an abstraction function for GNU Make/GNU Autotools projects. However, this pattern can also be useful to create custom abstractions for other kinds of software projects, such as Python, Perl, Node.js etc. projects, that also have (mostly) standardized build procedures.

After completing the blog post about layered build function abstractions, I have been improving the Nix packages/projects that I maintain. In the process, I also identified a new kind of packaging scenario that is not yet covered by the pattern shown above.

Deploying SDKs


In the Nix packages collection, most build-time dependencies are fully functional software packages. Notable exceptions are so-called SDKs, such as the Android SDK -- the Android SDK "package" is only a minimal set of utilities (such as a plugin manager, AVD manager and monitor).

In order to build Android projects from source code and manage Android app installations, you need to install a variety of plugins, such as build-tools, platform-tools, platform SDKs and emulators.

Installing all plugins is typically a much too costly operation -- it requires you to download many gigabytes of data. In most cases, you only want to install a very small subset of them.

I have developed a function abstraction that makes it possible to deploy the Android SDK with a desired set of plugins, such as:


with import <nixpkgs> {};

let
  androidComposition = androidenv.composeAndroidPackages {
    toolsVersion = "25.2.5";
    platformToolsVersion = "27.0.1";
    buildToolsVersions = [ "27.0.3" ];
    includeEmulator = true;
    emulatorVersion = "27.2.0";
  };
in
androidComposition.androidsdk

When building the above expression (default.nix) with the following command-line instruction:


$ nix-build
/nix/store/zvailnl4f1261cn87s9n29lhj9i7y7iy-androidsdk

We get an Android SDK installation with tools plugin version 25.2.5, platform-tools version 27.0.1, one instance of the build-tools (version 27.0.3) and an emulator of version 27.2.0. The Nix package manager will download the required plugins automatically.

Writing build function abstractions for SDKs


If you want to create function abstractions for software projects that depend on an SDK, you not only have to execute a build procedure, but you must also compose the SDK in such a way that all plugins that the project requires are installed. If any of the mandatory plugins are missing, the build will most likely fail.

As a result, the function interface must also provide parameters that allow you to configure the plugins in addition to the build parameters.

A very straightforward approach is to write a function whose interface contains both the plugin and the build parameters, and propagates each of the required parameters to the SDK composition function. However, manually writing this mapping has a number of drawbacks -- it duplicates functionality of the SDK composition function, it is tedious to write, and it makes it very difficult to stay consistent when the SDK's functionality changes.

As a solution, I have extended the previously shown pattern with support for SDK deployments:


{composeMySDK, stdenv}:
{foo, bar, ...}@args:

let
  mySDKFormalArgs = builtins.functionArgs composeMySDK;
  mySDKArgs = builtins.intersectAttrs mySDKFormalArgs args;
  mySDK = composeMySDK mySDKArgs;
  extraArgs = removeAttrs args ([ "foo" "bar" ]
    ++ builtins.attrNames mySDKFormalArgs);
in
stdenv.mkDerivation ({
  buildInputs = [ mySDK ];
  buildPhase = ''
    ${mySDK}/bin/build
  '';
} // extraArgs)

In the above code fragment, we have added the following steps:

  • First, we dynamically extract the formal arguments of the function that composes the SDK (mySDKFormalArgs).
  • Then, we compute the intersection of the formal arguments of the composition function and the actual arguments from the build function arguments set (args). The resulting attribute set (mySDKArgs) contains the actual arguments we need to propagate to the SDK composition function.
  • The next step is to deploy the SDK with all its plugins by propagating the SDK arguments set as function parameters to the SDK composition function (mySDK).
  • Finally, we remove the arguments that we have passed to the SDK composition function from the extra arguments set (extraArgs), because these parameters have no specific meaning for the build procedure.

With this pattern, the build abstraction function evolves automatically with the SDK composition function without requiring me to make any additional changes.

To build an Android project from source code, I can write an expression such as:


{androidenv}:

androidenv.buildApp {
  # Build parameters
  name = "MyFirstApp";
  src = ../../src/myfirstapp;
  antFlags = "-Dtarget=android-16";

  # SDK composition parameters
  platformVersions = [ 16 ];
  toolsVersion = "25.2.5";
  platformToolsVersion = "27.0.1";
  buildToolsVersions = [ "27.0.3" ];
}

The expression shown above has the following properties:

  • The above function invocation propagates three build parameters: name referring to the name of the Nix package, src referring to a filesystem location that contains the source code of an Android project, and antFlags that contains command-line arguments that are passed to the Apache Ant build tool.
  • It propagates four SDK composition parameters: platformVersions referring to the platform SDKs that must be installed, toolsVersion to the version of the tools package, platformToolsVersion to the platform-tools package and buildToolsVersions to the build-tools packages.

By evaluating the above function invocation, the Android SDK with the plugins will be composed, and the corresponding SDK will be passed as a build input to the builder environment.

In the build environment, Apache Ant gets invoked to build the project from source code. The androidenv.buildApp implementation will dynamically propagate the SDK composition parameters to the androidenv.composeAndroidPackages function.

Availability


The extended build function abstraction pattern described in this blog post is among the structural improvements I have been implementing in the mobile app building infrastructure in Nixpkgs. Currently, it is used in standalone test versions of the Nix android build environment, iOS build environment and Titanium build environment.

The Titanium SDK build function abstraction (a JavaScript-based cross-platform development framework that can produce Android, iOS, and several other kinds of applications from the same codebase) automatically composes both Xcode wrappers and Android SDKs to make the builds work.

The test repositories can be found on my GitHub page and the changes live in the nextgen branches. At some point, they will be reintegrated into the upstream Nixpkgs repository.

Besides mobile app development SDKs, this pattern is generic enough to be applied to other kinds of projects as well.

by Sander van der Burg (noreply@blogger.com) at September 21, 2018 10:30 PM

September 11, 2018

Mayflower

Building Customised NixOS Images

To set up a NixOS system, you usually boot into a live NixOS system and install it onto a local disk as outlined in the manual. You can then modify the system configuration to tailor it to your needs. The build system Hydra builds live images like ISO images, container tarballs or AMIs based on their definition in nixpkgs. These images are made available for download on the official website.

September 11, 2018 06:00 PM