NixOS Planet

July 11, 2019


Leveraging NixOS Tests in your Project

NixOS contains infrastructure for building integration tests based on QEMU/KVM virtual machines running NixOS. Tests built on this infrastructure are continuously run on new nixpkgs versions to ensure that NixOS continues to install and boot and that various services continue to operate correctly. This post illustrates how one may test a simple web service using NixOS tests. To have a simple enough example at hand, we wrote a small service in PHP—a classical guestbook in which visitors can leave a message that will then be written to a database and subsequently shown to later visitors of the same site.
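To give an impression of what such a test looks like, here is a hedged sketch of a NixOS VM test for a guestbook-style PHP service, using the 2019-era Perl test driver; the machine configuration and assertions are illustrative, not the exact code from the post.

```nix
# Sketch of a NixOS test: boot a VM with Apache/PHP and MySQL, then
# assert that the web service responds. Details are illustrative.
import <nixpkgs/nixos/tests/make-test.nix> ({ pkgs, ... }: {
  name = "guestbook";

  machine = { ... }: {
    services.httpd.enable = true;
    services.httpd.adminAddr = "admin@example.org";
    services.httpd.enablePHP = true;
    services.mysql.enable = true;
    services.mysql.package = pkgs.mariadb;
  };

  testScript = ''
    $machine->waitForUnit("httpd.service");
    $machine->waitForUnit("mysql.service");
    $machine->succeed("curl --fail http://localhost/ >&2");
  '';
})
```

Running such a file with nix-build spins up the QEMU/KVM virtual machine, executes the test script, and fails the build if any assertion fails.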

July 11, 2019 03:00 PM

July 09, 2019

Hercules Labs

Hercules CI #5 update: requiredSystemFeatures, Cachix and Darwin support

What’s new?

We’ve released hercules-ci-agent 0.3, which brings in Cachix and Darwin (macOS) support along with requiredSystemFeatures.


TOML configuration

Previously the agent was configured via CLI options. Those are now all part of a configuration file formatted using TOML.

Support for binary caches

Added support for Cachix binary caches, so that resulting binaries can be shared with the public, between developers, or just between multiple agents.

Multi-agent and Darwin support

With binary caches to share derivations and binaries between machines, you’re now able to have multiple agents running.

Sharing binaries between machines takes time (bandwidth) so we recommend upgrading agent hardware over adding more agents.

In addition to Linux, Darwin (macOS) also became a supported deployment platform for the agent.

requiredSystemFeatures support

Derivations are now also dispatched based on the requiredSystemFeatures derivation attribute, which allows dispatching specific derivations to specific agents.
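As a hedged sketch of what this looks like from the Nix side (the package name and feature name are illustrative, not from the release notes), a derivation opts into feature-based dispatch via the attribute:

```nix
# Sketch: a derivation carrying requiredSystemFeatures, so it is only
# dispatched to agents/builders that advertise the listed features.
# "big-parallel" and the package name are illustrative.
with import <nixpkgs> { };

stdenv.mkDerivation {
  name = "expensive-build";
  src = ./.;
  requiredSystemFeatures = [ "big-parallel" ];
}
```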

Cachix 0.2.1

Upgrade via the usual:

$ nix-env -iA cachix -f  

The most notable improvement is that the default compression level has been lowered to increase throughput; it’s overridable via `--compression-level`.

See Changelog for more details.

What’s next?

Known issues we’re resolving:

  • Builds that are in progress while the agent is restarted won’t be re-queued. We’re prioritizing this one; expect a bugfix in the next deployment.
  • Evaluation and building is slower with Cachix. We’re going to add bulk query support and upstream caches to mitigate that.
  • Having a lot of failed derivations (>10k) makes the frontend unresponsive.
  • Cachix auth tokens for private binary caches are personal. We’ll add support to create tokens specific to a cache.

If you notice any other bugs or annoyances please let us know.

Preview phase

The preview phase will now extend to all subscribers, which is the final phase before we’re launching publicly.

You can also receive our latest updates via Twitter or read the previous development update.

July 09, 2019 12:00 AM

July 04, 2019

Munich NixOS Meetup

NixOS Munich Community Meetup


The theme of this meetup is: How do you use the Nix ecosystem in your projects?

We (Mayflower) will showcase how we use GitLab, Hydra & NixOps for Continuous Integration and Deployment. If you want to share your setup with us, just show up and show us. :-)

ATTENTION: The Mayflower Munich office has moved to Laim! Please note the new address!

Food and beverages will be provided. We will BBQ on our new rooftop terrace!

München 80687 - Germany

Thursday, July 4 at 6:30 PM


July 04, 2019 03:24 PM

May 15, 2019

Hercules Labs

gitignore for Nix


Nix, when used as a development build tool, needs to solve the same problem that git does: ignore some files. We’ve extended nix-gitignore so that Nix can more reliably use the configuration that you’ve already written for git.


When you tell Nix to build your project, you need to tell it which source files to build. This is done by using path syntax in a derivation or string interpolation.

mkDerivation {
  src = ./vendored/cowsay;
  postPatch = ''
    # Contrived example of using a file in string interpolation.
    # The patch file is put in /nix/store and the interpolation
    # produces the appropriate store path.
    patch -lR ${./cowsay-remove-alpaca.patch}
  '';
  # ...
}

This works well, until you find that Nix unexpectedly rebuilds your derivation because a temporary, hidden file has changed. One of those files you filtered out of your git tree with a ‘gitignore’ file…

Nix, as a build tool or package manager, was not designed with any specific version control system in mind. In fact it predates any dominance of git, because Nix’s general solution to the file ignoring problem, filterSource, was already implemented in 2007.
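For reference, a minimal filterSource sketch (the predicate shown is illustrative): it copies a source tree into the store, omitting every path for which the predicate returns false.

```nix
# Sketch of builtins.filterSource: copy ./. to the store, but skip the
# .git directory and editor backup files. The predicate is illustrative.
builtins.filterSource
  (path: type:
    baseNameOf path != ".git"
    && builtins.match ".*~" (baseNameOf path) == null)
  ./.
```

Writing such predicates by hand is exactly the duplication of gitignore rules that the approaches below try to avoid.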

Over the last two to three years, various people have written functions to reuse these gitignore files. We have been using an implementation by @siers over the last couple of months and it has served us well, until we had a gitignore file that wasn’t detected because it was in a parent directory of the source directory we wanted to use.

I was nerd sniped.

Two months later, I finally got around to the implementation and I’m happy to announce that it solves some other problems as well. It reuses the tested rules by siers, doesn’t use import from derivation and can read all the files that it needs to.


You can import the gitignoreSource function from the repo like below, or use your favorite pinning method.

{ pkgs ? import <nixpkgs> {} }:
let
  inherit (pkgs.stdenv) mkDerivation;
  inherit (import (builtins.fetchTarball "") { }) gitignoreSource;
in
mkDerivation {
  src = gitignoreSource ./vendored/cowsay;
  postPatch = ''
    patch -lR ${./cowsay-remove-alpaca.patch}
  '';
  # ...
}

That’s all there is to it.

It also composes with cleanSourceWith if you like to filter out some other files as well.
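For example, a sketch of such a composition (the extra filter is made up, and the tarball URL is elided as above):

```nix
# Sketch: composing gitignoreSource with lib.cleanSourceWith to also
# drop a file that git does not ignore. The extra filter is made up.
{ pkgs ? import <nixpkgs> {} }:
let
  inherit (import (builtins.fetchTarball "") { }) gitignoreSource;
in
pkgs.lib.cleanSourceWith {
  src = gitignoreSource ./vendored/cowsay;
  filter = path: type: baseNameOf path != "TODO.md";
}
```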


Here’s a comparison with the pre-existing implementation I found.

An up-to-date comparison table is available in the repo.

| Feature \ Implementation | cleanSource | siers | siers recursive | icetan | Profpatsch | numtide | this project |
|---|---|---|---|---|---|---|---|
| Ignores .git | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| No special Nix configuration | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |  | ✔️ |
| No import from derivation | ✔️ | ✔️ |  | ✔️ | ✔️ | ✔️ | ✔️ |
| Uses subdirectory gitignores |  |  | ✔️ |  |  | ✔️ | ✔️ |
| Uses parent gitignores |  |  |  |  |  | ✔️ ? | ✔️ |
| Uses user gitignores |  |  |  |  |  | ✔️ | ✔️ |
| Has a test suite |  | ✔️ | ✔️ | ✔️ |  | ? | ✔️ |
| Works with restrict-eval / Hydra | ✔️ | ✔️ |  | ✔️ | ✔️ |  | ✔️ |
| Included in nixpkgs | ✔️ | ✔️ | ✔️ |  |  |  |  |

Legend:
✔️ Supported
✔️ ? Probably supported
(blank) Not supported
? Probably not supported
- Not applicable or depends

Inclusion in Nixpkgs

I think it would be really nice to have this function in Nixpkgs, but it needs to be tested in practice first. This is where you can help out! Please give the project (GitHub) a spin and leave a thumbs up if it worked for you (issue).

Closing thoughts

I am happy to contribute to the friendly and inventive Nix community. Even though this gitignore project is just a small contribution, it wouldn’t have been possible without the ideas and work of siers, icetan, and everyone behind Nix and Nixpkgs in general.

As a company we are working hard to make good products to support the community and companies that want to use Nix. One of our goals is to keep making contributions like this, so please try our binary cache as a service, which is free for open source and just as easy to set up privately for companies. If you have an interest in our Nix CI, please subscribe.

– Robert

May 15, 2019 12:00 AM

May 14, 2019

Hercules Labs

Hercules CI #3 development update

What’s new?

Precise derivations improvements

Dependency failure tree

If a dependency failed for an attribute, you can now explore the dependency stack down to the actual build failure.

There’s also a rebuild button to retry the build for the whole stack, from the failed dependency up to and including the build you clicked. We’ve addressed some of the styling issues visible on smaller screens.

Fixed an issue where users would end up being logged out

hercules-ci-agent 0.2

  • use gitignore instead of nix-gitignore
  • fix build on Darwin
  • limit internal concurrency to max eight OS threads for beefier machines
  • show version on --help
  • build against NixOS 19.03 as default
  • propagate agent information to agent view: Nix version, substituters, platform and Nix features

Focus for the next sprint

Cachix and thus Darwin support

The last bits missing (besides testing) are sharing derivations and artifacts between agents using cachix and the ease of Darwin agent deployment with accompanying documentation.

Stuck jobs when restarting the agent

Currently when you restart an agent that is doing work, jobs claimed by the agent will appear stuck in the queue. This sprint is planned to ship a way to remedy the issue manually via the UI. Later on it will be automatically handled by agent ping-alive.

Preview phase

Once we’re done with Darwin and Cachix support, we’ll hand out preview access to everyone who has signed up for it.

You can also receive our latest updates via Twitter or read the previous development update.

May 14, 2019 12:00 AM

May 06, 2019

Matthew Bauer

Nixpkgs macOS Stdenv Updates

Over the past couple of months, I have been working on updating the macOS stdenv in Nixpkgs. This has significant impact on users of Nix/Nixpkgs on macOS. So, I want to explain what’s being updated, what the benefits are, and how we can minimize breakages.

1 macOS/Darwin stdenv changes

First, to summarize the changes that impact stdenv and the Darwin infrastructure. The PR is available at NixOS/nixpkgs PR #56744. This update has been in the works for the last few months, and is currently in the staging-next branch, waiting to be merged in NixOS/nixpkgs PR #60491. It should land on master and nixpkgs-unstable in the next few days. The main highlights are —

  • Change default LLVM toolchain from 5 to 7. LLVM 5 stdenv is still available through llvmPackages_5.stdenv attribute path.
  • Upgrade Apple SDK from 10.10 to 10.12.
  • Update libSystem symbols from 10.10 to 10.12 (XNU 3789.1.32).
  • Removed old patches to support old stdenv in Qt 5 and elsewhere.

These macOS SDK upgrades are equivalent to setting -mmacosx-version-min to 10.12 in Xcode. As a result, we will break compatibility with macOS releases before 10.12.
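If a package still needs the old toolchain, the LLVM 5 stdenv mentioned above can be selected explicitly. A sketch (the package name is illustrative):

```nix
# Sketch: building a package against the older LLVM 5 stdenv on Darwin
# after the default moves to LLVM 7. The package name is illustrative.
with import <nixpkgs> { };

llvmPackages_5.stdenv.mkDerivation {
  name = "needs-old-clang";
  src = ./.;
}
```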

2 Why do we need to set a minimum macOS version?

Without knowing internals of Nixpkgs, it might not be clear why we need to set a minimum macOS version. For instance with Linux, we are able to support any Linux kernel in Nixpkgs without any problem. The answer to this requires some understanding of how the kernel and userspace function.

Nixpkgs is able to support multiple Linux kernels because we can use multiple Libc’s at one time. Any executable’s Nix closure includes both its own Libc and its dynamic linker. This works on Linux, where multiple Libc’s can be used, but not on macOS, where only one Libc is available.

In short, Linux and macOS deal with compatibility between built binaries in different ways. They represent two opposite ends in how Unix-like kernels maintain compatibility with their userspace binaries.

2.1 Linux syscall compatibility

The kernel is responsible for managing core operating system functions such as start-up, memory management, device abstractions, and process isolation. For it to function, the kernel needs to interact with the rest of the operating system which is collectively referred to as “userspace”. Executables in userspace use “syscalls” to tell the kernel what to do. These syscalls are very low-level and usually not called directly by a process. Instead, an abstraction layer is provided by the standard C library, or Libc.

Linux is unique among operating systems in that the kernel and Libc are developed independently: Linux is maintained by its creator Linus Torvalds and a community of contributors, while Glibc, the most popular Libc for Linux, is maintained by the GNU project. As a result, Linux has a strong separation between syscalls and Libc.

Linux does not tie itself to any specific Libc. Even though Glibc is used in almost all distros, many alternatives are available. For instance, Musl provides a more lightweight version of Glibc, while Bionic is the Libc used in the Android operating system. In addition, multiple versions of each of these Libc’s can be used on any one kernel, even at the same time. This can become very common when using multiple Nixpkgs versions at one time.

To accomplish this, Linux provides a stable list of syscalls that it has maintained across many versions. This is specified for i386 at arch/x86/entry/syscalls/syscall_32.tbl in the kernel tree. The syscalls specified here are the interface through which the Libc communicates with the kernel. As a result, applications built in 1992 can run on a modern kernel, provided it comes with copies of all its libraries[1].

2.2 macOS Libc compatibility

The macOS Libc is called libSystem. It is available on all macOS systems at /usr/lib/libSystem.B.dylib. This library is the main interface through which binary compatibility is maintained in macOS. Unlike Linux, macOS maintains a stable interface in libSystem that all executables are expected to link to. This interface is guaranteed by Apple to be stable between versions.

In Nixpkgs, we maintain this compatibility through a list of symbols that are exported by libSystem. This is a simple text list and is available for viewing at NixOS/nixpkgs/pkgs/os-specific/darwin/apple-source-releases/Libsystem/system_c_symbols. The symbol list is created by listing symbols (nm) on the minimum macOS version that we support (for my PR, 10.12). We do some linking tricks to ensure that everything that we build in Nixpkgs only contains those symbols. This means that we can reproducibly build on newer versions of macOS, while maintaining compatibility with older macOS versions. Unfortunately, newer symbols introduced in later versions cannot be used even on systems that have those symbols.

A side effect of macOS’s design is that fully static executables are not supported on macOS as they are on Linux. Without a stable syscall interface, there is nothing to provide compatibility between versions. As a result, Apple does not support this type of linking[2].

There is no mandated reason why we need to use libSystem directly. In fact, some languages like Go have attempted to use the syscall interface directly instead. There is no reason why this couldn’t work; however, upgrades between versions will almost certainly break binaries. Go eventually abandoned this scheme in time for Go 1.12 (proposed by Nixpkgs macOS contributor copumpkin!).

2.3 Others

Some other examples may be useful. They mostly fall on one side or the other of the Syscall / Libc divide —

  • FreeBSD - breaks syscall compatibility between major releases; you should use Libc for long-term binary compatibility.
  • OpenBSD - similarly, changes the syscall interface, perhaps even more often than FreeBSD[3].
  • NetBSD - apparently has maintained syscall compatibility since 1992.[4]
  • Windows, Solaris, Fuchsia - I cannot find any information on these and how they handle binary compatibility.

2.4 LLVM triple

As a side note, this difference can be clearly seen in how we specify target systems. The LLVM triple is a three- or four-part string specifying what we want to build for. The parts of the triple correspond to:

  • <cpu> — the CPU architecture that we are building for. Examples include x86_64, aarch64, armv7l, etc.
  • <vendor> — an arbitrary string specifying the vendor for the toolchain. In Nixpkgs, this should always be unknown.
  • <kernel> — the kernel to build for (linux).
  • <abi> — the kernel ABI to use. On Linux, this corresponds to the Libc we are using (gnu for Glibc, musl for Musl, android for Bionic).

When building for Linux, we can build for any version of Linux at one time. No version information is required. In addition, we must specify what “ABI” we want to use. In Nix, this is not very important because the Libc is provided by the closure. In fact, Nix has its own version of the LLVM triple called a Nix system tuple that omits the <abi> portion altogether! It corresponds to <cpu>-<kernel> from the LLVM triple.
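To make the distinction concrete, here is a hedged sketch comparing the two strings via nixpkgs’ platform attributes (the example values are illustrative and depend on the host):

```nix
# Sketch: the short Nix system tuple vs. the full LLVM-style triple,
# as exposed by nixpkgs' stdenv.hostPlatform.
with import <nixpkgs> { };

{
  # <cpu>-<kernel>, for example "x86_64-linux"
  system = stdenv.hostPlatform.system;
  # full triple including <vendor> and <abi>,
  # for example "x86_64-unknown-linux-gnu"
  config = stdenv.hostPlatform.config;
}
```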

In comparison, when building for BSDs, we must specify which version of the kernel we are building for. In addition, we leave off the <abi> portion, because there is only one Libc available for these platforms. They are even included in the same tree as the kernel. Examples of BSD triples include:

  • aarch64-apple-darwin16.0.0
  • x86_64-unknown-freebsd12.0
  • i386-unknown-openbsd5.8
  • armv7l-unknown-netbsd7.99

3 Compatibility table

Looking through the old versions, I’ve compiled a list of what I think are the corresponding macOS versions for each Nixpkgs release. As you can see, we try to support at least 3 previous macOS releases. This also happens to be about what Apple supports through security updates[5].

| Nixpkgs release | macOS versions |
|---|---|
| 19.09 | 10.12, 10.13, 10.14, 10.15? |
| 19.03 | 10.11[6], 10.12, 10.13, 10.14 |
| 18.09 | 10.11, 10.12, 10.13, 10.14 |
| 18.03 | 10.11, 10.12, 10.13, 10.14 |
| 17.09 | 10.10, 10.11, 10.12, 10.13 |
| 17.03 | 10.10, 10.11, 10.12 |
| 16.09 | 10.10, 10.11, 10.12 |
| 16.03 | 10.9?, 10.10, 10.11, 10.12 |

We know that some users are stuck on older versions of macOS due to reasons outside of their control. As a result, we will try to support the 19.03 branch for a little bit longer than is usually done. If your organization uses 10.11, it might be a good idea to update to a newer version along with your update to Nixpkgs 19.09.

4 Conclusion

My main goal has been to show how Nixpkgs and the macOS system interact. I got a little sidetracked exploring differences in binary compatibility between operating systems. But this should help users better understand how macOS and Linux differ in relation to Nixpkgs.



[1] It would be interesting to test this in practice. Finding a Libc that would work might be the hardest part. Even better if we could use Nix’s closures!


[4] According to the_why_of_y on Hacker News,


[5] macOS updates come out about every year and Apple offers about 3 months of support. More information is available at


[6] There is an issue with building on 10.11 with the new swift-corelibs derivation. As a result, you need to use a prebuilt version to avoid this issue.

May 06, 2019 12:00 AM

April 30, 2019

Hercules Labs

Sprint #2 development update

What’s new in sprint #2?

Precise derivations

Agent 0.2 will communicate the structure of the derivation closure to the service, which allows us to traverse the derivation tree and dispatch each derivation to multiple agents.

Neither source nor binary data used by Nix on the agent is ever sent to the service.

We will release agent 0.2 after more testing and UI improvements.

Rich git metadata

Each job now includes a branch name, commit message and the committer:

Job rich metadata

Job page information

The job page shows information that triggered the build and timing information:

Job page

Focus for sprint #3

Cachix support

We’ve already made progress on parsing the Cachix configuration, but the actual pushing of derivations remains to be done.

Darwin / Features support

Now that precise derivations are working, dispatch needs to become platform-aware for Darwin support. The same goes for the infamous Nix “features”, which work like labels that can be used to dispatch individual derivations to specific groups of agents.

Preview phase

You’re still in time to sign up for preview access or follow us on Twitter as we will be expanding access in the coming weeks.

April 30, 2019 12:00 AM

April 16, 2019

Hercules Labs

Sprint #1 development update

Two weeks ago we launched preview access of our CI for Nix users. Thank you all for giving us the feedback through the poll and individually. We are overwhelmed with the support we got.

The focus of the preview launch was to build a fast, reliable, easy-to-use CI. Today you can connect your GitHub repository in a few clicks, deploy an agent, and all your commits are tested with their status reported to GitHub.

In our latest sprint we have fixed a few issues, mainly centered around usability and clarity of what’s going on with your projects.

The following features and bug fixes were shipped:

  • When there is no agent available, enqueued jobs will show instructions to set one up

  • To prevent CSRF, we’ve tightened the SameSite cookie attribute from Lax to Strict

  • The CDN used to serve stale assets due to a caching misconfiguration

  • Numerous fixes to the UI:

    • breadcrumbs now allow the user to switch accounts or just navigate to one
    • no more flickering when switching pages
    • some jobs used to be stuck in Building phase
    • more minor improvements

In our upcoming sprint, #2 we will focus on:

  • Fine-grained dispatch of individual derivations (instead of just the top-level derivation closures from attributes, as shipped in the preview) - what follows is testing and presenting derivations in the UI

  • Currently we only store the git revision for each job; this will be expanded to include more metadata like branch name, commit message, author, etc.

  • If time allows, preliminary cachix support

You’re still in time to sign up for preview access as we will be expanding access in the following weeks.

April 16, 2019 12:00 AM

March 07, 2019

Hercules Labs

Announcing Cachix private binary caches and 0.2.0 release

In March 2018 we set out on a mission to streamline Nix usage in teams. Today we are shipping Nix private binary cache support in Cachix.

You can now share an unlimited number of binary caches in your group of developers, protected from public use with just a few clicks.

Authorization is based on GitHub organizations/teams (if this is a blocker for you, let us know what you need).

To get started, head over to and choose a plan that suits your private binary cache needs:

Create Nix private binary cache

At the same time, the cachix 0.2.0 CLI is out with major improvements for NixOS usage. If you run the following as root you’ll get:

$ cachix use hie-nix
Cachix configuration written to /etc/nixos/cachix.nix.
Binary cache hie-nix configuration written to /etc/nixos/cachix/hie-nix.nix.

To start using cachix add the following to your /etc/nixos/configuration.nix:

    imports = [ ./cachix.nix ];

Then run:

    $ nixos-rebuild switch

Thank you for your feedback in the poll answers. It’s clear what we should do next:

  1. Multiple signing keys (key rotation, multiple trusted users, …)

  2. Search over binary cache contents

  3. Documentation

Happy cache sharing!

March 07, 2019 12:00 AM

February 27, 2019

Sander van der Burg

Generating functional architecture documentation from Disnix service models

In my previous blog post, I have described a minimalistic architecture documentation approach for service-oriented systems based on my earlier experiences with setting up basic configuration management repositories. I used this approach to construct a documentation catalog for the platform I have been developing at Mendix.

I also explained my motivation -- it improves developer effectiveness, team consensus and the on-boarding of new team members. Moreover, it is a crucial ingredient in improving the quality of a system.

Although we are quite happy with the documentation, my biggest inconvenience is that I had to derive it entirely by hand -- I consulted various kinds of sources, but since existing documentation and information provided by people may be incomplete or inconsistent, I considered the source code and deployment configuration files the ultimate source of truth, because no matter how elegantly a diagram is drawn, it is useless if it does not match the actual implementation.

Because a manual documentation process is very costly and time consuming, a more ideal situation would be to have an automated approach that automatically derives architecture documentation from deployment specifications.

Since I am developing a deployment framework for service-oriented systems myself (Disnix), I have decided to extend it with a generator that can derive architecture diagrams and supplemental descriptions from the deployment models using the conventions I have described in my previous blog post.

Visualizing deployment architectures in Disnix

As explained in my previous blog post, the notation that I used for the diagrams was not something I invented from scratch, but something I borrowed from Disnix.

Disnix has already had a feature for quite some time that can visualize deployment architectures: a description of how the functional parts (the services/components) are mapped to physical resources (e.g. machines/containers) in a network.

For example, after deploying a service-oriented system, such as my example web application system, by running:

$ disnix-env -s services.nix -i infrastructure.nix \
-d distribution-bundles.nix

You can visualize the corresponding deployment architecture of the system, by running:

$ disnix-visualize > out.dot

The above command-line instruction generates a directed graph in the DOT language. The resulting dot file can be converted into a displayable image (such as a PNG or SVG file) by running:

$ dot -Tpng out.dot > out.png

Resulting in a diagram of the deployment architecture that may look as follows:

The above diagram uses the following notation:

  • The light grey boxes denote machines in a network. In the above deployment scenario, we have two of them.
  • The ovals denote services (more specifically: in a Disnix-context, they reflect any kind of distributable deployment unit). Services can have almost any shape, such as web services, web applications, and databases. Disnix uses a plugin system called Dysnomia to make sure that the appropriate deployment steps are carried out for a particular type of service.
  • The arrows denote inter-dependencies. When a service points to another service, it means that the latter is an inter-dependency of the former. Inter-dependency relationships ensure that the dependent service gets all the configuration properties it needs to reach the dependency, and the deployment system makes sure that the inter-dependencies of a specific service are deployed first.

    In some cases, enforcing the right activation order may be expensive. It is also possible to drop the ordering requirement, as denoted by the dashed arrows. This is acceptable for redirects from the portal application, but not acceptable for database connections.
  • The dark grey boxes denote containers. Containers can be any kind of runtime environment that hosts zero or more distributable deployment units. For example, the container service of a MySQL database is a MySQL DBMS, whereas the container service of a Java web application archive can be a Java Servlet container, such as Apache Tomcat.
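The mapping that such a diagram visualizes comes from Disnix’s distribution model, which maps services to machines from the infrastructure model. A hypothetical two-machine example (machine and service names are made up):

```nix
# Hypothetical Disnix distribution model: map each service to one or
# more machines from the infrastructure model. All names are made up.
{infrastructure}:

{
  portaldb = [ infrastructure.test1 ];
  portal = [ infrastructure.test2 ];
  homeworkdb = [ infrastructure.test1 ];
  homework = [ infrastructure.test2 ];
}
```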

Visualizing the functional architecture of service-oriented systems

The services of which a service-oriented system is composed are flexible -- they can be deployed to various kinds of environments, such as a test environment, a second fail-over production environment, or a local machine.

Because services can be deployed to a variety of targets, it may also be desired to get an architectural view of the functional parts only.

I created a new tool called dydisnix-visualize-services that can be used to generate functional architecture diagrams by visualizing the services in the Disnix services model:

The above diagram is a visual representation of the services model of the example web application system, using a similar notation as the deployment architecture without showing any environment characteristics:

  • Ovals denote services and arrows denote inter-dependency relationships.
  • Every service is annotated with its type, so that it becomes clear what kind of a shape a service has and what kind of deployment procedures need to be carried out.

Despite the fact that the above diagram is focused on the functional parts, it may still look quite detailed, even from a functional point of view.

Essentially, the architecture of my example web application system is a "system of sub systems" -- each sub system provides an isolated piece of functionality consisting of a database backend and web application front-end bundle. The portal sub system is the entry point and responsible for guiding the users to the sub systems implementing the functionality that they want to use.

It is also possible to annotate services in the Disnix services model with a group and description property:

{distribution, invDistribution, pkgs, system}:

let
  customPkgs = import ../top-level/all-packages.nix {
    inherit pkgs system;
  };

  groups = {
    homework = "Homework";
    literature = "Literature";
  };
in
rec {
  homeworkdb = {
    name = "homeworkdb";
    pkg = customPkgs.homeworkdb;
    type = "mysql-database";
    group = groups.homework;
    description = "Database backend of the Homework subsystem";
  };

  homework = {
    name = "homework";
    pkg = customPkgs.homework;
    dependsOn = {
      inherit usersdb homeworkdb;
    };
    type = "apache-webapplication";
    appName = "Homework";
    group = groups.homework;
    description = "Front-end of the Homework subsystem";
  };

  # ...
}

In the above services model, I have grouped every database and web application front-end bundle in a group that represents a sub system (such as Homework). By adding the --group-subservices parameter to the dydisnix-visualize-services command invocation, we can simplify the diagram to only show the sub systems and how these sub systems are inter-connected:

$ dydisnix-visualize-services -s services.nix -f png \
    --group-subservices

resulting in the following functional architecture diagram:

As may be observed in the picture above, all services have been grouped. The service groups are denoted by ovals with dashed borders.

We can also query sub architecture diagrams of every group/sub system. For example, the following command generates a sub architecture diagram for the Homework group:

$ dydisnix-visualize-services -s services.nix -f png \
--group Homework --group-subservices

resulting in the following diagram:

The above diagram will only show the services in the Homework group and their context -- i.e. non-transitive dependencies and services that have a dependency on any service in the requested group.

Services that exactly fit the group or any of its parent groups will be displayed verbatim (e.g. the homework database back-end and front-end). The other services will be categorized into the lowest common sub group (the Users and Portal sub systems).

For more complex architectures consisting of many layers, you will probably want to generate all available architecture diagrams in one command invocation. It is also possible to run the visualization tool in batch mode. In batch mode, it will recursively generate diagrams for the top-level architecture and every possible sub group, and store them in a specified output folder:

$ dydisnix-visualize-services --batch -s services.nix -f svg \
--output-dir out

Generating supplemental documentation

Another thing I have explained in my previous blog post is that providing diagrams is useful, but they cannot clear up all confusion -- you also need to document and clarify additional details, such as the purposes of the services.

It is also possible to generate a documentation page for each group showing a table of services with their descriptions and types:

The following command generates a documentation page for the Homework group:

$ dydisnix-document-services -s services.nix --group Homework

It is also possible to adjust the generation process by providing a documentation configuration file (by using the --docs parameter):

$ dydisnix-document-services -s services.nix --docs docs.nix \
--group Homework

There are a variety of settings that can be provided in a documentation configuration file:

{
  groups = {
    Homework = "Homework subsystem";
    Literature = "Literature subsystem";
  };

  fields = [ "description" "type" ];

  descriptions = {
    type = "Type";
    description = "Description";
  };
}
The above configuration file specifies the following properties:

  • The descriptions for every group.
  • Which fields should be displayed in the overview table. It is possible to display any property of a service.
  • A description of every field in the services model.

Like the visualization tool, the documentation tool can also be used in batch mode to generate pages for all possible groups and sub groups.

Generating a documentation catalog

In addition to generating architecture diagrams and descriptions, it is also possible to combine both tools to automatically generate a complete documentation catalog for a service-oriented system, such as the web application example system:

$ dydisnix-generate-services-docs -s services.nix --docs docs.nix \
-f svg --output-dir out

By opening the entry page in the output folder, you will get an overview of the top-level architecture, with a description of the groups.

By clicking on a group hyperlink, you can inspect the sub architecture of the corresponding group, such as the 'Homework' sub system:

The above page displays the sub architecture diagram of the 'Homework' subsystem and a description of all services belonging to that group.

Another particularly interesting aspect is the 'Portal' sub system:

The portal's purpose is to redirect users to functionality provided by the other sub systems. The above architecture diagram displays all the sub systems in grouped form to illustrate that there is a dependency relationship, but without revealing their internal details, which would clutter the diagram unnecessarily.

Other features

The tools support more use cases than those described in this blog post -- for example, it is also possible to create arbitrary layers of sub groups by using the '/' character as a delimiter in the group identifier. I have also used the company platform as an example case, which can be decomposed into four layers.


The tools described in this blog post are part of the latest development version of Dynamic Disnix -- a very experimental extension framework built on top of Disnix that can be used to make service-oriented systems self-adaptive by redeploying their services in case of events.

The reason why I have added these tools to Dynamic Disnix (and not the core Disnix toolset) is because the extension toolset has an infrastructure to parse and reflect over individual Disnix models.

Although I promised to make an official release of Dynamic Disnix a very long time ago, this still has not happened. However, the documentation feature is a compelling reason to stabilize the code and make the framework more usable.

by Sander van der Burg ( at February 27, 2019 11:09 PM

February 24, 2019

Matthew Bauer

Static Nix: a command-line swiss army knife

Nix is an extremely useful package manager. But not all systems have it installed, and without root privileges you cannot create the /nix directory required for it to work.

With static linking, and some new features added in Nix 2.0, you can fairly easily use the Nix package manager in these unprivileged contexts [1]. To make this even easier, I am publishing a prebuilt x86_64 binary on my personal website. It will reside permanently at (5M download).

1 Trying it out

You can use it like this,

$ curl | sh -s run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable hello -c hello
Hello World!

You can use any package provided by Nixpkgs (using its attribute name). This gives you a swiss army knife of command-line tools. I have compiled some cool commands to try out: examples of various tools, games and demos that you can use through Nix, without installing anything! Everything is put into temporary directories [2].

1.1 Dev tools

$ nix=$(mktemp); \
  curl > $nix && \
  chmod +x $nix && \
  $nix run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable \
  bashInteractive curl git htop imagemagick file findutils jq nix openssh pandoc

1.2 Emacs

$ nix=$(mktemp); \
  curl > $nix && \
  chmod +x $nix && \
  $nix run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable \
  emacs -c emacs

1.3 File manager

$ nix=$(mktemp); \
  curl > $nix && \
  chmod +x $nix && \
  $nix run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable \
  ranger -c ranger

1.4 Fire

$ curl | \
  sh -s run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable \
  aalib -c aafire

1.5 Fortune

$ curl | \
  sh -s run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable \
  bash cowsay fortune -c sh -c 'cowsay $(fortune)'

1.6 Nethack

$ nix=$(mktemp); \
  curl > $nix && \
  chmod +x $nix && \
  $nix run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable \
  nethack -c nethack

1.7 Weather

$ curl | \
  sh -s run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable \
  bash curl cowsay -c sh -c 'cowsay $(curl'

1.8 World map

$ curl | \
  sh -s run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable \
  bash coreutils curl libcaca ncurses -c bash -c \
  'img=$(mktemp ${TMPDIR:-/tmp}/XXX.jpg); \
  curl -k > $img \
  && img2txt -W $(tput cols) -f utf8 $img'

1.9 Youtube

$ curl | \
  sh -s run --store $HOME/.cache/nix/store -f channel:nixpkgs-unstable \
  bash youtube-dl mplayer -c sh -c \
  'mplayer -vo caca $(youtube-dl --no-check-certificate -g'

1.10 And more…

Lots more cool things are possible. Look through the packages provided by Nixpkgs if you need inspiration.

2 Avoid installing and extracting each time

This method of using Nix has some upfront cost, because the file must be downloaded each time and the embedded .tar.gz extracted. If you want Nix to stay around permanently, you have to follow a few tricks. The total install size is about 11M. Using this method, you will reduce startup time and keep Nix in your path at each login.

I have two ways of doing this. The first, “easy” way is just running this script.

$ curl | sh

The other is the “safe” way and involves running some commands in order. These are the same commands run by the script, but this lets you audit everything being done line by line.

$ t=$(mktemp -d)
$ curl > $t/
$ pushd $t
$ sh --extract
$ popd
$ mkdir -p $HOME/bin/ $HOME/share/nix/corepkgs/
$ mv $t/dat/nix $HOME/bin/
$ mv $t/dat/share/nix/corepkgs/* $HOME/share/nix/corepkgs/
$ echo export 'PATH=$HOME/bin:$PATH' >> $HOME/.profile
$ echo export 'NIX_DATA_DIR=$HOME/share' >> $HOME/.profile
$ source $HOME/.profile
$ rm -rf $t

You can now run the Nix commands above as you need to, and it will be available on each login. Remember to always add the arguments -f channel:nixpkgs-unstable and --store $HOME/.cache/nix/store, otherwise Nix will be confused on how to handle the missing /nix/store and other environment variables.

3 Build it yourself

Downloading and running a binary like this is certainly a security risk, so you may want to build static Nix for yourself from my pull request. Of course, you can't build static Nix without Nix, so this needs to be done from a system that has Nix installed. Provided you have git and nix installed, you can build it yourself like this,

$ git clone
$ cd nixpkgs
$ git checkout static-nix
$ nix-build -A pkgsStatic.nix

Then, copy it to your machine without Nix installed (provided you have ssh installed), like this,

$ scp ./result/bin/nix your-machine:
$ ssh your-machine
$ ./nix ...



[1] Note that you will need to be able to set up a private namespace. This is enabled by default on Linux, but some distros have specifically disabled it. See this issue for more discussion.


[2] While ideally we would not need temporary directories at all, some of these commands require them. This is because they check whether they are in a pipe and refuse to run if so. Your temporary directory should be cleaned each time you reboot anyway. The Nix packages will be installed in $HOME/.cache/nix/store, but they can be removed at any time.

February 24, 2019 12:00 AM

February 08, 2019

Matthew Bauer

Call for proofreaders and beta testers for 19.03

This was originally published on Discourse. I am putting it here for posterity reasons.

We get lots of contributors in Nixpkgs and NixOS who modify our source code. They are the most common type of contribution we receive. But, there is actually a great need for other types of contributions that don’t involve programming at all! For the benefit of new users, I am going to outline how you can easily contribute to the community and help make 19.03 the best NixOS release yet.

1 Proofreading

We have two different manuals in the NixOS/nixpkgs repo. One is for Nixpkgs, the set of all software. And the other is for NixOS, our Linux distro. Proofreading these manuals is important in helping new users learn about how our software works.

When you find an issue, you can do one of two things. The first and most encouraged is to open a PR on GitHub fixing the documentation. Both manuals are written in docbook. You can see the source for each here:

GitHub allows you to edit these files directly on the web. You can also always use your own Git client. For reference on writing in DocBook, I recommend reading through

An alternative, if you are unable to fix the documentation yourself, is to open an issue. The same issue tracker is used for any issues with Nixpkgs/NixOS and can be accessed through GitHub Issues. Please be sure to provide a link to where in the manual the issue is, as well as what is incorrect or otherwise confusing.

2 Beta testing

An alternative to proofreading is beta testing. There are a number of ways to do this, but I would suggest using VirtualBox. Some information on installing VirtualBox can be found online, but you should just need to set these NixOS options: = true;

and add your user to the vboxusers group:

users.users.<user>.extraGroups = [ "vboxusers" ];

then rebuild your NixOS machine (sudo nixos-rebuild switch), and run this command to start virtualbox:


Other distros have their own ways of installing VirtualBox, see Download VirtualBox for more info.

You can download an unstable NixOS .ova file directly here. (WARNING: this will be a large file, a little below 1GB).

Once downloaded, you can import this .ova file directly into VirtualBox using “File” -> “Import Appliance…”. Select the .ova file downloaded from above and click through a series of Next dialogs, using the provided defaults. After this, you can boot your NixOS machine by selecting it from the list on the left and clicking “Start”.

The next step is to just play around with the NixOS machine and try to break it! You can report any issues you find on the GitHub Issues tracker. We use the same issue tracker for both NixOS and Nixpkgs. Just try to make your issues as easy to reproduce as possible. Be specific on where the problem is and how someone else could recreate the problem for themselves.

February 08, 2019 12:00 AM

January 03, 2019

Hercules Labs

elm2nix 0.1

Our frontend is written in Elm and our deployments are done with Nix.

There are many benefits to using Nix for packaging: like reproducible installation with binaries, being able to diff for changes, rollback support, etc, with a single generic tool.

We benefit most once the whole deployment is Nixified, which is what we did for our frontend last December.


Back in November 2016 I wrote down some ideas on how elm2nix could work. In December 2017 the first prototype of elm2nix was born, but it required a fork of the Elm compiler to gather your project’s dependencies. Elm 0.19 came out with pinning for all dependencies, making it feasible for other packaging software to build Elm projects.

Installation and usage

Today we’re releasing elm2nix 0.1 and it’s already available in nixpkgs unstable and stable channels. The easiest way to install it is:

# install Nix
$ curl | sh

# activate Nix environment
$ source ~/.nix-profile/etc/profile.d/

# install elm2nix binary
$ nix-env -i elm2nix

Given a current directory containing an Elm 0.19 project, here are three commands to get it building with Nix.

First, we need to generate a Nix file with metadata about dependencies:

$ elm2nix convert > elm-srcs.nix

Second, since Elm 0.19 packaging maintains a snapshot of

$ elm2nix snapshot > versions.dat

And last, we generate a Nix expression by template, which ties everything together:

$ elm2nix init > default.nix

By default, the template will use just plain elm-make to compile your project.

To build using Nix and see the generated output in Chromium:

$ nix-build
$ chromium ./result/Main.html

What could be improved in Elm

You may notice that the elm2nix convert output includes sha256 hashes. Nix requires hashes for anything that is fetched from the internet. The Elm package index does provide an endpoint with hashes, but then Nix needs to know the hash of the endpoint response itself.

To address this issue it would be ideal if there were an elm.lock file or similar, with all dependencies pinned including their hashes - then Nix would have everything available locally. Package managers for various languages are slowly moving towards outputting a metadata file that describes the whole build process. This can be considered an API between build planning and the actual builder.

Another minor issue comes from versions.dat. Ideally, instead of committing a few megabytes of serialized JSON to the git repository, one would be able to point to a URL presenting the binary file pinned at a specific time - allowing it to always be verifiable with an upfront known hash.

What could be improved in Nix

The Nix expression generated by elm2nix init could be upstreamed to nixpkgs or another Nix repository. This would allow for a small footprint in an application and stable documentation.

Default expression might not be enough for everyone, as you can use Parcel, Webpack or any other asset management tool for building the project. There’s room for all common environments.

Closing thoughts

Stay tuned for another post on how to develop Elm applications with Parcel and Nix.

Since you’re here, we’re building next-generation CI and binary caching services, take a look at Hercules CI and Cachix.

January 03, 2019 12:00 AM

December 25, 2018

Ollie Charles

Solving Planning Problems with Fast Downward and Haskell

In this post I’ll demonstrate my new fast-downward library and show how it can be used to solve planning problems. The name comes from the use of the backend solver - Fast Downward. But what’s a planning problem?

Roughly speaking, planning problems are a subclass of AI problems where we need to work out a plan that moves us from an initial state to some goal state. Typically, we have:

  • A known starting state - information about the world we know to be true right now.
  • A set of possible effects - deterministic ways we can change the world.
  • A goal state that we wish to reach.

With this, we need to find a plan:

  • A solution to a planning problem is a plan - a totally ordered sequence of steps that converge the starting state into the goal state.

Planning problems are essentially state space search problems, and crop up in all sorts of places. Common examples include moving a robot around and planning logistics problems, but they can be used for plenty more! For example, the Beam library uses state space search to work out how to converge a database from one state to another (automatic migrations) by adding/removing columns.

State space search is an intuitive approach - simply build a graph where nodes are states and edges are state transitions (effects), and find a path (possibly shortest) that gets you from the starting state to a state that satisfies some predicates. However, naive enumeration of all states rapidly grinds to a halt. Forming optimal plans (least cost, least steps, etc) is an extremely difficult problem, and there is a lot of literature on the topic (see ICAPS - the International Conference on Automated Planning and Scheduling and recent International Planning Competitions for an idea of the state of the art). The fast-downward library uses the state of the art Fast Downward solver and provides a small DSL to interface to it with Haskell.

In this post, we’ll look at using fast-downward in the context of solving a small planning problem - moving balls between rooms via a robot. This post is literate Haskell, here’s the context we’ll be working in:

If you’d rather see the Haskell in its entirety without comments, simply head to the end of this post.

Modelling The Problem

Defining the Domain

As mentioned, in this example, we’ll consider the problem of transporting balls between rooms via a robot. The robot has two grippers and can move between rooms. Each gripper can hold zero or one balls. Our initial state is that everything is in room A, and our goal is to move all balls to room B.

First, we’ll introduce some domain specific types and functions to help model the problem. The fast-downward DSL can work with any type that is an instance of Ord.

A ball in our model is modelled by its current location. As this changes over time, it is a Var - a state variable.

A gripper in our model is modelled by its state - whether or not it’s holding a ball.

Finally, we’ll introduce a type of all possible actions that can be taken:
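Concretely, the domain types look like this (they are also reproduced in the appendix at the end of this post):

```haskell
-- A room; the robot can always move to the one adjacent room.
data Room = RoomA | RoomB
  deriving (Eq, Ord, Show)

adjacent :: Room -> Room
adjacent RoomA = RoomB
adjacent RoomB = RoomA

-- A ball is either in a room or held in a gripper.
data BallLocation = InRoom Room | InGripper
  deriving (Eq, Ord, Show)

-- A gripper either holds a ball or is empty.
data GripperState = Empty | HoldingBall
  deriving (Eq, Ord, Show)

-- Every action the solver may choose.
data Action = PickUpBall | SwitchRooms | DropBall
  deriving (Show)
```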

With this, we can now begin modelling the specific instance of the problem. We do this by working in the Problem monad, which lets us introduce variables (Vars) and specify their initial state.

Setting the Initial State

First, we introduce a state variable for each of the 4 balls. As in the problem description, all balls are initially in room A.

Next, introduce a variable for the room the robot is in - which also begins in room A.

We also introduce variables to track the state of each gripper.

This is sufficient to model our problem. Next, we’ll define some effects to change the state of the world.

Defining Effects

Effects are computations in the Effect monad - a monad that allows us to read and write to variables, and also fail (via MonadPlus). We could define these effects as top-level definitions (which might be better if we were writing a library), but here I’ll just define them inline so they can easily access the above state variables.

Effects may be used at any time by the solver. Indeed, that’s what solving planning problems is all about! The hard part is choosing effects intelligently, rather than blindly trying everything. Fortunately, you don’t need to worry about that - Fast Downward will take care of that for you!

Picking Up Balls

The first effect takes a ball and a gripper, and attempts to pick up that ball with that gripper.

  1. First we check that the gripper is empty. This can be done concisely by using an incomplete pattern match. do notation desugars incomplete pattern matches to a call to fail, which in the Effect monad simply means “this effect can’t currently be used”.

  2. Next, we check where the ball and robot are, and make sure they are both in the same room.

  3. Here we couldn’t choose a particular pattern match to use, because picking up a ball should be possible in either room. Instead, we simply observe the location of both the ball and the robot, and use an equality test with guard to make sure they match.

  4. If we got this far then we can pick up the ball. The act of picking up the ball is to say that the ball is now in a gripper, and that the gripper is now holding a ball.

  5. Finally, we return some domain specific information to use if the solver chooses this effect. This has no impact on the final plan, but it’s information we can use to execute the plan in the real world (e.g., sending actual commands to the robot).
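The failure semantics in step 1 can be seen in miniature with the Maybe monad - a standalone sketch with made-up names, not the library's actual Effect type:

```haskell
data GripperState = Empty | HoldingBall
  deriving (Eq, Show)

-- An incomplete pattern match in do-notation desugars to a call to fail.
-- In Maybe that yields Nothing, just as in Effect it means
-- "this effect can't currently be used".
gripperIsEmpty :: GripperState -> Maybe ()
gripperIsEmpty g = do
  Empty <- Just g
  return ()
```

Matching `gripperIsEmpty HoldingBall` falls through the `Empty` pattern and produces Nothing.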

Moving Between Rooms

This effect moves the robot to the room adjacent to its current location.

This is an “unconditional” effect as we don’t have any explicit guards or pattern matches. We simply flip the current location by an adjacency function.

Again, we finish by returning some information to use when this effect is chosen.

Dropping Balls

Finally, we have an effect to drop a ball from a gripper.

  1. First we check that the given gripper is holding a ball, and the given ball is in a gripper.

  2. If we got here then those assumptions hold. We’ll update the location of the ball to be the location of the robot, so first read out the robot’s location.

  3. Empty the gripper

  4. Move the ball.

  5. And we’re done! We’ll just return a tag to indicate that this effect was chosen.

Solving Problems

With our problem modelled, we can now attempt to solve it. We invoke solve with a particular search engine (in this case A* with landmark counting heuristics). We give the solver two bits of information:

  1. A list of all effects - all possible actions the solver can use. These are precisely the effects we defined above, but instantiated for all balls and grippers.
  2. A goal state. Here we’re using a list comprehension which enumerates all balls, adding the condition that the ball location must be InRoom RoomB.

So far we’ve been working in the Problem monad. We can escape this monad by using runProblem :: Problem a -> IO a. In our case, a is SolveResult Action, so running the problem might give us a plan (courtesy of solve). If it did, we’ll print the plan.

fast-downward allows you to extract a totally ordered plan from a solution, but can also provide a partiallyOrderedPlan. This type of plan is a graph (partial order) rather than a list (total order), and attempts to recover some concurrency. For example, if two effects do not interact with each other, they will be scheduled in parallel.

Well, Did it Work?!

All that’s left is to run the problem!

> main
Found a plan!
1: PickUpBall
2: PickUpBall
3: SwitchRooms
4: DropBall
5: DropBall
6: SwitchRooms
7: PickUpBall
8: PickUpBall
9: SwitchRooms
10: DropBall
11: DropBall

Woohoo! Not bad for 0.02 secs, too :)

Behind The Scenes

It might be interesting to some readers to understand what’s going on behind the scenes. Fast Downward is a C++ program, yet somehow it seems to be running Haskell code with nothing but an Ord instance - there are no marshalling types involved!

First, let’s understand the input to Fast Downward. Fast Downward requires an encoding in its own SAS format. This format has a list of variables, where each variable contains a list of values. The contents of the values aren’t actually used by the solver; rather, it just works with indices into the list of values for a variable. This observation means we can just invent values on the Haskell side and carefully manage the mapping of indices back and forth.
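As a toy sketch of that bookkeeping (not the library's actual code, and the names are made up), all we need from the value type is Ord:

```haskell
import qualified Data.Map as Map

-- Invent an index for each distinct value of a variable, keeping both
-- directions of the mapping so solver output can be translated back.
indexValues :: Ord a => [a] -> (Map.Map a Int, Map.Map Int a)
indexValues vs =
  let uniq = map fst (Map.toAscList (Map.fromList [ (v, ()) | v <- vs ]))
      fwd  = Map.fromList (zip uniq [0 ..])
      bwd  = Map.fromList (zip [0 ..] uniq)
  in (fwd, bwd)
```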

Next, Fast Downward needs a list of operators which are ground instantiations of our effects above. Ground instantiations of operators mention exact values of variables. Recounting our gripper example, pickUpBallWithGripper b gripper actually produces 2 operators - one for each room. However, we didn’t have to be this specific in the Haskell code, so how are we going to recover this information?

fast-downward actually performs expansion on the given effects to find out all possible ways they could be called, by non-deterministically evaluating them to find a fixed point.

A small example can be seen in the moveRobotToAdjacentRoom Effect. This will actually produce two operators - one to move from room A to room B, and one to move from room B to room A. The body of this Effect is (once we inline the definition of modifyVar)

Initially, we only know that robotLocation can take the value RoomA, as that is what the variable was initialised with. So we pass this in, and see what the rest of the computation produces. This means we evaluate adjacent RoomA to yield RoomB, and write RoomB into robotLocation. We’re done for the first pass through this effect, but we gained new information - namely that robotLocation might at some point contain RoomB. Knowing this, we then rerun the effect, but the first readVar gives us two paths:

This shows us that robotLocation might also be set to RoomA. However, we already knew this, so at this point we’ve reached a fixed point.
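The idea can be sketched in a few lines (a simplified standalone version; the real expansion works over all Effects and variables at once):

```haskell
import qualified Data.Set as Set

data Room = RoomA | RoomB
  deriving (Eq, Ord, Show)

adjacent :: Room -> Room
adjacent RoomA = RoomB
adjacent RoomB = RoomA

-- Apply every known transition to every known value until the set of
-- reachable values stops growing, i.e. until a fixed point is reached.
reachable :: Ord a => [a -> a] -> Set.Set a -> Set.Set a
reachable steps seen
  | next == seen = seen
  | otherwise    = reachable steps next
  where
    next = Set.union seen
             (Set.fromList [ f v | f <- steps, v <- Set.toList seen ])
```

Starting from only RoomA with the single step adjacent, this discovers RoomB on the first pass and stops on the second, mirroring the robotLocation example above.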

In practice, this process is run over all Effects at the same time, because they may interact - a change in one Effect might cause new paths to be found in another Effect. However, because fast-downward only works with finite domain representations, this algorithm always terminates. Unfortunately, I can see no way of enforcing this, which means a user could make this normalisation process loop forever by writing modifyVar v succ, which would produce an infinite number of variable assignments.


CircuitHub are using this in production (and I mean real, physical production!) to coordinate activities in its factories. By using AI, we have a declarative interface to the production process – rather than saying what steps are to be performed, we can instead say what state we want to end up in and we can trust the planner to find a suitable way to make it so.

Haskell really shines here, giving a very powerful way to present problems to the solver. The industry standard is PDDL, a Lisp-like language that I’ve found in practice is less than ideal to actually encode problems. By using Haskell, we:

  • Can easily feed the results of the planner into a scheduler to execute the plan, with no messy marshalling.
  • Use well known means of abstraction to organise the problem. For example, in the above we use Haskell as a type of macro language – using do notation to help us succinctly formulate the problem.
  • Abstract out the details of planning problems so the rest of the team can focus on the domain specific details – i.e., what options are available to the solver, and the domain specific constraints they are subject to.

fast-downward is available on Hackage now, and I’d like to express a huge thank you to CircuitHub for giving me the time to explore this large space and to refine my work into the best solution I could think of. This work is the result of numerous iterations, but I think it was worth the wait!

Appendix: Code Without Comments

Here is the complete example, as a single Haskell block:

{-# language DisambiguateRecordFields #-}

module FastDownward.Examples.Gripper where

import Control.Monad
import qualified FastDownward.Exec as Exec
import FastDownward.Problem

data Room = RoomA | RoomB
  deriving (Eq, Ord, Show)

adjacent :: Room -> Room
adjacent RoomA = RoomB
adjacent RoomB = RoomA

data BallLocation = InRoom Room | InGripper
  deriving (Eq, Ord, Show)

data GripperState = Empty | HoldingBall
  deriving (Eq, Ord, Show)

type Ball = Var BallLocation

type Gripper = Var GripperState

data Action = PickUpBall | SwitchRooms | DropBall
  deriving (Show)

problem :: Problem (Maybe [Action])
problem = do
  balls <- replicateM 4 (newVar (InRoom RoomA))
  robotLocation <- newVar RoomA
  grippers <- replicateM 2 (newVar Empty)

  let
    pickUpBallWithGripper :: Ball -> Gripper -> Effect Action
    pickUpBallWithGripper b gripper = do
      Empty <- readVar gripper
      robotRoom <- readVar robotLocation
      ballLocation <- readVar b
      guard (ballLocation == InRoom robotRoom)
      writeVar b InGripper
      writeVar gripper HoldingBall
      return PickUpBall

    moveRobotToAdjacentRoom :: Effect Action
    moveRobotToAdjacentRoom = do
      modifyVar robotLocation adjacent
      return SwitchRooms

    dropBall :: Ball -> Gripper -> Effect Action
    dropBall b gripper = do
      HoldingBall <- readVar gripper
      InGripper <- readVar b
      robotRoom <- readVar robotLocation
      writeVar b (InRoom robotRoom)
      writeVar gripper Empty
      return DropBall

  solve
    cfg
    ( [ pickUpBallWithGripper b g | b <- balls, g <- grippers ]
        ++ [ dropBall b g | b <- balls, g <- grippers ]
        ++ [ moveRobotToAdjacentRoom ]
    )
    [ b ?= InRoom RoomB | b <- balls ]

main :: IO ()
main = do
  plan <- runProblem problem
  case plan of
    Nothing ->
      putStrLn "Couldn't find a plan!"

    Just steps -> do
      putStrLn "Found a plan!"
      zipWithM_ (\i step -> putStrLn $ show i ++ ": " ++ show step) [1::Int ..] steps

cfg :: Exec.SearchEngine
cfg =
  Exec.AStar Exec.AStarConfiguration
    { evaluator =
        Exec.LMCount Exec.LMCountConfiguration
          { lmFactory =
              Exec.LMExhaust Exec.LMExhaustConfiguration
                { reasonableOrders = False
                , onlyCausalLandmarks = False
                , disjunctiveLandmarks = True
                , conjunctiveLandmarks = True
                , noOrders = False
                }
          , admissible = False
          , optimal = False
          , pref = True
          , alm = True
          , lpSolver = Exec.CPLEX
          , transform = Exec.NoTransform
          , cacheEstimates = True
          }
    , lazyEvaluator = Nothing
    , pruning = Exec.Null
    , costType = Exec.Normal
    , bound = Nothing
    , maxTime = Nothing
    }

by Oliver Charles at December 25, 2018 12:00 AM

December 18, 2018

Hercules Labs

Hercules CI Development Update

We’ve been making good progress since our October 25th NixCon demo.

Some of the things we’ve built and worked on since NixCon:

  • Realise the derivations and show their status
  • Minimal build logs
  • Keeping track of agent state
  • GitHub build statuses
  • Improved agent logging
  • Work on Cachix private caches
  • Incorporating
  • Plenty of small fixes, improvements and some open source work

Here you can see attributes being streamed as they are evaluated and CI immediately starts to build each attribute and shows cached derivation statuses:


Since we started dogfooding a few weeks ago, we have been getting valuable insight. There are plenty of things to do and bugs to fix. Once we’re happy with the user experience for the minimal workflow, we’ll contact email subscribers and start handing out early access.

If you’d like to be part of early adopters or just be notified of development, make sure to subscribe to Hercules CI.

December 18, 2018 12:00 AM