NixOS Planet

July 11, 2018

Graham Christensen

🏃💨 cache.nixos.org, now more local!

I’m delighted to be able to announce that users all around the world will now have a great experience when fetching from the NixOS cache.

I heard several times from users in Hong Kong and Singapore that the cache was “slow”, but I didn’t know it was this slow! After working closely with a team of Nix users in Bangalore, I experienced first-hand just how eye-wateringly slow it could be.

The NixOS cache is now being served from all of AWS CloudFront's edge locations, significantly reducing latency for users in Asia, Africa, South America, and Oceania.

By expanding the cache's distribution settings to include all of the edge locations, fetch times have improved substantially:

Sydney

Closure                 Size           Before        After (cold cache)   After (hot cache)
GHC                     117.06 MiB     178.491s      73.612s              15.707s
Graphical ISO closure   1,660.95 MiB   2,326.957s    376.014s             25.328s

Experiments in Tokyo and Hong Kong produced similar results.
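
For the curious, timings like these can be reproduced with something along the following lines (a sketch; the attribute and option shown are illustrative, not the exact benchmark script):

# Time how long it takes to substitute GHC's closure from cache.nixos.org
# into a store that does not already contain it.
drv=$(nix-instantiate '<nixpkgs>' -A ghc)
time nix-store --realise "$drv" --option substituters https://cache.nixos.org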

NixOS’s cache is stored in AWS S3, and distributed using AWS CloudFront. This combination gives us the excellent durability guarantees of S3 combined with the large geographical distribution of CloudFront.

Until today, the NixOS cache was only served through edge nodes in the United States, Canada, and Europe.

A big thank-you to Amine Chikhaoui and Eelco Dolstra for their help in researching this change and turning on such a massive improvement.

July 11, 2018 12:00 AM

June 01, 2018

Domen Kozar

Announcing Cachix - Binary Cache as a Service

In the last 6 years working with Nix, and mostly full-time in the last two, I've noticed a few patterns.

These are mostly a direct or indirect result of not having "good enough" infrastructure to support how much Nix has grown (1600+ contributors, 1500 pull requests per month).

Without further ado, I am announcing https://cachix.org -- Binary Cache as a Service -- which is ready to be used after two months of work.

What problem(s) does Cachix solve?

The main motivation is to save you time and compute resources waiting for your packages to build. By using a shared cache of already built packages, you'll only have to build your project once.

This should also speed up CI builds, as Nix can make use of granular caching of each package, rather than caching the build as a whole.
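
In practice the workflow can be as simple as this sketch ("mycache" stands in for your own cache name):

# Trust the cache and add it as a substituter (once per machine):
cachix use mycache

# Build your project and push the resulting store paths to the cache:
nix-build | cachix push mycache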

Another benefit (which I personally consider even more important) is the decentralization of work produced by Nix developers. Up until today, most developers pushed their software updates into the nixpkgs repository, which is backed by the global binary cache at https://cache.nixos.org.

But as the community grew, fitting different ideologies into one global namespace became impossible. I consider the nixpkgs community to be mature, but sometimes a clash of ideologies with rational backing on both sides occurs. Some want packages to be featureful by default, some prefer them to be minimalist. Some might prefer lots of configuration knobs (for example cross-compilation support or musl/glibc swapping), while some might prefer the build system to do just one thing, as that is easier to maintain.

These are not right or wrong opinions, but rather a specific view of use cases that software might or might not cover.

There are also many projects that don't fit into nixpkgs because their releases are too frequent, they are not available under a permissive license, they are simpler to manage with complete control over the repository, or their maintainers simply disagree with the requirements that nixpkgs developers impose on contributors.

And that's fine. What we've learned in the past is not to fight these ideas, but allow them to co-exist in different domains.

If you're interested:

Domen (domen@enlambda.com)

by Domen Kožar at June 01, 2018 10:00 AM

May 13, 2018

Matthew Bauer

Channel Changing with Nix

1 Introduction to channels

One of the many underappreciated features of Nix is its ability to travel back in time. Functional dependencies mean that you can easily pull in old releases of NixOS & Nixpkgs without changing your environment at all! It’s surprisingly easy in Nix 2.0 with its support for Import From Derivation.

First, I will provide some code to get us started. This Nix script is what I use as my “channel changer”. It bootstraps the use of old channels. In Nix-world, channels are just what we call the CI-tested branch of NixOS/Nixpkgs[1]. The NixOS maintainers have been making releases consistently since 2013, so there is a lot of interesting history.

2 Channel changing

Here is my script that I will refer to later on in the post as “channels.nix” (be sure to try it out yourself!)[2]

let mapAttrs = f: set: builtins.listToAttrs (
      map (attr: { name = attr; value = f set.${attr}; })
          (builtins.attrNames set));
    channels = {
      aardvark    = "13.10";
      baboon      = "14.04";
      caterpillar = "14.12";
      dingo       = "15.09";
      emu         = "16.03";
      flounder    = "16.09";
      gorilla     = "17.03";
      hummingbird = "17.09";
      impala      = "18.03";
    };
in mapAttrs (v:
     import (builtins.fetchTarball
       "https://nixos.org/channels/nixos-${v}/nixexprs.tar.xz") {})
   channels

As you can see from the script, there have been 9 releases in total. We use a different letter of the alphabet for each release, starting with A for Aardvark. We are now up to I for Impala[3]. New releases happen every 6 months, with Aardvark released in December 2013. The releases are versioned as YY.MM, which is a common practice for Linux distros.

3 ‘nix run’ magic

In my Nix script, I have created an “attribute” for each version that has been released. With Nix 2.0, it is very easy to run packages from them. Here is the command to run hello world from Hummingbird.

nix run -f channels.nix hummingbird.hello -c hello
Hello, world!

This has run the hello executable from the hummingbird release. Since you are most likely not running Hummingbird, it may take a while the first time. However, once Nix has downloaded the needed files, future execution will be instantaneous. The package is completely self-contained! To start, we will do examples in Impala (18.03) so that things go a little faster.

There are lots of packages in Nixpkgs so we don’t have to restrict ourselves to just hello. Let’s try out cowsay first.

nix run -f channels.nix impala.cowsay -c cowsay hello
 _______
< hello >
 -------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

There are many, many more of these commands. I’ve included a few below for you to try out on your own.

# Look up the weather
nix run -f channels.nix impala.curl -c curl wttr.in/seville

# Download music
nix run -f channels.nix impala.youtube-dl -c \
    youtube-dl -t --extract-audio \
    --audio-format mp3 \
    https://www.youtube.com/watch?v=dQw4w9WgXcQ

# Go see a Star War
nix run -f channels.nix impala.telnet -c telnet towel.blinkenlights.nl 666
nix run -f channels.nix impala.sox -c bash -c \
    'for n in E2 A2 D3 G3 B3 E4;
     do play -n synth 4 pluck $n repeat 2;
     done'

# Play Nethack
nix run -f channels.nix impala.nethack -c nethack

# Get your fortune
nix run -f channels.nix impala.fortune -c fortune

4 The macOS+Nix odyssey

The fact that Nix works so well on macOS is a miracle in its own right. Apple has a proprietary ABI, but Nix is intended to be used with free software. To get around this, many hacks are necessary, including taking Apple’s standard C library[4]. Anyway, I was interested in how well the binaries produced by Nixpkgs hold up on my MacBook. For reference, here are the versions of macOS available when each release happened. Those familiar with macOS internals will remember some significant differences between these versions.

NixOS release         macOS release
Aardvark (13.10)      Mountain Lion (10.8)
Baboon (14.04)        Mavericks (10.9)
Caterpillar (14.12)   Yosemite (10.10)
Dingo (15.09)         Yosemite (10.10)
Emu (16.03)           El Capitan (10.11)
Flounder (16.09)      El Capitan (10.11)
Gorilla (17.03)       Sierra (10.12)
Hummingbird (17.09)   High Sierra (10.13)
Impala (18.03)        High Sierra (10.13)

So, my MacBook is running the latest macOS, 10.13. Naturally, we can test that Impala & Hummingbird work correctly. hello is a good first test, though of course not a comprehensive one.

nix run -f channels.nix impala.hello -c hello
Hello, world!

nix run -f channels.nix hummingbird.hello -c hello
Hello, world!

But now let’s test Gorilla. It was released when macOS Sierra was still around but the ABI should be compatible.

nix run -f channels.nix gorilla.hello -c hello
dyld: Library not loaded: /usr/lib/system/libsystem_coretls.dylib
 Referenced from: /nix/store/v7i520r9c2p8z6vk26n53hfrxgqn8cl9-Libsystem-osx-10.11.6/lib/libSystem.B.dylib
 Reason: image not found
sh: line 1: 23628 Abort trap: 6           nix run -f channels.nix gorilla.hello -c hello

Oh no!

We can see that libSystem 10.11 has been downloaded for us[5]. However, libSystem is referring to an image that isn’t on our machine. libsystem_coretls.dylib must have existed in macOS 10.11 but has been removed since then[6].

At this point, it may look like Nixpkgs will be broken going backwards, but I want to try Flounder just to see what happens.

nix run -f channels.nix flounder.hello -c hello
Hello, world!

Amazingly, it worked! I am still not sure what the differences are, but it seems that the older executable is still available. Let’s try out Emu to see what happens there.

nix run -f channels.nix emu.hello -c hello
builder for '/nix/store/s41jnb4kmxxbwj40c5l88k9ma0mwfy0b-hello-2.10.drv' failed due to signal 4 (Illegal instruction: 4)
error: build of '/nix/store/s41jnb4kmxxbwj40c5l88k9ma0mwfy0b-hello-2.10.drv' failed

Wow! Again we hit an issue. This is the infamous Illegal instruction: 4 bug that is frequently hit in Nixpkgs[7]. It occurs when an executable uses instructions that have been blocked by the XNU kernel. This is usually because they are considered insecure, so a patch is needed to fix it. We no longer support Emu, so this is probably the end of the line. Let’s try Dingo out just to be sure, though.

nix run -f channels.nix dingo.hello -c hello
builder for '/nix/store/1cyagihl211vsis9bz09cqaz3h2yyc23-libxml2-2.9.3.drv' failed with exit code 77; last 10 log lines:
 checking for awk... awk
 checking whether make sets $(MAKE)... yes
 checking whether make supports nested variables... yes
 checking whether make supports nested variables... (cached) yes
 checking for gcc... gcc
 checking whether the C compiler works... no
 configure: error: in `/private/tmp/nix-build-libxml2-2.9.3.drv-0/libxml2-2.9.3':
 configure: error: C compiler cannot create executables
 See `config.log' for more details

cannot build derivation '/nix/store/jd4y5aps1z61jqbhsz1gy408zwwa49w4-clang-3.6.2.drv': 1 dependencies couldn't be built
cannot build derivation '/nix/store/n4q29z97dc1p9mqrn2ydhlfmsqwbgx0j-libarchive-3.1.2.drv': 1 dependencies couldn't be built
cannot build derivation '/nix/store/vh2bh7gaw2m0rgxscf3mhm1d3rz3xwfg-clang-wrapper-3.6.2.drv': 1 dependencies couldn't be built
cannot build derivation '/nix/store/zg90kfmf99h03z0fl03gw3gh105mb02c-cmake-3.3.1.drv': 1 dependencies couldn't be built
cannot build derivation '/nix/store/45ndaky3079nd78042384f8hbidq7f7q-libc++abi-3.6.2.drv': 1 dependencies couldn't be built
cannot build derivation '/nix/store/mmyz6rrddfahwl23i9d9vjh7wa8irp5k-stdenv-darwin-boot-3.drv': 1 dependencies couldn't be built
cannot build derivation '/nix/store/lqjabx84kndk75y8m0lq7zh5190k6zzz-hello-2.10.drv': 1 dependencies couldn't be built
error: build of '/nix/store/lqjabx84kndk75y8m0lq7zh5190k6zzz-hello-2.10.drv' failed

This is a curious error because it is very different from the previous one. Back then we were still using Clang 3.3, and it looks like bootstrapping is failing on our newer machines. I was not using Nix at this time (late 2015), so I will have to defer to someone who remembers that time better. Let’s keep going.

nix run -f channels.nix caterpillar.hello -c hello
error: attribute 'hello' in selection path 'caterpillar.hello' not found

nix run -f channels.nix baboon.hello -c hello
error: attribute 'hello' in selection path 'baboon.hello' not found

nix run -f channels.nix aardvark.hello -c hello
error: attribute 'hello' in selection path 'aardvark.hello' not found

I’ve grouped them together because they have the same output. It appears that hello was not available back then! I’m not sure what is going on. Again, I will defer to someone else to explain why this happens. But I know for a fact that GNU Hello is one of the first packages to be packaged in the Nix language[8].

5 Conclusion

I also wanted to look at what happens on Linux when you go back through channels, but I don’t have time currently, so I am just including what I have. If you are able to report back what happens on Linux when running these old channels, it would certainly be interesting.

My main goal was to just share some useful things in Nix that I don’t think many people outside of the core Nix community know about. Documentation has gotten better recently but lots of times people like to just read blog posts like this. Hopefully you got a feel for what can be done in Nix.

Footnotes:

1. The difference between NixOS & Nixpkgs can sometimes cause confusion, especially because they are hosted in the same repository. We usually say NixOS for the Linux-specific distro, while Nixpkgs refers to the cross-platform set of packages. Here I am referring to them collectively.

2. Note that the channel changing script is not necessary. You can always refer to the Nixpkgs version directly with the -f argument. The script is just an easy way to introduce people to the concept.
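
For example, this should be roughly equivalent to the hummingbird attribute above (an illustrative invocation, not from the original post):

nix run -f https://nixos.org/channels/nixos-17.09/nixexprs.tar.xz hello -c hello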

3. The in-development version of NixOS/Nixpkgs will be a J for Jackrabbit.

4. Apple’s C standard library is called libSystem. Note that unlike Glibc & Musl, it contains much, much more than what is needed to compile simple C programs.

5. Note that the same libSystem is used for all of Nixpkgs to eliminate having to do SDK detection. Eventually we will update this to 10.12 or 10.13, but we prefer to stay a couple of releases behind.

6. This is not a complete explanation, but the best I can do for those not aware of the internals of Nixpkgs.

7. See GitHub issue #17372.

8. See release 0.5.

May 13, 2018 12:00 AM

March 31, 2018

Matthew Bauer

2018 Summer

Quick post to announce summer plans. I have accepted a summer internship position at Obsidian Systems in the greatest city in the world. I’m very excited to be working with a team of really smart people.

March 31, 2018 12:00 AM

March 29, 2018

Matthew Bauer

66% of Nixpkgs are up-to-date

Just wanted to congratulate everyone for all of the work in keeping Nixpkgs up-to-date!

Repology has just updated our stats, and we have shown significant improvement. 66% of nixpkgs-unstable packages are up-to-date, with a total of 6112 up-to-date packages. Nixpkgs is slowly growing to match the package sets that come with many of the big Linux distros. We now have more up-to-date packages than the rolling release distros Arch (5924) and Manjaro (5708)! We certainly have a ways to go to compete with heavyweights like FreeBSD Ports (15410), DragonFly BSD Ports (14268), and Debian (14915), but these are ancient projects that have had a huge head start. We are definitely making progress, and it’s visible in the Repology graphs.

For comparison, in February nixpkgs-unstable had 60% of packages up-to-date, with a total of 5439 up-to-date packages. This kind of progress over one month is tremendously helpful. Remember that most of these updates will be included in 18.09, not the soon-to-be-released 18.03. This protects users from potentially breaking changes as well as giving maintainers plenty of time to verify that the updates have not had any negative effects.

Ryan Mulligan’s nix-update has made updating packages much easier and I think it will be a game changer for Nixpkgs in the long run. There are still some kinks in it but I think it is slowly getting better. It has certainly been a headache to handle the huge increase in pull requests, but we are slowly clearing through that glut of outdated software. If an update to nixpkgs-unstable breaks something for you, please be sure to open an issue.

March 29, 2018 12:00 AM

March 19, 2018

Munich NixOS Meetup

NixOS für Einsteiger (NixOS for Beginners)


- Profpatsch: Nix(OS): Package-management done right (German)

Augsburg - Germany

Monday, March 19 at 7:00 PM


https://www.meetup.com/Munich-NixOS-Meetup/events/248479678/

March 19, 2018 11:37 AM

March 18, 2018

Joachim Schiele

nix-language-atlas-emscripten

motivation

in the nix-language-atlas series on lastlog.de/blog i want to discuss how well programming languages i’m familiar with integrate with nix. today, let’s revisit emscripten, as there have also been improvements since i wrote about it last time.

projects we have done:

what’s new

  • emscripten toolchain:
    • refactored to force a common revision (for example 1.37.16), called emscriptenVersion, on emscripten, emscripten-fastcomp and emscripten-fastcomp-clang
    • added a unit test in emscripten to verify a small part of the toolchain
    • initial emscripten documentation in nixpkgs (not on https://nixos.org/nixpkgs/manual yet)
  • added 2 more unit tests & repaired all builds

for details see the PR https://github.com/NixOS/nixpkgs/pull/37291.
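
as a rough sketch (attribute names assumed here, not a verbatim excerpt of the PR), the shared pin boils down to something like this:

# sketch: one version string forced onto all three toolchain packages
let
  emscriptenVersion = "1.37.16";
in {
  emscripten = callPackage ./emscripten { inherit emscriptenVersion; };
  emscripten-fastcomp = callPackage ./emscripten-fastcomp { inherit emscriptenVersion; };
  emscripten-fastcomp-clang = callPackage ./emscripten-fastcomp-clang { inherit emscriptenVersion; };
}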

want to give this a shot from nixos?

git clone https://github.com/nixos/nixpkgs/
cd nixpkgs
git checkout f41a3e7d7d327ea66459d17bfbe4a751b2496cb1
nix-env -f default.nix -I nixpkgs=. -iA emscriptenPackages
installing ‘emscripten-json-c-0.13’
installing ‘emscripten-libxml2-2.9.7’
installing ‘emscripten-xmlmirror’
installing ‘emscripten-zlib-1.2.11’
...

dynamic/static libraries

in the above toolchain we are using libraries in .so format, not the .a format, and in the end we link them together using emcc (see the sketch after this list). this has some advantages:

  • building .so files is best practice on linux
  • thus easy to do
  • license wise smart, some packages can’t be statically linked legally IIRC
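
a minimal sketch of such a final emcc link step (file names made up; with emscripten-fastcomp these .so files are llvm bitcode that emcc can link directly):

emcc -O2 main.c libxml2.so libz.so -o xmlmirror.html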

coverage

the nix emscripten toolchain is now supported from:

especially the microsoft WSL release with the creators update is a very interesting audience as it makes it so easy to use nix on windows. no mingw, no cygwin! took me 10 minutes to install nix on windows!

for testing i’ve been compiling this very toolchain on my windows 10 computer. IO is slow but it works and it is easy to deploy.

emsdk

i’ve stumbled on these questions in https://github.com/juj/emsdk:

How do I change the currently active SDK version?

How do I build multiple projects with different SDK versions in parallel?

How do I use Emscripten SDK with a custom version of python, java, node.js or some other tool?

these are very interesting questions and they all become easy once one uses this nixpkgs based toolchain as pointed out. i’ve been using the emsdk in the past but now that we have the ‘bits’ automated in nixpkgs i’m happy to not have to work statefully anymore!

ideas

during my last two days of work on the toolchain update, i had these ideas and motivations for the future:

  • more documentation:
    • nix-shell -A usage
    • cross platform nix usage
    • section how to (fast) write good and lasting unit tests
    • how to use older nixpkgs vs. newer ones (slightly modify a very old project for instance which has a bug)
  • packaging some common targets, what would these be?
  • package the caching of emscripten in ~/.emscripten properly, so that build artefacts can be reused over builds (time save) and remove the HOME=$TMPDIR requirement (ugly)

      DEBUG:root:adding object /home/joachim/.emscripten_cache/asmjs/dlmalloc.bc to link
      DEBUG:root:adding object /home/joachim/.emscripten_cache/asmjs/libc.bc to link
  • update nix-instantiate, which we use in the ‘tour of nix’, to a more recent version and also package it into the toolchain as an example
  • hydra builds
  • be more transparent on the license, ideally we could generate a list of licenses in the final folder

@nixos community: i’d love to get some feedback on this, so send me an email to js@lastlog.de if you have some interesting input.

summary

thanks to kripken (emscripten author) for his help! i’d love to put more effort into this, i think it’s really worth it so if you have any funding for general development or want to have something special realized, let me know!

another interesting project i’ve learned about lately would be https://github.com/NixOS/nixpkgs/pull/37291 from https://github.com/Ericson2314.

by qknight at March 18, 2018 05:35 PM

February 25, 2018

Sander van der Burg

A more realistic public Disnix example

It has almost been ten years ago when I started developing Disnix -- February 2008 marked the start of my master's thesis internship at Philips Research that resulted in the first prototype version.

Originally, Disnix was specifically developed for one use case only -- a medical service-oriented system called the "Service Development Support System" (SDS2) that can be used for asset tracking and utilisation analysis for medical devices in a hospital environment. More information about this case study can be found in my master's thesis, some of my research papers and my PhD thesis (all of them can be found on my publications page).

Many developments have happened since the realization of the first prototype -- its feature set has been extended considerably, its architecture has been overhauled several times and the code has evolved significantly. Most notably, I have been maintaining a production system for over three years with it.

In all these years, there is always one recurring question that I regularly receive from various kinds of people:

Why should I use Disnix and why would it be useful?

The answer is that Disnix becomes useful when you have a system that can be decomposed into distributable services, such as web services, RESTful services, web applications or processes.

In addition to the fact that Disnix automates its deployment and offers a number of powerful quality properties (e.g. non-destructive upgrades for the static parts of a system), it also helps componentized systems in reaching their full potential -- for example, when services can be built, deployed, and managed individually, you can scale a system up and down (e.g. by distributing services to dedicated machines or consolidating all services on a single machine) and you can respond more flexibly to events (e.g. by redeploying services when we encounter a crashing machine).

Although the answer may sound simple, service-oriented systems are complicated -- besides facing all kinds of deployment complexities, properly dividing a system into distributable components is also quite challenging. For all the systems I have seen in the last decade, the requirements and their modularization strategies were all quite different from each other. I have also seen a number of systems for which decomposing into services did not work and unnecessary complexities were introduced.

Moreover, it is hard to find representative public examples that people can use as a reference. I was fortunate that I had access to an industrial case study during my research. Nonetheless, I was suffering from many difficulties because of the lack of any meaningful public case studies. As a countermeasure, I developed a collection of example cases in addition to SDS2, but because of their over-simplicity, proving my point often remained hard.

Roughly half a year ago, I released most parts of my ancient web framework, which I used to actively develop before I started doing research in software deployment, and I created a couple of example applications for it.


Although my web framework development predates my deployment research, I was already using it to implement information systems that followed some modularity principles that are beneficial when using Disnix as a deployment system.

Recently, I have extended my web framework's example applications repository (providing a homework assistant, CMS, photo gallery and literature survey assistant) to become another public Disnix example case following the same modularity principles I used for the information systems I used to implement at that time.

Creating a componentized web information system


As mentioned earlier in this blog post, I had already implemented a (fairly simple) componentized web information system with my ancient custom-made web framework before I started working on Disnix. The "componentization process" (a term that I had neither learned about yet nor something I was consciously implementing at that time) was partially driven by evolution and partially by non-functional requirements.

Originally, the system started out as just one single web application for one specific purpose and consisted of only two components -- a MySQL database responsible for storing the data and a web front-end implemented in PHP, which is quite a common separation pattern for PHP applications.

Later, I was asked to implement another PHP application with similar functionality. Initially, I wrote the application from scratch without any reuse in mind, but at some point I made two important decisions:

  • I decided to keep the databases of each application separate, as opposed to integrating all the tables into one single database. My main motivating factor was that I wanted to prevent another developer's wrong decisions from messing up the other application. Moreover, I realized that other systems did not need to know about data that was specific to one application's domain.
  • In addition to domain-specific data, I noticed that both databases also stored the same kind of data, namely user accounts -- both systems had a user account system to allow users to change the data. This also did not motivate me to integrate both databases into one. Instead, I created a separate user database and authentication system (as a library API) that was shared among both applications.

After completing the two web applications, I had to implement more functionality. I decided to keep all of these new features for these new problem domains in separate applications with separate databases. The only thing they had in common was a shared user authentication system.

At some point I ended up having many sub applications. As a result, I needed a portal application that redirected users to these sub applications. Essentially, what I implemented became a system of systems.

Deployment with Disnix


The "architectural decisions" that I described earlier resulted in a system composed of several kinds of components:

  • Domain-specific web applications exposing functionality that logically belongs together.
  • Domain-specific databases storing tables that are strongly correlated.
  • A shared user database.
  • A portal application redirecting users to the domain-specific web applications.

The above listed components can be distributed over multiple machines in a network, because they connect to each other through network links (e.g. connecting to a MySQL database can be done with a TCP connection and connecting to a domain specific web application can be done through HTTP). As a result, they can also be modeled as services that can be deployed with Disnix.

To replicate the same patterns for demo purposes, I integrated my framework's example applications into a similar system of sub systems. We can deploy the corresponding example system to one single target machine with Disnix, by running:


$ disnixos-env -s services.nix \
    -n network-single.nix \
    -d distribution-single.nix --use-nixops

The entire system gets deployed to a single machine because of the distribution model (distribution-single.nix) that maps all services to one target machine:


{infrastructure}:

{
  usersdb = [ infrastructure.test1 ];
  cmsdb = [ infrastructure.test1 ];
  cmsgallerydb = [ infrastructure.test1 ];
  homeworkdb = [ infrastructure.test1 ];
  literaturedb = [ infrastructure.test1 ];
  portaldb = [ infrastructure.test1 ];

  cms = [ infrastructure.test1 ];
  cmsgallery = [ infrastructure.test1 ];
  homework = [ infrastructure.test1 ];
  literature = [ infrastructure.test1 ];
  users = [ infrastructure.test1 ];
  portal = [ infrastructure.test1 ];
}

The resulting deployment architecture looks as follows:


The above visualization of the deployment architecture shows the following aspects:

  • The surrounding light grey colored box denotes a target machine. In this particular example, we only have one single target machine where services are deployed to.
  • The dark grey colored boxes correspond to container environments. For our example system, we have two of them: mysql-database corresponding to a MySQL DBMS server and apache-webapplication corresponding to an Apache HTTP server.
  • The ovals denote services corresponding to MySQL databases and web applications.
  • The arrows denote inter-dependency links that correspond to network connections. As explained in my previous blog post, solid arrows are dependencies with a strict ordering requirement while dashed arrows are dependencies without an ordering requirement.

Some people may argue that it is not really beneficial to deploy such a system with Disnix -- with NixOps you can define a machine configuration having a MySQL DBMS server and an Apache HTTP server with the corresponding databases and web application components. With Disnix, you must first ensure that the machines and the MySQL and Apache HTTP servers are configured by other means (that could, for example, be done with NixOps), and then you have to deploy the system's components with Disnix.

In a single machine deployment scenario, it may indeed not be that beneficial. However, what you get in addition to automated deployment is also more flexibility. Since Disnix manages the services directly, as opposed to entire machine configurations as a whole, you can respond better to events by redeploying the system.

For example, when the amount of visitors keeps growing, you may run into the problem that a single server can no longer handle all the traffic. In such cases, you can easily add another machine to the network and adjust the distribution model to move (for example) the databases to another machine:


{infrastructure}:

{
  usersdb = [ infrastructure.test2 ];
  cmsdb = [ infrastructure.test2 ];
  cmsgallerydb = [ infrastructure.test2 ];
  homeworkdb = [ infrastructure.test2 ];
  literaturedb = [ infrastructure.test2 ];
  portaldb = [ infrastructure.test2 ];

  cms = [ infrastructure.test1 ];
  cmsgallery = [ infrastructure.test1 ];
  homework = [ infrastructure.test1 ];
  literature = [ infrastructure.test1 ];
  users = [ infrastructure.test1 ];
  portal = [ infrastructure.test1 ];
}

By redeploying the system, we can take advantage of the additional system resources that the new machine provides:


$ disnixos-env -s services.nix \
    -n network-separate.nix \
    -d distribution-separate.nix --use-nixops

resulting in the following deployment architecture:


Likewise, countless other deployment strategies are possible to meet all kinds of non-functional requirements. For example, we can also distribute bundles of domain-specific application and database pairs over two machines:


$ disnixos-env -s services.nix \
    -n network-bundles.nix \
    -d distribution-bundles.nix --use-nixops

resulting in the following deployment architecture:


This approach is even more scalable than simply offloading the databases to another server.

In addition to scalability, there are countless other reasons to pick a certain distribution strategy. You could also, for example, distribute redundant instances of databases and applications as a failover to improve availability, or improve security by deploying the databases with privacy-sensitive data to a machine with restrictive network access.

State management


When updating the deployment of systems with Disnix (such as moving a database from one machine to another), there is a recurring limitation that you may frequently run into -- like Nix, Disnix only manages the static parts of the system, not any state. This means that a service's deployment can be reproduced elsewhere, but data, such as the content of a database, is not migrated.

For example, the sub system of example applications stores two kinds of data -- records in the MySQL database and files, such as images uploaded in the photo gallery or PDF files uploaded to the literature application. When moving these applications around, the data is not migrated.

As a possible solution, Disnix also provides simple state management facilities. When enabled, Disnix will take snapshots of the databases and filesets on the source machines, transfer the snapshots to the target machines, and finally restore the snapshots when moving a service from one machine to another in the distribution model.

State management can be enabled globally by passing the --deploy-state parameter to disnix-env (or by annotating the services with deployState = true; in the services model):


$ disnixos-env -s services.nix \
    -n network-bundles.nix \
    -d distribution-bundles.nix --use-nixops --deploy-state
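
Alternatively, the annotation in the services model could look as follows (a sketch only -- the service name is taken from the example above, but the package reference is hypothetical):

cmsdb = {
  name = "cmsdb";
  pkg = customPkgs.cmsdb; # hypothetical package reference
  type = "mysql-database";
  deployState = true; # snapshot and restore this service's state when it moves
};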

We can also directly use the state management system, e.g. for backup purposes. When running the following command:


$ disnix-snapshot

Disnix takes snapshots of all databases and web application state (e.g. the images in the photo gallery and uploaded PDF files) and transfers them to the coordinator machine. With the dysnomia-snapshots tool we can inspect the snapshot store:


$ dysnomia-snapshots --query-all
apache-webapplication/cms/1f9ed847885d2b3e3c67c51231122d958751eb5e2443c281e02e1d7108a505a3
apache-webapplication/cmsgallery/28d17a6941cb195a92e748aae737ccf524747477c6943436b734891d0f36fd53
apache-webapplication/literature/ed5ec4f8b9b4fcdb8b740ad1fa7ecb40b10dece03548f1d6e09a6a82c804131b
apache-webapplication/portal/5bbea499f8f8a4f708bb873ad683dbf088afa4c553f90ab287a9249a7ef02651
mysql-database/cmsdb/aa75992f780991c39a0969dcac5f69b04685c4fa764937476b816e938d6972ba
mysql-database/cmsgallerydb/31ebdaba658ca376123ff6a91a3e275731b383346a07840b1acaa1e44d921b65
mysql-database/homeworkdb/f0fda91545af0cb300afd84592d4914dcd48257053401e232438e34d83af828d
mysql-database/literaturedb/cb881c2200a5f1562f0b66f1394d0902bbb8e2361068fe096faac3bc31f76b5d
mysql-database/portaldb/5d8a5cb952f40ce76f93eb939d0b37eab33736d7b1e1426038322f8a572034ee
mysql-database/usersdb/64d11fc7f8969da5da318276a666f2e00e0a020ba619a1d82ed9b84a7f1c2ca6

and with some shell scripting, the actual contents of the snapshot store:


$ find $(dysnomia-snapshots --resolve $(dysnomia-snapshots --query-all)) -type f
/home/sander/state/snapshots/apache-webapplication/cms/1f9ed847885d2b3e3c67c51231122d958751eb5e2443c281e02e1d7108a505a3/state.tar.xz
/home/sander/state/snapshots/apache-webapplication/cmsgallery/28d17a6941cb195a92e748aae737ccf524747477c6943436b734891d0f36fd53/state.tar.xz
/home/sander/state/snapshots/apache-webapplication/literature/ed5ec4f8b9b4fcdb8b740ad1fa7ecb40b10dece03548f1d6e09a6a82c804131b/state.tar.xz
/home/sander/state/snapshots/apache-webapplication/portal/5bbea499f8f8a4f708bb873ad683dbf088afa4c553f90ab287a9249a7ef02651/state.tar.xz
/home/sander/state/snapshots/mysql-database/cmsdb/aa75992f780991c39a0969dcac5f69b04685c4fa764937476b816e938d6972ba/dump.sql.xz
/home/sander/state/snapshots/mysql-database/cmsgallerydb/31ebdaba658ca376123ff6a91a3e275731b383346a07840b1acaa1e44d921b65/dump.sql.xz
/home/sander/state/snapshots/mysql-database/homeworkdb/f0fda91545af0cb300afd84592d4914dcd48257053401e232438e34d83af828d/dump.sql.xz
/home/sander/state/snapshots/mysql-database/literaturedb/cb881c2200a5f1562f0b66f1394d0902bbb8e2361068fe096faac3bc31f76b5d/dump.sql.xz
/home/sander/state/snapshots/mysql-database/portaldb/5d8a5cb952f40ce76f93eb939d0b37eab33736d7b1e1426038322f8a572034ee/dump.sql.xz
/home/sander/state/snapshots/mysql-database/usersdb/64d11fc7f8969da5da318276a666f2e00e0a020ba619a1d82ed9b84a7f1c2ca6/dump.sql.xz

The above output shows that for each MySQL database, we store a compressed SQL dump of the database and for each stateful web application, a compressed tarball of state files.

Conclusion


In this blog post, I have described a more realistic public Disnix example that is inspired by my web framework developments of long ago. Aside from automating a system's deployment, the purpose of this blog post is to describe how a system can be decomposed into distributable services that can be deployed with Disnix. Implementing such a system is far from trivial and is driven by various kinds of design decisions.

Availability


The example web application system can be obtained from my GitHub page. The Disnix deployment expressions can be found in the deployment/ sub folder.

In addition, I have created a Dysnomia module named fileset that can capture the state files of web applications in a compressed tarball.

After the recent developments the Disnix toolset has reached a new stable point. As a result, I have decided to release Disnix 0.8. Consult the Disnix homepage for more information!

by Sander van der Burg (noreply@blogger.com) at February 25, 2018 09:59 PM

February 21, 2018

Joachim Schiele

nix-language-atlas-javascript

motivation

in the nix-language-atlas series on lastlog.de/blog i want to discuss how well programming languages i’m familiar with integrate with nix. let’s revisit javascript, as there have been major improvements since i wrote about it last time. we won’t look into the emscripten toolchain.

DSL-PM in general

are ‘domain specific language package managers’ (DSL-PM) such as npm or yarn a good thing?

integrate DSL-PM(s) into nix/nixpkgs

DSL-PM properties nix requires:

  • reproducible dependency calculations/downloads
  • reproducible configuration, build & installation into the store of each dependency
  • reproducible configuration, build & installation into the store of main target
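
for illustration, here is what a typical yarn.lock entry looks like (abbreviated, hash shown for illustration): the resolved url already carries a sha1 hash, which is exactly what makes reproducible downloads possible:

lodash@^4.17.4:
  version "4.17.4"
  resolved "https://registry.yarnpkg.com/lodash/-/lodash-4.17.4.tgz#78203a4d1c328ae1d86dca6460e369b57f4055ae"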

DSL-PM evolution (javascript)

note: read this if you are interested in npm/yarn differences. it seems that yarn forced many of its concepts into npm.

yarn2nix workflow

let’s see a simple example how to integrate a yarn based project into your nix codebase:

  1. we clone an example:

    git clone https://github.com/yarnpkg/example-yarn-package
    cd example-yarn-package
    mkdir bin
    touch bin/myapp
    chmod u+x bin/myapp
  2. let’s add a default.nix

    { pkgs ? import <nixpkgs> {} }:
    let
      yarn2nixSrc = pkgs.fetchFromGitHub {
        owner = "moretea";
        repo = "yarn2nix";
        rev = "0472167f2fa329ee4673cedec79a659d23b02e06";
        sha256 = "10gmyrz07y4baq1a1rkip6h2k4fy9is6sjv487fndbc931lwmdaf";
      };
      yarn2nixRepo = pkgs.callPackage yarn2nixSrc {};
      inherit (yarn2nixRepo) mkYarnPackage;
    in
      mkYarnPackage {
        src = ./.;
      }

    and add contents to bin/myapp

    #!/usr/bin/env node
    
    'use strict';
    
    // For package dependency demonstration purposes only
    var multiply = require('lodash/multiply');
    
    console.log("h" + multiply(2,0.5) + ", it works!")

    add a “bin” to package.json, see this patch:

  3. now build the source

    nix-build -Q
    building path(s) ‘/nix/store/gkdb8hsykw3idp4xpv6by618wad5s052-offline’
    building path(s) ‘/nix/store/b6kiqlcaacxrlq4nnjgk69mwjnrlvkpv-yarn2nix-modules-0.1.0’
    building
    yarn config v1.3.2
    success Set "yarn-offline-mirror" to "/nix/store/gkdb8hsykw3idp4xpv6by618wad5s052-offline".
    Done in 0.10s.
    yarn install v1.3.2
    [1/4] Resolving packages...
    [2/4] Fetching packages...
    [3/4] Linking dependencies...
    [4/4] Building fresh packages...
    warning Ignored scripts due to flag.
    Done in 4.99s.
    ...
    <skipped many lines>
    ...
    building path(s) ‘/nix/store/7yg1mah963w35dhxjcylw5q0r62bkk83-yarn.nix’
    these derivations will be built:
      /nix/store/07xy1c1yp4ada1sjiz7m416ic2i1b908-webidl-conversions-3.0.1.tgz.drv
      /nix/store/0c2xgds6pzsnsp1hmxdgaqh1n28d68k5-lodash.isarray-3.0.4.tgz.drv
      /nix/store/0dj6mwhpmzvnxzz2122isdbw85mjg0wv-regex-cache-0.4.3.tgz.drv
      /nix/store/0in2pcwsc3ln6glqkab5fsx98m28vq8g-lodash._baseassign-3.2.0.tgz.drv
      /nix/store/1rhagnq3gz68x5lbhca6z4rsx9jnabgh-is-dotfile-1.0.2.tgz.drv
      /nix/store/1vq9rlkn3qdy2acndj2q47ryswiv5yir-lodash.assign-4.2.0.tgz.drv
      /nix/store/2a4kqk4p54nvm31cyy2nnl2vzr4klyb8-babel-generator-6.14.0.tgz.drv
      /nix/store/2mf2dr0gx3bqxiqbpqdhhp8bhz95d34h-arr-diff-2.0.0.tgz.drv
      /nix/store/2pbiyjyqg57dnn0apj24lrvqp1cbix9f-expand-brackets-0.1.5.tgz.drv
      /nix/store/31h1f7rair2xq0di3b3agr7g2xfgncyz-jest-matcher-utils-15.1.0.tgz.drv
      /nix/store/3l6amqm7wdx1c45mypbgh8xgsl92zw76-exec-sh-0.2.0.tgz.drv
      /nix/store/3w6wxplb811vzfky5sgvbifm6j5p6fac-babel-core-6.14.0.tgz.drv
      /nix/store/48d8jwrpgg2pxm8mkyqvavrv4k5i2nab-lodash.keys-3.1.2.tgz.drv
      /nix/store/4l0s8plnxw2nzszbjjdicrr5np9cc2ni-jest-resolve-15.0.1.tgz.drv
      /nix/store/4mn98xigyzybcnsb71731pw8biirzzax-micromatch-2.3.11.tgz.drv
    ...
    <skipped many lines>
    ...
    building path(s) ‘/nix/store/chsx89ifr7dk7mc00d6789indh0rn41x-to-fast-properties-1.0.2.tgz’
    building path(s) ‘/nix/store/sqpsd9y77kkq1vvfpfcfzk9q347dwg14-walker-1.0.7.tgz’
    building path(s) ‘/nix/store/j7fp6iz0wj6dcp8ibsqyph2vy4p007wk-watch-0.10.0.tgz’
    building path(s) ‘/nix/store/nv5p8ijp15mc2kbpp26mvd8pn7vqlwyy-webidl-conversions-3.0.1.tgz’
    building path(s) ‘/nix/store/a3amdrmhzrvd3v1lsy1ih5rfpdamyj46-whatwg-url-3.0.0.tgz’
    building path(s) ‘/nix/store/vhwym48w1pvgzli9i57fnzqw0pjp3a83-window-size-0.2.0.tgz’
    building path(s) ‘/nix/store/2dg59ikzfrdwbcpkwg2xwjxs9rhpilb8-worker-farm-1.3.1.tgz’
    building path(s) ‘/nix/store/497p7kg9iyb09ah0vvqx88dpq3bn843p-yargs-5.0.0.tgz’
    building path(s) ‘/nix/store/w3lmwwq4cac653nhajnvyl1k2vhivaws-yargs-parser-3.2.0.tgz’
    building path(s) ‘/nix/store/ywn6ysmnvrkyzjwpdiwr4iivk9z9kx2y-offline’
    building path(s) ‘/nix/store/pnbb86igrhhfw3bdv9kcqnyz4hxgfkmd-example-yarn-package-modules-1.0.0’
    building path(s) ‘/nix/store/r9s6cmq5ik47c0s1hd48xz3r6pixmm1r-example-yarn-package-1.0.0’
    /nix/store/r9s6cmq5ik47c0s1hd48xz3r6pixmm1r-example-yarn-package-1.0.0

    note: the last output line of nix-build is what we are interested in!

  4. checking the result

    ls -lathr result/node_modules/
    
    ...
    <skipped many lines>
    ...
    lrwxrwxrwx 1 root root  102  1. Jan 1970  align-text -> /nix/store/pnbb86igrhhfw3bdv9kcqnyz4hxgfkmd-example-yarn-package-modules-1.0.0/node_modules/align-text
    lrwxrwxrwx 1 root root  105  1. Jan 1970  acorn-globals -> /nix/store/pnbb86igrhhfw3bdv9kcqnyz4hxgfkmd-example-yarn-package-modules-1.0.0/node_modules/acorn-globals
    lrwxrwxrwx 1 root root   97  1. Jan 1970  acorn -> /nix/store/pnbb86igrhhfw3bdv9kcqnyz4hxgfkmd-example-yarn-package-modules-1.0.0/node_modules/acorn
    lrwxrwxrwx 1 root root   98  1. Jan 1970  abbrev -> /nix/store/pnbb86igrhhfw3bdv9kcqnyz4hxgfkmd-example-yarn-package-modules-1.0.0/node_modules/abbrev
    lrwxrwxrwx 1 root root   96  1. Jan 1970  abab -> /nix/store/pnbb86igrhhfw3bdv9kcqnyz4hxgfkmd-example-yarn-package-modules-1.0.0/node_modules/abab
  5. execute the binary

    /nix/store/r9s6cmq5ik47c0s1hd48xz3r6pixmm1r-example-yarn-package-1.0.0/bin/myapp
    h1, it works!

using nix-build we declaratively built the node package!

yarn2nix internals

yarn (imperative)

  1. yarn -> read yarn.lock
  2. make / build all the downloads
  3. come up with node_modules folder
  4. yarn install

    note: in general we skip step (4.)

yarn (declarative)

  1. yarn2nix -> read yarn.lock
  2. translate each dependency into a mkDerivation

    each is a mkDerivation residing in /nix/store/…

  3. evaluate each mkDerivation, gain store path
  4. call yarn with the list of all mkDerivation(s), yarn comes up with node_modules

    a single mkDerivation also residing in /nix/store/… but encapsulating other /nix/store entries

  5. final mkDerivation uses this node_modules and creates the store path we are interested in
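
to make step 2 above concrete, here is a sketch of the kind of fetch derivation that gets generated per yarn.lock entry (field names assumed from memory, not verbatim yarn2nix output):

{
  name = "lodash-4.17.4.tgz";
  path = pkgs.fetchurl {
    name = "lodash-4.17.4.tgz";
    url = "https://registry.yarnpkg.com/lodash/-/lodash-4.17.4.tgz";
    sha1 = "78203a4d1c328ae1d86dca6460e369b57f4055ae";
  };
}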

conclusion

  • take notice, and i can’t stress this enough: we map yarn.lock entries into single pkgs.stdenv.mkDerivation(s) (by calling mkYarnPackage) but later pass all of these into yarn, which creates the node_modules contents from them. this is exactly how a DSL-PM must be designed to be easily integrated
  • yarn.lock obsoletes the requirement of manually creating a dependency file per project:
    • shown in the example above where i only added a default.nix
    • i wish dep’s Gopkg.lock (used with go) would also contain a hash already, but they are using GIT and we have to call nix-prefetch-git to generate a sha256 hash manually
  • yarn2nix might not be as advanced as node2nix but we at nixcloud.io use it for all projects. yarn/yarn2nix is really fast, in dependency calculation and deployment, compared to what we had before
  • i wish that npm’s sha1 would be replaced by something more recent, but in comparison to dep they at least have a hash of some sort
  • yarn2nix is not a part of nixpkgs yet, see https://github.com/moretea/yarn2nix/issues/5

by qknight at February 21, 2018 12:35 PM

February 11, 2018

Sander van der Burg

Deploying systems with circular dependencies using Disnix


Some time ago, during my PhD thesis defence, one of my committee members asked me how I would deploy systems with Disnix in which services have circular dependencies.

It was an interesting question because Disnix defines dependencies between services (that typically involve network connections) as inter-dependencies that have two properties:

  • They allow services to find services they depend on by providing their connection properties
  • They ensure that any inter-dependency is activated before the service itself, so that no failures will occur because of missing dependencies -- in Disnix, a service is either available or unavailable, but never in a broken state due to missing inter-dependencies at runtime.

In a system with circular dependencies, the ordering property is problematic -- it is impossible to activate one dependency before another without having broken connections between them.

During the defence, I had to admit that I have never deployed such systems with Disnix before, but that there were a couple of possible solutions to cope with such constraints. For example, you can propagate properties of the distribution model directly to a service, as opposed to declaring circular inter-dependencies. Then the ordering requirement is not enforced.

I also explained that systems should not have any hard cyclic requirements on other services, but should instead compose their (potentially bidirectional) communication channels at runtime. Furthermore, I explained that circular dependencies are bad from a reuse perspective -- when two services mutually depend on each other, they should ideally be one service.

Although the answer sufficed (e.g. it provided the answer that it was possible), the solution basically relies on unconventional usage of the deployment tool. Recently, as a personal exercise, I have decided to dig up this question again and explore the possibilities of deploying systems with circular dependencies.

Chord: a peer-to-peer distributed hash table


When thinking of an example system that has a circular dependency structure, the first thing that came to mind was Chord: a peer-to-peer distributed hash table (a copy of the research paper written by Stoica et al can be found here). An interesting fact is that I had to implement it many years ago in the lab course of the distributed algorithms course taught by another member of my PhD thesis committee.

A Chord network has circular runtime dependencies because it has a ring structure -- in a network that has more than one node, each node has a successor and a predecessor link, in which no node has the same predecessor or successor and the last successor link refers to the first node:


The Chord nodes (shown in the figure above) constitute a distributed peer-to-peer hash table. In addition to the fact that it can store key and value pairs (all kinds of objects), it also distributes the data over the nodes in the network.

Moreover, its operations are decentralized -- for example, when it is desired to search for an object or to store new objects in the hash table, it is possible to consult any node in the network. The system will redirect the caller to the appropriate node that should host the data.

Various kinds of implementations exist of the Chord protocol. The official reference implementation is a filesystem abstraction layer built on top of it. I experimented with the Java-based OpenChord implementation that is capable of storing arbitrary serializable Java objects.

More details about the implementation details of Chord operations can be found in the research paper.

Deploying a Chord network


One of the challenges I faced during the lab course is that I had to deploy a test Chord network with a small collection of nodes. At that time, I had no proper deployment automation. I ended up writing a bash shell script that spawned a collection of processes in parallel.

Because deployment was complicated, I never tried more complex scenarios than running a small collection of processes on a single machine. Because the lab course did not require more than just that, I never tried, for example, any real network communication deployments in which I had to distribute Chord nodes over multiple computer systems. The latter would have introduced even more complexity to the deployment process.

Deploying a Chord network basically works as follows:

  • First, we must deploy an initial node that has no connection to a predecessor or successor node.
  • Then for each additional node, we call the join operation to attach it to the network. As explained earlier, a Chord hash-table is decentralized and as a result, we can consult any node we want in the network for the join process. The join and stabilization procedures decide which predecessor and successor a new node actually gets.

There are various strategies to join additional nodes to the network, but what I ended up doing is using the initial node as a bootstrap node -- all successive nodes simply join to the bootstrap node and the network stabilizes to become a ring.

(As a sidenote: you could argue whether this is a good process, since the introduction of a central bootstrap node during the deployment process violates the peer-to-peer constraint, but that is a different story. Obviously, you could also think of other bootstrap strategies, but that is beyond the scope of this blog post).

Automating a Chord network deployment with Disnix


To experiment with a Chord network, I have decided to create a simple server process (using the OpenChord API) whose only responsibility is to store data. It can optionally join another node in the network and it has a command-line interface allowing me to conveniently specify the connection parameters.

The deployment strategy using the initial node as a bootstrap node can be easily automated with Disnix. In the Disnix services model, we can define the bootstrap node as follows:


ChordBootstrapNode = rec {
  name = "ChordBootstrapNode";
  pkg = customPkgs.ChordBootstrapNode { inherit port; };
  port = 8001;
  portAssign = "private";
  type = "process";
};

The above service configuration corresponds to a process that binds the service to a provided TCP port.

Each successive node can be defined as a service that has an inter-dependency on the bootstrap node:


ChordNode1 = rec {
  name = "ChordNode1";
  pkg = customPkgs.ChordNode { inherit port; };
  port = 8002;
  portAssign = "private";
  type = "process";
  dependsOn = {
    inherit ChordBootstrapNode;
  };
};

As can be seen in the above Nix expression, the dependsOn attribute specifies that the node has an inter-dependency on the bootstrap node. The inter-dependency declaration provides the connection settings of the bootstrap node to the command-line utility that spawns the service and ensures that the bootstrap node is deployed first.

By providing an infrastructure model containing a number of machines and writing a distribution model that maps the nodes to machines, such as:


{infrastructure}:

{
  ChordBootstrapNode = [ infrastructure.test1 ];
  ChordNode1 = [ infrastructure.test1 ];
  ChordNode2 = [ infrastructure.test2 ];
  ChordNode3 = [ infrastructure.test2 ];
}

we can deploy a Chord network consisting of 4 nodes distributed over two machines by running:


$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix

This is the resulting deployment architecture of the Chord network that gets deployed:

In the above picture, the light grey colored boxes denote machines, the dark grey colored boxes container environments, the ovals services and the arrows inter-dependency relationships.

By running the OpenChord console, we can join any of our nodes in the network, such as the third node deployed to machine test2:


$ /nix/var/nix/profiles/disnix/default/bin/openchord-console
> joinN -port 9000 -bootstrap test2:8001
Trying to join chord network with boostrap URL ocsocket://test2:8001/
URL of created chord node ocsocket://192.168.56.102:9000/.

we can check the references that the console node has:


> refsN
Node: C1 F0 42 95 , ocsocket://192.168.56.102:9000/
Finger table:
59 E4 86 AC , ocsocket://test2:8001/ (0-159)
Successor List:
59 E4 86 AC , ocsocket://test2:8001/
64 F1 96 B9 , ocsocket://test1:8001/
Predecessor: 9C 51 42 1F , ocsocket://test2:8002/

As may be observed in the output above, our predecessor is node 3 deployed to machine test2 and our successors are node 3 deployed to machine test2 and node 1 deployed to machine test1.

We can also insert and retrieve the data we want:


> insertN -key test -value test
> entriesN
Entries:
key = A9 4A 8F E5 , value = [( key = A9 4A 8F E5 , value = test)]

Defining services with circular dependencies in Disnix


As shown in the previous paragraph, the ring structure of a Chord hash table is constructed at runtime. As a result, Disnix does not need to manage any circular dependencies. Instead, it only has to know the dependencies of the bootstrap phase which are not cyclic at all.

I was also curious whether I could modify Disnix to properly define circular dependencies, without any workarounds such as directly propagating properties from the distribution model. As explained in the introduction, inter-dependencies have two properties, of which the second one is problematic: the ordering constraint.

To cope with the problematic ordering property, I have introduced a new property in the services model called connectsTo, allowing users to specify inter-dependencies for which the ordering does not matter. The connectsTo property makes it possible for services to define mutual dependencies on each other.

As an example case, I have extended the Disnix composition examples (a set of trivial examples implementing "Hello world" testcases) with a cyclic example case. In this new sub example, I have created a web application that both contains a server returning the "Hello world!" string and a client displaying the string. The result would be the following screen:


(Does it look cool? :p)

A web application instance is capable of connecting to another web service to obtain the "Hello world!" message to display. We can compose two web application instances that refer to each other to accomplish this.

The corresponding services model looks as follows:


{distribution, invDistribution, system, pkgs}:

let
  customPkgs = import ../top-level/all-packages.nix {
    inherit system pkgs;
  };
in
rec {
  HelloWorldCycle1 = {
    name = "HelloWorldCycle1";
    pkg = customPkgs.HelloWorldCycle;
    connectsTo = {
      # Depends on the other cyclic service
      HelloWorldCycle = HelloWorldCycle2;
    };
    type = "tomcat-webapplication";
  };

  HelloWorldCycle2 = {
    name = "HelloWorldCycle2";
    pkg = customPkgs.HelloWorldCycle;
    connectsTo = {
      # Depends on the other cyclic service
      HelloWorldCycle = HelloWorldCycle1;
    };
    type = "tomcat-webapplication";
  };
}

As may be observed in the above code fragment, the first service has a dependency on the second, while the second also has a dependency on the first. They are allowed to refer to each other because the connectsTo property disregards ordering.

By mapping the services to a network of machines that have Apache Tomcat hosted:


{infrastructure}:

{
  HelloWorldCycle1 = [ infrastructure.test1 ];
  HelloWorldCycle2 = [ infrastructure.test2 ];
}

and deploying the system:


$ disnix-env -s services-cyclic.nix \
    -i infrastructure.nix \
    -d distribution-cyclic.nix

We end up with a deployment architecture of two services having cyclic dependencies:


To produce the above visualization, I have extended the disnix-visualize tool with support for the connectsTo property that displays inter-dependencies as dashed arrows (as opposed to solid arrows that denote ordinary inter-dependencies).

In addition to the option to specify circular dependencies, the connectsTo property has another interesting use case -- when services have inter-dependencies that are permitted to be broken temporarily, we can shorten the duration of upgrade processes.

Normally, when a service gets upgraded, all its inter-dependent services will be reactivated. This is an implication of Disnix's strictness -- a service is either available or unavailable, but never broken because of missing inter-dependencies.

However, all these extra reactivations can make the upgrade phase quite expensive. If a link is non-critical and is permitted to be down for a short while, then redeployments can be made faster.

Conclusion


In this blog post, I have described two deployment experiments with Disnix involving systems that have circular dependencies -- a Chord-based distributed hash table (that constructs a ring structure at runtime) and a trivial toy example system in which two services have mutual dependencies on each other.

Availability


The newly introduced connectsTo property is part of the development version of Disnix and will become available in the next release.

The composition example and newly created Chord example can be found on my GitHub page.

by Sander van der Burg (noreply@blogger.com) at February 11, 2018 11:31 PM

Matthew Bauer

nix-buffer: nix-shell in Emacs

Shea Levy has a wonderful package out for users of Nix and Emacs. It’s called nix-buffer and it greatly improves working on Nix stuff in Emacs. To try it, just get it from MELPA by running M-x package-install<RET>nix-buffer.

To use it in a project you need to create a new file in your project directory called dir-locals.nix. It works a lot like .dir-locals.el but it allows you to evaluate Nix expressions.

To set it up, just create a dir-locals.nix that looks like this:

let pkgs = import <nixpkgs> {};
in with pkgs; nixBufferBuilders.withPackages [ … ]

You can put anything from Nixpkgs inside of withPackages. That path is added to your exec-path in Emacs. Using this you can wrap your project in a container and avoid conflicts between project configurations!
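For instance, a dir-locals.nix for a C project might pull in a compiler and build tools (the package names below are just an illustration, not from the original post):

let pkgs = import <nixpkgs> {};
in with pkgs; nixBufferBuilders.withPackages [ gcc gnumake pkgconfig ]

Opening a file under that directory then gives Emacs an exec-path containing those tools.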

There’s lot of future extensions for this. Ideally we could skip and the dir-locals.nix configuration and automatically detect what dependencies you need based on your default.nix file.

February 11, 2018 12:00 AM

February 04, 2018

Graham Christensen

Prometheus and the NixOS System Version

Use the Prometheus Node Exporter Textfile collector to help correlate changes in your application’s behavior with NixOS deployments.

First, configure Prometheus’s NodeExporter to enable Textfile collection:

services.prometheus.nodeExporter = {
  enable = true;
  enabledCollectors = [
    "textfile"
  ];
  extraFlags = [
    "--collector.textfile.directory=/var/lib/prometheus-node-exporter-text-files"
  ];
};

Second, populate the textfile directory with the current system version on every boot and deployment:

system.activationScripts.node-exporter-system-version = ''
  # Ensure the textfile collector directory exists
  mkdir -pm 0775 /var/lib/prometheus-node-exporter-text-files
  (
    cd /var/lib/prometheus-node-exporter-text-files
    # Write to a temporary file first, then rename it into place, so
    # the collector never reads a partially written file
    (
      echo -n "system_version ";
      readlink /nix/var/nix/profiles/system | cut -d- -f2
    ) > system-version.prom.next
    mv system-version.prom.next system-version.prom
  )
'';
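The profile symlink resolves to a target like system-123-link, and the cut -d- -f2 invocation extracts the generation number from it. So on a machine running system generation 123, the generated file would simply contain:

system_version 123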

Then, configure Grafana to use the system_version as an Annotation with the following query:

changes(system_version[5m])

February 04, 2018 12:00 AM

January 31, 2018

Sander van der Burg

Diagnosing problems and running maintenance tasks in a network with services deployed by Disnix

I have been maintaining a production system with Disnix for quite some time. Although deployment works quite conveniently for me (I may probably be a bit biased, since I created Disnix :-) ), you cannot get around unforeseen incidents and problems, such as:

  • Crashing processes due to bugs or excessive load.
  • Database problems, such as inconsistencies in the data.

Errors in distributed systems are typically much more difficult to debug than single machine system failures. For example, tracing the origins of an error in distributed systems is generally hard -- one service's fault may be caused by a message propagated by another service residing on a different machine in the network.

But even if you know the origins of an error (e.g. you can clearly observe that a web application is crashing or that a database connection fails), you may face other kinds of challenges:

  • You have to figure out to which machine in the network a service has been deployed.
  • You have to connect to the machine, e.g. through an SSH connection, to run debugging tasks.
  • You have to know the configuration properties of a service to diagnose it -- in Disnix, as explained in earlier blog posts, services can take any form -- they can be web services, but also web applications, databases and processes.

Because of these challenges, diagnosing errors and running maintenance tasks in a system deployed by Disnix can be unnecessarily time-consuming and inconvenient.

To alleviate this burden, I have developed a small tool and an extension that establish remote shell connections with environments providing all relevant configuration properties. Furthermore, the tool gives suggestions to the end user, explaining what kinds of maintenance tasks they could carry out.

The shell activity of Dysnomia


As explained in previous Disnix-related blog posts, Disnix carries out all activities to deploy a service-oriented system to a network of machines (i.e. to bring it in a running state), such as building services from source code, distributing their intra-dependency closures to the target machines, and activating or deactivating every service.

For the build and distribution activities, Disnix uses, as its name implies, the Nix package manager because it offers a number of powerful properties, such as strong reproducibility guarantees and atomic upgrades and rollbacks.

For the remaining activities that Nix does not support, e.g. activating or deactivating services, Disnix uses a companion tool called Dysnomia. Because services in a Disnix context could take any form, there is no generic means to activate or deactivate them -- for this reason, Dysnomia provides a plugin system with modules that carry out specific activities for a specific service type.

One of the plugins that Dysnomia provides supports the deployment of MySQL databases to a MySQL DBMS server. Dysnomia deployment activities are driven by two kinds of configuration specifications. A component configuration defines the properties of a deployable unit, such as a MySQL database:


create table author
( AUTHOR_ID  INTEGER      NOT NULL,
  FirstName  VARCHAR(255) NOT NULL,
  LastName   VARCHAR(255) NOT NULL,
  PRIMARY KEY(AUTHOR_ID)
);

create table books
( ISBN       VARCHAR(255) NOT NULL,
  Title      VARCHAR(255) NOT NULL,
  AUTHOR_ID  INTEGER      NOT NULL,
  PRIMARY KEY(ISBN),
  FOREIGN KEY(AUTHOR_ID) references author(AUTHOR_ID) on update cascade on delete cascade
);

The above configuration is a MySQL script (~/testdb) that creates the database schema consisting of two tables.

The container configuration captures properties of the environment in which the component should be hosted, which is, in this particular case, a MySQL DBMS server:


type=mysql-database
mysqlUsername=root
mysqlPassword=verysecret

The above container configuration (~/mysql-production) defines the type, stating that the mysql-database plugin must be used, and provides the authentication credentials required to connect to the DBMS server.

The Dysnomia plugin for MySQL implements various kinds of deployment activities for MySQL databases. For example, the activation activity is implemented as follows:


...

case "$1" in
    activate)
        # Initialize the given schema if the database does not exist
        if [ "$(echo "show databases" | @mysql@ --user=$mysqlUsername --password=$mysqlPassword -N | grep -x $componentName)" = "" ]
        then
            ( echo "create database $componentName;"
              echo "use $componentName;"

              if [ -d $2/mysql-databases ]
              then
                  cat $2/mysql-databases/*.sql
              fi
            ) | @mysql@ $socketArg --user=$mysqlUsername --password=$mysqlPassword -N
        fi
        markComponentAsActive
        ;;

    ...
esac

The above code fragment checks whether a database with the given schema exists and if it does not, it will create it by running the database initialization script provided by the component configuration. As may also be observed, the above activity uses the container properties (such as the authentication credentials) as environment variables.

Dysnomia activities can be executed by invoking the dysnomia command-line tool. For example, the following command will activate the MySQL database in the MySQL database server:


$ dysnomia --operation activate \
--component ~/testdb --container ~/mysql-production
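The other activities a plugin implements can be invoked the same way; for example, deactivation (a sketch based on the same command-line interface):

$ dysnomia --operation deactivate \
  --component ~/testdb --container ~/mysql-production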

To make the execution of arbitrary tasks more convenient, I have created a new Dysnomia option called: shell. The shell operation is basically an activity that does not execute anything, but instead spawns a shell session that provides the container configuration properties as environment variables.

Moreover, the shell activity of a Dysnomia plugin typically displays suggestions for shell commands that the user may want to carry out.

For example, when we run the following command:


$ dysnomia --shell \
--component ~/testdb --container ~/mysql-production

Dysnomia spawns a shell session that shows the following:


This is a shell session that can be used to control the 'staff' MySQL database.

Module specific environment variables:
mysqlUsername Username of the account that has the privileges to administer
the database
mysqlPassword Password of the above account
mysqlSocket Path to the UNIX domain socket that is used to connect to the
server (optional)

Some useful commands:
/nix/store/h0kcf5g2ssyancr9m2i8sr09b3wq2zy0-mariadb-10.1.28/bin/mysql --user=$mysqlUsername --password=$mysqlPassword staff Start a MySQL interactive terminal

General environment variables:
this_dysnomia_module Path to the Dysnomia module
this_component Path to the mutable component
this_container Path to the container configuration file

[dysnomia-shell:~]#

By executing the command-line suggestion shown in the above shell session, we get a MySQL interactive terminal allowing us to execute arbitrary SQL commands. It saves us the burden of looking up all the MySQL configuration properties, such as the authentication credentials and the database name.

The Dysnomia shell feature is heavily inspired by nix-shell, which works in quite a similar way -- it takes the build dependencies of a package build as inputs (which typically manifest themselves as environment variables) and fetches the sources, but it does not execute the package build procedure. Instead, it spawns an interactive shell session allowing the user to execute arbitrary build tasks. This Nix feature is particularly useful for development projects.
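For example, the following standard nix-shell invocation (shown here just for comparison) drops you into the build environment of the hello package from Nixpkgs:

$ nix-shell '<nixpkgs>' -A hello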

Diagnosing services with Disnix


In addition to extending Dysnomia with the shell feature, I have also extended Disnix to make this feature available in a distributed context.

The following command can be executed to spawn a shell for a particular service of the ridiculous staff tracker example (that happens to be a MySQL database):


$ disnix-diagnose -S staff
[test2]: Connecting to service: /nix/store/yazjd3hcb9ds160cq03z66y5crbxiwq0-staff deployed to container: mysql-database
This is a shell session that can be used to control the 'staff' MySQL database.

Module specific environment variables:
mysqlUsername Username of the account that has the privileges to administer
the database
mysqlPassword Password of the above account
mysqlSocket Path to the UNIX domain socket that is used to connect to the
server (optional)

Some useful commands:
/nix/store/h0kcf5g2ssyancr9m2i8sr09b3wq2zy0-mariadb-10.1.28/bin/mysql --user=$mysqlUsername --password=$mysqlPassword staff Start a MySQL interactive terminal

General environment variables:
this_dysnomia_module Path to the Dysnomia module
this_component Path to the mutable component
this_container Path to the container configuration file

[dysnomia-shell:~]#

The above command-line instruction will look up the location of the staff database in the configuration of the system that is currently deployed, connect to the corresponding machine (typically through SSH) and spawn a Dysnomia shell for the given service type.

In addition to an interactive shell, you can also directly run shell commands. For example, the following command will query all the staff records:


$ disnix-diagnose -S staff \
  --command 'echo "select * from staff" | mysql --user=$mysqlUsername --password=$mysqlPassword staff'

In most cases, only one instance of a service exists, but Disnix can also deploy redundant instances of the same service. For example, we may want to deploy two redundant instances of the web application front end in the distribution.nix configuration file:


stafftracker = [ infrastructure.test1 infrastructure.test2 ];

When trying to spawn a Dysnomia shell, the tool returns an error because it does not know to which instance to connect:


$ disnix-diagnose -S stafftracker
Multiple mappings found! Please specify a --target and, optionally, a
--container parameter! Alternatively, you can execute commands for all possible
service mappings by providing a --command parameter.

This service has been mapped to:

container: apache-webapplication, target: test1
container: apache-webapplication, target: test2

In this case, we must refine our query with a --target parameter. For example, the following command connects to the web front-end on the test1 machine:


$ disnix-diagnose -S stafftracker --target test1

It is still possible to execute remote shell commands for redundantly deployed services. For example, the following command gets executed twice, because we have two instances deployed:


$ disnix-diagnose -S stafftracker \
--command 'echo I will see this message two times!'

In some cases, you may want to execute other kinds of maintenance tasks or you simply want to know where a particular service resides. This can be done by running the following command:


$ disnix-diagnose -S stafftracker --show-mappings
This service has been mapped to:

container: apache-webapplication, target: test1
container: apache-webapplication, target: test2

Conclusion


In this blog post, I have described a new feature of Dysnomia and Disnix that spawns interactive shell sessions making problem solving and maintenance tasks more convenient.

disnix-diagnose and the shell extension are part of the development versions of Disnix and Dysnomia and will become available in the next release.

by Sander van der Burg (noreply@blogger.com) at January 31, 2018 10:55 PM

January 08, 2018

Sander van der Burg

Syntax highlighting Nix expressions in mcedit

The year 2017 has passed and 2018 has now started. For quite a few people, this is a good moment for reflection (as I have done in my previous blog post) and to think about new year's resolutions. New year's resolutions are typically about adopting good new habits and rejecting old bad ones.

Orthodox file managers



One of my unconventional habits is that I like orthodox file managers and use them extensively. Orthodox file managers have a number of interesting properties:

  • They typically display textual lists of files, as opposed to icons or thumbnails.
  • They typically have two panels for displaying files: one source and one destination panel.
  • They may also have a third panel (typically placed underneath the source and destination panels) that serves as a command-line prompt.

The first orthodox file manager I ever used was DirectoryOpus on the Commodore Amiga. For nearly all operating systems and desktop environments that I have touched ever since, I have been using some kind of orthodox file manager, such as Total Commander and Midnight Commander.


Over the years, I have received many questions from various kinds of people -- they typically ask me what is so appealing about using such a "weird program" and why I have never considered switching to a more "traditional way" of working, because "that would be more efficient".

Aside from the fact that it is probably mostly inertia, my motivating factors are the following:

  • Lists of files allow me to see more relevant and interesting details. In many traditional file managers, much of the screen space is wasted by icons and the spacing between them. Furthermore, traditional file managers may typically hide properties of files that I also typically want to know about, such as a file's size or modification timestamp.
  • Some file operations involve a source and destination, such as copying or moving files. In an orthodox file manager, these operations can be executed much more intuitively IMO because there is always a source and destination panel present. When I am using a traditional file manager, I typically have to interrupt my workflow to open a second destination window, and use it to browse to my target location.
  • All the orthodox file managers I have mentioned implement virtual file system support allowing me to browse compressed archives and remote network locations as if they were directories.

    Nowadays, VFS support is not exclusive to orthodox file managers anymore, but it has existed in orthodox file managers much longer.

    Moreover, I consider the VFS properties of orthodox file managers to be much more powerful. For example, the Windows file explorer can browse Zip archives, but Total Commander also has first-class support for many more kinds of archives, such as RAR, ACE, LhA, 7-zip and tarballs, and can easily be extended to support many other kinds of file systems through an add-on system.
  • They have very powerful search properties. For example, searching for a collection of files having certain kinds of text patterns can be done quite conveniently.

    As with VFS support, this feature is not exclusive to orthodox file managers, but I have noticed that their search functions are still considerably more powerful than most traditional file managers.

From all the orthodox file managers listed above, Midnight Commander is the one I have been using the longest -- it was one of the first programs I used when I started using Linux (in 1999) and I have been using it ever since.

Midnight Commander also includes a text editor named: mcedit that integrates nicely with the search function. Although I have experience with half a dozen editors (such as vim and various IDEs, such as Eclipse and Netbeans), I have been using mcedit, mostly for editing configuration files, shell scripts and simple programs.

Syntax highlighting in mcedit


Earlier in the introduction I mentioned "new year's resolutions", which may suggest that I intend to quit using orthodox file managers and an unconventional editor such as mcedit. Actually, this is not something I am planning to do :-).

In addition to Midnight Commander and mcedit, I have also been using another unconventional program for quite some time, namely the Nix package manager, since late 2007.

What I noticed is that, despite being primitive, mcedit has reasonable syntax highlighting support for a variety of programming languages. Unfortunately, what I still miss is support for the Nix expression language -- the DSL that is used to specify package builds and system configurations.

For quite some time, editing Nix expressions was a primitive process for me. To improve my unconventional way of working a bit, I have decided to address this shortcoming in my Christmas break by creating a Nix syntax configuration file for mcedit.

Implementing a syntax configuration for the Nix expression language


mcedit provides syntax highlighting (the format is described in the manual page) for a number of programming languages. The syntax highlighting configurations seem to follow similar conventions, probably because of the fact that programming languages influence each other a lot.

As with many programming languages, the Nix expression language has its own influences as well, such as Haskell, C, bash, JavaScript (more specifically: the JSON subset) and Perl.

I have decided to adopt similar syntax highlighting conventions in the Nix expression syntax configuration. I started by examining Nix's lexer module (src/libexpr/lexer.l):

  • First, I took the keywords and operators, and configured the syntax highlighter to color them yellow. Yellow keywords is a convention that other syntax highlighting configurations also seem to follow.
  • Then I implemented support for single line and multi-line comments. The context directive turned out to be very helpful -- it makes it possible to color all characters between a start and stop token. Comments in mcedit are typically brown.
  • The next step was the numbers. Unfortunately, the syntax highlighter does not have full support for regular expressions. For example, you cannot specify character ranges, such as [0-9]+. Instead you must enumerate all characters one by one:

    keyword whole \[0123456789\]

    Floating point numbers were a bit trickier to support, but fortunately I could steal them from the JavaScript syntax highlighter, since the formatting Nix uses is exactly the same.
  • Strings were also relatively simple to implement (with the exception of anti-quotations) by using the context directive. I have configured the syntax highlighter to color them green, similar to other programming languages.
  • The Nix expression language also supports objects of the URL or path type. Since there is no other language that I am aware of that has a similar property, I have decided to color them white, with the exception of system paths -- system paths look very similar to the C preprocessor's #include path arguments, so I have decided to color them red, similar to the C syntax highlighter.

    To properly support paths, I implemented an approximation of the regular expression used in Nix's lexer. Without full regular expression support, it is extremely difficult to make a direct translation, but for all my use cases it seems to work fine.

After configuring the above properties, I noticed that there were still some bits missing. The next step was opening the parser configuration (src/libexpr/parser.y) and look for any missing characters.

I discovered that there were still separators that I needed to add (e.g. parenthesis, brackets, semi-colons etc.). I have configured the syntax highlighter to color them bright cyan, with the exception of semi-colons -- I colored them purple, similar to the C and JavaScript syntax highlighter.

I also added syntax highlighting for the builtin functions (e.g. derivation, map and toString) so that they appear in cyan. This convention is similar to bash's syntax highlighting.

The implementation process of the Nix syntax configuration was generally straightforward, except for one thing -- anti-quotations. Because we only have a primitive lexer and no parser, it is impossible to have a configuration that covers all possibilities. For example, anti-quotations in strings that embed strings cannot be properly supported. I ended up with an implementation that only works for simple cases (e.g. a reference to an identifier or a file).
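To give an impression of the format, a minimal sketch of such rules could look as follows (the keywords, colors and comment contexts follow the conventions described above; the actual configuration is more elaborate):

context default
    keyword whole let yellow
    keyword whole in yellow
    keyword whole rec yellow
    keyword whole with yellow
context # \n brown
context /\* \*/ brown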

Results


The syntax highlighter works quite well for the majority of expressions in the Nix packages collection -- for example, the expression for the Disnix package, the top-level expression that contains the package compositions, and most Hydra release.nix configurations (such as the one used for node2nix) are all highlighted properly.


Availability


The Nix syntax configuration can be obtained from my GitHub page. It can be used by installing it in a user's personal configuration directory, or by deploying a patched version of Midnight Commander. More details can be found in the README.

by Sander van der Burg (noreply@blogger.com) at January 08, 2018 10:44 PM

December 19, 2017

Sander van der Burg

Bypassing NPM's content addressable cache in Nix deployments and generating expressions from lock files

Roughly half a year ago, Node.js version 8 was released, which also includes a major NPM package manager update (version 5). NPM version 5 has a number of substantial changes over the previous version, such as:

  • It uses package lock files that pinpoint the resolved versions of all dependencies and transitive dependencies. When a project with a bundled package-lock.json file is deployed, NPM will use the pinpointed versions of the packages that are in the lock file making it possible to exactly reproduce a deployment elsewhere. When a project without a lock file is deployed for the first time, NPM will generate a lock file.
  • It has a content-addressable cache that optimizes package retrieval processes and allows fully offline package installations.
  • It uses SHA-512 hashing (as opposed to the significantly weakened SHA-1) for packages published in the NPM registry.

Although these features offer significant benefits over previous versions -- e.g. NPM deployments are now much faster, more secure and more reliable -- they also come with a big drawback: they break the integration with the Nix package manager in node2nix. Solving these problems was much harder than I initially anticipated.

In this blog post, I will explain how I have adjusted the generation procedure to cope with NPM's new conflicting features. Moreover, I have extended node2nix with the ability to generate Nix expressions from package-lock.json files.

Lock files


One of the major new features in NPM 5.0 is the lock file (the idea itself is not so new, since NPM-inspired solutions such as yarn and the PHP-based composer have supported lock files for quite some time).

A major drawback of NPM's dependency management is that version specifiers are nominal. They can refer to specific versions of packages in the NPM registry, but also to version ranges, or to external artifacts such as Git repositories. The latter categories of version specifiers affect reproducibility -- for example, the version range specifier >= 1.0.0 may refer to version 1.0.0 today and to version 1.0.1 tomorrow, making it extremely hard to reproduce a deployment elsewhere.

In a development project, it is still possible to control the versions of dependencies by using a package.json configuration that only refers to exact versions. However, for transitive dependencies, which may still have loose version specifiers, there is only very little control.

To solve this reproducibility problem, a package-lock.json file can be used -- a package lock file pinpoints the resolved versions of all dependencies and transitive dependencies making it possible to reproduce the exact same deployment elsewhere.

For example, for the NiJS package with the following package.json configuration:


{
  "name": "nijs",
  "version": "0.0.25",
  "description": "An internal DSL for the Nix package manager in JavaScript",
  "bin": {
    "nijs-build": "./bin/nijs-build.js",
    "nijs-execute": "./bin/nijs-execute.js"
  },
  "main": "./lib/nijs",
  "dependencies": {
    "optparse": ">= 1.0.3",
    "slasp": "0.0.4"
  },
  "devDependencies": {
    "jsdoc": "*"
  }
}

NPM may produce the following partial package-lock.json file:


{
  "name": "nijs",
  "version": "0.0.25",
  "lockfileVersion": 1,
  "requires": true,
  "dependencies": {
    "optparse": {
      "version": "1.0.5",
      "resolved": "https://registry.npmjs.org/optparse/-/optparse-1.0.5.tgz",
      "integrity": "sha1-dedallBmEescZbqJAY/wipgeLBY="
    },
    "requizzle": {
      "version": "0.2.1",
      "resolved": "https://registry.npmjs.org/requizzle/-/requizzle-0.2.1.tgz",
      "integrity": "sha1-aUPDUwxNmn5G8c3dUcFY/GcM294=",
      "dev": true,
      "requires": {
        "underscore": "1.6.0"
      },
      "dependencies": {
        "underscore": {
          "version": "1.6.0",
          "resolved": "https://registry.npmjs.org/underscore/-/underscore-1.6.0.tgz",
          "integrity": "sha1-izixDKze9jM3uLJOT/htRa6lKag=",
          "dev": true
        }
      }
    },
    ...
  }
}

The above lock file pinpoints all dependencies and development dependencies including transitive dependencies to exact versions, including the locations where they can be obtained from and integrity hash codes that can be used to validate them.

The lock file can also be used to derive the entire structure of the node_modules/ folder in which all dependencies are stored. The top level dependencies property captures all packages that reside in the project's node_modules/ folder. The dependencies property of each dependency captures all packages that reside in a dependency's node_modules/ folder.
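For the partial lock file shown earlier, the derived structure would roughly look like this (my own illustration, limited to the packages listed above):

node_modules/
├── optparse/
└── requizzle/
    └── node_modules/
        └── underscore/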
If NPM 5.0 is used and no package-lock.json is present in a project, it will automatically generate one.

Substituting dependencies


As mentioned in an earlier blog post, the most important technique to make Nix-NPM integration work is substituting NPM's dependency management activities that conflict with Nix's dependency management -- Nix is much more strict in handling dependencies (e.g. it uses hash codes derived from the build inputs to identify a package, as opposed to a name and version number).

Furthermore, in Nix build environments network access is restricted to prevent unknown artifacts from influencing the outcome of a build. Only so-called fixed output derivations, whose output hashes should be known in advance (so that Nix can verify their integrity), are allowed to obtain artifacts from external sources.

To substitute NPM's dependency management, populating the node_modules/ folder ourselves with all required dependencies and substituting certain version specifiers, such as Git URLs, used to suffice. Unfortunately, with the newest NPM this substitution process no longer works. When running the following command in a Nix builder environment:


$ npm --offline install ...

The NPM package manager is forced to work in offline mode consulting its content-addressable cache for the retrieval of external artifacts. If NPM needs to consult an external resource, it throws an error.

Despite the fact that all dependencies are present in the node_modules/ folder, deployment fails with the following error message:


npm ERR! code ENOTCACHED
npm ERR! request to https://registry.npmjs.org/optparse failed: cache mode is 'only-if-cached' but no cached response available.

At first sight, the error message suggests that NPM always requires the dependencies to reside in the content-addressable cache to prevent it from downloading it from external sites. However, when we use NPM outside a Nix builder environment, wipe the cache, and perform an offline installation, it does seem to work properly:


$ npm install
$ rm -rf ~/.npm/_cacache
$ npm --offline install

Further experimentation reveals that NPM augments the package.json configuration files of all dependencies with additional metadata fields prefixed by an underscore (_):


{
  "_from": "optparse@>= 1.0.3",
  "_id": "optparse@1.0.5",
  "_inBundle": false,
  "_integrity": "sha1-dedallBmEescZbqJAY/wipgeLBY=",
  "_location": "/optparse",
  "_phantomChildren": {},
  "_requested": {
    "type": "range",
    "registry": true,
    "raw": "optparse@>= 1.0.3",
    "name": "optparse",
    "escapedName": "optparse",
    "rawSpec": ">= 1.0.3",
    "saveSpec": null,
    "fetchSpec": ">= 1.0.3"
  },
  "_requiredBy": [
    "/"
  ],
  "_resolved": "https://registry.npmjs.org/optparse/-/optparse-1.0.5.tgz",
  "_shasum": "75e75a96506611eb1c65ba89018ff08a981e2c16",
  "_spec": "optparse@>= 1.0.3",
  "_where": "/home/sander/teststuff/nijs",
  "name": "optparse",
  "version": "1.0.5",
  ...
}

It turns out that when the _integrity property in a package.json configuration matches the integrity field of the dependency in the lock file, NPM will not attempt to reinstall it.

To summarize, the problem can be solved in Nix builder environments by running a script that augments the dependencies' package.json configuration files with _integrity fields whose values are taken from the package-lock.json file.

For Git repository dependency specifiers, there seems to be an additional requirement -- it also seems to require the _resolved field to be set to the URL of the repository.
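A rough sketch of this augmentation for a single dependency, using jq (the script is illustrative; node2nix implements its own pinpointing logic):

# Copy the integrity hash of optparse from the lock file into the
# installed copy's package.json as the _integrity field.
integrity=$(jq -r '.dependencies.optparse.integrity' package-lock.json)
jq --arg i "$integrity" '. + { _integrity: $i }' \
  node_modules/optparse/package.json > package.json.new
mv package.json.new node_modules/optparse/package.json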

Reconstructing package lock files


The fact that we have discovered how to bypass the cache in a Nix builder environment makes it possible to fix the integration with the latest NPM. However, one of the limitations of this approach is that it only works for projects that have a package-lock.json file included.

Since lock files are still a relatively new concept, many NPM projects (in particular older projects that are not frequently updated) may not have a lock file included. As a result, their deployments will still fail.

Fortunately, we can reconstruct a minimal lock file from the project's package.json configuration and compose dependencies objects by traversing the package.json configurations inside the node_modules/ directory hierarchy.

The only attributes that cannot be immediately derived are the integrity fields containing hashes that are used for validation. It seems that we can bypass the integrity check by providing a dummy hash, such as:


"integrity": "sha1-000000000000000000000000000=",

NPM does not seem to object when it encounters these dummy hashes allowing us to deploy projects with a reconstructed package-lock.json file. The solution is a very ugly hack, but it seems to work.

Generating Nix expressions from lock files


As explained earlier, lock files pinpoint the exact versions of all dependencies and transitive dependencies and describe the structure of the entire dependency graph.

Instead of simulating NPM's dependency resolution algorithm, we can also use the data provided by the lock files to generate Nix expressions. Lock files appear to contain most of the data we need -- the URLs/locations of the external artifacts and integrity hashes that we can use for validation.

Using lock files for generation offers the following advantages:

  • We no longer need to simulate NPM's dependency resolution algorithm. Despite my best efforts and fairly good results, it is hard to truly make it 100% identical to NPM's. When using a lock file, the dependency graph is already given, making deployment results much more accurate.
  • We no longer need to consult external resources to resolve versions and compute hashes making the generation process much faster. The only exception seems to be Git repositories -- Nix needs to know the output hash of the clone whereas for NPM the revision hash suffices. When we encounter a Git dependency, we still need to download it and compute the output hash.

Another minor technical challenge is the integrity hashes -- in NPM lock files, integrity hashes are in base-64 notation, whereas Nix uses hexadecimal notation or its own custom base-32 notation. We need to convert the NPM integrity hashes to a notation that Nix understands.
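For example, the base-64 payload of optparse's integrity hash decodes to exactly the _shasum value we saw earlier, which can be verified with standard shell tools:

$ echo "dedallBmEescZbqJAY/wipgeLBY=" | base64 -d | xxd -p
75e75a96506611eb1c65ba89018ff08a981e2c16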

Unfortunately, lock files can only be used in development projects. It appears that packages that are installed directly from the NPM registry, e.g. end-user packages that are installed globally through npm install -g, never include a package lock file. (It even seems that the NPM registry blacklists lock files when a package is published in the registry.)

For this reason, we still need to keep our own implementation of the dependency resolution algorithm.

Usage


By adding a script that augments the dependencies' package.json configuration files with _integrity fields and by optionally reconstructing a package-lock.json file, NPM integration with Nix has been restored.

Using the new NPM 5.x features is straightforward. The following command can be used to generate Nix expressions for a development project with a lock file:


$ node2nix -8 -l package-lock.json

The above command will directly generate Nix expressions from the package lock file, resulting in a much faster generation process.

When a development project does not ship with a lock file, you can use the following command-line instruction:


$ node2nix -8

The generator will use its own implementation of NPM's dependency resolution algorithm. When deploying the package, the builder will reconstruct a dummy lock file to allow the deployment to succeed.

In addition to development projects, it is also possible to install end-user software, by providing a JSON file (e.g. pkgs.json) that defines an array of dependency specifiers:


[
"nijs"
, { "node2nix": "1.5.0" }
]

A Node.js 8 compatible expression can be generated as follows:


$ node2nix -8 -i pkgs.json

Discussion


The approach described in this blog post is not the first attempt to fix NPM 5.x integration. In my first attempt, I tried populating NPM's content-addressable cache in the Nix builder environment with artifacts that were obtained by the Nix package manager and forcing NPM to work in offline mode.

NPM exposes its download and cache-related functionality as a set of reusable APIs. For downloading packages from the NPM registry, pacote can be used. For downloading external artifacts through the HTTP protocol make-fetch-happen can be used. Both APIs are built on top of the content-addressable cache that can be controlled through the lower-level cacache API.

The real difficulty is that neither the high-level NPM APIs nor the npm cache command-line instruction work with local directories or local files -- they will only add artifacts to the cache if they come from a remote location. I have partially built my own API on top of cacache to populate the NPM cache with locally stored artifacts pretending that they were fetched from a remote location.

Although I had some basic functionality supported, it turned out to be much more complicated and time-consuming to get all functionality implemented.

Furthermore, the NPM authors never promised that these APIs are stable, so the implementation may break at some point in time. As a result, I have decided to look for another approach.

Availability


I just released node2nix version 1.5.0 with NPM 5.x support. It can be obtained from the NPM registry, GitHub, or directly from the Nixpkgs repository.

by Sander van der Burg (noreply@blogger.com) at December 19, 2017 09:26 PM