NixOS Planet

October 13, 2019

Matthew Bauer

Improved performance in Nixpkgs

1 Avoiding subshells

A common complaint about using Nixpkgs is that things can become slow when you have lots of dependencies. Build inputs are processed in Bash, which tends to be pretty hard to make performant. Bash doesn’t give us any way to loop through dependencies in parallel, so we end up with pretty slow Bash. Luckily, someone has found ways to speed this up with some clever tricks in the setup.sh script.

1.1 Pull request

Albert Safin (@xzfc on GitHub) made an excellent PR that promises to improve performance for all users of Nixpkgs. The PR is available at PR #69131. The basic idea is to avoid invoking “subshells” in Bash. A subshell is basically anything that uses $(cmd ...). Each subshell requires forking a new process, which has a constant time cost that ends up being ~2ms. This isn’t much in isolation, but it adds up in big loops.

Subshells are usually used in Bash because they are convenient and easy to reason about. It’s easy to understand how a subshell works as it’s just substituting the result of one command into another’s arguments. We don’t usually care about the performance cost of subshells. In the hot path of Nixpkgs’ setup.sh, however, it’s pretty important to squeeze every bit of performance we can.
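You can get a rough feel for this cost yourself. A minimal sketch to paste into a Bash shell (the absolute times will vary by machine; the gap between the two loops is what matters):

time for i in {1..1000}; do x="$(echo hi)"; done   # forks a subshell per iteration
time for i in {1..1000}; do x="hi"; done           # plain assignment, no forks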

A few interesting changes were required to make this work. I’ll go through and document what they are. More information can be found in the Bash manual.

diff --git a/pkgs/stdenv/generic/setup.sh b/pkgs/stdenv/generic/setup.sh
index 326a60676a26..60067a4051de 100644
--- a/pkgs/stdenv/generic/setup.sh
+++ b/pkgs/stdenv/generic/setup.sh
@@ -98,7 +98,7 @@ _callImplicitHook() {
 # hooks exits the hook, not the caller. Also will only pass args if
 # command can take them
 _eval() {
-    if [ "$(type -t "$1")" = function ]; then
+    if declare -F "$1" > /dev/null 2>&1; then
         set +u
         "$@" # including args
     else

The first change is pretty easy to understand. It just replaces the type call with a declare call, using an exit code in place of stdout. Unfortunately, declare is a Bashism that is not available in all POSIX shells. It has never been well defined whether Bashisms can be used in Nixpkgs, but we now require Nixpkgs to be sourced with Bash 4+.
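Both tests below detect that a name refers to a shell function, but only the first forks. A minimal sketch for a Bash 4+ shell:

foo() { :; }
[ "$(type -t foo)" = function ] && echo "is a function (via a subshell)"
declare -F foo > /dev/null 2>&1 && echo "is a function (via an exit code)"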

diff --git a/pkgs/stdenv/generic/setup.sh b/pkgs/stdenv/generic/setup.sh
index 60067a4051de..7e7f8739845b 100644
--- a/pkgs/stdenv/generic/setup.sh
+++ b/pkgs/stdenv/generic/setup.sh
@@ -403,6 +403,7 @@ findInputs() {
     # The current package's host and target offset together
     # provide a <=-preserving homomorphism from the relative
     # offsets to current offset
+    local -i mapOffsetResult
     function mapOffset() {
         local -ri inputOffset="$1"
         if (( "$inputOffset" <= 0 )); then
@@ -410,7 +411,7 @@ findInputs() {
         else
             local -ri outputOffset="$inputOffset - 1 + $targetOffset"
         fi
-        echo "$outputOffset"
+        mapOffsetResult="$outputOffset"
     }

     # Host offset relative to that of the package whose immediate
@@ -422,8 +423,8 @@ findInputs() {

         # Host offset relative to the package currently being
         # built---as absolute an offset as will be used.
-        local -i hostOffsetNext
-        hostOffsetNext="$(mapOffset relHostOffset)"
+        mapOffset relHostOffset
+        local -i hostOffsetNext="$mapOffsetResult"

         # Ensure we're in bounds relative to the package currently
         # being built.
@@ -441,8 +442,8 @@ findInputs() {

             # Target offset relative to the package currently being
             # built.
-            local -i targetOffsetNext
-            targetOffsetNext="$(mapOffset relTargetOffset)"
+            mapOffset relTargetOffset
+            local -i targetOffsetNext="$mapOffsetResult"

             # Once again, ensure we're in bounds relative to the
             # package currently being built.

Similarly, this change makes mapOffset write its result to the variable mapOffsetResult instead of printing it to stdout, avoiding the subshell. Less functional, but more performant!
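Here is the same pattern in miniature, with hypothetical names: the caller reads a well-known variable after the call instead of capturing stdout.

double_slow() { echo $(( $1 * 2 )); }     # caller must fork: y="$(double_slow 21)"
double_fast() { result=$(( $1 * 2 )); }   # caller reads $result afterwards, no fork

double_fast 21
echo "$result"   # prints 42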

diff --git a/pkgs/stdenv/generic/setup.sh b/pkgs/stdenv/generic/setup.sh
index 7e7f8739845b..e25ea735a93c 100644
--- a/pkgs/stdenv/generic/setup.sh
+++ b/pkgs/stdenv/generic/setup.sh
@@ -73,21 +73,18 @@ _callImplicitHook() {
     set -u
     local def="$1"
     local hookName="$2"
-    case "$(type -t "$hookName")" in
-        (function|alias|builtin)
-            set +u
-            "$hookName";;
-        (file)
-            set +u
-            source "$hookName";;
-        (keyword) :;;
-        (*) if [ -z "${!hookName:-}" ]; then
-                return "$def";
-            else
-                set +u
-                eval "${!hookName}"
-            fi;;
-    esac
+    if declare -F "$hookName" > /dev/null; then
+        set +u
+        "$hookName"
+    elif type -p "$hookName" > /dev/null; then
+        set +u
+        source "$hookName"
+    elif [ -n "${!hookName:-}" ]; then
+        set +u
+        eval "${!hookName}"
+    else
+        return "$def"
+    fi
     # `_eval` expects hook to need nounset disable and leave it
     # disabled anyways, so Ok to to delegate. The alternative of a
     # return trap is no good because it would affect nested returns.

This change replaces the type -t command with calls to specific Bash builtins. declare -F tells us if the hook is a function, type -p tells us if hookName is a file, and the -n test (on ${!hookName:-}, an indirect expansion) tells us whether the variable named by hookName is non-empty. Again, this introduces a Bashism.

In the worst case, this does replace one case with multiple if branches. But since most hooks are functions, most of the time this ends up being a single if.
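The subtlest piece here is ${!hookName}, Bash’s indirect expansion: it expands to the value of the variable whose name is stored in hookName. A tiny sketch with made-up names:

postHook='echo running the post hook'   # a hook defined as a plain variable
hookName=postHook
eval "${!hookName}"                     # expands to the contents of $postHook, then runs it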

diff --git a/pkgs/stdenv/generic/setup.sh b/pkgs/stdenv/generic/setup.sh
index e25ea735a93c..ea550a6d534b 100644
--- a/pkgs/stdenv/generic/setup.sh
+++ b/pkgs/stdenv/generic/setup.sh
@@ -449,7 +449,8 @@ findInputs() {
             [[ -f "$pkg/nix-support/$file" ]] || continue

             local pkgNext
-            for pkgNext in $(< "$pkg/nix-support/$file"); do
+            read -r -d '' pkgNext < "$pkg/nix-support/$file" || true
+            for pkgNext in $pkgNext; do
                 findInputs "$pkgNext" "$hostOffsetNext" "$targetOffsetNext"
             done
         done

This change replaces the $(< ) call with a read call. This is a little surprising at first, since read is given an empty delimiter '' instead of a newline: with -d '', read slurps the whole file into pkgNext in one go (returning non-zero at end of file, hence the || true), and the unquoted $pkgNext in the for loop then word-splits it. This replaces one Bashism, $(< ), with another, -d, and the result gets rid of a remaining subshell usage.
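A self-contained sketch of the trick, using a throwaway file:

printf 'pkg-a pkg-b\npkg-c\n' > /tmp/example-inputs
read -r -d '' contents < /tmp/example-inputs || true   # slurp to EOF; no NUL delimiter is found, so read returns non-zero
for word in $contents; do                              # unquoted: word-split on whitespace and newlines
    echo "found: $word"
done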

diff --git a/pkgs/build-support/bintools-wrapper/setup-hook.sh b/pkgs/build-support/bintools-wrapper/setup-hook.sh
index f65b792485a0..27d3e6ad5120 100644
--- a/pkgs/build-support/bintools-wrapper/setup-hook.sh
+++ b/pkgs/build-support/bintools-wrapper/setup-hook.sh
@@ -61,9 +61,8 @@ do
     if
         PATH=$_PATH type -p "@targetPrefix@${cmd}" > /dev/null
     then
-        upper_case="$(echo "$cmd" | tr "[:lower:]" "[:upper:]")"
-        export "${role_pre}${upper_case}=@targetPrefix@${cmd}";
-        export "${upper_case}${role_post}=@targetPrefix@${cmd}";
+        export "${role_pre}${cmd^^}=@targetPrefix@${cmd}";
+        export "${cmd^^}${role_post}=@targetPrefix@${cmd}";
     fi
 done

This replaces a call to tr with a use of ^^. ${parameter^^pattern} is a Bash 4 feature that lets you upper-case a string without calling out to tr.
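Both lines below produce the same string, but the first costs two forks (the subshell and tr) while the second is handled entirely inside Bash:

cmd=ld
echo "$(echo "$cmd" | tr "[:lower:]" "[:upper:]")"   # LD, via a subshell and a pipe
echo "${cmd^^}"                                      # LD, pure Bash 4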

diff --git a/pkgs/build-support/bintools-wrapper/setup-hook.sh b/pkgs/build-support/bintools-wrapper/setup-hook.sh
index 27d3e6ad5120..2e15fa95c794 100644
--- a/pkgs/build-support/bintools-wrapper/setup-hook.sh
+++ b/pkgs/build-support/bintools-wrapper/setup-hook.sh
@@ -24,7 +24,8 @@ bintoolsWrapper_addLDVars () {
         # Python and Haskell packages often only have directories like $out/lib/ghc-8.4.3/ or
         # $out/lib/python3.6/, so having them in LDFLAGS just makes the linker search unnecessary
         # directories and bloats the size of the environment variable space.
-        if [[ -n "$(echo $1/lib/lib*)" ]]; then
+        local -a glob=( $1/lib/lib* )
+        if [ "${#glob[*]}" -gt 0 ]; then
             export NIX_${role_pre}LDFLAGS+=" -L$1/lib"
         fi
     fi

Here, we are checking whether any files matching $1/lib/lib* exist, using a glob. It originally used a subshell to check whether the result was empty, but this change replaces that with an array and the Bash ${#parameter} length operation.
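A minimal sketch of the same check, assuming the nullglob option is enabled (as it is in stdenv’s setup.sh) so that a non-matching glob expands to an empty array rather than the literal pattern:

shopt -s nullglob
libs=( /usr/lib/lib* )   # an array of matches, possibly empty
if [ "${#libs[*]}" -gt 0 ]; then
    echo "found ${#libs[*]} libraries in /usr/lib"
fi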

diff --git a/pkgs/stdenv/generic/setup.sh b/pkgs/stdenv/generic/setup.sh
index 311292169ecd..326a60676a26 100644
--- a/pkgs/stdenv/generic/setup.sh
+++ b/pkgs/stdenv/generic/setup.sh
@@ -17,7 +17,8 @@ fi
 # code). The hooks for <hookName> are the shell function or variable
 # <hookName>, and the values of the shell array ‘<hookName>Hooks’.
 runHook() {
-    local oldOpts="$(shopt -po nounset)"
+    local oldOpts="-u"
+    shopt -qo nounset || oldOpts="+u"
     set -u # May be called from elsewhere, so do `set -u`.

     local hookName="$1"
@@ -32,7 +33,7 @@ runHook() {
         set -u # To balance `_eval`
     done

-    eval "${oldOpts}"
+    set "$oldOpts"
     return 0
 }

@@ -40,7 +41,8 @@ runHook() {
 # Run all hooks with the specified name, until one succeeds (returns a
 # zero exit code). If none succeed, return a non-zero exit code.
 runOneHook() {
-    local oldOpts="$(shopt -po nounset)"
+    local oldOpts="-u"
+    shopt -qo nounset || oldOpts="+u"
     set -u # May be called from elsewhere, so do `set -u`.

     local hookName="$1"
@@ -57,7 +59,7 @@ runOneHook() {
         set -u # To balance `_eval`
     done

-    eval "${oldOpts}"
+    set "$oldOpts"
     return "$ret"
 }

@@ -500,10 +502,11 @@ activatePackage() {
     (( "$hostOffset" <= "$targetOffset" )) || exit -1

     if [ -f "$pkg" ]; then
-        local oldOpts="$(shopt -po nounset)"
+        local oldOpts="-u"
+        shopt -qo nounset || oldOpts="+u"
         set +u
         source "$pkg"
-        eval "$oldOpts"
+        set "$oldOpts"
     fi

     # Only dependencies whose host platform is guaranteed to match the
@@ -522,10 +525,11 @@ activatePackage() {
     fi

     if [[ -f "$pkg/nix-support/setup-hook" ]]; then
-        local oldOpts="$(shopt -po nounset)"
+        local oldOpts="-u"
+        shopt -qo nounset || oldOpts="+u"
         set +u
         source "$pkg/nix-support/setup-hook"
-        eval "$oldOpts"
+        set "$oldOpts"
     fi
 }

@@ -1273,17 +1277,19 @@ showPhaseHeader() {

 genericBuild() {
     if [ -f "${buildCommandPath:-}" ]; then
-        local oldOpts="$(shopt -po nounset)"
+        local oldOpts="-u"
+        shopt -qo nounset || oldOpts="+u"
         set +u
         source "$buildCommandPath"
-        eval "$oldOpts"
+        set "$oldOpts"
         return
     fi
     if [ -n "${buildCommand:-}" ]; then
-        local oldOpts="$(shopt -po nounset)"
+        local oldOpts="-u"
+        shopt -qo nounset || oldOpts="+u"
         set +u
         eval "$buildCommand"
-        eval "$oldOpts"
+        set "$oldOpts"
         return
     fi

@@ -1313,10 +1319,11 @@ genericBuild() {

         # Evaluate the variable named $curPhase if it exists, otherwise the
         # function named $curPhase.
-        local oldOpts="$(shopt -po nounset)"
+        local oldOpts="-u"
+        shopt -qo nounset || oldOpts="+u"
         set +u
         eval "${!curPhase:-$curPhase}"
-        eval "$oldOpts"
+        set "$oldOpts"

         if [ "$curPhase" = unpackPhase ]; then
             cd "${sourceRoot:-.}"

This last change is maybe the trickiest. $(shopt -po nounset) is used to get the old value of the nounset option. The nounset setting tells Bash to treat unset variables as an error; it is enabled temporarily for phases and hooks to enforce this property, then reset to its previous value after we finish evaling the current phase or hook. To avoid the subshell here, the stdout output of shopt -po is replaced with the exit code of shopt -qo nounset: if shopt -qo nounset fails, we set oldOpts to +u, otherwise it keeps its assumed value of -u.
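Condensed into a stand-alone sketch, the save/restore dance looks like this:

oldOpts="-u"
shopt -qo nounset || oldOpts="+u"   # query the option via exit code, not stdout

set +u                              # temporarily tolerate unset variables
echo "${possiblyUnset:-fallback}"
set "$oldOpts"                      # restore whichever setting was in effect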

This commit was first merged on September 20, but it takes a while for changes like this to hit master. Today (October 13) it was finally merged into master in 4e6826a, so we can finally see the benefits from it!

1.2 Benchmarking

Hyperfine makes it easy to compare differences in timings. You can install it locally with:

$ nix-env -iA nixpkgs.hyperfine

Here are some of the results:

$ hyperfine --warmup 3 \
  'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p stdenv --run :' \
  'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p stdenv --run :'
Benchmark #1: nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p stdenv --run :
  Time (mean ± σ):     436.4 ms ±   8.5 ms    [User: 324.7 ms, System: 107.8 ms]
  Range (min … max):   430.8 ms … 459.6 ms    10 runs

Benchmark #2: nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p stdenv --run :
  Time (mean ± σ):     244.5 ms ±   2.3 ms    [User: 190.7 ms, System: 34.2 ms]
  Range (min … max):   241.8 ms … 248.3 ms    12 runs

Summary
  'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p stdenv --run :' ran
    1.79 ± 0.04 times faster than 'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p stdenv --run :'
$ hyperfine --warmup 3 \
  'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p i3.buildInputs --run :' \
  'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p i3.buildInputs --run :'
Benchmark #1: nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p i3.buildInputs --run :
  Time (mean ± σ):      3.428 s ±  0.015 s    [User: 2.489 s, System: 1.081 s]
  Range (min … max):    3.404 s …  3.453 s    10 runs

Benchmark #2: nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p i3.buildInputs --run :
  Time (mean ± σ):     873.4 ms ±  12.2 ms    [User: 714.7 ms, System: 89.3 ms]
  Range (min … max):   861.5 ms … 906.4 ms    10 runs

Summary
  'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p i3.buildInputs --run :' ran
    3.92 ± 0.06 times faster than 'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p i3.buildInputs --run :'
$ hyperfine --warmup 3 \
  'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p inkscape.buildInputs --run :' \
  'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p inkscape.buildInputs --run :'
Benchmark #1: nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p inkscape.buildInputs --run :
  Time (mean ± σ):      4.380 s ±  0.024 s    [User: 3.155 s, System: 1.443 s]
  Range (min … max):    4.339 s …  4.409 s    10 runs

Benchmark #2: nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p inkscape.buildInputs --run :
  Time (mean ± σ):      1.007 s ±  0.011 s    [User: 826.7 ms, System: 114.2 ms]
  Range (min … max):    0.995 s …  1.026 s    10 runs

Summary
  'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p inkscape.buildInputs --run :' ran
    4.35 ± 0.05 times faster than 'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p inkscape.buildInputs --run :'

Try running these commands yourself, and compare the results.

1.3 Results

Avoiding subshells cuts the time this processing takes by up to 4x. The exact multiplier depends on precisely how many inputs we are processing. It’s a pretty impressive improvement, and it comes at no added cost. These kinds of easy wins in performance are pretty rare, and worth celebrating!

October 13, 2019 12:00 AM

October 07, 2019

Hercules Labs

Agent 0.5.0 with Terraform support and simpler configuration

Last week we released agent version 0.5.0. The main theme for the release is ease of installation: running an agent should be as simple as possible.

Follow the getting started guide to set up your first agent.

If you’re using the module (NixOS, NixOps, nix-darwin), the update is entirely self-explanatory. Otherwise, check the release notes.

Trusted-user

The agent now relies on being a trusted-user to the Nix daemon. The agent does not allow projects to execute arbitrary Nix store operations anyway, and this may even improve security, since it simplifies configuration and secrets handling.
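On a non-NixOS machine this amounts to a single line of Nix daemon configuration; a sketch, assuming the agent runs as a user named hercules-ci-agent:

# /etc/nix/nix.conf
trusted-users = hercules-ci-agent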

The security model for the agent is simple at this point: only build git refs from your repository. This way, third-party contributors cannot run arbitrary code on your agent system; only contributors with write access to the repo can.

Speaking of trust, we’ll share some details about securely doing CI for Open Source with Bors soon!

October 07, 2019 12:00 AM

October 03, 2019

Craige McWhirter

Installing LineageOS 16 on a Samsung SM-T710 (gts28wifi)

  1. Check the prerequisites
  2. Backup any files you want to keep
  3. Download LineageOS ROM and optional GAPPS package
  4. Copy LineageOS image & additional packages to the SM-T710
  5. Boot into recovery mode
  6. Wipe the existing installation.
  7. Format the device
  8. Install LineageOS ROM and other optional ROMs.

0 - Check the Prerequisites

  • The device already has the latest TWRP installed.
  • Android debugging is enabled on the device
  • ADB is installed on your workstation.
  • You have a suitably configured SD card as a back up handy.

I use this android.nix to ensure my NixOS environment has the prerequisites installed and configured for its side of the process.

1 - Backup any Files You Want to Keep

I like to use adb to pull the files from the device. There are other methods available too.

$ adb pull /sdcard/MyFolder ./Downloads/MyDevice/

Usage of adb is documented at Android Debug Bridge

2 - Download LineageOS ROM and optional GAPPS package

I downloaded lineage-16.0-20191001-UNOFFICIAL-gts28wifi.zip from gts28wifi.

I also downloaded Open GApps ARM, nano to enable Google Apps.

I could have also downloaded and installed LineageOS addonsu and addonsu-remove but opted not to at this point.

3 - Copy LineageOS image & additional packages to the SM-T710

I use adb to copy the files across:

$ adb push ./lineage-16.0-20191001-UNOFFICIAL-gts28wifi.zip /sdcard/
./lineage-16.0-20191001-UNOFFICIAL-gts28wifi.zip: 1 file pushed. 12.1 MB/s (408677035 bytes in 32.263s)
$ adb push ./open_gapps-arm-9.0-nano-20190405.zip /sdcard/
./open_gapps-arm-9.0-nano-20190405.zip: 1 file pushed. 11.1 MB/s (185790181 bytes in 15.948s)

I also copy both to the SD card at this point as the SM-T710 is an awful device to work with and in many random cases will not work with ADB. When this happens, I fall back to the SD card.

4 - Boot into recovery mode

I power the device off, then power it back into recovery mode by holding down [home]+[volume up]+[power].
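If the button combination is uncooperative, adb can usually get you there too, assuming the device is still answering over ADB:

$ adb reboot recovery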

5 - Wipe the existing installation

Press Wipe then Advanced Wipe.

Select:

  • Dalvik / Art Cache
  • System
  • Data
  • Cache

Swipe the Swipe to Wipe slider at the bottom of the screen.

Press Back to return to the Advanced Wipe screen.

Press the triangular "back" button once to return to the Wipe screen.

6 - Format the device

Press Format Data.

Type yes and press the blue check mark at the bottom-right corner to commence the format process.

Press Back to return to the Advanced Wipe screen.

Press the triangular "back" button twice to return to the main screen.

7 - Install LineageOS ROM and other optional ROMs

Press Install, select the images you wish to install and swipe to make it go.

Reboot when it's completed and you should be off and running with a brand new LineageOS 16 on this tablet.

by Craige McWhirter at October 03, 2019 11:04 PM

September 30, 2019

Hercules Labs

Post-mortem on recent Cachix downtime

On 6th of September, Cachix experienced 3 hours of downtime.

We’d like to let you know exactly what happened and what measures we have taken to prevent such an event from happening in the future.

Timeline (UTC)

  • 2019-09-06 17:15:05: cachix.org down alert triggered
  • 2019-09-06 20:06:00: Domen gets out of MuniHac dinner in the basement and receives the alert
  • 2019-09-06 20:19:00: Domen restarts server process
  • 2019-09-06 20:19:38: cachix.org is back up

Observations

The backend logs were full of:

Sep 06 17:02:34 cachix-production.cachix cachix-server[6488]: Network.Socket.recvBuf: resource vanished (Connection reset by peer)

And:

(ConnectionFailure Network.BSD.getProtocolByName: does not exist (no such protocol name: tcp)))

Most importantly, there were no logs from the time the downtime was triggered until the restart:

Sep 06 17:15:48 cachix-production.cachix cachix-server[6488]: Network.Socket.recvBuf: resource vanished (Connection reset by peer)
Sep 06 20:19:26 cachix-production.cachix systemd[1]: Stopping cachix server service...

Our monitoring revealed an increased number of nginx connections and file handles (the times are in CEST - UTC+2):

File handles and nginx connections

Conclusions

  • The main cause of the downtime was a hung backend. The underlying cause was not identified due to lack of information.

  • The backend was failing some requests due to reaching the limit of 1024 file descriptors.

  • The duration of the downtime was due to the absence of a telephone signal.

What we’ve already done

  • To avoid any hangs in the future, we have configured the systemd watchdog, which automatically restarts the service if the backend doesn’t respond for 3 seconds (see the sketch after this list). In doing so we released the warp-systemd Haskell library, which integrates Warp (the Haskell web server) with systemd features such as socket activation and the watchdog.

  • We’ve increased file descriptors limit to 8192.

  • We’ve set up a Cachix status page so that you can check the state of the service.

  • For better visibility into errors like the file handle exhaustion, we’ve configured sentry.io error reporting. In doing so we released katip-raven for seamless Sentry integration of structured logging, which we also use to log Warp (Haskell web server) exceptions.

  • Robert is now fully onboarded to be able to resolve any Cachix issues.

  • We’ve made a number of improvements to the performance of Cachix. Just tuning GHC RTS settings shows a 15% speed-up in common usage.
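Once a watchdog is configured, systemd can report it back, which makes for a quick sanity check on the host. A sketch (the unit name cachix-server is an assumption):

$ systemctl show cachix-server --property=WatchdogUSec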

Future work

Summary

We’re confident such issues shouldn’t affect production anymore, and since the availability of Cachix is our utmost priority, we are going to make sure to complete the rest of the work in a timely manner.


What we do

Automated hosted infrastructure for Nix, reliable and reproducible developer tooling, to speed up adoption and lower integration cost. We offer Continuous Integration and Binary Caches.

September 30, 2019 12:00 AM

September 26, 2019

Craige McWhirter

Setting Up Wireless Networking with NixOS

NixOS Gears by Craige McWhirter

The current NixOS Manual is a little sparse on details for the different options to configure wireless networking. The version in master is a little better but still ambiguous. I've made a pull request to resolve this, but in the interim this post documents how to configure a number of wireless scenarios with NixOS.

If you're going to use NetworkManager, this is not for you. This is for those of us who want reproducible configurations.

To enable a wireless connection with no spaces or special characters in the name that uses a pre-shared key, you first need to generate the raw PSK:

$ wpa_passphrase exampleSSID abcd1234
network={
        ssid="exampleSSID"
        #psk="abcd1234"
        psk=46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d
}

Now you can add the following stanza to your configuration.nix to enable wireless networking and this specific wireless connection:

networking.wireless = {
  enable = true;
  userControlled.enable = true;
  networks = {
    exampleSSID = {
      pskRaw = "46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d";
    };
  };
};

If you had another WiFi connection that had spaces and/or special characters in the name, you would configure it like this:

networking.wireless = {
  enable = true;
  userControlled.enable = true;
  networks = {
    "example's SSID" = {
      pskRaw = "46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d";
    };
  };
};

If you need to connect to a hidden network, you would do it like this:

networking.wireless = {
  enable = true;
  userControlled.enable = true;
  networks = {
    myHiddenSSID = {
      hidden = true;
      pskRaw = "46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d";
    };
  };
};

The final scenario that I have, is connecting to open SSIDs that have some kind of secondary method (like a login in web page) for authentication of connections:

networking.wireless = {
  enable = true;
  userControlled.enable = true;
  networks = {
    FreeWiFi = {};
  };
};
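Because these examples set userControlled.enable = true, you can also inspect and switch networks at runtime with wpa_cli, provided your user is in the group the option grants control to (wheel by default):

$ wpa_cli status
$ wpa_cli list_networks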

This is all fairly straightforward, but it was non-trivial to find the answers to.

by Craige McWhirter at September 26, 2019 09:38 PM

September 18, 2019

Craige McWhirter

Deploying TT-RSS on NixOS

NixOS Gears by Craige McWhirter

Deploying a vanilla Tiny Tiny RSS server on NixOS via NixOps is fairly straightforward.

My preferred method is to craft a tt-rss.nix file that describes the configuration of the TT-RSS server.

tt-rss.nix:

{ config, pkgs, lib, ... }:

{

  services.tt-rss = {
    enable = true;                                # Enable TT-RSS
    database = {                                  # Configure the database
      type = "pgsql";                             # Database type
      passwordFile = "/run/keys/tt-rss-dbpass";   # Where to find the password
    };
    email = {
      fromAddress = "news@mydomain";              # Address for outgoing email
      fromName = "News at mydomain";              # Display name for outgoing email
    };
    selfUrlPath = "https://news.mydomain/";       # Root web URL
    virtualHost = "news.mydomain";                # Setup a virtualhost
  };

  services.postgresql = {
    enable = true;                # Ensure postgresql is enabled
    authentication = ''
      local tt_rss all ident map=tt_rss-users
    '';
    identMap =                    # Map the tt-rss user to postgresql
      ''
        tt_rss-users tt_rss tt_rss
      '';
  };

  services.nginx = {
    enable = true;                                          # Enable Nginx
    recommendedGzipSettings = true;
    recommendedOptimisation = true;
    recommendedProxySettings = true;
    recommendedTlsSettings = true;
    virtualHosts."news.mydomain" = {                        # TT-RSS hostname
      enableACME = true;                                    # Use ACME certs
      forceSSL = true;                                      # Force SSL
    };
  };

  security.acme.certs = {
      "news.mydomain".email = "email@mydomain";
  };

}

This line from the above file should stand out:

              passwordFile = "/run/keys/tt-rss-dbpass";   # Where to find the password

The passwordFile option requires that you use a secrets file with NixOps.

Where does that file come from? It's pulled from a secrets.nix file (example) that for this example, could look like this:

secrets.nix:

{ config, pkgs, ... }:

{
  deployment.keys = {
    # Database key for TT-RSS
    tt-rss-dbpass = {
      text        = "vaetohH{u9Veegh3caechish";   # Password, generated using pwgen -yB 24
      user        = "tt_rss";                     # User to own the key file
      group       = "wheel";                      # Group to own the key file
      permissions = "0640";                       # Key file permissions
    };

  };
}

The file's path /run/keys/tt-rss-dbpass is determined by the attribute names: deployment.keys determines the initial path /run/keys, and the next element, tt-rss-dbpass, is a descriptive name provided by the stanza's author that both describes the key's use and provides the final file name.
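After a deploy, you can confirm the key landed where expected. A sketch using NixOps' ssh wrapper, with the placeholder deployment and host names used below:

$ nixops ssh -d MyDeployment myhost -- ls -l /run/keys/tt-rss-dbpass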

Now that we have described the TT-RSS service in tt-rss_for_NixOps.nix and the required credentials in secrets.nix we need to pull it all together for deployment. We achieve that in this case by importing both these files into our existing host definition:

myhost.nix:

    {
      myhost =
        { config, pkgs, lib, ... }:

        {

          imports =
            [
              ./secrets.nix                               # Import our secrets
              ./servers/tt-rss_for_NixOps.nix              # Import TT-RSS description
            ];

          deployment.targetHost = "192.168.132.123";   # Target's IP address

          networking.hostName = "myhost";              # Target's hostname.
        };
    }

To deploy TT-RSS to your NixOps managed host, you merely run the deploy command for your already configured host and deployment, which would look like this:

    $ nixops deploy -d MyDeployment --include myhost

You should now have a running TT-RSS server and be able to login with the default admin user (admin: password).

In my nixos-examples repo I have a servers directory with some example files and a README with information and instructions. You can use two of the files to generate a TT-RSS VM to take a quick poke around. There is also an example of how you can deploy TT-RSS in production using NixOps, as per this post.

If you wish to dig a little deeper, I have my production deployment over at mio-ops.

by Craige McWhirter at September 18, 2019 04:58 AM

September 11, 2019

Craige McWhirter

Deploying and Configuring Vim on NixOS

NixOS Gears by Craige McWhirter

I had a need to deploy vim and my particular preferred configuration both system-wide and across multiple systems (via NixOps).

I started by creating a file named vim.nix that would be imported into either /etc/nixos/configuration.nix or an appropriate NixOps Nix file. This example is a stub that shows a number of common configuration items:

vim.nix:

with import <nixpkgs> {};

vim_configurable.customize {
  name = "vim";   # Specifies the vim binary name.
  # Below you can specify what usually goes into `~/.vimrc`
  vimrcConfig.customRC = ''
    " Preferred global default settings:
    set number                    " Enable line numbers by default
    set background=dark           " Set the default background to dark or light
    set smartindent               " Automatically insert extra level of indentation
    set tabstop=4                 " Default tabstop
    set shiftwidth=4              " Default indent spacing
    set expandtab                 " Expand [TABS] to spaces
    syntax enable                 " Enable syntax highlighting
    colorscheme solarized         " Set the default colour scheme
set t_Co=256                  " Use 256 colors in vim
    set spell spelllang=en_au     " Default spell checking language
    hi clear SpellBad             " Clear any unwanted default settings
    hi SpellBad cterm=underline   " Set the spell checking highlight style
    hi SpellBad ctermbg=NONE      " Set the spell checking highlight background
    match ErrorMsg '\s\+$'        "

    let g:airline_powerline_fonts = 1   " Use powerline fonts
    let g:airline_theme='solarized'     " Set the airline theme

    set laststatus=2   " Set up the status line so it's coloured and always on

    " Add more settings below
  '';
  # store your plugins in Vim packages
  vimrcConfig.packages.myVimPackage = with pkgs.vimPlugins; {
    start = [               # Plugins loaded on launch
      airline               # Lean & mean status/tabline for vim that's light as air
      solarized             # Solarized colours for Vim
      vim-airline-themes    # Collection of themes for airline
      vim-nix               # Support for writing Nix expressions in vim
    ];
    # manually loadable by calling `:packadd $plugin-name`
    # opt = [ phpCompletion elm-vim ];
    # To automatically load a plugin when opening a filetype, add vimrc lines like:
    # autocmd FileType php :packadd phpCompletion
  };
}

I then needed to import this file into my system packages stanza:

  environment = {
    systemPackages = with pkgs; [
      someOtherPackages   # Normal package listing
      (
        import ./vim.nix
      )
    ];
  };

This will then install and configure Vim as you've defined it.
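Since vim.nix evaluates to a single derivation, you can also build and try it without installing anything system-wide; a quick sketch:

$ nix-build vim.nix -o custom-vim
$ ./custom-vim/bin/vim --version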

If you'd like to give this build a run in a non-production space, I've written vim_vm.nix with which you can build a VM, ssh into afterwards and test the Vim configuration:

$ nix-build '<nixpkgs/nixos>' -A vm --arg configuration ./vim_vm.nix
...
$ export QEMU_OPTS="-m 4192"
$ export QEMU_NET_OPTS="hostfwd=tcp::18080-:80,hostfwd=tcp::10022-:22"
$ ./result/bin/run-vim-vm-vm

Then, from another terminal:

$ ssh nixos@localhost -p 10022

And you should be in a freshly baked NixOS VM with your Vim config ready to be used.

There's an always current example of my production Vim configuration in my mio-ops repo.

by Craige McWhirter at September 11, 2019 11:14 PM

September 10, 2019

Munich NixOS Meetup

NixOS 19.09 Release Sprint


The next stable NixOS release 19.09 'Loris' is going to happen at the end of September. The goal of this sprint is to fix critical issues before the release. Some maintainers will be attending and are available for guidance and feedback.

• Blocking issues: https://github.com/Ni...

• All 19.09 issues: https://github.com/Ni...

• ZHF issue: https://github.com/Ni...

The sprint will be held at the Mayflower office in Munich on Friday starting at 11:00. Drinks will be provided.

München 80687 - Germany

Friday, September 13 at 11:00 AM


https://www.meetup.com/Munich-NixOS-Meetup/events/264400018/

September 10, 2019 03:57 PM

Craige McWhirter

Deploying Gitea on NixOS

NixOS Gitea by Craige McWhirter

I've been using GitLab for years but recently opted to switch to Gitea, primarily because of timing and I was looking for something more lightweight, not because of any particular problems with GitLab.

To deploy Gitea via NixOps I chose to craft a Nix file (example) that would be included in a host definition. The linked and below definition provides a deployment of Gitea, using Postgres, Nginx, ACME certificates and ReStructured Text rendering with syntax highlighting.

version-management/gitea_for_NixOps.nix:

    { config, pkgs, lib, ... }:

    {

      services.gitea = {
        enable = true;                               # Enable Gitea
        appName = "MyDomain: Gitea Service";         # Give the site a name
        database = {
          type = "postgres";                         # Database type
          passwordFile = "/run/keys/gitea-dbpass";   # Where to find the password
        };
        domain = "source.mydomain.tld";              # Domain name
        rootUrl = "https://source.mydomain.tld/";    # Root web URL
        httpPort = 3001;                             # Provided unique port
        extraConfig = let
          docutils =
            pkgs.python37.withPackages (ps: with ps; [
              docutils                               # Provides rendering of ReStructured Text files
              pygments                               # Provides syntax highlighting
          ]);
        in ''
          [mailer]
          ENABLED = true
          FROM = "gitea@mydomain.tld"
          [service]
          REGISTER_EMAIL_CONFIRM = true
          [markup.restructuredtext]
          ENABLED = true
          FILE_EXTENSIONS = .rst
          RENDER_COMMAND = ${docutils}/bin/rst2html.py
          IS_INPUT_FILE = false
        '';
      };

      services.postgresql = {
        enable = true;                # Ensure postgresql is enabled
        authentication = ''
          local gitea all ident map=gitea-users
        '';
        identMap =                    # Map the gitea user to postgresql
          ''
            gitea-users gitea gitea
          '';
      };

      services.nginx = {
        enable = true;                                          # Enable Nginx
        recommendedGzipSettings = true;
        recommendedOptimisation = true;
        recommendedProxySettings = true;
        recommendedTlsSettings = true;
        virtualHosts."source.mydomain.tld" = {                  # Gitea hostname
          enableACME = true;                                    # Use ACME certs
          forceSSL = true;                                      # Force SSL
          locations."/".proxyPass = "http://localhost:3001/";   # Proxy Gitea
        };
      };

      security.acme.certs = {
          "source.mydomain".email = "anEmail@mydomain.tld";
      };

    }

This line from the above file should stand out:

              passwordFile = "/run/keys/gitea-dbpass";   # Where to find the password

Where does that file come from? It's pulled from a secrets.nix file (example) that for this example, could look like this:

secrets.nix:

    { config, pkgs, ... }:

    {
      deployment.keys = {
        # An example set of keys to be used for the Gitea service's DB authentication
        gitea-dbpass = {
          text        = "uNgiakei+x>i7shuiwaeth3z";   # Password, generated using pwgen -yB 24
          user        = "gitea";                      # User to own the key file
          group       = "wheel";                      # Group to own the key file
          permissions = "0640";                       # Key file permissions
        };
      };
    }

The file's path /run/keys/gitea-dbpass is determined by the attribute names: deployment.keys determines the initial path /run/keys, and the next element, gitea-dbpass, is a descriptive name provided by the stanza's author that both describes the key's use and provides the final file name.

Now that we have described the Gitea service in gitea_for_NixOps.nix and the required credentials in secrets.nix we need to pull it all together for deployment. We achieve that in this case by importing both these files into our existing host definition:

myhost.nix:

    {
      myhost =
        { config, pkgs, lib, ... }:

        {

          imports =
            [
              ./secrets.nix                               # Import our secrets
          ./version-management/gitea_for_NixOps.nix   # Import Gitea
            ];

          deployment.targetHost = "192.168.132.123";   # Target's IP address

          networking.hostName = "myhost";              # Target's hostname.
        };
    }

To deploy Gitea to your NixOps managed host, you merely run the deploy command for your already configured host and deployment, which would look like this:

    $ nixops deploy -d MyDeployment --include myhost

You should now have a running Gitea server and be able to create an initial admin user.

In my nixos-examples repo I have a version-management directory with some example files and a README with information and instructions. You can use two of the files to generate a Gitea VM to take a quick poke around. There is also an example of how you can deploy Gitea in production using NixOps, as per this post.

If you wish to dig a little deeper, I have my production deployment over at mio-ops.

by Craige McWhirter at September 10, 2019 03:12 AM

September 03, 2019

Craige McWhirter

Replacing a NixOS Service with an Upstream Version

NixOS Hydra Gears by Craige McWhirter

It's fairly well documented how to replace a NixOS service in the stable channel with one from the unstable channel.

What if you need to build from an upstream branch that's not in either of stable or unstable channels? This is how I go about it, including building a VM in which to test the result.

I specifically wanted to test the new hydra-notify service, so to test that, I need to replace the existing Hydra module in nixpkgs with the one from upstream source. Start by checking out the hydra source:

$ git clone https://github.com/NixOS/hydra.git

We can configure Nix to replace the nixpkgs version of Hydra with a build from hydra/master.

You can see a completed example in hydra_notify.nix but the key points are that we need to disable Hydra in the standard Nix packages:

  disabledModules = [ "services/continuous-integration/hydra/default.nix" ];

as well as import the module definition from the Hydra source we downloaded:

  imports =
    [
      "/path/to/source/hydra/hydra-module.nix"
    ];

and we need to switch services.hydra to services.hydra-dev in two locations:

  networking.firewall.allowedTCPPorts = [ config.services.hydra-dev.port 80 443 ];

  services.hydra-dev = {
    ...
  };

With these three changes, we have swapped out the Hydra in nixpkgs for one to be built from the upstream source in hydra_notify.nix.

Next we need to build a configuration for our VM that uses the replaced Hydra module declared in hydra_notify.nix. This is hydra_vm.nix, which is a simple NixOS configuration, which importantly includes our replaced Hydra module:

  imports =
    [
      ./hydra_notify.nix
    ];

To give this a run yourself, check out nixos-examples and change to the services/hydra_upstream directory:

$ git clone https://code.mcwhirter.io/craige/nixos-examples.git
$ cd  nixos-examples/services/hydra_upstream

After updating the path to Hydra's source, we can then build the VM with:

$ nix-build '<nixpkgs/nixos>' -A vm --arg configuration ./hydra_vm.nix

Before launching the VM, I like to make sure that it is provided with enough RAM and that both Hydra's web UI and SSH are available, by exporting the below QEMU options:

$ export QEMU_OPTS="-m 4192"
$ export QEMU_NET_OPTS="hostfwd=tcp::10443-:443,hostfwd=tcp::10022-:22"

So now we're ready to launch the VM:

$ ./result/bin/run-hydra-notifications-vm

Once it has booted, you should be able to ssh nixos@localhost -p 10022 and hit the Hydra web UI at localhost:10443.

Once you've logged into the VM you can run systemctl status hydra-notify to check that you're running upstream Hydra.

by Craige McWhirter at September 03, 2019 12:16 AM

August 30, 2019

Hercules Labs

Native support for import-from-derivation

Today we are releasing a new feature we’ve been working on the last couple of weeks.

Generating Nix expressions

As a developer you often bump a dependency version or add a new dependency.

Every time your package files change, you need to regenerate the Nix expressions that describe how the project is built.

There are two ways to regenerate Nix expressions in that case:

  1. Outside the Nix domain, possibly with an automated script and commands like bundix, cabal2nix, yarn2nix. This quickly grows from a nuisance to a maintenance headache as your git repository grows in size due to generated artifacts. It requires special care when diffing, merging, etc.

  2. Let Nix generate Nix expressions during the build. Sounds simple, but it’s quite subtle.

Additionally, Nixpkgs builds forbid option (2), which leads to manual work.

As of today Hercules natively supports option (2), let’s dig into the subtleties.

Evaluation and realization

The Nix language describes how software is built, which happens in two phases.

The first phase is called evaluation:

Evaluation

Evaluation takes a Nix expression and produces a dependency tree of derivations.

A derivation is a set of instructions how to build software.

The second phase is called realization:

Realization

Realizing a derivation is the process of building. The builder is usually a shell script, although any executable can be specified.

Since a derivation describes all the necessary inputs, the result is guaranteed to be deterministic.

Derivations

This begs the question: why have an intermediate representation (derivations)? There are a couple of reasons:

  • Evaluation can include significant computation. It can range from a couple of seconds to, more typically, minutes, or even an hour for huge projects. We want to evaluate only once and then distribute derivations to multiple machines for speedup, realizing them as we traverse the graph of dependencies.

  • Evaluation can produce derivations that are built on different platforms or require some specific hardware. By copying the derivations to these machines, we don’t need to worry about running evaluation on those specific machines.

  • In case of a build failure, it allows the machine to retry immediately instead of re-evaluating again.

All in all, derivation files save us computation compared to evaluating more than once.

Interleaving evaluation and realization

Sometimes it’s worth mixing the two phases.

A build produces Nix expressions that we would now like to evaluate, but we’re already in the realization phase, so we have to:

  1. Evaluate to get the derivation that will output a Nix file
  2. Realize that derivation
  3. Continue evaluating by importing the derivation containing the Nix file
  4. Realize the final derivation set

This is called Import-From-Derivation, or IFD for short.

A minimal example

let
  pkgs = import <nixpkgs> {};
  getHello = pkgs.runCommand "get-hello.nix" {} ''
    # Call any command here to generate an expression. A simple example:
    echo 'pkgs: pkgs.hello' > $out
  '';
in import getHello pkgs

In the last line we’re importing from getHello, a Nix derivation that must be built before evaluation can continue with the pkgs: pkgs.hello expression in its output.
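If you save the snippet above as, say, ifd-example.nix (the file name is just for illustration), a plain nix-build shows the two phases interleaving: Nix first realizes get-hello.nix, then resumes evaluation with its contents, and finally realizes hello:

$ nix-build ifd-example.nix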

Haskell.nix example

haskell.nix is an alternative Haskell infrastructure for Nixpkgs.

Given a Haskell project with a Cabal file (Cabal is Haskell’s package manager), drop the following default.nix into the root of your repository:

let
  pkgs = import (import ./nix/sources.nix).nixpkgs {};
  haskell = import (import ./nix/sources.nix)."haskell.nix" { inherit pkgs; };
  plan = haskell.callCabalProjectToNix
              { index-state = "2019-08-26T00:00:00Z"; src = pkgs.lib.cleanSource ./.;};

  pkgSet = haskell.mkCabalProjectPkgSet {
    plan-pkgs = import plan;
    pkg-def-extras = [];
    modules = [];
  };
in pkgSet.config.hsPkgs.mypackage.components.all

Once you replace mypackage with the name from your Cabal file, your whole dependency tree is deterministic: the package index is pinned to a timestamp using index-state, and your local folder is pinned by its hash using ./..

Haskell.nix will generate all the expressions describing how to build each package on the fly, via import-from-derivation.

Native support in CI

Using different platforms (typically Linux and macOS) during IFD is one of the reasons why upstream forbids IFD, since their evaluator is running on Linux and it can’t build for macOS.

Our CI dispatches all builds during IFD back to our scheduler, so it’s able to dispatch those builds to either a specific platform or specific hardware.

IFD support is seamless. There’s nothing extra to configure.

In case of build errors during evaluation, the UI will show you all the details, including the build log:

IFD attribute error

In order to use IFD support you will need to upgrade to hercules-ci-agent-0.4.0.

Future work

Some Nix tools already embrace IFD, such as haskell.nix, yarn2nix (Node.js), pnpm2nix (Node.js) and opam2nix (OCaml).

We encourage more language tools to take advantage of this feature.

Currently Nix evaluation is single-threaded, and IFD evaluation blocks until the builds are done. We have some ideas to make IFD concurrent.

We believe this is a huge step forward to simplify day-to-day Nix development.

What we do

Automated hosted infrastructure for Nix, reliable and reproducible developer tooling, to speed up adoption and lower integration cost. We offer Continuous Integration and Binary Caches.

Updates

2019-09-08: Add opam2nix

August 30, 2019 12:00 AM

August 29, 2019

Craige McWhirter

NixOS Appears to be Always Building From Source

NixOS Gears by Craige McWhirter

One of the things that NixOS and Hydra make easy is running your own custom cache of packages. A number of projects and companies make use of this.

A NixOS or Nix user can then make use of these caches by adding them to nix.conf for Nix users or /etc/nixos/configuration.nix for NixOS users.

What most people will want, is for their devices to have access to both caches.

If you add the new cache "incorrectly", you may suddenly find your device building almost everything from source, as I did.

The default /etc/nix/nix.conf for NixOS users contains these lines:

substituters = https://cache.nixos.org
...
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY=

Many projects running custom caches will advise NixOS users to add a stanza like this to /etc/nixos/configuration.nix:

{
  nix = {
    binaryCaches = [
      "https://cache.my-project.org/"
    ];
    binaryCachePublicKeys = [
      "cache.my-project.org:j/Kb+r+tGeM+4YZH+ECfTr+b4OFViKHaciuIOHw1/DP="
    ];
  };
}

If you add this stanza to your NixOS configuration, you will end up with a nix.conf that looks like this:

...
substituters = https://cache.my-project.org/
...
trusted-public-keys = cache.my-project.org:j/Kb+r+tGeM+4YZH+ECfTr+b4OFViKHaciuIOHw1/DP=
...

This will result in your systems only pulling cached packages from that cache, and building everything else that's missing.

If you want to take advantage of what a custom cache is providing without losing the advantages of the primary NixOS cache, your stanza in configuration.nix needs to look like this:

{
  nix = {
    binaryCaches = [
      "https://cache.nixos.org"
      "https://cache.my-project.org/"
    ];
    binaryCachePublicKeys = [
      "cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY="
      "cache.my-project.org:j/Kb+r+tGeM+4YZH+ECfTr+b4OFViKHaciuIOHw1/DP="
    ];
  };
}

You will now get the benefit of both caches and your nix.conf will now look like:

...
substituters = https://cache.nixos.org https://cache.my-project.org/
...
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= cache.my-project.org:j/Kb+r+tGeM+4YZH+ECfTr+b4OFViKHaciuIOHw1/DP=
...

The order does not matter; I just feel more comfortable putting the cache I consider "primary" first. The precedence is actually determined by Nix, using the Priority field in the nix-cache-info file served by each Hydra cache:

$ curl https://cache.nixos.org/nix-cache-info
StoreDir: /nix/store
WantMassQuery: 1
Priority: 40
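A lower Priority value means the cache is consulted earlier, so you can compare your custom cache the same way (the URL is this post's placeholder):

$ curl https://cache.my-project.org/nix-cache-info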

If you were experiencing excessive building from source and your intention was to draw from two caches, this should resolve it for you.

by Craige McWhirter at August 29, 2019 11:51 PM

August 22, 2019

Hercules Labs

Pre-commit git hooks with Nix

pre-commit manages a set of hooks that are executed by git before committing code:

pre-commit.png

Common hooks range from static analysis or linting to source formatting.

Since we’re managing quite a few repositories, maintaining the duplicated definitions became a burden.

Hence we created:

nix-pre-commit-hooks

The goal is to manage these hooks with Nix and solve the following:

  • Simpler integration into Nix projects, doing the wiring up behind the scenes

  • Provide a low-overhead build of all the tooling available for the hooks to use (calling nix-shell for every check does bring some latency when committing)

  • Common package set of hooks for popular languages like Haskell, Elm, etc.

  • Two trivial Nix functions to run hooks as part of development and on your CI

Currently the following hooks are provided:

Nix

  • canonix: a Nix formatter (currently incomplete, requiring some manual formatting as well)

Haskell

Elm

Shell

We encourage everyone to contribute additional hooks.

Installation

See project’s README for latest up-to-date installation steps.


What we do

Automated hosted infrastructure for Nix, reliable and reproducible developer tooling, to speed up adoption and lower integration cost. We offer Continuous Integration and binary caches.

August 22, 2019 12:00 AM

August 21, 2019

Matthew Bauer

All the versions with Nix

1 Background

In Channel Changing with Nix, I described how to move between channels in Nix expressions. This provides an easy way to work with multiple versions of Nixpkgs. I was reminded of this post after seeing a comment by jolmg on Hacker News. The comment suggested we should have a way to use multiple versions of packages seamlessly together. It suggested that we should use commits to differentiate versions, but I suggested that stable channels would work much better.

So, as a follow up to my Channel Changing post, I want to show how you can use a quick Nix mockup to accomplish this. Like the previous post, it will come with a Nix snippet that you can try out yourself.

2 Code

So, what follows is some code that I wrote up that lets you find package versions in an easier way. It is also available for download at https://matthewbauer.us/generate-versions.nix.

{ channels ? [ "19.03" "18.09" "18.03" "17.09" "17.03"
         "16.09" "16.03" "15.09" "14.12" "14.04" "13.10" ]
, attrs ? builtins.attrNames (import <nixpkgs> {})
, system ? builtins.currentSystem
, args ? { inherit system; }
}: let

  getSet = channel:
    (import (builtins.fetchTarball "channel:nixos-${channel}") args).pkgs;

  getPkg = name: channel: let
    pkgs = getSet channel;
    pkg = pkgs.${name};
    version = (builtins.parseDrvName pkg.name).version;
  in if builtins.hasAttr name pkgs && pkg ? name then {
    name = version;
    value = pkg;
  } else null;

in builtins.listToAttrs (map (name: {
  inherit name;
  value = builtins.listToAttrs
    (builtins.filter (x: x != null)
      (map (getPkg name) channels));
}) attrs)

This Nix expression generates an index of each package from all 11 releases of Nixpkgs that have occurred since October 2013. For every package, each version that came with a release is included and put into a map. The map uses the version as a key and the package as its value, preferring the newer release when versions conflict.

This is all done lazily, because that’s how Nix works. Still, it will take a little while at first to evaluate because we need to parse all 11 releases! Remarkably, this expression uses only Nix builtins, and requires no special library function.

3 Usage

Working with this Nix expression is extremely interesting, and I’ve included some examples of how to work with it. They should all be usable on a Linux machine (or maybe macOS) with Nix installed.

3.1 Query package versions

You can query what package versions are available through Nix’s builtins.attrNames function. For example,

$ nix eval "(builtins.attrNames (import (builtins.fetchurl https://matthewbauer.us/generate-versions.nix) {}).emacs)"
[ "24.3" "24.4" "24.5" "25.3" "26.1" ]

This shows us that there are 5 versions of Emacs. This is kind of interesting because it means that there were at least 6 duplicate versions of Emacs between our release channels. Unfortunately, a few versions of Emacs are notably missing including Emacs 25.1 and Emacs 25.2. Emacs 24.2 was released almost a year before the first stable Nixpkgs release! As time goes on, we should collect more of these releases.

3.2 Running an old version

As shown above, there are 5 versions of Emacs available to us. We can run Emacs 24.3 with a fairly short command:

$ LC_ALL=C nix run "(import (builtins.fetchurl https://matthewbauer.us/generate-versions.nix) {}).emacs.\"24.3\"" -c emacs

LC_ALL=C is needed on Linux to avoid the old Glibc trying to load the newer, incompatible locales that may be included with your system. This is an unfortunate problem with Glibc including breaking changes between releases. It also makes me want to switch to Musl some time soon! I’ve also noticed some incompatibilities with GTK icons that appear to come from the gdk-pixbuf module. More investigation is needed on why this is the case.

This will not work on macOS because we did not have Emacs working on macOS back then! macOS users can try Emacs 25.3. This looks very similar to the above:

$ nix run "(import (builtins.fetchurl https://matthewbauer.us/generate-versions.nix) {}).emacs.\"25.3\"" -c emacs

3.3 Firefox

Another example using Firefox is pretty neat. The code is very similar to Emacs:

$ nix eval "(builtins.attrNames (import (builtins.fetchurl https://matthewbauer.us/generate-versions.nix) {}).firefox)"
[ "25.0.1" "34.0.5" "39.0.3" "45.0" "48.0.2" "51.0.1" "55.0.3" "59.0.2" "63.0.3" "66.0.3" "68.0.2" ]

We get all 11 releases with unique Firefox versions this time.

You can run Firefox 25.0.1 using this command:

$ LC_ALL=C nix run "(import (builtins.fetchurl https://matthewbauer.us/generate-versions.nix) {}).firefox.\"25.0.1\"" -c firefox

Amazing how much Firefox has changed since then!

3.4 Blender

Another example using Blender. The code is very similar to the two above:

$ nix eval "(builtins.attrNames (import (builtins.fetchurl https://matthewbauer.us/generate-versions.nix) {}).blender)"
[ "2.67" "2.70" "2.72b" "2.75a" "2.77a" "2.78c" "2.79" "2.79a" "2.79b" ]

You can run Blender 2.67 using this command:

$ LC_ALL=C nix run "(import (builtins.fetchurl https://matthewbauer.us/generate-versions.nix) {}).blender.\"2.67\"" -c blender

4 Rationale

The reason that channels work better than commits is that not every commit in Nixpkgs is guaranteed to work on its own. Some may be missing security patches or configuration changes, or worse, may just not work with other versions of packages. In addition, there are just too many commits to work with effectively. On the other hand, Nixpkgs releases stable channels every 6 months, and we have a long vetting process for ensuring the stabilized channel works well.

The main drawback of the 6-month channels is that we don’t have every released version of a package. If the version you want is missing in a release, you are out of luck. But the 6-month window tends to pick up a lot of packages, and we end up with almost every major version of popular software. My philosophy is that not all releases are worth keeping. Some contain critical security flaws, contain major bugs, or might not work well with other software. The 6-month window is good enough for me. Perhaps in the future we can increase the Nixpkgs release cadence to 3 months or 1 month, but the maintainers are not quite ready for that yet.

5 Conclusion

This has hopefully shown how Nix’s functional dependency model makes it very easy to switch between versions of packages. This is built into Nix, but you need some scripts to really use it well. Our 6-month release window is an arbitrary choice, but it tends to pick up a lot of useful versions along the way.

August 21, 2019 12:00 AM

August 20, 2019

Craige McWhirter

Installing Your First Hydra

NixOS Hydra Gears by Craige McWhirter

Hydra is a Nix-based continuous build system. My method for configuring a server to be a Hydra build server is to create a hydra.nix file like this:

# NixOps configuration for machines running Hydra

{ config, pkgs, lib, ... }:

{

  services.postfix = {
    enable = true;
    setSendmail = true;
  };

  services.postgresql = {
    enable = true;
    package = pkgs.postgresql;
    identMap =
      ''
        hydra-users hydra hydra
        hydra-users hydra-queue-runner hydra
        hydra-users hydra-www hydra
        hydra-users root postgres
        hydra-users postgres postgres
      '';
  };

  networking.firewall.allowedTCPPorts = [ config.services.hydra.port ];

  services.hydra = {
    enable = true;
    useSubstitutes = true;
    hydraURL = "https://my.website.org";
    notificationSender = "my.website.org";
    buildMachinesFiles = [];
    extraConfig = ''
      store_uri = file:///var/lib/hydra/cache?secret-key=/etc/nix/my.website.org/secret
      binary_cache_secret_key_file = /etc/nix/my.website.org/secret
      binary_cache_dir = /var/lib/hydra/cache
    '';
  };

  services.nginx = {
    enable = true;
    recommendedProxySettings = true;
    virtualHosts."my.website.org" = {
      forceSSL = true;
      enableACME = true;
      locations."/".proxyPass = "http://localhost:3000";
    };
  };

  security.acme.certs = {
      "my.website.org".email = "my.email@my.website.org";
  };

  systemd.services.hydra-manual-setup = {
    description = "Create Admin User for Hydra";
    serviceConfig.Type = "oneshot";
    serviceConfig.RemainAfterExit = true;
    wantedBy = [ "multi-user.target" ];
    requires = [ "hydra-init.service" ];
    after = [ "hydra-init.service" ];
    environment = builtins.removeAttrs (config.systemd.services.hydra-init.environment) ["PATH"];
    script = ''
      if [ ! -e ~hydra/.setup-is-complete ]; then
        # create signing keys
        /run/current-system/sw/bin/install -d -m 551 /etc/nix/my.website.org
        /run/current-system/sw/bin/nix-store --generate-binary-cache-key my.website.org /etc/nix/my.website.org/secret /etc/nix/my.website.org/public
        /run/current-system/sw/bin/chown -R hydra:hydra /etc/nix/my.website.org
        /run/current-system/sw/bin/chmod 440 /etc/nix/my.website.org/secret
        /run/current-system/sw/bin/chmod 444 /etc/nix/my.website.org/public
        # create cache
        /run/current-system/sw/bin/install -d -m 755 /var/lib/hydra/cache
        /run/current-system/sw/bin/chown -R hydra-queue-runner:hydra /var/lib/hydra/cache
        # done
        touch ~hydra/.setup-is-complete
      fi
    '';
  };
  nix.trustedUsers = ["hydra" "hydra-evaluator" "hydra-queue-runner"];
  nix.buildMachines = [
    {
      hostName = "localhost";
      systems = [ "x86_64-linux" "i686-linux" ];
      maxJobs = 6;
      # for building VirtualBox VMs as build artifacts, you might need other
      # features depending on what you are doing
      supportedFeatures = [ ];
    }
  ];
}

From there it can be imported in your configuration.nix or NixOps files like this:

{ config, pkgs, ... }:

{

  imports =
    [
      ./hydra.nix
    ];

...
}

To deploy Hydra, you will then need to either run nixos-rebuild switch on the server or use nixops deploy -d my.network.
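Once it's up, you can create the initial admin user on the server with Hydra's own tooling; a sketch where the user details are placeholders:

$ su - hydra
$ hydra-create-user admin --full-name 'Hydra Admin' \
    --email-address my.email@my.website.org --password changeme --role admin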

The result of this deployment, via NixOps can be seen at hydra.mcwhirter.io.

by Craige McWhirter at August 20, 2019 09:47 AM