NixOS Planet

August 22, 2016

Sander van der Burg

An extended self-adaptive deployment framework for service-oriented systems


Five years ago, while I was still in academia, I built an extension framework around Disnix (named: Dynamic Disnix) that enables self-adaptive redeployment of service-oriented systems. It was an interesting application as it demonstrated the full potential of service-oriented systems having their deployment processes automated with Disnix.

Moreover, the corresponding research paper was accepted for presentation at the SEAMS 2011 symposium (co-located with ICSE 2011) in Honolulu (Hawaii), which was (obviously!) a nice place to visit. :-)

Disnix's development progressed at a very slow pace for a while after I left academia, but since the end of 2014 I have made some significant improvements. In contrast to the basic toolset, I did not improve Dynamic Disnix -- apart from the addition of a port assigner tool, I only kept the implementation in sync with Disnix's API changes to prevent it from breaking.

Recently, I have used Dynamic Disnix to give a couple of demos. As a result, I have improved some of its aspects a bit. For example, some basic documentation has been added. Furthermore, I have extended the framework's architecture to take a couple of new deployment planning aspects into account.

Disnix


For readers unfamiliar with Disnix: the primary purpose of the basic Disnix toolset is executing deployment processes of service-oriented systems. Deployments are driven by three kinds of declarative specifications:

  • The services model captures the services (distributed units of deployments) of which a system consists, their build/configuration properties and their inter-dependencies (dependencies on other services that may have to be reached through a network link).
  • The infrastructure model describes the target machines where services can be deployed to and their characteristics.
  • The distribution model maps services in the services model to machines in the infrastructure model (a minimal example follows below).
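
For example, a minimal distribution model maps each service to one or more machines from the infrastructure model (the service and machine names below are hypothetical; they merely illustrate the structure):


{infrastructure}:

{
  StaffTracker = [ infrastructure.test1 ];
  staff = [ infrastructure.test1 infrastructure.test2 ];
}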

By writing instances of the above specifications and running disnix-env:

$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix

Disnix executes all activities to get the system deployed, such as building the services from source code, distributing them to the target machines in the network and activating them. Changing any of these models and running disnix-env again causes the system to be upgraded. In case of an upgrade, Disnix only executes the required activities, making the process more efficient than deploying a system from scratch.

"Static" Disnix


So, what makes Disnix's deployment approach static? When looking at software systems from a very abstract point of view, they are supposed to meet a collection of functional and non-functional requirements. A change in a network of machines affects the ability of a service-oriented system to meet them, as the services of which these systems consist are typically distributed.

If a system relies on a critical component that has only one instance deployed and the machine that hosts it crashes, the functional requirements can no longer be met. However, even if we have multiple instances of the same components giving better guarantees that no functional requirements will be broken, important non-functional requirements may be affected, such as the responsiveness of a system.

We may also want to optimize a system's non-functional properties, such as its responsiveness, by adding more machines to the network that offer more system resources, or by changing the configuration of an existing machine, e.g. upgrading the amount of available RAM.

The basic Disnix toolset is considered static, because all these events require manual modifications to the Disnix models for redeployment, so that a system can meet its requirements under the changed conditions.

For simple systems, manual reconfiguration is still doable, but with one hundred services, one hundred machines or a high frequency of events (or a combination of the three), it becomes too complex and time consuming.

For example, when a machine has been added or removed, we must rewrite the distribution model in such a way that all services are deployed to at least one machine and that none of them are mapped to machines that are not capable or allowed to host them. Furthermore, with microservices (one of their traits is that they typically embed HTTP servers), we must typically bind them to unique TCP ports that do not conflict with system services or other services deployed by Disnix. None of these configuration aspects are trivial for large service-oriented systems.

Dynamic Disnix


Dynamic Disnix extends Disnix's architecture with additional models and tools to cope with the dynamism of service-oriented systems. In the latest version, I have extended its architecture (which is based on the old architecture described in the SEAMS 2011 paper and the corresponding blog post):


The above diagram shows the structure of the dydisnix-self-adapt tool. The ovals denote command-line utilities, the rectangles denote files and the arrows denote files as inputs or outputs. As with the basic Disnix toolset, dydisnix-self-adapt is composed of command-line utilities each being responsible for executing an individual deployment activity:

  • On the top right, the infrastructure generator is shown that captures the configurations of the machines in the network and generates an infrastructure model from it. Currently, two different kinds of generators can be used: disnix-capture-infra (included with the basic toolset) that uses a bootstrap infrastructure model with connectivity settings, or dydisnix-geninfra-avahi that uses multicast DNS (through Avahi) to retrieve the machines' properties.
  • dydisnix-augment-infra is responsible for augmenting the generated infrastructure model with additional settings, such as passwords. It is typically undesired to automatically publish privacy-sensitive settings over a network using insecure connection protocols.
  • disnix-snapshot can optionally be used to preemptively capture the state of all stateful services (services with the property deployState = true; in the services model; a sketch follows after this list) so that their state can be restored if a machine crashes or disappears. This tool is new in the extended architecture.
  • dydisnix-gendist generates a mapping of services to machines based on technical and non-functional properties defined in the services and infrastructure models.
  • dydisnix-port-assign assigns unique TCP port numbers to previously undeployed services and retains assigned TCP ports in a previous deployment for optimization purposes. This tool is new in the extended architecture.
  • disnix-env redeploys the system with the (statically) provided services model and the dynamically generated infrastructure and distribution models.
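
For example, a stateful database service whose state should be captured by disnix-snapshot could be declared in the services model along these lines (a sketch; the service name and the customPkgs.staff package are hypothetical):


staff = {
  name = "staff";
  pkg = customPkgs.staff;
  dependsOn = {};
  type = "mysql-database";
  deployState = true;
};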

An example usage scenario


When a system has been configured to be (statically) deployed with Disnix (such as the infamous StaffTracker example cases that come in several variants), we need to add a few additional deployment specifications to make it dynamically deployable.

Auto discovering the infrastructure model


First, we must configure the machines in such a way that they publish their own configurations. The basic toolset comes with a primitive solution called: disnix-capture-infra that does not require any additional configuration -- it consults the Disnix service that is installed on every target machine.

By providing a simple bootstrap infrastructure model (e.g. infrastructure-bootstrap.nix) that only provides connectivity settings:


{
  test1.properties.hostname = "test1";
  test2.properties.hostname = "test2";
}

and running disnix-capture-infra, we can obtain the machines' configuration properties:


$ disnix-capture-infra infrastructure-bootstrap.nix

By setting the following environment variable, we can configure Dynamic Disnix to use the above command to capture the machines' infrastructure properties:


$ export DYDISNIX_GENINFRA="disnix-capture-infra infrastructure-bootstrap.nix"

Alternatively, there is the Dynamic Disnix Avahi publisher that is more powerful, but at the same time much more experimental and unstable than disnix-capture-infra.

When using Avahi, each machine uses multicast DNS (mDNS) to publish their own configuration properties. As a result, no bootstrap infrastructure model is needed. Simply gathering the data published by the machines on the same subnet suffices.

When using NixOS on a target machine, the Avahi publisher can be enabled by cloning the dydisnix-avahi Git repository and adding the following lines to /etc/nixos/configuration.nix:


imports = [ /home/sander/dydisnix/dydisnix-module.nix ];
services.dydisnixAvahiTest.enable = true;

To allow the coordinator machine to capture the configurations that the target machines publish, we must enable the Avahi system service. In NixOS, this can be done by adding the following lines to /etc/nixos/configuration.nix:


services.avahi.enable = true;

When running the following command-line instruction, the machines' configurations can be captured:


$ dydisnix-geninfra-avahi

Likewise, when setting the following environment variable:


$ export DYDISNIX_GENINFRA=dydisnix-geninfra-avahi

Dynamic Disnix uses the Avahi-discovery service to obtain an infrastructure model.

Writing an augmentation model


The Java version of StaffTracker, for example, uses MySQL to store data. Typically, it is undesired to publish the authentication credentials over the network (in particular with mDNS, which is quite insecure). We can add these properties to the captured infrastructure model with the following augmentation model (augment.nix):


{infrastructure, lib}:

lib.mapAttrs (targetName: target:
  target // (if target ? containers && target.containers ? mysql-database then {
    containers = target.containers // {
      mysql-database = target.containers.mysql-database // {
        mysqlUsername = "root";
        mysqlPassword = "secret";
      };
    };
  } else {})
) infrastructure

The above model implements a very simple password policy, by iterating over each target machine in the discovered infrastructure model and adding the same mysqlUsername and mysqlPassword property when it encounters a MySQL container service.

Mapping services to machines


In addition to a services model and a dynamically generated (and optionally augmented) infrastructure model, we must map each service to a machine in the network using a configured strategy. A strategy can be programmed in a QoS model, such as:


{ services
, infrastructure
, initialDistribution
, previousDistribution
, filters
, lib
}:

let
  distribution1 = filters.mapAttrOnList {
    inherit services infrastructure;
    distribution = initialDistribution;
    serviceProperty = "type";
    targetPropertyList = "supportedTypes";
  };

  distribution2 = filters.divideRoundRobin {
    distribution = distribution1;
  };
in
distribution2

The above QoS model implements the following policy:

  • First, it takes the initialDistribution model that is a cartesian product of all services and machines. It filters the machines on the relationship between the type attribute and the list of supportedTypes, which ensures that services will only be mapped to machines that can host them. For example, a MySQL database should only be deployed to a machine that has a MySQL DBMS installed (a sketch of such a machine configuration follows after this list).
  • Second, it divides the services over the candidate machines using the round robin strategy. That is, it divides services over the candidate target machines in equal proportions and in circular order.
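
For the first filter to work, each target machine in the infrastructure model must advertise the service types it can host, e.g. (a sketch extending the bootstrap model shown earlier; the exact placement of the supportedTypes attribute may differ):


test1 = {
  properties.hostname = "test1";
  supportedTypes = [ "tomcat-webapplication" "mysql-database" ];
};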

Dynamically deploying a system


With the services model, augmentation model and QoS model, we can dynamically deploy the StaffTracker system (without manually specifying the target machines and their properties, and how to map the services to machines):


$ dydisnix-env -s services.nix -a augment.nix -q qos.nix

The Node.js variant of the StaffTracker example requires unique TCP ports for each web service and web application. By providing the --ports parameter we can include a port assignment specification that is internally managed by dydisnix-port-assign:


$ dydisnix-env -s services.nix -a augment.nix -q qos.nix --ports ports.nix

When providing the --ports parameter, the specification gets automatically updated when ports need to be reassigned.

Making a system self-adaptable from a deployment perspective


With dydisnix-self-adapt we can make a service-oriented system self-adaptable from a deployment perspective -- this tool continuously monitors the network for changes, and runs a redeployment when a change has been detected:


$ dydisnix-self-adapt -s services.nix -a augment.nix -q qos.nix

For example, when shutting down a machine in the network, you will notice that Dynamic Disnix automatically generates a new distribution and redeploys the system to get the missing services back.

Likewise, by adding the --ports parameter, you can include port assignments as part of the deployment process:


$ dydisnix-self-adapt -s services.nix -a augment.nix -q qos.nix --ports ports.nix

By adding the --snapshot parameter, we can preemptively capture the state of all stateful services (services annotated with deployState = true; in the services model), such as the databases in which the records are stored. If a machine hosting databases disappears, Disnix can restore the state of the databases elsewhere.


$ dydisnix-self-adapt -s services.nix -a augment.nix -q qos.nix --snapshot

Keep in mind that this feature uses Disnix's snapshotting facilities, which may not be the best solution to manage state, in particular with large databases.

Conclusion


In this blog post, I have described the extended architecture of Dynamic Disnix. In comparison to the previous version, a port assigner has been added that automatically provides unique port numbers to services, and the disnix-snapshot utility has been integrated to preemptively capture the state of services, so that it can be restored if a machine disappears from the network.

Despite the fact that Dynamic Disnix now has some basic documentation and other usability improvements, it remains a very experimental prototype that should not be used for any production purposes. In contrast to the basic toolset, I have only used it for testing/demo purposes and I still have no real-life production experience with it. :-)

Moreover, I still have no plans to officially release it yet as many aspects still need to be improved/optimized. For now, you have to obtain the Dynamic Disnix source code from Github and use the included release.nix expression to install it. Furthermore, you probably need a lot of courage. :-)

Finally, I have extended the Java and Node.js versions of the StaffTracker example as well as the virtual hosts example with simple augmentation and QoS models.

by Sander van der Burg (noreply@blogger.com) at August 22, 2016 09:46 PM

August 10, 2016

Joachim Schiele

tuebix

motivation

managing a 'call for papers' can be a lot of work. the tuebix cfp-software was created following the KISS principle.

tuebix

we held a linuxtag at the university of tübingen called tuebix and we had a talk about nixos and a workshop about nixops.

source

concept

the cfp-software backend is written in golang. the frontend was done in materializecss.

the workflow:

  • user fills the form-fields and gets instant feedback because of javascript checks
  • after 'submit' it will generate a json document and send it via email to a mailinglist
  • the mailinglist is monitored and people are contacted manually afterwards

after the cfp is over, one can use jq to process the data for creating a schedule.
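
for example, assuming each submission was saved to a json file with 'speaker', 'title' and 'duration' fields (hypothetical field names; the actual form fields may differ), a rough schedule could be extracted like this:

# -r prints raw strings; \(.field) is jq's string interpolation
cat submissions/*.json | jq -r '"\(.speaker): \(.title) (\(.duration) min)"' > schedule.txt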

hosting

security-wise it would be good to create a custom user for hosting, which was not done here. a sketch of what that could look like follows.
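
a hypothetical snippet for /etc/nixos/configuration.nix (untested here; the systemd job below would then set User = "cfp"; instead of a login user):

# unprivileged user dedicated to the cfp service
users.extraUsers.cfp = {
  description = "cfp service user";
  home = "/home/cfp";
  createHome = true;
};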

/home/joachim/cfp.sh

#!/bin/sh
source /etc/profile
cd /home/joachim/cfp
# restart the server automatically whenever it exits
nix-shell --command "while true; do go run server.go ; done"

systemd job

systemd.services.cfp = {
  wantedBy = [ "multi-user.target" ];
  after = [ "network.target" ];
  serviceConfig = {
    #Type = "forking";
    User = "joachim";
    ExecStart = ''/home/joachim/cfp.sh'';
    ExecStop = ''
    '';
  };
};

reverse proxy

... 
# nixcloud.io (https)
{
  hostName = "nixcloud.io";
  serverAliases = [ "nixcloud.io" "www.nixcloud.io" ];

  documentRoot = "/www/nixcloud.io/";
  enableSSL = true;
  sslServerCert = "/ssl/nixcloud.io-2015.crt";
  sslServerKey = "/ssl/nixcloud.io-2015.key";
  sslServerChain = "/ssl/nixcloud.io-2015-intermediata.der";

  extraConfig = ''
    ...
    RewriteRule ^/cfp$ /cfp/ [R]
    ProxyPass /cfp/ http://127.0.0.1:3000/ retry=0
    ProxyPassReverse /cfp/ http://127.0.0.1:3000/
    ...
  '';
...

summary

using nix-shell it was easy to develop the software and to deploy it to the server. all dependencies are contained.
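
the default.nix used with nix-shell is not shown in this post; a minimal hypothetical variant providing the go toolchain could look like this:

with import <nixpkgs> {};

stdenv.mkDerivation {
  name = "cfp-env";
  # the go compiler is the only build-time dependency of server.go here
  buildInputs = [ go ];
}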

for further questions drop me an email: js@lastlog.de

by qknight at August 10, 2016 04:35 PM

July 27, 2016

Flying Circus

Vulnix v1.0 release

Intro

Back in May I introduced you to the development of vulnix, a tool initially written to find out whether a system might be affected by a security vulnerability. It does this by matching the derivation name against the product and version specified in the CPE language of the so-called CVEs (Common Vulnerabilities and Exposures). In the meantime we introduced the tool to the community at the Berlin NixOS Meetup and got some wonderful input on the directions in which we might extend the features. We sprinted the next two days to improve the code quality and broaden the feature set.

What we got as a result is best demonstrated by showing the usage instructions.

* Is my NixOS system installation affected?

Invoke:  vulnix --system

* Is my user environment (~/.nix-profile) affected?

Invoke:  vulnix --user

* Is my project affected?

Invoke after nix-build:  vulnix ./result

Installation (manual)

With the help of Rok and his recently re-written pypi2nix, packaging vulnix for NixOS was a breeze and the installation procedure is as simple as:

git clone https://github.com/flyingcircusio/vulnix.git
cd ./vulnix
nix-build

For a full set of options go for vulnix --help

Platform

From the next release on, vulnix will be part of our platform code and will check periodically whether the NixOS based VMs are affected or not. In that case operations get informed and can develop counter-measures like introspecting the CVEs, applying patches, or declining the hits as false positives, for instance if a hit is simply coincidental or not relevant in the context of the Flying Circus platform.


by Maksim Bronsky at July 27, 2016 09:21 PM

July 21, 2016

Joachim Schiele

xmlmirror

motivation

we are happy to announce the initial release of a useful new tool called xmlmirror. as the name more or less spells out, xmlmirror is an XML webeditor with schema validation, based on webforms and implemented with codemirror. xmlmirror further uses a library called Fast-XML-Lint which uses libxml2 for schema verification and which is compiled with emscripten. or in layman's terms: a web application that really helps you to create complex XML documents from scratch, as well as fix existing documents that are broken.

live demo / source code

features

more details

selenium

unit testing was implemented using selenium 2.53:

nix-shell -p python35Packages.selenium firefox-bin --command "python3 selenium_test.py"

it works like this:

  1. opens a specially crafted html document: schemainfoCreator-test.html in a webbrowser
  2. executes it and looks for "OK" or hits a 10 second timeout

selenium_test.py:

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

ff = webdriver.Firefox()
# open the specially crafted test document
ff.get("schemainfoCreator-test.html")

assert "schemaInfo unit-test" in ff.title

try:
    # wait up to 10 seconds until an element with id "OK" appears
    element = WebDriverWait(ff, 10).until(EC.presence_of_element_located((By.ID, "OK")))
finally:
    ff.quit()

closure-compiler

the google closure compiler was used to ensure strict typing even though it is javascript:

closure_compiler/jcc schemainfoCreator.js

hint: it was no joy to use this tooling due to a lack of documentation and examples.

fastXmlLint.c

it took a bit of time to get into the internals of libxml2 and the antique API documentation is more confusing than helpful. anyway, two interesting results:

  1. even though xmllint can parse xml documents with a multi-document relax-ng schema it can't be used to parse a multi-document relax-ng schema itself, see discussion

  2. ltrace is your best friend in reverse-engineering shared object/library usage:

    for instance, running:

    ltrace -f xmllint --relaxng html5-rng/xhtml.rng test_fail1.html

    would yield:

    ...
    xmlSAXDefaultVersion(2, 0x40bca9, 116, 112) = 2
    getenv("XMLLINT_INDENT") = nil
    xmlGetExternalEntityLoader(0x7fff26fdca5d, 0x40bd36, 1, 76) = 0x7f969b6c1160
    xmlSetExternalEntityLoader(0x407660, 0x40bd36, 1, 76) = 0x7f969b6c1160
    xmlLineNumbersDefault(1, 0x40bd36, 1, 76) = 0
    xmlSubstituteEntitiesDefault(1, 0x40bd36, 0, 76) = 0
    __xmlLoadExtDtdDefaultValue(1, 0x40bd36, 0, 76) = 0x7f969b9c0a1c
    xmlRelaxNGNewParserCtxt(0x7fff26fdaf23, 0x40bd36, 0, 76) = 0x245c490
    xmlRelaxNGSetParserErrors(0x245c490, 0x404ac0, 0x404ac0, 0x7f969af39080) = 0x245c490
    xmlRelaxNGParse(0x245c490, 0x404ac0, 0x404ac0, 0x7f969af39080) = 0x2527f70
    xmlRelaxNGFreeParserCtxt(0x245c490, 0xffffffff, 0x7f969af38678, 0x25b5570) = 1
    ...

    and this is the exact order of libxml2 function calls xmllint issues to parse test_fail1.html!

    note: this helped us a lot and made it possible to discover the secret xmlLineNumbersDefault function! (a C sketch of the traced call sequence follows below)
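
putting the traced calls together, the core of a minimal relax-ng validator looks roughly like this (a sketch with error handling omitted; not the actual fastXmlLint.c):

#include <libxml/parser.h>
#include <libxml/relaxng.h>

/* returns 0 if doc_file validates against schema_file */
int validate(const char *schema_file, const char *doc_file) {
    xmlLineNumbersDefault(1); /* the "secret" call: keep line numbers for error reports */

    xmlRelaxNGParserCtxtPtr pctx = xmlRelaxNGNewParserCtxt(schema_file);
    xmlRelaxNGPtr schema = xmlRelaxNGParse(pctx);
    xmlRelaxNGFreeParserCtxt(pctx);

    xmlDocPtr doc = xmlReadFile(doc_file, NULL, 0);
    xmlRelaxNGValidCtxtPtr vctx = xmlRelaxNGNewValidCtxt(schema);
    int ret = xmlRelaxNGValidateDoc(vctx, doc);

    xmlRelaxNGFreeValidCtxt(vctx);
    xmlFreeDoc(doc);
    xmlRelaxNGFree(schema);
    return ret;
}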

emscripten

during this project we had the idea to create a c to javascript cross-compiler abstraction using nix for emscripten and we are happy to announce that it is now officially in nixpkgs, see PR 16208.

this means:

  1. you can use nix-build to cross-compile all your dependencies like libz and afterwards use these in your project
  2. since nix runs on all linuxes, mac os x and other unix-like platforms, you can now enjoy full toolchain automation and deployment when doing emscripten.

if using nixpkgs (master), you can check for emscripten targets using:

nix-env -qaP | grep emscriptenPackages

and install using:

nix-env -iA emscriptenPackages.json_c

note: don't mix json_c (native, x86) with other libs (emscripten, javascript) in your user-profile or you will get weird error messages with object code being in the wrong format and such.

nixos development

nix-shell was the primary development tool along with the default.nix which can basically spawn two different environments:

  • nix-shell -A emEnv - emscripten environment: used to compile c-code in javascript
  • nix-shell -A nativeEnv - native environment: used to develop the c-code in question and also for unit testing purposes

see Makefile.emEnv and Makefile.nativeEnv respectively.

let's have a look at the default.nix:

let 

...

emEnvironment = stdenv.mkDerivation rec {
  name = "emEnv";
  shellHook = ''
    export HISTFILE=".zsh_history"
    alias make="colormake -f Makefile.emEnv"
    alias c="while true; do inotifywait * -e modify --quiet > /dev/null; clear; make closure| head -n 30; done"
    alias s="python customserver.py"
    alias jcc=closure_compiler/jcc
    echo "welcome to the emEnvironment"
    PS1="emEnv: \$? \w \[$(tput sgr0)\]"
  '';

  buildInputs = [ json-c libz xml-js ] ++ [ colormake nodejs emscripten autoconf automake libtool pkgconfig gnumake strace ltrace python openjdk ncurses ];
};

...

in

{
  # use nix-shell with -A to select the wanted environment to work with:
  #   --pure is optional

  # nix-shell -A nativeEnv --pure  
  nativeEnv = nativeEnvironment;
  # nix-shell -A emEnv --pure  
  emEnv = emEnvironment;
}

you will notice that emEnv is a stdenv.mkDerivation and it uses shellHook and buildInputs.

some remarks:

  • we set a HISTFILE and get a project based history which is nice
  • using alias we override make with colormake and also set the target Makefile to Makefile.emEnv
  • setting a custom PS1 makes it easier to identify the shell when working on n+1 projects at the same time
  • the s alias runs a python webserver with xhtml mime-type support, which is handy when developing with chromium as XHR requests will be working then

note: the default.nix does contain libz, json-c and xml-js packaging, and since this is now in nixpkgs it is kind of obsolete.

conclusion

we (paul/joachim) want to thank Jos van den Oever (prolific open source contributor and co-chair of the OASIS ODF TC on behalf of the Dutch government) for inspiring the creation of this tool. ODF is a prominent example of a real-world standard that leverages the relax ng standard, and we expect xmlmirror to be very useful in the creation of more ODF autotests. Jos has also graciously offered to provide an initial host repository for xmlmirror.

  • schema parsing in codemirror can now easily be extended with all relax ng schemas!

  • also thanks to profpatsch for his explanations on the nix feature called override, see emscripten-packages.nix

  • we also want to thank nlnet foundation for their financial contribution from the ODF fund which enabled us to complete this interesting project. thanks as well to Michiel Leenaars (not only from nlnet but also one of the people behind the ODF plugfests) for his interest in the project. now we have a real powerful xmleditor, made huge progress with the emscripten toolchain on nixos and have created a pretty useful development workflow.

if you have questions/comments, see nixcloud.io for contact details.

by qknight at July 21, 2016 08:35 AM

July 06, 2016

Rok Garbas

pypi2nix reborn

In recent years the pypi2nix tool went through many iterations. Quite a few approaches were tested, and over the last year all the pieces finally came together. I can say personally that for the past 6 months I have been a happy pypi2nix user. I finally came around to polishing some rough edges and writing this blogpost.

Currently I'm looking for python developers in the Nix community to give it a try and report bugs, file feature requests or ask questions if something is unclear.

What is pypi2nix?

pypi2nix is a tool that tries to generate nix expressions from your project's requirements.txt, buildout.cfg and setup.py.

Python packaging is not the simplest thing you can read all about in one place. It is years of poorly documented work and somehow it all works (well, at least most of the time). pypi2nix is never going to be a tool that works 100% of the time, but in the worst case it will get you pretty close and leave you with only a few lines of manual work.

An important thing to keep in mind is that pypi2nix is not a tool that will automate generating pkgs/top-level/python-packages.nix for the nixpkgs repository. pypi2nix should be used on a per-project basis (similar to how cabal2nix works). Maybe this will change in the future, but for now this is the current scope of the project.

How do you install it?

pypi2nix was just pushed to nixpkgs master and it will take some time until it lands in your channels. You can install it directly from the master branch on Github.

% git clone https://github.com/garbas/pypi2nix
% cd pypi2nix
% nix-env -iA build."x86_64-linux" -f release.nix

Once pypi2nix gets built by hydra you can also install it via nix-env command:

% nix-env -iA nixos.pypi2nix

or if you are using Nix on non-NixOS system

% nix-env -iA nixpkgs.pypi2nix

If you want to start contributing to pypi2nix, look no further than running nix-shell:

% nix-shell

and you can start hacking on the code.

How do you use it?

It is very common for a python project to have a requirements.txt file which lists the project's dependencies.

For the sake of this blogpost let's create an example requirements.txt

% echo "requests"   >  requirements.txt
% echo "pyramid"    >> requirements.txt
% echo "lxml"       >> requirements.txt

As you can see above I created a requirements.txt file where I specified 3 dependencies. To generate nix expressions we have to run:

% pypi2nix -r requirements.txt -E "libxslt libxml2"
...

Because lxml depends on libxslt and libxml2 we needed to declare them as build inputs (the -E option) in order to install it.

For those new to the python packaging world, keep in mind that we need to install a package in order to know what its dependencies are. Crazy, right!? Well, that is how python works.

Above command created 3 files:

  • requirements_generated.nix - A list of generated nix expressions for all packages listed in requirements.txt and all their dependencies. An example of a generated expression:

    { pkgs, python, commonBuildInputs ? [], commonDoCheck ? false }:
    
    self: {
    
      ...
    
      "Babel" = python.mkDerivation {
        name = "Babel-2.3.4";
        src = pkgs.fetchurl {
          url = "https://files.pythonhosted.org/packages/.../Babel-2.3.4.tar.gz";
          sha256= "...";
        };
        doCheck = commonDoCheck;
        buildInputs = commonBuildInputs;
        propagatedBuildInputs = [
          self."pytz"
        ];
        meta = {
          homepage = "";
          license = lib.bsdOriginal;
          description = "Internationalization utilities";
        };
      };
    
      ...
    
    }
    
  • requirements_override.nix - An empty set of overrides that you can fill in if you are not happy with what pypi2nix generated. This file only gets created if it does not exist yet. More on this later on.

  • requirements.nix - A file that glues together requirements_generated.nix and requirements_override.nix. This file also implements new python nix functions. More on this later on.

To build pyramid, lxml and requests do:

% nix-build requirements.nix -A pkgs.pyramid -A pkgs.lxml -A pkgs.requests

Or to build a python interpreter with all of the above packages

% nix-build requirements.nix -A interpreter
% ./result/bin/python -c "import pyramid; import lxml; import requests"

Or to enter development environment with all of the above packages

% nix-shell requirements.nix -A interpreter
(nix-shell) % python -c "import pyramid; import lxml; import requests"

By default python 2.7 is selected, but you can choose another python version by specifying the -V option

% pypi2nix -r requirements.txt -E "libxslt libxml2" -V "3.5"
...
% nix-shell requirements.nix -A interpreter
(nix-shell) % python3 -c "import pyramid; import lxml; import requests"

All python versions in nixpkgs are available in pypi2nix as well

% pypi2nix --help
...
Options:
   ...
   -V, --python-version [2.7|3.5|3.4|3.3|pypy|2.6|3.2]
                                  Provide which python version we build for.
                                  [required]

You can find few more examples in examples folder.

What to do when nix-build fails?

It might (and probably will) happen that some packages fail to build. There are a million reasons why a package may fail to build. It would be foolish to try to solve and accommodate every possible scenario. Instead, when things go south, tools to override the generated expressions are provided.

Initially, a file with the suffix _override.nix is generated with an empty set of overrides.

As an example of how this overriding works, let us enable the tests for the lxml library from the previous example. The requirements_override.nix file would then look like this:

{ pkgs, python }:

self: super: {

  "lxml" = python.overrideDerivation super."lxml" (old: {
    doCheck = true;
  });

}

After the change you can continue to build packages as shown above. If you rerun pypi2nix -r requirements.txt you will see that requirements_override.nix does not get overwritten.

The above example gives you all the flexibility to override any existing expression as well as to add new (manually written) expressions, as the sketch below shows.
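
The same mechanism can be used to add a package by hand, mirroring the shape of the generated expressions shown earlier (the package name, url and sha256 below are hypothetical placeholders):

{ pkgs, python }:

self: super: {

  # hypothetical package that pypi2nix could not generate
  "my-internal-tool" = python.mkDerivation {
    name = "my-internal-tool-1.0";
    src = pkgs.fetchurl {
      url = "https://example.org/my-internal-tool-1.0.tar.gz";
      sha256 = "0000000000000000000000000000000000000000000000000000";
    };
    propagatedBuildInputs = [ self."requests" ];
  };

}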

Why are generated expressions different then the one used in nixpkgs?

If you look at the generated nix expressions in requirements_generated.nix, and if you have packaged python packages with nix in the past, you will see that the functions used there are a bit different.

With pypi2nix I also took the freedom to explore how we could have a nicer developer experience when working with python and nix. Nothing is set in stone, so I invite you to open an issue if you disagree with the direction taken.

Current limitations

Current limitations of the pypi2nix tool that are going to be fixed in the future are:

  • not possible to specify multiple requirements.txt files (#34)
  • not working with buildout.cfg files (#31)
  • not working with setup.py files (#32)
  • tests for generated packages are disabled by default for now (#35)
  • requires packages on pypi to have a tarball/zip release. some packages only publish in egg/wheel format (#26)

Let me know if I'm missing some crucial things that I just can not see.

What does the future hold?

pypi2nix is just a first stepping stone on the roadmap to package python packages on PyPI.

The end goal for me is that an average python developer (or a consumer of python packages) would be able to depend on any python package without writing a single line of nix code.

The next thing I am going to work on towards that goal is to create a repository of a smaller set of automatically generated nix expressions from PyPI. Not sure where the experiment will take me, but I have used a similar approach in a few previous projects.

I already started a repository called nixpkgs-python where I plan to continue this work, in case you are interested in following/joining this effort.

Convinced to give it a try?

Help me test pypi2nix and provide me with failing requirements.txt files. And if it works please give me a shout on twitter. It is nice to know that it is working for somebody.

by Rok Garbas at July 06, 2016 10:00 PM

July 02, 2016

Rok Garbas

NixOS Meetup Report

All together we were 17 people, which was the biggest number so far for any of the NixOS Berlin gatherings. I'm not sure what the reason was, but my speculation is that an evening of gathering is nice from time to time (presentations, demos, ...), but a lot more Nixers want to hack together and an evening is just too short. A day or two of hacking feels like you can actually get something done. We will definitely repeat it.

Also at this point I would like to thank Mozilla for letting us use their office and to keep us hydrated for all those hard working days.

Beginner workshop

We started the meetup with a Nix/NixOS walk through for anybody that is just starting with Nix/NixOS. We got 5 newcomers to Nix/NixOS.

We also had some interesting talks afterwards about how we can improve this initial Nix on-boarding. There is just so much we can improve. I know I personally learned a lot about how to explain Nix to a newcomer and I can't wait until the next time I have a chance to do it.

For those of you who did not have time to join us for this Nix/NixOS introduction workshop, you can rest assured we will repeat it very soon.

Presentations / Demos

Each day we reserved ~1 hour to relax from coding and listen to people who wanted to present/demonstrate something they built with Nix or some interesting tooling around Nix. We did not have any camera around to record it, and the presentations were more discussions than really presentations.

Reports

Many people worked on many different projects; here are the individual short reports. I hope all of this gives you inspiration for your work and hopefully you will join us next time.


Emery,

To summarize, I added color output to nix-repl, and worked on a new Nixpkgs stdenv for Genode. https://github.com/ehmry


Christian Kauhaus,

Known as @ckauhaus on IRC & Twitter.

Blogging on https://blog.flyingcircus.io from time to time.

Topics:

  • Got first hands-on experience with nixops, installing lots of random stuff into an ever-changing set of virtual machines on my laptop. Struggled with a corrupted nix store on my host system. Unfortunately, nix does not give clear error messages in this case so I lost a lot of time here.
  • Hacked heavily on vulnix. An initial release should be ready soon.

lassulus

  • I learned how FHSUserEnv works. Used it to install some closed source apps on my workstation. I also updated my buildbot configuration to use the correct version of nixpkgs for my tests
  • I extended some services in my nixos configuration.

Code can be found in my nixos-configuration repository:

http://cgit.lassul.us/stockholm/


Maksim Bronsky

@dvhfm

  • I introduced vulnix to the community. Had some great reactions and discussions of where the development should be headed
  • I coded on one of the caching mechanisms of vulnix (xml parsing) to improve the overall runtime.

Matthias Heinzel

Great meetup, I really enjoyed it. Looking forward to the next one. :)

I mainly set up my NixOS system, got my wifi working (yay!) and learnt a bit about the Nix language. Thanks for all your help!

I also plan to help sjordan with the graphical user interface for Nix.

mheinzel on github and IRC.


sjordan

Enjoyed the meetup. Helped people to get started on nixos/nix.

I had great discussions about different aspects of nix, especially nix as a package manager which really helped me develop concepts for a GUI for nix.

nixgui: http://hub.darcs.net/seppeljordan/nixgui


Maarten Hoogendoorn

twitter: @moretea_nl github: moretea

  • Helped to install NixOS on a macbook
  • Worked with Fabian on experimenting with adding documentation comments to nix, to allow attr set arguments to be documented in his list-package-options branch. Some initial code: https://github.com/moretea/nix/commit/9be41e4110983604367ee796a03aab4114a7bdbf (see tests/lang/eval-okay-functionargs-docs.exp and tests/lang/eval-okay-functionargs-docs.nix in the repo for what this actually does).
  • Hacked a way to document library functions in a structured way, this could be extended (see last commit) to support runtime type checking of functions in nix itself. See https://www.youtube.com/watch?v=ahVu3tjrriM

Mathias Schreck

github: @lo1tuma twitter: @lo1tuma

The meetup was a really great event; as a nix newbie I learned a lot.

  • Learned about the difference between import and callPackage
  • I learned a lot about nixops and finally managed to deploy our custom jenkins package (which is based on this expression https://github.com/zalora/microgram/blob/59dfe04d2ac67945f6d2dee5f7233b0cdba9318d/pkgs/jenkins/default.nix) to virtualbox and EC2. Now we have all our jenkins config 100% as code which is much better than configuring stuff via the jenkins web UI.
  • I learned about buildFHSUserEnv. One of our jenkins plugins uses a hardcoded path to /bin/echo which I could make available through buildFHSUserEnv. I also created an upstream issue and I will try to provide a patch for jenkins soon.

Rok Garbas

github/twitter/irc: garbas

  • closed a few wiki tickets. discovered how much I suck at writing documentation. too many interruptions to get something concretely done. but a small progress was made. anyone that wants to help port the remaining wiki tickets to the respective manuals, please select your pick:

    https://github.com/NixOS/nixpkgs/milestones/Move%20the%20wiki!

  • got familiar with vulnix. can't wait to use it for my own work. packaged the project with the help of pypi2nix. helped port code from argparse to click

  • fixed a lot of bugs for pypi2nix. currently I'm looking for python developers to help me test if it is working for them. a separate blog post on this topic will be written.

Final thoughts

It was fun and we also got work done. Hooray for us! I can not wait for next time we meet again. Let me know if you have any ideas for next nixos meetup (ping me on Twitter).

by Rok Garbas at July 02, 2016 10:00 PM

June 28, 2016

Joachim Schiele

nixos-augsburg-sprint

nixos-sprint

paul and me visited the augsburger openlab last weekend, for a nice nixos sprint with profpatsch. the sprint lasted two days and we talked lots about nix/nixpkgs internals.

nix based emscripten toolchain

we've been working on the emscripten nix toolchain. current status is: prototype is working and we can already compile these targets:

note: YAY, this is the first nix-based emscripten toolchain, which should work not only on nixos but also on mac os x and basically every POSIX supporting unix!

nixexpr grammar

we had the idea to make the nix expression language more forgiving by having it support ; at the end of a normal function body.

random example: mkDerivation

  json_c_ = emscriptenStdenv.mkDerivation rec {

    name = "json_c_";
    version = "json-c-0.12-20140410";

    buildInputs = [ autoconf automake libtool pkgconfig gnumake ];

    src = fetchgit {
      url = "https://github.com/json-c/json-c";
      rev = "refs/tags/${version}";
      sha256 = "0s9h6147v2vkd4l4k3prg850n0k1mcbhwhbr09dzq97m6vi9lfdi";
    };
    postFixup = "echo postFixup";
    preFixup = "echo preFixup";
    fixupPhase = "echo fixupPhase";
  };

you close the scope with }; and the trailing ; is not optional.

function call

{ foo, bar } :

{
  # function body
}

you close the scope with } and you are not allowed to use };!

note: with this patch you are now allowed to write either } (default) or }; which is new.

mkDerivation outputs

i finally learned that we now have the outputs-feature

this means nixos/nix now supports split packages:

  • foo.deb vs.
  • foo-dev.deb

as ubuntu and other distros do.

in nixpkgs, for instance in an stdenv.mkDerivation you can now use:

outputs = [ "lib" "headers" "doc" ];

to install software into these directories! great!
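
inside the builder each output name becomes an environment variable holding its own store path, and consumers can depend on a single output. a sketch using the output names from above (the file names are hypothetical):

installPhase = ''
  mkdir -p $lib/lib $headers/include $doc/share/doc
  cp libfoo.so $lib/lib/        # goes into the "lib" output
  cp foo.h $headers/include/    # goes into the "headers" output
'';

# a dependent package can then pull in only what it needs:
buildInputs = [ foo.headers ];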

summary

nice ppl, nice room. 10/10, would hack there again! huge thanks to Profpatsch and helpers.

by qknight at June 28, 2016 02:54 AM

June 21, 2016

Joachim Schiele

android calendar and CalDav

motivation

2016 and android still does not have built-in open source CalDAV/CardDAV support; this article is a very interesting read on the history and reactions of people wanting that. there is also a nice article on how to implement a CalDAV client.

so in a nutshell: syncing different devices while supporting offline capabilities is complicated.

long story short: i do not want to copy my data into the google cloud. in this posting i want to share my results of using owncloud 8.2 to sync my calendar and address book between my desktop computer and android (cyanogenmod) mobile.

owncloud/PIM

used components:

results

what i liked:

  • have my own infrastructure
  • many components are open source

what i hate:

  • lots of manual deployment and configuration
  • android
    • battery drain on android: 4 programs periodically poll for updates
    • if owncloud can't be reached aCalDAV will always set a notification with an error, see issue 17

  • laptop: thunderbird's CalDAV implementation
    • focus stealing warning popup-dialog is displayed once in a while -> very annoying, see issue 1287332

      to reproduce this:

      1. shutdown your owncloud server
      2. start thunderbird
      3. add a new event in the calendar and see thunderbird-> tools -> error console

        Timestamp: 21.06.2016 15:01:54
        Error: An error occurred when writing to the calendar http://192.168.0.86/owncloud/remote.php/dav/calendars/joachim/default/! Error code: MODIFICATION_FAILED. Description: 
        Source File: resource://calendar/modules/calUtils.jsm -> file:///home/joachim/.thunderbird/hy2x4cxy.default/extensions/%7Be2fda1a4-762b-4020-b5ad-a41df1933103%7D/calendar-js/calCalendarManager.js
        Line: 959
      4. see the dialog

      5. restart thunderbird, see it once in a while
      6. once owncloud is connected and synced, the 'added and cached' event is synced and the message will not appear anymore

  • laptop: thunderbird's CardDAV implementation interacts strangely with the SoGo connector, that is:
    • don't try mass-moving your contacts from the offline address books into your owncloud address book. with old versions you might hit this bug
    • the address book can only be edited when the SoGo connector can communicate with owncloud, no offline functionality
    • sometimes one deletes an entry in the address book, it vanishes, appears again and finally vanishes forever ...?

conclusion

i love this setup even if it is a bit fragile! k9 mail is a great client and has an even better thread view than thunderbird. etar is exactly the calendar app i wanted! owncloud 8.x/9.x is currently packaged in nixpkgs but is broken. next up: fix the owncloud package(s) on nixos/nixcloud and use that instead of ubuntu 16.03.

by qknight at June 21, 2016 11:54 AM

Sheena Artrip

NixOS Recipes - ELK Stack

Walkthrough of a simple ELK stack configuration in NixOS, followed by a more advanced introduction to functions and services

June 21, 2016 04:58 AM

June 20, 2016

Sander van der Burg

Using Disnix as a remote package deployer

Recently, I was asked whether it is possible to use Disnix as a tool for remote package deployment.

As described in a number of earlier blog posts, Disnix's primary purpose is not remote (or distributed) package management, but deploying systems that can be decomposed into services to networks of machines. To deploy these kinds of systems, Disnix executes all required deployment activities, including building services from source code, distributing them to target machines in the network and activating or deactivating them.

However, a service deployment process is basically a superset of an "ordinary" package deployment process. In this blog post, I will describe how we can do remote package deployment by instructing Disnix to only use a relevant subset of features.

Specifying packages as services


In the Nix packages collection, it is a common habit to write each package specification as a function in which the parameters denote the (local) build and runtime dependencies (something that Disnix's manual refers to as intra-dependencies) that the package needs. The remainder of the function describes how to build the package from source code and its provided dependencies.

Disnix has adopted this habit and extended this convention to services. The main difference between Nix package expressions and Disnix service expressions is that the latter also take inter-dependencies into account that refer to run-time dependencies on services that may have been deployed to other machines in the network. For services that have no inter-dependencies, a Disnix expression is identical to an ordinary package expression.
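To illustrate the difference: a service with inter-dependencies is written as a nested function, in which the first argument set declares the intra-dependencies and the second the inter-dependencies. The following fragment is a hypothetical sketch (the names webapp and postgresDB are made up); in the services model, the inter-dependency would be supplied through the service's dependsOn attribute:

{stdenv}:
{postgresDB}:

stdenv.mkDerivation {
  name = "webapp";
  # The postgresDB parameter exposes the configuration properties of the
  # service it refers to (and of the machines it has been deployed to),
  # so that the build can configure the web application to reach it over
  # the network.
  # ...
}

A service without inter-dependencies simply omits the second argument set.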

This means that, for example, an expression for a package such as the Midnight Commander is also a valid Disnix service with no inter-dependencies:


{ stdenv, fetchurl, pkgconfig, glib, gpm, file, e2fsprogs
, libX11, libICE, perl, zip, unzip, gettext, slang
}:

stdenv.mkDerivation {
  name = "mc-4.8.12";

  src = fetchurl {
    url = http://www.midnight-commander.org/downloads/mc-4.8.12.tar.bz2;
    sha256 = "15lkwcis0labshq9k8c2fqdwv8az2c87qpdqwp5p31s8gb1gqm0h";
  };

  buildInputs = [ pkgconfig perl glib gpm slang zip unzip file gettext
    libX11 libICE e2fsprogs ];

  meta = {
    description = "File Manager and User Shell for the GNU Project";
    homepage = http://www.midnight-commander.org;
    license = "GPLv2+";
    maintainers = [ stdenv.lib.maintainers.sander ];
  };
}

Composing packages locally


Package and service expressions are functions that do not specify the versions or variants of the dependencies that should be used. To allow services to be deployed, we must compose them by providing the desired versions or variants of the dependencies as function parameters.

As with ordinary Nix packages, Disnix has also adopted this convention for services. In addition, we have to compose a Disnix service twice -- first its intra-dependencies and later its inter-dependencies.

Intra-dependency composition in Disnix is done in a similar way as in the Nix packages collection:


{pkgs, system}:

let
  callPackage = pkgs.lib.callPackageWith (pkgs // self);

  self = {
    pkgconfig = callPackage ./pkgs/pkgconfig { };

    gpm = callPackage ./pkgs/gpm { };

    mc = callPackage ./pkgs/mc { };
  };
in
self

The above expression (custom-packages.nix) composes the Midnight Commander package by providing its intra-dependencies as function parameters. The third attribute (mc) invokes the callPackage function, which imports the previous package expression and automatically provides the arguments having the same names as the function parameters.

The callPackage function first consults the self attribute set (that composes some of the Midnight Commander's dependencies as well, such as gpm and pkgconfig) and then falls back to any package in the Nixpkgs repository.
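For illustration, the mc attribute above is roughly equivalent to importing the package expression and passing every parameter by hand (a sketch; the parameter list is the one declared by the Midnight Commander expression shown earlier):

mc = import ./pkgs/mc {
  inherit (pkgs) stdenv fetchurl glib file e2fsprogs libX11 libICE
    perl zip unzip gettext slang;
  inherit (self) pkgconfig gpm;
};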

Writing a minimal services model


Previously, we have shown how to build packages from source code and their dependencies, and how to compose packages locally. For the deployment of services, more information is needed. For example, we need to compose their inter-dependencies so that services know how to reach them.

Furthermore, Disnix's end objective is to get a running service-oriented system, so it carries out extra deployment activities for services, such as activation and deactivation. The latter two steps are executed by a Dysnomia plugin that is determined by annotating a service with a type attribute.

For package deployment, specifying these extra attributes and executing these remaining activities are in principle not required. Nonetheless, we still need to provide a minimal services model so that Disnix knows which units can be deployed.

Exposing the Midnight Commander package as a service, can be done as follows:


{pkgs, system, distribution, invDistribution}:

let
  customPkgs = import ./custom-packages.nix {
    inherit pkgs system;
  };
in
{
  mc = {
    name = "mc";
    pkg = customPkgs.mc;
    type = "package";
  };
}

In the above expression, we import our intra-dependency composition expression (custom-packages.nix), and we use the pkg sub attribute to refer to the intra-dependency composition of the Midnight Commander. We annotate the Midnight Commander service with the package type to instruct Disnix that no additional deployment steps need to be performed beyond the installation of the package, such as activation or deactivation.

Since the above pattern is common to all packages, we can also automatically generate services for any package in the composition expression:


{pkgs, system, distribution, invDistribution}:

let
  customPkgs = import ./custom-packages.nix {
    inherit pkgs system;
  };
in
pkgs.lib.mapAttrs (name: pkg: {
  inherit name pkg;
  type = "package";
}) customPkgs

The above services model exposes all packages in our composition expression as services.

Configuring the remote machine's search paths


With the services models shown in the previous section, we have all the ingredients available to deploy packages with Disnix. To allow users on the remote machines to conveniently access their packages, we must add Disnix's Nix profile to the PATH of a user on the remote machines:


$ export PATH=/nix/var/nix/profiles/disnix/default/bin:$PATH

When using NixOS, this variable can be extended by adding the following line to /etc/nixos/configuration.nix:


environment.variables.PATH = [ "/nix/var/nix/profiles/disnix/default/bin" ];

Deploying packages with Disnix


In addition to a services model, Disnix needs an infrastructure and distribution model to deploy packages. For example, we can define an infrastructure model that may look as follows:


{
  test1.properties.hostname = "test1";
  test2 = {
    properties.hostname = "test2";
    system = "x86_64-darwin";
  };
}

The above infrastructure model describes two machines with hostnames test1 and test2. Furthermore, machine test2 has a specific system architecture: x86_64-darwin, which corresponds to 64-bit Intel-based Mac OS X.

We can distribute packages to these two machines with the following distribution model:


{infrastructure}:

{
  gpm = [ infrastructure.test1 ];
  pkgconfig = [ infrastructure.test2 ];
  mc = [ infrastructure.test1 infrastructure.test2 ];
}

In the above distribution model, we distribute package gpm to machine test1, pkgconfig to machine test2 and mc to both machines.

When running the following command-line instruction:


$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix

Disnix executes all activities to get the packages in the distribution model deployed to the machines, such as building them from source code (including their dependencies) and distributing their dependency closures to the target machines.

Because machine test2 may have a different system architecture than the coordinator machine responsible for carrying out the deployment, Disnix can use Nix's delegation mechanism to forward a build to a machine that is capable of doing it.
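For reference, Nix's delegation mechanism is typically enabled by listing remote build machines that the coordinator can reach over SSH, for example in /etc/nix/machines (a sketch; the user name, host name and key path are hypothetical, and the exact format depends on the Nix version):

# user@host  platform  SSH private key  max parallel builds
nix@test2 x86_64-darwin /root/.ssh/id_buildfarm 2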

Alternatively, packages can also be built on the target machines through Disnix:


$ disnix-env --build-on-targets \
-s services.nix -i infrastructure.nix -d distribution.nix

After the above command-line instructions have succeeded, we should be able to start the Midnight Commander on any of the target machines by running:


$ mc

Deploying any package from the Nixpkgs repository


Besides deploying a custom set of packages, it is also possible to use Disnix to remotely deploy any package in the Nixpkgs repository, but doing so is a bit tricky.

The main challenge lies in the fact that the Nix packages set is a nested set of attributes, whereas Disnix expects services to be addressed in one attribute set only. Fortunately, the Nix expression language and Disnix models are flexible enough to implement a solution. For example, we can define a distribution model as follows:


{infrastructure}:

{
  mc = [ infrastructure.test1 ];
  git = [ infrastructure.test1 ];
  wget = [ infrastructure.test1 ];
  "xlibs.libX11" = [ infrastructure.test1 ];
}

Note that we use a dot notation: xlibs.libX11 as an attribute name to refer to libX11, which can only be referenced as a sub attribute in Nixpkgs.

We can write a services model that uses the attribute names in the distribution model to refer to the corresponding package in Nixpkgs:


{pkgs, system, distribution, invDistribution}:

pkgs.lib.mapAttrs (name: targets:
  let
    attrPath = pkgs.lib.splitString "." name;
  in
  { inherit name;
    pkg = pkgs.lib.attrByPath attrPath
      (throw "package: ${name} cannot be referenced in the package set")
      pkgs;
    type = "package";
  }
) distribution

With the above services model, we can deploy any Nix package to any remote machine with Disnix.
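To see why this works, consider how the two helper functions resolve the dotted attribute name (an illustration, not output copied from a real session):

pkgs.lib.splitString "." "xlibs.libX11"
  => [ "xlibs" "libX11" ]

pkgs.lib.attrByPath [ "xlibs" "libX11" ] default pkgs
  => pkgs.xlibs.libX11 (or default, if the attribute path does not exist)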

Multi-user package management


Besides supporting single user installations, Nix also supports multi-user installations in which every user has their own private Nix profile with their own set of packages. With Disnix, we can also manage multiple profiles. For example, by adding the --profile parameter, we can deploy another Nix profile that contains a set of packages for the user: sander:


$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix \
--profile sander

The user: sander can access their own set of packages by setting the PATH environment variable to:


$ export PATH=/nix/var/nix/profiles/disnix/sander/bin:$PATH

Conclusion


Although Disnix has not been strictly designed for this purpose, I have described in this blog post how Disnix can be used as a remote package deployer by using a relevant subset of Disnix features.

Moreover, I now consider the underlying Disnix primitives to be mature enough. As such, I am announcing the release of Disnix 0.6!

Acknowledgements


I gained the inspiration for writing this blog post from discussions with Matthias Beyer on the #nixos IRC channel.

by Sander van der Burg (noreply@blogger.com) at June 20, 2016 08:38 PM

June 11, 2016

Sander van der Burg

Deploying containers with Disnix as primitives for multi-layered service deployments

As explained in an earlier blog post, Disnix is a service deployment tool that can only be used after a collection of machines has been predeployed, providing a number of container services, such as a service manager (e.g. systemd), a DBMS (e.g. MySQL) or an application server (e.g. Apache Tomcat).

To deploy these machines, we need an external solution. Some solutions are:

  • Manual installations, requiring somebody to obtain a few machines, manually install operating systems (e.g. a Linux distribution), and finally install all required software packages, such as Nix, Dysnomia, Disnix and any additional container services. Manually configuring a machine is typically tedious, time-consuming and error prone.
  • NixOps. NixOps is capable of automatically instantiating networks of virtual machines in the cloud (such as Amazon EC2) and deploying entire NixOS system configurations to them. These NixOS configurations can be used to automatically deploy Dysnomia, Disnix and any container service that we need. A drawback is that NixOps is NixOS-based and not really useful if you want to deploy services to machines running different kinds of operating systems.
  • disnixos-deploy-network. In a Disnix-context, services are basically deployment units of an unspecified form, which means that we can also automatically deploy entire NixOS configurations to target machines as services. A major drawback of this approach is that we require predeployed machines running Disnix first.

Although there are several ways to manage the underlying infrastructure of services, they are basically all-or-nothing solutions with regard to automation -- we either have to manually deploy entire machine configurations ourselves or we are stuck with a NixOS-based solution that completely automates it.

In some scenarios (e.g. when it is desired to deploy services to non-Linux operating systems), the initial deployment phase becomes quite tedious. For example, it took me quite a bit of effort to set up the heterogeneous network deployment demo I gave at NixCon2015.

In this blog post, I will describe an approach that serves as an in-between solution -- since services in a Disnix-context can be (almost) any kind of deployment unit, we can also use Disnix to deploy container configurations as services. These container services can also be deployed to non-NixOS systems, which means that we can alleviate the effort in setting up the initial target system configurations where Disnix can deploy services to.

Deploying containers as services with Disnix


As with services, containers in a Disnix-context could take any form. For example, in addition to MySQL databases (that we can deploy as services with Disnix), we can also deploy the corresponding container: the MySQL DBMS server, as a Disnix service:

{ stdenv, mysql, dysnomia
, name ? "mysql-database"
, mysqlUsername ? "root", mysqlPassword ? "secret"
, user ? "mysql-database", group ? "mysql-database"
, dataDir ? "/var/db/mysql", pidDir ? "/run/mysqld"
}:

stdenv.mkDerivation {
  inherit name;

  buildCommand = ''
    mkdir -p $out/bin

    # Create wrapper script
    cat > $out/bin/wrapper <<EOF
#! ${stdenv.shell} -e

case "\$1" in
    activate)
        # Create group, user and the initial database if it does not exist
        # ...

        # Run the MySQL server
        ${mysql}/bin/mysqld_safe --user=${user} --datadir=${dataDir} --basedir=${mysql} --pid-file=${pidDir}/mysqld.pid &

        # Change root password
        # ...
        ;;
    deactivate)
        ${mysql}/bin/mysqladmin -u ${mysqlUsername} --password="${mysqlPassword}" shutdown

        # Delete the user and group
        # ...
        ;;
esac
EOF

    chmod +x $out/bin/wrapper

    # Add Dysnomia container configuration file for the MySQL DBMS
    mkdir -p $out/etc/dysnomia/containers

    cat > $out/etc/dysnomia/containers/${name} <<EOF
mysqlUsername="${mysqlUsername}"
mysqlPassword="${mysqlPassword}"
EOF

    # Copy the Dysnomia module that manages MySQL databases
    mkdir -p $out/etc/dysnomia/modules
    cp ${dysnomia}/libexec/dysnomia/mysql-database $out/etc/dysnomia/modules
  '';
}

The above code fragment is a simplified Disnix expression that can be used to deploy a MySQL server. It produces a wrapper script, which carries out a set of deployment activities invoked by Disnix:

  • On activation, the wrapper script starts the MySQL server by spawning the mysqld_safe daemon process in background mode. Before starting the daemon, it also initializes some of the server's state, such as creating the user account under which the daemon runs and setting up the system database if it does not exist (these steps are left out of the example for simplicity).
  • On deactivation, it shuts down the MySQL server and removes some of the attached state, such as the user accounts.

Besides composing a wrapper script, we must allow Dysnomia (and Disnix) to deploy databases as Disnix services to the MySQL server that we have just deployed:

  • We generate a Dysnomia container configuration file with the MySQL server settings to allow a database (that gets deployed as a service) to know what credentials it should use to connect to the database.
  • We bundle a Dysnomia plugin module that implements the deployment activities for MySQL databases, such as activation and deactivation. Because Dysnomia offers this plugin as part of its software distribution, we make a copy of it, but we could also compose our own plugin from scratch.

With the earlier shown Disnix expression, we can define the MySQL server as a service in a Disnix services model:

mysql-database = {
  name = "mysql-database";
  pkg = customPkgs.mysql-database;
  dependsOn = {};
  type = "wrapper";
};

and distribute it to a target machine in the network by adding an entry to the distribution model:

mysql-database = [ infrastructure.test2 ];

Configuring Disnix and Dysnomia


Once we have deployed containers as Disnix services, Disnix (and Dysnomia) must know about their availability so that we can deploy services to these recently deployed containers.

Each time Disnix has successfully deployed a configuration, it generates Nix profiles on the target machines in which the contents of all services can be accessed from a single location. This means that we can simply extend Dysnomia's module and container search paths:

export DYSNOMIA_MODULES_PATH=$DYSNOMIA_MODULES_PATH:/nix/var/nix/profiles/disnix/containers/etc/dysnomia/modules
export DYSNOMIA_CONTAINERS_PATH=$DYSNOMIA_CONTAINERS_PATH:/nix/var/nix/profiles/disnix/containers/etc/dysnomia/containers

with the paths to the Disnix profiles that have containers deployed.

A simple example scenario


I have modified the Java variant of the ridiculous Disnix StaffTracker example to support a deployment scenario with containers as Disnix services.

First, we need to start with a collection of machines having a very basic configuration without any additional containers. The StaffTracker package contains a bare network configuration that we can deploy with NixOps, as follows:

$ nixops create ./network-bare.nix ./network-virtualbox.nix -d vbox
$ nixops deploy -d vbox

By configuring the following environment variables, we can connect Disnix to the machines in the network that we have just deployed with NixOps:

$ export NIXOPS_DEPLOYMENT=vbox
$ export DISNIX_CLIENT_INTERFACE=disnix-nixops-client

We can write a very simple bootstrap infrastructure model (infrastructure-bootstrap.nix), to dynamically capture the configuration of the target machines:

{
  test1.properties.hostname = "test1";
  test2.properties.hostname = "test2";
}

Running the following command:

$ disnix-capture-infra infrastructure-bootstrap.nix > infrastructure-bare.nix

yields an infrastructure model (infrastructure-bare.nix) that may have the following structure:

{
  "test1" = {
    properties = {
      "hostname" = "test1";
      "system" = "x86_64-linux";
    };
    containers = {
      process = {};
      wrapper = {};
    };
    "system" = "x86_64-linux";
  };
  "test2" = {
    properties = {
      "hostname" = "test2";
      "system" = "x86_64-linux";
    };
    containers = {
      process = {};
      wrapper = {};
    };
    "system" = "x86_64-linux";
  };
}

As may be observed in the captured infrastructure model shown above, we have a very minimal configuration only hosting the process and wrapper containers, which integrate with the host system's service manager, such as systemd.

We can deploy a Disnix configuration having Apache Tomcat and the MySQL DBMS as services, by running:

$ disnix-env -s services-containers.nix \
-i infrastructure-bare.nix \
-d distribution-containers.nix \
--profile containers

Note that we have provided an extra parameter to Disnix: --profile, to isolate the containers from the default deployment environment. If the above command succeeds, we have a deployment architecture that looks as follows:


Both machines have Apache Tomcat deployed as a service and machine test2 also runs a MySQL server.

When capturing the target machines' configurations again:

$ disnix-capture-infra infrastructure-bare.nix > infrastructure-containers.nix

we will receive an infrastructure model (infrastructure-containers.nix) that may have the following structure:

{
  "test1" = {
    properties = {
      "hostname" = "test1";
      "system" = "x86_64-linux";
    };
    containers = {
      tomcat-webapplication = {
        "tomcatPort" = "8080";
      };
      process = {};
      wrapper = {};
    };
    "system" = "x86_64-linux";
  };
  "test2" = {
    properties = {
      "hostname" = "test2";
      "system" = "x86_64-linux";
    };
    containers = {
      mysql-database = {
        "mysqlUsername" = "root";
        "mysqlPassword" = "secret";
        "mysqlPort" = "3306";
      };
      tomcat-webapplication = {
        "tomcatPort" = "8080";
      };
      process = {};
      wrapper = {};
    };
    "system" = "x86_64-linux";
  };
}

As may be observed in the above infrastructure model, both machines provide a tomcat-webapplication container exposing the TCP port number that the Apache Tomcat server has been bound to. Machine test2 exposes the mysql-database container with its connectivity settings.

We can now deploy the StaffTracker system (that consists of multiple MySQL databases and Apache Tomcat web applications) by running:

$ disnix-env -s services.nix \
-i infrastructure-containers.nix \
-d distribution.nix \
--profile services

Note that I use a different --profile parameter to tell Disnix that the StaffTracker components belong to a different environment than the containers. If I had used --profile containers again, Disnix would undeploy the previously shown containers environment with the MySQL DBMS and Apache Tomcat, and deploy the databases and web applications instead, which would lead to a failure.

If the above command succeeds, we have the following deployment architecture:


The result is that we have all the service components of the StaffTracker example deployed to containers that are also deployed by Disnix.

An advanced example scenario: multi-containers


We could even go one step beyond the example I have shown in the previous section. In the first example, we deploy no more than one instance of each container to a machine in the network -- this is quite common, as it rarely happens that you want to run two MySQL or Apache Tomcat servers on a single machine. Most Linux distributions (including NixOS) do not support deploying multiple instances of system services out of the box.

However, with a few relatively simple modifications to the Disnix expressions of the MySQL DBMS and Apache Tomcat services, it becomes possible to allow multiple instances to co-exist on the same machine. What we basically have to do is identify the conflicting runtime resources, make them configurable, and change their values in such a way that they no longer conflict:

{ stdenv, mysql, dysnomia
, name ? "mysql-database"
, mysqlUsername ? "root", mysqlPassword ? "secret"
, user ? "mysql-database", group ? "mysql-database"
, dataDir ? "/var/db/mysql", pidDir ? "/run/mysqld"
, port ? 3306
}:

stdenv.mkDerivation {
  inherit name;

  buildCommand = ''
    mkdir -p $out/bin

    # Create wrapper script
    cat > $out/bin/wrapper <<EOF
#! ${stdenv.shell} -e

case "\$1" in
    activate)
        # Create group, user and the initial database if it does not exist
        # ...

        # Run the MySQL server
        ${mysql}/bin/mysqld_safe --port=${toString port} --user=${user} --datadir=${dataDir} --basedir=${mysql} --pid-file=${pidDir}/mysqld.pid --socket=${pidDir}/mysqld.sock &

        # Change root password
        # ...
        ;;
    deactivate)
        ${mysql}/bin/mysqladmin --socket=${pidDir}/mysqld.sock -u ${mysqlUsername} --password="${mysqlPassword}" shutdown

        # Delete the user and group
        # ...
        ;;
esac
EOF

    chmod +x $out/bin/wrapper

    # Add Dysnomia container configuration file for the MySQL DBMS
    mkdir -p $out/etc/dysnomia/containers

    cat > $out/etc/dysnomia/containers/${name} <<EOF
mysqlUsername="${mysqlUsername}"
mysqlPassword="${mysqlPassword}"
mysqlPort=${toString port}
mysqlSocket=${pidDir}/mysqld.sock
EOF

    # Copy the Dysnomia module that manages MySQL databases
    mkdir -p $out/etc/dysnomia/modules
    cp ${dysnomia}/libexec/dysnomia/mysql-database $out/etc/dysnomia/modules
  '';
}

For example, I have revised the MySQL server Disnix expression with additional parameters that change the TCP port the service binds to, the UNIX domain socket that is used by the administration utilities and the filesystem location where the databases are stored. Moreover, these additional configuration properties are also exposed by the Dysnomia container configuration file.
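For instance, an intra-dependency composition could use these parameters to instantiate two variants whose resources do not conflict (a sketch; the ./pkgs/mysql-database path and the data directories are assumptions, while the ports and PID directories match the captured configuration shown later):

mysql-production = callPackage ./pkgs/mysql-database {
  name = "mysql-production";
  port = 3306;
  dataDir = "/var/db/mysql-production";
  pidDir = "/run/mysqld-production";
};

mysql-test = callPackage ./pkgs/mysql-database {
  name = "mysql-test";
  port = 3307;
  dataDir = "/var/db/mysql-test";
  pidDir = "/run/mysqld-test";
};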

These additional parameters make it possible to define multiple variants of container services in the services model:

{distribution, invDistribution, system, pkgs}:

let
  customPkgs = import ../top-level/all-packages.nix {
    inherit system pkgs;
  };
in
rec {
  mysql-production = {
    name = "mysql-production";
    pkg = customPkgs.mysql-production;
    dependsOn = {};
    type = "wrapper";
  };

  mysql-test = {
    name = "mysql-test";
    pkg = customPkgs.mysql-test;
    dependsOn = {};
    type = "wrapper";
  };

  tomcat-production = {
    name = "tomcat-production";
    pkg = customPkgs.tomcat-production;
    dependsOn = {};
    type = "wrapper";
  };

  tomcat-test = {
    name = "tomcat-test";
    pkg = customPkgs.tomcat-test;
    dependsOn = {};
    type = "wrapper";
  };
}

I can, for example, map the two MySQL DBMS instances and the two Apache Tomcat servers to the same machines in the distribution model:

{infrastructure}:

{
  mysql-production = [ infrastructure.test1 ];
  mysql-test = [ infrastructure.test1 ];
  tomcat-production = [ infrastructure.test2 ];
  tomcat-test = [ infrastructure.test2 ];
}

Deploying the above configuration:

$ disnix-env -s services-multicontainers.nix \
-i infrastructure-bare.nix \
-d distribution-multicontainers.nix \
--profile containers

yields the following deployment architecture:


As can be observed, we now have two instances of the same container type hosted on the same machine. When capturing the configuration:

$ disnix-capture-infra infrastructure-bare.nix > infrastructure-multicontainers.nix

we will receive a Nix expression that may look as follows:

{
  "test1" = {
    properties = {
      "hostname" = "test1";
      "system" = "x86_64-linux";
    };
    containers = {
      mysql-production = {
        "mysqlUsername" = "root";
        "mysqlPassword" = "secret";
        "mysqlPort" = "3306";
        "mysqlSocket" = "/run/mysqld-production/mysqld.sock";
      };
      mysql-test = {
        "mysqlUsername" = "root";
        "mysqlPassword" = "secret";
        "mysqlPort" = "3307";
        "mysqlSocket" = "/run/mysqld-test/mysqld.sock";
      };
      process = {};
      wrapper = {};
    };
    "system" = "x86_64-linux";
  };
  "test2" = {
    properties = {
      "hostname" = "test2";
      "system" = "x86_64-linux";
    };
    containers = {
      tomcat-production = {
        "tomcatPort" = "8080";
        "catalinaBaseDir" = "/var/tomcat-production";
      };
      tomcat-test = {
        "tomcatPort" = "8081";
        "catalinaBaseDir" = "/var/tomcat-test";
      };
      process = {};
      wrapper = {};
    };
    "system" = "x86_64-linux";
  };
}

In the above expression, there are two instances of MySQL and two instances of Apache Tomcat, each pair deployed to the same machine. These containers have their resources configured in such a way that they do not conflict. For example, both MySQL instances bind to different TCP ports (3306 and 3307) and different UNIX domain sockets (/run/mysqld-production/mysqld.sock and /run/mysqld-test/mysqld.sock).

After deploying the containers, we can also deploy the StaffTracker components (databases and web applications) to them. As described in my previous blog post, we can use an alternative (and more verbose) notation in the distribution model to directly map services to containers:

{infrastructure}:

{
  GeolocationService = {
    targets = [
      { target = infrastructure.test2; container = "tomcat-test"; }
    ];
  };
  RoomService = {
    targets = [
      { target = infrastructure.test2; container = "tomcat-production"; }
    ];
  };
  StaffService = {
    targets = [
      { target = infrastructure.test2; container = "tomcat-test"; }
    ];
  };
  StaffTracker = {
    targets = [
      { target = infrastructure.test2; container = "tomcat-production"; }
    ];
  };
  ZipcodeService = {
    targets = [
      { target = infrastructure.test2; container = "tomcat-test"; }
    ];
  };
  rooms = {
    targets = [
      { target = infrastructure.test1; container = "mysql-production"; }
    ];
  };
  staff = {
    targets = [
      { target = infrastructure.test1; container = "mysql-test"; }
    ];
  };
  zipcodes = {
    targets = [
      { target = infrastructure.test1; container = "mysql-production"; }
    ];
  };
}

As may be observed in the distribution model above, we deploy the databases and web applications to both container instances hosted on the same machines.

We can deploy the services of which the StaffTracker consists, as follows:

$ disnix-env -s services.nix \
-i infrastructure-multicontainers.nix \
-d distribution-advanced.nix \
--profile services

and the result is the following deployment architecture:


As may be observed in the picture above, we now have a running StaffTracker system that uses two MySQL and two Apache Tomcat servers on one machine. Isn't it awesome? :-)

Conclusion


In this blog post, I have demonstrated an approach in which we deploy containers as services with Disnix. Containers serve as potential deployment targets for other Disnix services.

Previously, we only had NixOS-based solutions to manage the configuration of containers, which makes using Disnix on other platforms than NixOS painful, as the containers had to be deployed manually. The approach described in this blog post serves as an in-between solution.

In theory, the process in which we deploy containers as services first followed by the "actual" services, could be generalized and extended into a layered service deployment model, with a new tool automating the process and declarative specifications capturing the properties of the layers.

However, I have decided not to implement this new model any time soon for practical reasons -- in nearly all of my experiences with service deployment, I have almost never encountered the need to have more than two layers supported. The only exception I can think of is the deployment of Axis2 web services to an Axis2 container -- the Axis2 container is a Java web application that must be deployed to Apache Tomcat first, which in turn requires the presence of the Apache Tomcat server.

Availability


I have integrated the two container deployment examples into the Java variant of the StaffTracker example.

The new concepts described in this blog post are part of the development version of Disnix and will become available in the next release.

by Sander van der Burg (noreply@blogger.com) at June 11, 2016 03:22 PM

June 08, 2016

Anders Papitto

The NixOS Landscape - Nix, nixpkgs, NixOS

Posted on June 8, 2016
Tags: nixos

I have previously given some of the reasons I find NixOS compelling. However, NixOS is a pretty big project, with a lot of moving parts that don’t map directly to more traditional ways of doing things. The fundamentals are internally pretty consistent, but that doesn’t necessarily help when you’re approaching it for the first time.

Here I’ll give an overview of some of the key components of the ecosystem, and draw the lines between them as clearly as I can.

Nix (the toolchain)

Nix is actually two closely related things that share a name.

First of all, it is a package management toolchain. It allows you to define the sources and dependencies and build instructions of a piece of software. It allows you to build and install software for which such a definition has been written. It allows you to upgrade and downgrade, to share packages with other machines, to define collections of packages, such as an OS userland.
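To give a concrete feel for the toolchain, here are a few illustrative invocations (the attribute nixpkgs.git, the server name and the store path are placeholders; exact attribute paths depend on your channels):

$ nix-env -iA nixpkgs.git    # install a package into your profile
$ nix-env -u                 # upgrade installed packages
$ nix-env --rollback         # downgrade: switch back to the previous profile generation
$ nix-copy-closure --to alice@server /nix/store/<hash>-git-2.9.0   # share a package and its dependencies with another machine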

In terms of dependencies, Nix is actually more lightweight than you might think. It’s not tied to any OS - all you need is a C++ compiler (and at the moment, also a Perl interpreter - but that is being fixed).

Nix has a manual.

Nix (the language)

Nix is also the name of a small programming language which is bundled with and integrated into the toolchain. It’s a pure functional language, and in many cases is used as a glorified JSON. However since it has functions, it can handle just about everything natively that you’d want to do in terms of code reuse and parameterization - there’s no need for a metaprogramming language, or to generate Nix code from another tool, as you might do with a pure data format like JSON. The language features line up pretty well with the use case, but the syntax can be a little weird at first. For example, here’s some roughly equivalent Nix and Python code.

# Nix
foo = { a, b }: c: a + b + c;

# Python
def foo(a_and_b, c):
    return a_and_b.a + a_and_b.b + c

You can play around with the Nix interpreter (compiled to Javascript) online here.

Nix-the-language is what you use to write package descriptions, which can then be fed into the toolchain to do things like build your package or trace its dependencies.

Nixpkgs

Nixpkgs is basically a giant pile of Nix code, all in one github repo. Most of Nixpkgs is package descriptions - for example, Vim and Postgres and Steam. Many are handwritten, although some are automatically generated or automatically updated - this is particularly common for language-specific libraries, like the Python or Elisp or Haskell libraries.

Nixpkgs is primarily maintained for use on Linux and MacOS, where it can be used alongside your existing distro with absolutely no conflicts or interference. Due to the nature of the core Nix toolchain, Nix-installed software will both ignore pollution from your system (for example, if you dump some garbage headers into /usr, your Nix-installed gcc won't even notice) and avoid polluting your system itself, as everything is stored under the single /nix directory.

There’s quite a bit of stuff packaged - you can switch over to using Nixpkgs in place of, say, apt-get and stand a good chance of not noticing anything missing.

Nixpkgs also has a manual.

NixOS

NixOS is a subdirectory of the nixpkgs repo. It contains most of the code which isn’t package descriptions. Instead, NixOS is a full Linux userland, with all the things you’d expect - installation, booting, system and user services, management of kernels and kernel modules, window managers and desktop environments, and other things. It extends the standard Nix benefits of isolation and declarativity and transactionality from just installing software to also managing services and OS configuration.
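For example, a machine's services and settings are declared in a single configuration, typically /etc/nixos/configuration.nix. A small sketch (these are real NixOS options, but the selection is arbitrary):

{ config, pkgs, ... }:

{
  # Declaratively enable a system service
  services.openssh.enable = true;

  # Choose the kernel package set
  boot.kernelPackages = pkgs.linuxPackages_latest;

  # Install packages system-wide
  environment.systemPackages = [ pkgs.vim pkgs.git ];
}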

NixOS is Linux-only - again, this isn’t technically fundamental, but it would be quite a bit of effort to port everything to a different kernel.

NixOS also has a manual. And don’t miss Appendix A, which has a full listing of all the available configuration options for all packaged services.

June 08, 2016 12:00 AM