NixOS Planet

February 11, 2018

Sander van der Burg

Deploying systems with circular dependencies using Disnix


Some time ago, during my PhD thesis defence, one of my committee members asked me how I would deploy systems with Disnix in which services have circular dependencies.

It was an interesting question because Disnix defines dependencies between services (that typically involve network connections) as inter-dependencies that have two properties:

  • They allow services to find services they depend on by providing their connection properties
  • They ensure that any inter-dependency is activated before the service itself, so that no failures will occur because of missing dependencies -- in Disnix, a service is either available or unavailable, but never in a broken state due to missing inter-dependencies at runtime.

In a system with circular dependencies, the ordering property is problematic -- it is impossible to activate one dependency before another without having broken connections between them.

During the defence, I had to admit that I had never deployed such systems with Disnix before, but that there were a couple of possible solutions to cope with such constraints. For example, you can propagate properties of the distribution model directly to a service, as opposed to declaring circular inter-dependencies. Then the ordering requirement is not enforced.

I also explained that systems should not have any hard cyclic requirements on other services, but instead compose their (potentially bidirectional) communication channels at runtime. Furthermore, I explained that circular dependencies are bad from a reuse perspective -- when two services mutually depend on each other, they should ideally be one service.

Although the answer sufficed (i.e. it showed that such deployments are possible), the solution basically relies on unconventional usage of the deployment tool. Recently, as a personal exercise, I have decided to dig up this question again and explore the possibilities of deploying systems with circular dependencies.

Chord: a peer-to-peer distributed hash table


When thinking of an example system that has a circular dependency structure, the first thing that came to mind is Chord: a peer-to-peer distributed hash table (a copy of the research paper written by Stoica et al can be found here). An interesting fact is that I had to implement it many years ago in the lab course of the distributed algorithms course taught by another member of my PhD thesis committee.

A Chord network has circular runtime dependencies because it has a ring structure -- in a network that has more than one node, each node has a successor and a predecessor link, in which no node has the same predecessor or successor and the last successor link refers to the first node:


The Chord nodes (shown in the figure above) constitute a distributed peer-to-peer hash table. In addition to the fact that it can store key and value pairs (all kinds of objects), it also distributes the data over the nodes in the network.

Moreover, its operations are decentralized -- for example, when it is desired to search for an object or to store new objects in the hash table, it is possible to consult any node in the network. The system will redirect the caller to the appropriate node that should host the data.

Various implementations of the Chord protocol exist. The official reference implementation is a filesystem abstraction layer built on top of it. I experimented with the Java-based OpenChord implementation that is capable of storing arbitrary serializable Java objects.

More details about the Chord operations can be found in the research paper.

Deploying a Chord network


One of the challenges I faced during the lab course was that I had to deploy a test Chord network with a small collection of nodes. At that time, I had no proper deployment automation. I ended up writing a bash shell script that spawned a collection of processes in parallel.

Because deployment was complicated, I never tried more complex scenarios than running a small collection of processes on a single machine. Since the lab course did not require anything more than that, I never tried, for example, any real network deployments in which I had to distribute Chord nodes over multiple computer systems. The latter would have introduced even more complexity to the deployment process.

Deploying a Chord network basically works as follows:

  • First, we must deploy an initial node that has no connection to a predecessor or successor node.
  • Then for each additional node, we call the join operation to attach it to the network. As explained earlier, a Chord hash-table is decentralized and as a result, we can consult any node we want in the network for the join process. The join and stabilization procedures decide which predecessor and successor a new node actually gets.

There are various strategies to join additional nodes to the network, but what I ended up doing is using the initial node as a bootstrap node -- all successive nodes simply join the bootstrap node and the network stabilizes to become a ring.

(As a sidenote: you could argue whether this is a good process, since the introduction of a central bootstrap node during the deployment process violates the peer-to-peer constraint, but that is a different story. Obviously, you could also think of other bootstrap strategies but that is beyond the scope of this blog post).

Automating a Chord network deployment with Disnix


To experiment with a Chord network, I have decided to create a simple server process (using the OpenChord API) whose only responsibility is to store data. It can optionally join another node in the network and it has a command-line interface allowing me to conveniently specify the connection parameters.

The deployment strategy using the initial node as a bootstrap node can be easily automated with Disnix. In the Disnix services model, we can define the bootstrap node as follows:


ChordBootstrapNode = rec {
  name = "ChordBootstrapNode";
  pkg = customPkgs.ChordBootstrapNode { inherit port; };
  port = 8001;
  portAssign = "private";
  type = "process";
};

The above service configuration corresponds to a process that binds the service to a provided TCP port.

Each successive node can be defined as a service that has an inter-dependency on the bootstrap node:


ChordNode1 = rec {
  name = "ChordNode1";
  pkg = customPkgs.ChordNode { inherit port; };
  port = 8002;
  portAssign = "private";
  type = "process";
  dependsOn = {
    inherit ChordBootstrapNode;
  };
};

As can be seen in the above Nix expression, the dependsOn attribute specifies that the node has an inter-dependency on the bootstrap node. The inter-dependency declaration provides the connection settings of the bootstrap node to the command-line utility that spawns the service and ensures that the bootstrap node is deployed first.
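
To give an impression of what such a package function could look like, the fragment below sketches a hypothetical ChordNode package that consumes these parameters. It is only a sketch and not the actual implementation -- the openchord-server command, its flags, and the exact attributes exposed by the inter-dependency parameter (such as target.hostname) are assumptions:


{stdenv}:
{port}:
{ChordBootstrapNode}:

stdenv.mkDerivation {
  name = "ChordNode";
  buildCommand = ''
    mkdir -p $out/bin

    # Generate a wrapper script that binds the node to the configured port and
    # joins the network through the bootstrap node's connection settings.
    # The openchord-server command and the inter-dependency attributes are
    # placeholders, not a documented interface.
    cat > $out/bin/wrapper <<EOF
    #! ${stdenv.shell} -e
    exec openchord-server --port ${toString port} \
      --join ${ChordBootstrapNode.target.hostname}:${toString ChordBootstrapNode.port}
    EOF
    chmod +x $out/bin/wrapper
  '';
}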

By providing an infrastructure model containing a number of machines and writing a distribution model that maps the nodes to the machines, such as:


{infrastructure}:

{
  ChordBootstrapNode = [ infrastructure.test1 ];
  ChordNode1 = [ infrastructure.test1 ];
  ChordNode2 = [ infrastructure.test2 ];
  ChordNode3 = [ infrastructure.test2 ];
}

we can deploy a Chord network consisting of 4 nodes distributed over two machines by running:


$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix

This is the resulting deployment architecture of the Chord network that gets deployed:

In the above picture, the light grey colored boxes denote machines, the dark grey colored boxes container environments, the ovals services and the arrows inter-dependency relationships.

By running the OpenChord console, we can join any of our nodes in the network, such as the third node deployed to machine test2:


$ /nix/var/nix/profiles/disnix/default/bin/openchord-console
> joinN -port 9000 -bootstrap test2:8001
Trying to join chord network with boostrap URL ocsocket://test2:8001/
URL of created chord node ocsocket://192.168.56.102:9000/.

We can check the references that the console node has:


> refsN
Node: C1 F0 42 95 , ocsocket://192.168.56.102:9000/
Finger table:
59 E4 86 AC , ocsocket://test2:8001/ (0-159)
Successor List:
59 E4 86 AC , ocsocket://test2:8001/
64 F1 96 B9 , ocsocket://test1:8001/
Predecessor: 9C 51 42 1F , ocsocket://test2:8002/

As may be observed in the output above, our predecessor is node 3 deployed to machine test2 and our successors are node 3 deployed to machine test2 and node 1 deployed to machine test1.

We can also insert and retrieve the data we want:


> insertN -key test -value test
> entriesN
Entries:
key = A9 4A 8F E5 , value = [( key = A9 4A 8F E5 , value = test)]

Defining services with circular dependencies in Disnix


As shown in the previous paragraph, the ring structure of a Chord hash table is constructed at runtime. As a result, Disnix does not need to manage any circular dependencies. Instead, it only has to know the dependencies of the bootstrap phase which are not cyclic at all.

I was also curious whether I could modify Disnix to properly define circular dependencies, without any workarounds such as directly propagating properties from the distribution model. As explained in the introduction, inter-dependencies have two properties, of which the second is problematic: the ordering constraint.

To cope with the problematic ordering property, I have introduced a new property in the services model called connectsTo, allowing users to specify inter-dependencies for which the ordering does not matter. The connectsTo property makes it possible for services to define mutual dependencies on each other.

As an example case, I have extended the Disnix composition examples (a set of trivial examples implementing "Hello world" test cases) with a cyclic example case. In this new sub example, I have created a web application that both contains a server returning the "Hello world!" string and a client displaying the string. The result would be the following screen:


(Does it look cool? :p)

A web application instance is capable of connecting to another web service to obtain the "Hello world!" message to display. We can compose two web application instances that refer to each other to accomplish this.

The corresponding services model looks as follows:


{distribution, invDistribution, system, pkgs}:

let
  customPkgs = import ../top-level/all-packages.nix {
    inherit system pkgs;
  };
in
rec {
  HelloWorldCycle1 = {
    name = "HelloWorldCycle1";
    pkg = customPkgs.HelloWorldCycle;
    connectsTo = {
      # Depends on the other cyclic service
      HelloWorldCycle = HelloWorldCycle2;
    };
    type = "tomcat-webapplication";
  };

  HelloWorldCycle2 = {
    name = "HelloWorldCycle2";
    pkg = customPkgs.HelloWorldCycle;
    connectsTo = {
      # Depends on the other cyclic service
      HelloWorldCycle = HelloWorldCycle1;
    };
    type = "tomcat-webapplication";
  };
}

As may be observed in the above code fragment, the first service has a dependency on the second, while the second also has a dependency on the first. They are allowed to refer to each other because the connectsTo property disregards ordering.

By mapping the services to a network of machines that host Apache Tomcat:


{infrastructure}:

{
  HelloWorldCycle1 = [ infrastructure.test1 ];
  HelloWorldCycle2 = [ infrastructure.test2 ];
}

and deploying the system:


$ disnix-env -s services-cyclic.nix \
-i infrastructure.nix \
-d distribution-cyclic.nix

we end up with a deployment architecture of two services having cyclic dependencies:


To produce the above visualization, I have extended the disnix-visualize tool with support for the connectsTo property that displays inter-dependencies as dashed arrows (as opposed to solid arrows that denote ordinary inter-dependencies).

In addition to the option to specify circular dependencies, the connectsTo property has another interesting use case -- when services have inter-dependencies that may be broken, we can optimize the duration of the upgrade process.

Normally, when a service gets upgraded, all its inter-dependent services will be reactivated. This is an implication of Disnix's strictness -- a service is either available or unavailable, but never broken because of missing inter-dependencies.

However, all these extra reactivations can make the upgrade phase quite expensive. If a link is non-critical and permitted to be down for a short while, then redeployments can be made faster.

Conclusion


In this blog post, I have described two deployment experiments with Disnix involving systems that have circular dependencies -- a Chord-based distributed hash table (that constructs a ring structure at runtime) and a trivial toy example system in which two services have mutual dependencies on each other.

Availability


The newly introduced connectsTo property is part of the development version of Disnix and will become available in the next release.

The composition example and newly created Chord example can be found on my GitHub page.

by Sander van der Burg (noreply@blogger.com) at February 11, 2018 11:31 PM

January 31, 2018

Sander van der Burg

Diagnosing problems and running maintenance tasks in a network with services deployed by Disnix

I have been maintaining a production system with Disnix for quite some time. Although deployment works quite conveniently for me (I may probably be a bit biased, since I created Disnix :-) ), you cannot get around unforeseen incidents and problems, such as:

  • Crashing processes due to bugs or excessive load.
  • Database problems, such as inconsistencies in the data.

Errors in distributed systems are typically much more difficult to debug than single machine system failures. For example, tracing the origins of an error in distributed systems is generally hard -- one service's fault may be caused by a message propagated by another service residing on a different machine in the network.

But even if you know the origins of an error (e.g. you can clearly observe that a web application is crashing or that a database connection is failing), you may face other kinds of challenges:

  • You have to figure out to which machine in the network a service has been deployed.
  • You have to connect to the machine, e.g. through an SSH connection, to run debugging tasks.
  • You have to know the configuration properties of a service to diagnose it -- in Disnix, as explained in earlier blog posts, services can take any form -- they can be web services, but also web applications, databases and processes.

Because of these challenges, diagnosing errors and running maintenance tasks in a system deployed by Disnix is always unnecessarily time-consuming and inconvenient.

To alleviate this burden, I have developed a small tool and extension that establishes remote shell connections with environments providing all relevant configuration properties. Furthermore, the tool gives suggestions to the end-user explaining what kinds of maintenance tasks he could carry out.

The shell activity of Dysnomia


As explained in previous Disnix-related blog posts, Disnix carries out all activities to deploy a service oriented system to a network of machines (i.e. to bring it in a running state), such as building services from source code, distributing their intra-dependency closures to the target machines, and activating or deactivating every service.

For the build and distribution activities, Disnix uses, as its name implies, the Nix package manager because it offers a number of powerful properties, such as strong reproducibility guarantees and atomic upgrades and rollbacks.

For the remaining activities that Nix does not support, e.g. activating or deactivating services, Disnix uses a companion tool called Dysnomia. Because services in a Disnix context could take any form, there is no generic means to activate or deactivate them -- for this reason, Dysnomia provides a plugin system with modules that carry out specific activities for a specific service type.

One of the plugins that Dysnomia provides is the deployment of MySQL databases to a MySQL DBMS server. Dysnomia deployment activities are driven by two kinds of configuration specifications. A component configuration defines the properties of a deployable unit, such as a MySQL database:


create table author
( AUTHOR_ID  INTEGER      NOT NULL,
  FirstName  VARCHAR(255) NOT NULL,
  LastName   VARCHAR(255) NOT NULL,
  PRIMARY KEY(AUTHOR_ID)
);

create table books
( ISBN       VARCHAR(255) NOT NULL,
  Title      VARCHAR(255) NOT NULL,
  AUTHOR_ID  VARCHAR(255) NOT NULL,
  PRIMARY KEY(ISBN),
  FOREIGN KEY(AUTHOR_ID) references author(AUTHOR_ID) on update cascade on delete cascade
);

The above configuration is a MySQL script (~/testdb) that creates the database schema consisting of two tables.

The container configuration captures properties of the environment in which the component should be hosted, which is, in this particular case, a MySQL DBMS server:


type=mysql-database
mysqlUsername=root
mysqlPassword=verysecret

The above container configuration (~/mysql-production) defines the type, stating that the mysql-database plugin must be used, and provides the authentication credentials required to connect to the DBMS server.

The Dysnomia plugin for MySQL implements various kinds of deployment activities for MySQL databases. For example, the activation activity is implemented as follows:


...

case "$1" in
activate)
# Initalize the given schema if the database does not exists
if [ "$(echo "show databases" | @mysql@ --user=$mysqlUsername --password=$mysqlPassword -N | grep -x $componentName)" = "" ]
then
( echo "create database $componentName;"
echo "use $componentName;"

if [ -d $2/mysql-databases ]
then
cat $2/mysql-databases/*.sql
fi
) | @mysql@ $socketArg --user=$mysqlUsername --password=$mysqlPassword -N
fi
markComponentAsActive
;;

...
esac

The above code fragment checks whether a database with the given schema exists and if it does not, it will create it by running the database initialization script provided by the component configuration. As may also be observed, the above activity uses the container properties (such as the authentication credentials) as environment variables.

Dysnomia activities can be executed by invoking the dysnomia command-line tool. For example, the following command will activate the MySQL database in the MySQL database server:


$ dysnomia --operation activate \
--component ~/testdb --container ~/mysql-production

To make the execution of arbitrary tasks more convenient, I have created a new Dysnomia option called shell. The shell operation is basically an activity that does not execute anything, but instead spawns a shell session that provides the container configuration properties as environment variables.

Moreover, the shell activity of a Dysnomia plugin typically displays suggestions for shell commands that the user may want to carry out.

For example, when we run the following command:


$ dysnomia --shell \
--component ~/testdb --container ~/mysql-production

Dysnomia spawns a shell session that shows the following:


This is a shell session that can be used to control the 'staff' MySQL database.

Module specific environment variables:
mysqlUsername Username of the account that has the privileges to administer
the database
mysqlPassword Password of the above account
mysqlSocket Path to the UNIX domain socket that is used to connect to the
server (optional)

Some useful commands:
/nix/store/h0kcf5g2ssyancr9m2i8sr09b3wq2zy0-mariadb-10.1.28/bin/mysql --user=$mysqlUsername --password=$mysqlPassword staff Start a MySQL interactive terminal

General environment variables:
this_dysnomia_module Path to the Dysnomia module
this_component Path to the mutable component
this_container Path to the container configuration file

[dysnomia-shell:~]#

By executing the command-line suggestion shown above in the shell session, we get a MySQL interactive terminal allowing us to execute arbitrary SQL commands. It saves us the burden of looking up all the MySQL configuration properties, such as the authentication credentials and the database name.

The Dysnomia shell feature is heavily inspired by nix-shell, which works in quite a similar way -- it takes the build dependencies of a package build as inputs (which typically manifest themselves as environment variables) and fetches the sources, but it does not execute the package build procedure. Instead, it spawns an interactive shell session allowing the user to execute arbitrary build tasks. This Nix feature is particularly useful for development projects.
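
For example, a minimal shell.nix file that could be used with nix-shell looks roughly as follows (the package selection is just an illustration):


with import <nixpkgs> {};

stdenv.mkDerivation {
  name = "dev-environment";

  # Dependencies that become available in the interactive shell session
  buildInputs = [ curl git ];
}

Running nix-shell in the directory containing this file spawns an interactive session in which the above build inputs can be used, without carrying out an actual build.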

Diagnosing services with Disnix


In addition to extending Dysnomia with the shell feature, I have also extended Disnix to make this feature available in a distributed context.

The following command can be executed to spawn a shell for a particular service of the ridiculous staff tracker example (that happens to be a MySQL database):


$ disnix-diagnose -S staff
[test2]: Connecting to service: /nix/store/yazjd3hcb9ds160cq03z66y5crbxiwq0-staff deployed to container: mysql-database
This is a shell session that can be used to control the 'staff' MySQL database.

Module specific environment variables:
mysqlUsername Username of the account that has the privileges to administer
the database
mysqlPassword Password of the above account
mysqlSocket Path to the UNIX domain socket that is used to connect to the
server (optional)

Some useful commands:
/nix/store/h0kcf5g2ssyancr9m2i8sr09b3wq2zy0-mariadb-10.1.28/bin/mysql --user=$mysqlUsername --password=$mysqlPassword staff Start a MySQL interactive terminal

General environment variables:
this_dysnomia_module Path to the Dysnomia module
this_component Path to the mutable component
this_container Path to the container configuration file

[dysnomia-shell:~]#

The above command-line instruction will look up the location of the staff database in the configuration of the system that is currently deployed, connect to it (typically through SSH) and spawn a Dysnomia shell for the given service type.

In addition to an interactive shell, you can also directly run shell commands. For example, the following command will query all the staff records:


$ disnix-diagnose -S staff \
  --command 'echo "select * from staff" | mysql --user=$mysqlUsername --password=$mysqlPassword staff'

In most cases, only one instance of a service exists, but Disnix can also deploy redundant instances of the same service. For example, we may want to deploy two redundant instances of the web application front end in the distribution.nix configuration file:


stafftracker = [ infrastructure.test1 infrastructure.test2 ];

When trying to spawn a Dysnomia shell, the tool returns an error because it does not know which instance to connect to:


$ disnix-diagnose -S stafftracker
Multiple mappings found! Please specify a --target and, optionally, a
--container parameter! Alternatively, you can execute commands for all possible
service mappings by providing a --command parameter.

This service has been mapped to:

container: apache-webapplication, target: test1
container: apache-webapplication, target: test2

In this case, we must refine our query with a --target parameter. For example, the following command connects to the web front-end on the test1 machine:


$ disnix-diagnose -S stafftracker --target test1

It is still possible to execute remote shell commands for redundantly deployed services. For example, the following command gets executed twice, because we have two instances deployed:


$ disnix-diagnose -S stafftracker \
--command 'echo I will see this message two times!'

In some cases, you may want to execute other kinds of maintenance tasks or you simply want to know where a particular service resides. This can be done by running the following command:


$ disnix-diagnose -S stafftracker --show-mappings
This service has been mapped to:

container: apache-webapplication, target: test1
container: apache-webapplication, target: test2

Conclusion


In this blog post, I have described a new feature of Dysnomia and Disnix that spawns interactive shell sessions making problem solving and maintenance tasks more convenient.

disnix-diagnose and the shell extension are part of the development versions of Disnix and Dysnomia and will become available in the next release.

by Sander van der Burg (noreply@blogger.com) at January 31, 2018 10:55 PM

January 08, 2018

Sander van der Burg

Syntax highlighting Nix expressions in mcedit

The year 2017 has passed and 2018 has now started. For quite a few people, this is a good moment for reflection (as I have done in my previous blog post) and to think about new year's resolutions. New year's resolutions are typically about adopting good new habits and rejecting old bad ones.

Orthodox file managers



One of my unconventional habits is that I like orthodox file managers and that I extensively use them. Orthodox file managers have a number of interesting properties:

  • They typically display textual lists of files, as opposed to icons or thumbnails.
  • They typically have two panels for displaying files: one source and one destination panel.
  • They may also have a third panel (typically placed underneath the source and destination panels) that serves as a command-line prompt.

The first orthodox file manager I ever used was DirectoryOpus on the Commodore Amiga. For nearly all operating systems and desktop environments that I have touched ever since, I have been using some kind of orthodox file manager, such as:


Over the years, I have received many questions from various kinds of people -- they typically ask me what is so appealing about using such a "weird program" and why I have never considered switching to a more "traditional way" of working, because "that would be more efficient".

Aside from the fact that it may probably be mostly inertia, my motivating factors are the following:

  • Lists of files allow me to see more relevant and interesting details. In many traditional file managers, much of the screen space is wasted by icons and the spacing between them. Furthermore, traditional file managers typically hide properties of files that I want to know about, such as a file's size or modification timestamp.
  • Some file operations involve a source and destination, such as copying or moving files. In an orthodox file manager, these operations can be executed much more intuitively IMO because there is always a source and destination panel present. When I am using a traditional file manager, I typically have to interrupt my workflow to open a second destination window, and use it to browse to my target location.
  • All the orthodox file managers I have mentioned, implement virtual file system support allowing me to browse compressed archives and remote network locations as if they were directories.

    Nowadays, VFS support is not exclusive to orthodox file managers anymore, but they existed in orthodox file managers much longer.

    Moreover, I consider the VFS properties of orthodox file managers to be much more powerful. For example, the Windows file explorer can browse Zip archives, but Total Commander also has first class support for many more kinds of archives, such as RAR, ACE, LhA, 7-zip and tarballs, and can be easily extended to support many other kinds of file systems by an add-on system.
  • They have very powerful search properties. For example, searching for a collection of files having certain kinds of text patterns can be done quite conveniently.

    As with VFS support, this feature is not exclusive to orthodox file managers, but I have noticed that their search functions are still considerably more powerful than most traditional file managers.

From all the orthodox file managers listed above, Midnight Commander is the one I have been using the longest -- it was one of the first programs I used when I started using Linux (in 1999) and I have been using it ever since.

Midnight Commander also includes a text editor named mcedit that integrates nicely with the search function. Although I have experience with half a dozen editors (such as vim and various IDEs, such as Eclipse and Netbeans), I have been using mcedit mostly for editing configuration files, shell scripts and simple programs.

Syntax highlighting in mcedit


Earlier in the introduction I mentioned "new year's resolutions", which may suggest that I intend to quit using orthodox file managers and an unconventional editor such as mcedit. Actually, this is not something I am planning to do :-).

In addition to Midnight Commander and mcedit, I have also been using another unconventional program for quite some time, namely the Nix package manager, since late 2007.

What I noticed is that, despite being primitive, mcedit has reasonable syntax highlighting support for a variety of programming languages. Unfortunately, what I still miss is support for the Nix expression language -- the DSL that is used to specify package builds and system configurations.

For quite some time, editing Nix expressions was a primitive process for me. To improve my unconventional way of working a bit, I have decided to address this shortcoming in my Christmas break by creating a Nix syntax configuration file for mcedit.

Implementing a syntax configuration for the Nix expression language


mcedit provides syntax highlighting (the format is described in the manual page) for a number of programming languages. The syntax highlighting configurations seem to follow similar conventions, probably because of the fact that programming languages influence each other a lot.

As with many programming languages, the Nix expression language has its own influences as well, such as Haskell, C, bash, JavaScript (more specifically: the JSON subset) and Perl.

I have decided to adopt similar syntax highlighting conventions in the Nix expression syntax configuration. I started by examining Nix's lexer module (src/libexpr/lexer.l):

  • First, I took the keywords and operators, and configured the syntax highlighter to color them yellow. Yellow keywords is a convention that other syntax highlighting configurations also seem to follow.
  • Then I implemented support for single line and multi-line comments. The context directive turned out to be very helpful -- it makes it possible to color all characters between a start and stop token. Comments in mcedit are typically brown.
  • The next step were the numbers. Unfortunately, the syntax highlighter does not have full support for regular expressions. For example, you cannot specify character ranges, such as [0-9]+. Instead you must enumerate all characters one by one:

    keyword whole \[0123456789\]

    Floating point numbers were a bit trickier to support, but fortunately I could steal them from the JavaScript syntax highlighter, since the formatting Nix uses is exactly the same.
  • Strings were also relatively simple to implement (with the exception of anti-quotations) by using the context directive. I have configured the syntax highlighter to color them green, similar to other programming languages.
  • The Nix expression language also supports objects of the URL or path type. Since there is no other language that I am aware of that has a similar property, I have decided to color them white, with the exception of system paths -- system paths look very similar to the C preprocessor's #include path arguments, so I have decided to color them red, similar to the C syntax highlighter.

    To properly support paths, I implemented an approximation of the regular expression used in Nix's lexer. Without full regular expression support, it is extremely difficult to make a direct translation, but for all my use cases it seems to work fine. A small illustration of these literal types is shown right after this list.
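
The following expression is a small, self-contained illustration of the URL, path and system path literals discussed above (the attribute names are arbitrary):


{
  homepage = https://nixos.org; # URL literal
  src = ./src;                  # relative path
  nixpkgs = <nixpkgs>;          # system path, resembling a C #include path
}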

After configuring the above properties, I noticed that there were still some bits missing. The next step was opening the parser configuration (src/libexpr/parser.y) and look for any missing characters.

I discovered that there were still separators that I needed to add (e.g. parentheses, brackets, semicolons, etc.). I have configured the syntax highlighter to color them bright cyan, with the exception of semicolons -- I colored them purple, similar to the C and JavaScript syntax highlighters.

I also added syntax highlighting for the builtin functions (e.g. derivation, map and toString) so that they appear in cyan. This convention is similar to bash's syntax highlighting.

The implementation process of the Nix syntax configuration was generally straightforward, except for one thing -- anti-quotations. Because we only have a primitive lexer and no parser, it is impossible to have a configuration that covers all possibilities. For example, anti-quotations in strings that embed strings cannot be properly supported. I ended up with an implementation that only works for simple cases (e.g. a reference to an identifier or a file).

Results


The syntax highlighter works quite well for the majority of expressions in the Nix packages collection. For example, the expression for the Disnix package looks as follows:


The top-level expression that contains the package compositions looks as follows:


Also, most Hydra release.nix configurations seem to work well, such as the one used for node2nix:


Availability


The Nix syntax configuration can be obtained from my GitHub page. It can be used by installing it in a user's personal configuration directory, or by deploying a patched version of Midnight Commander. More details can be found in the README.

by Sander van der Burg (noreply@blogger.com) at January 08, 2018 10:44 PM

December 19, 2017

Sander van der Burg

Bypassing NPM's content addressable cache in Nix deployments and generating expressions from lock files

Roughly half a year ago, Node.js version 8 was released, which also includes a major NPM package manager update (version 5). NPM version 5 has a number of substantial changes over the previous version, such as:

  • It uses package lock files that pinpoint the resolved versions of all dependencies and transitive dependencies. When a project with a bundled package-lock.json file is deployed, NPM will use the pinpointed versions of the packages that are in the lock file making it possible to exactly reproduce a deployment elsewhere. When a project without a lock file is deployed for the first time, NPM will generate a lock file.
  • It has a content-addressable cache that optimizes package retrieval processes and allows fully offline package installations.
  • It uses SHA-512 hashing (as opposed to the significantly weakened SHA-1), for packages published in the NPM registry.

Although these features offer significant benefits over previous versions -- e.g. NPM deployments are now much faster, more secure and more reliable -- they also come with a big drawback: they break the integration with the Nix package manager in node2nix. Solving these problems was much harder than I initially anticipated.

In this blog post, I will explain how I have adjusted the generation procedure to cope with NPM's new conflicting features. Moreover, I have extended node2nix with the ability to generate Nix expressions from package-lock.json files.

Lock files


One of the major new features in NPM 5.0 is the lock file (the idea itself is not so new, since NPM-inspired solutions such as yarn and the PHP-based composer have already supported them for quite some time).

A major drawback of NPM's dependency management is that version specifiers are nominal. They can refer to specific versions of packages in the NPM registry, but also to version ranges, or external artifacts such as Git repositories. The latter category of version specifiers affect reproducibility -- for example, the version range specifier >= 1.0.0 may refer to version 1.0.0 today and to version 1.0.1 tomorrow making it extremely hard to reproduce a deployment elsewhere.

In a development project, it is still possible to control the versions of dependencies by using a package.json configuration that only refers to exact versions. However, for transitive dependencies that may still have loose version specifiers there is only very little control.

To solve this reproducibility problem, a package-lock.json file can be used -- a package lock file pinpoints the resolved versions of all dependencies and transitive dependencies making it possible to reproduce the exact same deployment elsewhere.

For example, for the NiJS package with the following package.json configuration:


{
  "name": "nijs",
  "version": "0.0.25",
  "description": "An internal DSL for the Nix package manager in JavaScript",
  "bin": {
    "nijs-build": "./bin/nijs-build.js",
    "nijs-execute": "./bin/nijs-execute.js"
  },
  "main": "./lib/nijs",
  "dependencies": {
    "optparse": ">= 1.0.3",
    "slasp": "0.0.4"
  },
  "devDependencies": {
    "jsdoc": "*"
  }
}

NPM may produce the following partial package-lock.json file:


{
  "name": "nijs",
  "version": "0.0.25",
  "lockfileVersion": 1,
  "requires": true,
  "dependencies": {
    "optparse": {
      "version": "1.0.5",
      "resolved": "https://registry.npmjs.org/optparse/-/optparse-1.0.5.tgz",
      "integrity": "sha1-dedallBmEescZbqJAY/wipgeLBY="
    },
    "requizzle": {
      "version": "0.2.1",
      "resolved": "https://registry.npmjs.org/requizzle/-/requizzle-0.2.1.tgz",
      "integrity": "sha1-aUPDUwxNmn5G8c3dUcFY/GcM294=",
      "dev": true,
      "requires": {
        "underscore": "1.6.0"
      },
      "dependencies": {
        "underscore": {
          "version": "1.6.0",
          "resolved": "https://registry.npmjs.org/underscore/-/underscore-1.6.0.tgz",
          "integrity": "sha1-izixDKze9jM3uLJOT/htRa6lKag=",
          "dev": true
        }
      }
    },
    ...
}

The above lock file pinpoints all dependencies and development dependencies including transitive dependencies to exact versions, including the locations where they can be obtained from and integrity hash codes that can be used to validate them.

The lock file can also be used to derive the entire structure of the node_modules/ folder in which all dependencies are stored. The top level dependencies property captures all packages that reside in the project's node_modules/ folder. The dependencies property of each dependency captures all packages that reside in a dependency's node_modules/ folder.

If NPM 5.0 is used and no package-lock.json is present in a project, it will automatically generate one.

Substituting dependencies


As mentioned in an earlier blog post, the most important technique to make Nix-NPM integration work is by substituting NPM's dependency management activities that conflict with Nix's dependency management -- Nix is much more strict with handling dependencies (e.g. it uses hash codes derived from the build inputs to identify a package as opposed to a name and version number).

Furthermore, in Nix build environments network access is restricted to prevent unknown artifacts from influencing the outcome of a build. Only so-called fixed output derivations, whose output hashes should be known in advance (so that Nix can verify their integrity), are allowed to obtain artifacts from external sources.
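
For example, the optparse tarball pinpointed in the lock file shown earlier could be fetched with a fixed output derivation roughly as follows (an illustration, not node2nix's actual generated code; the SHA-1 hash is the hexadecimal counterpart of the integrity hash in the lock file):


with import <nixpkgs> {};

fetchurl {
  url = "https://registry.npmjs.org/optparse/-/optparse-1.0.5.tgz";
  sha1 = "75e75a96506611eb1c65ba89018ff08a981e2c16";
}

Because the output hash is known in advance, Nix permits this derivation to download from the network and verifies the downloaded artifact afterwards.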

To substitute NPM's dependency management, populating the node_modules/ folder ourselves with all required dependencies and substituting certain version specifiers, such as Git URLs, used to suffice. Unfortunately, with the newest NPM this substitution process no longer works. When running the following command in a Nix builder environment:


$ npm --offline install ...

The NPM package manager is forced to work in offline mode consulting its content-addressable cache for the retrieval of external artifacts. If NPM needs to consult an external resource, it throws an error.

Despite the fact that all dependencies are present in the node_modules/ folder, deployment fails with the following error message:


npm ERR! code ENOTCACHED
npm ERR! request to https://registry.npmjs.org/optparse failed: cache mode is 'only-if-cached' but no cached response available.

At first sight, the error message suggests that NPM always requires the dependencies to reside in the content-addressable cache to prevent it from downloading it from external sites. However, when we use NPM outside a Nix builder environment, wipe the cache, and perform an offline installation, it does seem to work properly:


$ npm install
$ rm -rf ~/.npm/_cacache
$ npm --offline install

Further experimentation reveals that NPM augments the package.json configuration files of all dependencies with additional metadata fields that are prefixed by an underscore (_):


{
  "_from": "optparse@>= 1.0.3",
  "_id": "optparse@1.0.5",
  "_inBundle": false,
  "_integrity": "sha1-dedallBmEescZbqJAY/wipgeLBY=",
  "_location": "/optparse",
  "_phantomChildren": {},
  "_requested": {
    "type": "range",
    "registry": true,
    "raw": "optparse@>= 1.0.3",
    "name": "optparse",
    "escapedName": "optparse",
    "rawSpec": ">= 1.0.3",
    "saveSpec": null,
    "fetchSpec": ">= 1.0.3"
  },
  "_requiredBy": [
    "/"
  ],
  "_resolved": "https://registry.npmjs.org/optparse/-/optparse-1.0.5.tgz",
  "_shasum": "75e75a96506611eb1c65ba89018ff08a981e2c16",
  "_spec": "optparse@>= 1.0.3",
  "_where": "/home/sander/teststuff/nijs",
  "name": "optparse",
  "version": "1.0.5",
  ...
}

It turns out that when the _integrity property in a package.json configuration matches the integrity field of the dependency in the lock file, NPM will not attempt to reinstall it.

To summarize, the problem can be solved in Nix builder environments by running a script that augments the package.json configuration files with _integrity fields containing the values from the package-lock.json file.
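
A minimal sketch of such an augmentation step is shown below. It is purely illustrative and not node2nix's actual implementation -- the use of jq (taken from Nixpkgs), the hard-coded dependency name and the surrounding build phase are all assumptions:


# Hypothetical build phase fragment: copy the integrity field of the optparse
# dependency from the lock file into its package.json
postPatch = ''
  integrity=$(${jq}/bin/jq -r '.dependencies.optparse.integrity' package-lock.json)

  ${jq}/bin/jq --arg integrity "$integrity" '. + { "_integrity": $integrity }' \
    node_modules/optparse/package.json > package.json.tmp
  mv package.json.tmp node_modules/optparse/package.json
'';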

For Git repository dependency specifiers, there seems to be an additional requirement -- it also seems to require the _resolved field to be set to the URL of the repository.

Reconstructing package lock files


The fact that we have discovered how to bypass the cache in a Nix builder environment makes it possible to fix the integration with the latest NPM. However, one of the limitations of this approach is that it only works for projects that have a package-lock.json file included.

Since lock files are still a relatively new concept, many NPM projects (in particular older projects that are not frequently updated) may not have a lock file included. As a result, their deployments will still fail.

Fortunately, we can reconstruct a minimal lock file from the project's package.json configuration and compose dependencies objects by traversing the package.json configurations inside the node_modules/ directory hierarchy.

The only attributes that cannot be immediately derived are the integrity fields containing hashes that are used for validation. It seems that we can bypass the integrity check by providing a dummy hash, such as:


integrity: "sha1-000000000000000000000000000=",

NPM does not seem to object when it encounters these dummy hashes allowing us to deploy projects with a reconstructed package-lock.json file. The solution is a very ugly hack, but it seems to work.

Generating Nix expressions from lock files


As explained earlier, lock files pinpoint the exact versions of all dependencies and transitive dependencies and describe the structure of the entire dependency graph.

Instead of simulating NPM's dependency resolution algorithm, we can also use the data provided by the lock files to generate Nix expressions. Lock files appear to contain most of the data we need -- the URLs/locations of the external artifacts and integrity hashes that we can use for validation.

Using lock files for generation offers the following advantages:

  • We no longer need to simulate NPM's dependency resolution algorithm. Despite my best efforts and fairly good results, it is hard to truly make it 100% identical to NPM's. When using a lock file, the dependency graph is already given, making deployment results much more accurate.
  • We no longer need to consult external resources to resolve versions and compute hashes making the generation process much faster. The only exception seems to be Git repositories -- Nix needs to know the output hash of the clone whereas for NPM the revision hash suffices. When we encounter a Git dependency, we still need to download it and compute the output hash.

Another minor technical challenge is the integrity hashes -- in NPM lock files integrity hashes are in base-64 notation, whereas Nix uses hexadecimal notation or its own custom base-32 notation. We need to convert the NPM integrity hashes to a notation that Nix understands.

Unfortunately, lock files can only be used in development projects. It appears that packages that are installed directly from the NPM registry, e.g. end-user packages that are installed globally through npm install -g, never include a package lock file. (It even seems that the NPM registry blacklists lock files when publishing a package in the registry).

For this reason, we still need to keep our own implementation of the dependency resolution algorithm.

Usage


By adding a script that augments the dependencies' package.json configuration files with _integrity fields and by optionally reconstructing a package-lock.json file, NPM integration with Nix has been restored.

Using the new NPM 5.x features is straightforward. The following command can be used to generate Nix expressions for a development project with a lock file:


$ node2nix -8 -l package-lock.json

The above command will directly generate Nix expressions from the package lock file, resulting in a much faster generation process.

When a development project does not ship with a lock file, you can use the following command-line instruction:


$ node2nix -8

The generator will use its own implementation of NPM's dependency resolution algorithm. When deploying the package, the builder will reconstruct a dummy lock file to allow the deployment to succeed.

In addition to development projects, it is also possible to install end-user software, by providing a JSON file (e.g. pkgs.json) that defines an array of dependency specifiers:


[
"nijs"
, { "node2nix": "1.5.0" }
]

A Node.js 8 compatible expression can be generated as follows:


$ node2nix -8 -i pkgs.json

Discussion


The approach described in this blog post is not the first attempt to fix NPM 5.x integration. In my first attempt, I tried populating NPM's content-addressable cache in the Nix builder environment with artifacts that were obtained by the Nix package manager and forcing NPM to work in offline mode.

NPM exposes its download and cache-related functionality as a set of reusable APIs. For downloading packages from the NPM registry, pacote can be used. For downloading external artifacts through the HTTP protocol make-fetch-happen can be used. Both APIs are built on top of the content-addressable cache that can be controlled through the lower-level cacache API.

The real difficulty is that neither the high-level NPM APIs nor the npm cache command-line instruction work with local directories or local files -- they will only add artifacts to the cache if they come from a remote location. I have partially built my own API on top of cacache to populate the NPM cache with locally stored artifacts pretending that they were fetched from a remote location.

Although I had some basic functionality supported, it turned out to be much more complicated and time consuming to get all functionality implemented.

Furthermore, the NPM authors never promised that these APIs are stable, so the implementation may break at some point in time. As a result, I have decided to look for another approach.

Availability


I just released node2nix version 1.5.0 with NPM 5.x support. It can be obtained from the NPM registry, GitHub, or directly from the Nixpkgs repository.

by Sander van der Burg (noreply@blogger.com) at December 19, 2017 09:26 PM

December 08, 2017

Sander van der Burg

Controlling a Hydra server from a Node.js application

In a number of earlier blog posts, I have elaborated about the development of custom configuration management solutions that use Nix to carry out a variety (or all) of its deployment tasks.

For example, an interesting consideration is that when a different programming language than Nix's expression language is used, an internal DSL can be used to make integration with the Nix expression language more convenient and safe.

Another problem that might surface when developing a custom solution is the scale at which builds are carried out. When it is desired to carry out builds on a large scale, additional concerns must be addressed beyond the solutions that the Nix package manager provides, such as managing a build queue, timing out builds that appear to be stuck, and sending notifications in case of a success or failure.

In such cases, it may be very tempting to address these concerns in your own custom build solution. However, some of these concerns are quite difficult to implement, such as build queue management.

There is already a Nix project that solves many of these problems, namely: Hydra, the Nix-based continuous integration service. One of Hydra's lesser known features is that it also exposes many of its operations through a REST API. Apart from a very small section in the Hydra manual, the API is not very well documented and -- as a result -- a bit cumbersome to use.

In this blog post, I will describe a recently developed Node.js package that exposes most of Hydra's functionality with an API that is documented and convenient to use. It can be used to conveniently integrate a Node.js application with Hydra's build management services. To demonstrate its usefulness, I have developed a simple command-line utility that can be used to remotely control a Hydra instance.

Finding relevant API calls


The Hydra API is not extensively documented. The manual has a small section titled: "Using the external API" that demonstrates some basic usage scenarios, such as querying data in JSON format.

Querying data is basically a nearly identical operation to opening the Hydra web front-end in a web browser. The only difference is that the GET request should include a header field stating that the output format should be displayed in JSON format.

For example, the following request fetches an overview of projects in JSON format (as opposed to HTML which is the default):


$ curl -H "Accept: application/json" https://hydra.nixos.org

In addition to querying data in JSON format, Hydra supports many additional REST operations, such as creating, updating or deleting projects and jobsets. Unfortunately, these operations are not documented anywhere in the manual.

By analyzing the Hydra source code and running the following script I could (sort of) semi-automatically derive the REST operations that are interesting to invoke:


$ cd hydra/src/lib/Hydra/Controller
$ find . -type f | while read i
do
    echo "# $i"
    grep -E '^sub [a-zA-Z0-9]+[ ]?\:' $i
    echo
done

Running the script shows me the following (partial) output:


# ./Project.pm
sub projectChain :Chained('/') :PathPart('project') :CaptureArgs(1) {
sub project :Chained('projectChain') :PathPart('') :Args(0) :ActionClass('REST::ForBrowsers') { }
sub edit : Chained('projectChain') PathPart Args(0) {
sub create : Path('/create-project') {
...

The web front-end component of Hydra uses Catalyst, a Perl-based MVC framework. One of the conventions that Catalyst uses is that every request invokes an annotated Perl subroutine.

The above script scans the controller modules, and shows for each module the annotated subroutines that may be of interest. In particular, the subroutines with a :ActionClass('REST::ForBrowsers') annotation are relevant -- they encapsulate create, read, update and delete operations for various kinds of data.

For example, the project subroutine shown above supports the following REST operations:


sub project_GET {
    ...
}

sub project_PUT {
    ...
}

sub project_DELETE {
    ...
}

The above operations will query the properties of a project (GET), create a new or update an existing project (PUT), or delete a project (DELETE).

With the manual, the extracted information, and a bit of source code reading (which is unavoidable, since many details, such as formal function parameters, are missing), I was able to develop a client API supporting a substantial number of Hydra's features.

API usage


Using the client API is relatively straightforward. The general idea is to instantiate the HydraConnector prototype to connect to a Hydra server, such as a local test instance:


var HydraConnector = require('hydra-connector').HydraConnector;

var hydraConnector = new HydraConnector("http://localhost");

and then invoke any of its supported operations:


hydraConnector.queryProjects(function(err, projects) {
    if(err) {
        console.log("Some error occurred: "+err);
    } else {
        for(var i = 0; i < projects.length; i++) {
            var project = projects[i];
            console.log("Project: "+project.name);
        }
    }
});

The above code fragment fetches all projects and displays their names.

Write operations require a user to be logged in with the appropriate permissions. By invoking the login() method, we can authenticate ourselves:


hydraConnector.login("admin", "myverysecretpassword", function(err) {
    if(err) {
        console.log("Some error occurred: "+err);
    } else {
        console.log("Login succeeded!");
    }
});

Besides logging in, the client API also implements a logout() operation to relinquish write operation rights.

As explained in an earlier blog post, write operations require user authentication but all data is publicly readable. When it is desired to restrict read access, Hydra can be placed behind a reverse proxy that requires HTTP basic authentication.
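
For example, a NixOS configuration for such a reverse proxy could look roughly like this (a hypothetical sketch -- the virtual host name, the htpasswd file location and Hydra's listening port are assumptions):


services.nginx = {
  enable = true;

  virtualHosts."hydra.example.org".locations."/" = {
    # Forward all requests to the Hydra web front-end
    proxyPass = "http://localhost:3000";

    # Require HTTP basic authentication for every request
    extraConfig = ''
      auth_basic "Hydra";
      auth_basic_user_file /etc/nginx/hydra.htpasswd;
    '';
  };
};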

The client API also supports HTTP basic authentication to support this usage scenario:


var hydraConnector = new HydraConnector("http://localhost",
"sander",
"12345"); // HTTP basic credentials

Using the command-line client


To demonstrate the usefulness of the API, I have created a utility that serves as a command-line equivalent of Hydra's web front-end. The following command shows an overview of projects:


$ hydra-connect --url https://hydra.nixos.org --projects


As may be observed in the screenshot above, the command-line utility provides suggestions for additional command-line instructions that could be relevant, such as querying more detailed information or modifying data.

The following command shows the properties of an individual project:


$ hydra-connect --url https://hydra.nixos.org --project disnix


By default, the command-line utility will show a somewhat readable textual representation of the data. It is also possible to display the "raw" JSON data of any request for debugging purposes:


$ hydra-connect --url https://hydra.nixos.org --project disnix --json


We can also use the command-line utility to create or edit data, such as a project:


$ hydra-connect --url https://hydra.nixos.org --login
$ export HYDRA_SESSION=...
$ hydra-connect --url https://hydra.nixos.org --project disnix --modify


The application will show command prompts asking the user to provide all the relevant project properties.

When changes have been triggered, the active builds in the queue can be inspected by running:


$ hydra-connect --url https://hydra.nixos.org --status


Many other operations are supported. For example, you can inspect the properties of an individual build:


$ hydra-connect --url https://hydra.nixos.org --build 65100054

And request various kinds of build related artifacts, such as the raw build log of a build:


$ hydra-connect --url https://hydra.nixos.org --build 65100054 --raw-log

or download one of its build products:


$ hydra-connect --url https://hydra.nixos.org --build 65100054 \
--build-product 4 > /home/sander/Download/disnix-0.7.2.tar.gz

Conclusion


In this blog post, I have shown the node-hydra-connector package that can be used to remotely control a Hydra server from a Node.js application. The package can be obtained from the NPM registry and the corresponding GitHub repository.

by Sander van der Burg (noreply@blogger.com) at December 08, 2017 09:32 PM

November 28, 2017

Joachim Schiele

nixcloud-webservices-tests

motivation

many programming languages have testing frameworks, and for some they are actually used. in nixpkgs i see them a lot, most often in libraries written in perl, python and go.

this posting is about how we adapted this concept into nix. AFAIK this hasn't been done before so it might be worth sharing.

how testing works in perl/python/go

the tests are executed during the build of the software and most often implemented in libraries.

sometimes there is doCheck = false; as the tests fail for some reason. often these tests are impure and the build environment can't perform the required actions -- for instance, visiting some remote website during build time, which is not possible in a nix-build phase.

testing systems i'm referring to:

python example

av = buildPythonPackage rec {
  name = "av-${version}";
  version = "0.2.4";

  src = pkgs.fetchurl {
    url = "mirror://pypi/a/av/${name}.tar.gz";
    sha256 = "bdc7e2e213cb9041d9c5c0497e6f8c47e84f89f1f2673a46d891cca0fb0d19a0";
  };

  buildInputs
    =  (with self; [ nose pillow numpy ])
    ++ (with pkgs; [ ffmpeg_2 git libav pkgconfig ]);

  # Because of https://github.com/mikeboers/PyAV/issues/152
  doCheck = false;

  meta = {
    description = "Pythonic bindings for FFmpeg/Libav";
    homepage = https://github.com/mikeboers/PyAV/;
    license = licenses.bsd2;
  };
};

example taken from https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/python-packages.nix.

perl example

ack = buildPerlPackage rec {
  name = "ack-2.16";
  src = fetchurl {
    url = "mirror://cpan/authors/id/P/PE/PETDANCE/${name}.tar.gz";
    sha256 = "0ifbmbfvagfi76i7vjpggs2hrbqqisd14f5zizan6cbdn8dl5z2g";
  };
  outputs = ["out" "man"];
  # use gnused so that the preCheck command passes
  buildInputs = stdenv.lib.optional stdenv.isDarwin gnused;
  propagatedBuildInputs = [ FileNext ];
  meta = with stdenv.lib; {
    description = "A grep-like tool tailored to working with large trees of source code";
    homepage    = http://betterthangrep.com/;
    license     = licenses.artistic2;
    maintainers = with maintainers; [ lovek323 ];
    platforms   = platforms.unix;
  };
  # tests fails on nixos and hydra because of different purity issues
  doCheck = false;
};

example taken from https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/perl-packages.nix

go example

terraform_0_9 = generic {
  version = "0.9.11";
  sha256 = "045zcpd4g9c52ynhgh3213p422ahds63mzhmd2iwcmj88g8i1w6x";
  # checks are failing again
  doCheck = false;
};

example taken from https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/networking/cluster/terraform/default.nix.

nixos testing

first, let's have a look at how testing in nixos is done traditionally. these tests use KVM, spawn one or several virtual machines which interact with each other over the network. they are called explicitly: nix-build -A leaps release.nix.

the leaps test:

import ./make-test.nix ({ pkgs,  ... }:

{
  name = "leaps";
  meta = with pkgs.stdenv.lib.maintainers; {
    maintainers = [ qknight ];
  };

  nodes =
    { 
      client = { };

      server =
        { services.leaps = {
            enable = true;
            port = 6666;
            path = "/leaps/";
          };
          networking.firewall.enable = false;
        };
    };

  testScript =
    ''
      startAll;
      $server->waitForOpenPort(6666);
      $client->waitForUnit("network.target");
      $client->succeed("${pkgs.curl}/bin/curl http://server:6666/leaps/ | grep -i 'leaps'");
    '';
})

nixos tests are described nicely in the manual https://nixos.org/nixos/manual/index.html#sec-nixos-tests. the source of the tests is at https://github.com/NixOS/nixpkgs/blob/master/nixos/release.nix#L217.

nixcloud testing

at nixcloud we also implement tests as we have them in nixos (described above). some tests can be found here: https://github.com/nixcloud/nixcloud-webservices/tree/master/tests but, and that is why this posting was written, we also have a different kind of tests, which are similar to those in perl, python and go explained previously.

test usage

we basically extended the nixos module system:

{ config, pkgs, lib, ... } @ args:

{
  options = {
    nixcloud.reverse-proxy = {
      enable = lib.mkEnableOption "reverse-proxy";
      ...
    };
  };
  config = { 
    ... 
    nixcloud.tests.wanted = [ ./test.nix ];
  };
}

see complete usage implementation

note: the nixcloud.reverse-proxy module is similar to nixos modules such as services.openssh

if the nixcloud.reverse-proxy module is used and one does a nixos-rebuild switch, it evaluates the ./test.nix during build.
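
to give an impression, such a test could look roughly like the leaps test shown earlier. the following is only a sketch: the node configuration, the port and the curl check are assumptions, and the real test.nix in nixcloud-webservices may look different.

import ./make-test.nix ({ pkgs, ... }:

{
  name = "reverse-proxy";

  nodes = {
    proxy = {
      nixcloud.reverse-proxy.enable = true;
      networking.firewall.enable = false;
    };
  };

  testScript = ''
    startAll;
    # port 80 is an assumption; the proxy might listen elsewhere
    $proxy->waitForOpenPort(80);
    $proxy->succeed("${pkgs.curl}/bin/curl http://localhost/ >&2");
  '';
})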

implementation

and the implementation of the test is here https://github.com/nixcloud/nixcloud-webservices/blob/12e51b6bd073acdd78807d0dbb23458267538b48/modules/core/testing.nix

advantages

  • tests are located near the implementation of the service and not in a completely different directory

  • a user can't forget to run the tests since they are run implicitly

  • every time a developer changes the implementation, nixcloud.webservices for instance, it is unit tested. this ensures that users don't forget to run the unit tests before they commit.

  • these test builds are cached locally and are executed only once. similar to packages from nixpkgs: if a dependency or the source code is changed, the test is rerun.

  • oh, and hydra can evaluate the tests as well

drawbacks

  • a major drawback is that if you don't have KVM support it still spawns the tests and emulation will be very slow.

    this is true for:

    • remote machines which were already virtualized (and don't support nested virtualization)
    • if you execute nixos inside virtualbox
    • if you are using nix from inside mac os x (darwin) or basically from any linux other than nixos

summary

i would love to see this feature coming to nixos/nixpkgs also! oh and thanks to aszlig!

questions?

by qknight at November 28, 2017 12:35 PM

November 18, 2017

Joachim Schiele

nixcloud-webservices

motivation

on the quest to make nix the no. 1 deployment tool for webservices, we are happy to announce the release of:

  • nixcloud.webservices
  • nixcloud.reverse-proxy
  • nixcloud.email

all can be found online at:

git clone https://github.com/nixcloud/nixcloud-webservices

or visit: https://github.com/nixcloud/nixcloud-webservices

nixcloud

what is the nixcloud? paul and i (joachim) see the nixcloud as an extension to nixpkgs focusing on the deployment of webservices. we abstract installation, monitoring, DNS management and most of all 'state management'.

nixcloud.webservices

nixcloud.webservices is an extension to nixpkgs for packaging webservices (apache, nginx) or language-specific implementations in go, rust, perl and so on.

nixcloud.webservices.leaps.myservice = {
  enable = true;
  proxyOptions = {
    port   = 50000;
    path   = "/foo";
    domain = "example.com";
  };
};

it comes with so many features and improvements that you are better off reading the documentation here:

https://github.com/nixcloud/nixcloud-webservices/blob/master/documentation/nixcloud.webservices.md

nixcloud.reverse-proxy

this component makes it easy to mix several webservices, like multiple webservers such as apache or nginx, into one or several domains. it also abstracts TLS using ACME.

nixcloud.reverse-proxy = {
  enable = true;
  extendEtcHosts = true;
  extraMappings = [
    {  
      domain = "example.com";
      path = "/";
      http = {
        mode = "on";
        record = ''
          rewrite ^(.*)$ https://example.org permanent;
        '';
      };
      https = {
        mode = "on";
        basicAuth."joachim" = "foo";
        record = ''
          rewrite ^(.*)$ https://example.org permanent;
        '';              
      };
    }
  ];
};

https://github.com/nixcloud/nixcloud-webservices/blob/master/documentation/nixcloud.reverse-proxy.md

nixcloud.email

manual here: https://github.com/nixcloud/nixcloud-webservices/blob/master/documentation/nixcloud.email.md

the idea is to make email deployment accessible to the masses by providing an easy abstraction, especially for webhosting.

configuration example

nixcloud.email = {
  enable = true;
  domains = [ "example.com" "example.org"  ];
  ipAddress = "1.2.3.4";
  ip6Address = "afe2::2";
  hostname = "example.com";
  users = [
    # kdkdkdkdkdkd -> feed that into -> doveadm pw -s sha256-crypt
    { name = "js2"; domain = "example.org"; password = "{PLAIN}foobar1234"; aliases = [ "postmaster@r0sset.com" ]; }
    { name = "paul"; domain = "example.com"; password = "{PLAIN}supersupergeheim"; }
    { name = "catchall"; domain = "example.com"; password = "{PLAIN}foobar1234"; aliases = [ "@example.com" ]; }
    #{ name = "js2"; domain = "nix.lt"; password = "{PLAIN}supersupergeheim"; }
    #{ name = "quotatest"; domain = "nix.lt"; password = "{PLAIN}supersupergeheim"; quota = "11M"; aliases = [ "postmaster@mail.nix.lt" "postmaster@nix.lt" ]; }
  ];
};

questions?

by qknight at November 18, 2017 12:35 PM

November 07, 2017

Flying Circus

NixOS: The DOs and DON’Ts of nixpkgs overlays

One presentation at NixCon 2017 that especially drew my attention was Nicolas Pierron‘s talk about Nixpkgs overlays (video, slides). I’d like to give a quick summary here for future reference. All the credits go to Nicolas, of course.

What are overlays?

Overlays are the new standard mechanism to customize nixpkgs. They replace constructs like packageOverrides and overridePackages.

How do I use overlays?

Put your overlays into ~/.config/nixpkgs/overlays. All overlays share a basic structure:

self: super: {
  …
}

Arguments:

  • self is the result of the fixed-point calculation. Use it to access packages which could be modified somewhere else in the overlay stack.
  • super is the result one overlay down in the stack (and base nixpkgs for the first overlay). Use it to access the package recipes you want to customize, and for library functions.

Good examples

Add default proxy to Google Chrome

self: super:
{
   google-chrome = super.google-chrome.override {
     commandLineArgs =
       "--proxy-server='https=127.0.0.1:3128;http=127.0.0.1:3128'";
   };
}

From Alexandre Peyroux.

Fudge Nix store location

self: super:
{
  nix = super.nix.override {
    storeDir = "${<nix-dir>}/store";
    stateDir = "${<nix-dir>}/var";
  };
}

From Yann Hodique.

Add custom package

self: super:
{
  VidyoDesktop = super.callPackage ./pkgs/VidyoDesktop { };
}

From Mozilla.

Bad examples

Recursive attrset

self: super: rec {
   fakeClosure = self.writeText "fake-closure.nix" ''
     …
   '';
   fakeConfig = self.writeText "fake-config.nix" ''
    (import ${fakeClosure} {}).config.nixpkgs.config
  '';
}

Two issues:

  • Use super to access library functions.
  • Overlays should not be recursive. Use self.
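
Improved version

A possible fix, following the two points above (this corrected variant is my own sketch, not taken from the talk):

self: super:
{
  fakeClosure = super.writeText "fake-closure.nix" ''
    …
  '';
  fakeConfig = super.writeText "fake-config.nix" ''
    (import ${self.fakeClosure} {}).config.nixpkgs.config
  '';
}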

Surplus arguments

{ python }:
self: super:
{
  "execnet" =
    python.overrideDerivation super."execnet" (old: {
      buildInputs = old.buildInputs ++ [ self."setuptools-scm" ];
    });
}

The issue:

  • Overlays should depend just on self and super in order to be composable.
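
Improved version

One way to make the overlay self-contained is to take python from the overlay's fixed point instead of an extra argument. This is my own sketch and assumes the intended package set is reachable as self.python:

self: super:
{
  "execnet" =
    self.python.overrideDerivation super."execnet" (old: {
      buildInputs = old.buildInputs ++ [ self."setuptools-scm" ];
    });
}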

Awkward nixpkgs import

{ pkgs ? import <nixpkgs> {} }:
let
  projectOverlay = self: super: {
    customPythonPackages =
      (import ./requirements.nix { inherit pkgs; }).packages;
  };
in
import pkgs.path {
  overlays = [ projectOverlay ];
}

Two issues:

  • Unnecessary double-import of nixpkgs. This might break cross-system builds.
  • requirements.nix should use self not pkgs.

Improved version

shell.nix:

{ pkgsPath ? <nixpkgs> }:
import pkgsPath {
  overlays = [ (import ./default.nix) ];
}

default.nix:

self: super:
{
 customPythonPackages =
   (import ./requirements.nix { inherit self; }).packages;
}

Incorrectly referenced dependencies

self: super:
let inherit (super) callPackage;
in {
  radare2 = callPackage ./pkgs/radare2 {
    inherit (super.gnome2) vte;
    lua = super.lua5;
  };
}

The issue:

  • Other packages should be taken from self not super. This way they
    can be overridden by other overlays.
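
Improved version

The same recipe with its dependencies taken from self, so that later overlays can still override them (my own sketch):

self: super:
let inherit (self) callPackage;
in {
  radare2 = callPackage ./pkgs/radare2 {
    inherit (self.gnome2) vte;
    lua = self.lua5;
  };
}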

Overridden attrset

self: super:
{
  lib = {
    firefoxVersion = … ;
  };
  latest = {
    firefox-nightly-bin = … ;
    firefox-beta-bin = … ;
    firefox-bin = … ;
    firefox-esr-bin = … ;
  };
}

The issue:

  • Other attributes present in lib and latest from down the overlay stack are
    erased.

Improved version

Always extend attrsets in overlays:

self: super:
{
  lib = (super.lib or {}) // {
    firefoxVersion = … ;
  };
  latest = (super.latest or {}) // {
    …
  };
}

Summary

I hope this condensed guide helps you to write better overlays. For in-depth discussion, please go watch Nicolas’ talk and read the Nixpkgs manual. Many thanks to Nicolas for putting this together!


Cover image: ⓒ 2009 studio tdes / Flickr / CC-BY-2.0

by Christian Kauhaus at November 07, 2017 09:44 AM

November 03, 2017

Sander van der Burg

Creating custom object transformations with NiJS and PNDP

In a number of earlier blog posts, I have described two kinds of internal DSLs for Nix -- NiJS is a JavaScript-based internal DSL and PNDP is a PHP-based internal DSL.

These internal DSLs have a variety of application areas. Most of them are simply just experiments, but the most serious application area is code generation.

Using an internal DSL for generation has a number of advantages over string generation that is more commonly used. For example, when composing strings containing Nix expressions, we must make sure that any variable in the host language that we append to a generated expression is properly escaped to prevent code injection attacks.

Furthermore, we also have to take care of the indentation if we want to output Nix expression code that should be readable. Finally, string manipulation itself is not a very intuitive activity as it makes it very hard to read what the generated code would look like.

Translating host language objects to the Nix expression language


A very important feature of both internal DSLs is that they can literally translate some language constructs from the host language (JavaScript or PHP) to the Nix expression language because they have a (nearly) identical meaning. For example, the following JavaScript code fragment:

var nijs = require('nijs');

var expr = {
  hello: "Hello",
  name: {
    firstName: "Sander",
    lastName: "van der Burg"
  },
  numbers: [ 1, 2, 3, 4, 5 ]
};

var output = nijs.jsToNix(expr, true);
console.log(output);

will output the following Nix expression:

{
  hello = "Hello";
  name = {
    firstName = "Sander";
    lastName = "van der Burg";
  };
  numbers = [
    1
    2
    3
    4
    5
  ];
}

In the above example, strings will be translated to strings (and quotes will be escaped if necessary), objects to attribute sets, and the array of numbers to a list of numbers. Furthermore, the generated code is also pretty printed so that attribute set and list members have 2 spaces of indentation.

Similarly, in PHP we can compose the following code fragment to get an identical Nix output:

use PNDP\NixGenerator;

$expr = array(
  "hello" => "Hello",
  "name" => array(
    "firstName" => "Sander",
    "lastName" => "van der Burg"
  ),
  "numbers" => array(1, 2, 3, 4, 5)
);

$output = NixGenerator::phpToNix($expr, true);
echo($output);

The PHP generator uses a number of clever tricks to determine whether an array is associative or sequential -- the former gets translated into a Nix attribute set while the latter gets translated into a list.

There are objects in the Nix expression language for which no equivalent exists in the host language. For example, Nix also allows you to define objects of a 'URL' and 'file' type. Neither JavaScript nor PHP have a direct equivalent. Moreover, it may be desired to generate other kinds of language constructs, such as function declarations and function invocations.

To still generate these kinds of objects, you must compose an abstract syntax tree from objects that inherit from the NixObject prototype or class. For example, we can define a function invocation to fetchurl {} in Nixpkgs as follows in JavaScript:

var expr = new nijs.NixFunInvocation({
  funExpr: new nijs.NixExpression("fetchurl"),
  paramExpr: {
    url: new nijs.NixURL("mirror://gnu/hello/hello-2.10.tar.gz"),
    sha256: "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i"
  }
});

and in PHP as follows:

use PNDP\AST\NixExpression;
use PNDP\AST\NixFunInvocation;
use PNDP\AST\NixURL;

$expr = new NixFunInvocation(new NixExpression("fetchurl"), array(
  "url" => new NixURL("mirror://gnu/hello/hello-2.10.tar.gz"),
  "sha256" => "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i"
));

Both of the objects in the above code fragments translate to the following Nix expression:

fetchurl {
  url = mirror://gnu/hello/hello-2.10.tar.gz;
  sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
}

Transforming custom object structures into Nix expressions


The earlier described use cases are basically one-on-one translations from the host language (JavaScript or PHP) to the guest language (Nix). In some cases, literal translations do not make sense -- for example, it may be possible that we already have an application with an existing data model from which we want to derive deployments that should be carried out with Nix.

In the latest versions of NiJS and PNDP, it is also possible to specify how to transform custom object structures into a Nix expression. This can be done by inheriting from the NixASTNode class or prototype and overriding the toNixAST() method.

For example, we may have a system already providing a representation of a file that should be downloaded from an external source:

function HelloSourceModel() {
  this.src = "mirror://gnu/hello/hello-2.10.tar.gz";
  this.sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
}

The above module defines a constructor function composing an object that refers to the GNU Hello package provided by a GNU mirror site.

A direct translation of an object constructed by the above function to the Nix expression language does not provide anything meaningful -- it can, for example, not be used to let Nix fetch the package from the mirror site.

We can inherit from NixASTNode and implement our own custom toNixAST() function to provide a more meaningful Nix translation:

var nijs = require('nijs');
var inherit = require('nijs/lib/ast/util/inherit.js').inherit;

/* HelloSourceModel inherits from NixASTNode */
inherit(nijs.NixASTNode, HelloSourceModel);

/**
 * @see NixASTNode#toNixAST
 */
HelloSourceModel.prototype.toNixAST = function() {
  return this.args.fetchurl()({
    url: new nijs.NixURL(this.src),
    sha256: this.sha256
  });
};

The toNixAST() function shown above composes an abstract syntax tree (AST) for a function invocation to fetchurl {} in the Nix expression language with the url and sha256 properties as parameters.

An object that inherits from the NixASTNode prototype also indirectly inherits from NixObject. This means that we can directly attach such an object to any other AST object. The generator uses the underlying toNixAST() function to automatically convert it to its AST representation:

var helloSource = new HelloSourceModel();
var output = nijs.jsToNix(helloSource, true);
console.log(output);

In the above code fragment, we directly pass the constructed HelloSourceModel object instance to the generator. The output will be the following Nix expression:

fetchurl {
  url = mirror://gnu/hello/hello-2.10.tar.gz;
  sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
}

In some cases, it may not be possible to inherit from NixASTNode, for example, when the object already inherits from another prototype or class that is beyond the user's control.

It is also possible to use the NixASTNode constructor function as an adapter. For example, we can take any object with a toNixAST() function:

var helloSourceWrapper = {
  toNixAST: function() {
    return new nijs.NixFunInvocation({
      funExpr: new nijs.NixExpression("fetchurl"),
      paramExpr: {
        url: new nijs.NixURL(this.src),
        sha256: this.sha256
      }
    });
  }
};

By wrapping the helloSourceWrapper object in the NixASTNode constructor, we can convert it to an object that is an instance of NixASTNode:

new nijs.NixASTNode(helloSourceWrapper)

In PHP, we can change any class into a NixASTNode by implementing the NixASTConvertable interface:

use PNDP\AST\NixASTConvertable;
use PNDP\AST\NixURL;

class HelloSourceModel implements NixASTConvertable
{
    /**
     * @see NixASTConvertable::toNixAST()
     */
    public function toNixAST()
    {
        return $this->args->fetchurl(array(
            "url" => new NixURL($this->src),
            "sha256" => $this->sha256
        ));
    }
}

By passing an object that implements the NixASTConvertable interface to the NixASTNode constructor, it can be converted:

new NixASTNode(new HelloSourceModel())

Motivating use case: the node2nix and composer2nix generators


My main motivation to use custom transformations is to improve the quality of the node2nix and composer2nix generators -- the former converts NPM package configurations to Nix expressions and the latter converts PHP composer package configurations to Nix expressions.

Although NiJS and PNDP provide a number of powerful properties to improve the code generation steps of these tools, e.g. I no longer have to think much about escaping strings or pretty printing, there are still many organizational coding issues left. For example, the code that parses the configurations, fetches the external sources, and generates the code are mixed. As a consequence, the code is very hard to read, update, maintain and to ensure its correctness.

The new transformation facilities allow me to separate concerns much better. For example, both generators now have a data model that reflects the NPM and composer problem domain. For example, I could compose the following (simplified) class diagram for node2nix's problem domain:


A crucial part of node2nix's generator is the package class shown on the top left on the diagram. A package requires zero or more packages as dependencies and may provide zero or more packages in the node_modules/ folder residing in the package's base directory.

For readers not familiar with NPM's dependency management: every package can install its dependencies privately in a node_modules/ folder residing in the same base directory. The CommonJS module system ensures that every file is considered to be a unique module that should not interfere with other modules. Sharing is accomplished by putting a dependency in a node_modules/ folder of an enclosing parent package.

NPM 2.x always installs a package dependency privately unless a parent package exists that can provide a conforming version. NPM 3.x (and later) will also move a package into the node_modules/ folder hierarchy as high as possible to prevent too many layers of nested node_modules/ folders (this is particularly a problem on Windows). The class structure in the above diagram reflects this kind of dependency organisation.

In addition to a package dependency graph, we also need to obtain package metadata and compute their output hashes. NPM packages originate from various kinds of sources, such as the NPM registry, Git repositories, HTTP sites and local directories on the filesystem.

To optimize the process and support sharing of common sources among packages, we can use a source cache that memorizes all unique source references.

The Package::resolveDependencies() method sets the generation process in motion -- it will construct the dependency graph replicating NPM's dependency resolution algorithm as faithfully as possible, and resolves all the dependencies' (and transitive dependencies) metadata.

After resolving all dependencies and their metadata, we must generate the output Nix expressions. One Nix expression is copied (the build infrastructure) and two are generated -- a composition expression and a package or collection expression.

We can also compose a class diagram for the generation infrastructure:


In the above class diagram, every generated expression is represented by a class inheriting from NixASTNode. We can also reuse some classes from the domain model as constituents for the generated expressions, by also inheriting from NixASTNode and overriding the toNixAST() method:

  • The source objects can be translated into sub expressions that invoke fetchurl {} and fetchgit {}.
  • The sources cache can be translated into an attribute set exposing all sources that are used as dependencies for packages.
  • A package instance can be converted into a function invocation to nodeenv.buildNodePackage {} that, in addition to configuring build properties, binds the required dependencies to the sources in the sources cache attribute set.

By decomposing the expression into objects and combining the objects' AST representations, we can nicely modularize the generation process.

For composer2nix, we can also compose a class diagram for its domain -- the generation process:


The above class diagram has many similarities, but also some major differences compared to node2nix. composer provides so-called lock files that pinpoint the exact versions of all dependencies and transitive dependencies. As a result, we do not need to replicate composer's dependency resolution algorithm.

Instead, the generation process is driven by the ComposerConfig class that encapsulates the properties of the composer.json and composer.lock files of a package. From a composer configuration, the generator constructs a package object that refers to the package we intend to deploy and populates a source cache with source objects that come from various sources, such as Git, Mercurial and Subversion repositories, Zip files, and directories residing on the local filesystem.

For the generation process, we can adopt a similar strategy that exposes the generated Nix expressions as classes and uses some classes of the domain model as constituents for the generation process:


Discussion


In this blog post, I have described a new feature for the NiJS and PNDP frameworks, making it possible to implement custom transformations. Some of its benefits are that it allows an existing object model to be reused and concerns in an application can be separated much more conveniently.

These facilities are not only useful for the improvement of the architecture of the node2nix and composer2nix generators -- at the company I work for (Conference Compass), we developed our own domain-specific configuration management tool.

Despite the fact that it uses several tools from the Nix project to carry out deployments, it uses a domain model that is not Nix-specific at all. Instead, it uses terminology and an organization that reflects company processes and systems.

For example, we use a backend-for-frontend organization that provides a backend for each mobile application that we ship. We call these backends configurators. Optionally, every configurator can import data from various external sources that we call channels. The tool's data model reflects this kind of organization, and generates Nix expressions that contain all relevant implementation details, if necessary.

Finally, the fact that I modified node2nix to have a much cleaner architecture has another reason beyond quality improvement. Currently, NPM version 5.x (that comes with Node.js 8.x) is still unsupported. To make it work with Nix, we require a slightly different generation process and a completely different builder environment. The new architecture allows me to reuse the common parts much more conveniently. More details about the new NPM support will follow (hopefully) soon.

Availability


I have released new versions of NiJS and PNDP that have the custom transformation facilities included.

Furthermore, I have decided to release new versions for node2nix and composer2nix that use the new generation facilities in their architecture. The improved architecture revealed a very uncommon but nasty bug with bundled dependencies in node2nix, that is now solved.

by Sander van der Burg (noreply@blogger.com) at November 03, 2017 10:43 PM

October 04, 2017

Flying Circus

Announcing fc-userscan

NixOS manages dependencies in a very strict way—sometimes too strict? Here at Flying Circus, many users prefer to compile custom applications in home directories. They link them against libraries they have installed before by nix-env. This works well… until something is updated! On the next change anywhere down the dependency chain, libraries get new hashes in the Nix store, the garbage collector removes old versions, and user applications break until recompiled.

In this blog post, I would like to introduce fc-userscan. This little tool scans (home) directories recursively for Nix store references and registers them as per-user roots with the garbage collector. This way, dependencies will be protected even if they cease to be referenced from “official” Nix roots like the current-system profile or a user’s local Nix profile. After registering formerly unmanaged references with fc-userscan, one can fearlessly run updates and garbage collection.

This problem would not be there if everyone just wrote Nix expressions for everything. This is, for various reasons, not realistic. Some users find it hard to get accustomed to Nix concepts (“I just want to compile my software and don’t care about your arcane Linux distro!”). Others use deploy tools like zc.buildout which are explicitly designed for in-place updates and don’t really distinguish compile time from run time. With fc-userscan, you can get both: Nix expressions for superb reproducibility, and an acceptable degree of reliance for impurely compiled applications.

fc-userscan is published as open source software under a 3-clause BSD license. Get the Rust source code or the latest pre-compiled binaries for x86_64. Pull requests are welcome.

Example: How to get stale Nix store references

Imagine we have Python installed in our Nix user profile. When creating a Python virtual environment, we get unmanaged references into the Nix store:

$ nix-env -iA nixpkgs.python35
$ ~/.nix-profile/bin/pyvenv venv
$ ls -og venv/bin/python3.5
lrwxrwxrwx 1 71 Sep 29 11:19 venv/bin/python3.5 -> /nix/store/8lh4pxhhi4cx00pp1zxpz9pqyy44kjm6-python3-3.5.4/bin/python3.5

The garbage collector does not know anything about this reference. This is OK as long as nothing gets deleted from the Nix store. But eventually channel updates arrive. After a seemingly innocent run of nixos-rebuild or nix-env --upgrade, chances are high that the Python package’s hash has changed. The next run of nix-collect-garbage will trash the Python binary still referenced from our virtualenv.

Although we use Python for illustration, the problem is universal. You can easily run into similar trouble with C programs, for example.

Protecting unmanaged references

We run fc-userscan to detect and register Nix store references:

$ fc-userscan -v venv
fc-userscan: Scouting venv
2 references in /nix/var/nix/gcroots/profiles/per-user/ck/home/ck/venv
Processed 741 files (8 MB read) in 0.011 s
fc-userscan: Finished venv

Now the exact Python interpreter version is registered as a garbage collection root:

$ ls -og /nix/var/nix/gcroots/profiles/per-user/ck/home/ck/venv
lrwxrwxrwx 1 57 Sep 29 11:46 8lh4pxhhi4cx00pp1zxpz9pqyy44kjm6 -> /nix/store/8lh4pxhhi4cx00pp1zxpz9pqyy44kjm6-python3-3.5.4
[...]

The Python interpreter and all its dependencies are now protected. The file layout in the per-user directories is organized in such a way that it is possible to re-scan arbitrary directories and always get correct results.

Feature overview

Include/exclude lists

To reduce system load, fc-userscan can be instructed to skip files that are unlikely to contain store references. Specify include/exclude globs from

  1. a default ignore file ~/.userscan-ignore if existent (in gitignore(5) format)
  2. a custom ignore file with --exclude-from (in gitignore(5) format)
  3. the command line with --exclude, --include.

A related option is --quickcheck which causes fc-userscan to stop scanning a file when there is no Nix store reference in the first n bytes of that file.

Cache

fc-userscan is able to preserve scan results between runs to avoid re-scanning unchanged files. Simply add --cache FILE. It uses the ctime inode attribute to determine if a file has been changed or not. The cache is stored as compressed messagepack file so that it does not take much space even on large installations.

Unzip on the fly

Some runnable application artefacts are stored as ZIP files and may contain Nix store references. The most prominent instances are compressed Python eggs and JAR files. fc-userscan has support to transparently decompress files that match glob patterns passed with --unzip. Note that scanning inside compressed ZIP archives is significantly slower than scanning regular files.

List mode

It is possible to scan directory hierarchies and just print all found references to stdout instead of registering them with --list. The output format is script friendly.

These options and many more are explained in the fc-userscan(1) man page which ships with the release.

Interaction with nix-collect-garbage

How does fc-userscan interact with nix-collect-garbage now? We at Flying Circus run fc-userscan for all users and start nix-collect-garbage directly afterwards. fc-userscan exits unsuccessfully if there was a problem (e.g., unreadable files). We recommend not to run nix-collect-garbage if fc-userscan found a problem. Otherwise, not all relevant files might have been checked for Nix store references and nix-collect-garbage might delete too much.

Here is an example script which scans all users’ home directories:

failed=0
while read user home; do
  sudo -u $user -H -- \
    fc-userscan -v 2 \
    --exclude-from /etc/userscan.exclude \
    --cache $home/.cache/fc-userscan.cache \
    --unzip '*.egg' \
    $home || failed=1
done < <(getent passwd | awk -F: '$4 >= 100 { print $1 " " $6 }')

if (( failed )); then
  echo "ERROR: fc-userscan failed (see above)"
  exit 1
else
  nix-collect-garbage --delete-older-than 3d
fi

Conclusion

While in theory it would be better to drive all software installations on a NixOS system via derivations, fc-userscan brings us flexibility to protect impurely installed applications.


Cover image © 2007 by benfrantzdale CC-BY-SA 2.0

by Christian Kauhaus at October 04, 2017 12:32 PM

October 02, 2017

Sander van der Burg

Deploying PHP composer packages with the Nix package manager

In two earlier blog posts, I have described various pieces of my custom web framework that I used to actively develop many years ago. The framework is quite modular -- every concern, such as layout management, data management, the editor, and the gallery, are separated into packages that can be deployed independently, so that web applications only have to include what they actually need.

Although modularity is quite useful for a variety of reasons, the framework did not start out as being modular in the beginning -- when I just started developing web applications in PHP, I did not reuse anything at all. Slowly, I discovered similarities between my projects and started sharing snippets of common functionality between them. Gradually, I learned that keeping these common aspects up to date became a burden. As a result, I developed a "common framework" that I reused among all my PHP projects.

Having a common framework for my web application projects reduced the amount of required maintenance, but introduced a new drawback -- its size kept growing and growing. As a result, many simple web applications that only required a small subset of the framework's functionality still had to embed the entire framework, making them unnecessarily big.

Today, a bit of extra PHP code is not so much of a problem, but around the time I was still actively developing web applications, many shared web hosting providers only offered a small amount of storage capacity, typically just a few megabytes.

To cope with the growing size of the framework, I decided to modularize the code by separating the framework's concerns into packages that can be deployed independently. I "invented" my own conventions to integrate the framework packages into web applications:

  • In the base directory of the web application project, I create a lib/ directory that contains symlinks to the framework packages.
  • In every PHP script that displays a page (typically only index.php), I configure the include path to refer to the packages' content in the lib/ folder, such as:


    set_include_path("./lib/sblayout:./lib/sbdata:./lib/sbcrud");

  • Each PHP module is responsible for loading the desired classes or utility functions from the framework packages. As a result, I ended up writing a substantial amount of require() statements, such as:


    require_once("data/model/Form.class.php");
    require_once("data/model/field/HiddenField.class.php");
    require_once("data/model/field/TextField.class.php");
    require_once("data/model/field/DateField.class.php");
    require_once("data/model/field/TextAreaField.class.php");
    require_once("data/model/field/URLField.class.php");
    require_once("data/model/field/FileField.class.php");

After my (approximately) 8 years of absence from the PHP domain, I discovered that a tool has been developed to support convenient construction of modular PHP applications: composer. Composer is heavily inspired by the NPM package manager, which is the de facto package delivery mechanism for Node.js applications.

In the last couple of months (it progresses quite slowly as it is a non-urgent side project), I have decided to get rid of my custom modularity conventions in my framework packages, and to adopt composer instead.

Furthermore, composer is a useful deployment tool, but its scope is limited to PHP applications only. As frequent readers may probably already know, I use Nix-based solutions to deploy entire software systems (that are also composed of non-PHP packages) from a single declarative specification.

To be able to include PHP composer packages in a Nix deployment process, I have developed a generator named: composer2nix that can be used to generate Nix deployment expressions from composer configuration files.

In this blog post, I will explain the concepts of composer2nix and show how it can be used.

Using composer


Using composer is generally quite straightforward. In the most common usage scenario, there is typically a PHP project (often a web application) that requires a number of dependencies. By changing the current working folder to the project directory, and running:


$ composer install

Composer will obtain all required dependencies and store them in the vendor/ sub directory.

The vendor/ folder follows a very specific organisation:


$ find vendor/ -maxdepth 2 -type d
vendor/bin
vendor/composer
vendor/phpdocumentor
vendor/phpdocumentor/fileset
vendor/phpdocumentor/graphviz
vendor/phpdocumentor/reflection-docblock
vendor/phpdocumentor/reflection
vendor/phpdocumentor/phpdocumentor
vendor/svanderburg
vendor/svanderburg/pndp
...

The vendor/ folder structure (mostly) consists of two levels: the outer directory defines the namespace of the packages and the inner directory the package names.

There are a couple of folders deviating from this convention -- most notably, the vendor/composer directory, that is used by composer to track package installations:


$ ls vendor/composer
autoload_classmap.php
autoload_files.php
autoload_namespaces.php
autoload_psr4.php
autoload_real.php
autoload_static.php
ClassLoader.php
installed.json
LICENSE

In addition to obtaining packages and storing them in the vendor/ folder, composer also generates autoload scripts (as shown above) that can be used to automatically make code units (typically classes) provided by the packages available for use in the project. Adding the following statement to one of your project's PHP scripts:


require_once("vendor/autoload.php");

suffices to load the functionality exposed by the packages that composer installs.

Composer can be used to install both runtime and development dependencies. Many development dependencies (such as phpunit or phpdocumentor) provide command-line utilities to carry out tasks. Composer packages can also declare which executables they provide. Composer automatically generates symlinks for all provided executables in the: vendor/bin folder:


$ ls -l vendor/bin/
lrwxrwxrwx 1 sander users 29 Sep 26 11:49 jsonlint -> ../seld/jsonlint/bin/jsonlint
lrwxrwxrwx 1 sander users 41 Sep 26 11:49 phpdoc -> ../phpdocumentor/phpdocumentor/bin/phpdoc
lrwxrwxrwx 1 sander users 45 Sep 26 11:49 phpdoc.php -> ../phpdocumentor/phpdocumentor/bin/phpdoc.php
lrwxrwxrwx 1 sander users 34 Sep 26 11:49 pndp-build -> ../svanderburg/pndp/bin/pndp-build
lrwxrwxrwx 1 sander users 46 Sep 26 11:49 validate-json -> ../justinrainbow/json-schema/bin/validate-json

For example, you can run the following command-line instruction from the base directory of a project to generate API documentation:


$ vendor/bin/phpdocumentor -d src -t out

In some cases (the composer documentation often discourages this) you may want to install end-user packages globally. They can be installed into the global composer configuration directory by running:


$ composer global require phpunit/phpunit

After installing a package globally (and adding: $HOME/.config/composer/vendor/bin directory to the PATH environment variable), we should be able to run:


$ phpunit --help

The composer configuration


The deployment operations that composer carries out are driven by a configuration file named: composer.json. An example of such a configuration file could be:


{
  "name": "svanderburg/composer2nix",
  "description": "Generate Nix expressions to build PHP composer packages",
  "type": "library",
  "license": "MIT",
  "authors": [
    {
      "name": "Sander van der Burg",
      "email": "svanderburg@gmail.com",
      "homepage": "http://sandervanderburg.nl"
    }
  ],

  "require": {
    "svanderburg/pndp": "0.0.1"
  },
  "require-dev": {
    "phpdocumentor/phpdocumentor": "2.9.x"
  },

  "autoload": {
    "psr-4": { "Composer2Nix\\": "src/Composer2Nix" }
  },

  "bin": [ "bin/composer2nix" ]
}

The above configuration file declares the following configuration properties:

  • A number of meta attributes, such as the package name, description, license and authors.
  • The package type. The type: library indicates that this project is a library that can be used in another project.
  • The project's runtime (require) and development (require-dev) dependencies. In a dependency object, the keys refer to the package names and the values to version specifications that can be either:
    • A semver compatible version specifier that can be an exact version (e.g. 0.0.1), wildcard (e.g. 1.0.x), or version range (e.g. >= 1.0.0).
    • A version alias that directly (or indirectly) resolves to a branch in the VCS repository of the dependency. For example, the dev-master version specifier refers to the current master branch of the Git repository of the package.
  • The autoloader configuration. In the above example, we configure the autoloader to load all classes belonging to the Composer2Nix namespace, from the src/Composer2Nix sub directory.

By default, composer obtains all packages from the Packagist repository. However, it is also possible to consult other kinds of repositories, such as external HTTP sites or VCS repositories of various kinds (including Git, Mercurial and Subversion).

External repositories can be specified by adding a 'repositories' object to the composer configuration:


{
  "name": "svanderburg/composer2nix",
  "description": "Generate Nix expressions to build PHP composer packages",
  "type": "library",
  "license": "MIT",
  "authors": [
    {
      "name": "Sander van der Burg",
      "email": "svanderburg@gmail.com",
      "homepage": "http://sandervanderburg.nl"
    }
  ],
  "repositories": [
    {
      "type": "vcs",
      "url": "https://github.com/svanderburg/pndp"
    }
  ],

  "require": {
    "svanderburg/pndp": "dev-master"
  },
  "require-dev": {
    "phpdocumentor/phpdocumentor": "2.9.x"
  },

  "autoload": {
    "psr-4": { "Composer2Nix\\": "src/Composer2Nix" }
  },

  "bin": [ "bin/composer2nix" ]
}

In the above example, we have defined PNDP's GitHub repository as an external repository and changed the version specifier of svanderburg/pndp to use the latest Git master branch.

Composer uses a version resolution strategy that will parse composer configuration files and branch names in all repositories to figure out where a version can be obtained from and takes the first option that matches the dependency specification. Packagist is consulted last, making it possible for the user to override dependencies.

Pinpointing dependency versions


The version specifiers of dependencies in a composer.json configuration file are nominal and have some drawbacks when it comes to reproducibility -- for example, the version specifier: >= 1.0.1 may resolve to version 1.0.2 today and to 1.0.3 tomorrow, making it very difficult to exactly reproduce a deployment elsewhere at a later point in time.

Although direct dependencies can be easily controlled by the user, it is quite difficult to control the version resolutions of the transitive dependencies. To cope with this problem, composer will always generate lock files (composer.lock) that pinpoint the exact dependency versions (including all transitive dependencies) the first time when it gets invoked (or when composer update is called):


{
  "_readme": [
    "This file locks the dependencies of your project to a known state",
    "Read more about it at https://getcomposer.org/doc/01-basic-usage.md#composer-lock-the-lock-file",
    "This file is @generated automatically"
  ],
  "content-hash": "ca5ed9191c272685068c66b76ed1bae8",
  "packages": [
    {
      "name": "svanderburg/pndp",
      "version": "v0.0.1",
      "source": {
        "type": "git",
        "url": "https://github.com/svanderburg/pndp.git",
        "reference": "99b0904e0f2efb35b8f012892912e0d171e9c2da"
      },
      "dist": {
        "type": "zip",
        "url": "https://api.github.com/repos/svanderburg/pndp/zipball/99b0904e0f2efb35b8f012892912e0d171e9c2da",
        "reference": "99b0904e0f2efb35b8f012892912e0d171e9c2da",
        "shasum": ""
      },
      "bin": [
        "bin/pndp-build"
      ],
      ...
    }
    ...
  ]
}

By bundling the composer.lock file with the package, it becomes possible to reproduce a deployment elsewhere with the exact same package versions.

The Nix package manager


Nix is a package manager whose main purpose is to build all kinds of software packages from source code, such as GNU Autotools, CMake, Perl's MakeMaker, Apache Ant, and Python projects.

Nix's main purpose is not to be a build tool (it can actually also be used for building projects, but this application area is still highly experimental). Instead, Nix manages dependencies and complements existing build tools by providing dedicated build environments to make deployments reliable and reproducible, such as clearing all environment variables, making files read-only after the package has been built, restricting network access and resetting the files' timestamps to 1.

Most importantly, in these dedicated environments Nix ensures that only specified dependencies can be found. This may probably sound inconvenient at first, but this property exists for a good reason: if a package unknowingly depends on another package then it may work on the machine where it has been built, but may fail on another machine because this unknown dependency is missing. By building a package in a pure environment in which all dependencies are known, we eliminate this problem.

To provide stricter purity guarantees, Nix isolates packages by storing them in a so-called "Nix store" (that typically resides in: /nix/store) in which every directory entry corresponds to a package. Every path in the Nix store is prefixed by a hash code, such as:


/nix/store/2gi1ghzlmb1fjpqqfb4hyh543kzhhgpi-firefox-52.0.1

The hash is derived from all build-time dependencies to build the package.

Because every package is stored in its own path and variants of packages never share the same name because of the hash prefix, it becomes harder for builds to accidentally succeed because of undeclared dependencies. Dependencies can only be found if the environment has been configured in such a way that the Nix store paths to the packages are known, for example, by configuring environment variables, such as: export PATH=/nix/store/5vyssyqvbirdihqrpqhbkq138ax64bjy-gnumake-4.2.1/bin.

The Nix expression language and build environment abstractions have all kinds of facilities to make the configuration of dependencies convenient.

Integrating composer deployments into Nix builder environments


Invoking composer in a Nix builder environment introduces an additional challenge -- composer is not only a tool that does build management (e.g. it can execute script directives that can carry out arbitrary build steps), but also dependency management. The latter property conflicts with the Nix package manager.

In a Nix builder environment, network access is typically restricted, because it affects reproducibility (although it is still possible to hack around this restriction) -- when downloading a file from an external site it is not known in advance what you will get. An unknown artifact influences the outcome of a package build in unpredictable ways.

Network access in Nix build environments is only permitted in so-called fixed output derivations. For a fixed output derivation, the output hash must be known in advance so that Nix can verify whether we have obtained the artifact we want.

The solution to cope with a conflicting dependency manager is by substituting it -- we must let Nix obtain the dependencies and force the tool to only execute its build management tasks.

We can populate the vendor/ folder ourselves. As explained earlier, the composer.lock file stores the exact versions of pinpointed dependencies including all transitive dependencies. For example, when a project declares svanderburg/pndp version 0.0.1 as a dependency, it may translate to the following entry in the composer.lock file:


"packages": [
{
"name": "svanderburg/pndp",
"version": "v0.0.1",
"source": {
"type": "git",
"url": "https://github.com/svanderburg/pndp.git",
"reference": "99b0904e0f2efb35b8f012892912e0d171e9c2da"
},
"dist": {
"type": "zip",
"url": "https://api.github.com/repos/svanderburg/pndp/zipball/99b0904e0f2efb35b8f012892912e0d171e9c2da",
"reference": "99b0904e0f2efb35b8f012892912e0d171e9c2da",
"shasum": ""
},
...
}
...
]

As can be seen in the code fragment above, the dependency translates to two kinds of pinpointed source objects -- a source reference to a specific revision in a Git repository and a dist reference to a zipball containing a snapshot of the given Git revision.

The reason why every dependency translates to two kinds of objects is that composer supports two kinds of installation modes: source (to obtain a dependency directly from a VCS) and dist (to obtain a dependency from a zipball).

We can translate the 'dist' reference into the following Nix function invocation:


"svanderburg/pndp" = {
targetDir = "";
src = composerEnv.buildZipPackage {
name = "svanderburg-pndp-99b0904e0f2efb35b8f012892912e0d171e9c2da";
src = fetchurl {
url = https://api.github.com/repos/svanderburg/pndp/zipball/99b0904e0f2efb35b8f012892912e0d171e9c2da;
sha256 = "19l7i7adp76bjf32x9a2ykm0r5cgcmi4wf4cm4127miy3yhs0n4y";
};
};
};

and the 'source' reference to the following Nix function invocation:


"svanderburg/pndp" = {
targetDir = "";
src = fetchgit {
name = "svanderburg-pndp-99b0904e0f2efb35b8f012892912e0d171e9c2da";
url = "https://github.com/svanderburg/pndp.git";
rev = "99b0904e0f2efb35b8f012892912e0d171e9c2da";
sha256 = "15i311dc0123v3ppa69f49ssnlyzizaafzxxr50crdfrm8g6i4kh";
};
};

(As a sidenote: we need the targetDir property to provide compatibility with the deprecated PSR-0 autoloading standard. Old autoload packages can be stored in a sub folder of a package residing in the vendor/ structure.)

To generate the above function invocations, we need more than just the properties provided by the composer.lock file. Since download functions in Nix are fixed output derivations, we must compute the output hashes of the downloads by invoking a Nix prefetch script, such as nix-prefetch-url or nix-prefetch-git. The composer2nix generator will automatically invoke the appropriate prefetch script to augment the generated expressions with output hashes.
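
For example, the zipball reference shown earlier could be prefetched manually like this:


$ nix-prefetch-url https://api.github.com/repos/svanderburg/pndp/zipball/99b0904e0f2efb35b8f012892912e0d171e9c2da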

To ensure maximum compatibility with composer's behaviour, the dependencies obtained by Nix must be copied into the vendor/ folder. In theory, symlinking would be more space efficient, but experiments have shown that some packages (such as phpunit) may attempt to load the project's autoload script, e.g. by invoking:


require_once(realpath("../../autoload.php"));

The above require invocation does not work if the dependency is a symlink -- the require path resolves to a path in the Nix store (e.g. /nix/store/...). The parent's parent path corresponds to /nix where no autoload script is stored. (As a sidenote: I have decided to still provide symlinking as an option for deployment scenarios where this is not an issue).

After some experimentation, I discovered that composer uses the following file to track which packages have been installed: vendor/composer/installed.json. The contents appears to be quite similar to the composer.lock file:


[
  {
    "name": "svanderburg/pndp",
    "version": "v0.0.1",
    "version_normalized": "0.0.1.0",
    "source": {
      "type": "git",
      "url": "https://github.com/svanderburg/pndp.git",
      "reference": "99b0904e0f2efb35b8f012892912e0d171e9c2da"
    },
    "dist": {
      "type": "zip",
      "url": "https://api.github.com/repos/svanderburg/pndp/zipball/99b0904e0f2efb35b8f012892912e0d171e9c2da",
      "reference": "99b0904e0f2efb35b8f012892912e0d171e9c2da",
      "shasum": ""
    },
    ...
  },
  ...
]

Reconstructing the above file can be done by merging the contents of the packages and packages-dev objects in the composer.lock file.

Another missing piece in the puzzle is the autoload scripts. We can force composer to dump the autoload script, by running:


$ composer dump-autoload --optimize

The above command generates an optimized autoloader script. A non-optimized autoload script dynamically inspects the contents of the package folders to load modules. This is convenient in the development stage of a project, in which the files continuously change, but in production environments this introduces quite a bit of load time overhead.

Since packages in the Nix store can never change after they have been built, it makes no sense to generate a non-optimized autoloader script.

Finally, the last remaining practical issue, is PHP packages providing command-line utilities. Most executables have the following shebang line:


#!/usr/bin/env php

To ensure that these CLI tools work in Nix builder environments, the above shebang must be substituted by the PHP executable that resides in the Nix store.
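
One way to carry out such a substitution is a small sed loop in one of the build phases. The fragment below is only a sketch of the idea -- the phase name, the bin/ location and the php variable are assumptions; the builder that composer2nix generates may implement this differently:


postInstall = ''
  # rewrite the shebangs of the installed executables so that they point to the
  # PHP interpreter in the Nix store (the php variable is assumed to be in scope)
  for i in $out/bin/*
  do
    sed -i -e "1s|#!/usr/bin/env php|#!${php}/bin/php|" "$i"
  done
'';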

After carrying out the above described steps, running the following command:


$ composer install --optimize-autoloader

is simply just a formality -- it will not download or change anything.

Use cases


composer2nix has a variety of use cases. The most obvious one is to use it to package a web application project with Nix instead of composer. Running the following command generates Nix expressions from the composer configuration files:


$ composer2nix

By running the following command, we can use Nix to obtain the dependencies and generate a package with a vendor/ folder:


$ nix-build
$ ls result/
index.php vendor/

In addition to web applications, we can also deploy command-line utility projects implemented in PHP. For these kinds of projects it makes more sense to generate a bin/ sub folder in which the executables can be found.

For example, for the composer2nix project, we can generate a CLI-specific expression by adding the --executable parameter:


$ composer2nix --executable

We can install the composer2nix executable in our Nix profile by running:


$ nix-env -f default.nix -i

and then invoke composer2nix as follows:


$ composer2nix --help

We can also deploy third party command-line utilities directly from the Packagist repository:


$ composer2nix -p phpunit/phpunit
$ nix-env -f default.nix -iA phpunit-phpunit
$ phpunit --version

The most powerful application is not the integration with Nix itself, but the integration with other Nix projects. For example, we can define a NixOS configuration running an Apache HTTP server instance with PHP and our example web application:


{pkgs, config, ...}:

let
  myexampleapp = import /home/sander/myexampleapp {
    inherit pkgs;
  };
in
{
  services.httpd = {
    enable = true;
    adminAddr = "admin@localhost";
    extraModules = [
      { name = "php7"; path = "${pkgs.php}/modules/libphp7.so"; }
    ];
    documentRoot = myexampleapp;
  };

  ...
}

We can deploy the above NixOS configuration as follows:


$ nixos-rebuild switch

By running only one simple command-line instruction, we have a running system with the Apache webserver serving our web application.

Discussion


In addition to composer2nix, I have also been responsible for developing node2nix, a tool that generates Nix expressions from NPM package configurations. Because composer is heavily inspired by NPM, we see many similarities in the architecture of both generators. For example, both generate the same kinds of expressions (a builder environment, a packages expression and a composition expression), have a similar separation of concerns, and both use an internal DSL for generating Nix expressions (NiJS and PNDP).

There are also a number of conceptual differences -- dependencies in NPM can be private to a package or shared among multiple packages. In composer, all dependencies in a project are shared.

The reason why NPM's dependency management is more powerful is because Node.js uses the CommonJS module system. CommonJS considers each file to be a unique module. This, for example, makes it possible for one module to load a version of a package from a certain filesystem location and another version of the same package from another filesystem location within the same project.

By contrast, in PHP, isolation is accomplished by the namespace declarations in each file. Namespaces cannot be dynamically altered, so multiple versions of the same package cannot safely coexist in one project. Furthermore, the vendor/ directory structure only allows one variant of a package to be stored.

The fact that composer's dependency management is less powerful makes constructing a generator much more straightforward than it is for NPM.

Another feature that composer has supported for quite some time, and that NPM gained only very recently, is pinpointing/locking dependencies. When generating Nix expressions from NPM package configurations, we must replicate NPM's dependency resolution algorithm. In composer, we can simply take whatever the composer.lock file provides. The lock file saves us from replicating the dependency lookup process, making the generation process considerably easier.
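
To illustrate the idea (a hypothetical fragment, not composer2nix's literal output): a dependency pinned in composer.lock already carries an exact version and checksum, so it can be translated directly into a fixed fetch without any resolution step:


# Hypothetical fragment for a dependency pinned in composer.lock
src = fetchzip {
  url = "https://api.github.com/repos/sebastianbergmann/phpunit/zipball/...";
  sha256 = "...";
};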

Acknowledgments


My implementation is not the first attempt to integrate composer with Nix. After a few days of development, I discovered another attempt that seems to be in a very early stage. I did not try or use this version.

Availability


composer2nix can be obtained from Packagist and my GitHub page.

Despite the fact that I did quite a bit of research, composer2nix should still be considered a prototype. One of its known limitations is that it does not support fossil repositories yet.

by Sander van der Burg (noreply@blogger.com) at October 02, 2017 10:15 PM

September 22, 2017

Georges Dubus

Share scripts that have dependencies with Nix


Writing and sharing scripts is essential to productivity in development teams. But if you want the script to work everywhere, you can only depend on the tools that are everywhere. That usually means not using the newer features of a language, and not using libraries. In this article, I explain how I use Nix to write scripts that can run everywhere, regardless of what they depend on.

In my work as a software engineer, I often write small scripts to help with administrative actions or provide some nice view of what we are working on. From time to time, one of these scripts is useful enough that I want to share it with my colleagues.

But sharing the scripts comes with a challenge. I write my scripts in Python because it is a great language for that kind of thing, but the best parts are outside of the standard library. As a result, to share my scripts, I also have to write documentation explaining how to get the right version of Python, create a virtual environment, activate it, install the dependencies, and run the script. That's not worth the hassle, and my colleagues won't bother following a complex set of instructions just to run a script.

Recently, I've started using Nix to make self-contained scripts, and it's been working pretty well.

A script that can download its dependencies

For example, some of the things I like using in python scripts are:

  • the new string formatting syntax in Python 3.6,
  • the requests package for anything that touches HTTP
  • docopt for argument parsing.

My scripts will usually look like:

#! /usr/bin/env nix-shell
#! nix-shell -i python -p python36 python36Packages.requests2 python36Packages.docopt -I nixpkgs=https://github.com/NixOS/nixpkgs-channels/archive/nixos-17.03.tar.gz
# You'll need nix to automatically download the dependencies: `curl https://nixos.org/nix/install | sh`
"""
Some script

Usage:
  some_script.py <user> <variant> [--api=<api>]
  some_script.py -h | --help

Arguments:
  <user>        format: 42
  <variant>     format: 2

Options:
  -h --help     Show this screen.
  --api=<api>   The api instance to talk to [default: the-api.internal]
"""
import requests
from docopt import docopt

def doit(user, variant, api):
    # The actual thing I want to do
    pass

if __name__ == '__main__':
    arguments = docopt(__doc__)
    doit(arguments['<user>'], arguments['<variant>'], arguments['--api'])

Now, anybody can run the script with ./some_script.py, which will download the dependencies and run the script with them. They'll need to install nix if they don't have it yet (with curl https://nixos.org/nix/install | sh).

What actually happens there?

The first line of the script is a shebang. It tells the system to run the script with nix-shell, one of the tools provided by Nix.

The second line specifies how to invoke nix-shell:

  • -i python: this argument means we want nix-shell to run the script with the python command (-i stands for interpreter).
  • -p python36 python36Packages.requests2 python36Packages.docopt: this argument is the list of the nix packages that my script is going to use. Here, I want python 3.6, and the python packages requests and docopt. I usually use the Nix packages page to find the name of the dependencies I want.
  • -I nixpkgs=https://github.com/NixOS/nixpkgs-channels/archive/nixos-17.03.tar.gz: the version of nixpkgs (the Nix package collection) I want to use for this script. This ensures people running the script in the future will get compatible versions of the dependencies.

When I run the script, nix-shell starts by downloading https://github.com/NixOS/nixpkgs-channels/archive/nixos-17.03.tar.gz if it has not done so recently. This is the definition of all the nix packages. Then, nix-shell looks into this file to know how to download the packages I want and their dependencies. Finally, it runs my script in an environment where the dependencies are available.

Try it now

This technique allows coworkers to share scripts written in different technologies without the cost of teaching everybody how to install dependencies and run scripts in each language.

Try it yourself right now with the example script from earlier. Make sure you have Nix (or install it with curl https://nixos.org/nix/install | sh), get the script, and run it without caring about the dependencies!

by Georges Dubus at September 22, 2017 01:00 PM

September 11, 2017

Sander van der Burg

PNDP: An internal DSL for Nix in PHP

It has been a while since I wrote a Nix-related blog post. In many of my earlier Nix blog posts, I have elaborated about various Nix applications and their benefits.

However, when you are developing a product or service, you typically do not only want to use configuration management tools, such as Nix -- you may also want to build a platform that is tailored towards your needs, so that common operations can be executed structurally and conveniently.

When it is desired to integrate custom solutions with Nix-related tools, you basically have one recurring challenge -- you must generate deployment specifications in the Nix expression language.

The most obvious solution is to use string manipulation to generate the expressions we want, but this has a number of disadvantages. Foremost, composing strings is not a very intuitive activity -- it is not always obvious to see what the end result would be by looking at the code.

Furthermore, it is difficult to ensure that a generated expression is correct and safe. For example, if a string value is not properly escaped, it may be possible to inject arbitrary deployment code putting the security of the deployed system at risk.

For these reasons, I developed NiJS, an internal DSL for Nix in JavaScript, a couple of years ago to make integration with JavaScript-based applications more convenient. Most notably, NiJS is used by node2nix to generate Nix expressions from NPM package deployment specifications.

I have been doing PHP development in the last couple of weeks and realized that I needed a similar solution for this language. In this blog post, I will describe PNDP, an internal DSL for Nix in PHP, and show how it can be used.

Composing Nix packages in PHP


The Nix packages repository follows a specific convention for organizing packages -- every package is a Nix expression file containing a function definition describing how to build a package from source code and its build-time dependencies.

A top-level composition expression file provides all the function invocations that build variants of packages (typically only one per package) by providing the desired versions of the build-time dependencies as function parameters.

Every package definition typically invokes stdenv.mkDerivation {} (or an abstraction built around it), which composes a dedicated build environment in which only the specified dependencies can be found and in which other precautions are taken to improve build reproducibility. In this builder environment, we can execute many kinds of build steps, such as running GNU Make, CMake, or Apache Ant.
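
For reference, such a package function looks roughly as follows in the Nix expression language (a minimal sketch of the convention; the file path below is illustrative and this is not the exact expression found in Nixpkgs):


{stdenv, fetchurl}:

stdenv.mkDerivation {
  name = "hello-2.10";

  src = fetchurl {
    url = "mirror://gnu/hello/hello-2.10.tar.gz";
    sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
  };
}

The corresponding entry in the top-level composition expression invokes this function with the desired build-time dependencies:


hello = import ./pkgs/hello {
  inherit stdenv fetchurl;
};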

In our internal DSL in PHP we can replicate these conventions using PHP language constructs. We can compose a proxy to the stdenv.mkDerivation {} invocation in PHP by writing the following class:


namespace Pkgs;
use PNDP\AST\NixFunInvocation;
use PNDP\AST\NixExpression;

class Stdenv
{
    public function mkDerivation($args)
    {
        return new NixFunInvocation(new NixExpression("pkgs.stdenv.mkDerivation"), $args);
    }
}

In the above code fragment, we define a class named Stdenv exposing a method named mkDerivation. The method composes an abstract syntax tree for a function invocation to stdenv.mkDerivation {}, using an arbitrary PHP object as a parameter.

With the proxy shown above, we can create our own packages in PHP by providing a function definition that specifies how a package can be built from source code and its build-time dependencies:


namespace Pkgs;
use PNDP\AST\NixURL;

class Hello
{
    public static function composePackage($args)
    {
        return $args->stdenv->mkDerivation(array(
            "name" => "hello-2.10",

            "src" => $args->fetchurl(array(
                "url" => new NixURL("mirror://gnu/hello/hello-2.10.tar.gz"),
                "sha256" => "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i"
            )),

            "doCheck" => true,

            "meta" => array(
                "description" => "A program that produces a familiar, friendly greeting",
                "homepage" => new NixURL("http://www.gnu.org/software/hello/manual"),
                "license" => "GPLv3+"
            )
        ));
    }
}

The above code fragment defines a class named Hello exposing one static method named composePackage(). The composePackage() method invokes the stdenv.mkDerivation {} proxy (shown earlier) to build GNU Hello from source code.

In addition to constructing a package, the above code fragment also follows the PHP conventions for modularization -- in PHP it is a common practice to modularize code chunks into classes that reside in their own namespace. For example, by following these conventions, we can also automatically load our package classes by using an autoloading implementation that follows the PSR-4 recommendation.

We can create compositions of packages as follows:


class Pkgs
{
    public $stdenv;

    public function __construct()
    {
        $this->stdenv = new Pkgs\Stdenv();
    }

    public function fetchurl($args)
    {
        return Pkgs\Fetchurl::composePackage($this, $args);
    }

    public function hello()
    {
        return Pkgs\Hello::composePackage($this);
    }
}

As with the previous example, the composition example is a class. In this case, it exposes variants of packages by calling the functions with their required function arguments. In the above example, there is only one variant of the GNU Hello package. As a result, it suffices to just propagate the object itself as build parameters.

Contrary to the Nix expression language, we must expose each package composition as a method -- the Nix expression language is a lazy language that only invokes functions when their results are needed, whereas PHP is an eager language that evaluates them at construction time.

An implication of using eager evaluation is that merely opening the composition module would trigger all package expressions to be evaluated. By wrapping the compositions into methods, we can make sure that only the requested packages are evaluated when needed.

Another practical implication of creating a method for each package composition is that it can become quite tedious if we have many of them. PHP offers a magic method named __call() that gets invoked when we call a method that does not exist. We can use this magic method to automatically compose a package based on the method name:


public function __call($name, $arguments)
{
    // Compose the class name from the method name
    $className = ucfirst($name);
    // Compose the fully qualified name of the class' composition method
    $methodName = 'Pkgs\\'.$className.'::composePackage';
    // Prepend $this so that it becomes the first function parameter
    array_unshift($arguments, $this);
    // Dynamically invoke the class' composition method with $this as the first
    // parameter, followed by the remaining parameters
    return call_user_func_array($methodName, $arguments);
}

The above method takes the (non-existent) method name, converts it into the corresponding class name (by using the camel case naming convention), invokes the package's composition method using the composition object itself as a first parameter, and any other method parameters as successive parameters.

Converting PHP language constructs into Nix language constructs


Everything that PNDP does boils down to the phpToNix() function that automatically converts most PHP language constructs into semantically equivalent or similar Nix language constructs. For example, the following PHP language constructs are converted to Nix as follows:

  • A variable of type boolean, integer or double is converted verbatim.
  • A string will be converted into a string in the Nix expression language, and conflicting characters, such as the backslash and double quote, will be escaped.
  • In PHP, arrays can be sequential (when all elements have numeric keys that appear in numeric order) or associative in the remainder of the cases. The generator tries to detect what kind of array we have. It recursively converts sequential arrays into Nix lists of Nix language elements, and associative arrays into Nix attribute sets (see the sketch after this list).
  • An object that is an instance of a class will be converted into a Nix attribute set exposing its public properties.
  • A NULL reference gets converted into a Nix null value.
  • Variables that have an unknown type or are a resource will throw an exception.
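
For instance, the intended mapping looks roughly like this (a hedged illustration of the conversion rules above, not verbatim generator output):


# PHP: array("foo", "bar")            -> a Nix list
[ "foo" "bar" ]

# PHP: array("x" => 1, "y" => true)   -> a Nix attribute set
{ x = 1; y = true; }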

As with NiJS (and JavaScript), the PHP host language does not provide equivalents for all Nix language constructs, such as values of the URL type, or encoding Nix function definitions.

You can still generate these objects by composing an abstract syntax from objects that are instances of the NixObject class. For example, when composing a NixURL object, we can generate a value of the URL type in the Nix expression language.

Arrays are a bit confusing in PHP, because you do not always know in advance whether an array will yield a list or an attribute set. To make these conversions explicit and prevent generation errors, arrays can be wrapped inside a NixList or NixAttrSet object.

Building packages programmatically


The PNDPBuild::callNixBuild() function can be used to build a generated Nix expression, such as the GNU Hello example shown earlier:


/* Evaluate the package */
$expr = PNDPBuild::evaluatePackage("Pkgs.php", "hello", false);

/* Call nix-build */
PNDPBuild::callNixBuild($expr, array());

In the code fragment above, we open the composition class file named Pkgs.php and evaluate the hello() method to generate the Nix expression. Finally, we call the callNixBuild() function, which evaluates the generated expression with the Nix package manager. When the build succeeds, the resulting Nix store path is printed on the standard output.

Building packages from the command-line


As the previous code example is so common, there is also a command-line utility that can execute the same task. The following instruction builds the GNU Hello package from the composition class (Pkgs.php):


$ pndp-build -f Pkgs.php -A hello

It may also be useful to see what kind of Nix expression is generated for debugging or testing purposes. The --eval-only option prints the generated Nix expression on the standard output:


$ pndp-build -f Pkgs.php -A hello --eval-only

We can also nicely format the generated expression to improve readability:


$ pndp-build -f Pkgs.php -A hello --eval-only --format

Discussion


In this blog post, I have described PNDP: an internal DSL for Nix in PHP.

PNDP is not the first internal DSL I have developed for Nix. A couple of years ago, I also wrote NiJS: an internal DSL in JavaScript. PNDP shares a lot of concepts and implementation details with NiJS.

In contrast to NiJS, the functionality of PNDP is much more limited -- I have developed PNDP mainly for code generation purposes. In NiJS, I have also been exploring the abilities of the JavaScript language, such as exposing JavaScript functions in the Nix expression language, and the possibilities of an internal DSL, such as creating an interpreter that makes it a primitive standalone package manager. In PNDP, all this extra functionality is missing, since I have no practical need for it.

In a future blog post, I will describe an application that uses PNDP as a generator.

Availability


PNDP can be obtained from Packagist as well as from my GitHub page. It can be used under the terms and conditions of the MIT license.

by Sander van der Burg (noreply@blogger.com) at September 11, 2017 09:21 PM

August 23, 2017

Ollie Charles

Providing an API for extensible-effects and monad transformers

I was recently working on a small project - a client API for the ListenBrainz project. Most of the details aren’t particularly interesting - it’s just an HTTP client library to a REST-like API with JSON. For the implementation, I let Servant and aeson do most of the heavy lifting, but I got stuck when considering what final API to give to my users.

Obviously, interacting with ListenBrainz requires some sort of IO, so whatever API I offer has to live within some sort of monad. Currently, there are three major options:

  1. Supply an API targeting a concrete monad stack.

    Under this option, our API would have types such as

    submitListens :: ... -> M ()
    getListens :: ... -> M Listens

    where M is some particular monad (or monad transformer).

  2. Supply an API using type classes

    This is the mtl approach. Rather than choosing which monad my users have to work in, my API can be polymorphic over monads that support accessing the ListenBrainz API. This means my API is more like:

    submitListens :: MonadListenBrainz m => ... -> m ()
    getListens :: MonadListenBrainz m => ... -> m Listens
  3. Use an extensible effects framework.

    Extensible effects are a fairly new entry that is something of a mix of the above options. We target a family of concrete monads - Eff - but the extensible effects framework lets our effect (querying ListenBrainz) seamlessly compose with other effects. Using freer-effects, our API would be:

    submitListens :: Member ListenBrainzAPICall effects => ... -> Eff effects ()
    getListens :: Member ListenBrainzAPICall effects => ... -> Eff effects Listens

So, which do we choose? Evaluating the options, I have some concerns.

For option one, we impose pain on all our users who want to use a different monad stack. It’s unlikely that your application is going to be written solely to query ListenBrainz, which means client code becomes littered with lift. You may write that off as syntactic, but there is another problem - we have committed to an interpretation strategy. Rather than describing API calls, my library now skips directly to prescribing how to run API calls. However, it’s entirely possible that you want to intercept these calls - maybe introducing a caching layer or additional logging. Your only option is to duplicate my API into your own project, wrap each function call, and then change your program to use your API rather than mine. Essentially, the program itself is no longer a first-class value that you can transform.

Extensible effects give us a solution to both of the above. The use of the Member type class automatically reshuffles effects so that multiple effects can be combined without syntactic overhead, and we only commit to an interpretation strategy when we actually run the program. Eff is essentially a free monad, which captures the syntax tree of effects, rather than the result of their execution.

Sounds good, but extensible effects come with (at least) two problems that make me hesitant: they are experimental and esoteric, and it’s unclear that they are performant. By using only extensible effects, I am forcing an extensible effects framework on my users, and I’d rather not dictate that. Of course, extensible effects can be composed with traditional monad transformers, but I’ve still imposed an unnecessary burden on my users.

So, what do we do? Well, as Old El Paso has taught us: why don’t we have both?

It’s trivial to actually support both a monad transformer stack and extensible effects by using an mtl type class. As I argue in Monad transformers, free monads, mtl, laws and a new approach, I think the best pattern for an mtl class is to be a monad homomorphism from a program description, and often a free monad is a fine choice to lift:

class Monad m => MonadListenBrainz m where
  liftListenBrainz :: Free f a -> m a

But what about f? As observed earlier, extensible effects are basically free monads, so we can actually share the same implementation. For freer-effects, we might describe the ListenBrainz API with a GADT such as:

data ListenBrainzAPICall returns where
  GetListens :: ... -> ListenBrainzAPICall Listens
  SubmitListens :: ... -> ListenBrainzAPICall ()

However, this isn’t a functor - it’s just a normal data type. In order for Free f a to actually be a monad, we need f to be a functor. We could rewrite ListenBrainzAPICall into a functor, but it’s even easier to just fabricate a functor for free - and that’s exactly what Coyoneda will do. Thus our mtl type class becomes:

class Monad m => MonadListenBrainz m where
  liftListenBrainz :: Free (Coyoneda ListenBrainzAPICall) a -> m a 

We can now provide an implementation in terms of a monad transformer:

instance Monad m => MonadListenBrainz (ListenBrainzT m) where
  liftListenBrainz f =
    iterM (join . lowerCoyoneda . hoistCoyoneda go) f

    where
      go :: ListenBrainzAPICall a -> ListenBrainzT m a

or extensible effects:

instance Member ListenBrainzAPICall effs => MonadListenBrainz (Eff effs) where
  liftListenBrainz f = iterM (join . lowerCoyoneda . hoistCoyoneda send) f 

or maybe directly to a free monad for later inspection:

instance MonadListenBrainz (Free (Coyoneda ListenBrainzAPICall)) where
  liftListenBrainz = id

For the actual implementation of performing the API call, I work with a concrete monad transformer stack:

performAPICall :: Manager -> ListenBrainzAPICall a -> IO (Either ServantError a)

which is called both by my extensible effects “run” function and by the go function in the iterM call for ListenBrainzT’s MonadListenBrainz instance.

In conclusion, I’m able to offer my users a choice of either:

  • a traditional monad transformer approach, which doesn’t commit to a particular interpretation strategy by using an mtl type class
  • extensible effects

All without extra syntactic burden, a complicated type class, or duplicating the implementation.

You can see the final implementation of listenbrainz-client here.

Bonus - what about the ReaderT pattern?

The ReaderT design pattern has been mentioned recently, so where does this fit in? There are two options if we wanted to follow this pattern:

  • We require a HTTP Manager in our environment, and commit to using this. This has all the problems of providing a concrete monad transformer stack - we are committing to an interpretation.
  • We require a family of functions that explain how to perform each API call. This is kind of like a van Laarhoven free monad, or really just explicit dictionary passing. I don’t see this really gaining much over abstracting with type classes.

I don’t feel like the ReaderT design pattern offers anything that isn’t already dealt with above.

by Oliver Charles at August 23, 2017 12:00 AM

July 03, 2017

Typing nix

Records (again)

This week has been dedicated to records: parsing, simplification and (a bit of) typing.

Summary of the week

Parsing of records

Nothing really fancy here: I've extended the parser to support records, record patterns and record type annotations.

Records and record patterns are exactly the same as in plain Nix − except that they may contain type annotations. Record type annotations look similar to records: { field_1; ...; field_n; } or { field_1; ...; field_n; ... }. The fields may have the form field_name = type or field_name =? type. The first one means that the field named field_name has type type, and the second one that the field named field_name is optional and has type type if present.

Note that contrary to records, the field names can't be dynamic (this wouldn't really make sense), and the syntactic sugar x.y.z = type isn't available. The trailing semicolon is also optional.

For example, the record

{
  x = 1;
  y = {
    z = true;
    k = x: x + 1;
  };
}

has the type

{
  x = Int;
  y = {
    z = Bool;
    k = Int → Int;
  }
}

and the function

{ x /*: Int */, y /*: Bool */ ? true, ... }: if y then x else x+1

has the type

{ x = Int; y =? Bool; ... } → Int

Simplification

Nix has a special syntax for defining nested records:

{
  x.y = 1;
  x.z = 2;
}

Although this syntax is really practical, we can't directly work on such a representation, and we must transform a record such as the previous one into something of the form { x = { y = 1; z = 2; }; }.

This is a rather simple task in the general case, but happens to be impossible as soon as some field names are dynamic: How may one know whether { ${e1}.x = 1; ${e2}.y = 2; } is a record of the form { foo = { x = 1; y = 2; }; } or of the form { foo = { x = 1; }; bar = { y = 2; }; }?

So we have to be careful when handling them. Currently, I simply assume for the sake of simplicity that no conflict will occur (i.e. that the previous example will correspond to { foo = { x = 1; }; bar = { y = 2; }; }), but this is obviously wrong in the general case, so I'll have to add more subtlety in the future.

Typing of record patterns

The two previous points took me quite a long time (worse than what I expected, to be honest), so I just began the actual typing work. I however got enough time to type the record patterns (which isn't really useful while I can't type record literals, but still).

More on this next week, because it makes more sense to present the whole record typing at once.

Next steps

Time flies, and I already have only two months left (minus some holidays that I'll have to take for my own sanity) until the end of my internship (my defense is planned on September, 1st). I plan to continue my work on this − and hope other people will join me once it reaches an advanced enough state − but I won't be able to be full-time on it, so it will likely advance much slower (hence I want to do as much as possible while I can spend all my days on it).

So here's a quick plan of what remains to do:

  • On the typing side of things:

    • Finish the typing of records

    • Write a default typing environment with all the built-in values.

      Some of them (like functionArgs for example) will be available as operators with custom typing rules because it isn't really possible to give them a meaningful type.

    • Get a real gradual type

      There's currently a type that is called "gradual" in the tool, but it is in fact nothing but a new type constant (akin to Int or Bool) without any special behaviour.

    • Fix the subtyping

      The subtyping relation that is used isn't exactly the one one would expect (because it has been designed for a strict language and Nix is lazy, and this introduces some differences in the subtyping).

      Unless some clever hack is found to reuse the existing algorithm this is gonna be really tricky and I may not be able to do it before September.

    • Rewrite the pretty-printing of types

      The pretty-printing of the types is currently handled by CDuce (the language whose type-system I based this one on), but as its types are slightly different from ours, the pretty-printing isn't always exactly accurate. So if it isn't too hard, I'll rewrite a custom pretty-printer.

  • On a more practical side:

    • Document everything

      Although all the documentation should be present in my work notes, this blog or my future thesis, a proper manual and tutorial has to be written for this to be usable by someone else than me.

    • Run more extensive tests on the parser to ensure its compatibility with plain Nix

    • Enhance the interface

      Apart from an OCaml library, the interface for the typechecker is a quickly written executable which I use for testing and which will require some work in order to be usable.

    • Choose the default warnings and errors

      The errors reported by the tool are configurable, but a couple of default sets (the "strict" and the "permissive" modes) will cover 99% of the usages and avoid the need for the user to understand every detail of the type system.

    • Enhance the error reporting

      The error messages are rather rough and need to be polished.

July 03, 2017 12:00 AM