NixOS Planet

April 09, 2017

Munich NixOS Meetup

Augsburger Linux-Infotag

The Augsburger Linux-Infotag is a one-day conference with around 20 talks and 9 workshops covering Linux, open-source software, and the digital society. Admission is free.
We will be there with a NixOS booth :-)
https://www.luga.de/A...

Augsburg - Germany

Saturday, April 22 at 9:30 AM

https://www.meetup.com/Munich-NixOS-Meetup/events/239077440/

April 09, 2017 08:16 AM

Hackathon & Barbecue

Let's do another hackathon! Bring the Nix project you are currently working on or struggling with and discuss it with other people. Followed by a barbecue in the evening. Please add to the Pad which food you will bring to the barbecue: https://etherpad.wiki...

Augsburg - Germany

Monday, May 1 at 2:00 PM

https://www.meetup.com/Munich-NixOS-Meetup/events/239077247/

April 09, 2017 08:06 AM

March 31, 2017

Sander van der Burg

Substituting impure version specifiers in node2nix generated package compositions

In a number of previous blog posts, I have described node2nix, a tool that can be used to automatically integrate NPM packages into the Nix packages ecosystem. The biggest challenge in making this integration possible is the fact that NPM does dependency management in addition to build management -- NPM's dependency management properties conflict with Nix's purity principles.

Dealing with a conflicting dependency manager is quite simple from a conceptual perspective -- you must substitute it with a custom implementation that uses Nix to obtain all required dependencies. The remaining responsibilities (such as build management) are left untouched and still have to be carried out by the guest package manager.

Although conceptually simple, implementing such a substitution approach is much more difficult than expected. For example, in my previous blog posts I have described the following techniques:

  • Extracting dependencies. In addition to the package we intend to deploy with Nix, we must also include all its dependencies and transitive dependencies in the generation process.
  • Computing output hashes. In order to make package deployments deterministic, Nix requires that the output hashes of downloads are known in advance. As a result, we must examine all dependencies and compute their corresponding SHA256 output hashes. Some NPM projects have thousands of transitive dependencies that need to be analyzed.
  • Snapshotting versions. Nix uses SHA256 hash codes (derived from all inputs to build a package) to address specific variants or versions of packages whereas version specifiers in NPM package.json configurations are nominal -- they permit version ranges and references to external artifacts (such as Git repositories and external URLs).

    For example, a version range of >= 1.0.3 might resolve to version 1.0.3 today and to version 1.0.4 tomorrow. Translating a version range to a Nix package with a hash code identifier breaks the ability for Nix to guarantee that a package with a specific hash code yields a (nearly) bit identical build.

    To ensure reproducibility, we must snapshot the resolved version of these nominal dependency version specifiers (such as a version range) at generation time and generate the corresponding Nix expression for the resulting snapshot.
  • Simulating shared and private dependencies. In NPM projects, dependencies of a package are stored in the node_modules/ sub folder of the package. Each dependency can have private dependencies by putting them in their corresponding node_modules/ sub folder. Sharing dependencies is also possible by placing the corresponding dependency in any of the parent node_modules/ sub folders.

    Moreover, although this is not explicitly advertised as such, NPM implicitly supports cyclic dependencies and is able to cope with them because it will refuse to install a dependency in a node_modules/ sub folder if any parent folder already provides it.

    When generating Nix expressions, we must replicate the exact same behaviour when it comes to private and shared dependencies. This is particularly important to cope with cyclic dependencies -- the Nix package manager does not allow them and we have to break any potential cycles at generation time.
  • Simulating "flat module" installations. In NPM versions older than 3.0, every dependency was installed privately by default unless a shared dependency exists that fits within the required version range.

    In newer NPM versions, this strategy has been reversed -- every dependency will be shared as much as possible until a conflict has been encountered. This means that we have to move dependencies as high up in the node_modules/ folder hierarchy as possible, which is an imperative operation -- in Nix this is a problem, because packages cannot be changed after they have been built.

    To cope with flattening, we must compute the implications of flattening the dependency structure in advance at generation time.

With the above techniques, it is possible to construct a node_modules/ directory structure that is nearly identical to the one NPM would normally compose. The sketch below illustrates the kind of expression this generation process yields.
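
For instance, a dependency whose version range resolved to version 1.0.3 could end up in the generated expression as an ordinary fixed-output fetch. The following is only a hedged sketch -- the package name, version and hash are made up, and real node2nix output contains considerably more machinery:

{ stdenv, fetchurl }:

stdenv.mkDerivation {
  # version range ">= 1.0.3" snapshotted to version 1.0.3 at generation time
  name = "node-semver-1.0.3";
  src = fetchurl {
    url = "https://registry.npmjs.org/semver/-/semver-1.0.3.tgz";
    # SHA256 output hash computed in advance (placeholder value)
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
}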

Impure version specifiers


Even if it would be possible to reproduce the node_modules/ directory hierarchy with 100% accuracy, there is another problem that remains -- some version specifiers always trigger network communication, regardless of whether the dependencies have been provided or not, such as:


[
{ "node2nix": "latest" }
, { "nijs": "git+https://github.com/svanderburg/nijs.git#master" }
, { "prom2cb": "github:svanderburg/prom2cb" }
]

When referring to tags or Git branches, NPM is unable to determine to which version a package resolves. As a consequence, it attempts to retrieve the corresponding packages to investigate even when a compatible version in the node_modules/ directory hierarchy already exists.

While performing package builds, Nix takes various precautions to prevent side effects from influencing builds, including blocking network connections. As a result, an NPM package deployment will still fail despite the fact that a compatible dependency has already been provided.

In the package builder Nix expression provided by node2nix, I used to substitute these version specifiers in the package.json configuration files by a wildcard: '*'. Wildcards used to work fine for old Node.js 4.x/NPM 2.x installations, but with NPM 3.x flat module installations they became another big source of problems -- to make flat module installations work, NPM needs to know which version a package resolves to, so that it can determine whether the package can be shared on a higher level in the node_modules/ folder hierarchy or not. Wildcards prevent NPM from making these comparisons and, as a result, some package deployments fail that did not fail with older versions of NPM.

Pinpointing version specifiers


In the latest node2nix I have solved these issues by implementing a different substitution strategy -- instead of substituting impure version specifiers by wildcards, I pinpoint all the dependencies to the exact version numbers to which these dependencies resolve. Internally, NPM addresses all dependencies by their names and version numbers only (this also has a number of weird implications, because it disregards the origins of these dependencies, but I will not go into detail on that).
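
For example, the impure specifiers from the fragment shown earlier could be pinpointed to something along these lines -- the version numbers below are made up for illustration; the real ones are whatever each specifier resolved to at generation time:

[
{ "node2nix": "1.1.1" }
, { "nijs": "0.0.25" }
, { "prom2cb": "0.0.1" }
]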

I got the inspiration for this pinpointing strategy from the yarn package manager (an alternative to NPM developed by Facebook) -- when deploying a project with yarn, yarn pinpoints the installed dependencies in a so-called yarn.lock file so that package deployments become reproducible when a system is deployed for a second time.

The pinpointing strategy always prevents NPM from consulting external resources (provided that our substitute dependency manager has installed the package first) and always provides version numbers for any dependency, so that NPM can perform flat module installations. As a result, the accuracy of node2nix with newer versions of NPM has improved quite a bit.

Availability


The pinpointing strategy is part of the latest node2nix that can be obtained from the NPM registry or the Nixpkgs repository.

One month ago, I gave a talk about node2nix at FOSDEM 2017 summarizing the techniques discussed in my blog posts written so far. For convenience, I have embedded the slides into this web page.

by Sander van der Burg (noreply@blogger.com) at March 31, 2017 09:25 PM

March 30, 2017

Munich NixOS Meetup

NixOS 17.03 Release Party

NixOS 17.03 "Gorilla" is the next release, scheduled for March 30, 2017. Time to party on March 31, 2017!

Meet other fellow Nix/NixOS users & developers and the 17.03 release manager for some drinks.

We will provide beer, Tschunks and non-alcoholic beverages on our rooftop terrace.

See release notes for more information about major changes and updates at http://nixos.org/nixo....

And of course the regular thank you to Eelco Dolstra for his tireless work on NixOS, Nix and all the projects around them. I'd like to thank Domen Kožar for his help in getting this release out smoothly and his regular work on NixOS, the security team for taking a lot of workload off the release manager by always making sure our systems and packages stay secure, and also Mayflower for allowing Robin to work on NixOS a lot during working hours.

The art is from http://www.hasanlai.c... released under CC-BY-SA-4.0.

München - Germany

Friday, March 31 at 7:00 PM

https://www.meetup.com/Munich-NixOS-Meetup/events/238567023/

March 30, 2017 08:52 PM

March 20, 2017

Munich NixOS Meetup

NixOS 17.03 Release Sprint

The next stable NixOS release 17.03 'Gorilla' is going to happen on the 31st of March. The goal of this sprint is to fix critical issues before the release. The release manager will also attend and is available for guidance and feedback.

• Blocking issues: https://github.com/Ni...

• All 17.03 issues: https://github.com/Ni...

The sprint will be held at the Mayflower office in Munich on Saturday and Sunday starting at 11:00. Drinks will be provided.

The art is from http://www.hasanlai.c... released under CC-BY-SA-4.0.

München - Germany

Saturday, March 25 at 11:00 AM

https://www.meetup.com/Munich-NixOS-Meetup/events/238567006/

March 20, 2017 08:49 PM

March 14, 2017

Sander van der Burg

Reconstructing Disnix deployment configurations

In two earlier blog posts, I have described Dynamic Disnix, an experimental framework enabling self-adaptive redeployment on top of Disnix. The purpose of this framework is to redeploy a service-oriented system whenever the conditions of the environment change, so that the system can still meet its functional and non-functional requirements.

An important category of events that change the environment are machines that crash and disappear from the network -- when a disappearing machine used to host a crucial service, a system can no longer meet its functional requirements. Fortunately, Dynamic Disnix is capable of automatically responding to such events by deploying the missing components elsewhere.

Although Dynamic Disnix supports the recovery of missing services, there is one particular kind of failure I did not take into account. In addition to the target machines that host the services of which a service-oriented system consists, the coordinator machine that initiates the deployment process and stores the deployment state could also crash and disappear. When the deployment state gets lost, it is no longer possible to reliably update the system.

In this blog post, I will describe a new addition to the Disnix toolset that can be used to cope with these kinds of failures by reconstructing a coordinator machine's deployment configuration from the metadata stored on the target machines.

The Disnix upgrade workflow


As explained in earlier blog posts, Disnix requires three kinds of deployment models to carry out a deployment process: a services model capturing the components of which a system consists, an infrastructure model describing the available target machines and their properties, and a distribution model mapping services in the services model to target machines in the infrastructure model. By writing instances of these three models and running the following command-line instruction:

$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix

Disnix will carry out all activities necessary to deploy the system: building the services and their intra-dependencies from source code, distributing the services and their intra-dependencies, and activating all services in the right order.
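
As a hedged illustration of one of these models: a minimal distribution model is an ordinary Nix expression mapping service names to machines in the infrastructure model (the service and machine names below are hypothetical):

{infrastructure}:

{
  StaffTracker = [ infrastructure.test1 ];
  staffService = [ infrastructure.test1 infrastructure.test2 ];
}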

When changing any of the models and running the same command-line instruction again, Disnix attempts to upgrade the system, only rebuilding the parts that have changed, deactivating obsolete services and activating new ones.

Disnix (as well as other Nix-related tools) attempts to optimize a redeployment process by only executing the steps required to reach the new deployment state. In Disnix, the building and distribution steps are optimized due to the fact that every package is stored in isolation in the Nix store, in which each package has a unique filename with a hash prefix, such as:

/nix/store/acv1y1zf7w0i6jx02kfa6gxyn2kfwj3l-firefox-48.0.2

As explained in a number of earlier blog posts, the hash prefix (acv1y1zf7w0i6jx02kfa6gxyn2kfwj3l...) is derived from all inputs used to build the package, including its source code, build script, and the libraries that it links to. That means, for example, that if we upgrade a system and none of the inputs of Firefox change, we get an identical hash, and if such a package build already exists, we do not have to build the package again or transfer it from an external site.

The building step in Disnix produces a so-called low-level manifest file that is used by tools executing the remaining deployment activities:

<?xml version="1.0"?>
<manifest version="1">
<distribution>
<mapping>
<profile>/nix/store/aiawhpk5irpjqj25kh6ah6pqfvaifm53-test1</profile>
<target>test1</target>
</mapping>
</distribution>
<activation>
<mapping>
<dependsOn>
<dependency>
<target>test1</target>
<container>process</container>
<key>d500194f55ce2096487c6d2cf69fd94a0d9b1340361ea76fb8b289c39cdc202d</key>
</dependency>
</dependsOn>
<name>nginx</name>
<service>/nix/store/aa5hn5n1pg2qbb7i8skr6vkgpnsjhlns-nginx-wrapper</service>
<target>test1</target>
<container>wrapper</container>
<type>wrapper</type>
<key>da8c3879ccf1b0ae34a952f36b0630d47211d7f9d185a8f2362fa001652a9753</key>
</mapping>
</activation>
<targets>
<target>
<properties>
<hostname>test1</hostname>
</properties>
<containers>
<mongo-database/>
<process/>
<wrapper/>
</containers>
<system>x86_64-linux</system>
<numOfCores>1</numOfCores>
<clientInterface>disnix-ssh-client</clientInterface>
<targetProperty>hostname</targetProperty>
</target>
</targets>
</manifest>

The above manifest file contains the following kinds of information:

  • The distribution element section maps Nix profiles (containing references to all packages implementing the services deployed to the machine) to target machines in the network. This information is used by the distribution step to transfer packages from the coordinator machine to a target machine.
  • The activation element section contains elements specifying which service to activate on which machine in the network, including other properties relevant to the activation, such as the type plugin that needs to be invoked to take care of the activation process. This information is used by the activation step.
  • The targets section contains properties of the machines in the network and is used by all tools that carry out remote deployment steps.
  • There is also an optional snapshots section (not shown in the code fragment above) that contains the properties of services whose state needs to be snapshotted, transferred and restored in case their location changes.

When a Disnix (re)deployment process successfully completes, Disnix stores the above manifest as a Disnix coordinator Nix profile on the coordinator machine for future reference, in order to optimize the next upgrade -- when redeploying a system, Disnix compares the newly generated manifest with the previously deployed one, and only deactivates services that have become obsolete and activates services that are new, making upgrades more efficient than fresh installations.

Unfortunately, when the coordinator machine storing the manifests gets lost, then also the deployment manifest gets lost. As a result, a system can no longer be reliably upgraded -- without deactivating obsolete services, newly deployed services may conflict with services that are already running on the target machines preventing the system from working properly.

Reconstructible manifests


Recently, I have modified Disnix in such a way that the deployment manifests on the coordinator machine can be reconstructed. Each Nix profile that Disnix distributes to a target machine includes a so-called profile manifest file, e.g. /nix/store/aiawhpk5irpjqj25kh6ah6pqfvaifm53-test1/manifest. Previously, this file only contained the Nix store paths to the deployed services and was primarily used by the disnix-query tool to display the installed set of services per machine.

In the latest Disnix, I have changed the format of the profile manifest file to contain all required metadata so that the activation mappings can be reconstructed on the coordinator machine:

stafftracker
/nix/store/mi7dn2wvwvpgdj7h8xpvyb04d1nycriy-stafftracker-wrapper
process
process
d500194f55ce2096487c6d2cf69fd94a0d9b1340361ea76fb8b289c39cdc202d
false
[{ target = "test2"; container = "process"; _key = "4827dfcde5497466b5d218edcd3326327a4174f2b23fd3c9956e664e2386a080"; } { target = "test2"; container = "process"; _key = "b629e50900fe8637c4d3ddf8e37fc5420f2f08a9ecd476648274da63f9e1ebcc"; } { target = "test1"; container = "process"; _key = "d85ba27c57ba626fa63be2520fee356570626674c5635435d9768cf7da943aa3"; }]

The above code fragment shows a portion of the profile manifest. It has a line-oriented structure in which every 7 lines represent the properties of a deployed service. The first line denotes the name of the service, the second line the Nix store path, the third line the Dysnomia container, the fourth line the Dysnomia type, the fifth line the hash code derived from all properties, the sixth line whether the attached state must be managed by Disnix, and the seventh line an encoding of the inter-dependencies.

The other portions of the deployment manifest can be reconstructed as follows: the distribution section can be derived by querying the Nix store paths of the installed profiles on the target machines, the snapshots section by checking which services have been marked as stateful and the targets section can be directly derived from a provided infrastructure model.

With the augmented data in the profile manifests on the target machines, I have developed a tool named disnix-reconstruct that can reconstruct a deployment manifest from the metadata that the profile manifests on the target machines provide.

I can now, for example, delete all the deployment manifest generations on the coordinator machine:

$ rm /nix/var/nix/profiles/per-user/sander/disnix-coordinator/*

and reconstruct the latest deployment manifest, by running:

$ disnix-reconstruct infrastructure.nix

The above command resolves the full paths to the Nix profiles on the target machines, then downloads their intra-dependency closures to the coordinator machine, reconstructs the deployment manifest from the profile manifests and finally installs the generated deployment manifest.

If the above command succeeds, then we can reliably upgrade a system again with the usual command-line instruction:

$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix

Extending the self-adaptive deployment framework


In addition to reconstructing deployment manifests that have gone missing, disnix-reconstruct offers another benefit -- the self-adaptive redeployment framework described in the two earlier blog posts is capable of responding to various kinds of events, including redeploying services to other machines when a machine crashes and disappears from the network.

However, when a machine disappears from the network and reappears at a later point in time, Disnix no longer knows about its configuration, and this could have disastrous results.

Fortunately, by adding disnix-reconstruct to the framework we can solve this issue:


As shown in the above diagram, whenever a change in the infrastructure is detected, we reconstruct the deployment manifest so that Disnix knows which services are deployed to the reappearing machine. When the system is then redeployed, the services on the reappearing machines can also be upgraded or undeployed completely, if needed.

The automatic reconstruction feature can be used by providing the --reconstruct parameter to the self-adapt tool:


$ dydisnix-self-adapt -s services.nix -i infrastructure.nix -q qos.nix \
--reconstruct

Conclusion


In this blog post, I have described the latest addition to Disnix: disnix-reconstruct, which can be used to reconstruct the deployment manifest on the coordinator machine from metadata stored on the target machines. With this addition, we can still update systems if the coordinator machine gets lost.

Furthermore, we can use this addition in the self-adaptive deployment framework to deal with reappearing machines that already have services deployed to them.

Finally, besides developing disnix-reconstruct, I have reached another stable point. As a result, I have decided to release Disnix 0.7. Consult the Disnix homepage for more information.

by Sander van der Burg (noreply@blogger.com) at March 14, 2017 11:02 PM

March 02, 2017

Munich NixOS Meetup

guix/guixSD: Talk & Discussion

John Darrington will give a talk about guix and guixSD, a package manager and a Linux distribution based on concepts similar to nix/NixOS.

The talk will be in English.

Augsburg - Germany

Wednesday, March 29 at 7:00 PM

https://www.meetup.com/Munich-NixOS-Meetup/events/237831744/

March 02, 2017 08:16 AM

December 31, 2016

Domen Kozar

Reflecting on 2016

Haven't blogged in 2016, but a lot has happened.

A quick summary of highlighted events:

2016 was a functional programming year, as I had planned by the end of 2015.

I greatly miss the Python community, and in that spirit I attended EuroPython 2016 and helped organize DragonSprint in Ljubljana. I don't think there's a place for me in OOP anymore, but I'll surely attend community events when nostalgia kicks in.

2017 seems extremely exciting; plans will unfold as I go, starting with some exciting news in January for the Nix community.

by Domen Kožar at December 31, 2016 06:00 PM

December 27, 2016

Joachim Schiele

33C3-nixos

nixos assembly

if you want to join our nixos sprint, you can find our assembly here:

hope to see you!

qknight

by qknight at December 27, 2016 11:35 AM

December 20, 2016

Rok Garbas

Reproducible builds summit in Berlin

Last week a Reproducible Builds Summit was held in Berlin and since it was almost at my doorstep I had to take part. Along with Eelco Dolstra, I represented the NixOS voice in the debates that were happening at the summit.

Group picture of participant of Reproducible Builds summit in Berlin

When is a build reproducible?

There were quite a few discussion sessions trying to define when a piece of software is reproducible. While it is important to have a definition in words, I was hoping for a simple tool that could tell me this.

Such a tool might never exist. What I realized during the summit is that reproducibility is not something which is true or false, but something that is true until somebody disproves it. Reproducibility is a goal we are always working towards, just like security.

As with zero vulnerability days, there should probably be zero reproducibility days.

Making sure that our software is reproducible would follow practices similar to those we already have for security: third-party auditing, a CVE-like database of reproducibility bugs, ...

Knowing if something is reproducible is not a simple yes/no question, but it is a process you need to follow. Yaay I am not too old to learn something :)

Different kinds of reproducibility

For the purpose of this blog post I would like to point out that there are - at least - two kinds of reproducibility.

When we talked about reproducibility at the summit we were of course referring to bit-by-bit reproducibility or, as I will continue to call it during this blog post, binary reproducibility.

And then there is what I like to call build reproducibility, which only ensures the reproducibility of build environments (e.g. the versions of tools in build environments are the same).

The purpose of the Reproducible Builds effort is of course to be binary reproducible, but to build something bit-by-bit identical you need to use the same versions of tools, which makes build reproducibility a pre-step of binary reproducibility.

Build reproducibility is a prerequisite of binary reproducibility.

Why this distinction matters I will explain in a bit; for now, just acknowledge this naming.

We are all biased

A leader in Reproducible Builds efforts is the Debian community. You can see that the Debian community is working hard on this, and many Debian developers were present at the summit.

Getting involvement from outside the Debian community is high on the list, since everybody realizes that only with common efforts will we be able to achieve reproducibility nirvana.

But regardless of all the good intentions, I noticed two biases that I would like to point out.

  • Many of us look down on language-specific package managers (e.g. pip, cabal, ...) as being less worthy, usually talking about them along the lines of: "Who in their sane mind would use the latest version of packages?". It sounds like the usual developer vs. sysadmin conversation. I hope at the next summit we could also have representatives from some of the language-specific package managers join the discussions.
  • I got the impression that the sole (marketed) reason for reproducible builds is that you would be more secure. That implies that everybody cares about security, which would be great, but in a world of tight deadlines and startups, security is usually the first thing that gets crossed off the list. We need a more compelling reason than just security. I am aware that security is important to many, but we must also understand that it is not the top priority for everybody.

I probably missed some because of my own biases, but the only way to overcome such things is to try to expose each other's biases.

Why would you care about reproducible builds?

My personal quest for this summit was to find better ways to market reproducible builds. I found that many who tried or want to introduce reproducible builds at work fail for a few reasons:

  • you need to opt in and switch from tools that you are used to.
  • many tools you are already using were not built with reproducibility in mind.
  • reproducibility many times sounds like: all or nothing.
  • the prevalent (marketed) benefit of reproducible builds is better security. not everybody requires that level of security.
  • a high cost (usually in developer hours) is required.

Reproducibility as productivity tool

What if we turned the marketing of reproducible builds around? What if the main (marketed) reason for reproducible builds were to improve developer productivity?

In a previous paragraph I already explained that reproducibility is not a simple yes/no question, but a process one must follow. Likewise, the path to reproducible builds is not a simple on/off switch. There are many steps on the way to binary reproducibility that already improve current development practices.

One example of this could be reproducible development environments. If we have tools that can recreate any environment, couldn't we also use them for development purposes?
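
In Nix terms, a hedged sketch of such a reproducible development environment could look like this -- the tool selection is arbitrary, and pinning the nixpkgs revision would make it fully reproducible:

with import <nixpkgs> {};

stdenv.mkDerivation {
  name = "dev-environment";
  # everyone entering this shell via nix-shell gets the same tools
  buildInputs = [ gnumake gcc python ];
}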

And we don't have to stop here. Isn't the most expensive part of fixing bugs reproducing them? Couldn't build reproducibility also help us with reproducing certain types of bugs?

Maybe by now it has become a little clearer that there are many benefits along the way to binary reproducibility that might be more convincing for some companies.

Cross platform build tool

Another discussion I watched from across the room was about the BuildInfo specification. As part of reproducibly building Debian packages, a BuildInfo file is (will be) produced which contains all the needed instructions, sources and final checksums, allowing somebody else to verify (reproduce) the resulting binary.

I was not alone in thinking that this verification process should/could be distribution-agnostic. A group was even formed to discuss this, but sadly I was too busy in other discussions to take part.

But then I realized that the BuildInfo effort is actually changing a binary distribution like Debian into a source -> binary distribution. Why produce the BuildInfo file after the build process? Why not start with it, and only record the checksums of the binaries after the build is done?

Is there such a build tool that works across distributions and would allow us to have the BuildInfo specification (except the checksums) before the build process? Of course there is: Nix.

What many do not know about Nix is that Nix is first and foremost a build tool. It just happens that there is a database of packages whose builds are already described, and as a side effect Nix can also be a package manager. But at its core it is a build tool. Nix can build .deb or .rpm packages or any other format you want.
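
To make that concrete, here is a hedged sketch of using Nix purely as a build tool for an arbitrary C program, without involving any package database (the file names are illustrative):

with import <nixpkgs> {};

stdenv.mkDerivation {
  name = "hello-build";
  src = ./.;  # any source tree containing hello.c
  buildPhase = "gcc -o hello hello.c";
  installPhase = "mkdir -p $out/bin && cp hello $out/bin/";
}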

What I would like to see is that whoever is looking in this direction gives Nix a try and at least learns from it, because the Nix and NixOS community has been doing build reproducibility for the last 10 years.

Conclusion

Few things I want you to take from this blog post are:

  • Go to the next Reproducible Builds summit. It was great. I hope to be there too.
  • Reproducibility is a process and not a state.
  • There are many useful steps before you reach reproducible nirvana. It might make sense to market those as well.
  • Nix is a cross distribution build tool. Use it. I know I will :)

by Rok Garbas at December 20, 2016 11:00 PM

December 14, 2016

Joachim Schiele

rust-cargo

motivation

our latest encounter with rust was cargo, which we replaced so that it is no longer used for deployment in nixos.

more can be found at: https://github.com/nixcloud/nixcrates

tar-example

here is a tiny example using the tar-crate with the source from https://github.com/alexcrichton/tar-rs#writing-an-archive

tar-example source code

tar-example = stdenv.mkDerivation rec {
  name = "tar-example";
  src = ./example/src3;
  buildInputs = with allCrates; [ tar filetime libc xattr ]; 
  buildPhase = ''
    ${symlinkCalc buildInputs}

    # this creates the files needed for the test
    echo "hu" > file1.txt
    echo "bar" > file2.txt
    echo "batz" > file3.txt 

    ${rustcNightly}/bin/rustc $src/main.rs --crate-type "bin" --emit=dep-info,link --crate-name main -L dependency=mylibs --extern tar=${allCrates.tar}/libtar.rlib
    ./main
    if [ -f foo.tar ]; then
      echo -e "---------\nSUCCESS: tar-example was executed successfully!   \n--------"
    else
      echo "FAIL: not working!"
    fi
  '';
  installPhase=''
    mkdir $out
  '';
};

after executing it

nix-build  -A tar-example
these derivations will be built:
  /nix/store/wmfbc5x59pnhmvh4fdql7gnbymsli33w-tar.drv
  /nix/store/sr1cgvkapz044wgx1rw6261hl7d9682y-tar-example.drv
building path(s) ‘/nix/store/hgravz7r38n3ic8msn94qy761zzi6jyw-tar’
unpacking sources
tar-0.4.9/.gitignore
tar-0.4.9/.travis.yml
tar-0.4.9/Cargo.toml
tar-0.4.9/LICENSE-APACHE
tar-0.4.9/LICENSE-MIT
tar-0.4.9/README.md
tar-0.4.9/appveyor.yml
tar-0.4.9/examples/extract_file.rs
tar-0.4.9/examples/list.rs
tar-0.4.9/examples/raw_list.rs
tar-0.4.9/examples/write.rs
tar-0.4.9/src/archive.rs
tar-0.4.9/src/builder.rs
tar-0.4.9/src/entry.rs
tar-0.4.9/src/entry_type.rs
tar-0.4.9/src/error.rs
tar-0.4.9/src/header.rs
tar-0.4.9/src/lib.rs
tar-0.4.9/src/pax.rs
tar-0.4.9/tests/all.rs
tar-0.4.9/tests/archives/directory.tar
tar-0.4.9/tests/archives/duplicate_dirs.tar
tar-0.4.9/tests/archives/empty_filename.tar
tar-0.4.9/tests/archives/file_times.tar
tar-0.4.9/tests/archives/link.tar
tar-0.4.9/tests/archives/pax.tar
tar-0.4.9/tests/archives/reading_files.tar
tar-0.4.9/tests/archives/simple.tar
tar-0.4.9/tests/archives/spaces.tar
tar-0.4.9/tests/archives/sparse.tar
tar-0.4.9/tests/archives/xattrs.tar
tar-0.4.9/tests/entry.rs
tar-0.4.9/tests/header/mod.rs
patching sources
configuring
no configure script, doing nothing
building
tar -  --extern libc=/nix/store/nc2jvn8rzbrbbqdwfwc7clzl99za9w2r-libc/liblibc.rlib --extern filetime=/nix/store/a5rr0mvyqnvq3mawhacwb49i101lyp4v-filetime/libfiletime.rlib
namefix tar
name tar
About to use rustc to compile some lib - tar
installing
post-installation fixup
shrinking RPATHs of ELF executables and libraries in /nix/store/hgravz7r38n3ic8msn94qy761zzi6jyw-tar
patching script interpreter paths in /nix/store/hgravz7r38n3ic8msn94qy761zzi6jyw-tar
building path(s) ‘/nix/store/m0kslv072cphsk11n4696lzncc6rprc1-tar-example’
unpacking sources
unpacking source archive /nix/store/3q2yq22lh5shr0w4fxhcw8h1s61p6q9y-src3
source root is src3
patching sources
configuring
no configure script, doing nothing
building
warning: unused import: `std::io::prelude::*;`, #[warn(unused_imports)] on by default
 --> /nix/store/3q2yq22lh5shr0w4fxhcw8h1s61p6q9y-src3/main.rs:3:5
  |
3 | use std::io::prelude::*;
  |     ^^^^^^^^^^^^^^^^^^^^

---------
SUCCESS: tar-example was executed successfully!   
--------
installing
post-installation fixup
shrinking RPATHs of ELF executables and libraries in /nix/store/m0kslv072cphsk11n4696lzncc6rprc1-tar-example
patching script interpreter paths in /nix/store/m0kslv072cphsk11n4696lzncc6rprc1-tar-example
/nix/store/m0kslv072cphsk11n4696lzncc6rprc1-tar-example

summary

the project is still in its early stages but is evolving into a reimplementation of cargo. however, it is pretty easy to use and already works for many crates.

by qknight at December 14, 2016 07:35 PM

November 15, 2016

Joachim Schiele

nixos-augsburg-sprint-2016-11

nixos-sprint

paul and I visited the augsburger openlab again!

projects

profpatsch

  • package draw.io
    • ✔ build with ant
  • ✔ Initialize package tests

uwap

  • Quassel + qt4 doesn't support postgresql as database backend
    • ✔ Add an option to the quassel service to allow the qt5 version
  • "nixify" postfix configuration

christine

paul, michael & qknight

  • ✔ started nextcloud packaging
  • ✔ leaps: packaged with tests: https://github.com/NixOS/nixpkgs/commit/47d81ed3473f33cfb48f2be079f50cdfac60f1e7
  • ✔ made https://github.com/QuiteRSS/quiterss work on nixos, https://github.com/NixOS/nixpkgs/pull/20245
  • fixed nixcloud.io email system so that qknight can use thunderbird with STARTTLS and submission

    submissionOptions = {
      "smtpd_tls_security_level" = "encrypt";
      "smtpd_sasl_auth_enable" = "yes";
      "smtpd_client_restrictions" = "permit_sasl_authenticated,reject";
      "smtpd_sasl_type" = "dovecot";
      "smtpd_sasl_path" = "private/auth";
    };
  • LXC: Unprivileged container with NixOS as guest and as host:
    • ✔ LXC container is started as root, which spawns the LXC as user 100000, unprivileged on the host
    • ✔ shared read only store with the host
    • ✔ container can be build and updated on the host with nix-env

summary

this sprint was awesome. we got so many things working!

by qknight at November 15, 2016 12:10 PM

November 07, 2016

Anders Papitto

Scripting pulseaudio, bluetooth, jack

Posted on November 7, 2016

I’ve just leveled up my audio configuration, and there’s precious little information out there on how to script against pulseaudio and bluetooth and jack, so I’ll document it a bit.

Future versions of all snippets will live in my nixos configuration repo.

Here are some things that I have working:

  • I can run jack alongside pulseaudio. youtube videos can play at the same time as audio software.
  • I can play audio through bluetooth headphones. I can even do this simultaneously with running jack - though, only pulseaudio outputs can be sent to the bluetooth headphones.

And that’s basically it. But it’s tricky to set up for the first time. So, let’s look at the implementation.

system configuration

First of all, I’ve scrapped the default /etc/pulse/default.pa, and written my own. It’s largely the same - I went through the default and copied in each line, except that I skipped all the modules that have to do with restoring streams and devices, etc. It’s quite annoying when pulseaudio has persistent state and tries to do “smart” things - I would rather my scripts be in full control. Note the presence of some jackdbus and bluetooth modules.

In that file, I also have some pam limits configured, as well as some kernel modules and /etc/jackdrc. Those precede my most recent pass of configuration, so I’m not sure if they’re actually necessary.
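
For reference, a hedged sketch of how a hand-written default.pa can be wired in on NixOS -- the option exists in NixOS, the relative path is illustrative:

# point pulseaudio at the hand-written config instead of the default
hardware.pulseaudio.configFile = ./default.pa;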

Assuming that you can get jack and/or bluetooth to work the first time, the interesting bits are the scripts that make them not a pain to deal with. I have two main scripts - switch-to-jack, switch-to-bluetooth. I also have a mostly-unused switch-to-stereo for completeness.

switch-to-jack looks like this

#!/bin/bash
set -x
until pacmd list-sinks | egrep -q 'jack_out'
do
    jack_control start
done
pactl set-sink-volume jack_out 50%
pacmd set-default-sink jack_out
for index in $(pacmd list-sink-inputs | grep index | awk '{ print $2 }')
do
    pacmd move-sink-input $index jack_out
done

It starts jack with jack_control. This will cause jack to take exclusive control of the sound card. pulseaudio will notice this and add a sink and source for jack - however because all the ‘smart’ modules were removed, it won’t redirect any active streams to jack. They will just freeze, because they no longer have access to the sound card.

Then, I set the volume to something low enough to be safe - disabling the ‘smarts’ means that the jack device always gets added at 100% volume, which generally is too loud. After doing that, it’s safe to tell future audio to default to playing over jack, and then to move all existing streams there.

switch-to-bluetooth is similar

#!/bin/bash

set -x

DEVICE=$(bluetoothctl <<< devices | egrep '^Device' | awk '{ print $2 }')
until bluetoothctl <<< show | grep -q 'Powered: yes'
do
    bluetoothctl <<< 'power on'
done
until pacmd list-sinks | egrep -q 'name:.*bluez_sink'
do
    bluetoothctl <<< "connect $DEVICE"
done

TARGET_CARD=$(pacmd list-cards | grep 'name:' | egrep -o 'bluez.*[^>]')
TARGET_SINK=$(pacmd list-sinks | grep 'name:' | egrep -o 'bluez.*[^>]')

until pacmd list-cards | egrep -q 'active profile: <a2dp_sink>'
do
    pacmd set-card-profile $TARGET_CARD a2dp_sink
done


pactl set-sink-volume $TARGET_SINK 30%
pacmd set-default-sink $TARGET_SINK
for index in $(pacmd list-sink-inputs | grep index | awk '{ print $2 }')
do
    pacmd move-sink-input $index $TARGET_SINK
done

A couple of differences are:

  • we make sure bluetooth is on
  • we have to connect to the specific device. If I had more than one set of bluetooth headphones this logic might be more complicated.
  • I have to always tell pulseaudio to use a2dp instead of hsp/hfp, because again I disabled the ‘remembering’ modules in pulseaudio.

Also note that this is the ‘steady-state’ implementation. The first time you want to connect a particular bluetooth device, you have to go through a little dance that looks something like this:

$ bluetoothctl
[NEW] Controller 60:57:18:9B:AB:71 BlueZ 5.40 [default]

[bluetooth]# agent on
[bluetooth]# Agent registered

[bluetooth]# discoverable on
[bluetooth]# Changing discoverable on succeeded
[CHG] Controller 60:57:18:9B:AB:71 Discoverable: yes

[bluetooth]# scan on
[bluetooth]# Discovery started
[CHG] Controller 60:57:18:9B:AB:71 Discovering: yes
[NEW] Device E8:07:BF:00:14:14 Mixcder ShareMe 7

[bluetooth]# [CHG] Device E8:07:BF:00:14:14 RSSI: -52

[bluetooth]# scan off
[bluetooth]# [CHG] Device E8:07:BF:00:14:14 RSSI is nil
Discovery stopped
[CHG] Controller 60:57:18:9B:AB:71 Discovering: no

[bluetooth]# devices
Device E8:07:BF:00:14:14 Mixcder ShareMe 7
[bluetooth]# pair E8:07:BF:00:14:14
Attempting to pair with E8:07:BF:00:14:14
[CHG] Device E8:07:BF:00:14:14 Connected: yes

[Mixcder ShareMe 7]# [CHG] Device E8:07:BF:00:14:14 Modalias: bluetooth:v0094p5081d0101
[CHG] Device E8:07:BF:00:14:14 UUIDs: 00001108-0000-1000-8000-00805f9b34fb
[CHG] Device E8:07:BF:00:14:14 UUIDs: 0000110b-0000-1000-8000-00805f9b34fb
[CHG] Device E8:07:BF:00:14:14 UUIDs: 0000110c-0000-1000-8000-00805f9b34fb
[CHG] Device E8:07:BF:00:14:14 UUIDs: 0000110e-0000-1000-8000-00805f9b34fb
[CHG] Device E8:07:BF:00:14:14 UUIDs: 0000111e-0000-1000-8000-00805f9b34fb
[CHG] Device E8:07:BF:00:14:14 UUIDs: 00001200-0000-1000-8000-00805f9b34fb
[CHG] Device E8:07:BF:00:14:14 Paired: yes
Pairing successful

[CHG] Device E8:07:BF:00:14:14 Connected: no

[bluetooth]# connect E8:07:BF:00:14:14
Attempting to connect to E8:07:BF:00:14:14
[CHG] Device E8:07:BF:00:14:14 Connected: yes
Connection successful

[Mixcder ShareMe 7]# exit
[Mixcder ShareMe 7]# Agent unregistered
[DEL] Controller 60:57:18:9B:AB:71 BlueZ 5.40 [default]

Abandoning ship

Laptop suspend causes bluetooth to disconnect. With the pulseaudio ‘smarts’ disabled, as far as I’m aware the bluetooth-connected streams will just die. To work around this, I switch everything to jack right before shutting down with a systemd service.
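
A hedged sketch of what such a unit might look like in NixOS configuration -- the unit name and script path are made up, and hooking sleep.target is my assumption about the trigger:

systemd.services.switch-to-jack-on-sleep = {
  description = "move audio streams back to jack before suspend";
  # run before the system goes to sleep (assumed trigger)
  before = [ "sleep.target" ];
  wantedBy = [ "sleep.target" ];
  serviceConfig = {
    Type = "oneshot";
    ExecStart = "/path/to/switch-to-jack";  # hypothetical path
  };
};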

Other tips

Given the command line tooling, it’s not really necessary to use any guis. However, I found it convenient to use pavucontrol while setting all this up just to keep an eye on what was active and which streams were going where.

Note that I set volume with pactl, and most other things use pacmd. I’m not sure what the exact differences between these tools are, but pacmd doesn’t support setting a percent-based volume.

November 07, 2016 12:00 AM

November 06, 2016

Joachim Schiele

pulseaudio tcp streaming

motivation

this is a simple setup for streaming pulseaudio streams over the network.

server

https://github.com/openlab-aux/vuizvui/blob/master/machines/labnet/labtops.nix#L26

hardware.pulseaudio = {
  enable = true;
  tcp.enable = true;
  tcp.anonymousClients.allowedIpRanges = [ "172.16.0.0/16" ];
  zeroconf.publish.enable = true;
};

client

hardware.pulseaudio.zeroconf.discovery.enable = true;

to use the new setup, simply play some music; in pavucontrol you can then select a different output device for the listed stream.

by qknight at November 06, 2016 12:35 AM

November 05, 2016

Joachim Schiele

apu

motivation

a few months back i've replaced the odroid XU4 with this APU 2c4 board.

installing nixos

first have a look into the apu2 manual.

since there is no VGA/DVI output, but only an RS232 serial interface, we need to use that:

  1. serial cable

    for simplicity i soldered one myself; the pins are:

    pin 2 to pin 3
    pin 3 to pin 2
    pin 5 to pin 5 (GND)

    i've been using this with a USB-to-RS232 converter

    # lsusb
    Bus 003 Device 003: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port
  2. connecting via serial console:

    picocom /dev/ttyUSB0 -b 115200
  3. nixos boot cd

    download the nixos-minimal-16.03.714.69420c5-x86_64-linux.iso and use unetbootin to deploy it to a USB stick. afterwards mount the first partition of the USB stick and append this to the syslinux.cfg file's kernel command line:

    console=ttyS0,115200n8

    info: using the serial console you can see the GRUB output, see the kernel's output after boot and finally get a shell.

  4. booting from the USB stick

    the apu 2c4 features coreboot and the process is straightforward: just hit F10 and select the USB stick

  5. nixos installation

    basically follow the nixos manual

    info: but don't forget to include this line in configuration.nix:

    boot.kernelParams = [ "console=ttyS0,115200n8" ];

/etc/nixos/configuration.nix

# Edit this configuration file to define what should be installed on
# your system.  Help is available in the configuration.nix(5) man page
# and in the NixOS manual (accessible by running ‘nixos-help’).

{ config, pkgs, ... }:

let
  pw = import ./passwords.nix;
in
# setfacl -R -m u:joachim:rwx /backup

{
  imports =
    [ # Include the results of the hardware scan.
      ./hardware-configuration.nix
    ];

  # Use the GRUB 2 boot loader.
  boot.loader.grub.enable = true;
  boot.loader.grub.version = 2;
  # Define on which hard drive you want to install Grub.
  boot.loader.grub.device = "/dev/sda";

  boot.kernelParams = [ "console=ttyS0,115200n8" ];

  networking = {
    hostName = "apu-nixi"; # Define your hostname.
    bridges.br0.interfaces = [ "enp1s0" "wlp4s0" ];
    firewall = {
      enable = true;
      allowPing = true;
      allowedTCPPorts = [ 22 ];
      #allowedUDPPorts = [ 5353 ];
    };

  };

  # networking.wireless.enable = true;  # Enables wireless support via wpa_supplicant.

  #Select internationalisation properties.
  i18n = {
    consoleFont = "Lat2-Terminus16";
    consoleKeyMap = "us";
    defaultLocale = "en_US.UTF-8";
  };

  security.sudo.enable = true;

  programs.zsh.enable = true;
  users.defaultUserShell = "/run/current-system/sw/bin/zsh";

  services = {
    nscd.enable = true;
    ntp.enable = true;
    klogd.enable = true;
    nixosManual.enable = false; # slows down nixos-rebuilds, also requires nixpkgs.config.allowUnfree here..?
    xserver.enable = false;

    cron = {
      enable = true;
      mailto = "js@lastlog.de";
      systemCronJobs = [
        "0 0,8,16 * * * joachim cd /backup/; ./run_backup.sh"
        #*     *     *   *    *            command to be executed
        #-     -     -   -    -
        #|     |     |   |    |
        #|     |     |   |    +----- day of week (0 - 6) (Sunday=0)
        #|     |     |   +------- month (1 - 12)
        #|     |     +--------- day of month (1 - 31) 
        #|     +----------- hour (0 - 23)
        #+------------- min (0 - 59)
      ]; 
    };  
  };  

  # Set your time zone.
  # time.timeZone = "Europe/Amsterdam";

  # List packages installed in system profile. To search by name, run:
  environment.systemPackages = with pkgs; [
    borgbackup
    bridge-utils
    pciutils
    openssl
    ethtool
    #borg
    iotop
    cryptsetup
    parted
    pv
    tmux
    htop
    git
    dfc
    vim
    wget
    linuxPackages.cpupower
    powertop
    usbutils
    ethtool
    smartmontools
    nix-repl
    manpages
    ntfs3g
    lsof
    iptraf
    mc
    hdparm
    sdparm
    file
    dcfldd
    dhex
    inotifyTools
    nmap
    tcpdump
    silver-searcher
    #emacs
  ];

  time.timeZone = "Europe/Berlin";

  # Enable the OpenSSH daemon.
  services.openssh = {
    enable = true;
    permitRootLogin = "without-password";
  };

  systemd.services.hostapd.after = [ "sys-subsystem-net-devices-wlp4s0.device" ];

  services.hostapd = {
    enable = true;
    wpaPassphrase = pw.wpaPassphrase;
    interface = "wlp4s0";
    ssid="flux";
  };


  # Define a user account. Don't forget to set a password with ‘passwd’.
  users.extraUsers.joachim = {
    isNormalUser = true;
    uid = 1000;
  };

  # The NixOS release to be compatible with for stateful data such as databases.
  system.stateVersion = "16.09";
}

WD passport USB 3.0 bug

with a WD passport USB 3.0 disk i can't boot the system since i hit this bug.

SeaBIOS (version ?-20160311_005214-0c3a223c2ee6)
XHCI init on dev 00:10.0: regs @ 0xfea22000, 4 ports, 32 slots, 32 byte contexts
XHCI    extcap 0x1 @ fea22500
XHCI    protocol USB  3.00, 2 ports (offset 1), def 0
XHCI    protocol USB  2.00, 2 ports (offset 3), def 10
XHCI    extcap 0xa @ fea22540
Found 2 serial ports
ATA controller 1 at 4010/4020/0 (irq 0 dev 88)
EHCI init on dev 00:13.0 (regs=0xfea25420)
ATA controller 2 at 4018/4024/0 (irq 0 dev 88)
Searching bootorder for: /pci@i0cf8/*@14,7
Searching bootorder for: /rom@img/memtest
Searching bootorder for: /rom@img/setup
ata0-0: KINGSTON SMS200S360G ATA-8 Hard-Disk (57241 MiBytes)
Searching bootorder for: /pci@i0cf8/*@11/drive@0/disk@0
XHCI port #3: 0x002202a0, powered, pls 5, speed 0 [ - ]
XHCI port #1: 0x00021203, powered, enabled, pls 0, speed 4 [Super]
Searching bootorder for: /pci@i0cf8/usb@10/storage@1/*@0/*@0,0
Searching bootorder for: /pci@i0cf8/usb@10/usb-*@1
USB MSC vendor='WD' product='My Passport 0827' rev='1012' type=0 removable=0
call16 with invalid stack
PCEngines apu2
coreboot build 20160311
4080 MB ECC DRAM

documentation

how to recover a bricked BIOS (after flashing)? on APU1 it was SPI, and there's a header, so like wires + a ch341a should do it.

wireless

the APU i'm using also has a Mini-PCIe wireless card built in, and you can choose from these two cards:

the access point works nicely with my android devices as well as my linux laptops.

buy the APU

if you want to buy an APU, buy the APU bundle.

summary

the APU is running NixOS and is very stable and fast while using little energy. would use/buy again!

by qknight at November 05, 2016 02:35 PM