I needed to deploy Vim, with my particular preferred configuration, both system-wide and across multiple systems (via NixOps).
I started by creating a file named vim.nix
that would be imported into either
/etc/nixos/configuration.nix
or an appropriate NixOps Nix file. This example
is a stub that shows a number of common configuration items:
with import <nixpkgs> {};
vim_configurable.customize {
name = "vim"; # Specifies the vim binary name.
# Below you can specify what usually goes into `~/.vimrc`
vimrcConfig.customRC = ''
" Preferred global default settings:
set number " Enable line numbers by default
set background=dark " Set the default background to dark or light
set smartindent " Automatically insert extra level of indentation
set tabstop=4 " Default tabstop
set shiftwidth=4 " Default indent spacing
set expandtab " Expand [TABS] to spaces
syntax enable " Enable syntax highlighting
colorscheme solarized " Set the default colour scheme
set t_Co=256 " Use 256 colours in vim
set spell spelllang=en_au " Default spell checking language
hi clear SpellBad " Clear any unwanted default settings
hi SpellBad cterm=underline " Set the spell checking highlight style
hi SpellBad ctermbg=NONE " Set the spell checking highlight background
match ErrorMsg '\s\+$' " Highlight trailing whitespace as an error
let g:airline_powerline_fonts = 1 " Use powerline fonts
let g:airline_theme='solarized' " Set the airline theme
set laststatus=2 " Set up the status line so it's coloured and always on
" Add more settings below
'';
# store your plugins in Vim packages
vimrcConfig.packages.myVimPackage = with pkgs.vimPlugins; {
start = [ # Plugins loaded on launch
airline # Lean & mean status/tabline for vim that's light as air
solarized # Solarized colours for Vim
vim-airline-themes # Collection of themes for airline
vim-nix # Support for writing Nix expressions in vim
];
# manually loadable by calling `:packadd $plugin-name`
# opt = [ phpCompletion elm-vim ];
# To automatically load a plugin when opening a filetype, add vimrc lines like:
# autocmd FileType php :packadd phpCompletion
};
}
I then needed to import this file into my system packages stanza:
environment = {
systemPackages = with pkgs; [
someOtherPackages # Normal package listing
(
import ./vim.nix
)
];
};
This will then install and configure Vim as you've defined it.
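The same import also works in a NixOps machine definition. A minimal sketch (the machine name and target address are placeholders, not from my deployment):
{
  myhost =
    { config, pkgs, ... }:
    {
      deployment.targetHost = "192.168.1.10"; # Placeholder address
      environment.systemPackages = [
        (import ./vim.nix) # The same Vim configuration as above
      ];
    };
}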
If you'd like to give this build a run in a non-production space, I've written vim_vm.nix, with which you can build a VM, ssh into it afterwards and test the Vim configuration:
$ nix-build '<nixpkgs/nixos>' -A vm --arg configuration ./vim_vm.nix
...
$ export QEMU_OPTS="-m 4192"
$ export QEMU_NET_OPTS="hostfwd=tcp::18080-:80,hostfwd=tcp::10022-:22"
$ ./result/bin/run-vim-vm-vm
Then, from another terminal:
$ ssh nixos@localhost -p 10022
And you should be in a freshly baked NixOS VM with your Vim config ready to be used.
There's an always current example of my production Vim configuration in my mio-ops repo.
We’ve released hercules-ci-agent 0.6.1, days after the 0.6.0 release.
Everyone is encouraged to upgrade, as it brings performance improvements, a bugfix to IFD and a better onboarding experience.
Fix a token leak to the system log when reporting an HTTP exception. This was introduced by a library upgrade.
It was discovered after tagging 0.6.0, but before the release was announced and before the stable branch was moved. Only users of the hercules-ci-agent master branch and the unannounced tag were exposed to this leak. We recommend following the stable branch.
Temporarily revert a Nix GC configuration change that might cause problems until agent gc root behavior is improved.
{ stdenv, fetchurl, pkgconfig, glib, gpm, file, e2fsprogs, slang
, perl, zip, unzip, gettext, libssh2, openssl }:
stdenv.mkDerivation rec {
pname = "mc";
version = "4.8.23";
src = fetchurl {
url = "http://www.midnight-commander.org/downloads/${pname}-${version}.tar.xz";
sha256 = "077z7phzq3m1sxyz7li77lyzv4rjmmh3wp2vy86pnc4387kpqzyx";
};
buildInputs = [
pkgconfig perl glib slang zip unzip file gettext libssh2 openssl
];
configureFlags = [ "--enable-vfs-smb" ];
meta = {
description = "File Manager and User Shell for the GNU Project";
homepage = http://www.midnight-commander.org;
maintainers = [ stdenv.lib.maintainers.sander ];
platforms = with stdenv.lib.platforms; linux ++ darwin;
};
}
{ system ? builtins.currentSystem }:
rec {
stdenv = ...
fetchurl = ...
pkgconfig = ...
glib = ...
...
openssl = import ../development/libraries/openssl {
inherit stdenv fetchurl zlib ...;
};
mc = import ../tools/misc/mc {
inherit stdenv fetchurl pkgconfig glib gpm file e2fsprogs perl;
inherit zip unzip gettext libssh2 openssl;
};
}
$ nix-build all-packages.nix -A mc
/nix/store/wp3r8qv4k510...-mc-4.8.23
$ /nix/store/wp3r8qv4k510...-mc-4.8.23/bin/mc
{ system ? builtins.currentSystem }:
rec {
stdenv = ...
fetchurl = ...
pkgconfig = ...
glib = ...
...
openssl_1_1_0 = import ../development/libraries/openssl/1.1.0.nix {
inherit stdenv fetchurl zlib ...;
};
mc_alternative = import ../tools/misc/mc {
inherit stdenv fetchurl pkgconfig glib gpm file e2fsprogs perl;
inherit zip unzip gettext libssh2;
openssl = openssl_1_1_0; # Use a different OpenSSL version
};
}
$ nix-build all-packages.nix -A mc_alternative
/nix/store/0g0wm23y85nc0y...-mc-4.8.23
$ nix-env -f all-packages.nix -iA mc
$ mc
$ /etc/init.d/nginx start
$ /etc/init.d/nginx stop
#!/bin/bash
### BEGIN INIT INFO
# Provides: nginx
# Default-Start: 3 4 5
# Default-Stop: 0 1 2 6
# Should-Start: webapp
# Should-Stop: webapp
# Description: Nginx
### END INIT INFO
. /lib/lsb/init-functions
case "$1" in
start)
log_info_msg "Starting Nginx..."
mkdir -p /var/nginx/logs
start_daemon /usr/bin/nginx -c /etc/nginx.conf -p /var/nginx
evaluate_retval
;;
stop)
log_info_msg "Stopping Nginx..."
killproc /usr/bin/nginx
evaluate_retval
;;
reload)
log_info_msg "Reloading Nginx..."
killproc /usr/bin/nginx -HUP
evaluate_retval
;;
restart)
$0 stop
sleep 1
$0 start
;;
status)
statusproc /usr/bin/nginx
;;
*)
echo "Usage: $0 {start|stop|reload|restart|status}"
exit 1
;;
esac
$ daemon -U -i /home/sander/webapp/app.js
/etc/
init.d/
webapp
nginx
rc0.d/
K98nginx -> ../init.d/nginx
K99webapp -> ../init.d/webapp
rc1.d/
K98nginx -> ../init.d/nginx
K99webapp -> ../init.d/webapp
rc2.d/
K98nginx -> ../init.d/nginx
K99webapp -> ../init.d/webapp
rc3.d/
S00webapp -> ../init.d/webapp
S01nginx -> ../init.d/nginx
rc4.d/
S00webapp -> ../init.d/webapp
S01nginx -> ../init.d/nginx
rc5.d/
S00webapp -> ../init.d/webapp
S01nginx -> ../init.d/nginx
rc6.d/
K98nginx -> ../init.d/nginx
K99webapp -> ../init.d/webapp
{createSystemVInitScript, nginx}:
let
configFile = ./nginx.conf;
stateDir = "/var";
in
createSystemVInitScript {
name = "nginx";
description = "Nginx";
activities = {
start = ''
mkdir -p ${stateDir}/logs
log_info_msg "Starting Nginx..."
loadproc ${nginx}/bin/nginx -c ${configFile} -p ${stateDir}
evaluate_retval
'';
stop = ''
log_info_msg "Stopping Nginx..."
killproc ${nginx}/bin/nginx
evaluate_retval
'';
reload = ''
log_info_msg "Reloading Nginx..."
killproc ${nginx}/bin/nginx -HUP
evaluate_retval
'';
restart = ''
$0 stop
sleep 1
$0 start
'';
status = "statusproc ${nginx}/bin/nginx";
};
runlevels = [ 3 4 5 ];
}
log_info_msg "Starting Nginx..."
loadproc ${nginx}/bin/nginx -c ${configFile} -p ${stateDir}
evaluate_retval
{createSystemVInitScript, nginx}:
let
configFile = ./nginx.conf;
stateDir = "/var";
in
createSystemVInitScript {
name = "nginx";
description = "Nginx";
instructions = {
start = {
activity = "Starting";
instruction = ''
mkdir -p ${stateDir}/logs
loadproc ${nginx}/bin/nginx -c ${configFile} -p ${stateDir}
'';
};
stop = {
activity = "Stopping";
instruction = "killproc ${nginx}/bin/nginx";
};
reload = {
activity = "Reloading";
instruction = "killproc ${nginx}/bin/nginx -HUP";
};
};
activities = {
status = "statusproc ${nginx}/bin/nginx";
};
runlevels = [ 3 4 5 ];
}
{createSystemVInitScript, nginx}:
let
configFile = ./nginx.conf;
stateDir = "/var";
in
createSystemVInitScript {
name = "nginx";
description = "Nginx";
initialize = ''
mkdir -p ${stateDir}/logs
'';
process = "${nginx}/bin/nginx";
args = [ "-c" configFile "-p" stateDir ];
runlevels = [ 3 4 5 ];
}
{createSystemVInitScript}:
let
webapp = (import ./webapp {}).package;
in
createSystemVInitScript {
name = "webapp";
process = "${webapp}/lib/node_modules/webapp/app.js";
processIsDaemon = false;
runlevels = [ 3 4 5 ];
environment = {
PORT = 5000;
};
}
{createSystemVInitScript, nginx, webapp}:
let
configFile = ./nginx.conf;
stateDir = "/var";
in
createSystemVInitScript {
name = "nginx";
description = "Nginx";
initialize = ''
mkdir -p ${stateDir}/logs
'';
process = "${nginx}/bin/nginx";
args = [ "-c" configFile "-p" stateDir ];
runlevels = [ 3 4 5 ];
dependencies = [ webapp ];
}
{createSystemVInitScript, port ? 5000}:
let
webapp = (import /home/sander/webapp {}).package;
in
createSystemVInitScript {
name = "webapp";
process = "${webapp}/lib/node_modules/webapp/app.js";
processIsDaemon = false;
runlevels = [ 3 4 5 ];
environment = {
PORT = port;
};
}
{createSystemVInitScript, stdenv, writeTextFile, nginx
, runtimeDir, stateDir, logDir, port ? 80, webapps ? []}:
let
nginxStateDir = "${stateDir}/nginx";
in
import ./nginx.nix {
inherit createSystemVInitScript nginx;
stateDir = nginxStateDir;
dependencies = map (webapp: webapp.pkg) webapps;
configFile = writeTextFile {
name = "nginx.conf";
text = ''
error_log ${nginxStateDir}/logs/error.log;
pid ${runtimeDir}/nginx.pid;
events {
worker_connections 190000;
}
http {
${stdenv.lib.concatMapStrings (dependency: ''
upstream webapp${toString dependency.port} {
server localhost:${toString dependency.port};
}
'') webapps}
${stdenv.lib.concatMapStrings (dependency: ''
server {
listen ${toString port};
server_name ${dependency.dnsName};
location / {
proxy_pass http://webapp${toString dependency.port};
}
}
'') webapps}
}
'';
};
}
{ pkgs ? import <nixpkgs> { inherit system; }
, system ? builtins.currentSystem
, stateDir ? "/var"
, runtimeDir ? "${stateDir}/run"
, logDir ? "${stateDir}/log"
, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")
}:
let
createSystemVInitScript = import ./create-sysvinit-script.nix {
inherit (pkgs) stdenv writeTextFile daemon;
inherit runtimeDir tmpDir;
createCredentials = import ./create-credentials.nix {
inherit (pkgs) stdenv;
};
initFunctions = import ./init-functions.nix {
basePackages = [
pkgs.coreutils
pkgs.gnused
pkgs.inetutils
pkgs.gnugrep
pkgs.sysvinit
];
inherit (pkgs) stdenv;
inherit runtimeDir;
};
};
in
rec {
webapp = rec {
port = 5000;
dnsName = "webapp.local";
pkg = import ./webapp.nix {
inherit createSystemVInitScript port;
};
};
nginxReverseProxy = rec {
port = 80;
pkg = import ./nginx-reverse-proxy.nix {
inherit createSystemVInitScript;
inherit stateDir logDir runtimeDir port;
inherit (pkgs) stdenv writeTextFile nginx;
webapps = [ webapp ];
};
};
}
$ nix-build processes.nix -A webapp
$ ./result/etc/rc.d/init.d/webapp start
$ ./result/etc/rc.d/init.d/webapp stop
{ pkgs ? import <nixpkgs> { inherit system; }
, system ? builtins.currentSystem
}:
let
buildSystemVInitEnv = import ./build-sysvinit-env.nix {
inherit (pkgs) buildEnv;
};
in
buildSystemVInitEnv {
processes = import ./processes.nix {
inherit pkgs system;
};
}
$ nix-build profile.nix
$ rcswitch ./result/etc/rc.d/rc3.d
$ rcswitch ./result/etc/rc.d/rc3.d ./oldresult/etc/rc.d/rc3.d
$ rcactivity status ./result/etc/rc.d/rc3.d
$ nix-build processes.nix --argstr stateDir /home/sander/var \
-A nginxReverseProxy
$ ./result/etc/rc.d/init.d/nginx start
{createSystemVInitScript}:
{port, instanceSuffix ? ""}:
let
webapp = (import ./webapp {}).package;
instanceName = "webapp${instanceSuffix}";
in
createSystemVInitScript {
name = instanceName;
inherit instanceName;
process = "${webapp}/lib/node_modules/webapp/app.js";
processIsDaemon = false;
runlevels = [ 3 4 5 ];
environment = {
PORT = port;
};
}
{ createSystemVInitScript, stdenv, writeTextFile, nginx
, runtimeDir, stateDir, logDir}:
{port ? 80, webapps ? [], instanceSuffix ? ""}:
let
instanceName = "nginx${instanceSuffix}";
nginxStateDir = "${stateDir}/${instanceName}";
in
import ./nginx.nix {
inherit createSystemVInitScript nginx instanceSuffix;
stateDir = nginxStateDir;
dependencies = map (webapp: webapp.pkg) webapps;
configFile = writeTextFile {
name = "nginx.conf";
text = ''
error_log ${nginxStateDir}/logs/error.log;
pid ${runtimeDir}/${instanceName}.pid;
events {
worker_connections 190000;
}
http {
${stdenv.lib.concatMapStrings (dependency: ''
upstream webapp${toString dependency.port} {
server localhost:${toString dependency.port};
}
'') webapps}
${stdenv.lib.concatMapStrings (dependency: ''
server {
listen ${toString port};
server_name ${dependency.dnsName};
location / {
proxy_pass http://webapp${toString dependency.port};
}
}
'') webapps}
}
'';
};
}
{ pkgs
, system
, stateDir
, logDir
, runtimeDir
, tmpDir
}:
let
createSystemVInitScript = import ./create-sysvinit-script.nix {
inherit (pkgs) stdenv writeTextFile daemon;
inherit runtimeDir tmpDir;
createCredentials = import ./create-credentials.nix {
inherit (pkgs) stdenv;
};
initFunctions = import ./init-functions.nix {
basePackages = [
pkgs.coreutils
pkgs.gnused
pkgs.inetutils
pkgs.gnugrep
pkgs.sysvinit
];
inherit (pkgs) stdenv;
inherit runtimeDir;
};
};
in
{
webapp = import ./webapp.nix {
inherit createSystemVInitScript;
};
nginxReverseProxy = import ./nginx-reverse-proxy.nix {
inherit createSystemVInitScript stateDir logDir runtimeDir;
inherit (pkgs) stdenv writeTextFile nginx;
};
}
{ pkgs ? import <nixpkgs> { inherit system; }
, system ? builtins.currentSystem
, stateDir ? "/home/sbu"
, runtimeDir ? "${stateDir}/run"
, logDir ? "${stateDir}/log"
, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")
}:
let
constructors = import ./constructors.nix {
inherit pkgs system stateDir runtimeDir logDir tmpDir;
};
in
rec {
webapp1 = rec {
port = 5000;
dnsName = "webapp1.local";
pkg = constructors.webapp {
inherit port;
instanceSuffix = "1";
};
};
webapp2 = rec {
port = 5001;
dnsName = "webapp2.local";
pkg = constructors.webapp {
inherit port;
instanceSuffix = "2";
};
};
webapp3 = rec {
port = 5002;
dnsName = "webapp3.local";
pkg = constructors.webapp {
inherit port;
instanceSuffix = "3";
};
};
webapp4 = rec {
port = 5003;
dnsName = "webapp4.local";
pkg = constructors.webapp {
inherit port;
instanceSuffix = "4";
};
};
nginxReverseProxy = rec {
port = 8080;
pkg = constructors.nginxReverseProxy {
webapps = [ webapp1 webapp2 webapp3 webapp4 ];
inherit port;
};
};
webapp5 = rec {
port = 6002;
dnsName = "webapp5.local";
pkg = constructors.webapp {
inherit port;
instanceSuffix = "5";
};
};
webapp6 = rec {
port = 6003;
dnsName = "webapp6.local";
pkg = constructors.webapp {
inherit port;
instanceSuffix = "6";
};
};
nginxReverseProxy2 = rec {
port = 8081;
pkg = constructors.nginxReverseProxy {
webapps = [ webapp5 webapp6 ];
inherit port;
instanceSuffix = "2";
};
};
}
$ nix-build processes.nix -A webapp3
$ ./result/etc/rc.d/init.d/webapp3 start
$ nix-build profile.nix
$ rcswitch ./result/etc/rc.d/rc3.d
{createSystemVInitScript}:
{port, instanceSuffix ? ""}:
let
webapp = (import ./webapp {}).package;
instanceName = "webapp${instanceSuffix}";
in
createSystemVInitScript {
name = instanceName;
inherit instanceName;
process = "${webapp}/lib/node_modules/webapp/app.js";
processIsDaemon = false;
runlevels = [ 3 4 5 ];
environment = {
PORT = port;
};
user = instanceName;
credentials = {
groups = {
"${instanceName}" = {};
};
users = {
"${instanceName}" = {
group = instanceName;
description = "Webapp";
};
};
};
}
{ pkgs
, stateDir
, logDir
, runtimeDir
, tmpDir
, forceDisableUserChange
}:
let
createSystemVInitScript = import ./create-sysvinit-script.nix {
inherit (pkgs) stdenv writeTextFile daemon;
inherit runtimeDir tmpDir forceDisableUserChange;
createCredentials = import ./create-credentials.nix {
inherit (pkgs) stdenv;
};
initFunctions = import ./init-functions.nix {
basePackages = [
pkgs.coreutils
pkgs.gnused
pkgs.inetutils
pkgs.gnugrep
pkgs.sysvinit
];
inherit (pkgs) stdenv;
inherit runtimeDir;
};
};
in
{
...
}
$ nix-build processes.nix --arg forceDisableUserChange true
{ createSystemVInitScript, stdenv, writeTextFile, nginx
, runtimeDir, stateDir, logDir
}:
{port ? 80, instanceSuffix ? ""}:
interDeps:
let
instanceName = "nginx${instanceSuffix}";
nginxStateDir = "${stateDir}/${instanceName}";
in
import ./nginx.nix {
inherit createSystemVInitScript nginx instanceSuffix;
stateDir = nginxStateDir;
dependencies = map (dependencyName:
let
dependency = builtins.getAttr dependencyName interDeps;
in
dependency.pkg
) (builtins.attrNames interDeps);
configFile = writeTextFile {
name = "nginx.conf";
text = ''
error_log ${nginxStateDir}/logs/error.log;
pid ${runtimeDir}/${instanceName}.pid;
events {
worker_connections 190000;
}
http {
${stdenv.lib.concatMapStrings (dependencyName:
let
dependency = builtins.getAttr dependencyName interDeps;
in
''
upstream webapp${toString dependency.port} {
server ${dependency.target.properties.hostname}:${toString dependency.port};
}
'') (builtins.attrNames interDeps)}
${stdenv.lib.concatMapStrings (dependencyName:
let
dependency = builtins.getAttr dependencyName interDeps;
in
''
server {
listen ${toString port};
server_name ${dependency.dnsName};
location / {
proxy_pass http://webapp${toString dependency.port};
}
}
'') (builtins.attrNames interDeps)}
}
'';
};
}
{ pkgs, distribution, invDistribution, system
, stateDir ? "/var"
, runtimeDir ? "${stateDir}/run"
, logDir ? "${stateDir}/log"
, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")
, forceDisableUserChange ? true
}:
let
constructors = import ./constructors.nix {
inherit pkgs stateDir runtimeDir logDir tmpDir;
inherit forceDisableUserChange;
};
in
rec {
webapp = rec {
name = "webapp";
port = 5000;
dnsName = "webapp.local";
pkg = constructors.webapp {
inherit port;
};
type = "sysvinit-script";
};
nginxReverseProxy = rec {
name = "nginxReverseProxy";
port = 8080;
pkg = constructors.nginxReverseProxy {
inherit port;
};
dependsOn = {
inherit webapp;
};
type = "sysvinit-script";
};
}
{
test1.properties.hostname = "test1";
test2.properties.hostname = "test2";
}
{infrastructure}:
{
webapp = [ infrastructure.test1 ];
nginxReverseProxy = [ infrastructure.test2 ];
}
$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix
{stdenv, createSystemdService}:
{port, instanceSuffix ? ""}:
let
webapp = (import ./webapp {}).package;
instanceName = "webapp${instanceSuffix}";
in
createSystemdService {
name = instanceName;
environment = {
PORT = port;
};
Unit = {
Description = "Example web application";
Documentation = http://example.com;
};
Service = {
ExecStart = "${webapp}/lib/node_modules/webapp/app.js";
};
}
{stdenv, createSupervisordProgram}:
{port, instanceSuffix ? ""}:
let
webapp = (import ./webapp {}).package;
instanceName = "webapp${instanceSuffix}";
in
createSupervisordProgram {
name = instanceName;
command = "${webapp}/lib/node_modules/webapp/app.js";
environment = {
PORT = port;
};
}
In March 2018 we set ourselves a mission to provide seamless infrastructure to teams using Nix in day-to-day software development.
In June 2018 we released a solution for developers to easily share binary caches, trusted today by over a thousand developers.
In October 2018 we showed the very first demo of Hercules CI at NixCon 2018.
In March 2019 we added support for private binary caches.
Since April 2019 we have been gradually giving out early access to the preview release with over a hundred participating developers.
We are announcing general availability of continuous integration specialized for Nix projects.
Check out the landing page to get started.
In the coming months we’re going to work closely with customers to polish the experience and continue to save developers’ time.
For support (with getting started and other questions), contact me at domen@hercules-ci.com so we can set you up and make sure you get the most out of our CI.
Subscribe to @hercules_ci for updates.
A common complaint in using Nixpkgs is that things can become slow when you have lots of dependencies. Processing of build inputs is done in Bash, which tends to be pretty hard to make performant. Bash doesn’t give us any way to loop through dependencies in parallel, so we end up with pretty slow Bash. Luckily, someone has found some ways to speed this up with some clever tricks in the setup.sh script.
Albert Safin (@xzfc on GitHub) made an excellent PR that promises to
improve performance for all users of Nixpkgs. The PR is available at
PR #69131. The basic idea is to avoid invoking “subshells” in Bash. A subshell is basically anything that uses $(cmd ...). Each subshell requires forking a new process, which has a constant time cost that ends up being ~2ms. This isn’t much in isolation, but it adds up in big loops.
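To get a feel for that constant cost, here is a rough micro-benchmark of my own (not from the PR) that you can paste into a Bash shell; exact numbers will vary by machine:
# Fork a subshell on every iteration:
time for i in {1..1000}; do
    x="$(type -t echo)"              # $(...) forks a process each time
done
# Comparable check using only a builtin, no forks:
time for i in {1..1000}; do
    declare -F echo > /dev/null 2>&1 # exit code only, no subshell
done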
Subshells are usually used in Bash because they are convenient and
easy to reason about. It’s easy to understand how a subshell works as
it’s just substituting the result of one command into another’s
arguments. We don’t usually care about the performance cost of
subshells. In the hot path of Nixpkgs’ setup.sh
, however, it’s
pretty important to squeeze every bit of performance we can.
A few interesting changes were required to make this work. I’ll go through and document what they are. More information can be found in the Bash manual.
diff --git a/pkgs/stdenv/generic/setup.sh b/pkgs/stdenv/generic/setup.sh
index 326a60676a26..60067a4051de 100644
--- a/pkgs/stdenv/generic/setup.sh
+++ b/pkgs/stdenv/generic/setup.sh
@@ -98,7 +98,7 @@ _callImplicitHook() {
 # hooks exits the hook, not the caller. Also will only pass args if
 # command can take them
 _eval() {
-    if [ "$(type -t "$1")" = function ]; then
+    if declare -F "$1" > /dev/null 2>&1; then
         set +u
         "$@" # including args
     else
The first change is pretty easy to understand. It just replaces the type call with a declare call, utilizing an exit code in place of stdout. Unfortunately, declare is a Bashism which is not available in all POSIX shells. It’s been ill-defined whether Bashisms can be used in Nixpkgs, but we will now require Nixpkgs to be sourced with Bash 4+.
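The two idioms side by side, as a small illustration of my own:
foo() { :; }
[ "$(type -t foo)" = function ] && echo "function (subshell version)"
declare -F foo > /dev/null 2>&1 && echo "function (declare version, no fork)"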
diff --git a/pkgs/stdenv/generic/setup.sh b/pkgs/stdenv/generic/setup.sh
index 60067a4051de..7e7f8739845b 100644
--- a/pkgs/stdenv/generic/setup.sh
+++ b/pkgs/stdenv/generic/setup.sh
@@ -403,6 +403,7 @@ findInputs() {
     # The current package's host and target offset together
     # provide a <=-preserving homomorphism from the relative
     # offsets to current offset
+    local -i mapOffsetResult
     function mapOffset() {
         local -ri inputOffset="$1"
         if (( "$inputOffset" <= 0 )); then
@@ -410,7 +411,7 @@ findInputs() {
         else
             local -ri outputOffset="$inputOffset - 1 + $targetOffset"
         fi
-        echo "$outputOffset"
+        mapOffsetResult="$outputOffset"
     }

     # Host offset relative to that of the package whose immediate
@@ -422,8 +423,8 @@ findInputs() {

         # Host offset relative to the package currently being
         # built---as absolute an offset as will be used.
-        local -i hostOffsetNext
-        hostOffsetNext="$(mapOffset relHostOffset)"
+        mapOffset relHostOffset
+        local -i hostOffsetNext="$mapOffsetResult"

         # Ensure we're in bounds relative to the package currently
         # being built.
@@ -441,8 +442,8 @@ findInputs() {

         # Target offset relative to the package currently being
         # built.
-        local -i targetOffsetNext
-        targetOffsetNext="$(mapOffset relTargetOffset)"
+        mapOffset relTargetOffset
+        local -i targetOffsetNext="$mapOffsetResult"

         # Once again, ensure we're in bounds relative to the
         # package currently being built.
Similarly, this change makes mapOffset store its result in mapOffsetResult instead of printing it to stdout, avoiding the subshell. Less functional, but more performant!
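The pattern in isolation looks like this (an illustrative sketch, not code from setup.sh):
# Return a value through a variable instead of stdout:
add() {
    addResult=$(( $1 + $2 ))  # arithmetic expansion, no fork involved
}
add 1 2
echo "$addResult"             # prints 3; no $(add 1 2) subshell needed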
diff --git a/pkgs/stdenv/generic/setup.sh b/pkgs/stdenv/generic/setup.sh
index 7e7f8739845b..e25ea735a93c 100644
--- a/pkgs/stdenv/generic/setup.sh
+++ b/pkgs/stdenv/generic/setup.sh
@@ -73,21 +73,18 @@ _callImplicitHook() {
     set -u
     local def="$1"
     local hookName="$2"
-    case "$(type -t "$hookName")" in
-        (function|alias|builtin)
-            set +u
-            "$hookName";;
-        (file)
-            set +u
-            source "$hookName";;
-        (keyword) :;;
-        (*) if [ -z "${!hookName:-}" ]; then
-                return "$def";
-            else
-                set +u
-                eval "${!hookName}"
-            fi;;
-    esac
+    if declare -F "$hookName" > /dev/null; then
+        set +u
+        "$hookName"
+    elif type -p "$hookName" > /dev/null; then
+        set +u
+        source "$hookName"
+    elif [ -n "${!hookName:-}" ]; then
+        set +u
+        eval "${!hookName}"
+    else
+        return "$def"
+    fi
     # `_eval` expects hook to need nounset disable and leave it
     # disabled anyways, so Ok to to delegate. The alternative of a
     # return trap is no good because it would affect nested returns.
This change replaces the type -t command with calls to specific Bash builtins. declare -F tells us if the hook is a function, type -p tells us if hookName is a file, and otherwise -n tells us if the hook is non-empty. Again, this introduces a Bashism.
In the worst case, this does replace one case with multiple if branches. But since most hooks are functions, most of the time this ends up being a single if.
diff --git a/pkgs/stdenv/generic/setup.sh b/pkgs/stdenv/generic/setup.sh
index e25ea735a93c..ea550a6d534b 100644
--- a/pkgs/stdenv/generic/setup.sh
+++ b/pkgs/stdenv/generic/setup.sh
@@ -449,7 +449,8 @@ findInputs() {
         [[ -f "$pkg/nix-support/$file" ]] || continue

         local pkgNext
-        for pkgNext in $(< "$pkg/nix-support/$file"); do
+        read -r -d '' pkgNext < "$pkg/nix-support/$file" || true
+        for pkgNext in $pkgNext; do
             findInputs "$pkgNext" "$hostOffsetNext" "$targetOffsetNext"
         done
     done
This change replaces the $(< ) call with a read call. This is a little surprising, since read is using an empty delimiter '' instead of a newline. This replaces one Bashism, $(< ), with another, -d, and the result gets rid of a remaining subshell usage.
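The trick in isolation (my example): read -r -d '' reads up to a NUL byte, and since a text file contains none, it slurps the whole file and returns non-zero, hence the || true.
printf 'foo\nbar\n' > /tmp/inputs.txt               # hypothetical input file
content="$(< /tmp/inputs.txt)"                      # old way: forks a subshell
read -r -d '' content < /tmp/inputs.txt || true     # new way: builtin only
echo "$content"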
diff --git a/pkgs/build-support/bintools-wrapper/setup-hook.sh b/pkgs/build-support/bintools-wrapper/setup-hook.sh
index f65b792485a0..27d3e6ad5120 100644
--- a/pkgs/build-support/bintools-wrapper/setup-hook.sh
+++ b/pkgs/build-support/bintools-wrapper/setup-hook.sh
@@ -61,9 +61,8 @@ do
     if
         PATH=$_PATH type -p "@targetPrefix@${cmd}" > /dev/null
     then
-        upper_case="$(echo "$cmd" | tr "[:lower:]" "[:upper:]")"
-        export "${role_pre}${upper_case}=@targetPrefix@${cmd}";
-        export "${upper_case}${role_post}=@targetPrefix@${cmd}";
+        export "${role_pre}${cmd^^}=@targetPrefix@${cmd}";
+        export "${cmd^^}${role_post}=@targetPrefix@${cmd}";
     fi
 done
This replaces a call to tr with a usage of the ^^ operator. ${parameter^^pattern} is a Bash 4 feature that allows you to upper-case a string without calling out to tr.
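For example (illustrative):
cmd="gcc"
echo "$cmd" | tr "[:lower:]" "[:upper:]"  # old way: a pipe and a tr process
echo "${cmd^^}"                           # Bash 4: prints GCC, no subprocess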
diff --git a/pkgs/build-support/bintools-wrapper/setup-hook.sh b/pkgs/build-support/bintools-wrapper/setup-hook.sh
index 27d3e6ad5120..2e15fa95c794 100644
--- a/pkgs/build-support/bintools-wrapper/setup-hook.sh
+++ b/pkgs/build-support/bintools-wrapper/setup-hook.sh
@@ -24,7 +24,8 @@ bintoolsWrapper_addLDVars () {
         # Python and Haskell packages often only have directories like $out/lib/ghc-8.4.3/ or
         # $out/lib/python3.6/, so having them in LDFLAGS just makes the linker search unnecessary
         # directories and bloats the size of the environment variable space.
-        if [[ -n "$(echo $1/lib/lib*)" ]]; then
+        local -a glob=( $1/lib/lib* )
+        if [ "${#glob[*]}" -gt 0 ]; then
             export NIX_${role_pre}LDFLAGS+=" -L$1/lib"
         fi
     fi
Here, we are checking whether any files exist in /lib/lib* using a glob. It originally used a subshell to check if the result was empty, but this change replaces it with an array and the Bash ${#parameter} length operation.
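The array idiom in isolation (my example; note that without nullglob an unmatched glob is left as a literal word, which matches the behaviour of the original echo-based check):
glob=( /usr/lib/lib* )
echo "${#glob[*]}"   # number of words the glob expanded to, no subshell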
diff --git a/pkgs/stdenv/generic/setup.sh b/pkgs/stdenv/generic/setup.sh
index 311292169ecd..326a60676a26 100644
--- a/pkgs/stdenv/generic/setup.sh
+++ b/pkgs/stdenv/generic/setup.sh
@@ -17,7 +17,8 @@ fi
 # code). The hooks for <hookName> are the shell function or variable
 # <hookName>, and the values of the shell array ‘<hookName>Hooks’.
 runHook() {
-    local oldOpts="$(shopt -po nounset)"
+    local oldOpts="-u"
+    shopt -qo nounset || oldOpts="+u"
     set -u # May be called from elsewhere, so do `set -u`.

     local hookName="$1"
@@ -32,7 +33,7 @@ runHook() {
         set -u # To balance `_eval`
     done

-    eval "${oldOpts}"
+    set "$oldOpts"
     return 0
 }

@@ -40,7 +41,8 @@ runHook() {
 # Run all hooks with the specified name, until one succeeds (returns a
 # zero exit code). If none succeed, return a non-zero exit code.
 runOneHook() {
-    local oldOpts="$(shopt -po nounset)"
+    local oldOpts="-u"
+    shopt -qo nounset || oldOpts="+u"
     set -u # May be called from elsewhere, so do `set -u`.

     local hookName="$1"
@@ -57,7 +59,7 @@ runOneHook() {
         set -u # To balance `_eval`
     done

-    eval "${oldOpts}"
+    set "$oldOpts"
     return "$ret"
 }

@@ -500,10 +502,11 @@ activatePackage() {
     (( "$hostOffset" <= "$targetOffset" )) || exit -1

     if [ -f "$pkg" ]; then
-        local oldOpts="$(shopt -po nounset)"
+        local oldOpts="-u"
+        shopt -qo nounset || oldOpts="+u"
         set +u
         source "$pkg"
-        eval "$oldOpts"
+        set "$oldOpts"
     fi

     # Only dependencies whose host platform is guaranteed to match the
@@ -522,10 +525,11 @@ activatePackage() {
     fi

     if [[ -f "$pkg/nix-support/setup-hook" ]]; then
-        local oldOpts="$(shopt -po nounset)"
+        local oldOpts="-u"
+        shopt -qo nounset || oldOpts="+u"
         set +u
         source "$pkg/nix-support/setup-hook"
-        eval "$oldOpts"
+        set "$oldOpts"
     fi
 }

@@ -1273,17 +1277,19 @@ showPhaseHeader() {

 genericBuild() {
     if [ -f "${buildCommandPath:-}" ]; then
-        local oldOpts="$(shopt -po nounset)"
+        local oldOpts="-u"
+        shopt -qo nounset || oldOpts="+u"
         set +u
         source "$buildCommandPath"
-        eval "$oldOpts"
+        set "$oldOpts"
         return
     fi
     if [ -n "${buildCommand:-}" ]; then
-        local oldOpts="$(shopt -po nounset)"
+        local oldOpts="-u"
+        shopt -qo nounset || oldOpts="+u"
         set +u
         eval "$buildCommand"
-        eval "$oldOpts"
+        set "$oldOpts"
         return
     fi

@@ -1313,10 +1319,11 @@ genericBuild() {
         # Evaluate the variable named $curPhase if it exists, otherwise the
         # function named $curPhase.
-        local oldOpts="$(shopt -po nounset)"
+        local oldOpts="-u"
+        shopt -qo nounset || oldOpts="+u"
         set +u
         eval "${!curPhase:-$curPhase}"
-        eval "$oldOpts"
+        set "$oldOpts"

         if [ "$curPhase" = unpackPhase ]; then
             cd "${sourceRoot:-.}"
This last change is maybe the trickiest. $(shopt -po nounset) is used to get the old value of nounset. The nounset setting tells Bash to treat unset variables as an error. It is enabled temporarily for phases and hooks to enforce this property, and will be reset to its previous value after we finish evaluating the current phase or hook. To avoid the subshell here, the stdout provided by shopt -po is replaced with an exit code provided by shopt -qo nounset. If shopt -qo nounset fails, we set oldOpts to +u; otherwise it is assumed to be -u.
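The save/restore dance in isolation (illustrative):
oldOpts="-u"
shopt -qo nounset || oldOpts="+u"  # exit code tells us whether nounset is on
set +u                             # relax nounset for the risky section
: "${possiblyUnsetVar:-}"
set "$oldOpts"                     # restore whatever was in effect before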
This commit was first merged on September 20, but it takes a while for changes like this to hit master. Today (October 13) it was finally merged into master in 4e6826a, so we can finally see the benefits from it!
Hyperfine makes it easy to compare differences in timings. You can install it locally with:
$ nix-env -iA nixpkgs.hyperfine
Here are some of the results:
$ hyperfine --warmup 3 \
    'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p stdenv --run :' \
    'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p stdenv --run :'
Benchmark #1: nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p stdenv --run :
  Time (mean ± σ):     436.4 ms ±   8.5 ms    [User: 324.7 ms, System: 107.8 ms]
  Range (min … max):   430.8 ms … 459.6 ms    10 runs
Benchmark #2: nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p stdenv --run :
  Time (mean ± σ):     244.5 ms ±   2.3 ms    [User: 190.7 ms, System: 34.2 ms]
  Range (min … max):   241.8 ms … 248.3 ms    12 runs
Summary
  'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p stdenv --run :' ran
    1.79 ± 0.04 times faster than 'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p stdenv --run :'

$ hyperfine --warmup 3 \
    'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p i3.buildInputs --run :' \
    'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p i3.buildInputs --run :'
Benchmark #1: nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p i3.buildInputs --run :
  Time (mean ± σ):      3.428 s ±  0.015 s    [User: 2.489 s, System: 1.081 s]
  Range (min … max):    3.404 s …  3.453 s    10 runs
Benchmark #2: nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p i3.buildInputs --run :
  Time (mean ± σ):     873.4 ms ±  12.2 ms    [User: 714.7 ms, System: 89.3 ms]
  Range (min … max):   861.5 ms … 906.4 ms    10 runs
Summary
  'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p i3.buildInputs --run :' ran
    3.92 ± 0.06 times faster than 'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p i3.buildInputs --run :'

$ hyperfine --warmup 3 \
    'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p inkscape.buildInputs --run :' \
    'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p inkscape.buildInputs --run :'
Benchmark #1: nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p inkscape.buildInputs --run :
  Time (mean ± σ):      4.380 s ±  0.024 s    [User: 3.155 s, System: 1.443 s]
  Range (min … max):    4.339 s …  4.409 s    10 runs
Benchmark #2: nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p inkscape.buildInputs --run :
  Time (mean ± σ):      1.007 s ±  0.011 s    [User: 826.7 ms, System: 114.2 ms]
  Range (min … max):    0.995 s …  1.026 s    10 runs
Summary
  'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p inkscape.buildInputs --run :' ran
    4.35 ± 0.05 times faster than 'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p inkscape.buildInputs --run :'
Try running these commands yourself, and compare the results.
Avoiding subshells cuts the time taken by up to 4x. That multiplier is going to depend on precisely how many inputs we are processing. It’s a pretty impressive improvement, and it comes with no added cost. These kinds of easy wins in performance are pretty rare, and worth celebrating!
Last week we released agent version 0.5.0. The main theme for the release is ease of installation. Running an agent should be as simple as possible, so we made:
Follow the getting started guide to set up your first agent.
If you’re using the module (NixOS, NixOps, nix-darwin), the update is entirely self-explanatory. Otherwise, check the notes.
The agent now relies on being a trusted-user
to the Nix daemon. The agent does not allow projects to execute arbitrary Nix store operations anyway. It may improve security since it simplifies configuration and secrets handling.
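On NixOS this amounts to a one-line setting; a minimal sketch (the user name is an assumption based on the module defaults, and the module configures this for you):
nix.trustedUsers = [ "hercules-ci-agent" ];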
The security model for the agent is simple at this point: only build git refs from your repository. This way, third-party contributors can not run arbitrary code on your agent system; only contributors with write access to the repo can.
Talking about trust, we’ll share some details about securely doing CI for Open Source with Bors soon!
0 - Check the Prerequisites
I use this android.nix to ensure my NixOS environment has the prerequisites installed and configured for its side of the process.
1 - Backup any Files You Want to Keep
I like to use adb to pull the files from the device. There are other methods available too.
$ adb pull /sdcard/MyFolder ./Downloads/MyDevice/
Usage of adb
is documented at Android Debug Bridge
2 - Download LineageOS ROM and optional GAPPS package
I downloaded lineage-16.0-20191001-UNOFFICIAL-gts28wifi.zip from gts28wifi.
I also downloaded Open GApps ARM, nano to enable Google Apps.
I could have also downloaded and installed LineageOS addonsu and addonsu-remove but opted not to at this point.
3 - Copy LineageOS image & additional packages to the SM-T710
I use adb to copy the files across:
$ adb push ./lineage-16.0-20191001-UNOFFICIAL-gts28wifi.zip /sdcard/
./lineage-16.0-20191001-UNOFFICIAL-gts28wifi.zip: 1 file pushed. 12.1 MB/s (408677035 bytes in 32.263s)
$ adb push ./open_gapps-arm-9.0-nano-20190405.zip /sdcard/
./open_gapps-arm-9.0-nano-20190405.zip: 1 file pushed. 11.1 MB/s (185790181 bytes in 15.948s)
I also copy both to the SD card at this point as the SM-T710 is an awful device to work with and in many random cases will not work with ADB. When this happens, I fall back to the SD card.
4 - Boot into recovery mode
I power the device off, then power it back into recovery mode by holding down [home] + [volume up] + [power].
5 - Wipe the existing installation
Press Wipe then Advanced Wipe.
Select:
Swipe the Swipe to Wipe slider at the bottom of the screen.
Press Back to return to the Advanced Wipe screen.
Press the triangular "back" button once to return to the Wipe screen.
6 - Format the device
Press Format Data.
Type yes and press the blue check mark at the bottom-right corner to commence the format process.
Press Back to return to the Advanced Wipe screen.
Press the triangular "back" button twice to return to the main screen.
7 - Install LineageOS ROM and other optional ROMs
Press Install, select the images you wish to install and swipe to make it go.
Reboot when it's completed and you should be off and running with a brand new LineageOS 16 on this tablet.
On 6th of September, Cachix experienced 3 hours of downtime.
We’d like to let you know exactly what happened and what measures we have taken to prevent such an event from happening in the future.
The backend logs were full of:
Sep 06 17:02:34 cachix-production.cachix cachix-server[6488]: Network.Socket.recvBuf: resource vanished (Connection reset by peer)
And:
(ConnectionFailure Network.BSD.getProtocolByName: does not exist (no such protocol name: tcp)))
Most importantly, there were no logs from the time the downtime was triggered until the restart:
Sep 06 17:15:48 cachix-production.cachix cachix-server[6488]: Network.Socket.recvBuf: resource vanished (Connection reset by peer)
Sep 06 20:19:26 cachix-production.cachix systemd[1]: Stopping cachix server service...
Our monitoring revealed an increased number of nginx connections and file handles (the times are in CEST, UTC+2).
The main cause of the downtime was a hung backend. The underlying cause was not identified due to a lack of information.
The backend was failing some requests due to reaching the limit of 1024 file descriptors.
The duration of the downtime was due to the absence of a telephone signal.
To avoid any hangs in the future, we have configured a systemd watchdog which automatically restarts the service if the backend doesn’t respond for 3 seconds. In doing so, we released the warp-systemd Haskell library to integrate Warp (the Haskell web server) with systemd, including socket activation and watchdog features.
We’ve increased the file descriptor limit to 8192 (see the sketch after this list).
We’ve set up a Cachix status page so that you can check the state of the service.
For better visibility into errors like exhausted file handles, we’ve configured sentry.io error reporting. In doing so, we released katip-raven for seamless Sentry integration of structured logging, which we also use to log Warp (the Haskell web server) exceptions.
Robert is now fully onboarded to be able to resolve any Cachix issues.
We’ve made a number of improvements for the performance of Cachix. Just tuning GHC RTS settings shows 15% speed up in common usage.
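For illustration, the watchdog and file descriptor measures map to systemd settings along these lines; a sketch in NixOS module form, where the unit name is inferred from the logs above and the values are the ones mentioned:
systemd.services.cachix-server.serviceConfig = {
  WatchdogSec = 3;        # Restart the service if it stops pinging the watchdog for 3s
  Restart = "on-failure"; # A watchdog timeout counts as a failure
  LimitNOFILE = 8192;     # The raised file descriptor limit
};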
Enable debugging builds for production. This would allow the systemd watchdog to send SIGQUIT and get an execution stack at the point where the program hung.
We opened a nixpkgs pull request to lay the groundwork for compiling debugging builds.
However, there’s an open GHC bug showing that debugging builds alter the performance of programs, so we need to assess our impact first.
Upgrade the network library to 3.0, fixing unneeded file handle usage and a possible candidate for a deadlock.
Stackage just included network-3.* in the latest snapshot, so it’s a matter of weeks.
Improve load testing tooling to be able to reason about performance implications.
We’re confident such issues shouldn’t affect production anymore, and since the availability of Cachix is our utmost priority, we are going to make sure to complete the rest of the work in a timely manner.
The current NixOS Manual is a little sparse on details for the different options to configure wireless networking. The version in master is a little better but still ambiguous. I've made a pull request to resolve this, but in the interim, this documents how to configure a number of wireless scenarios with NixOS.
If you're going to use NetworkManager, this is not for you. This is for those of us who want reproducible configurations.
To enable a wireless connection with no spaces or special characters in the name that uses a pre-shared key, you first need to generate the raw PSK:
$ wpa_passphrase exampleSSID abcd1234
network={
ssid="exampleSSID"
#psk="abcd1234"
psk=46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d
}
Now you can add the following stanza to your configuration.nix to enable wireless networking and this specific wireless connection:
networking.wireless = {
enable = true;
userControlled.enable = true;
networks = {
exampleSSID = {
pskRaw = "46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d";
};
};
};
If you had another WiFi connection that had spaces and/or special characters in the name, you would configure it like this:
networking.wireless = {
enable = true;
userControlled.enable = true;
networks = {
"example's SSID" = {
pskRaw = "46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d";
};
};
};
If you need to connect to a hidden network, you would do it like this:
networking.wireless = {
enable = true;
userControlled.enable = true;
networks = {
myHiddenSSID = {
hidden = true;
pskRaw = "46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d";
};
};
};
The final scenario that I have, is connecting to open SSIDs that have some kind of secondary method (like a login in web page) for authentication of connections:
networking.wireless = {
enable = true;
userControlled.enable = true;
networks = {
FreeWiFi = {};
};
};
This is all fairly straightforward, but the answers were non-trivial to find.
Deploying a vanilla Tiny Tiny RSS server on NixOS via NixOps is fairly straight forward.
My preferred method is to craft a tt-rss.nix file that describes the configuration of the TT-RSS server.
{ config, pkgs, lib, ... }:
{
services.tt-rss = {
enable = true; # Enable TT-RSS
database = { # Configure the database
type = "pgsql"; # Database type
passwordFile = "/run/keys/tt-rss-dbpass"; # Where to find the password
};
email = {
fromAddress = "news@mydomain"; # Address for outgoing email
fromName = "News at mydomain"; # Display name for outgoing email
};
selfUrlPath = "https://news.mydomain/"; # Root web URL
virtualHost = "news.mydomain"; # Setup a virtualhost
};
services.postgresql = {
enable = true; # Ensure postgresql is enabled
authentication = ''
local tt_rss all ident map=tt_rss-users
'';
identMap = # Map the tt-rss user to postgresql
''
tt_rss-users tt_rss tt_rss
'';
};
services.nginx = {
enable = true; # Enable Nginx
recommendedGzipSettings = true;
recommendedOptimisation = true;
recommendedProxySettings = true;
recommendedTlsSettings = true;
virtualHosts."news.mydomain" = { # TT-RSS hostname
enableACME = true; # Use ACME certs
forceSSL = true; # Force SSL
};
};
security.acme.certs = {
"news.mydomain".email = "email@mydomain";
};
}
This line from the above file should stand out:
passwordFile = "/run/keys/tt-rss-dbpass"; # Where to find the password
The passwordFile option requires that you use a secrets file with NixOps. Where does that file come from? It's pulled from a secrets.nix file (example) which, for this example, could look like this:
{ config, pkgs, ... }:
{
deployment.keys = {
# Database key for TT-RSS
tt-rss-dbpass = {
text = "vaetohH{u9Veegh3caechish"; # Password, generated using pwgen -yB 24
user = "tt_rss"; # User to own the key file
group = "wheel"; # Group to own the key file
permissions = "0640"; # Key file permissions
};
};
}
The file's path /run/keys/tt-rss-dbpass is determined by these elements: deployment.keys determines the initial path of /run/keys, and the next element, tt-rss-dbpass, is a descriptive name provided by the stanza's author to describe the key's use and also provide the final file name.
Now that we have described the TT-RSS service in tt-rss_for_NixOps.nix and the required credentials in secrets.nix we need to pull it all together for deployment. We achieve that in this case by importing both these files into our existing host definition:
myhost.nix:
{
myhost =
{ config, pkgs, lib, ... }:
{
imports =
[
./secrets.nix # Import our secrets
./servers/tt-rss_for_NixOps.nix # Import TT-RSS description
];
deployment.targetHost = "192.168.132.123"; # Target's IP address
networking.hostName = "myhost"; # Target's hostname.
};
}
To deploy TT-RSS to your NixOps managed host, you merely run the deploy command for your already configured host and deployment, which would look like this:
$ nixops deploy -d MyDeployment --include myhost
You should now have a running TT-RSS server and be able to login with the default admin user (admin: password).
In my nixos-examples repo I have a servers directory with some example files and a README with information and instructions. You can use two of the files to generate a TT-RSS VM to take a quick poke around. There is also an example of how you can deploy TT-RSS in production using NixOps, as per this post.
If you wish to dig a little deeper, I have my production deployment over at mio-ops.
Munich NixOS Meetup
The next stable NixOS release 19.09 'Loris' is going to happen at the end of September. The goal of this sprint is to fix critical issues before the release. Some maintainers will be attending and are available for guidance and feedback.
• Blocking issues: https://github.com/Ni...
• All 19.09 issues: https://github.com/Ni...
ZHF issue: https://github.com/Ni...
The sprint will be held at the Mayflower office in Munich on Friday starting at 11:00. Drinks will be provided.
München 80687 - Germany
Friday, September 13 at 11:00 AM
https://www.meetup.com/Munich-NixOS-Meetup/events/264400018/
I've been using GitLab for years but recently opted to switch to Gitea, primarily because of timing and I was looking for something more lightweight, not because of any particular problems with GitLab.
To deploy Gitea via NixOps I chose to craft a Nix file (example) that would be included in a host definition. The linked and below definition provides a deployment of Gitea, using Postgres, Nginx, ACME certificates and ReStructured Text rendering with syntax highlighting.
version-management/gitea_for_NixOps.nix:
{ config, pkgs, lib, ... }:
{
services.gitea = {
enable = true; # Enable Gitea
appName = "MyDomain: Gitea Service"; # Give the site a name
database = {
type = "postgres"; # Database type
passwordFile = "/run/keys/gitea-dbpass"; # Where to find the password
};
domain = "source.mydomain.tld"; # Domain name
rootUrl = "https://source.mydomaain.tld/"; # Root web URL
httpPort = 3001; # Provided unique port
extraConfig = let
docutils =
pkgs.python37.withPackages (ps: with ps; [
docutils # Provides rendering of ReStructured Text files
pygments # Provides syntax highlighting
]);
in ''
[mailer]
ENABLED = true
FROM = "gitea@mydomain.tld"
[service]
REGISTER_EMAIL_CONFIRM = true
[markup.restructuredtext]
ENABLED = true
FILE_EXTENSIONS = .rst
RENDER_COMMAND = ${docutils}/bin/rst2html.py
IS_INPUT_FILE = false
'';
};
services.postgresql = {
enable = true; # Ensure postgresql is enabled
authentication = ''
local gitea all ident map=gitea-users
'';
identMap = # Map the gitea user to postgresql
''
gitea-users gitea gitea
'';
};
services.nginx = {
enable = true; # Enable Nginx
recommendedGzipSettings = true;
recommendedOptimisation = true;
recommendedProxySettings = true;
recommendedTlsSettings = true;
virtualHosts."source.MyDomain.tld" = { # Gitea hostname
enableACME = true; # Use ACME certs
forceSSL = true; # Force SSL
locations."/".proxyPass = "http://localhost:3001/"; # Proxy Gitea
};
};
security.acme.certs = {
"source.mydomain".email = "anEmail@mydomain.tld";
};
}
This line from the above file should stand out:
passwordFile = "/run/keys/gitea-dbpass"; # Where to find the password
Where does that file come from? It's pulled from a secrets.nix file (example) which, for this example, could look like this:
{ config, pkgs, ... }:
{
deployment.keys = {
# An example set of keys to be used for the Gitea service's DB authentication
gitea-dbpass = {
text = "uNgiakei+x>i7shuiwaeth3z"; # Password, generated using pwgen -yB 24
user = "gitea"; # User to own the key file
group = "wheel"; # Group to own the key file
permissions = "0640"; # Key file permissions
};
};
}
The file's path /run/keys/gitea-dbpass is determined by these elements: deployment.keys determines the initial path of /run/keys, and the next element, gitea-dbpass, is a descriptive name provided by the stanza's author to describe the key's use and also provide the final file name.
Now that we have described the Gitea service in gitea_for_NixOps.nix and the required credentials in secrets.nix we need to pull it all together for deployment. We achieve that in this case by importing both these files into our existing host definition:
myhost.nix:
{
myhost =
{ config, pkgs, lib, ... }:
{
imports =
[
./secrets.nix # Import our secrets
./version-management/gitea_for_NixOps.nix # Import Gitea
];
deployment.targetHost = "192.168.132.123"; # Target's IP address
networking.hostName = "myhost"; # Target's hostname.
};
}
To deploy Gitea to your NixOps managed host, you merely run the deploy command for your already configured host and deployment, which would look like this:
$ nixops deploy -d MyDeployment --include myhost
You should now have a running Gitea server and be able to create an initial admin user.
In my nixos-examples repo I have a version-management directory with some example files and a README with information and instructions. You can use two of the files to generate a Gitea VM to take a quick poke around. There is also an example of how you can deploy Gitea in production using NixOps, as per this post.
If you wish to dig a little deeper, I have my production deployment over at mio-ops.
It's fairly well documented how to replace a NixOS service in the stable channel with one from the unstable channel.
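For reference, that documented swap usually looks something like this sketch (assuming an "unstable" channel has been added with nix-channel, and that the service module exposes a package option, as the Hydra module does):
{ config, pkgs, ... }:
let
  unstable = import <unstable> { config = config.nixpkgs.config; };
in
{
  services.hydra.package = unstable.hydra;
}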
What if you need to build from an upstream branch that's not in either the stable or unstable channels? This is how I go about it, including building a VM in which to test the result.
I specifically wanted to test the new hydra-notify service, so I need to replace the existing Hydra module in nixpkgs with the one from the upstream source. Start by checking out the hydra source:
$ git clone https://github.com/NixOS/hydra.git
We can configure Nix to replace the nixpkgs version of Hydra with a build from hydra/master.
You can see a completed example in hydra_notify.nix but the key points are that we need to disable Hydra in the standard Nix packages:
disabledModules = [ "services/continuous-integration/hydra/default.nix" ];
as well as import the module definition from the Hydra source we downloaded:
imports =
[
"/path/to/source/hydra/hydra-module.nix"
];
and we need to switch services.hydra
to services.hydra-dev
in two
locations:
networking.firewall.allowedTCPPorts = [ config.services.hydra-dev.port 80 443 ];
services.hydra-dev = {
...
};
With these three changes, we have swapped out the Hydra in nixpkgs for one to
be built from the upstream source in hydra_notify.nix
.
Next we need to build a configuration for our VM that uses the replaced Hydra
module declared in hydra_notify.nix
. This is
hydra_vm.nix,
which is a simple NixOS configuration, which importantly includes our replaced
Hydra module:
imports =
[
./hydra_notify.nix
];
To give this a run yourself, check out nixos-examples and change to the services/hydra_upstream directory:
$ git clone https://code.mcwhirter.io/craige/nixos-examples.git
$ cd nixos-examples/services/hydra_upstream
After updating the path to Hydra's source, we can then build the VM with:
$ nix-build '<nixpkgs/nixos>' -A vm --arg configuration ./hydra_vm.nix
Before launching the VM, I like to make sure that it is provided with enough RAM and both hydra's web UI and SSH are available by exporting the below Qemu options:
$ export QEMU_OPTS="-m 4192"
$ export QEMU_NET_OPTS="hostfwd=tcp::10443-:443,hostfwd=tcp::10022-:22"
So now we're ready to launch the VM:
$ ./result/bin/run-hydra-notifications-vm
Once it has booted, you should be able to ssh nixos@localhost -p 10022
and
hit the Hydra web UI at localhost:10443
.
Once you've logged into the VM you can run systemctl status hydra-notify
to
check that you're running upstream Hydra.
Today we are releasing a new feature we’ve been working on for the last couple of weeks.
As a developer you often bump a dependency version or add a new dependency.
Every time your package files change, you need to regenerate the Nix expressions that describe how the project is built.
There are two ways to regenerate Nix expressions in that case:
Outside the Nix domain, possibly with an automated script and commands like bundix, cabal2nix or yarn2nix. This quickly grows from a nuisance to a maintenance headache as your git repository grows in size due to generated artifacts. It requires special care when diffing, merging, etc.
Let Nix generate Nix expressions during the build. Sounds simple, but it’s quite subtle.
Additionally, Nixpkgs builds forbid option (2), which leads to manual work.
As of today, Hercules natively supports option (2). Let’s dig into the subtleties.
The Nix language describes how software is built, which happens in two phases.
The first phase is called evaluation:
Evaluation takes a Nix expression and results in a dependency tree of derivations.
A derivation is a set of instructions how to build software.
The second phase is called realization:
Realizing a derivation is the process of building. The builder is usually a shell script, although any executable can be specified.
Since a derivation describes all the necessary inputs, the result is guaranteed to be deterministic.
This begs the question: why have an intermediate representation (derivations)? There are a couple of reasons:
Evaluation can include significant computation. It can range from a couple of seconds to minutes, or even an hour for huge projects. We want to evaluate only once, then distribute derivations to multiple machines for speedup and realize them as we traverse the graph of dependencies.
Evaluation can produce derivations that are built on different platforms or require some specific hardware. By copying the derivations to these machines, we don’t need to worry about running evaluation on those specific machines.
In case of a build failure, it allows the machine to retry immediately instead of re-evaluating again.
All in all, derivation files save us computation compared to evaluating more than once.
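You can observe the two phases separately on the command line. A small illustration (the file name is hypothetical and the store paths are abbreviated):
$ nix-instantiate default.nix                        # evaluation: writes .drv files
/nix/store/...-myproject.drv
$ nix-store --realise /nix/store/...-myproject.drv   # realization: runs the build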
Sometimes it’s worth mixing the two phases.
A build produces Nix expressions that we now would like to evaluate, but we’re already in the realization phase, so we have:
This is called Import-From-Derivation or shortly, IFD.
let
pkgs = import <nixpkgs> {};
getHello = pkgs.runCommand "get-hello.nix" {} ''
# Call any command here to generate an expression. A simple example:
echo 'pkgs: pkgs.hello' > $out
'';
in import getHello pkgs
In the last line we’re importing from getHello, which is a Nix derivation that we need to build before evaluation can continue to use the pkgs: pkgs.hello Nix expression in the output.
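If you save the example above as ifd-example.nix (a name chosen here for illustration), a single command exercises both phases: Nix realizes the get-hello.nix derivation mid-evaluation, imports it, and then builds hello:
$ nix-build ifd-example.nix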
haskell.nix is an alternative Haskell infrastructure for Nixpkgs.
Given a Haskell project with a Cabal file (Haskell’s package manager),
drop the following default.nix
into root of your repository:
let
pkgs = import (import ./nix/sources.nix).nixpkgs {};
haskell = import (import ./nix/sources.nix)."haskell.nix" { inherit pkgs; };
plan = haskell.callCabalProjectToNix
{ index-state = "2019-08-26T00:00:00Z"; src = pkgs.lib.cleanSource ./.;};
pkgSet = haskell.mkCabalProjectPkgSet {
plan-pkgs = import plan;
pkg-def-extras = [];
modules = [];
};
in pkgSet.config.hsPkgs.mypackage.components.all
Once you replace mypackage with the name from your Cabal file, your whole dependency tree is deterministic: the package index is pinned to a timestamp using index-state, and your local folder is pinned by its hash using ./. .
Haskell.nix will generate all the expressions describing how to build each package on the fly, via import-from-derivation.
Using different platforms (typically Linux and macOS) during IFD is one of the reasons why upstream forbids IFD, since their evaluator is running on Linux and it can’t build for macOS.
Our CI dispatches all builds during IFD back to our scheduler, so it’s able to dispatch those builds to either specific platform or specific hardware.
IFD support is seamless. There’s nothing extra to configure.
In case of build errors during evaluation, the UI will show you all the details, including the build log.
In order to use IFD support you will need to upgrade to hercules-ci-agent-0.4.0.
Some Nix tools already embrace IFD, such as haskell.nix, yarn2nix (Node.js), pnpm2nix (Node.js) and opam2nix (OCaml).
We encourage more language tools to take advantage of this feature.
Currently, Nix evaluation is single-threaded, and IFD evaluation blocks until the builds are done. We have some ideas to make IFD concurrent.
We believe this is a huge step forward to simplify day-to-day Nix development.
2019-09-08: Add opam2nix