Project import generated by Copybara.

GitOrigin-RevId: a5cc7d3197705f933d88e97c0c61849219ce76c1
This commit is contained in:
Default email 2020-07-18 18:06:22 +02:00
parent 6bdfd4c3f2
commit 1ffc76754d
2008 changed files with 45763 additions and 29009 deletions


@ -9,17 +9,33 @@ exemptLabels:
# Label to use when marking an issue as stale
staleLabel: "2.status: stale"
# Comment to post when marking an issue as stale. Set to `false` to disable
- markComment: |
-   Thank you for your contributions.
-   This has been automatically marked as stale because it has had no activity for 180 days.
-   If this is still important to you, we ask that you leave a comment below. Your comment can be as simple as "still important to me". This lets people see that at least one person still cares about this. Someone will have to do this at most twice a year if there is no other activity.
-   Here are suggestions that might help resolve this more quickly:
-   1. Search for maintainers and people that previously touched the related code and @ mention them in a comment.
-   2. Ask on the [NixOS Discourse](https://discourse.nixos.org/).
-   3. Ask on the [#nixos channel](irc://irc.freenode.net/#nixos) on [irc.freenode.net](https://freenode.net).
+ pulls:
+   markComment: |
+     Hello, I'm a bot and I thank you in the name of the community for your contributions.
+     Nixpkgs is a busy repository, and unfortunately sometimes PRs get left behind for too long. Nevertheless, we'd like to help committers reach the PRs that are still important. This PR has had no activity for 180 days, and so I marked it as stale, but you can rest assured it will never be closed by a non-human.
+     If this is still important to you and you'd like to remove the stale label, we ask that you leave a comment. Your comment can be as simple as "still important to me". But there's a bit more you can do:
+     If you received an approval by an unprivileged maintainer and you are just waiting for a merge, you can @ mention someone with merge permissions and ask them to help. You might be able to find someone relevant by using [Git blame](https://git-scm.com/docs/git-blame) on the relevant files, or via [GitHub's web interface](https://docs.github.com/en/github/managing-files-in-a-repository/tracking-changes-in-a-file). You can see if someone's a member of the [nixpkgs-committers](https://github.com/orgs/NixOS/teams/nixpkgs-committers) team, by hovering with the mouse over their username on the web interface, or by searching them directly on [the list](https://github.com/orgs/NixOS/teams/nixpkgs-committers).
+     If your PR wasn't reviewed at all, it might help to find someone who's perhaps a user of the package or module you are changing, or alternatively, ask once more for a review by the maintainer of the package/module this is about. If you don't know any, you can use [Git blame](https://git-scm.com/docs/git-blame) on the relevant files, or [GitHub's web interface](https://docs.github.com/en/github/managing-files-in-a-repository/tracking-changes-in-a-file) to find someone who touched the relevant files in the past.
+     If your PR has had reviews and nevertheless got stale, make sure you've responded to all of the reviewer's requests / questions. Usually when PR authors show responsibility and dedication, reviewers (privileged or not) show dedication as well. If you've pushed a change, it's possible the reviewer wasn't notified about your push via email, so you can always [officially request them for a review](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/requesting-a-pull-request-review), or just @ mention them and say you've addressed their comments.
+     Lastly, you can always ask for help at [our Discourse Forum](https://discourse.nixos.org/), or more specifically, [at this thread](https://discourse.nixos.org/t/prs-in-distress/3604) or at [#nixos' IRC channel](https://webchat.freenode.net/#nixos).
+ issues:
+   markComment: |
+     Hello, I'm a bot and I thank you in the name of the community for opening this issue.
+     To help our human contributors focus on the most-relevant reports, I check up on old issues to see if they're still relevant. This issue has had no activity for 180 days, and so I marked it as stale, but you can rest assured it will never be closed by a non-human.
+     The community would appreciate your effort in checking if the issue is still valid. If it isn't, please close it.
+     If the issue persists, and you'd like to remove the stale label, you simply need to leave a comment. Your comment can be as simple as "still important to me". If you'd like it to get more attention, you can ask for help by searching for maintainers and people that previously touched related code and @ mention them in a comment. You can use [Git blame](https://git-scm.com/docs/git-blame) or [GitHub's web interface](https://docs.github.com/en/github/managing-files-in-a-repository/tracking-changes-in-a-file) on the relevant files to find them.
+     Lastly, you can always ask for help at [our Discourse Forum](https://discourse.nixos.org/) or at [#nixos' IRC channel](https://webchat.freenode.net/#nixos).
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: false


@ -166,7 +166,7 @@ hello latest de2bf4786de6 About a minute ago 25.2MB
<title>buildLayeredImage</title>
<para>
- Create a Docker image with many of the store paths being on their own layer to improve sharing between images.
+ Create a Docker image with many of the store paths being on their own layer to improve sharing between images. The image is realized into the Nix store as a gzipped tarball. Depending on the intended usage, many users might prefer to use <function>streamLayeredImage</function> instead, which this function uses internally.
</para>
<variablelist>
@ -327,6 +327,27 @@ pkgs.dockerTools.buildLayeredImage {
</section>
</section>
<section xml:id="ssec-pkgs-dockerTools-streamLayeredImage">
<title>streamLayeredImage</title>
<para>
Builds a script which, when run, will stream an uncompressed tarball of a Docker image to stdout. The arguments to this function are as for <function>buildLayeredImage</function>. This method of constructing an image does not realize the image into the Nix store, so it saves on IO and disk/cache space, particularly with large images.
</para>
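<para>
A minimal invocation might look like this (a sketch; the attribute values are illustrative, and the arguments mirror <function>buildLayeredImage</function>):
<programlisting>
pkgs.dockerTools.streamLayeredImage {
  name = "hello";
  tag = "latest";
  contents = [ pkgs.hello ];
  config.Cmd = [ "/bin/hello" ];
}
</programlisting>
</para>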
<para>
The image produced by running the output script can be piped directly into <command>docker load</command>, to load it into the local docker daemon:
<screen><![CDATA[
$(nix-build) | docker load
]]></screen>
</para>
<para>
Alternatively, the image can be piped via <command>gzip</command> into <command>skopeo</command>, e.g. to copy it into a registry:
<screen><![CDATA[
$(nix-build) | gzip --fast | skopeo copy docker-archive:/dev/stdin docker://some_docker_registry/myimage:tag
]]></screen>
</para>
</section>
<section xml:id="ssec-pkgs-dockerTools-fetchFromRegistry">
<title>pullImage</title>


@ -4,34 +4,36 @@
<title>Citrix Workspace</title>
<para>
+ The <link xlink:href="https://www.citrix.com/products/workspace-app/">Citrix Workspace App</link> is a remote desktop viewer which provides access to <link xlink:href="https://www.citrix.com/products/xenapp-xendesktop/">XenDesktop</link> installations.
- <note>
<para>
Please note that the <literal>citrix_receiver</literal> package has been deprecated since its development was <link xlink:href="https://docs.citrix.com/en-us/citrix-workspace-app.html">discontinued by upstream</link> and has been replaced by <link xlink:href="https://www.citrix.com/products/workspace-app/">the citrix workspace app</link>.
</para>
</note>
<link xlink:href="https://www.citrix.com/products/receiver/">Citrix Receiver</link> and <link xlink:href="https://www.citrix.com/products/workspace-app/">Citrix Workspace App</link> are a remote desktop viewers which provide access to <link xlink:href="https://www.citrix.com/products/xenapp-xendesktop/">XenDesktop</link> installations.
</para>
<section xml:id="sec-citrix-base">
<title>Basic usage</title>
<para>
- The tarball archive needs to be downloaded manually as the license agreements of the vendor for <link xlink:href="https://www.citrix.com/downloads/citrix-receiver/">Citrix Receiver</link> or <link xlink:href="https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html">Citrix Workspace</link> need to be accepted first. Then run <command>nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz</command>. With the archive available in the store the package can be built and installed with Nix.
+ The tarball archive needs to be downloaded manually as the license agreements of the vendor for <link xlink:href="https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html">Citrix Workspace</link> need to be accepted first. Then run <command>nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz</command>. With the archive available in the store the package can be built and installed with Nix.
</para>
</section>
- <warning>
+ <section xml:id="sec-citrix-selfservice">
- <title>Caution with <command>nix-shell</command> installs</title>
+ <title>Citrix Selfservice</title>
<para>
- It's recommended to install <literal>Citrix Receiver</literal> and/or <literal>Citrix Workspace</literal> using <literal>nix-env -i</literal> or globally to ensure that the <literal>.desktop</literal> files are installed properly into <literal>$XDG_CONFIG_DIRS</literal>. Otherwise it won't be possible to open <literal>.ica</literal> files automatically from the browser to start a Citrix connection.
+ The <link xlink:href="https://support.citrix.com/article/CTX200337">selfservice</link> is an application for managing Citrix desktops and applications. Please note that this feature only works with <package>citrix_workspace_20_06_0</package> and later versions.
</para>
- </warning>
+ <para>
In order to set this up, you first have to <link xlink:href="https://its.uiowa.edu/support/article/102186">download the <literal>.cr</literal> file from the Netscaler Gateway</link>. After that you can configure the <command>selfservice</command> like this:
<screen>
<prompt>$ </prompt>storebrowse -C ~/Downloads/receiverconfig.cr
<prompt>$ </prompt>selfservice
</screen>
</para>
</section>
<section xml:id="sec-citrix-custom-certs">
<title>Custom certificates</title>
<para>
- The <literal>Citrix Workspace App</literal> in <literal>nixpkgs</literal> trust several certificates <link xlink:href="https://curl.haxx.se/docs/caextract.html">from the Mozilla database</link> by default. However several companies using Citrix might require their own corporate certificate. On distros with imperative packaging these certs can be stored easily in <link xlink:href="https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/"><literal>$ICAROOT</literal></link>, however this directory is a store path in <literal>nixpkgs</literal>. In order to work around this issue the package provides a simple mechanism to add custom certificates without rebuilding the entire package using <literal>symlinkJoin</literal>:
+ The <literal>Citrix Workspace App</literal> in <literal>nixpkgs</literal> trusts several certificates <link xlink:href="https://curl.haxx.se/docs/caextract.html">from the Mozilla database</link> by default. However several companies using Citrix might require their own corporate certificate. On distros with imperative packaging these certs can be stored easily in <link xlink:href="https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/"><literal>$ICAROOT</literal></link>, however this directory is a store path in <literal>nixpkgs</literal>. In order to work around this issue the package provides a simple mechanism to add custom certificates without rebuilding the entire package using <literal>symlinkJoin</literal>:
<programlisting>
<![CDATA[with import <nixpkgs> { config.allowUnfree = true; };
let extraCerts = [ ./custom-cert-1.pem ./custom-cert-2.pem /* ... */ ]; in


@ -3,159 +3,193 @@
xml:id="sec-language-perl"> xml:id="sec-language-perl">
<title>Perl</title> <title>Perl</title>
<para> <section xml:id="ssec-perl-running">
Nixpkgs provides a function <varname>buildPerlPackage</varname>, a generic package builder function for any Perl package that has a standard <varname>Makefile.PL</varname>. Its implemented in <link <title>Running perl programs on the shell</title>
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/perl-modules/generic"><filename>pkgs/development/perl-modules/generic</filename></link>.
</para>
<para>
Perl packages from CPAN are defined in <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/perl-packages.nix"><filename>pkgs/top-level/perl-packages.nix</filename></link>, rather than <filename>pkgs/all-packages.nix</filename>. Most Perl packages are so straight-forward to build that they are defined here directly, rather than having a separate function for each package called from <filename>perl-packages.nix</filename>. However, more complicated packages should be put in a separate file, typically in <filename>pkgs/development/perl-modules</filename>. Here is an example of the former:
<programlisting>
ClassC3 = buildPerlPackage rec {
name = "Class-C3-0.21";
src = fetchurl {
url = "mirror://cpan/authors/id/F/FL/FLORA/${name}.tar.gz";
sha256 = "1bl8z095y4js66pwxnm7s853pi9czala4sqc743fdlnk27kq94gz";
};
};
</programlisting>
Note the use of <literal>mirror://cpan/</literal>, and the <literal>${name}</literal> in the URL definition to ensure that the name attribute is consistent with the source that were actually downloading. Perl packages are made available in <filename>all-packages.nix</filename> through the variable <varname>perlPackages</varname>. For instance, if you have a package that needs <varname>ClassC3</varname>, you would typically write
<programlisting>
foo = import ../path/to/foo.nix {
inherit stdenv fetchurl ...;
inherit (perlPackages) ClassC3;
};
</programlisting>
in <filename>all-packages.nix</filename>. You can test building a Perl package as follows:
<screen>
<prompt>$ </prompt>nix-build -A perlPackages.ClassC3
</screen>
<varname>buildPerlPackage</varname> adds <literal>perl-</literal> to the start of the name attribute, so the package above is actually called <literal>perl-Class-C3-0.21</literal>. So to install it, you can say:
<screen>
<prompt>$ </prompt>nix-env -i perl-Class-C3
</screen>
(Of course you can also install using the attribute name: <literal>nix-env -i -A perlPackages.ClassC3</literal>.)
</para>
<para>
So what does <varname>buildPerlPackage</varname> do? It does the following:
<orderedlist>
<listitem>
<para>
In the configure phase, it calls <literal>perl Makefile.PL</literal> to generate a Makefile. You can set the variable <varname>makeMakerFlags</varname> to pass flags to <filename>Makefile.PL</filename>
</para>
</listitem>
<listitem>
<para>
It adds the contents of the <envar>PERL5LIB</envar> environment variable to <literal>#! .../bin/perl</literal> line of Perl scripts as <literal>-I<replaceable>dir</replaceable></literal> flags. This ensures that a script can find its dependencies. (This can cause this shebang line to become too long for Darwin to handle; see the note below.)
</para>
</listitem>
<listitem>
<para>
In the fixup phase, it writes the propagated build inputs (<varname>propagatedBuildInputs</varname>) to the file <filename>$out/nix-support/propagated-user-env-packages</filename>. <command>nix-env</command> recursively installs all packages listed in this file when you install a package that has it. This ensures that a Perl package can find its dependencies.
</para>
</listitem>
</orderedlist>
</para>
<para>
<varname>buildPerlPackage</varname> is built on top of <varname>stdenv</varname>, so everything can be customised in the usual way. For instance, the <literal>BerkeleyDB</literal> module has a <varname>preConfigure</varname> hook to generate a configuration file used by <filename>Makefile.PL</filename>:
<programlisting>
{ buildPerlPackage, fetchurl, db }:
buildPerlPackage rec {
name = "BerkeleyDB-0.36";
src = fetchurl {
url = "mirror://cpan/authors/id/P/PM/PMQS/${name}.tar.gz";
sha256 = "07xf50riarb60l1h6m2dqmql8q5dij619712fsgw7ach04d8g3z1";
};
preConfigure = ''
echo "LIB = ${db.out}/lib" > config.in
echo "INCLUDE = ${db.dev}/include" >> config.in
'';
}
</programlisting>
</para>
<para>
Dependencies on other Perl packages can be specified in the <varname>buildInputs</varname> and <varname>propagatedBuildInputs</varname> attributes. If something is exclusively a build-time dependency, use <varname>buildInputs</varname>; if its (also) a runtime dependency, use <varname>propagatedBuildInputs</varname>. For instance, this builds a Perl module that has runtime dependencies on a bunch of other modules:
<programlisting>
ClassC3Componentised = buildPerlPackage rec {
name = "Class-C3-Componentised-1.0004";
src = fetchurl {
url = "mirror://cpan/authors/id/A/AS/ASH/${name}.tar.gz";
sha256 = "0xql73jkcdbq4q9m0b0rnca6nrlvf5hyzy8is0crdk65bynvs8q1";
};
propagatedBuildInputs = [
ClassC3 ClassInspector TestException MROCompat
];
};
</programlisting>
</para>
<para>
On Darwin, if a script has too many <literal>-I<replaceable>dir</replaceable></literal> flags in its first line (its “shebang line”), it will not run. This can be worked around by calling the <literal>shortenPerlShebang</literal> function from the <literal>postInstall</literal> phase:
<programlisting>
{ stdenv, buildPerlPackage, fetchurl, shortenPerlShebang }:
ImageExifTool = buildPerlPackage {
pname = "Image-ExifTool";
version = "11.50";
src = fetchurl {
url = "https://www.sno.phy.queensu.ca/~phil/exiftool/Image-ExifTool-11.50.tar.gz";
sha256 = "0d8v48y94z8maxkmw1rv7v9m0jg2dc8xbp581njb6yhr7abwqdv3";
};
buildInputs = stdenv.lib.optional stdenv.isDarwin shortenPerlShebang;
postInstall = stdenv.lib.optional stdenv.isDarwin ''
shortenPerlShebang $out/bin/exiftool
'';
};
</programlisting>
This will remove the <literal>-I</literal> flags from the shebang line, rewrite them in the <literal>use lib</literal> form, and put them on the next line instead. This function can be given any number of Perl scripts as arguments; it will modify them in-place.
</para>
<section xml:id="ssec-generation-from-CPAN">
<title>Generation from CPAN</title>
<para>
- Nix expressions for Perl packages can be generated (almost) automatically from CPAN. This is done by the program <command>nix-generate-from-cpan</command>, which can be installed as follows:
+ When executing a Perl script, it is possible you get an error such as <literal>./myscript.pl: bad interpreter: /usr/bin/perl: no such file or directory</literal>. This happens when the script expects Perl to be installed at <filename>/usr/bin/perl</filename>, which is not the case when using Perl from nixpkgs. You can fix the script by changing the first line to:
<programlisting>
#!/usr/bin/env perl
</programlisting>
to take the Perl installation from the <literal>PATH</literal> environment variable, or invoke Perl directly with:
<screen>
<prompt>$ </prompt>perl ./myscript.pl
</screen>
</para>
- <screen>
- <prompt>$ </prompt>nix-env -i nix-generate-from-cpan
- </screen>
+ <para>
+ When the script is using a Perl library that is not installed globally, you might get an error such as <literal>Can't locate DB_File.pm in @INC (you may need to install the DB_File module)</literal>. In that case, you can use <command>nix-shell</command> to start an ad-hoc shell with that library installed, for instance:
+ <screen>
<prompt>$ </prompt>nix-shell -p perl perlPackages.DBFile --run ./myscript.pl
</screen>
</para>
<para>
+ If you are always using the script in places where <command>nix-shell</command> is available, you can embed the <command>nix-shell</command> invocation in the shebang like this:
+ <programlisting>
+ #!/usr/bin/env nix-shell
+ #! nix-shell -i perl -p perl perlPackages.DBFile
+ </programlisting>
- This program takes a Perl module name, looks it up on CPAN, fetches and unpacks the corresponding package, and prints a Nix expression on standard output. For example:
- <screen>
- <prompt>$ </prompt>nix-generate-from-cpan XML::Simple
- XMLSimple = buildPerlPackage rec {
-   name = "XML-Simple-2.22";
src = fetchurl {
url = "mirror://cpan/authors/id/G/GR/GRANTM/${name}.tar.gz";
sha256 = "b9450ef22ea9644ae5d6ada086dc4300fa105be050a2030ebd4efd28c198eb49";
};
propagatedBuildInputs = [ XMLNamespaceSupport XMLSAX XMLSAXExpat ];
meta = {
description = "An API for simple XML files";
license = with stdenv.lib.licenses; [ artistic1 gpl1Plus ];
};
};
</screen>
The output can be pasted into <filename>pkgs/top-level/perl-packages.nix</filename> or wherever else you need it.
</para>
</section>
- <section xml:id="ssec-perl-cross-compilation">
- <title>Cross-compiling modules</title>
+ <section xml:id="ssec-perl-packaging">
+ <title>Packaging Perl programs</title>
<para>
- Nixpkgs has experimental support for cross-compiling Perl modules. In many cases, it will just work out of the box, even for modules with native extensions. Sometimes, however, the Makefile.PL for a module may (indirectly) import a native module. In that case, you will need to make a stub for that module that will satisfy the Makefile.PL and install it into <filename>lib/perl5/site_perl/cross_perl/${perl.version}</filename>. See the <varname>postInstall</varname> for <varname>DBI</varname> for an example.
+ Nixpkgs provides a function <varname>buildPerlPackage</varname>, a generic package builder function for any Perl package that has a standard <varname>Makefile.PL</varname>. It's implemented in <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/perl-modules/generic"><filename>pkgs/development/perl-modules/generic</filename></link>.
</para>
<para>
Perl packages from CPAN are defined in <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/perl-packages.nix"><filename>pkgs/top-level/perl-packages.nix</filename></link>, rather than <filename>pkgs/all-packages.nix</filename>. Most Perl packages are so straight-forward to build that they are defined here directly, rather than having a separate function for each package called from <filename>perl-packages.nix</filename>. However, more complicated packages should be put in a separate file, typically in <filename>pkgs/development/perl-modules</filename>. Here is an example of the former:
<programlisting>
ClassC3 = buildPerlPackage rec {
name = "Class-C3-0.21";
src = fetchurl {
url = "mirror://cpan/authors/id/F/FL/FLORA/${name}.tar.gz";
sha256 = "1bl8z095y4js66pwxnm7s853pi9czala4sqc743fdlnk27kq94gz";
};
};
</programlisting>
Note the use of <literal>mirror://cpan/</literal>, and the <literal>${name}</literal> in the URL definition to ensure that the name attribute is consistent with the source that we're actually downloading. Perl packages are made available in <filename>all-packages.nix</filename> through the variable <varname>perlPackages</varname>. For instance, if you have a package that needs <varname>ClassC3</varname>, you would typically write
<programlisting>
foo = import ../path/to/foo.nix {
inherit stdenv fetchurl ...;
inherit (perlPackages) ClassC3;
};
</programlisting>
in <filename>all-packages.nix</filename>. You can test building a Perl package as follows:
<screen>
<prompt>$ </prompt>nix-build -A perlPackages.ClassC3
</screen>
<varname>buildPerlPackage</varname> adds <literal>perl-</literal> to the start of the name attribute, so the package above is actually called <literal>perl-Class-C3-0.21</literal>. So to install it, you can say:
<screen>
<prompt>$ </prompt>nix-env -i perl-Class-C3
</screen>
(Of course you can also install using the attribute name: <literal>nix-env -i -A perlPackages.ClassC3</literal>.)
</para>
<para>
So what does <varname>buildPerlPackage</varname> do? It does the following:
<orderedlist>
<listitem>
<para>
In the configure phase, it calls <literal>perl Makefile.PL</literal> to generate a Makefile. You can set the variable <varname>makeMakerFlags</varname> to pass flags to <filename>Makefile.PL</filename>.
</para>
</listitem>
<listitem>
<para>
It adds the contents of the <envar>PERL5LIB</envar> environment variable to the <literal>#! .../bin/perl</literal> line of Perl scripts as <literal>-I<replaceable>dir</replaceable></literal> flags. This ensures that a script can find its dependencies. (This can cause this shebang line to become too long for Darwin to handle; see the note below.)
</para>
</listitem>
<listitem>
<para>
In the fixup phase, it writes the propagated build inputs (<varname>propagatedBuildInputs</varname>) to the file <filename>$out/nix-support/propagated-user-env-packages</filename>. <command>nix-env</command> recursively installs all packages listed in this file when you install a package that has it. This ensures that a Perl package can find its dependencies.
</para>
</listitem>
</orderedlist>
</para>
<para>
<varname>buildPerlPackage</varname> is built on top of <varname>stdenv</varname>, so everything can be customised in the usual way. For instance, the <literal>BerkeleyDB</literal> module has a <varname>preConfigure</varname> hook to generate a configuration file used by <filename>Makefile.PL</filename>:
<programlisting>
{ buildPerlPackage, fetchurl, db }:
buildPerlPackage rec {
name = "BerkeleyDB-0.36";
src = fetchurl {
url = "mirror://cpan/authors/id/P/PM/PMQS/${name}.tar.gz";
sha256 = "07xf50riarb60l1h6m2dqmql8q5dij619712fsgw7ach04d8g3z1";
};
preConfigure = ''
echo "LIB = ${db.out}/lib" > config.in
echo "INCLUDE = ${db.dev}/include" >> config.in
'';
}
</programlisting>
</para>
<para>
Dependencies on other Perl packages can be specified in the <varname>buildInputs</varname> and <varname>propagatedBuildInputs</varname> attributes. If something is exclusively a build-time dependency, use <varname>buildInputs</varname>; if it's (also) a runtime dependency, use <varname>propagatedBuildInputs</varname>. For instance, this builds a Perl module that has runtime dependencies on a bunch of other modules:
<programlisting>
ClassC3Componentised = buildPerlPackage rec {
name = "Class-C3-Componentised-1.0004";
src = fetchurl {
url = "mirror://cpan/authors/id/A/AS/ASH/${name}.tar.gz";
sha256 = "0xql73jkcdbq4q9m0b0rnca6nrlvf5hyzy8is0crdk65bynvs8q1";
};
propagatedBuildInputs = [
ClassC3 ClassInspector TestException MROCompat
];
};
</programlisting>
</para>
<para>
On Darwin, if a script has too many <literal>-I<replaceable>dir</replaceable></literal> flags in its first line (its “shebang line”), it will not run. This can be worked around by calling the <literal>shortenPerlShebang</literal> function from the <literal>postInstall</literal> phase:
<programlisting>
{ stdenv, buildPerlPackage, fetchurl, shortenPerlShebang }:
ImageExifTool = buildPerlPackage {
pname = "Image-ExifTool";
version = "11.50";
src = fetchurl {
url = "https://www.sno.phy.queensu.ca/~phil/exiftool/Image-ExifTool-11.50.tar.gz";
sha256 = "0d8v48y94z8maxkmw1rv7v9m0jg2dc8xbp581njb6yhr7abwqdv3";
};
buildInputs = stdenv.lib.optional stdenv.isDarwin shortenPerlShebang;
postInstall = stdenv.lib.optional stdenv.isDarwin ''
shortenPerlShebang $out/bin/exiftool
'';
};
</programlisting>
This will remove the <literal>-I</literal> flags from the shebang line, rewrite them in the <literal>use lib</literal> form, and put them on the next line instead. This function can be given any number of Perl scripts as arguments; it will modify them in-place.
</para>
<section xml:id="ssec-generation-from-CPAN">
<title>Generation from CPAN</title>
<para>
Nix expressions for Perl packages can be generated (almost) automatically from CPAN. This is done by the program <command>nix-generate-from-cpan</command>, which can be installed as follows:
</para>
<screen>
<prompt>$ </prompt>nix-env -i nix-generate-from-cpan
</screen>
<para>
This program takes a Perl module name, looks it up on CPAN, fetches and unpacks the corresponding package, and prints a Nix expression on standard output. For example:
<screen>
<prompt>$ </prompt>nix-generate-from-cpan XML::Simple
XMLSimple = buildPerlPackage rec {
name = "XML-Simple-2.22";
src = fetchurl {
url = "mirror://cpan/authors/id/G/GR/GRANTM/${name}.tar.gz";
sha256 = "b9450ef22ea9644ae5d6ada086dc4300fa105be050a2030ebd4efd28c198eb49";
};
propagatedBuildInputs = [ XMLNamespaceSupport XMLSAX XMLSAXExpat ];
meta = {
description = "An API for simple XML files";
license = with stdenv.lib.licenses; [ artistic1 gpl1Plus ];
};
};
</screen>
The output can be pasted into <filename>pkgs/top-level/perl-packages.nix</filename> or wherever else you need it.
</para>
</section>
<section xml:id="ssec-perl-cross-compilation">
<title>Cross-compiling modules</title>
<para>
Nixpkgs has experimental support for cross-compiling Perl modules. In many cases, it will just work out of the box, even for modules with native extensions. Sometimes, however, the Makefile.PL for a module may (indirectly) import a native module. In that case, you will need to make a stub for that module that will satisfy the Makefile.PL and install it into <filename>lib/perl5/site_perl/cross_perl/${perl.version}</filename>. See the <varname>postInstall</varname> for <varname>DBI</varname> for an example.
</para>
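<para>
A rough sketch of the idea (illustrative only; the module name and stub contents are hypothetical, and the real <varname>DBI</varname> expression in Nixpkgs is more involved):
<programlisting>
postInstall = ''
  # Install a stub that is just good enough to satisfy a dependent module's
  # Makefile.PL when it is evaluated by the build Perl during cross-compilation.
  mkdir -p $out/lib/perl5/site_perl/cross_perl/${perl.version}
  echo 'package DBI; sub connect { undef } 1;' \
    > $out/lib/perl5/site_perl/cross_perl/${perl.version}/DBI.pm
'';
</programlisting>
</para>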
</section>
</section>
</section>


@ -155,17 +155,17 @@ hello-2.3 A program that produces a familiar, friendly greeting
<itemizedlist>
<listitem>
<para>
- Single license referenced by attribute (preferred) <literal>stdenv.lib.licenses.gpl3</literal>.
+ Single license referenced by attribute (preferred) <literal>stdenv.lib.licenses.gpl3Only</literal>.
</para>
</listitem>
<listitem>
<para>
- Single license referenced by its attribute shortName (frowned upon) <literal>"gpl3"</literal>.
+ Single license referenced by its attribute shortName (frowned upon) <literal>"gpl3Only"</literal>.
</para>
</listitem>
<listitem>
<para>
- Single license referenced by its attribute spdxId (frowned upon) <literal>"GPL-3.0"</literal>.
+ Single license referenced by its attribute spdxId (frowned upon) <literal>"GPL-3.0-only"</literal>.
</para>
</listitem>
<listitem>


@ -2018,6 +2018,9 @@ addEnvHooks "$hostOffset" myBashFunction
<para>
In certain situations you may want to run the main command (<command>autoPatchelf</command>) of the setup hook on a file or a set of directories instead of unconditionally patching all outputs. This can be done by setting the <varname>dontAutoPatchelf</varname> environment variable to a non-empty value.
</para>
<para>
By default <command>autoPatchelf</command> will fail as soon as any ELF file requires a dependency which cannot be resolved via the given build inputs. In some situations you might prefer to just leave missing dependencies unpatched and continue to patch the rest. This can be achieved by setting the <envar>autoPatchelfIgnoreMissingDeps</envar> environment variable to a non-empty value.
</para>
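<para>
As a sketch (package details are illustrative; the hook and the variable are the ones documented above):
<programlisting>
stdenv.mkDerivation {
  pname = "example-prebuilt";
  version = "1.0";
  src = ./example-prebuilt.tar.gz;
  nativeBuildInputs = [ autoPatchelfHook ];
  # Keep going even if some ELF dependencies cannot be resolved from the build inputs.
  autoPatchelfIgnoreMissingDeps = true;
}
</programlisting>
</para>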
<para>
The <command>autoPatchelf</command> command also recognizes a <parameter class="command">--no-recurse</parameter> command line flag, which prevents it from recursing into subdirectories.
</para>


@ -85,19 +85,19 @@
<title>Installing packages on unsupported systems</title>
<para>
- There are also two ways to try compiling a package which has been marked as unsuported for the given system.
+ There are also two ways to try compiling a package which has been marked as unsupported for the given system.
</para>
<itemizedlist>
<listitem>
<para>
- For allowing the build of a broken package once, you can use an environment variable for a single invocation of the nix tools:
+ For allowing the build of an unsupported package once, you can use an environment variable for a single invocation of the nix tools:
<programlisting>$ export NIXPKGS_ALLOW_UNSUPPORTED_SYSTEM=1</programlisting>
</para>
</listitem>
<listitem>
<para>
- For permanently allowing broken packages to be built, you may add <literal>allowUnsupportedSystem = true;</literal> to your user's configuration file, like this:
+ For permanently allowing unsupported packages to be built, you may add <literal>allowUnsupportedSystem = true;</literal> to your user's configuration file, like this:
<programlisting>
{
allowUnsupportedSystem = true;
@ -162,10 +162,10 @@
</programlisting>
</para>
<para>
- The following example configuration blacklists the <literal>gpl3</literal> and <literal>agpl3</literal> licenses:
+ The following example configuration blacklists the <literal>gpl3Only</literal> and <literal>agpl3Only</literal> licenses:
<programlisting>
{
- blacklistedLicenses = with stdenv.lib.licenses; [ agpl3 gpl3 ];
+ blacklistedLicenses = with stdenv.lib.licenses; [ agpl3Only gpl3Only ];
}
</programlisting>
</para>


@ -469,6 +469,7 @@ rec {
getBin = getOutput "bin";
getLib = getOutput "lib";
getDev = getOutput "dev";
getMan = getOutput "man";
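# Usage sketch (package purely illustrative): `getMan pkgs.openssl` evaluates to
# the "man" output of the derivation, falling back to its default output when
# no separate "man" output exists.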
/* Pick the outputs of packages to place in buildInputs */
chooseDevOutputs = drvs: builtins.map getDev drvs;


@ -77,7 +77,7 @@ let
genAttrs isDerivation toDerivation optionalAttrs
zipAttrsWithNames zipAttrsWith zipAttrs recursiveUpdateUntil
recursiveUpdate matchAttrs overrideExisting getOutput getBin
- getLib getDev chooseDevOutputs zipWithNames zip
+ getLib getDev getMan chooseDevOutputs zipWithNames zip
recurseIntoAttrs dontRecurseIntoAttrs;
inherit (lists) singleton forEach foldr fold foldl foldl' imap0 imap1
concatMap flatten remove findSingle findFirst any all count


@ -28,7 +28,7 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
fullName = "Academic Free License v3.0"; fullName = "Academic Free License v3.0";
}; };
agpl3 = spdx { agpl3Only = spdx {
spdxId = "AGPL-3.0-only"; spdxId = "AGPL-3.0-only";
fullName = "GNU Affero General Public License v3.0 only"; fullName = "GNU Affero General Public License v3.0 only";
}; };
@ -95,6 +95,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
fullName = ''BSD 2-clause "Simplified" License''; fullName = ''BSD 2-clause "Simplified" License'';
}; };
bsd2Patent = spdx {
spdxId = "BSD-2-Clause-Patent";
fullName = ''BSD-2-Clause Plus Patent License'';
};
bsd3 = spdx {
spdxId = "BSD-3-Clause";
fullName = ''BSD 3-clause "New" or "Revised" License'';
@ -276,12 +281,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
fullName = "European Union Public License 1.2";
};
- fdl11 = spdx {
+ fdl11Only = spdx {
spdxId = "GFDL-1.1-only";
fullName = "GNU Free Documentation License v1.1 only";
};
- fdl12 = spdx {
+ fdl12Only = spdx {
spdxId = "GFDL-1.2-only";
fullName = "GNU Free Documentation License v1.2 only";
};
@ -291,7 +296,7 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
fullName = "GNU Free Documentation License v1.2 or later";
};
- fdl13 = spdx {
+ fdl13Only = spdx {
spdxId = "GFDL-1.3-only";
fullName = "GNU Free Documentation License v1.3 only";
};
@ -322,7 +327,7 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
free = false;
};
- gpl1 = spdx {
+ gpl1Only = spdx {
spdxId = "GPL-1.0-only";
fullName = "GNU General Public License v1.0 only";
};
@ -332,7 +337,7 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
fullName = "GNU General Public License v1.0 or later";
};
- gpl2 = spdx {
+ gpl2Only = spdx {
spdxId = "GPL-2.0-only";
fullName = "GNU General Public License v2.0 only";
};
@ -357,7 +362,7 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
fullName = "GNU General Public License v2.0 or later";
};
- gpl3 = spdx {
+ gpl3Only = spdx {
spdxId = "GPL-3.0-only";
fullName = "GNU General Public License v3.0 only";
};
@ -432,7 +437,7 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
fullName = "JasPer License";
};
- lgpl2 = spdx {
+ lgpl2Only = spdx {
spdxId = "LGPL-2.0-only";
fullName = "GNU Library General Public License v2 only";
};
@ -442,7 +447,7 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
fullName = "GNU Library General Public License v2 or later";
};
- lgpl21 = spdx {
+ lgpl21Only = spdx {
spdxId = "LGPL-2.1-only";
fullName = "GNU Lesser General Public License v2.1 only";
};
@ -452,7 +457,7 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
fullName = "GNU Lesser General Public License v2.1 or later";
};
- lgpl3 = spdx {
+ lgpl3Only = spdx {
spdxId = "LGPL-3.0-only";
fullName = "GNU Lesser General Public License v3.0 only";
};
@ -462,6 +467,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
fullName = "GNU Lesser General Public License v3.0 or later";
};
lgpllr = spdx {
spdxId = "LGPLLR";
fullName = "Lesser General Public License For Linguistic Resources";
};
libpng = spdx {
spdxId = "Libpng";
fullName = "libpng License";
@ -482,6 +492,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
url = "https://opensource.franz.com/preamble.html";
};
llvm-exception = spdx {
spdxId = "LLVM-exception";
fullName = "LLVM Exception"; # LLVM exceptions to the Apache 2.0 License
};
lppl12 = spdx {
spdxId = "LPPL-1.2";
fullName = "LaTeX Project Public License v1.2";
@ -545,6 +560,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
fullName = "Non-Profit Open Software License 3.0";
};
obsidian = {
fullName = "Obsidian End User Agreement";
url = "https://obsidian.md/eula";
free = false;
};
ocamlpro_nc = {
fullName = "OCamlPro Non Commercial license version 1";
url = "https://alt-ergo.ocamlpro.com/http/alt-ergo-2.2.0/OCamlPro-Non-Commercial-License.pdf";
@ -761,4 +782,16 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
spdxId = "ZPL-2.1";
fullName = "Zope Public License 2.1";
};
} // {
# TODO: remove legacy aliases
agpl3 = lib.licenses.agpl3Only;
fdl11 = lib.licenses.fdl11Only;
fdl12 = lib.licenses.fdl12Only;
fdl13 = lib.licenses.fdl13Only;
gpl1 = lib.licenses.gpl1Only;
gpl2 = lib.licenses.gpl2Only;
gpl3 = lib.licenses.gpl3Only;
lgpl2 = lib.licenses.lgpl2Only;
lgpl21 = lib.licenses.lgpl21Only;
lgpl3 = lib.licenses.lgpl3Only;
}


@ -145,10 +145,14 @@ rec {
# packed-refs file, so we have to grep through it:
then
let fileContent = readFile packedRefsName;
- matchRef = match (".*\n([^\n ]*) " + file + "\n.*") fileContent;
- in if matchRef == null
+ matchRef = builtins.match "([a-z0-9]+) ${file}";
+ isRef = s: builtins.isString s && (matchRef s) != null;
+ # there is a bug in libstdc++ leading to stackoverflow for long strings:
+ # https://github.com/NixOS/nix/issues/2147#issuecomment-659868795
+ refs = builtins.filter isRef (builtins.split "\n" fileContent);
+ in if refs == []
then throw ("Could not find " + file + " in " + packedRefsName)
- else lib.head matchRef
+ else lib.head (matchRef (lib.head refs))
else throw ("Not a .git directory: " + path);
in readCommitFromFile "HEAD";


@ -283,6 +283,12 @@
githubId = 273837;
name = "Mateusz Czapliński";
};
akamaus = {
email = "dmitryvyal@gmail.com";
github = "akamaus";
githubId = 58955;
name = "Dmitry Vyal";
};
akaWolf = {
email = "akawolf0@gmail.com";
github = "akaWolf";
@ -633,6 +639,12 @@
githubId = 1296771;
name = "Anders Riutta";
};
arnarg = {
email = "arnarg@fastmail.com";
github = "arnarg";
githubId = 1291396;
name = "Arnar Ingason";
};
arnoldfarkas = {
email = "arnold.farkas@gmail.com";
github = "arnoldfarkas";
@ -1025,6 +1037,12 @@
githubId = 5718007;
name = "Bastian Köcher";
};
blaggacao = {
name = "David Arnold";
email = "dar@xoe.solutions";
github = "blaggacao";
githubId = 7548295;
};
blanky0230 = {
email = "blanky0230@gmail.com";
github = "blanky0230";
@ -1043,6 +1061,12 @@
githubId = 16330;
name = "Mathijs Kwik";
};
bmilanov = {
name = "Biser Milanov";
email = "bmilanov11+nixpkgs@gmail.com";
github = "bmilanov";
githubId = 30090366;
};
bobakker = {
email = "bobakk3r@gmail.com";
github = "bobakker";
@ -1133,6 +1157,12 @@
githubId = 7716744;
name = "Berno Strik";
};
breakds = {
email = "breakds@gmail.com";
github = "breakds";
githubId = 1111035;
name = "Break Yang";
};
brettlyons = {
email = "blyons@fastmail.com";
github = "brettlyons";
@ -1598,6 +1628,12 @@
githubId = 32609395;
name = "B YI";
};
conradmearns = {
email = "conradmearns+github@pm.me";
github = "ConradMearns";
githubId = 5510514;
name = "Conrad Mearns";
};
couchemar = {
email = "couchemar@yandex.ru";
github = "couchemar";
@ -1610,6 +1646,16 @@
githubId = 411324;
name = "Carles Pagès";
};
cpu = {
email = "daniel@binaryparadox.net";
github = "cpu";
githubId = 292650;
name = "Daniel McCarney";
keys = [{
longkeyid = "rsa2048/0x08FB2BFC470E75B4";
fingerprint = "8026 D24A A966 BF9C D3CD CB3C 08FB 2BFC 470E 75B4";
}];
};
craigem = {
email = "craige@mcwhirter.io";
github = "craigem";
@ -1634,6 +1680,16 @@
githubId = 1222362;
name = "Matías Lang";
};
CRTified = {
email = "carl.schneider+nixos@rub.de";
github = "CRTified";
githubId = 2440581;
name = "Carl Richard Theodor Schneider";
keys = [{
longkeyid = "rsa4096/0x45BCC1E2709B1788";
fingerprint = "2017 E152 BB81 5C16 955C E612 45BC C1E2 709B 1788";
}];
};
cryptix = {
email = "cryptix@riseup.net";
github = "cryptix";
@ -1772,10 +1828,22 @@
githubId = 4971975;
name = "Janne Heß";
};
"dasj19" = {
email = "daniel@serbanescu.dk";
github = "dasj19";
githubId = 7589338;
name = "Daniel Șerbănescu";
};
dasuxullebt = {
email = "christoph.senjak@googlemail.com";
name = "Christoph-Simon Senjak";
};
david-sawatzke = {
email = "d-nix@sawatzke.dev";
github = "david-sawatzke";
githubId = 11035569;
name = "David Sawatzke";
};
david50407 = {
email = "me@davy.tw";
github = "david50407";
@ -2632,6 +2700,12 @@
fingerprint = "F549 3B7F 9372 5578 FDD3 D0B8 A1BC 8428 323E CFE8";
}];
};
fionera = {
email = "nix@fionera.de";
github = "fionera";
githubId = 5741401;
name = "Tim Windelschmidt";
};
FireyFly = {
email = "nix@firefly.nu";
github = "FireyFly";
@ -2950,6 +3024,16 @@
githubId = 615606;
name = "Glenn Searby";
};
glittershark = {
name = "Griffin Smith";
email = "root@gws.fyi";
github = "glittershark";
githubId = 1481027;
keys = [{
longkeyid = "rsa2048/0x44EF5B5E861C09A7";
fingerprint = "0F11 A989 879E 8BBB FDC1 E236 44EF 5B5E 861C 09A7";
}];
};
gloaming = {
email = "ch9871@gmail.com";
github = "gloaming";
@ -3698,6 +3782,12 @@
githubId = 41977;
name = "Joachim Fasting";
};
joachimschmidt557 = {
email = "joachim.schmidt557@outlook.com";
github = "joachimschmidt557";
githubId = 28556218;
name = "Joachim Schmidt";
};
joamaki = {
email = "joamaki@gmail.com";
github = "joamaki";
@ -3879,6 +3969,12 @@
githubId = 4611077;
name = "Raymond Gauthier";
};
jschievink = {
email = "jonasschievink@gmail.com";
github = "jonas-schievink";
githubId = 1786438;
name = "Jonas Schievink";
};
jtcoolen = {
email = "jtcoolen@pm.me";
name = "Julien Coolen";
@ -4707,6 +4803,12 @@
githubId = 34683288;
name = "Luke Bentley-Fox";
};
lukegb = {
email = "nix@lukegb.com";
github = "lukegb";
githubId = 246745;
name = "Luke Granger-Brown";
};
lukego = {
email = "luke@snabb.co";
github = "lukego";
@ -5003,6 +5105,12 @@
githubId = 2971615;
name = "Marius Bergmann";
};
mcbeth = {
email = "mcbeth@broggs.org";
github = "mcbeth";
githubId = 683809;
name = "Jeffrey Brent McBeth";
};
mcmtroffaes = {
email = "matthias.troffaes@gmail.com";
github = "mcmtroffaes";
@ -5560,6 +5668,12 @@
githubId = 5047140;
name = "Victor Collod";
};
mupdt = {
email = "nix@pdtpartners.com";
github = "mupdt";
githubId = 25388474;
name = "Matej Urbas";
};
mvnetbiz = {
email = "mvnetbiz@gmail.com";
github = "mvnetbiz";
@ -6296,6 +6410,12 @@
fingerprint = "240B 57DE 4271 2480 7CE3 EAC8 4F74 D536 1C4C A31E";
}];
};
priegger = {
email = "philipp@riegger.name";
github = "priegger";
githubId = 228931;
name = "Philipp Riegger";
};
prikhi = {
email = "pavan.rikhi@gmail.com";
github = "prikhi";
@ -6450,6 +6570,12 @@
githubId = 131856;
name = "Arnout Engelen";
};
RaghavSood = {
email = "r@raghavsood.com";
github = "RaghavSood";
githubId = 903072;
name = "Raghav Sood";
};
rafaelgg = {
email = "rafael.garcia.gallego@gmail.com";
github = "rafaelgg";
@ -7154,6 +7280,12 @@
githubId = 24496705;
name = "Scott Hamilton";
};
ShamrockLee = {
name = "Shamrock Lee";
email = "44064051+ShamrockLee@users.noreply.github.com";
github = "ShamrockLee";
githubId = 44064051;
};
shanemikel = {
email = "shanepearlman@pm.me";
github = "shanemikel";
@ -7266,6 +7398,16 @@
githubId = 2770647;
name = "Simon Vandel Sillesen";
};
siriobalmelli = {
email = "sirio@b-ad.ch";
github = "siriobalmelli";
githubId = 23038812;
name = "Sirio Balmelli";
keys = [{
longkeyid = "ed25519/0xF72C4A887F9A24CA";
fingerprint = "B234 EFD4 2B42 FE81 EE4D 7627 F72C 4A88 7F9A 24CA";
}];
};
sivteck = {
email = "sivaram1992@gmail.com";
github = "sivteck";
@ -7840,6 +7982,12 @@
githubId = 1141680;
name = "Thane Gill";
};
TheBrainScrambler = {
email = "esthromeris@riseup.net";
github = "TheBrainScrambler";
githubId = 34945377;
name = "John Smith";
};
thedavidmeister = {
email = "thedavidmeister@gmail.com";
github = "thedavidmeister";
@ -8058,6 +8206,12 @@
githubId = 1486805;
name = "Toon Nolten";
};
toschmidt = {
email = "tobias.schmidt@in.tum.de";
github = "toschmidt";
githubId = 27586264;
name = "Tobias Schmidt";
};
travisbhartwell = {
email = "nafai@travishartwell.net";
github = "travisbhartwell";
@ -8961,4 +9115,28 @@
github = "cpcloud";
githubId = 417981;
};
davegallant = {
name = "Dave Gallant";
email = "davegallant@gmail.com";
github = "davegallant";
githubId = 4519234;
};
saulecabrera = {
name = "Saúl Cabrera";
email = "saulecabrera@gmail.com";
github = "saulecabrera";
githubId = 1423601;
};
tfmoraes = {
name = "Thiago Franco de Moraes";
email = "351108+tfmoraes@users.noreply.github.com";
github = "tfmoraes";
githubId = 351108;
};
deifactor = {
name = "Ash Zahlen";
email = "ext0l@riseup.net";
github = "deifactor";
githubId = 30192992;
};
}


@ -2,7 +2,7 @@
# Download patches from debian project
# Usage $0 debian-patches.txt debian-patches.nix
- # An example input and output files can be found in applications/graphics/xara/
+ # Example input and output files can be found in tools/graphics/plotutils
DEB_URL=https://sources.debian.org/data/main
declare -a deb_patches


@ -74,6 +74,7 @@ moonscript,,,,,arobyn
nvim-client,,,,,
penlight,,,,,
rapidjson,,,,,
readline,,,,,
say,,,,,
std__debug,std._debug,,,,
std_normalize,std.normalize,,,,



@ -9,7 +9,13 @@
# TODO: add assert statements
let
- pkgs = import ./../../default.nix (if include-overlays then { } else { overlays = []; });
+ pkgs = import ./../../default.nix (
+   if include-overlays == false then
+     { overlays = []; }
+   else if include-overlays == true then
+     { } # Let Nixpkgs include overlays impurely.
+   else { overlays = include-overlays; }
+ );
inherit (pkgs) lib;
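# Usage sketch (assumed invocation; the overlay path is illustrative):
#   nix-shell maintainers/scripts/update.nix --arg include-overlays false
#   nix-shell maintainers/scripts/update.nix --arg include-overlays '[ (import ./my-overlay.nix) ]'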


@ -40,6 +40,7 @@ with lib.maintainers; {
cstrahan
Frostman
kalbasit
mdlayher
mic92
orivej
rvolosatovs
@ -53,6 +54,7 @@
hedning
jtojnar
worldofpeace
dasj19
];
scope = "Maintain GNOME desktop environment and platform.";
};


@ -18,6 +18,7 @@
<xi:include href="user-mgmt.xml" /> <xi:include href="user-mgmt.xml" />
<xi:include href="file-systems.xml" /> <xi:include href="file-systems.xml" />
<xi:include href="x-windows.xml" /> <xi:include href="x-windows.xml" />
<xi:include href="gpu-accel.xml" />
<xi:include href="xfce.xml" /> <xi:include href="xfce.xml" />
<xi:include href="networking.xml" /> <xi:include href="networking.xml" />
<xi:include href="linux-kernel.xml" /> <xi:include href="linux-kernel.xml" />


@ -0,0 +1,104 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-gpu-accel">
<title>GPU acceleration</title>
<para>
NixOS provides various APIs that benefit from GPU hardware
acceleration, such as VA-API and VDPAU for video playback; OpenGL and
Vulkan for 3D graphics; and OpenCL for general-purpose computing.
This chapter describes how to set up GPU hardware acceleration (as far
as this is not done automatically) and how to verify that hardware
acceleration is indeed used.
</para>
<para>
Most of the aforementioned APIs are agnostic with regards to which
display server is used. Consequently, these instructions should apply
both to the X Window System and Wayland compositors.
</para>
<section xml:id="sec-gpu-accel-opencl">
<title>OpenCL</title>
<para>
<link xlink:href="https://en.wikipedia.org/wiki/OpenCL">OpenCL</link> is a
general compute API. It is used by various applications such as
Blender and Darktable to accelerate certain operations.
</para>
<para>
OpenCL applications load drivers through the <emphasis>Installable Client
Driver</emphasis> (ICD) mechanism. In this mechanism, an ICD file
specifies the path to the OpenCL driver for a particular GPU family.
In NixOS, there are two ways to make ICD files visible to the ICD
loader. The first is through the <varname>OCL_ICD_VENDORS</varname>
environment variable. This variable can contain a directory which
is scanned by the ICD loader for ICD files. For example:
<screen><prompt>$</prompt> export \
OCL_ICD_VENDORS=`nix-build '&lt;nixpkgs&gt;' --no-out-link -A rocm-opencl-icd`/etc/OpenCL/vendors/</screen>
</para>
<para>
The second mechanism is to add the OpenCL driver package to
<xref linkend="opt-hardware.opengl.extraPackages"/>. This links the
ICD file under <filename>/run/opengl-driver</filename>, where it will
be visible to the ICD loader.
</para>
<para>
The proper installation of OpenCL drivers can be verified through
the <command>clinfo</command> command of the <package>clinfo</package>
package. This command will report the number of hardware devices
that are found and give detailed information for each device:
</para>
<screen><prompt>$</prompt> clinfo | head -n3
Number of platforms 1
Platform Name AMD Accelerated Parallel Processing
Platform Vendor Advanced Micro Devices, Inc.</screen>
<section xml:id="sec-gpu-accel-opencl-amd">
<title>AMD</title>
<para>
Modern AMD <link
xlink:href="https://en.wikipedia.org/wiki/Graphics_Core_Next">Graphics
Core Next</link> (GCN) GPUs are supported through the
<package>rocm-opencl-icd</package> package. Adding this package to
<xref linkend="opt-hardware.opengl.extraPackages"/> enables OpenCL
support. However, OpenCL Image support is provided through the
non-free <package>rocm-runtime-ext</package> package. This package can
be added to the same configuration option, but requires that
the <varname>allowUnfree</varname> option is enabled for nixpkgs. Full
OpenCL support on supported AMD GPUs is thus enabled as follows:
<programlisting><xref linkend="opt-hardware.opengl.extraPackages"/> = [
rocm-opencl-icd
rocm-runtime-ext
];</programlisting>
</para>
<para>
It is also possible to use the OpenCL Image extension without a
system-wide installation of the <package>rocm-runtime-ext</package>
package by setting the <varname>ROCR_EXT_DIR</varname> environment
variable to the directory that contains the extension:
<screen><prompt>$</prompt> export \
ROCR_EXT_DIR=`nix-build '&lt;nixpkgs&gt;' --no-out-link -A rocm-runtime-ext`/lib/rocm-runtime-ext</screen>
</para>
<para>
With either approach, you can verify that OpenCL Image support
is indeed working with the <command>clinfo</command> command:
<screen><prompt>$</prompt> clinfo | grep Image
Image support Yes</screen>
</para>
</section>
</section>
</chapter>
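As a rough illustration of this chapter, a configuration.nix fragment enabling full OpenCL support on a supported AMD GCN GPU might look as follows (a minimal sketch; it assumes the rocm-opencl-icd and rocm-runtime-ext packages named above and that unfree packages are acceptable):

  { pkgs, ... }:
  {
    # rocm-runtime-ext is unfree, so unfree packages must be allowed.
    nixpkgs.config.allowUnfree = true;
    hardware.opengl.enable = true;
    hardware.opengl.extraPackages = with pkgs; [
      rocm-opencl-icd
      rocm-runtime-ext
    ];
  }

After switching to such a configuration, clinfo should report the platform and Image support as shown above.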

View file

@ -9,7 +9,6 @@
This profile just enables a <systemitem class="username">demo</systemitem> This profile just enables a <systemitem class="username">demo</systemitem>
user, with password <literal>demo</literal>, uid <literal>1000</literal>, user, with password <literal>demo</literal>, uid <literal>1000</literal>,
<systemitem class="groupname">wheel</systemitem> group and <systemitem class="groupname">wheel</systemitem> group and
<link linkend="opt-services.xserver.displayManager.sddm.autoLogin"> autologin <link linkend="opt-services.xserver.displayManager.autoLogin">autologin in the SDDM display manager</link>.
in the SDDM display manager</link>.
</para> </para>
</section> </section>

View file

@ -90,10 +90,50 @@
using lightdm for a user <literal>alice</literal>: using lightdm for a user <literal>alice</literal>:
<programlisting> <programlisting>
<xref linkend="opt-services.xserver.displayManager.lightdm.enable"/> = true; <xref linkend="opt-services.xserver.displayManager.lightdm.enable"/> = true;
<xref linkend="opt-services.xserver.displayManager.lightdm.autoLogin.enable"/> = true; <xref linkend="opt-services.xserver.displayManager.autoLogin.enable"/> = true;
<xref linkend="opt-services.xserver.displayManager.lightdm.autoLogin.user"/> = "alice"; <xref linkend="opt-services.xserver.displayManager.autoLogin.user"/> = "alice";
</programlisting> </programlisting>
The options are named identically for all other display managers. </para>
</simplesect>
<simplesect xml:id="sec-x11--graphics-cards-intel">
<title>Intel Graphics drivers</title>
<para>
There are two choices for Intel Graphics drivers in X.org:
<literal>modesetting</literal> (included in the <package>xorg-server</package> itself)
and <literal>intel</literal> (provided by the package <package>xf86-video-intel</package>).
</para>
<para>
The default and recommended is <literal>modesetting</literal>.
It is a generic driver which uses the kernel
<link xlink:href="https://en.wikipedia.org/wiki/Mode_setting">mode setting</link>
(KMS) mechanism. It supports Glamor (2D graphics acceleration via OpenGL)
and is actively maintained, but it may perform worse in some cases (e.g. on older chipsets).
</para>
<para>
The second driver, <literal>intel</literal>, is specific to Intel GPUs,
but not recommended by most distributions: it lacks several modern features
(for example, it doesn't support Glamor) and the package hasn't been officially
updated since 2015.
</para>
<para>
The results vary depending on the hardware, so you may have to try both drivers.
Use the option <xref linkend="opt-services.xserver.videoDrivers"/> to set one.
The recommended configuration for modern systems is:
<programlisting>
<xref linkend="opt-services.xserver.videoDrivers"/> = [ "modesetting" ];
<xref linkend="opt-services.xserver.useGlamor"/> = true;
</programlisting>
If you experience screen tearing no matter what, this configuration was
reported to resolve the issue:
<programlisting>
<xref linkend="opt-services.xserver.videoDrivers"/> = [ "intel" ];
<xref linkend="opt-services.xserver.deviceSection"/> = ''
Option "DRI" "2"
Option "TearFree" "true"
'';
</programlisting>
Note that this will likely downgrade the performance compared to
<literal>modesetting</literal> or <literal>intel</literal> with DRI 3 (default).
</para> </para>
</simplesect> </simplesect>
<simplesect xml:id="sec-x11-graphics-cards-nvidia"> <simplesect xml:id="sec-x11-graphics-cards-nvidia">

View file

@ -38,7 +38,12 @@ starting VDE switch for network 1
</para> </para>
<para> <para>
The machine state is kept across VM restarts in You can re-use the VM states coming from a previous run
<filename>/tmp/vm-state-</filename><varname>machinename</varname>. by setting the <command>--keep-vm-state</command> flag.
<screen>
<prompt>$ </prompt>./result/bin/nixos-run-vms --keep-vm-state
</screen>
The machine state is stored in the
<filename>$TMPDIR/vm-state-</filename><varname>machinename</varname> directory.
</para> </para>
</section> </section>

View file

@ -360,6 +360,18 @@ start_all()
</note> </note>
</listitem> </listitem>
</varlistentry> </varlistentry>
<varlistentry>
<term>
<methodname>wait_for_console_text</methodname>
</term>
<listitem>
<para>
Wait until the supplied regular expressions match a line of the serial
console output. This method is useful when OCR is not possible or
accurate enough.
</para>
</listitem>
</varlistentry>
<varlistentry> <varlistentry>
<term> <term>
<methodname>wait_for_window</methodname> <methodname>wait_for_window</methodname>
@ -378,7 +390,7 @@ start_all()
<listitem> <listitem>
<para> <para>
Copies a file from host to machine, e.g., Copies a file from host to machine, e.g.,
<literal>copy_file_from_host("myfile", "/etc/my/important/file")</literal>. <literal>copy_from_host("myfile", "/etc/my/important/file")</literal>.
</para> </para>
<para> <para>
The first argument is the file on the host. The file needs to be The first argument is the file on the host. The file needs to be
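A minimal sketch of a test that uses the new wait_for_console_text method (assuming the usual make-test-python.nix entry point under nixos/tests and an otherwise empty machine):

  import ./make-test-python.nix ({ pkgs, ... }: {
    name = "console-text-example";
    machine = { ... }: { };
    testScript = ''
      machine.start()
      # Block until the login prompt shows up on the serial console.
      machine.wait_for_console_text("login: ")
    '';
  })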

View file

@ -49,7 +49,7 @@
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
Click on Settings / Display / Screen and select VBoxVGA as Graphics Controller Click on Settings / Display / Screen and select VMSVGA as Graphics Controller
</para> </para>
</listitem> </listitem>
<listitem> <listitem>

View file

@ -146,7 +146,7 @@
partition. It uses the initially reserved 512MiB at the start of the partition. It uses the initially reserved 512MiB at the start of the
disk. disk.
<screen language="commands"><prompt># </prompt>parted /dev/sda -- mkpart ESP fat32 1MiB 512MiB <screen language="commands"><prompt># </prompt>parted /dev/sda -- mkpart ESP fat32 1MiB 512MiB
<prompt># </prompt>parted /dev/sda -- set 3 boot on</screen> <prompt># </prompt>parted /dev/sda -- set 3 esp on</screen>
</para> </para>
</listitem> </listitem>
</orderedlist> </orderedlist>
@ -513,7 +513,7 @@ Retype new UNIX password: ***</screen>
<prompt># </prompt>parted /dev/sda -- mkpart primary 512MiB -8GiB <prompt># </prompt>parted /dev/sda -- mkpart primary 512MiB -8GiB
<prompt># </prompt>parted /dev/sda -- mkpart primary linux-swap -8GiB 100% <prompt># </prompt>parted /dev/sda -- mkpart primary linux-swap -8GiB 100%
<prompt># </prompt>parted /dev/sda -- mkpart ESP fat32 1MiB 512MiB <prompt># </prompt>parted /dev/sda -- mkpart ESP fat32 1MiB 512MiB
<prompt># </prompt>parted /dev/sda -- set 3 boot on</screen> <prompt># </prompt>parted /dev/sda -- set 3 esp on</screen>
</example> </example>
<example xml:id="ex-install-sequence"> <example xml:id="ex-install-sequence">

View file

@ -315,7 +315,7 @@
switch</command>), because the hardware and boot loader configuration in switch</command>), because the hardware and boot loader configuration in
the VM are different. The boot loader is installed on an automatically the VM are different. The boot loader is installed on an automatically
generated virtual disk containing a <filename>/boot</filename> generated virtual disk containing a <filename>/boot</filename>
partition, which is mounted read-only in the VM. partition.
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>

View file

@ -792,7 +792,7 @@ users.users.me =
The <option>services.xserver.displayManager.auto</option> module has been removed. The <option>services.xserver.displayManager.auto</option> module has been removed.
It was only intended for use in internal NixOS tests, and gave the false impression It was only intended for use in internal NixOS tests, and gave the false impression
of it being a special display manager when it's actually LightDM. of it being a special display manager when it's actually LightDM.
Please use the <xref linkend="opt-services.xserver.displayManager.lightdm.autoLogin"/> options instead, Please use the <option>services.xserver.displayManager.lightdm.autoLogin</option> options instead,
or any other display manager in NixOS as they all support auto-login. If you used this module specifically or any other display manager in NixOS as they all support auto-login. If you used this module specifically
because it permitted root auto-login you can override the lightdm-autologin pam module like: because it permitted root auto-login you can override the lightdm-autologin pam module like:
<programlisting> <programlisting>

View file

@ -110,6 +110,26 @@ systemd.services.mysql.serviceConfig.ReadWritePaths = [ "/var/data" ];
</programlisting> </programlisting>
</para> </para>
</listitem> </listitem>
<listitem>
<para>
A new option <link linkend="opt-documentation.man.generateCaches">documentation.man.generateCaches</link>
has been added to automatically generate the <literal>man-db</literal> caches, which are needed by utilities
like <command>whatis</command> and <command>apropos</command>. The caches are generated during the build of
the NixOS configuration: since this can be expensive when a large number of packages are installed, the
feature is disabled by default.
</para>
</listitem>
<listitem>
<para>
<varname>services.postfix.sslCACert</varname> was replaced by <varname>services.postfix.tlsTrustedAuthorities</varname> which now defaults to the system certificate authorities.
</para>
</listitem>
<listitem>
<para>
Subordinate GID and UID mappings are now set up automatically for all normal users.
This will make container tools like Podman work as non-root users out of the box.
</para>
</listitem>
</itemizedlist> </itemizedlist>
</section> </section>
@ -495,6 +515,22 @@ systemd.services.nginx.serviceConfig.ReadWritePaths = [ "/var/www" ];
In the <literal>resilio</literal> module, <xref linkend="opt-services.resilio.httpListenAddr"/> has been changed to listen to <literal>[::1]</literal> instead of <literal>0.0.0.0</literal>. In the <literal>resilio</literal> module, <xref linkend="opt-services.resilio.httpListenAddr"/> has been changed to listen to <literal>[::1]</literal> instead of <literal>0.0.0.0</literal>.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
Radicale's default package has changed from 2.x to 3.x. An upgrade
checklist can be found
<link xlink:href="https://github.com/Kozea/Radicale/blob/3.0.x/NEWS.md#upgrade-checklist">here</link>.
You can use the newer version in the NixOS service by setting the
<literal>package</literal> to <literal>radicale3</literal>, which is done
automatically if <literal>stateVersion</literal> is 20.09 or higher.
</para>
</listitem>
<listitem>
<para>
We now have a unified <xref linkend="opt-services.xserver.displayManager.autoLogin"/> option interface
to be used for every display-manager in NixOS.
</para>
</listitem>
</itemizedlist> </itemizedlist>
</section> </section>
@ -656,6 +692,18 @@ systemd.services.nginx.serviceConfig.ReadWritePaths = [ "/var/www" ];
<package>nextcloud18</package> before upgrading to <package>nextcloud19</package> <package>nextcloud18</package> before upgrading to <package>nextcloud19</package>
since Nextcloud doesn't support upgrades across multiple major versions. since Nextcloud doesn't support upgrades across multiple major versions.
</para> </para>
<para>
The <literal>nixos-run-vms</literal> script now deletes the
machine states from previous runs on test startup. You can use the
<literal>--keep-vm-state</literal> flag to match the previous
behaviour and keep the same VM state between different test runs.
</para>
</listitem>
<listitem>
<para>
The <link linkend="opt-nix.buildMachines">nix.buildMachines</link> option is now type-checked.
There are no functional changes; however, this may require updating some configurations to use the correct types for all attributes. A typed example is sketched after this section.
</para>
</listitem> </listitem>
</itemizedlist> </itemizedlist>
</section> </section>
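A sketch of a nix.buildMachines entry that passes the new type checks (the builder's host name, SSH user and key path are hypothetical):

  { ... }:
  {
    nix.distributedBuilds = true;
    nix.buildMachines = [
      {
        hostName = "builder.example.org";
        sshUser = "builder";
        sshKey = "/root/.ssh/id_builder";
        system = "x86_64-linux";
        maxJobs = 4;
        speedFactor = 2;
        supportedFeatures = [ "big-parallel" ];
      }
    ];
  }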

View file

@ -181,6 +181,7 @@ let format' = format; in let
export NIX_STATE_DIR=$TMPDIR/state export NIX_STATE_DIR=$TMPDIR/state
nix-store --load-db < ${closureInfo}/registration nix-store --load-db < ${closureInfo}/registration
chmod 755 "$TMPDIR"
echo "running nixos-install..." echo "running nixos-install..."
nixos-install --root $root --no-bootloader --no-root-passwd \ nixos-install --root $root --no-bootloader --no-root-passwd \
--system ${config.system.build.toplevel} --channel ${channelSources} --substituters "" --system ${config.system.build.toplevel} --channel ${channelSources} --substituters ""

View file

@ -17,7 +17,7 @@
, e2fsprogs , e2fsprogs
, libfaketime , libfaketime
, perl , perl
, lkl , fakeroot
}: }:
let let
@ -26,7 +26,7 @@ in
pkgs.stdenv.mkDerivation { pkgs.stdenv.mkDerivation {
name = "ext4-fs.img${lib.optionalString compressImage ".zst"}"; name = "ext4-fs.img${lib.optionalString compressImage ".zst"}";
nativeBuildInputs = [ e2fsprogs.bin libfaketime perl lkl ] nativeBuildInputs = [ e2fsprogs.bin libfaketime perl fakeroot ]
++ lib.optional compressImage zstd; ++ lib.optional compressImage zstd;
buildCommand = buildCommand =
@ -37,32 +37,31 @@ pkgs.stdenv.mkDerivation {
${populateImageCommands} ${populateImageCommands}
) )
# Add the closures of the top-level store objects. echo "Preparing store paths for image..."
storePaths=$(cat ${sdClosureInfo}/store-paths)
# Create nix/store before copying path
mkdir -p ./rootImage/nix/store
xargs -I % cp -a --reflink=auto % -t ./rootImage/nix/store/ < ${sdClosureInfo}/store-paths
(
GLOBIGNORE=".:.."
shopt -u dotglob
cp -a --reflink=auto ./files/* -t ./rootImage/
)
# Also include a manifest of the closures in a format suitable for nix-store --load-db
cp ${sdClosureInfo}/registration ./rootImage/nix-path-registration
# Make a crude approximation of the size of the target image. # Make a crude approximation of the size of the target image.
# If the script starts failing, increase the fudge factors here. # If the script starts failing, increase the fudge factors here.
numInodes=$(find $storePaths ./files | wc -l) numInodes=$(find ./rootImage | wc -l)
numDataBlocks=$(du -s -c -B 4096 --apparent-size $storePaths ./files | tail -1 | awk '{ print int($1 * 1.10) }') numDataBlocks=$(du -s -c -B 4096 --apparent-size ./rootImage | tail -1 | awk '{ print int($1 * 1.10) }')
bytes=$((2 * 4096 * $numInodes + 4096 * $numDataBlocks)) bytes=$((2 * 4096 * $numInodes + 4096 * $numDataBlocks))
echo "Creating an EXT4 image of $bytes bytes (numInodes=$numInodes, numDataBlocks=$numDataBlocks)" echo "Creating an EXT4 image of $bytes bytes (numInodes=$numInodes, numDataBlocks=$numDataBlocks)"
truncate -s $bytes $img truncate -s $bytes $img
faketime -f "1970-01-01 00:00:01" mkfs.ext4 -L ${volumeLabel} -U ${uuid} $img
# Also include a manifest of the closures in a format suitable for nix-store --load-db. faketime -f "1970-01-01 00:00:01" fakeroot mkfs.ext4 -L ${volumeLabel} -U ${uuid} -d ./rootImage $img
cp ${sdClosureInfo}/registration nix-path-registration
cptofs -t ext4 -i $img nix-path-registration /
# Create nix/store before copying paths
faketime -f "1970-01-01 00:00:01" mkdir -p nix/store
cptofs -t ext4 -i $img nix /
echo "copying store paths to image..."
cptofs -t ext4 -i $img $storePaths /nix/store/
echo "copying files to image..."
cptofs -t ext4 -i $img ./files/* /
export EXT2FS_NO_MTAB_OK=yes export EXT2FS_NO_MTAB_OK=yes
# I have ended up with corrupted images sometimes, I suspect that happens when the build machine's disk gets full during the build. # I have ended up with corrupted images sometimes, I suspect that happens when the build machine's disk gets full during the build.

View file

@ -3,7 +3,10 @@ from contextlib import contextmanager, _GeneratorContextManager
from queue import Queue, Empty from queue import Queue, Empty
from typing import Tuple, Any, Callable, Dict, Iterator, Optional, List from typing import Tuple, Any, Callable, Dict, Iterator, Optional, List
from xml.sax.saxutils import XMLGenerator from xml.sax.saxutils import XMLGenerator
import queue
import io
import _thread import _thread
import argparse
import atexit import atexit
import base64 import base64
import codecs import codecs
@ -19,6 +22,7 @@ import subprocess
import sys import sys
import tempfile import tempfile
import time import time
import traceback
import unicodedata import unicodedata
CHAR_TO_KEY = { CHAR_TO_KEY = {
@ -671,6 +675,22 @@ class Machine:
with self.nested("waiting for {} to appear on screen".format(regex)): with self.nested("waiting for {} to appear on screen".format(regex)):
retry(screen_matches) retry(screen_matches)
def wait_for_console_text(self, regex: str) -> None:
self.log("waiting for {} to appear on console".format(regex))
# Buffer the console output, this is needed
# to match multiline regexes.
console = io.StringIO()
while True:
try:
console.write(self.last_lines.get())
except queue.Empty:
self.sleep(1)
continue
console.seek(0)
matches = re.search(regex, console.read())
if matches is not None:
return
def send_key(self, key: str) -> None: def send_key(self, key: str) -> None:
key = CHAR_TO_KEY.get(key, key) key = CHAR_TO_KEY.get(key, key)
self.send_monitor_command("sendkey {}".format(key)) self.send_monitor_command("sendkey {}".format(key))
@ -734,11 +754,16 @@ class Machine:
self.monitor, _ = self.monitor_socket.accept() self.monitor, _ = self.monitor_socket.accept()
self.shell, _ = self.shell_socket.accept() self.shell, _ = self.shell_socket.accept()
# Store last serial console lines for use
# of wait_for_console_text
self.last_lines: Queue = Queue()
def process_serial_output() -> None: def process_serial_output() -> None:
assert self.process.stdout is not None assert self.process.stdout is not None
for _line in self.process.stdout: for _line in self.process.stdout:
# Ignore undecodable bytes that may occur in boot menus # Ignore undecodable bytes that may occur in boot menus
line = _line.decode(errors="ignore").replace("\r", "").rstrip() line = _line.decode(errors="ignore").replace("\r", "").rstrip()
self.last_lines.put(line)
eprint("{} # {}".format(self.name, line)) eprint("{} # {}".format(self.name, line))
self.logger.enqueue({"msg": line, "machine": self.name}) self.logger.enqueue({"msg": line, "machine": self.name})
@ -751,6 +776,11 @@ class Machine:
self.log("QEMU running (pid {})".format(self.pid)) self.log("QEMU running (pid {})".format(self.pid))
def cleanup_statedir(self) -> None:
self.log("delete the VM state directory")
if os.path.isdir(self.state_dir):
shutil.rmtree(self.state_dir)
def shutdown(self) -> None: def shutdown(self) -> None:
if not self.booted: if not self.booted:
return return
@ -863,7 +893,8 @@ def run_tests() -> None:
try: try:
exec(tests, globals()) exec(tests, globals())
except Exception as e: except Exception as e:
eprint("error: {}".format(str(e))) eprint("error: ")
traceback.print_exc()
sys.exit(1) sys.exit(1)
else: else:
ptpython.repl.embed(locals(), globals()) ptpython.repl.embed(locals(), globals())
@ -889,6 +920,15 @@ def subtest(name: str) -> Iterator[None]:
if __name__ == "__main__": if __name__ == "__main__":
arg_parser = argparse.ArgumentParser()
arg_parser.add_argument(
"-K",
"--keep-vm-state",
help="re-use a VM state coming from a previous run",
action="store_true",
)
(cli_args, vm_scripts) = arg_parser.parse_known_args()
log = Logger() log = Logger()
vlan_nrs = list(dict.fromkeys(os.environ.get("VLANS", "").split())) vlan_nrs = list(dict.fromkeys(os.environ.get("VLANS", "").split()))
@ -896,8 +936,10 @@ if __name__ == "__main__":
for nr, vde_socket, _, _ in vde_sockets: for nr, vde_socket, _, _ in vde_sockets:
os.environ["QEMU_VDE_SOCKET_{}".format(nr)] = vde_socket os.environ["QEMU_VDE_SOCKET_{}".format(nr)] = vde_socket
vm_scripts = sys.argv[1:]
machines = [create_machine({"startCommand": s}) for s in vm_scripts] machines = [create_machine({"startCommand": s}) for s in vm_scripts]
for machine in machines:
if not cli_args.keep_vm_state:
machine.cleanup_statedir()
machine_eval = [ machine_eval = [
"{0} = machines[{1}]".format(m.name, idx) for idx, m in enumerate(machines) "{0} = machines[{1}]".format(m.name, idx) for idx, m in enumerate(machines)
] ]
@ -911,7 +953,6 @@ if __name__ == "__main__":
continue continue
log.log("killing {} (pid {})".format(machine.name, machine.pid)) log.log("killing {} (pid {})".format(machine.name, machine.pid))
machine.process.kill() machine.process.kill()
for _, _, process, _ in vde_sockets: for _, _, process, _ in vde_sockets:
process.terminate() process.terminate()
log.close() log.close()

View file

@ -114,7 +114,7 @@ rec {
imagemagick_tiff = imagemagick_light.override { inherit libtiff; }; imagemagick_tiff = imagemagick_light.override { inherit libtiff; };
# Generate onvenience wrappers for running the test driver # Generate convenience wrappers for running the test driver
# interactively with the specified network, and for starting the # interactively with the specified network, and for starting the
# VMs from the command line. # VMs from the command line.
driver = let warn = if skipLint then lib.warn "Linting is disabled!" else lib.id; in warn (runCommand testDriverName driver = let warn = if skipLint then lib.warn "Linting is disabled!" else lib.id; in warn (runCommand testDriverName

View file

@ -281,3 +281,58 @@ foreach my $u (values %usersOut) {
} }
updateFile("/etc/shadow", \@shadowNew, 0600); updateFile("/etc/shadow", \@shadowNew, 0600);
# Rewrite /etc/subuid & /etc/subgid to include default container mappings
my $subUidMapFile = "/var/lib/nixos/auto-subuid-map";
my $subUidMap = -e $subUidMapFile ? decode_json(read_file($subUidMapFile)) : {};
my (%subUidsUsed, %subUidsPrevUsed);
$subUidsPrevUsed{$_} = 1 foreach values %{$subUidMap};
sub allocSubUid {
my ($name, @rest) = @_;
# TODO: No upper bounds?
my ($min, $max, $up) = (100000, 100000 * 100, 1);
my $prevId = $subUidMap->{$name};
if (defined $prevId && !defined $subUidsUsed{$prevId}) {
$subUidsUsed{$prevId} = 1;
return $prevId;
}
my $id = allocId(\%subUidsUsed, \%subUidsPrevUsed, $min, $max, $up, sub { my ($uid) = @_; getpwuid($uid) });
my $offset = $id - 100000;
my $count = $offset * 65536;
my $subordinate = 100000 + $count;
return $subordinate;
}
my @subGids;
my @subUids;
foreach my $u (values %usersOut) {
my $name = $u->{name};
foreach my $range (@{$u->{subUidRanges}}) {
my $value = join(":", ($name, $range->{startUid}, $range->{count}));
push @subUids, $value;
}
foreach my $range (@{$u->{subGidRanges}}) {
my $value = join(":", ($name, $range->{startGid}, $range->{count}));
push @subGids, $value;
}
if($u->{isNormalUser}) {
my $subordinate = allocSubUid($name);
$subUidMap->{$name} = $subordinate;
my $value = join(":", ($name, $subordinate, 65536));
push @subUids, $value;
push @subGids, $value;
}
}
updateFile("/etc/subuid", join("\n", @subUids) . "\n");
updateFile("/etc/subgid", join("\n", @subGids) . "\n");
updateFile($subUidMapFile, encode_json($subUidMap) . "\n");
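For illustration of the allocation above: the first id returned by allocId is 100000, giving an offset of 0 and a subordinate range starting at 100000 + 0 * 65536 = 100000; the next id, 100001, yields 100000 + 1 * 65536 = 165536, and so on. Each normal user therefore receives a non-overlapping block of 65536 subordinate UIDs and GIDs, which is what rootless container tools such as Podman expect.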

View file

@ -6,6 +6,16 @@ let
ids = config.ids; ids = config.ids;
cfg = config.users; cfg = config.users;
# Check whether a password hash will allow login.
allowsLogin = hash:
hash == "" # login without password
|| !(lib.elem hash
[ null # password login disabled
"!" # password login disabled
"!!" # a variant of "!"
"*" # password unset
]);
passwordDescription = '' passwordDescription = ''
The options <option>hashedPassword</option>, The options <option>hashedPassword</option>,
<option>password</option> and <option>passwordFile</option> <option>password</option> and <option>passwordFile</option>
@ -25,8 +35,19 @@ let
''; '';
hashedPasswordDescription = '' hashedPasswordDescription = ''
To generate hashed password install <literal>mkpasswd</literal> To generate a hashed password install the <literal>mkpasswd</literal>
package and run <literal>mkpasswd -m sha-512</literal>. package and run <literal>mkpasswd -m sha-512</literal>.
If set to an empty string (<literal>""</literal>), this user will
be able to log in without being asked for a password (but not via remote
services such as SSH, or indirectly via <command>su</command> or
<command>sudo</command>). This should only be used for e.g. bootable
live systems. Note: this is different from setting an empty password,
which can be achieved using <option>users.users.&lt;name?&gt;.password</option>.
If set to <literal>null</literal> (default) this user will not
be able to log in using a password (i.e. via <command>login</command>
command).
''; '';
userOpts = { name, config, ... }: { userOpts = { name, config, ... }: {
@ -354,18 +375,6 @@ let
}; };
}; };
mkSubuidEntry = user: concatStrings (
map (range: "${user.name}:${toString range.startUid}:${toString range.count}\n")
user.subUidRanges);
subuidFile = concatStrings (map mkSubuidEntry (attrValues cfg.users));
mkSubgidEntry = user: concatStrings (
map (range: "${user.name}:${toString range.startGid}:${toString range.count}\n")
user.subGidRanges);
subgidFile = concatStrings (map mkSubgidEntry (attrValues cfg.users));
idsAreUnique = set: idAttr: !(fold (name: args@{ dup, acc }: idsAreUnique = set: idAttr: !(fold (name: args@{ dup, acc }:
let let
id = builtins.toString (builtins.getAttr idAttr (builtins.getAttr name set)); id = builtins.toString (builtins.getAttr idAttr (builtins.getAttr name set));
@ -385,6 +394,7 @@ let
{ inherit (u) { inherit (u)
name uid group description home createHome isSystemUser name uid group description home createHome isSystemUser
password passwordFile hashedPassword password passwordFile hashedPassword
isNormalUser subUidRanges subGidRanges
initialPassword initialHashedPassword; initialPassword initialHashedPassword;
shell = utils.toShellPath u.shell; shell = utils.toShellPath u.shell;
}) cfg.users; }) cfg.users;
@ -406,6 +416,12 @@ in {
imports = [ imports = [
(mkAliasOptionModule [ "users" "extraUsers" ] [ "users" "users" ]) (mkAliasOptionModule [ "users" "extraUsers" ] [ "users" "users" ])
(mkAliasOptionModule [ "users" "extraGroups" ] [ "users" "groups" ]) (mkAliasOptionModule [ "users" "extraGroups" ] [ "users" "groups" ])
(mkChangedOptionModule
[ "security" "initialRootPassword" ]
[ "users" "users" "root" "initialHashedPassword" ]
(cfg: if cfg.security.initialRootPassword == "!"
then null
else cfg.security.initialRootPassword))
]; ];
###### interface ###### interface
@ -477,14 +493,6 @@ in {
''; '';
}; };
# FIXME: obsolete - will remove.
security.initialRootPassword = mkOption {
type = types.str;
default = "!";
example = "";
visible = false;
};
}; };
@ -499,7 +507,6 @@ in {
home = "/root"; home = "/root";
shell = mkDefault cfg.defaultUserShell; shell = mkDefault cfg.defaultUserShell;
group = "root"; group = "root";
initialHashedPassword = mkDefault config.security.initialRootPassword;
}; };
nobody = { nobody = {
uid = ids.uids.nobody; uid = ids.uids.nobody;
@ -549,16 +556,7 @@ in {
# Install all the user shells # Install all the user shells
environment.systemPackages = systemShells; environment.systemPackages = systemShells;
environment.etc = { environment.etc = (mapAttrs' (name: { packages, ... }: {
subuid = {
text = subuidFile;
mode = "0644";
};
subgid = {
text = subgidFile;
mode = "0644";
};
} // (mapAttrs' (name: { packages, ... }: {
name = "profiles/per-user/${name}"; name = "profiles/per-user/${name}";
value.source = pkgs.buildEnv { value.source = pkgs.buildEnv {
name = "user-environment"; name = "user-environment";
@ -588,7 +586,7 @@ in {
|| cfg.group == "wheel" || cfg.group == "wheel"
|| elem "wheel" cfg.extraGroups) || elem "wheel" cfg.extraGroups)
&& &&
((cfg.hashedPassword != null && cfg.hashedPassword != "!") (allowsLogin cfg.hashedPassword
|| cfg.password != null || cfg.password != null
|| cfg.passwordFile != null || cfg.passwordFile != null
|| cfg.openssh.authorizedKeys.keys != [] || cfg.openssh.authorizedKeys.keys != []
@ -598,7 +596,17 @@ in {
Neither the root account nor any wheel user has a password or SSH authorized key. Neither the root account nor any wheel user has a password or SSH authorized key.
You must set one to prevent being locked out of your system.''; You must set one to prevent being locked out of your system.'';
} }
]; ] ++ flip mapAttrsToList cfg.users (name: user:
{
assertion = (user.hashedPassword != null)
-> (builtins.match ".*:.*" user.hashedPassword == null);
message = ''
The password hash of user "${name}" contains a ":" character.
This is invalid and would break the login system because the fields
of /etc/shadow (file where hashes are stored) are colon-separated.
Please check the value of option `users.users."${name}".hashedPassword`.'';
}
);
warnings = warnings =
builtins.filter (x: x != null) ( builtins.filter (x: x != null) (
@ -621,14 +629,13 @@ in {
content = "${base64}${sep}${base64}"; content = "${base64}${sep}${base64}";
mcf = "^${sep}${scheme}${sep}${content}$"; mcf = "^${sep}${scheme}${sep}${content}$";
in in
if (user.hashedPassword != null if (allowsLogin user.hashedPassword
&& user.hashedPassword != "" # login without password
&& builtins.match mcf user.hashedPassword == null) && builtins.match mcf user.hashedPassword == null)
then then ''
''
The password hash of user "${name}" may be invalid. You must set a The password hash of user "${name}" may be invalid. You must set a
valid hash or the user will be locked out of his account. Please valid hash or the user will be locked out of their account. Please
check the value of option `users.users."${name}".hashedPassword`. check the value of option `users.users."${name}".hashedPassword`.''
''
else null else null
)); ));
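Tying the description and assertions above together, a sketch of a user entry with a hashed password (the hash below is a placeholder, not a real value):

  { ... }:
  {
    users.users.alice = {
      isNormalUser = true;
      extraGroups = [ "wheel" ];
      # Generate the real value with: mkpasswd -m sha-512
      hashedPassword = "$6$placeholder";
    };
  }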

View file

@ -22,11 +22,22 @@ in {
example = literalExample "pkgs.device-tree_rpi"; example = literalExample "pkgs.device-tree_rpi";
type = types.path; type = types.path;
description = '' description = ''
The package containing the base device-tree (.dtb) to boot. Contains The path containing the base device-tree (.dtb) to boot. Contains
device trees bundled with the Linux kernel by default. device trees bundled with the Linux kernel by default.
''; '';
}; };
name = mkOption {
default = null;
example = "some-dtb.dtb";
type = types.nullOr types.str;
description = ''
The name of an explicit dtb to be loaded, relative to the dtb base.
Useful in extlinux scenarios if the bootloader doesn't pick the
right .dtb file from FDTDIR.
'';
};
overlays = mkOption { overlays = mkOption {
default = []; default = [];
example = literalExample example = literalExample
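A sketch of the new option in use (assuming the option set lives under hardware.deviceTree as in this module; the .dtb name is hypothetical):

  { ... }:
  {
    hardware.deviceTree = {
      enable = true;
      # Force a specific dtb, relative to the dtb base, when the bootloader
      # does not pick the right file from FDTDIR.
      name = "broadcom/bcm2837-rpi-3-b.dtb";
    };
  }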

View file

@ -64,7 +64,7 @@ in
# Without dconf enabled it is impossible to use IBus # Without dconf enabled it is impossible to use IBus
programs.dconf.enable = true; programs.dconf.enable = true;
programs.dconf.profiles.ibus = "${ibusPackage}/etc/dconf/profile/ibus"; programs.dconf.packages = [ ibusPackage ];
services.dbus.packages = [ services.dbus.packages = [
ibusAutostart ibusAutostart

View file

@ -11,15 +11,17 @@ with lib;
services.xserver.desktopManager.gnome3.enable = true; services.xserver.desktopManager.gnome3.enable = true;
services.xserver.displayManager.gdm = { services.xserver.displayManager = {
enable = true; gdm = {
# autoSuspend makes the machine automatically suspend after inactivity. enable = true;
# It's possible someone could/try to ssh'd into the machine and obviously # autoSuspend makes the machine automatically suspend after inactivity.
# have issues because it's inactive. # It's possible someone could/try to ssh'd into the machine and obviously
# See: # have issues because it's inactive.
# * https://github.com/NixOS/nixpkgs/pull/63790 # See:
# * https://gitlab.gnome.org/GNOME/gnome-control-center/issues/22 # * https://github.com/NixOS/nixpkgs/pull/63790
autoSuspend = false; # * https://gitlab.gnome.org/GNOME/gnome-control-center/issues/22
autoSuspend = false;
};
autoLogin = { autoLogin = {
enable = true; enable = true;
user = "nixos"; user = "nixos";

View file

@ -16,8 +16,8 @@ with lib;
}; };
# Automatically login as nixos. # Automatically login as nixos.
displayManager.sddm = { displayManager = {
enable = true; sddm.enable = true;
autoLogin = { autoLogin = {
enable = true; enable = true;
user = "nixos"; user = "nixos";

View file

@ -2,12 +2,6 @@
# nix-build nixos -I nixos-config=nixos/modules/installer/cd-dvd/sd-image-aarch64.nix -A config.system.build.sdImage # nix-build nixos -I nixos-config=nixos/modules/installer/cd-dvd/sd-image-aarch64.nix -A config.system.build.sdImage
{ config, lib, pkgs, ... }: { config, lib, pkgs, ... }:
let
extlinux-conf-builder =
import ../../system/boot/loader/generic-extlinux-compatible/extlinux-conf-builder.nix {
pkgs = pkgs.buildPackages;
};
in
{ {
imports = [ imports = [
../../profiles/base.nix ../../profiles/base.nix
@ -56,7 +50,7 @@ in
''; '';
populateRootCommands = '' populateRootCommands = ''
mkdir -p ./files/boot mkdir -p ./files/boot
${extlinux-conf-builder} -t 3 -c ${config.system.build.toplevel} -d ./files/boot ${config.boot.loader.generic-extlinux-compatible.populateCmd} -c ${config.system.build.toplevel} -d ./files/boot
''; '';
}; };

View file

@ -2,12 +2,6 @@
# nix-build nixos -I nixos-config=nixos/modules/installer/cd-dvd/sd-image-armv7l-multiplatform.nix -A config.system.build.sdImage # nix-build nixos -I nixos-config=nixos/modules/installer/cd-dvd/sd-image-armv7l-multiplatform.nix -A config.system.build.sdImage
{ config, lib, pkgs, ... }: { config, lib, pkgs, ... }:
let
extlinux-conf-builder =
import ../../system/boot/loader/generic-extlinux-compatible/extlinux-conf-builder.nix {
pkgs = pkgs.buildPackages;
};
in
{ {
imports = [ imports = [
../../profiles/base.nix ../../profiles/base.nix
@ -53,7 +47,7 @@ in
''; '';
populateRootCommands = '' populateRootCommands = ''
mkdir -p ./files/boot mkdir -p ./files/boot
${extlinux-conf-builder} -t 3 -c ${config.system.build.toplevel} -d ./files/boot ${config.boot.loader.generic-extlinux-compatible.populateCmd} -c ${config.system.build.toplevel} -d ./files/boot
''; '';
}; };

View file

@ -2,12 +2,6 @@
# nix-build nixos -I nixos-config=nixos/modules/installer/cd-dvd/sd-image-raspberrypi.nix -A config.system.build.sdImage # nix-build nixos -I nixos-config=nixos/modules/installer/cd-dvd/sd-image-raspberrypi.nix -A config.system.build.sdImage
{ config, lib, pkgs, ... }: { config, lib, pkgs, ... }:
let
extlinux-conf-builder =
import ../../system/boot/loader/generic-extlinux-compatible/extlinux-conf-builder.nix {
pkgs = pkgs.buildPackages;
};
in
{ {
imports = [ imports = [
../../profiles/base.nix ../../profiles/base.nix
@ -42,7 +36,7 @@ in
''; '';
populateRootCommands = '' populateRootCommands = ''
mkdir -p ./files/boot mkdir -p ./files/boot
${extlinux-conf-builder} -t 3 -c ${config.system.build.toplevel} -d ./files/boot ${config.boot.loader.generic-extlinux-compatible.populateCmd} -c ${config.system.build.toplevel} -d ./files/boot
''; '';
}; };

View file

@ -99,7 +99,7 @@ in
}; };
populateRootCommands = mkOption { populateRootCommands = mkOption {
example = literalExample "''\${extlinux-conf-builder} -t 3 -c \${config.system.build.toplevel} -d ./files/boot''"; example = literalExample "''\${config.boot.loader.generic-extlinux-compatible.populateCmd} -c \${config.system.build.toplevel} -d ./files/boot''";
description = '' description = ''
Shell commands to populate the ./files directory. Shell commands to populate the ./files directory.
All files in that directory are copied to the All files in that directory are copied to the

View file

@ -1,6 +1,6 @@
{ {
x86_64-linux = "/nix/store/j8dbv5w6jl34caywh2ygdy88knx1mdf7-nix-2.3.6"; x86_64-linux = "/nix/store/4vz8sh9ngx34ivi0bw5hlycxdhvy5hvz-nix-2.3.7";
i686-linux = "/nix/store/9fqvbdisahqp0238vrs7wn5anpri0a65-nix-2.3.6"; i686-linux = "/nix/store/dzxkg9lpp60bjmzvagns42vqlz3yq5kx-nix-2.3.7";
aarch64-linux = "/nix/store/72pwn0nm9bjqx9vpi8sgh4bl6g5wh814-nix-2.3.6"; aarch64-linux = "/nix/store/cfvf8nl8mwyw817by5y8zd3s8pnf5m9f-nix-2.3.7";
x86_64-darwin = "/nix/store/g37vk77m90p5zcl5nixjlzp3vqpisfn5-nix-2.3.6"; x86_64-darwin = "/nix/store/5ira7xgs92inqz1x8l0n1wci4r79hnd0-nix-2.3.7";
} }

View file

@ -628,6 +628,7 @@ EOF
write_file($fn, <<EOF); write_file($fn, <<EOF);
@configuration@ @configuration@
EOF EOF
print STDERR "For more hardware-specific settings, see https://github.com/NixOS/nixos-hardware\n";
} else { } else {
print STDERR "warning: not overwriting existing $fn\n"; print STDERR "warning: not overwriting existing $fn\n";
} }

View file

@ -71,6 +71,17 @@ if ! test -e "$mountPoint"; then
exit 1 exit 1
fi fi
# Verify permissions are okay-enough
checkPath="$(realpath "$mountPoint")"
while [[ "$checkPath" != "/" ]]; do
mode="$(stat -c '%a' "$checkPath")"
if [[ "${mode: -1}" -lt "5" ]]; then
echo "path $checkPath should have permissions 755, but had permissions $mode. Consider running 'chmod o+rx $checkPath'."
exit 1
fi
checkPath="$(dirname "$checkPath")"
done
# Get the path of the NixOS configuration file. # Get the path of the NixOS configuration file.
if [[ -z $NIXOS_CONFIG ]]; then if [[ -z $NIXOS_CONFIG ]]; then
NIXOS_CONFIG=$mountPoint/etc/nixos/configuration.nix NIXOS_CONFIG=$mountPoint/etc/nixos/configuration.nix

View file

@ -102,6 +102,16 @@ in
''; '';
}; };
man.generateCaches = mkOption {
type = types.bool;
default = false;
description = ''
Whether to generate the manual page index caches using
<literal>mandb(8)</literal>. This allows searching for a page or
keyword using utilities like <literal>apropos(1)</literal>.
'';
};
info.enable = mkOption { info.enable = mkOption {
type = types.bool; type = types.bool;
default = true; default = true;
@ -187,7 +197,33 @@ in
environment.systemPackages = [ pkgs.man-db ]; environment.systemPackages = [ pkgs.man-db ];
environment.pathsToLink = [ "/share/man" ]; environment.pathsToLink = [ "/share/man" ];
environment.extraOutputsToInstall = [ "man" ] ++ optional cfg.dev.enable "devman"; environment.extraOutputsToInstall = [ "man" ] ++ optional cfg.dev.enable "devman";
environment.etc."man.conf".source = "${pkgs.man-db}/etc/man_db.conf"; environment.etc."man_db.conf".text =
let
manualPages = pkgs.buildEnv {
name = "man-paths";
paths = config.environment.systemPackages;
pathsToLink = [ "/share/man" ];
extraOutputsToInstall = ["man"];
ignoreCollisions = true;
};
manualCache = pkgs.runCommandLocal "man-cache" { }
''
echo "MANDB_MAP ${manualPages}/share/man $out" > man.conf
${pkgs.man-db}/bin/mandb -C man.conf -psc
'';
in
''
# Manual pages paths for NixOS
MANPATH_MAP /run/current-system/sw/bin /run/current-system/sw/share/man
MANPATH_MAP /run/wrappers/bin /run/current-system/sw/share/man
${optionalString cfg.man.generateCaches ''
# Generated manual pages cache for NixOS (immutable)
MANDB_MAP /run/current-system/sw/share/man ${manualCache}
''}
# Manual pages caches for NixOS
MANDB_MAP /run/current-system/sw/share/man /var/cache/man/nixos
'';
}) })
(mkIf cfg.info.enable { (mkIf cfg.info.enable {
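A minimal sketch of enabling the cache generation implemented above:

  { ... }:
  {
    documentation.man.enable = true;   # default
    # Build the man-db index at configuration build time so that
    # apropos(1) and whatis(1) can search it; off by default because it
    # can be expensive when many packages are installed.
    documentation.man.generateCaches = true;
  }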

View file

@ -589,6 +589,7 @@
./services/networking/autossh.nix ./services/networking/autossh.nix
./services/networking/bird.nix ./services/networking/bird.nix
./services/networking/bitlbee.nix ./services/networking/bitlbee.nix
./services/networking/blockbook-frontend.nix
./services/networking/charybdis.nix ./services/networking/charybdis.nix
./services/networking/cjdns.nix ./services/networking/cjdns.nix
./services/networking/cntlm.nix ./services/networking/cntlm.nix
@ -606,6 +607,7 @@
./services/networking/dnscrypt-wrapper.nix ./services/networking/dnscrypt-wrapper.nix
./services/networking/dnsdist.nix ./services/networking/dnsdist.nix
./services/networking/dnsmasq.nix ./services/networking/dnsmasq.nix
./services/networking/ncdns.nix
./services/networking/ejabberd.nix ./services/networking/ejabberd.nix
./services/networking/epmd.nix ./services/networking/epmd.nix
./services/networking/ergo.nix ./services/networking/ergo.nix
@ -685,6 +687,7 @@
./services/networking/ocserv.nix ./services/networking/ocserv.nix
./services/networking/ofono.nix ./services/networking/ofono.nix
./services/networking/oidentd.nix ./services/networking/oidentd.nix
./services/networking/onedrive.nix
./services/networking/openfire.nix ./services/networking/openfire.nix
./services/networking/openvpn.nix ./services/networking/openvpn.nix
./services/networking/ostinato.nix ./services/networking/ostinato.nix
@ -831,6 +834,7 @@
./services/web-apps/atlassian/crowd.nix ./services/web-apps/atlassian/crowd.nix
./services/web-apps/atlassian/jira.nix ./services/web-apps/atlassian/jira.nix
./services/web-apps/codimd.nix ./services/web-apps/codimd.nix
./services/web-apps/convos.nix
./services/web-apps/cryptpad.nix ./services/web-apps/cryptpad.nix
./services/web-apps/documize.nix ./services/web-apps/documize.nix
./services/web-apps/dokuwiki.nix ./services/web-apps/dokuwiki.nix
@ -935,6 +939,7 @@
./system/boot/grow-partition.nix ./system/boot/grow-partition.nix
./system/boot/initrd-network.nix ./system/boot/initrd-network.nix
./system/boot/initrd-ssh.nix ./system/boot/initrd-ssh.nix
./system/boot/initrd-openvpn.nix
./system/boot/kernel.nix ./system/boot/kernel.nix
./system/boot/kexec.nix ./system/boot/kexec.nix
./system/boot/loader/efi.nix ./system/boot/loader/efi.nix

View file

@ -11,9 +11,11 @@
uid = 1000; uid = 1000;
}; };
services.xserver.displayManager.sddm.autoLogin = { services.xserver.displayManager = {
enable = true; autoLogin = {
relogin = true; enable = true;
user = "demo"; user = "demo";
};
sddm.autoLogin.relogin = true;
}; };
} }

View file

@ -4,13 +4,24 @@ with lib;
let let
cfg = config.programs.dconf; cfg = config.programs.dconf;
cfgDir = pkgs.symlinkJoin {
mkDconfProfile = name: path: name = "dconf-system-config";
{ paths = map (x: "${x}/etc/dconf") cfg.packages;
name = "dconf/profile/${name}"; postBuild = ''
value.source = path; mkdir -p $out/profile
}; mkdir -p $out/db
'' + (
concatStringsSep "\n" (
mapAttrsToList (
name: path: ''
ln -s ${path} $out/profile/${name}
''
) cfg.profiles
)
) + ''
${pkgs.dconf}/bin/dconf update $out/db
'';
};
in in
{ {
###### interface ###### interface
@ -22,18 +33,24 @@ in
profiles = mkOption { profiles = mkOption {
type = types.attrsOf types.path; type = types.attrsOf types.path;
default = {}; default = {};
description = "Set of dconf profile files."; description = "Set of dconf profile files, installed at <filename>/etc/dconf/profile/<replaceable>name</replaceable></filename>.";
internal = true; internal = true;
}; };
packages = mkOption {
type = types.listOf types.package;
default = [];
description = "A list of packages which provide dconf profiles and databases in <filename>/etc/dconf</filename>.";
};
}; };
}; };
###### implementation ###### implementation
config = mkIf (cfg.profiles != {} || cfg.enable) { config = mkIf (cfg.profiles != {} || cfg.enable) {
environment.etc = optionalAttrs (cfg.profiles != {}) environment.etc.dconf = mkIf (cfg.profiles != {} || cfg.packages != []) {
(mapAttrs' mkDconfProfile cfg.profiles); source = cfgDir;
};
services.dbus.packages = [ pkgs.dconf ]; services.dbus.packages = [ pkgs.dconf ];

View file

@ -102,6 +102,9 @@ in
programs.fish.shellAliases = mapAttrs (name: mkDefault) cfge.shellAliases; programs.fish.shellAliases = mapAttrs (name: mkDefault) cfge.shellAliases;
# Required for man completions
documentation.man.generateCaches = true;
environment.etc."fish/foreign-env/shellInit".text = cfge.shellInit; environment.etc."fish/foreign-env/shellInit".text = cfge.shellInit;
environment.etc."fish/foreign-env/loginShellInit".text = cfge.loginShellInit; environment.etc."fish/foreign-env/loginShellInit".text = cfge.loginShellInit;
environment.etc."fish/foreign-env/interactiveShellInit".text = cfge.interactiveShellInit; environment.etc."fish/foreign-env/interactiveShellInit".text = cfge.interactiveShellInit;

View file

@ -39,7 +39,7 @@ with lib;
The services.xserver.displayManager.auto module has been removed The services.xserver.displayManager.auto module has been removed
because it was only intended for use in internal NixOS tests, and gave the because it was only intended for use in internal NixOS tests, and gave the
false impression of it being a special display manager when it's actually false impression of it being a special display manager when it's actually
LightDM. Please use the services.xserver.displayManager.lightdm.autoLogin options LightDM. Please use the services.xserver.displayManager.autoLogin options
instead, or any other display manager in NixOS as they all support auto-login. instead, or any other display manager in NixOS as they all support auto-login.
'') '')
(mkRemovedOptionModule [ "services" "dnscrypt-proxy" ] "Use services.dnscrypt-proxy2 instead") (mkRemovedOptionModule [ "services" "dnscrypt-proxy" ] "Use services.dnscrypt-proxy2 instead")

View file

@ -302,6 +302,11 @@ in
lpath = "acme/${cert}"; lpath = "acme/${cert}";
apath = "/var/lib/${lpath}"; apath = "/var/lib/${lpath}";
spath = "/var/lib/acme/.lego/${cert}"; spath = "/var/lib/acme/.lego/${cert}";
keyName = builtins.replaceStrings ["*"] ["_"] data.domain;
requestedDomains = pipe ([ data.domain ] ++ (attrNames data.extraDomains)) [
(domains: sort builtins.lessThan domains)
(domains: concatStringsSep "," domains)
];
fileMode = if data.allowKeysForGroup then "640" else "600"; fileMode = if data.allowKeysForGroup then "640" else "600";
globalOpts = [ "-d" data.domain "--email" data.email "--path" "." "--key-type" data.keyType ] globalOpts = [ "-d" data.domain "--email" data.email "--path" "." "--key-type" data.keyType ]
++ optionals (cfg.acceptTerms) [ "--accept-tos" ] ++ optionals (cfg.acceptTerms) [ "--accept-tos" ]
@ -316,6 +321,7 @@ in
certOpts ++ data.extraLegoRenewFlags); certOpts ++ data.extraLegoRenewFlags);
acmeService = { acmeService = {
description = "Renew ACME Certificate for ${cert}"; description = "Renew ACME Certificate for ${cert}";
path = with pkgs; [ openssl ];
after = [ "network.target" "network-online.target" ]; after = [ "network.target" "network-online.target" ];
wants = [ "network-online.target" ]; wants = [ "network-online.target" ];
wantedBy = mkIf (!config.boot.isContainer) [ "multi-user.target" ]; wantedBy = mkIf (!config.boot.isContainer) [ "multi-user.target" ];
@ -332,11 +338,18 @@ in
ExecStart = pkgs.writeScript "acme-start" '' ExecStart = pkgs.writeScript "acme-start" ''
#!${pkgs.runtimeShell} -e #!${pkgs.runtimeShell} -e
test -L ${spath}/accounts -o -d ${spath}/accounts || ln -s ../accounts ${spath}/accounts test -L ${spath}/accounts -o -d ${spath}/accounts || ln -s ../accounts ${spath}/accounts
${pkgs.lego}/bin/lego ${renewOpts} || ${pkgs.lego}/bin/lego ${runOpts} LEGO_ARGS=(${runOpts})
if [ -e ${spath}/certificates/${keyName}.crt ]; then
REQUESTED_DOMAINS="${requestedDomains}"
EXISTING_DOMAINS="$(openssl x509 -in ${spath}/certificates/${keyName}.crt -noout -ext subjectAltName | tail -n1 | sed -e 's/ *DNS://g')"
if [ "''${REQUESTED_DOMAINS}" == "''${EXISTING_DOMAINS}" ]; then
LEGO_ARGS=(${renewOpts})
fi
fi
${pkgs.lego}/bin/lego ''${LEGO_ARGS[@]}
''; '';
ExecStartPost = ExecStartPost =
let let
keyName = builtins.replaceStrings ["*"] ["_"] data.domain;
script = pkgs.writeScript "acme-post-start" '' script = pkgs.writeScript "acme-post-start" ''
#!${pkgs.runtimeShell} -e #!${pkgs.runtimeShell} -e
cd ${apath} cd ${apath}

View file

@ -16,7 +16,7 @@ in
type = types.lines; type = types.lines;
description = '' description = ''
Configuration for Spotifyd. For syntax and directives, see Configuration for Spotifyd. For syntax and directives, see
https://github.com/Spotifyd/spotifyd#Configuration. <link xlink:href="https://github.com/Spotifyd/spotifyd#Configuration"/>.
''; '';
}; };
}; };

View file

@ -31,6 +31,59 @@ in
''; '';
}; };
rcloneOptions = mkOption {
type = with types; nullOr (attrsOf (oneOf [ str bool ]));
default = null;
description = ''
Options to pass to rclone to control its behavior.
See <link xlink:href="https://rclone.org/docs/#options"/> for
available options. When specifying option names, strip the
leading <literal>--</literal>. To set a flag such as
<literal>--drive-use-trash</literal>, which does not take a value,
set the value to the Boolean <literal>true</literal>.
'';
example = {
bwlimit = "10M";
drive-use-trash = "true";
};
};
rcloneConfig = mkOption {
type = with types; nullOr (attrsOf (oneOf [ str bool ]));
default = null;
description = ''
Configuration for the rclone remote being used for backup.
See the remote's specific options under rclone's docs at
<link xlink:href="https://rclone.org/docs/"/>. When specifying
option names, use the "config" name specified in the docs.
For example, to set <literal>--b2-hard-delete</literal> for a B2
remote, use <literal>hard_delete = true</literal> in the
attribute set.
Warning: Secrets set in here will be world-readable in the Nix
store! Consider using the <literal>rcloneConfigFile</literal>
option instead to specify secret values separately. Note that
options set here will override those set in the config file.
'';
example = {
type = "b2";
account = "xxx";
key = "xxx";
hard_delete = true;
};
};
rcloneConfigFile = mkOption {
type = with types; nullOr path;
default = null;
description = ''
Path to the file containing rclone configuration. This file
must contain configuration for the remote specified in this backup
set and also must be readable by root. Options set in
<literal>rcloneConfig</literal> will override those set in this
file.
'';
};
repository = mkOption { repository = mkOption {
type = types.str; type = types.str;
description = '' description = ''
@ -170,11 +223,22 @@ in
( resticCmd + " forget --prune " + (concatStringsSep " " backup.pruneOpts) ) ( resticCmd + " forget --prune " + (concatStringsSep " " backup.pruneOpts) )
( resticCmd + " check" ) ( resticCmd + " check" )
]; ];
# Helper functions for rclone remotes
rcloneRemoteName = builtins.elemAt (splitString ":" backup.repository) 1;
rcloneAttrToOpt = v: "RCLONE_" + toUpper (builtins.replaceStrings [ "-" ] [ "_" ] v);
rcloneAttrToConf = v: "RCLONE_CONFIG_" + toUpper (rcloneRemoteName + "_" + v);
toRcloneVal = v: if lib.isBool v then lib.boolToString v else v;
in nameValuePair "restic-backups-${name}" ({ in nameValuePair "restic-backups-${name}" ({
environment = { environment = {
RESTIC_PASSWORD_FILE = backup.passwordFile; RESTIC_PASSWORD_FILE = backup.passwordFile;
RESTIC_REPOSITORY = backup.repository; RESTIC_REPOSITORY = backup.repository;
}; } // optionalAttrs (backup.rcloneOptions != null) (mapAttrs' (name: value:
nameValuePair (rcloneAttrToOpt name) (toRcloneVal value)
) backup.rcloneOptions) // optionalAttrs (backup.rcloneConfigFile != null) {
RCLONE_CONFIG = backup.rcloneConfigFile;
} // optionalAttrs (backup.rcloneConfig != null) (mapAttrs' (name: value:
nameValuePair (rcloneAttrToConf name) (toRcloneVal value)
) backup.rcloneConfig);
path = [ pkgs.openssh ]; path = [ pkgs.openssh ];
restartIfChanged = false; restartIfChanged = false;
serviceConfig = { serviceConfig = {
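A sketch of a backup set using the new rclone options (the remote, bucket and file paths are hypothetical; secrets are kept in rcloneConfigFile rather than the Nix store, as the warning above recommends):

  { ... }:
  {
    services.restic.backups.remote = {
      paths = [ "/home" ];
      repository = "rclone:b2backup:my-bucket";
      passwordFile = "/etc/nixos/secrets/restic-password";
      rcloneConfigFile = "/etc/nixos/secrets/rclone.conf";
      rcloneOptions = {
        bwlimit = "10M";
        b2-hard-delete = true;
      };
    };
  }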

View file

@ -67,7 +67,7 @@ in
type = types.bool; type = types.bool;
default = false; default = false;
description = '' description = ''
Wether to enable the slurm control daemon. Whether to enable the slurm control daemon.
Note that the standard authentication method is "munge". Note that the standard authentication method is "munge".
The "munge" service needs to be provided with a password file in order for The "munge" service needs to be provided with a password file in order for
slurm to work properly (see <literal>services.munge.password</literal>). slurm to work properly (see <literal>services.munge.password</literal>).
@ -135,7 +135,7 @@ in
type = types.bool; type = types.bool;
default = false; default = false;
description = '' description = ''
Wether to provide a slurm.conf file. Whether to provide a slurm.conf file.
Enable this option if you do not run a slurm daemon on this host Enable this option if you do not run a slurm daemon on this host
(i.e. <literal>server.enable</literal> and <literal>client.enable</literal> are <literal>false</literal>) (i.e. <literal>server.enable</literal> and <literal>client.enable</literal> are <literal>false</literal>)
but you still want to run slurm commands from this host. but you still want to run slurm commands from this host.

View file

@ -29,7 +29,7 @@ let
with open('${cfg.workerPassFile}', 'r', encoding='utf-8') as passwd_file: with open('${cfg.workerPassFile}', 'r', encoding='utf-8') as passwd_file:
passwd = passwd_file.read().strip('\r\n') passwd = passwd_file.read().strip('\r\n')
keepalive = 600 keepalive = ${toString cfg.keepalive}
umask = None umask = None
maxdelay = 300 maxdelay = 300
numcpus = None numcpus = None
@ -116,6 +116,15 @@ in {
description = "Specifies the Buildbot Worker connection string."; description = "Specifies the Buildbot Worker connection string.";
}; };
keepalive = mkOption {
default = 600;
type = types.int;
description = "
How frequently keepalive messages should be sent
from the worker to the buildmaster, expressed in seconds.
";
};
package = mkOption { package = mkOption {
type = types.package; type = types.package;
default = pkgs.python3Packages.buildbot-worker; default = pkgs.python3Packages.buildbot-worker;
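A sketch of tuning the new keepalive option (other settings, such as the buildmaster's address, keep their module defaults here):

  { ... }:
  {
    services.buildbot-worker = {
      enable = true;
      # Send a keepalive to the buildmaster every 5 minutes instead of
      # the default 10.
      keepalive = 300;
    };
  }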

View file

@ -1,14 +1,16 @@
{ config, lib, pkgs, ... }: { config, lib, pkgs, ... }:
with builtins;
with lib; with lib;
let let
cfg = config.services.gitlab-runner; cfg = config.services.gitlab-runner;
hasDocker = config.virtualisation.docker.enable; hasDocker = config.virtualisation.docker.enable;
hashedServices = with builtins; (mapAttrs' (name: service: nameValuePair hashedServices = mapAttrs'
"${name}_${config.networking.hostName}_${ (name: service: nameValuePair
"${name}_${config.networking.hostName}_${
substring 0 12 substring 0 12
(hashString "md5" (unsafeDiscardStringContext (toJSON service)))}" (hashString "md5" (unsafeDiscardStringContext (toJSON service)))}"
service) service)
cfg.services); cfg.services;
configPath = "$HOME/.gitlab-runner/config.toml"; configPath = "$HOME/.gitlab-runner/config.toml";
configureScript = pkgs.writeShellScriptBin "gitlab-runner-configure" ( configureScript = pkgs.writeShellScriptBin "gitlab-runner-configure" (
if (cfg.configFile != null) then '' if (cfg.configFile != null) then ''
@ -76,7 +78,7 @@ let
++ map (v: "--docker-allowed-images ${escapeShellArg v}") service.dockerAllowedImages ++ map (v: "--docker-allowed-images ${escapeShellArg v}") service.dockerAllowedImages
++ map (v: "--docker-allowed-services ${escapeShellArg v}") service.dockerAllowedServices ++ map (v: "--docker-allowed-services ${escapeShellArg v}") service.dockerAllowedServices
) )
))} && sleep 1 ))} && sleep 1 || exit 1
fi fi
'') hashedServices)} '') hashedServices)}
@ -89,8 +91,17 @@ let
# update global options # update global options
remarshal --if toml --of json ${configPath} \ remarshal --if toml --of json ${configPath} \
| jq -cM '.check_interval = ${toString cfg.checkInterval} | | jq -cM ${escapeShellArg (concatStringsSep " | " [
.concurrent = ${toString cfg.concurrent}' \ ".check_interval = ${toJSON cfg.checkInterval}"
".concurrent = ${toJSON cfg.concurrent}"
".sentry_dsn = ${toJSON cfg.sentryDSN}"
".listen_address = ${toJSON cfg.prometheusListenAddress}"
".session_server.listen_address = ${toJSON cfg.sessionServer.listenAddress}"
".session_server.advertise_address = ${toJSON cfg.sessionServer.advertiseAddress}"
".session_server.session_timeout = ${toJSON cfg.sessionServer.sessionTimeout}"
"del(.[] | nulls)"
"del(.session_server[] | nulls)"
])} \
| remarshal --if json --of toml \ | remarshal --if json --of toml \
| sponge ${configPath} | sponge ${configPath}
@ -141,6 +152,66 @@ in
0 does not mean unlimited. 0 does not mean unlimited.
''; '';
}; };
sentryDSN = mkOption {
type = types.nullOr types.str;
default = null;
example = "https://public:private@host:port/1";
description = ''
Data Source Name for tracking of all system level errors to Sentry.
'';
};
prometheusListenAddress = mkOption {
type = types.nullOr types.str;
default = null;
example = "localhost:8080";
description = ''
Address (&lt;host&gt;:&lt;port&gt;) on which the Prometheus metrics HTTP server
should be listening.
'';
};
sessionServer = mkOption {
type = types.submodule {
options = {
listenAddress = mkOption {
type = types.nullOr types.str;
default = null;
example = "0.0.0.0:8093";
description = ''
An internal URL to be used for the session server.
'';
};
advertiseAddress = mkOption {
type = types.nullOr types.str;
default = null;
example = "runner-host-name.tld:8093";
description = ''
The URL that the Runner will expose to GitLab to be used
to access the session server.
Falls back to <option>listenAddress</option> if not defined.
'';
};
sessionTimeout = mkOption {
type = types.int;
default = 1800;
description = ''
How long in seconds the session can stay active after
the job completes (which will block the job from finishing).
'';
};
};
};
default = { };
example = literalExample ''
{
listenAddress = "0.0.0.0:8093";
}
'';
description = ''
The session server allows the user to interact with jobs
that the Runner is responsible for. A good example of this is the
<link xlink:href="https://docs.gitlab.com/ee/ci/interactive_web_terminal/index.html">interactive web terminal</link>.
'';
};
gracefulTermination = mkOption { gracefulTermination = mkOption {
type = types.bool; type = types.bool;
default = false; default = false;
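A sketch of how the new sentryDSN, prometheusListenAddress and sessionServer options introduced above might be used together; the addresses are placeholders and runner registration is assumed to be configured elsewhere:

  services.gitlab-runner = {
    enable = true;
    sentryDSN = "https://public:private@sentry.example.org/1";
    prometheusListenAddress = "localhost:9252";
    sessionServer = {
      listenAddress = "0.0.0.0:8093";
      advertiseAddress = "runner.example.org:8093";
      sessionTimeout = 1800;
    };
  };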

View file

@ -5,14 +5,14 @@ with lib;
let let
cfg = config.services.openldap; cfg = config.services.openldap;
openldap = pkgs.openldap; openldap = cfg.package;
dataFile = pkgs.writeText "ldap-contents.ldif" cfg.declarativeContents; dataFile = pkgs.writeText "ldap-contents.ldif" cfg.declarativeContents;
configFile = pkgs.writeText "slapd.conf" ((optionalString cfg.defaultSchemas '' configFile = pkgs.writeText "slapd.conf" ((optionalString cfg.defaultSchemas ''
include ${pkgs.openldap.out}/etc/schema/core.schema include ${openldap.out}/etc/schema/core.schema
include ${pkgs.openldap.out}/etc/schema/cosine.schema include ${openldap.out}/etc/schema/cosine.schema
include ${pkgs.openldap.out}/etc/schema/inetorgperson.schema include ${openldap.out}/etc/schema/inetorgperson.schema
include ${pkgs.openldap.out}/etc/schema/nis.schema include ${openldap.out}/etc/schema/nis.schema
'') + '' '') + ''
${cfg.extraConfig} ${cfg.extraConfig}
database ${cfg.database} database ${cfg.database}
@ -46,6 +46,18 @@ in
"; ";
}; };
package = mkOption {
type = types.package;
default = pkgs.openldap;
description = ''
OpenLDAP package to use.
This can be used to, for example, set an OpenLDAP package
with custom overrides to enable modules or other
functionality.
'';
};
user = mkOption { user = mkOption {
type = types.str; type = types.str;
default = "openldap"; default = "openldap";
@ -152,10 +164,10 @@ in
"; ";
example = literalExample '' example = literalExample ''
''' '''
include ${pkgs.openldap.out}/etc/schema/core.schema include ${openldap.out}/etc/schema/core.schema
include ${pkgs.openldap.out}/etc/schema/cosine.schema include ${openldap.out}/etc/schema/cosine.schema
include ${pkgs.openldap.out}/etc/schema/inetorgperson.schema include ${openldap.out}/etc/schema/inetorgperson.schema
include ${pkgs.openldap.out}/etc/schema/nis.schema include ${openldap.out}/etc/schema/nis.schema
database bdb database bdb
suffix dc=example,dc=org suffix dc=example,dc=org
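With the new package option, a customized OpenLDAP build can be supplied; a sketch under the assumption that overrideAttrs with an extra configure flag is an acceptable way to enable the desired module (the flag shown is only an example):

  services.openldap = {
    enable = true;
    # hypothetical override; adjust the configure flags to the module you actually need
    package = pkgs.openldap.overrideAttrs (old: {
      configureFlags = (old.configureFlags or [ ]) ++ [ "--enable-memberof" ];
    });
  };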

View file

@ -6,10 +6,7 @@ let
cfg = config.services.jupyter; cfg = config.services.jupyter;
# NOTE: We don't use top-level jupyter because we don't package = cfg.package;
# want to pass in JUPYTER_PATH but use .environment instead,
# saving a rebuild.
package = pkgs.python3.pkgs.notebook;
kernels = (pkgs.jupyter-kernel.create { kernels = (pkgs.jupyter-kernel.create {
definitions = if cfg.kernels != null definitions = if cfg.kernels != null
@ -37,6 +34,27 @@ in {
''; '';
}; };
package = mkOption {
type = types.package;
# NOTE: We don't use top-level jupyter because we don't
# want to pass in JUPYTER_PATH but use .environment instead,
# saving a rebuild.
default = pkgs.python3.pkgs.notebook;
description = ''
Jupyter package to use.
'';
};
command = mkOption {
type = types.str;
default = "jupyter-notebook";
example = "jupyter-lab";
description = ''
Which command the service runs. Note that not all jupyter packages
have all commands, e.g. jupyter-lab isn't present in the default package.
'';
};
port = mkOption { port = mkOption {
type = types.int; type = types.int;
default = 8888; default = 8888;
@ -157,7 +175,7 @@ in {
serviceConfig = { serviceConfig = {
Restart = "always"; Restart = "always";
ExecStart = ''${package}/bin/jupyter-notebook \ ExecStart = ''${package}/bin/${cfg.command} \
--no-browser \ --no-browser \
--ip=${cfg.ip} \ --ip=${cfg.ip} \
--port=${toString cfg.port} --port-retries 0 \ --port=${toString cfg.port} --port-retries 0 \
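The new package and command options make it possible to run JupyterLab instead of the classic notebook; a sketch, assuming pkgs.python3.pkgs.jupyterlab ships the jupyter-lab entry point:

  services.jupyter = {
    enable = true;
    # assumption: this package provides the jupyter-lab command
    package = pkgs.python3.pkgs.jupyterlab;
    command = "jupyter-lab";
  };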

View file

@ -25,8 +25,11 @@ let
in in
{ {
options.services.undervolt = { options.services.undervolt = {
enable = mkEnableOption enable = mkEnableOption ''
"Intel CPU undervolting service (WARNING: may permanently damage your hardware!)"; Undervolting service for Intel CPUs.
Warning: This service is not endorsed by Intel and may permanently damage your hardware. Use at your own risk!
'';
verbose = mkOption { verbose = mkOption {
type = types.bool; type = types.bool;

View file

@ -280,6 +280,17 @@ in
description = "Whether to enable smtp submission."; description = "Whether to enable smtp submission.";
}; };
enableSubmissions = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable smtp submission via smtps.
According to RFC 8314 this should be preferred
over STARTTLS for submission of messages by end user clients.
'';
};
submissionOptions = mkOption { submissionOptions = mkOption {
type = types.attrs; type = types.attrs;
default = { default = {
@ -298,6 +309,29 @@ in
description = "Options for the submission config in master.cf"; description = "Options for the submission config in master.cf";
}; };
submissionsOptions = mkOption {
type = types.attrs;
default = {
smtpd_sasl_auth_enable = "yes";
smtpd_client_restrictions = "permit_sasl_authenticated,reject";
milter_macro_daemon_name = "ORIGINATING";
};
example = {
smtpd_sasl_auth_enable = "yes";
smtpd_sasl_type = "dovecot";
smtpd_client_restrictions = "permit_sasl_authenticated,reject";
milter_macro_daemon_name = "ORIGINATING";
};
description = ''
Options for the submission config via smtps in master.cf.
smtpd_tls_security_level will be set to "encrypt" if it is missing
or has one of the values "may" or "none".
smtpd_tls_wrappermode with the value "yes" will be added automatically.
'';
};
setSendmail = mkOption { setSendmail = mkOption {
type = types.bool; type = types.bool;
default = true; default = true;
@ -454,7 +488,7 @@ in
''; '';
example = { example = {
mail_owner = "postfix"; mail_owner = "postfix";
smtp_use_tls = true; smtp_tls_security_level = "may";
}; };
}; };
@ -466,18 +500,20 @@ in
"; ";
}; };
tlsTrustedAuthorities = mkOption {
type = types.str;
default = "${pkgs.cacert}/etc/ssl/certs/ca-bundle.crt";
description = ''
File containing trusted certification authorities (CA) used to verify the certificates of mail servers contacted for mail delivery. This basically sets smtp_tls_CAfile and enables opportunistic TLS. Defaults to the NixOS trusted certification authorities.
'';
};
sslCert = mkOption { sslCert = mkOption {
type = types.str; type = types.str;
default = ""; default = "";
description = "SSL certificate to use."; description = "SSL certificate to use.";
}; };
sslCACert = mkOption {
type = types.str;
default = "";
description = "SSL certificate of CA.";
};
sslKey = mkOption { sslKey = mkOption {
type = types.str; type = types.str;
default = ""; default = "";
@ -771,18 +807,20 @@ in
recipient_canonical_classes = [ "envelope_recipient" ]; recipient_canonical_classes = [ "envelope_recipient" ];
} }
// optionalAttrs cfg.enableHeaderChecks { header_checks = [ "regexp:/etc/postfix/header_checks" ]; } // optionalAttrs cfg.enableHeaderChecks { header_checks = [ "regexp:/etc/postfix/header_checks" ]; }
// optionalAttrs (cfg.tlsTrustedAuthorities != "") {
smtp_tls_CAfile = cfg.tlsTrustedAuthorities;
smtp_tls_security_level = "may";
}
// optionalAttrs (cfg.sslCert != "") { // optionalAttrs (cfg.sslCert != "") {
smtp_tls_CAfile = cfg.sslCACert;
smtp_tls_cert_file = cfg.sslCert; smtp_tls_cert_file = cfg.sslCert;
smtp_tls_key_file = cfg.sslKey; smtp_tls_key_file = cfg.sslKey;
smtp_use_tls = true; smtp_tls_security_level = "may";
smtpd_tls_CAfile = cfg.sslCACert;
smtpd_tls_cert_file = cfg.sslCert; smtpd_tls_cert_file = cfg.sslCert;
smtpd_tls_key_file = cfg.sslKey; smtpd_tls_key_file = cfg.sslKey;
smtpd_use_tls = true; smtpd_tls_security_level = "may";
}; };
services.postfix.masterConfig = { services.postfix.masterConfig = {
@ -878,6 +916,23 @@ in
command = "smtp"; command = "smtp";
args = [ "-o" "smtp_fallback_relay=" ]; args = [ "-o" "smtp_fallback_relay=" ];
}; };
} // optionalAttrs cfg.enableSubmissions {
submissions = {
type = "inet";
private = false;
command = "smtpd";
args = let
mkKeyVal = opt: val: [ "-o" (opt + "=" + val) ];
adjustSmtpTlsSecurityLevel = !(cfg.submissionsOptions ? smtpd_tls_security_level) ||
cfg.submissionsOptions.smtpd_tls_security_level == "none" ||
cfg.submissionsOptions.smtpd_tls_security_level == "may";
submissionsOptions = cfg.submissionsOptions // {
smtpd_tls_wrappermode = "yes";
} // optionalAttrs adjustSmtpTlsSecurityLevel {
smtpd_tls_security_level = "encrypt";
};
in concatLists (mapAttrsToList mkKeyVal submissionsOptions);
};
}; };
} }
@ -900,4 +955,9 @@ in
services.postfix.mapFiles.client_access = checkClientAccessFile; services.postfix.mapFiles.client_access = checkClientAccessFile;
}) })
]); ]);
imports = [
(mkRemovedOptionModule [ "services" "postfix" "sslCACert" ]
"services.postfix.sslCACert was replaced by services.postfix.tlsTrustedAuthorities. In case you intend that your server should validate requested client certificates use services.postfix.extraConfig.")
];
} }
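A sketch combining the new submissions (smtps) support and the tlsTrustedAuthorities option; the certificate paths and SASL settings are placeholders:

  services.postfix = {
    enable = true;
    enableSubmissions = true;
    submissionsOptions = {
      smtpd_sasl_auth_enable = "yes";
      smtpd_sasl_type = "dovecot";
      smtpd_client_restrictions = "permit_sasl_authenticated,reject";
    };
    # the default already points at the NixOS CA bundle; shown here for clarity
    tlsTrustedAuthorities = "${pkgs.cacert}/etc/ssl/certs/ca-bundle.crt";
    sslCert = "/var/lib/acme/mail.example.org/fullchain.pem";
    sslKey = "/var/lib/acme/mail.example.org/key.pem";
  };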

View file

@ -95,6 +95,18 @@ in
''; '';
}; };
maxAttachmentSize = mkOption {
type = types.int;
default = 18;
description = ''
The maximum attachment size in MB.
Note: since Roundcube only uses 70% of the maximum upload size configured in PHP,
30% is added automatically to <xref linkend="opt-services.roundcube.maxAttachmentSize"/>.
'';
apply = configuredMaxAttachmentSize: "${toString (configuredMaxAttachmentSize * 1.3)}M";
};
extraConfig = mkOption { extraConfig = mkOption {
type = types.lines; type = types.lines;
default = ""; default = "";
@ -115,7 +127,7 @@ in
$config = array(); $config = array();
$config['db_dsnw'] = 'pgsql://${cfg.database.username}${lib.optionalString (!localDB) ":' . $password . '"}@${if localDB then "unix(/run/postgresql)" else cfg.database.host}/${cfg.database.dbname}'; $config['db_dsnw'] = 'pgsql://${cfg.database.username}${lib.optionalString (!localDB) ":' . $password . '"}@${if localDB then "unix(/run/postgresql)" else cfg.database.host}/${cfg.database.dbname}';
$config['log_driver'] = 'syslog'; $config['log_driver'] = 'syslog';
$config['max_message_size'] = '25M'; $config['max_message_size'] = '${cfg.maxAttachmentSize}';
$config['plugins'] = [${concatMapStringsSep "," (p: "'${p}'") cfg.plugins}]; $config['plugins'] = [${concatMapStringsSep "," (p: "'${p}'") cfg.plugins}];
$config['des_key'] = file_get_contents('/var/lib/roundcube/des_key'); $config['des_key'] = file_get_contents('/var/lib/roundcube/des_key');
$config['mime_types'] = '${pkgs.nginx}/conf/mime.types'; $config['mime_types'] = '${pkgs.nginx}/conf/mime.types';
@ -172,8 +184,8 @@ in
phpOptions = '' phpOptions = ''
error_log = 'stderr' error_log = 'stderr'
log_errors = on log_errors = on
post_max_size = 25M post_max_size = ${cfg.maxAttachmentSize}
upload_max_filesize = 25M upload_max_filesize = ${cfg.maxAttachmentSize}
''; '';
settings = mapAttrs (name: mkDefault) { settings = mapAttrs (name: mkDefault) {
"listen.owner" = "nginx"; "listen.owner" = "nginx";

View file

@ -27,7 +27,10 @@ in
type = types.str; type = types.str;
default = "/var/lib/gitolite"; default = "/var/lib/gitolite";
description = '' description = ''
Gitolite home directory (used to store all the repositories). The gitolite home directory used to store all repositories. If left at the default value,
this directory will automatically be created before the gitolite server starts; otherwise
the sysadmin is responsible for ensuring that the directory exists with appropriate ownership
and permissions.
''; '';
}; };
@ -149,14 +152,6 @@ in
}; };
users.groups.${cfg.group}.gid = config.ids.gids.gitolite; users.groups.${cfg.group}.gid = config.ids.gids.gitolite;
systemd.tmpfiles.rules = [
"d '${cfg.dataDir}' 0750 ${cfg.user} ${cfg.group} - -"
"d '${cfg.dataDir}'/.gitolite - ${cfg.user} ${cfg.group} - -"
"d '${cfg.dataDir}'/.gitolite/logs - ${cfg.user} ${cfg.group} - -"
"Z ${cfg.dataDir} 0750 ${cfg.user} ${cfg.group} - -"
];
systemd.services.gitolite-init = { systemd.services.gitolite-init = {
description = "Gitolite initialization"; description = "Gitolite initialization";
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
@ -167,13 +162,19 @@ in
GITOLITE_RC_DEFAULT = "${rcDir}/gitolite.rc.default"; GITOLITE_RC_DEFAULT = "${rcDir}/gitolite.rc.default";
}; };
serviceConfig = { serviceConfig = mkMerge [
Type = "oneshot"; (mkIf (cfg.dataDir == "/var/lib/gitolite") {
User = cfg.user; StateDirectory = "gitolite gitolite/.gitolite gitolite/.gitolite/logs";
Group = cfg.group; StateDirectoryMode = "0750";
WorkingDirectory = "~"; })
RemainAfterExit = true; {
}; Type = "oneshot";
User = cfg.user;
Group = cfg.group;
WorkingDirectory = "~";
RemainAfterExit = true;
}
];
path = [ pkgs.gitolite pkgs.git pkgs.perl pkgs.bash pkgs.diffutils config.programs.ssh.package ]; path = [ pkgs.gitolite pkgs.git pkgs.perl pkgs.bash pkgs.diffutils config.programs.ssh.package ];
script = script =
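With the change above, the state directory is only created automatically when dataDir keeps its default value; for a custom location the directory has to be provisioned separately. A sketch, assuming the default gitolite user and group names:

  services.gitolite = {
    enable = true;
    dataDir = "/srv/gitolite";
    adminPubkey = "ssh-ed25519 AAAA... admin@example.org";
  };
  # non-default dataDir: create it yourself, e.g. with a tmpfiles rule
  systemd.tmpfiles.rules = [
    "d /srv/gitolite 0750 gitolite gitolite - -"
  ];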

View file

@ -14,9 +14,9 @@
<para> <para>
This chapter will show you how to set up your own, self-hosted Matrix This chapter will show you how to set up your own, self-hosted Matrix
homeserver using the Synapse reference homeserver, and how to serve your own homeserver using the Synapse reference homeserver, and how to serve your own
copy of the Riot web client. See the copy of the Element web client. See the
<link xlink:href="https://matrix.org/docs/projects/try-matrix-now.html">Try <link xlink:href="https://matrix.org/docs/projects/try-matrix-now.html">Try
Matrix Now!</link> overview page for links to Riot Apps for Android and iOS, Matrix Now!</link> overview page for links to Element Apps for Android and iOS,
desktop clients, as well as bridges to other networks and other projects desktop clients, as well as bridges to other networks and other projects
around Matrix. around Matrix.
</para> </para>
@ -84,7 +84,7 @@ in {
"m.homeserver" = { "base_url" = "https://${fqdn}"; }; "m.homeserver" = { "base_url" = "https://${fqdn}"; };
"m.identity_server" = { "base_url" = "https://vector.im"; }; "m.identity_server" = { "base_url" = "https://vector.im"; };
}; };
# ACAO required to allow riot-web on any URL to request this json file # ACAO required to allow element-web on any URL to request this json file
in '' in ''
add_header Content-Type application/json; add_header Content-Type application/json;
add_header Access-Control-Allow-Origin *; add_header Access-Control-Allow-Origin *;
@ -98,7 +98,7 @@ in {
<link linkend="opt-services.nginx.virtualHosts._name_.forceSSL">forceSSL</link> = true; <link linkend="opt-services.nginx.virtualHosts._name_.forceSSL">forceSSL</link> = true;
# Or do a redirect instead of the 404, or whatever is appropriate for you. # Or do a redirect instead of the 404, or whatever is appropriate for you.
# But do not put a Matrix Web client here! See the Riot Web section below. # But do not put a Matrix Web client here! See the Element web section below.
<link linkend="opt-services.nginx.virtualHosts._name_.locations._name_.extraConfig">locations."/".extraConfig</link> = '' <link linkend="opt-services.nginx.virtualHosts._name_.locations._name_.extraConfig">locations."/".extraConfig</link> = ''
return 404; return 404;
''; '';
@ -171,17 +171,19 @@ Success!
option until a better solution for NixOS is in place. option until a better solution for NixOS is in place.
</para> </para>
</section> </section>
<section xml:id="module-services-matrix-riot-web"> <section xml:id="module-services-matrix-element-web">
<title>Riot Web Client</title> <title>Element (formerly known as Riot) Web Client</title>
<para> <para>
<link xlink:href="https://github.com/vector-im/riot-web/">Riot Web</link> is <link xlink:href="https://github.com/vector-im/riot-web/">Element Web</link> is
the reference web client for Matrix and developed by the core team at the reference web client for Matrix and developed by the core team at
matrix.org. The following snippet can be optionally added to the code before matrix.org. Element was formerly known as Riot.im, see the
<link xlink:href="https://element.io/blog/welcome-to-element/">Element introductory blog post</link>
for more information. The following snippet can be optionally added to the code before
to complete the synapse installation with a web client served at to complete the synapse installation with a web client served at
<code>https://riot.myhostname.example.org</code> and <code>https://element.myhostname.example.org</code> and
<code>https://riot.example.org</code>. Alternatively, you can use the hosted <code>https://element.example.org</code>. Alternatively, you can use the hosted
copy at <link xlink:href="https://riot.im/app">https://riot.im/app</link>, copy at <link xlink:href="https://app.element.io/">https://app.element.io/</link>,
or use other web clients or native client applications. Due to the or use other web clients or native client applications. Due to the
<literal>/.well-known</literal> urls set up done above, many clients should <literal>/.well-known</literal> urls set up done above, many clients should
fill in the required connection details automatically when you enter your fill in the required connection details automatically when you enter your
@ -191,14 +193,14 @@ Success!
featureset. featureset.
<programlisting> <programlisting>
{ {
services.nginx.virtualHosts."riot.${fqdn}" = { services.nginx.virtualHosts."element.${fqdn}" = {
<link linkend="opt-services.nginx.virtualHosts._name_.enableACME">enableACME</link> = true; <link linkend="opt-services.nginx.virtualHosts._name_.enableACME">enableACME</link> = true;
<link linkend="opt-services.nginx.virtualHosts._name_.forceSSL">forceSSL</link> = true; <link linkend="opt-services.nginx.virtualHosts._name_.forceSSL">forceSSL</link> = true;
<link linkend="opt-services.nginx.virtualHosts._name_.serverAliases">serverAliases</link> = [ <link linkend="opt-services.nginx.virtualHosts._name_.serverAliases">serverAliases</link> = [
"riot.${config.networking.domain}" "element.${config.networking.domain}"
]; ];
<link linkend="opt-services.nginx.virtualHosts._name_.root">root</link> = pkgs.riot-web.override { <link linkend="opt-services.nginx.virtualHosts._name_.root">root</link> = pkgs.element-web.override {
conf = { conf = {
default_server_config."m.homeserver" = { default_server_config."m.homeserver" = {
"base_url" = "${config.networking.domain}"; "base_url" = "${config.networking.domain}";
@ -212,13 +214,13 @@ Success!
</para> </para>
<para> <para>
Note that the Riot developers do not recommend running Riot and your Matrix Note that the Element developers do not recommend running Element and your Matrix
homeserver on the same fully-qualified domain name for security reasons. In homeserver on the same fully-qualified domain name for security reasons. In
the example, this means that you should not reuse the the example, this means that you should not reuse the
<literal>myhostname.example.org</literal> virtualHost to also serve Riot, <literal>myhostname.example.org</literal> virtualHost to also serve Element,
but instead serve it on a different subdomain, like but instead serve it on a different subdomain, like
<literal>riot.example.org</literal> in the example. See the <literal>element.example.org</literal> in the example. See the
<link xlink:href="https://github.com/vector-im/riot-web#important-security-note">Riot <link xlink:href="https://github.com/vector-im/riot-web#important-security-note">Element
Important Security Notes</link> for more information on this subject. Important Security Notes</link> for more information on this subject.
</para> </para>
</section> </section>

View file

@ -193,50 +193,111 @@ in
}; };
buildMachines = mkOption { buildMachines = mkOption {
type = types.listOf types.attrs; type = types.listOf (types.submodule ({
options = {
hostName = mkOption {
type = types.str;
example = "nixbuilder.example.org";
description = ''
The hostname of the build machine.
'';
};
system = mkOption {
type = types.nullOr types.str;
default = null;
example = "x86_64-linux";
description = ''
The system type the build machine can execute derivations on.
Either this attribute or <varname>systems</varname> must be
present, where <varname>system</varname> takes precedence if
both are set.
'';
};
systems = mkOption {
type = types.listOf types.str;
default = [];
example = [ "x86_64-linux" "aarch64-linux" ];
description = ''
The system types the build machine can execute derivations on.
Either this attribute or <varname>system</varname> must be
present, where <varname>system</varname> takes precedence if
both are set.
'';
};
sshUser = mkOption {
type = types.nullOr types.str;
default = null;
example = "builder";
description = ''
The username to log in as on the remote host. This user must be
able to log in and run nix commands non-interactively. It must
also be privileged to build derivations, so must be included in
<option>nix.trustedUsers</option>.
'';
};
sshKey = mkOption {
type = types.nullOr types.str;
default = null;
example = "/root/.ssh/id_buildhost_builduser";
description = ''
The path to the SSH private key with which to authenticate on
the build machine. The private key must not have a passphrase.
If null, the building user (root on NixOS machines) must have an
appropriate ssh configuration to log in non-interactively.
Note that for security reasons, this path must point to a file
in the local filesystem, *not* to the nix store.
'';
};
maxJobs = mkOption {
type = types.int;
default = 1;
description = ''
The number of concurrent jobs the build machine supports. The
build machine will enforce its own limits, but this allows hydra
to schedule better since there is no work-stealing between build
machines.
'';
};
speedFactor = mkOption {
type = types.int;
default = 1;
description = ''
The relative speed of this builder. This is an arbitrary integer
that indicates the speed of this builder, relative to other
builders. Higher is faster.
'';
};
mandatoryFeatures = mkOption {
type = types.listOf types.str;
default = [];
example = [ "big-parallel" ];
description = ''
A list of features mandatory for this builder. The builder will
be ignored for derivations that don't require all features in
this list. All mandatory features are automatically included in
<varname>supportedFeatures</varname>.
'';
};
supportedFeatures = mkOption {
type = types.listOf types.str;
default = [];
example = [ "kvm" "big-parallel" ];
description = ''
A list of features supported by this builder. The builder will
be ignored for derivations that require features not in this
list.
'';
};
};
}));
default = []; default = [];
example = literalExample ''
[ { hostName = "voila.labs.cs.uu.nl";
sshUser = "nix";
sshKey = "/root/.ssh/id_buildfarm";
system = "powerpc-darwin";
maxJobs = 1;
}
{ hostName = "linux64.example.org";
sshUser = "buildfarm";
sshKey = "/root/.ssh/id_buildfarm";
system = "x86_64-linux";
maxJobs = 2;
speedFactor = 2;
supportedFeatures = [ "kvm" ];
mandatoryFeatures = [ "perf" ];
}
]
'';
description = '' description = ''
This option lists the machines to be used if distributed This option lists the machines to be used if distributed builds are
builds are enabled (see enabled (see <option>nix.distributedBuilds</option>).
<option>nix.distributedBuilds</option>). Nix will perform Nix will perform derivations on those machines via SSH by copying the
derivations on those machines via SSH by copying the inputs inputs to the Nix store on the remote machine, starting the build,
to the Nix store on the remote machine, starting the build, then copying the output back to the local Nix store.
then copying the output back to the local Nix store. Each
element of the list should be an attribute set containing
the machine's host name (<varname>hostname</varname>), the
user name to be used for the SSH connection
(<varname>sshUser</varname>), the Nix system type
(<varname>system</varname>, e.g.,
<literal>"i686-linux"</literal>), the maximum number of
jobs to be run in parallel on that machine
(<varname>maxJobs</varname>), the path to the SSH private
key to be used to connect (<varname>sshKey</varname>), a
list of supported features of the machine
(<varname>supportedFeatures</varname>) and a list of
mandatory features of the machine
(<varname>mandatoryFeatures</varname>). The SSH private key
should not have a passphrase, and the corresponding public
key should be added to
<filename>~<replaceable>sshUser</replaceable>/authorized_keys</filename>
on the remote machine.
''; '';
}; };
@ -461,14 +522,14 @@ in
{ enable = cfg.buildMachines != []; { enable = cfg.buildMachines != [];
text = text =
concatMapStrings (machine: concatMapStrings (machine:
"${if machine ? sshUser then "${machine.sshUser}@" else ""}${machine.hostName} " "${if machine.sshUser != null then "${machine.sshUser}@" else ""}${machine.hostName} "
+ machine.system or (concatStringsSep "," machine.systems) + (if machine.system != null then machine.system else concatStringsSep "," machine.systems)
+ " ${machine.sshKey or "-"} ${toString machine.maxJobs or 1} " + " ${if machine.sshKey != null then machine.sshKey else "-"} ${toString machine.maxJobs} "
+ toString (machine.speedFactor or 1) + toString (machine.speedFactor)
+ " " + " "
+ concatStringsSep "," (machine.mandatoryFeatures or [] ++ machine.supportedFeatures or []) + concatStringsSep "," (machine.mandatoryFeatures ++ machine.supportedFeatures)
+ " " + " "
+ concatStringsSep "," machine.mandatoryFeatures or [] + concatStringsSep "," machine.mandatoryFeatures
+ "\n" + "\n"
) cfg.buildMachines; ) cfg.buildMachines;
}; };
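Since the old literalExample was dropped in this change, here is a sketch of a buildMachines entry using the new submodule options (host, key path and features are placeholders):

  nix.distributedBuilds = true;
  nix.buildMachines = [
    {
      hostName = "builder.example.org";
      system = "x86_64-linux";
      sshUser = "builder";
      sshKey = "/root/.ssh/id_builder";
      maxJobs = 4;
      speedFactor = 2;
      supportedFeatures = [ "kvm" "big-parallel" ];
      mandatoryFeatures = [ ];
    }
  ];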

View file

@ -34,6 +34,7 @@ let
"mail" "mail"
"mikrotik" "mikrotik"
"minio" "minio"
"modemmanager"
"nextcloud" "nextcloud"
"nginx" "nginx"
"node" "node"

View file

@ -0,0 +1,33 @@
{ config, lib, pkgs, options }:
with lib;
let
cfg = config.services.prometheus.exporters.modemmanager;
in
{
port = 9539;
extraOpts = {
refreshRate = mkOption {
type = types.str;
default = "5s";
description = ''
How frequently ModemManager will refresh the extended signal quality
information for each modem. The duration should be specified in seconds
("5s"), minutes ("1m"), or hours ("1h").
'';
};
};
serviceOpts = {
serviceConfig = {
# Required in order to authenticate with ModemManager via D-Bus.
SupplementaryGroups = "networkmanager";
ExecStart = ''
${pkgs.prometheus-modemmanager-exporter}/bin/modemmanager_exporter \
-addr ${cfg.listenAddress}:${toString cfg.port} \
-rate ${cfg.refreshRate} \
${concatStringsSep " \\\n " cfg.extraFlags}
'';
};
};
}
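A sketch of enabling the new exporter; enable and port come from the shared prometheus-exporter options, refreshRate from this module:

  services.prometheus.exporters.modemmanager = {
    enable = true;
    port = 9539;
    # poll extended signal quality every 30 seconds
    refreshRate = "30s";
  };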

View file

@ -124,7 +124,7 @@ in {
<literal>"iponly"</literal>: specifies no authentication. ACLs authorization is used. <literal>"iponly"</literal>: specifies no authentication. ACLs authorization is used.
</para></listitem> </para></listitem>
<listitem><para> <listitem><para>
<literal>"strong"</literal>: authentication by username/password. If user is not registered his access is denied regardless of ACLs. <literal>"strong"</literal>: authentication by username/password. If user is not registered their access is denied regardless of ACLs.
</para></listitem> </para></listitem>
</itemizedlist> </itemizedlist>

View file

@ -0,0 +1,272 @@
{ config, lib, pkgs, ... }:
with lib;
let
eachBlockbook = config.services.blockbook-frontend;
blockbookOpts = { config, lib, name, ...}: {
options = {
enable = mkEnableOption "blockbook-frontend application.";
package = mkOption {
type = types.package;
default = pkgs.blockbook;
description = "Which blockbook package to use.";
};
user = mkOption {
type = types.str;
default = "blockbook-frontend-${name}";
description = "The user as which to run blockbook-frontend-${name}.";
};
group = mkOption {
type = types.str;
default = "${config.user}";
description = "The group as which to run blockbook-frontend-${name}.";
};
certFile = mkOption {
type = types.nullOr types.path;
default = null;
example = "/etc/secrets/blockbook-frontend-${name}/certFile";
description = ''
To enable SSL, specify the path to the certificate files without the extension.
Expecting <filename>certFile.crt</filename> and <filename>certFile.key</filename>.
'';
};
configFile = mkOption {
type = with types; nullOr path;
default = null;
example = "${config.dataDir}/config.json";
description = "Location of the blockbook configuration file.";
};
coinName = mkOption {
type = types.str;
default = "Bitcoin";
example = "Bitcoin";
description = ''
See <link xlink:href="https://github.com/trezor/blockbook/blob/master/bchain/coins/blockchain.go#L61"/>
for the list of coins currently supported in master (Note: may differ from release).
'';
};
cssDir = mkOption {
type = types.path;
default = "${config.package}/share/css/";
example = "${config.dataDir}/static/css/";
description = ''
Location of the directory containing the <filename>main.css</filename> file.
By default, the one shipped with the package is used.
'';
};
dataDir = mkOption {
type = types.path;
default = "/var/lib/blockbook-frontend-${name}";
description = "Location of blockbook-frontend-${name} data directory.";
};
debug = mkOption {
type = types.bool;
default = false;
description = "Debug mode, return more verbose errors, reload templates on each request.";
};
internal = mkOption {
type = types.nullOr types.str;
default = ":9030";
example = ":9030";
description = "Internal http server binding <literal>[address]:port</literal>.";
};
messageQueueBinding = mkOption {
type = types.str;
default = "tcp://127.0.0.1:38330";
example = "tcp://127.0.0.1:38330";
description = "Message Queue Binding <literal>address:port</literal>.";
};
public = mkOption {
type = types.nullOr types.str;
default = ":9130";
example = ":9130";
description = "Public http server binding <literal>[address]:port</literal>.";
};
rpc = {
url = mkOption {
type = types.str;
default = "http://127.0.0.1";
description = "URL for JSON-RPC connections.";
};
port = mkOption {
type = types.port;
default = 8030;
description = "Port for JSON-RPC connections.";
};
user = mkOption {
type = types.str;
default = "rpc";
example = "rpc";
description = "Username for JSON-RPC connections.";
};
password = mkOption {
type = types.str;
default = "rpc";
example = "rpc";
description = ''
RPC password for JSON-RPC connections.
Warning: this is stored in cleartext in the Nix store!!!
Use <literal>configFile</literal> or <literal>passwordFile</literal> if needed.
'';
};
passwordFile = mkOption {
type = types.nullOr types.path;
default = null;
description = ''
File containing password of the RPC user.
Note: This option is ignored when <literal>configFile</literal> is used.
'';
};
};
sync = mkOption {
type = types.bool;
default = true;
description = "Synchronizes until tip, if together with zeromq, keeps index synchronized.";
};
templateDir = mkOption {
type = types.path;
default = "${config.package}/share/templates/";
example = "${config.dataDir}/templates/static/";
description = "Location of the HTML templates. By default, ones shipped with the package are used.";
};
extraConfig = mkOption {
type = types.attrs;
default = {};
example = literalExample '' {
alternative_estimate_fee = "whatthefee-disabled";
alternative_estimate_fee_params = "{\"url\": \"https://whatthefee.io/data.json\", \"periodSeconds\": 60}";
fiat_rates = "coingecko";
fiat_rates_params = "{\"url\": \"https://api.coingecko.com/api/v3\", \"coin\": \"bitcoin\", \"periodSeconds\": 60}";
coin_shortcut = "BTC";
coin_label = "Bitcoin";
xpub_magic = 76067358;
xpub_magic_segwit_p2sh = 77429938;
xpub_magic_segwit_native = 78792518;
}'';
description = ''
Additional configurations to be appended to <filename>coin.conf</filename>.
Overrides any already defined configuration options.
See <link xlink:href="https://github.com/trezor/blockbook/tree/master/configs/coins"/>
for current configuration options supported in master (Note: may differ from release).
'';
};
extraCmdLineOptions = mkOption {
type = types.listOf types.str;
default = [];
example = [ "-workers=1" "-dbcache=0" "-logtosderr" ];
description = ''
Extra command line options to pass to Blockbook.
Run blockbook --help to list all available options.
'';
};
};
};
in
{
# interface
options = {
services.blockbook-frontend = mkOption {
type = types.attrsOf (types.submodule blockbookOpts);
default = {};
description = "Specification of one or more blockbook-frontend instances.";
};
};
# implementation
config = mkIf (eachBlockbook != {}) {
systemd.services = mapAttrs' (blockbookName: cfg: (
nameValuePair "blockbook-frontend-${blockbookName}" (
let
configFile = if cfg.configFile != null then cfg.configFile else
pkgs.writeText "config.conf" (builtins.toJSON ( {
coin_name = "${cfg.coinName}";
rpc_user = "${cfg.rpc.user}";
rpc_pass = "${cfg.rpc.password}";
rpc_url = "${cfg.rpc.url}:${toString cfg.rpc.port}";
message_queue_binding = "${cfg.messageQueueBinding}";
} // cfg.extraConfig)
);
in {
description = "blockbook-frontend-${blockbookName} daemon";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
preStart = ''
ln -sf ${cfg.templateDir} ${cfg.dataDir}/static/
ln -sf ${cfg.cssDir} ${cfg.dataDir}/static/
${optionalString (cfg.rpc.passwordFile != null && cfg.configFile == null) ''
CONFIGTMP=$(mktemp)
${pkgs.jq}/bin/jq ".rpc_pass = \"$(cat ${cfg.rpc.passwordFile})\"" ${configFile} > $CONFIGTMP
mv $CONFIGTMP ${cfg.dataDir}/${blockbookName}-config.json
''}
'';
serviceConfig = {
User = cfg.user;
Group = cfg.group;
ExecStart = ''
${cfg.package}/bin/blockbook \
${if (cfg.rpc.passwordFile != null && cfg.configFile == null) then
"-blockchaincfg=${cfg.dataDir}/${blockbookName}-config.json"
else
"-blockchaincfg=${configFile}"
} \
-datadir=${cfg.dataDir} \
${optionalString (cfg.sync != false) "-sync"} \
${optionalString (cfg.certFile != null) "-certfile=${toString cfg.certFile}"} \
${optionalString (cfg.debug != false) "-debug"} \
${optionalString (cfg.internal != null) "-internal=${toString cfg.internal}"} \
${optionalString (cfg.public != null) "-public=${toString cfg.public}"} \
${toString cfg.extraCmdLineOptions}
'';
Restart = "on-failure";
WorkingDirectory = cfg.dataDir;
LimitNOFILE = 65536;
};
}
) )) eachBlockbook;
systemd.tmpfiles.rules = flatten (mapAttrsToList (blockbookName: cfg: [
"d ${cfg.dataDir} 0750 ${cfg.user} ${cfg.group} - -"
"d ${cfg.dataDir}/static 0750 ${cfg.user} ${cfg.group} - -"
]) eachBlockbook);
users.users = mapAttrs' (blockbookName: cfg: (
nameValuePair "blockbook-frontend-${blockbookName}" {
name = cfg.user;
group = cfg.group;
home = cfg.dataDir;
isSystemUser = true;
})) eachBlockbook;
users.groups = mapAttrs' (instanceName: cfg: (
nameValuePair "${cfg.group}" { })) eachBlockbook;
};
}
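A sketch of a single blockbook-frontend instance as defined by the module above; the instance name, RPC port and password file path are placeholders:

  services.blockbook-frontend."btc" = {
    enable = true;
    coinName = "Bitcoin";
    rpc = {
      url = "http://127.0.0.1";
      port = 8332;
      user = "rpc";
      passwordFile = "/etc/secrets/bitcoind-rpc-password";
    };
    internal = ":9030";
    public = ":9130";
  };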

View file

@ -77,6 +77,8 @@ in {
AmbientCapabilities = "CAP_NET_ADMIN CAP_NET_RAW"; AmbientCapabilities = "CAP_NET_ADMIN CAP_NET_RAW";
NoNewPrivileges = true; NoNewPrivileges = true;
DynamicUser = true; DynamicUser = true;
Type = "notify";
NotifyAccess = "main";
ExecStart = "${getBin cfg.package}/bin/corerad -c=${cfg.configFile}"; ExecStart = "${getBin cfg.package}/bin/corerad -c=${cfg.configFile}";
Restart = "on-failure"; Restart = "on-failure";
}; };

View file

@ -0,0 +1,278 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfgs = config.services;
cfg = cfgs.ncdns;
dataDir = "/var/lib/ncdns";
username = "ncdns";
valueType = with types; oneOf [ int str bool path ]
// { description = "setting type (integer, string, bool or path)"; };
configType = with types; attrsOf (nullOr (either valueType configType))
// { description = ''
ncdns.conf configuration type. The format consists of an
attribute set of settings. Each setting can be either `null`,
a value or an attribute set. The allowed values are integers,
strings, booleans or paths.
'';
};
configFile = pkgs.runCommand "ncdns.conf"
{ json = builtins.toJSON cfg.settings;
passAsFile = [ "json" ];
}
"${pkgs.remarshal}/bin/json2toml < $jsonPath > $out";
defaultFiles = {
public = "${dataDir}/bit.key";
private = "${dataDir}/bit.private";
zonePublic = "${dataDir}/bit-zone.key";
zonePrivate = "${dataDir}/bit-zone.private";
};
# if all keys are the default value
needsKeygen = all id (flip mapAttrsToList cfg.dnssec.keys
(n: v: v == getAttr n defaultFiles));
mkDefaultAttrs = mapAttrs (n: v: mkDefault v);
in
{
###### interface
options = {
services.ncdns = {
enable = mkEnableOption ''
ncdns, a Go daemon to bridge Namecoin to DNS.
To resolve .bit domains, set <literal>services.namecoind.enable = true;</literal>
and configure an RPC username/password
'';
address = mkOption {
type = types.str;
default = "127.0.0.1";
description = ''
The IP address the ncdns resolver will bind to. Leave this unchanged
if you do not wish to directly expose the resolver.
'';
};
port = mkOption {
type = types.port;
default = 5333;
description = ''
The port the ncdns resolver will bind to.
'';
};
identity.hostname = mkOption {
type = types.str;
default = config.networking.hostName;
example = "example.com";
description = ''
The hostname of this ncdns instance, which defaults to the machine
hostname. If specified, ncdns lists the hostname as an NS record at
the zone apex:
<programlisting>
bit. IN NS ns1.example.com.
</programlisting>
If unset, ncdns will generate an internal pseudo-hostname under the
zone, which will resolve to the value of
<option>services.ncdns.identity.address</option>.
If you are only using ncdns locally you can ignore this.
'';
};
identity.hostmaster = mkOption {
type = types.str;
default = "";
example = "root@example.com";
description = ''
An email address for the SOA record at the bit zone.
If you are only using ncdns locally you can ignore this.
'';
};
identity.address = mkOption {
type = types.str;
default = "127.127.127.127";
description = ''
The IP address the hostname specified in
<option>services.ncdns.identity.hostname</option> should resolve to.
If you are only using ncdns locally you can ignore this.
'';
};
dnssec.enable = mkEnableOption ''
DNSSEC support in ncdns. This will generate KSK and ZSK keypairs
(unless provided via the options
<option>services.ncdns.dnssec.publicKey</option>,
<option>services.ncdns.dnssec.privateKey</option> etc.) and add a trust
anchor to recursive resolvers
'';
dnssec.keys.public = mkOption {
type = types.path;
default = defaultFiles.public;
description = ''
Path to the file containing the KSK public key.
The key can be generated using the <literal>dnssec-keygen</literal>
command, provided by the package <package>bind</package> as follows:
<programlisting>
$ dnssec-keygen -a RSASHA256 -3 -b 2048 -f KSK bit
</programlisting>
'';
};
dnssec.keys.private = mkOption {
type = types.path;
default = defaultFiles.private;
description = ''
Path to the file containing the KSK private key.
'';
};
dnssec.keys.zonePublic = mkOption {
type = types.path;
default = defaultFiles.zonePublic;
description = ''
Path to the file containing the ZSK public key.
The key can be generated using the <literal>dnssec-keygen</literal>
command, provided by the package <package>bind</package> as follows:
<programlisting>
$ dnssec-keygen -a RSASHA256 -3 -b 2048 bit
</programlisting>
'';
};
dnssec.keys.zonePrivate = mkOption {
type = types.path;
default = defaultFiles.zonePrivate;
description = ''
Path to the file containing the ZSK private key.
'';
};
settings = mkOption {
type = configType;
default = { };
example = literalExample ''
{ # enable webserver
ncdns.httplistenaddr = ":8202";
# synchronize TLS certs
certstore.nss = true;
# note: all paths are relative to the config file
certstore.nsscertdir = "../../var/lib/ncdns";
certstore.nssdbdir = "../../home/alice/.pki/nssdb";
}
'';
description = ''
ncdns settings. Use this option to configure ncdns
settings not exposed in a NixOS option or to bypass one.
See the example ncdns.conf file at <link xlink:href="
https://git.io/JfX7g"/> for the available options.
'';
};
};
services.pdns-recursor.resolveNamecoin = mkOption {
type = types.bool;
default = false;
description = ''
Resolve <literal>.bit</literal> top-level domains using ncdns and namecoin.
'';
};
};
###### implementation
config = mkIf cfg.enable {
services.pdns-recursor = mkIf cfgs.pdns-recursor.resolveNamecoin {
forwardZonesRecurse.bit = "127.0.0.1:${toString cfg.port}";
luaConfig =
if cfg.dnssec.enable
then ''readTrustAnchorsFromFile("${cfg.dnssec.keys.public}")''
else ''addNTA("bit", "namecoin DNSSEC disabled")'';
};
# Avoid pdns-recursor not finding the DNSSEC keys
systemd.services.pdns-recursor = mkIf cfgs.pdns-recursor.resolveNamecoin {
after = [ "ncdns.service" ];
wants = [ "ncdns.service" ];
};
services.ncdns.settings = mkDefaultAttrs {
ncdns =
{ # Namecoin RPC
namecoinrpcaddress =
"${cfgs.namecoind.rpc.address}:${toString cfgs.namecoind.rpc.port}";
namecoinrpcusername = cfgs.namecoind.rpc.user;
namecoinrpcpassword = cfgs.namecoind.rpc.password;
# Identity
selfname = cfg.identity.hostname;
hostmaster = cfg.identity.hostmaster;
selfip = cfg.identity.address;
# Other
bind = "${cfg.address}:${toString cfg.port}";
}
// optionalAttrs cfg.dnssec.enable
{ # DNSSEC
publickey = "../.." + cfg.dnssec.keys.public;
privatekey = "../.." + cfg.dnssec.keys.private;
zonepublickey = "../.." + cfg.dnssec.keys.zonePublic;
zoneprivatekey = "../.." + cfg.dnssec.keys.zonePrivate;
};
# Daemon
service.daemon = true;
xlog.journal = true;
};
users.users.ncdns =
{ description = "ncdns daemon user"; };
systemd.services.ncdns = {
description = "ncdns daemon";
after = [ "namecoind.service" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
User = "ncdns";
StateDirectory = "ncdns";
Restart = "on-failure";
ExecStart = "${pkgs.ncdns}/bin/ncdns -conf=${configFile}";
};
preStart = optionalString (cfg.dnssec.enable && needsKeygen) ''
cd ${dataDir}
if [ ! -e bit.key ]; then
${pkgs.bind}/bin/dnssec-keygen -a RSASHA256 -3 -b 2048 bit
mv Kbit.*.key bit-zone.key
mv Kbit.*.private bit-zone.private
${pkgs.bind}/bin/dnssec-keygen -a RSASHA256 -3 -b 2048 -f KSK bit
mv Kbit.*.key bit.key
mv Kbit.*.private bit.private
fi
'';
};
};
meta.maintainers = with lib.maintainers; [ rnhmjoj ];
}
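A sketch wiring the pieces together as the module intends: namecoind provides the blockchain data, ncdns serves the bit zone, and pdns-recursor forwards .bit queries to it (namecoind RPC credentials are assumed to be configured elsewhere):

  services.namecoind.enable = true;
  services.ncdns = {
    enable = true;
    dnssec.enable = true;
  };
  services.pdns-recursor = {
    enable = true;
    resolveNamecoin = true;
  };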

View file

@ -11,8 +11,6 @@ let
# build nsd with the options needed for the given config # build nsd with the options needed for the given config
nsdPkg = pkgs.nsd.override { nsdPkg = pkgs.nsd.override {
configFile = "${configFile}/nsd.conf";
bind8Stats = cfg.bind8Stats; bind8Stats = cfg.bind8Stats;
ipv6 = cfg.ipv6; ipv6 = cfg.ipv6;
ratelimit = cfg.ratelimit.enable; ratelimit = cfg.ratelimit.enable;
@ -897,7 +895,10 @@ in
+ "want, please enable 'services.nsd.rootServer'."; + "want, please enable 'services.nsd.rootServer'.";
}; };
environment.systemPackages = [ nsdPkg ]; environment = {
systemPackages = [ nsdPkg ];
etc."nsd/nsd.conf".source = "${configFile}/nsd.conf";
};
users.groups.${username}.gid = config.ids.gids.nsd; users.groups.${username}.gid = config.ids.gids.nsd;

View file

@ -0,0 +1,72 @@
{ config, lib, pkgs, ... }:
let
cfg = config.services.onedrive;
onedriveLauncher = pkgs.writeShellScriptBin
"onedrive-launcher"
''
# XDG_CONFIG_HOME is not recognized in the environment here.
if [ -f $HOME/.config/onedrive-launcher ]
then
# Hopefully using underscore boundary helps locate variables
for _onedrive_config_dirname_ in $(cat $HOME/.config/onedrive-launcher | grep -v '[ \t]*#' )
do
systemctl --user start onedrive@$_onedrive_config_dirname_
done
else
systemctl --user start onedrive@onedrive
fi
''
;
in {
### Documentation
# meta.doc = ./onedrive.xml;
### Interface
options.services.onedrive = {
enable = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Enable OneDrive service";
};
package = lib.mkOption {
type = lib.types.package;
default = pkgs.onedrive;
defaultText = "pkgs.onedrive";
example = lib.literalExample "pkgs.onedrive";
description = ''
OneDrive package to use.
'';
};
};
### Implementation
config = lib.mkIf cfg.enable {
environment.systemPackages = [ cfg.package ];
systemd.user.services."onedrive@" = {
description = "Onedrive sync service";
serviceConfig = {
Type = "simple";
ExecStart = ''
${cfg.package}/bin/onedrive --monitor --verbose --confdir=%h/.config/%i
'';
Restart="on-failure";
RestartSec=3;
RestartPreventExitStatus=3;
};
};
systemd.user.services.onedrive-launcher = {
wantedBy = [ "default.target" ];
serviceConfig = {
Type = "oneshot";
ExecStart = "${onedriveLauncher}/bin/onedrive-launcher";
};
};
};
}

View file

@ -0,0 +1,34 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="onedrive">
<title>Microsoft OneDrive</title>
<para>
Microsoft OneDrive is a popular cloud file-hosting service, used by 85% of Fortune 500 companies. NixOS uses a popular OneDrive client for Linux maintained by GitHub user abraunegg. The Linux client is excellent and, much like the default Windows OneDrive client by Microsoft itself, allows customization of which files or paths to download. The client can sync with multiple OneDrive accounts at the same time, of any type (OneDrive personal, OneDrive business, Office 365 and SharePoint libraries), without any additional charge.
</para>
<para>
For more information, guides and documentation, see <link xlink:href="https://abraunegg.github.io/"/>.
</para>
<para>
To enable OneDrive support, add the following to your <filename>configuration.nix</filename>:
<programlisting>
<xref linkend="opt-services.onedrive.enable"/> = true;
</programlisting>
This installs the <literal>onedrive</literal> package and a service <literal>onedriveLauncher</literal> which will instantiate a <literal>onedrive</literal> service for all your OneDrive accounts. Follow the steps in documentation of the onedrive client to setup your accounts. To use the service with multiple accounts, create a file named <filename>onedrive-launcher</filename> in <filename>~/.config</filename> and add the filename of the config directory, relative to <filename>~/.config</filename>. For example, if you have two OneDrive accounts with configs in <filename>~/.config/onedrive_bob_work</filename> and <filename>~/.config/onedrive_bob_personal</filename>, add the following lines:
<programlisting>
onedrive_bob_work
# Not in use:
# onedrive_bob_office365
onedrive_bob_personal
</programlisting>
No such file needs to be created if you are using only a single OneDrive account with its config in the default location <filename>~/.config/onedrive</filename>: in the absence of <filename>~/.config/onedrive-launcher</filename>, only a single service is instantiated, with the default config path.
</para>
<para>
If you wish to use a custom OneDrive package, say from another channel, add the following line:
<programlisting>
<xref linkend="opt-services.onedrive.package"/> = pkgs.unstable.onedrive;
</programlisting>
</para>
</chapter>

View file

@ -8,8 +8,10 @@ let
confFile = pkgs.writeText "radicale.conf" cfg.config; confFile = pkgs.writeText "radicale.conf" cfg.config;
# This enables us to default to version 2 while still not breaking configurations of people with version 1 defaultPackage = if versionAtLeast config.system.stateVersion "20.09" then {
defaultPackage = if versionAtLeast config.system.stateVersion "17.09" then { pkg = pkgs.radicale3;
text = "pkgs.radicale3";
} else if versionAtLeast config.system.stateVersion "17.09" then {
pkg = pkgs.radicale2; pkg = pkgs.radicale2;
text = "pkgs.radicale2"; text = "pkgs.radicale2";
} else { } else {
@ -35,8 +37,9 @@ in
defaultText = defaultPackage.text; defaultText = defaultPackage.text;
description = '' description = ''
Radicale package to use. This defaults to version 1.x if Radicale package to use. This defaults to version 1.x if
<literal>system.stateVersion &lt; 17.09</literal> and version 2.x <literal>system.stateVersion &lt; 17.09</literal>, version 2.x if
otherwise. <literal>17.09 system.stateVersion &lt; 20.09</literal>, and
version 3.x otherwise.
''; '';
}; };

View file

@ -15,7 +15,11 @@ let
listen: listen:
( (
{ host: "${cfg.listenAddress}"; port: "${toString cfg.port}"; } ${
concatMapStringsSep ",\n"
(addr: ''{ host: "${addr}"; port: "${toString cfg.port}"; }'')
cfg.listenAddresses
}
); );
${cfg.appendConfig} ${cfg.appendConfig}
@ -33,6 +37,10 @@ let
''; '';
in in
{ {
imports = [
(mkRenamedOptionModule [ "services" "sslh" "listenAddress" ] [ "services" "sslh" "listenAddresses" ])
];
options = { options = {
services.sslh = { services.sslh = {
enable = mkEnableOption "sslh"; enable = mkEnableOption "sslh";
@ -55,10 +63,10 @@ in
description = "Will the services behind sslh (Apache, sshd and so on) see the external IP and ports as if the external world connected directly to them"; description = "Will the services behind sslh (Apache, sshd and so on) see the external IP and ports as if the external world connected directly to them";
}; };
listenAddress = mkOption { listenAddresses = mkOption {
type = types.str; type = types.coercedTo types.str singleton (types.listOf types.str);
default = "0.0.0.0"; default = [ "0.0.0.0" "[::]" ];
description = "Listening address or hostname."; description = "Listening addresses or hostnames.";
}; };
port = mkOption { port = mkOption {
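After the rename, listenAddresses takes a list (a plain string is still coerced to a single-element list); a sketch with placeholder addresses:

  services.sslh = {
    enable = true;
    listenAddresses = [ "192.0.2.1" "[2001:db8::1]" ];
    port = 443;
  };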

View file

@ -162,6 +162,8 @@ in
unitConfig.RequiresMountsFor = stateDir; unitConfig.RequiresMountsFor = stateDir;
# This a HACK to fix missing dependencies of dynamic libs extracted from jars # This a HACK to fix missing dependencies of dynamic libs extracted from jars
environment.LD_LIBRARY_PATH = with pkgs.stdenv; "${cc.cc.lib}/lib"; environment.LD_LIBRARY_PATH = with pkgs.stdenv; "${cc.cc.lib}/lib";
# Make sure package upgrades trigger a service restart
restartTriggers = [ cfg.unifiPackage cfg.mongodbPackage ];
serviceConfig = { serviceConfig = {
Type = "simple"; Type = "simple";

View file

@ -122,7 +122,7 @@ in
ExecStart = '' ExecStart = ''
${cfg.package}/bin/xandikos \ ${cfg.package}/bin/xandikos \
--directory /var/lib/xandikos \ --directory /var/lib/xandikos \
--listen_address ${cfg.address} \ --listen-address ${cfg.address} \
--port ${toString cfg.port} \ --port ${toString cfg.port} \
--route-prefix ${cfg.routePrefix} \ --route-prefix ${cfg.routePrefix} \
${lib.concatStringsSep " " cfg.extraOptions} ${lib.concatStringsSep " " cfg.extraOptions}

View file

@ -4,12 +4,21 @@ with lib;
let let
cfg = config.services.nginx.sso; cfg = config.services.nginx.sso;
pkg = getBin pkgs.nginx-sso; pkg = getBin cfg.package;
configYml = pkgs.writeText "nginx-sso.yml" (builtins.toJSON cfg.configuration); configYml = pkgs.writeText "nginx-sso.yml" (builtins.toJSON cfg.configuration);
in { in {
options.services.nginx.sso = { options.services.nginx.sso = {
enable = mkEnableOption "nginx-sso service"; enable = mkEnableOption "nginx-sso service";
package = mkOption {
type = types.package;
default = pkgs.nginx-sso;
defaultText = "pkgs.nginx-sso";
description = ''
The nginx-sso package that should be used.
'';
};
configuration = mkOption { configuration = mkOption {
type = types.attrsOf types.unspecified; type = types.attrsOf types.unspecified;
default = {}; default = {};

View file

@ -159,7 +159,7 @@ in
type = types.bool; type = types.bool;
default = false; default = false;
description = '' description = ''
Wheter to enable Tor control socket. Control socket is created Whether to enable Tor control socket. Control socket is created
in <literal>${torRunDirectory}/control</literal> in <literal>${torRunDirectory}/control</literal>
''; '';
}; };

View file

@ -93,7 +93,7 @@ in
type = types.bool; type = types.bool;
default = true; default = true;
description = '' description = ''
Wheter to enable HSTS if HTTPS is also enabled. Whether to enable HSTS if HTTPS is also enabled.
''; '';
}; };
maxAgeSeconds = mkOption { maxAgeSeconds = mkOption {
@ -385,7 +385,7 @@ in
type = types.bool; type = types.bool;
default = true; default = true;
description = '' description = ''
Wether to enable email registration. Whether to enable email registration.
''; '';
}; };
allowGravatar = mkOption { allowGravatar = mkOption {

View file

@ -0,0 +1,72 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.convos;
in
{
options.services.convos = {
enable = mkEnableOption "Convos";
listenPort = mkOption {
type = types.port;
default = 3000;
example = 8080;
description = "Port the web interface should listen on";
};
listenAddress = mkOption {
type = types.str;
default = "*";
example = "127.0.0.1";
description = "Address or host the web interface should listen on";
};
reverseProxy = mkOption {
type = types.bool;
default = false;
description = ''
Enables reverse proxy support. This will allow Convos to automatically
pick up the <literal>X-Forwarded-For</literal> and
<literal>X-Request-Base</literal> HTTP headers set in your reverse proxy
web server. Note that enabling this option without a reverse proxy in
front will be a security issue.
'';
};
};
config = mkIf cfg.enable {
systemd.services.convos = {
description = "Convos Service";
wantedBy = [ "multi-user.target" ];
after = [ "networking.target" ];
environment = {
CONVOS_HOME = "%S/convos";
CONVOS_REVERSE_PROXY = if cfg.reverseProxy then "1" else "0";
MOJO_LISTEN = "http://${toString cfg.listenAddress}:${toString cfg.listenPort}";
};
serviceConfig = {
ExecStart = "${pkgs.convos}/bin/convos daemon";
Restart = "on-failure";
StateDirectory = "convos";
WorkingDirectory = "%S/convos";
DynamicUser = true;
MemoryDenyWriteExecute = true;
ProtectHome = true;
ProtectClock = true;
ProtectHostname = true;
ProtectKernelTunables = true;
ProtectKernelModules = true;
ProtectKernelLogs = true;
ProtectControlGroups = true;
PrivateDevices = true;
PrivateMounts = true;
PrivateUsers = true;
LockPersonality = true;
RestrictRealtime = true;
RestrictNamespaces = true;
RestrictAddressFamilies = [ "AF_INET" "AF_INET6"];
SystemCallFilter = "@system-service";
SystemCallArchitectures = "native";
CapabilityBoundingSet = "";
};
};
};
}
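A sketch of the new Convos module, binding only to localhost on the assumption that a reverse proxy terminates TLS in front of it:

  services.convos = {
    enable = true;
    listenAddress = "127.0.0.1";
    listenPort = 3000;
    # only safe when a reverse proxy is actually in front
    reverseProxy = true;
  };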

View file

@ -40,7 +40,7 @@ let
$CFG->disableupdateautodeploy = true; $CFG->disableupdateautodeploy = true;
$CFG->pathtogs = '${pkgs.ghostscript}/bin/gs'; $CFG->pathtogs = '${pkgs.ghostscript}/bin/gs';
$CFG->pathtophp = '${pkgs.php}/bin/php'; $CFG->pathtophp = '${phpExt}/bin/php';
$CFG->pathtodu = '${pkgs.coreutils}/bin/du'; $CFG->pathtodu = '${pkgs.coreutils}/bin/du';
$CFG->aspellpath = '${pkgs.aspell}/bin/aspell'; $CFG->aspellpath = '${pkgs.aspell}/bin/aspell';
$CFG->pathtodot = '${pkgs.graphviz}/bin/dot'; $CFG->pathtodot = '${pkgs.graphviz}/bin/dot';
@ -55,6 +55,9 @@ let
mysqlLocal = cfg.database.createLocally && cfg.database.type == "mysql"; mysqlLocal = cfg.database.createLocally && cfg.database.type == "mysql";
pgsqlLocal = cfg.database.createLocally && cfg.database.type == "pgsql"; pgsqlLocal = cfg.database.createLocally && cfg.database.type == "pgsql";
phpExt = pkgs.php.withExtensions
({ enabled, all }: with all; [ iconv mbstring curl openssl tokenizer xmlrpc soap ctype zip gd simplexml dom intl json sqlite3 pgsql pdo_sqlite pdo_pgsql pdo_odbc pdo_mysql pdo mysqli session zlib xmlreader fileinfo ]);
in in
{ {
# interface # interface
@ -222,6 +225,7 @@ in
services.phpfpm.pools.moodle = { services.phpfpm.pools.moodle = {
inherit user group; inherit user group;
phpPackage = phpExt;
phpEnv.MOODLE_CONFIG = "${moodleConfig}"; phpEnv.MOODLE_CONFIG = "${moodleConfig}";
phpOptions = '' phpOptions = ''
zend_extension = opcache.so zend_extension = opcache.so
@ -263,13 +267,13 @@ in
after = optional mysqlLocal "mysql.service" ++ optional pgsqlLocal "postgresql.service"; after = optional mysqlLocal "mysql.service" ++ optional pgsqlLocal "postgresql.service";
environment.MOODLE_CONFIG = moodleConfig; environment.MOODLE_CONFIG = moodleConfig;
script = '' script = ''
${pkgs.php}/bin/php ${cfg.package}/share/moodle/admin/cli/check_database_schema.php && rc=$? || rc=$? ${phpExt}/bin/php ${cfg.package}/share/moodle/admin/cli/check_database_schema.php && rc=$? || rc=$?
[ "$rc" == 1 ] && ${pkgs.php}/bin/php ${cfg.package}/share/moodle/admin/cli/upgrade.php \ [ "$rc" == 1 ] && ${phpExt}/bin/php ${cfg.package}/share/moodle/admin/cli/upgrade.php \
--non-interactive \ --non-interactive \
--allow-unstable --allow-unstable
[ "$rc" == 2 ] && ${pkgs.php}/bin/php ${cfg.package}/share/moodle/admin/cli/install_database.php \ [ "$rc" == 2 ] && ${phpExt}/bin/php ${cfg.package}/share/moodle/admin/cli/install_database.php \
--agree-license \ --agree-license \
--adminpass=${cfg.initialPassword} --adminpass=${cfg.initialPassword}
@ -289,7 +293,7 @@ in
serviceConfig = { serviceConfig = {
User = user; User = user;
Group = group; Group = group;
ExecStart = "${pkgs.php}/bin/php ${cfg.package}/share/moodle/admin/cli/cron.php"; ExecStart = "${phpExt}/bin/php ${cfg.package}/share/moodle/admin/cli/cron.php";
}; };
}; };
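For context, a minimal sketch of the pkgs.php.withExtensions pattern used above, outside of the Moodle module; the chosen extension is only an example, and <nixpkgs> is assumed to be available:

  let
    pkgs = import <nixpkgs> { };
    # build a PHP that adds the gd extension on top of the default extension set
    phpWithGd = pkgs.php.withExtensions ({ enabled, all }: enabled ++ [ all.gd ]);
  in phpWithGd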

View file

@ -708,6 +708,7 @@ in
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
wants = concatLists (map (hostOpts: [ "acme-${hostOpts.hostName}.service" "acme-selfsigned-${hostOpts.hostName}.service" ]) vhostsACME); wants = concatLists (map (hostOpts: [ "acme-${hostOpts.hostName}.service" "acme-selfsigned-${hostOpts.hostName}.service" ]) vhostsACME);
after = [ "network.target" "fs.target" ] ++ map (hostOpts: "acme-selfsigned-${hostOpts.hostName}.service") vhostsACME; after = [ "network.target" "fs.target" ] ++ map (hostOpts: "acme-selfsigned-${hostOpts.hostName}.service") vhostsACME;
before = map (hostOpts: "acme-${hostOpts.hostName}.service") vhostsACME;
path = [ pkg pkgs.coreutils pkgs.gnugrep ]; path = [ pkg pkgs.coreutils pkgs.gnugrep ];

View file

@ -693,6 +693,10 @@ in
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
wants = concatLists (map (vhostConfig: ["acme-${vhostConfig.serverName}.service" "acme-selfsigned-${vhostConfig.serverName}.service"]) acmeEnabledVhosts); wants = concatLists (map (vhostConfig: ["acme-${vhostConfig.serverName}.service" "acme-selfsigned-${vhostConfig.serverName}.service"]) acmeEnabledVhosts);
after = [ "network.target" ] ++ map (vhostConfig: "acme-selfsigned-${vhostConfig.serverName}.service") acmeEnabledVhosts; after = [ "network.target" ] ++ map (vhostConfig: "acme-selfsigned-${vhostConfig.serverName}.service") acmeEnabledVhosts;
# Nginx needs to be started in order to be able to request certificates
# (it's hosting the acme challenge after all)
# This fixes https://github.com/NixOS/nixpkgs/issues/81842
before = map (vhostConfig: "acme-${vhostConfig.serverName}.service") acmeEnabledVhosts;
stopIfChanged = false; stopIfChanged = false;
preStart = '' preStart = ''
${cfg.preStart} ${cfg.preStart}
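For context, a minimal sketch of an ACME-enabled virtual host that relies on this ordering (nginx has to be running to serve the challenge before the acme unit starts); the domain, webroot and e-mail address are placeholders:

  {
    services.nginx = {
      enable = true;
      virtualHosts."example.org" = {
        enableACME = true;
        forceSSL = true;
        root = "/var/www/example.org";
      };
    };
    security.acme.acceptTerms = true;
    security.acme.email = "admin@example.org";
  }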

View file

@ -20,10 +20,10 @@ let
in valueType; in valueType;
dynamicConfigFile = if cfg.dynamicConfigFile == null then dynamicConfigFile = if cfg.dynamicConfigFile == null then
pkgs.runCommand "config.toml" { pkgs.runCommand "config.toml" {
buildInputs = [ pkgs.yj ]; buildInputs = [ pkgs.remarshal ];
preferLocalBuild = true; preferLocalBuild = true;
} '' } ''
yj -jt -i \ remarshal -if json -of toml \
< ${ < ${
pkgs.writeText "dynamic_config.json" pkgs.writeText "dynamic_config.json"
(builtins.toJSON cfg.dynamicConfigOptions) (builtins.toJSON cfg.dynamicConfigOptions)
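A minimal sketch of how dynamicConfigOptions might be set; the attribute names follow Traefik v2's dynamic-configuration schema, and the router shown is only an example:

  {
    services.traefik = {
      enable = true;
      dynamicConfigOptions = {
        # expose the built-in dashboard on a host rule (placeholder domain)
        http.routers.dashboard = {
          rule = "Host(`traefik.example.org`)";
          service = "api@internal";
        };
      };
    };
  }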

View file

@ -321,7 +321,7 @@ in
fonts.fonts = with pkgs; [ noto-fonts hack-font ]; fonts.fonts = with pkgs; [ noto-fonts hack-font ];
fonts.fontconfig.defaultFonts = { fonts.fontconfig.defaultFonts = {
monospace = [ "Hack" "Noto Mono" ]; monospace = [ "Hack" "Noto Sans Mono" ];
sansSerif = [ "Noto Sans" ]; sansSerif = [ "Noto Sans" ];
serif = [ "Noto Serif" ]; serif = [ "Noto Serif" ];
}; };

View file

@ -332,12 +332,45 @@ in
}; };
# Configuration for automatic login. Common for all DM.
autoLogin = mkOption {
type = types.submodule {
options = {
enable = mkOption {
type = types.bool;
default = cfg.displayManager.autoLogin.user != null;
description = ''
Automatically log in as <option>autoLogin.user</option>.
'';
};
user = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
User to be used for the automatic login.
'';
};
};
};
default = {};
description = ''
Auto login configuration attrset.
'';
};
}; };
}; };
config = { config = {
assertions = [ assertions = [
{ assertion = cfg.displayManager.autoLogin.enable -> cfg.displayManager.autoLogin.user != null;
message = ''
services.xserver.displayManager.autoLogin.enable requires services.xserver.displayManager.autoLogin.user to be set
'';
}
{ {
assertion = cfg.desktopManager.default != null || cfg.windowManager.default != null -> cfg.displayManager.defaultSession == defaultSessionFromLegacyOptions; assertion = cfg.desktopManager.default != null || cfg.windowManager.default != null -> cfg.displayManager.defaultSession == defaultSessionFromLegacyOptions;
message = "You cannot use both services.xserver.displayManager.defaultSession option and legacy options (services.xserver.desktopManager.default and services.xserver.windowManager.default)."; message = "You cannot use both services.xserver.displayManager.defaultSession option and legacy options (services.xserver.desktopManager.default and services.xserver.windowManager.default).";
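A minimal sketch of the new display-manager-agnostic options (the user name is a placeholder):

  {
    services.xserver.enable = true;
    services.xserver.displayManager.autoLogin = {
      enable = true;    # defaults to true whenever a user is set
      user = "alice";
    };
  }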

View file

@ -37,6 +37,22 @@ let
in in
{ {
imports = [
(mkRenamedOptionModule [ "services" "xserver" "displayManager" "gdm" "autoLogin" "enable" ] [
"services"
"xserver"
"displayManager"
"autoLogin"
"enable"
])
(mkRenamedOptionModule [ "services" "xserver" "displayManager" "gdm" "autoLogin" "user" ] [
"services"
"xserver"
"displayManager"
"autoLogin"
"user"
])
];
meta = { meta = {
maintainers = teams.gnome.members; maintainers = teams.gnome.members;
@ -56,40 +72,13 @@ in
debugging messages in GDM debugging messages in GDM
''; '';
autoLogin = mkOption { # Auto login options specific to GDM
default = {}; autoLogin.delay = mkOption {
type = types.int;
default = 0;
description = '' description = ''
Auto login configuration attrset. Seconds of inactivity after which the autologin will be performed.
''; '';
type = types.submodule {
options = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Automatically log in as the specified <option>autoLogin.user</option>.
'';
};
user = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
User to be used for the autologin.
'';
};
delay = mkOption {
type = types.int;
default = 0;
description = ''
Seconds of inactivity after which the autologin will be performed.
'';
};
};
};
}; };
wayland = mkOption { wayland = mkOption {
@ -128,12 +117,6 @@ in
config = mkIf cfg.gdm.enable { config = mkIf cfg.gdm.enable {
assertions = [
{ assertion = cfg.gdm.autoLogin.enable -> cfg.gdm.autoLogin.user != null;
message = "GDM auto-login requires services.xserver.displayManager.gdm.autoLogin.user to be set";
}
];
services.xserver.displayManager.lightdm.enable = false; services.xserver.displayManager.lightdm.enable = false;
users.users.gdm = users.users.gdm =
@ -287,14 +270,14 @@ in
environment.etc."gdm/custom.conf".text = '' environment.etc."gdm/custom.conf".text = ''
[daemon] [daemon]
WaylandEnable=${if cfg.gdm.wayland then "true" else "false"} WaylandEnable=${if cfg.gdm.wayland then "true" else "false"}
${optionalString cfg.gdm.autoLogin.enable ( ${optionalString cfg.autoLogin.enable (
if cfg.gdm.autoLogin.delay > 0 then '' if cfg.gdm.autoLogin.delay > 0 then ''
TimedLoginEnable=true TimedLoginEnable=true
TimedLogin=${cfg.gdm.autoLogin.user} TimedLogin=${cfg.autoLogin.user}
TimedLoginDelay=${toString cfg.gdm.autoLogin.delay} TimedLoginDelay=${toString cfg.gdm.autoLogin.delay}
'' else '' '' else ''
AutomaticLoginEnable=true AutomaticLoginEnable=true
AutomaticLogin=${cfg.gdm.autoLogin.user} AutomaticLogin=${cfg.autoLogin.user}
'') '')
} }
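A minimal sketch combining the generic auto-login options with the delay setting that stays under the gdm namespace (the user name is a placeholder):

  {
    services.xserver.enable = true;
    services.xserver.displayManager.gdm.enable = true;
    services.xserver.displayManager.autoLogin = {
      enable = true;
      user = "alice";
    };
    # GDM-specific: perform a timed login after 5 seconds of inactivity
    services.xserver.displayManager.gdm.autoLogin.delay = 5;
  }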

View file

@ -43,7 +43,7 @@ in
services.xserver.displayManager.lightdm.extraSeatDefaults = "greeter-show-manual-login=true"; services.xserver.displayManager.lightdm.extraSeatDefaults = "greeter-show-manual-login=true";
environment.etc."lightdm/io.elementary.greeter.conf".source = "${pkgs.pantheon.elementary-greeter}/etc/lightdm/io.elementary.greeter.conf"; environment.etc."lightdm/io.elementary.greeter.conf".source = "${pkgs.pantheon.elementary-greeter}/etc/lightdm/io.elementary.greeter.conf";
environment.etc."wingpanel.d/io.elementary.greeter.whitelist".source = "${pkgs.pantheon.elementary-default-settings}/etc/wingpanel.d/io.elementary.greeter.whitelist"; environment.etc."wingpanel.d/io.elementary.greeter.allowed".source = "${pkgs.pantheon.elementary-default-settings}/etc/wingpanel.d/io.elementary.greeter.allowed";
}; };
} }

View file

@ -53,8 +53,8 @@ let
${optionalString cfg.greeter.enable '' ${optionalString cfg.greeter.enable ''
greeter-session = ${cfg.greeter.name} greeter-session = ${cfg.greeter.name}
''} ''}
${optionalString cfg.autoLogin.enable '' ${optionalString dmcfg.autoLogin.enable ''
autologin-user = ${cfg.autoLogin.user} autologin-user = ${dmcfg.autoLogin.user}
autologin-user-timeout = ${toString cfg.autoLogin.timeout} autologin-user-timeout = ${toString cfg.autoLogin.timeout}
autologin-session = ${sessionData.autologinSession} autologin-session = ${sessionData.autologinSession}
''} ''}
@ -82,6 +82,20 @@ in
./lightdm-greeters/enso-os.nix ./lightdm-greeters/enso-os.nix
./lightdm-greeters/pantheon.nix ./lightdm-greeters/pantheon.nix
./lightdm-greeters/tiny.nix ./lightdm-greeters/tiny.nix
(mkRenamedOptionModule [ "services" "xserver" "displayManager" "lightdm" "autoLogin" "enable" ] [
"services"
"xserver"
"displayManager"
"autoLogin"
"enable"
])
(mkRenamedOptionModule [ "services" "xserver" "displayManager" "lightdm" "autoLogin" "user" ] [
"services"
"xserver"
"displayManager"
"autoLogin"
"user"
])
]; ];
options = { options = {
@ -149,39 +163,13 @@ in
description = "Extra lines to append to SeatDefaults section."; description = "Extra lines to append to SeatDefaults section.";
}; };
autoLogin = mkOption { # Configuration for automatic login specific to LightDM
default = {}; autoLogin.timeout = mkOption {
type = types.int;
default = 0;
description = '' description = ''
Configuration for automatic login. Show the greeter for this many seconds before automatic login occurs.
''; '';
type = types.submodule {
options = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Automatically log in as the specified <option>autoLogin.user</option>.
'';
};
user = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
User to be used for the automatic login.
'';
};
timeout = mkOption {
type = types.int;
default = 0;
description = ''
Show the greeter for this many seconds before automatic login occurs.
'';
};
};
};
}; };
}; };
@ -195,17 +183,12 @@ in
LightDM requires services.xserver.enable to be true LightDM requires services.xserver.enable to be true
''; '';
} }
{ assertion = cfg.autoLogin.enable -> cfg.autoLogin.user != null; { assertion = dmcfg.autoLogin.enable -> sessionData.autologinSession != null;
message = ''
LightDM auto-login requires services.xserver.displayManager.lightdm.autoLogin.user to be set
'';
}
{ assertion = cfg.autoLogin.enable -> sessionData.autologinSession != null;
message = '' message = ''
LightDM auto-login requires that services.xserver.displayManager.defaultSession is set. LightDM auto-login requires that services.xserver.displayManager.defaultSession is set.
''; '';
} }
{ assertion = !cfg.greeter.enable -> (cfg.autoLogin.enable && cfg.autoLogin.timeout == 0); { assertion = !cfg.greeter.enable -> (dmcfg.autoLogin.enable && cfg.autoLogin.timeout == 0);
message = '' message = ''
LightDM can only run without greeter if automatic login is enabled and the timeout for it LightDM can only run without greeter if automatic login is enabled and the timeout for it
is set to zero. is set to zero.
@ -218,7 +201,7 @@ in
# Set default session in session chooser to a specified values basically ignore session history. # Set default session in session chooser to a specified values basically ignore session history.
# Auto-login is already covered by a config value. # Auto-login is already covered by a config value.
services.xserver.displayManager.job.preStart = optionalString (!cfg.autoLogin.enable && dmcfg.defaultSession != null) '' services.xserver.displayManager.job.preStart = optionalString (!dmcfg.autoLogin.enable && dmcfg.defaultSession != null) ''
${setSessionScript}/bin/set-session ${dmcfg.defaultSession} ${setSessionScript}/bin/set-session ${dmcfg.defaultSession}
''; '';

View file

@ -61,9 +61,9 @@ let
EnableHidpi=${if cfg.enableHidpi then "true" else "false"} EnableHidpi=${if cfg.enableHidpi then "true" else "false"}
SessionDir=${dmcfg.sessionData.desktops}/share/wayland-sessions SessionDir=${dmcfg.sessionData.desktops}/share/wayland-sessions
${optionalString cfg.autoLogin.enable '' ${optionalString dmcfg.autoLogin.enable ''
[Autologin] [Autologin]
User=${cfg.autoLogin.user} User=${dmcfg.autoLogin.user}
Session=${autoLoginSessionName}.desktop Session=${autoLoginSessionName}.desktop
Relogin=${boolToString cfg.autoLogin.relogin} Relogin=${boolToString cfg.autoLogin.relogin}
''} ''}
@ -78,6 +78,20 @@ in
imports = [ imports = [
(mkRemovedOptionModule [ "services" "xserver" "displayManager" "sddm" "themes" ] (mkRemovedOptionModule [ "services" "xserver" "displayManager" "sddm" "themes" ]
"Set the option `services.xserver.displayManager.sddm.package' instead.") "Set the option `services.xserver.displayManager.sddm.package' instead.")
(mkRenamedOptionModule [ "services" "xserver" "displayManager" "sddm" "autoLogin" "enable" ] [
"services"
"xserver"
"displayManager"
"autoLogin"
"enable"
])
(mkRenamedOptionModule [ "services" "xserver" "displayManager" "sddm" "autoLogin" "user" ] [
"services"
"xserver"
"displayManager"
"autoLogin"
"user"
])
]; ];
options = { options = {
@ -153,40 +167,14 @@ in
''; '';
}; };
autoLogin = mkOption { # Configuration for automatic login specific to SDDM
default = {}; autoLogin.relogin = mkOption {
type = types.bool;
default = false;
description = '' description = ''
Configuration for automatic login. If true automatic login will kick in again on session exit (logout), otherwise it
will only log in automatically when the display-manager is started.
''; '';
type = types.submodule {
options = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Automatically log in as <option>autoLogin.user</option>.
'';
};
user = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
User to be used for the automatic login.
'';
};
relogin = mkOption {
type = types.bool;
default = false;
description = ''
If true automatic login will kick in again on session exit (logout), otherwise it
will only log in automatically when the display-manager is started.
'';
};
};
};
}; };
}; };
@ -201,12 +189,7 @@ in
SDDM requires services.xserver.enable to be true SDDM requires services.xserver.enable to be true
''; '';
} }
{ assertion = cfg.autoLogin.enable -> cfg.autoLogin.user != null; { assertion = dmcfg.autoLogin.enable -> autoLoginSessionName != null;
message = ''
SDDM auto-login requires services.xserver.displayManager.sddm.autoLogin.user to be set
'';
}
{ assertion = cfg.autoLogin.enable -> autoLoginSessionName != null;
message = '' message = ''
SDDM auto-login requires that services.xserver.displayManager.defaultSession is set. SDDM auto-login requires that services.xserver.displayManager.defaultSession is set.
''; '';

View file

@ -0,0 +1,81 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.boot.initrd.network.openvpn;
in
{
options = {
boot.initrd.network.openvpn.enable = mkOption {
type = types.bool;
default = false;
description = ''
Starts an OpenVPN client during initrd boot. It can be used, for example,
to remotely access the SSH service controlled by
<option>boot.initrd.network.ssh</option> or other included network
services. The service is stopped when stage-1 boot is finished.
'';
};
boot.initrd.network.openvpn.configuration = mkOption {
type = types.path; # Same type as boot.initrd.secrets
description = ''
The configuration file for OpenVPN.
<warning>
<para>
Unless your bootloader supports initrd secrets, this configuration
is stored insecurely in the global Nix store.
</para>
</warning>
'';
example = "./configuration.ovpn";
};
};
config = mkIf (config.boot.initrd.network.enable && cfg.enable) {
assertions = [
{
assertion = cfg.configuration != null;
message = "You should specify a configuration for initrd OpenVPN";
}
];
# Add kernel modules needed for OpenVPN
boot.initrd.kernelModules = [ "tun" "tap" ];
# Add openvpn and ip binaries to the initrd
# The shared libraries are required for DNS resolution
boot.initrd.extraUtilsCommands = ''
copy_bin_and_libs ${pkgs.openvpn}/bin/openvpn
copy_bin_and_libs ${pkgs.iproute}/bin/ip
cp -pv ${pkgs.glibc}/lib/libresolv.so.2 $out/lib
cp -pv ${pkgs.glibc}/lib/libnss_dns.so.2 $out/lib
'';
boot.initrd.secrets = {
"/etc/initrd.ovpn" = cfg.configuration;
};
# openvpn --version would exit with 1 instead of 0
boot.initrd.extraUtilsCommandsTest = ''
$out/bin/openvpn --show-gateway
'';
# Add `iproute /bin/ip` to the config, to ensure that openvpn
# is able to set the routes
boot.initrd.network.postCommands = ''
(cat /etc/initrd.ovpn; echo -e '\niproute /bin/ip') | \
openvpn /dev/stdin &
'';
};
}
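A minimal sketch of how this module might be enabled; the path to the OpenVPN profile is a placeholder:

  {
    boot.initrd.network.enable = true;
    boot.initrd.network.openvpn = {
      enable = true;
      configuration = ./initrd.ovpn;  # stored via boot.initrd.secrets as /etc/initrd.ovpn
    };
  }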

View file

@ -54,7 +54,7 @@ let
type = types.bool // { merge = mergeFalseByDefault; }; type = types.bool // { merge = mergeFalseByDefault; };
default = false; default = false;
description = '' description = ''
Wether option should generate a failure when unused. Whether option should generate a failure when unused.
Upon merging values, mandatory wins over optional. Upon merging values, mandatory wins over optional.
''; '';
}; };

View file

@ -4,11 +4,15 @@ with lib;
let let
blCfg = config.boot.loader; blCfg = config.boot.loader;
dtCfg = config.hardware.deviceTree;
cfg = blCfg.generic-extlinux-compatible; cfg = blCfg.generic-extlinux-compatible;
timeoutStr = if blCfg.timeout == null then "-1" else toString blCfg.timeout; timeoutStr = if blCfg.timeout == null then "-1" else toString blCfg.timeout;
# The builder used to write during system activation
builder = import ./extlinux-conf-builder.nix { inherit pkgs; }; builder = import ./extlinux-conf-builder.nix { inherit pkgs; };
# The builder exposed in populateCmd, which runs on the build architecture
populateBuilder = import ./extlinux-conf-builder.nix { pkgs = pkgs.buildPackages; };
in in
{ {
options = { options = {
@ -34,11 +38,28 @@ in
Maximum number of configurations in the boot menu. Maximum number of configurations in the boot menu.
''; '';
}; };
populateCmd = mkOption {
type = types.str;
readOnly = true;
description = ''
Contains the builder command used to populate an image,
honoring all options except the <literal>-c &lt;path-to-default-configuration&gt;</literal>
argument.
Useful to have for sdImage.populateRootCommands
'';
};
}; };
}; };
config = mkIf cfg.enable { config = let
system.build.installBootLoader = "${builder} -g ${toString cfg.configurationLimit} -t ${timeoutStr} -c"; builderArgs = "-g ${toString cfg.configurationLimit} -t ${timeoutStr}" + lib.optionalString (dtCfg.name != null) " -n ${dtCfg.name}";
system.boot.loader.id = "generic-extlinux-compatible"; in
}; mkIf cfg.enable {
system.build.installBootLoader = "${builder} ${builderArgs} -c";
system.boot.loader.id = "generic-extlinux-compatible";
boot.loader.generic-extlinux-compatible.populateCmd = "${populateBuilder} ${builderArgs}";
};
} }
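A sketch of how the read-only populateCmd might be consumed when building an SD image, assuming the sd-image module's sdImage.populateRootCommands option:

  { config, ... }: {
    sdImage.populateRootCommands = ''
      mkdir -p ./files/boot
      ${config.boot.loader.generic-extlinux-compatible.populateCmd} \
        -c ${config.system.build.toplevel} -d ./files/boot
    '';
  }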

View file

@ -6,7 +6,7 @@ export PATH=/empty
for i in @path@; do PATH=$PATH:$i/bin; done for i in @path@; do PATH=$PATH:$i/bin; done
usage() { usage() {
echo "usage: $0 -t <timeout> -c <path-to-default-configuration> [-d <boot-dir>] [-g <num-generations>]" >&2 echo "usage: $0 -t <timeout> -c <path-to-default-configuration> [-d <boot-dir>] [-g <num-generations>] [-n <dtbName>]" >&2
exit 1 exit 1
} }
@ -15,7 +15,7 @@ default= # Default configuration
target=/boot # Target directory target=/boot # Target directory
numGenerations=0 # Number of other generations to include in the menu numGenerations=0 # Number of other generations to include in the menu
while getopts "t:c:d:g:" opt; do while getopts "t:c:d:g:n:" opt; do
case "$opt" in case "$opt" in
t) # U-Boot interprets '0' as infinite and negative as instant boot t) # U-Boot interprets '0' as infinite and negative as instant boot
if [ "$OPTARG" -lt 0 ]; then if [ "$OPTARG" -lt 0 ]; then
@ -29,6 +29,7 @@ while getopts "t:c:d:g:" opt; do
c) default="$OPTARG" ;; c) default="$OPTARG" ;;
d) target="$OPTARG" ;; d) target="$OPTARG" ;;
g) numGenerations="$OPTARG" ;; g) numGenerations="$OPTARG" ;;
n) dtbName="$OPTARG" ;;
\?) usage ;; \?) usage ;;
esac esac
done done
@ -96,7 +97,17 @@ addEntry() {
echo " LINUX ../nixos/$(basename $kernel)" echo " LINUX ../nixos/$(basename $kernel)"
echo " INITRD ../nixos/$(basename $initrd)" echo " INITRD ../nixos/$(basename $initrd)"
if [ -d "$dtbDir" ]; then if [ -d "$dtbDir" ]; then
echo " FDTDIR ../nixos/$(basename $dtbs)" # if a dtbName was specified explicitly, use that, else use FDTDIR
if [ -n "$dtbName" ]; then
echo " FDT ../nixos/$(basename $dtbs)/${dtbName}"
else
echo " FDTDIR ../nixos/$(basename $dtbs)"
fi
else
if [ -n "$dtbName" ]; then
echo "Explicitly requested dtbName $dtbName, but there's no FDTDIR - bailing out." >&2
exit 1
fi
fi fi
echo " APPEND systemConfig=$path init=$path/init $extraParams" echo " APPEND systemConfig=$path init=$path/init $extraParams"
} }
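The new -n flag is fed from hardware.deviceTree.name in the module above; a minimal sketch (the dtb path is a placeholder):

  {
    boot.loader.generic-extlinux-compatible.enable = true;
    # select a single devicetree; the builder then emits FDT instead of FDTDIR
    hardware.deviceTree.name = "broadcom/bcm2711-rpi-4-b.dtb";
  }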

View file

@ -55,11 +55,13 @@ let
storePath = config.boot.loader.grub.storePath; storePath = config.boot.loader.grub.storePath;
bootloaderId = if args.efiBootloaderId == null then "NixOS${efiSysMountPoint'}" else args.efiBootloaderId; bootloaderId = if args.efiBootloaderId == null then "NixOS${efiSysMountPoint'}" else args.efiBootloaderId;
timeout = if config.boot.loader.timeout == null then -1 else config.boot.loader.timeout; timeout = if config.boot.loader.timeout == null then -1 else config.boot.loader.timeout;
users = if cfg.users == {} || cfg.version != 1 then cfg.users else throw "GRUB version 1 does not support user accounts.";
inherit efiSysMountPoint; inherit efiSysMountPoint;
inherit (args) devices; inherit (args) devices;
inherit (efi) canTouchEfiVariables; inherit (efi) canTouchEfiVariables;
inherit (cfg) inherit (cfg)
version extraConfig extraPerEntryConfig extraEntries forceInstall useOSProber version extraConfig extraPerEntryConfig extraEntries forceInstall useOSProber
extraGrubInstallArgs
extraEntriesBeforeNixOS extraPrepareConfig configurationLimit copyKernels extraEntriesBeforeNixOS extraPrepareConfig configurationLimit copyKernels
default fsIdentifier efiSupport efiInstallAsRemovable gfxmodeEfi gfxmodeBios gfxpayloadEfi gfxpayloadBios; default fsIdentifier efiSupport efiInstallAsRemovable gfxmodeEfi gfxmodeBios gfxpayloadEfi gfxpayloadBios;
path = with pkgs; makeBinPath ( path = with pkgs; makeBinPath (
@ -137,6 +139,67 @@ in
''; '';
}; };
users = mkOption {
default = {};
example = {
root = { hashedPasswordFile = "/path/to/file"; };
};
description = ''
User accounts for GRUB. When specified, the GRUB command line and
all boot options except the default are password-protected.
All passwords and hashes provided will be stored in /boot/grub/grub.cfg,
and will be visible to any local user who can read this file. Additionally,
any passwords and hashes provided directly in a Nix configuration
(as opposed to external files) will be copied into the Nix store, and
will be visible to all local users.
'';
type = with types; attrsOf (submodule {
options = {
hashedPasswordFile = mkOption {
example = "/path/to/file";
default = null;
type = with types; uniq (nullOr str);
description = ''
Specifies the path to a file containing the password hash
for the account, generated with grub-mkpasswd-pbkdf2.
This hash will be stored in /boot/grub/grub.cfg, and will
be visible to any local user who can read this file.
'';
};
hashedPassword = mkOption {
example = "grub.pbkdf2.sha512.10000.674DFFDEF76E13EA...2CC972B102CF4355";
default = null;
type = with types; uniq (nullOr str);
description = ''
Specifies the password hash for the account,
generated with grub-mkpasswd-pbkdf2.
This hash will be copied to the Nix store, and will be visible to all local users.
'';
};
passwordFile = mkOption {
example = "/path/to/file";
default = null;
type = with types; uniq (nullOr str);
description = ''
Specifies the path to a file containing the
clear text password for the account.
This password will be stored in /boot/grub/grub.cfg, and will
be visible to any local user who can read this file.
'';
};
password = mkOption {
example = "Pa$$w0rd!";
default = null;
type = with types; uniq (nullOr str);
description = ''
Specifies the clear text password for the account.
This password will be copied to the Nix store, and will be visible to all local users.
'';
};
};
});
};
mirroredBoots = mkOption { mirroredBoots = mkOption {
default = [ ]; default = [ ];
example = [ example = [
@ -236,6 +299,33 @@ in
''; '';
}; };
extraGrubInstallArgs = mkOption {
default = [ ];
example = [ "--modules=nativedisk ahci pata part_gpt part_msdos diskfilter mdraid1x lvm ext2" ];
type = types.listOf types.str;
description = ''
Additional arguments passed to <literal>grub-install</literal>.
A use case for this is to build specific GRUB2 modules
directly into the GRUB2 kernel image, so that they are available
and activated even in the <literal>grub rescue</literal> shell.
They are also necessary when the BIOS/UEFI is buggy and cannot
correctly read large disks (e.g. above 2 TB), in which case GRUB2's own
<literal>nativedisk</literal> and related modules can be used to
provide its own disk drivers. The example shows one such case.
This is also useful for booting from USB.
See the
<link xlink:href="http://git.savannah.gnu.org/cgit/grub.git/tree/grub-core/commands/nativedisk.c?h=grub-2.04#n326">
GRUB source code
</link>
for which disk modules are available.
The list elements are passed directly as <literal>argv</literal>
arguments to the <literal>grub-install</literal> program, in order.
'';
};
extraPerEntryConfig = mkOption { extraPerEntryConfig = mkOption {
default = ""; default = "";
example = "root (hd0)"; example = "root (hd0)";
@ -607,7 +697,7 @@ in
in pkgs.writeScript "install-grub.sh" ('' in pkgs.writeScript "install-grub.sh" (''
#!${pkgs.runtimeShell} #!${pkgs.runtimeShell}
set -e set -e
export PERL5LIB=${with pkgs.perlPackages; makePerlPath [ FileSlurp XMLLibXML XMLSAX XMLSAXBase ListCompare ]} export PERL5LIB=${with pkgs.perlPackages; makePerlPath [ FileSlurp XMLLibXML XMLSAX XMLSAXBase ListCompare JSON ]}
${optionalString cfg.enableCryptodisk "export GRUB_ENABLE_CRYPTODISK=y"} ${optionalString cfg.enableCryptodisk "export GRUB_ENABLE_CRYPTODISK=y"}
'' + flip concatMapStrings cfg.mirroredBoots (args: '' '' + flip concatMapStrings cfg.mirroredBoots (args: ''
${pkgs.perl}/bin/perl ${install-grub-pl} ${grubConfig args} $@ ${pkgs.perl}/bin/perl ${install-grub-pl} ${grubConfig args} $@
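A minimal sketch combining the two new options above; the device, hash-file path and module list are placeholders, and the hash file is assumed to have been generated with grub-mkpasswd-pbkdf2:

  {
    boot.loader.grub = {
      enable = true;
      version = 2;
      device = "/dev/sda";
      users.admin.hashedPasswordFile = "/etc/nixos/grub-admin.pbkdf2";
      extraGrubInstallArgs = [ "--modules=part_gpt ext2" ];
    };
  }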

View file

@ -8,6 +8,7 @@ use File::stat;
use File::Copy; use File::Copy;
use File::Slurp; use File::Slurp;
use File::Temp; use File::Temp;
use JSON;
require List::Compare; require List::Compare;
use POSIX; use POSIX;
use Cwd; use Cwd;
@ -20,6 +21,16 @@ my $dom = XML::LibXML->load_xml(location => $ARGV[0]);
sub get { my ($name) = @_; return $dom->findvalue("/expr/attrs/attr[\@name = '$name']/*/\@value"); } sub get { my ($name) = @_; return $dom->findvalue("/expr/attrs/attr[\@name = '$name']/*/\@value"); }
sub getList {
my ($name) = @_;
my @list = ();
foreach my $entry ($dom->findnodes("/expr/attrs/attr[\@name = '$name']/list/string/\@value")) {
$entry = $entry->findvalue(".") or die;
push(@list, $entry);
}
return @list;
}
sub readFile { sub readFile {
my ($fn) = @_; local $/ = undef; my ($fn) = @_; local $/ = undef;
open FILE, "<$fn" or return undef; my $s = <FILE>; close FILE; open FILE, "<$fn" or return undef; my $s = <FILE>; close FILE;
@ -241,12 +252,51 @@ if ($grubVersion == 1) {
timeout $timeout timeout $timeout
"; ";
if ($splashImage) { if ($splashImage) {
copy $splashImage, "$bootPath/background.xpm.gz" or die "cannot copy $splashImage to $bootPath\n"; copy $splashImage, "$bootPath/background.xpm.gz" or die "cannot copy $splashImage to $bootPath: $!\n";
$conf .= "splashimage " . ($grubBoot->path eq "/" ? "" : $grubBoot->path) . "/background.xpm.gz\n"; $conf .= "splashimage " . ($grubBoot->path eq "/" ? "" : $grubBoot->path) . "/background.xpm.gz\n";
} }
} }
else { else {
my @users = ();
foreach my $user ($dom->findnodes('/expr/attrs/attr[@name = "users"]/attrs/attr')) {
my $name = $user->findvalue('@name') or die;
my $hashedPassword = $user->findvalue('./attrs/attr[@name = "hashedPassword"]/string/@value');
my $hashedPasswordFile = $user->findvalue('./attrs/attr[@name = "hashedPasswordFile"]/string/@value');
my $password = $user->findvalue('./attrs/attr[@name = "password"]/string/@value');
my $passwordFile = $user->findvalue('./attrs/attr[@name = "passwordFile"]/string/@value');
if ($hashedPasswordFile) {
open(my $f, '<', $hashedPasswordFile) or die "Can't read file '$hashedPasswordFile'!";
$hashedPassword = <$f>;
chomp $hashedPassword;
}
if ($passwordFile) {
open(my $f, '<', $passwordFile) or die "Can't read file '$passwordFile'!";
$password = <$f>;
chomp $password;
}
if ($hashedPassword) {
if (index($hashedPassword, "grub.pbkdf2.") == 0) {
$conf .= "\npassword_pbkdf2 $name $hashedPassword";
}
else {
die "Password hash for GRUB user '$name' is not valid!";
}
}
elsif ($password) {
$conf .= "\npassword $name $password";
}
else {
die "GRUB user '$name' has no password!";
}
push(@users, $name);
}
if (@users) {
$conf .= "\nset superusers=\"" . join(' ',@users) . "\"\n";
}
if ($copyKernels == 0) { if ($copyKernels == 0) {
$conf .= " $conf .= "
" . $grubStore->search; " . $grubStore->search;
@ -280,7 +330,7 @@ else {
"; ";
if ($font) { if ($font) {
copy $font, "$bootPath/converted-font.pf2" or die "cannot copy $font to $bootPath\n"; copy $font, "$bootPath/converted-font.pf2" or die "cannot copy $font to $bootPath: $!\n";
$conf .= " $conf .= "
insmod font insmod font
if loadfont " . ($grubBoot->path eq "/" ? "" : $grubBoot->path) . "/converted-font.pf2; then if loadfont " . ($grubBoot->path eq "/" ? "" : $grubBoot->path) . "/converted-font.pf2; then
@ -308,7 +358,7 @@ else {
background_color '$backgroundColor' background_color '$backgroundColor'
"; ";
} }
copy $splashImage, "$bootPath/background$suffix" or die "cannot copy $splashImage to $bootPath\n"; copy $splashImage, "$bootPath/background$suffix" or die "cannot copy $splashImage to $bootPath: $!\n";
$conf .= " $conf .= "
insmod " . substr($suffix, 1) . " insmod " . substr($suffix, 1) . "
if background_image --mode '$splashMode' " . ($grubBoot->path eq "/" ? "" : $grubBoot->path) . "/background$suffix; then if background_image --mode '$splashMode' " . ($grubBoot->path eq "/" ? "" : $grubBoot->path) . "/background$suffix; then
@ -342,15 +392,15 @@ sub copyToKernelsDir {
# kernels or initrd if this script is ever interrupted. # kernels or initrd if this script is ever interrupted.
if (! -e $dst) { if (! -e $dst) {
my $tmp = "$dst.tmp"; my $tmp = "$dst.tmp";
copy $path, $tmp or die "cannot copy $path to $tmp\n"; copy $path, $tmp or die "cannot copy $path to $tmp: $!\n";
rename $tmp, $dst or die "cannot rename $tmp to $dst\n"; rename $tmp, $dst or die "cannot rename $tmp to $dst: $!\n";
} }
$copied{$dst} = 1; $copied{$dst} = 1;
return ($grubBoot->path eq "/" ? "" : $grubBoot->path) . "/kernels/$name"; return ($grubBoot->path eq "/" ? "" : $grubBoot->path) . "/kernels/$name";
} }
sub addEntry { sub addEntry {
my ($name, $path) = @_; my ($name, $path, $options) = @_;
return unless -e "$path/kernel" && -e "$path/initrd"; return unless -e "$path/kernel" && -e "$path/initrd";
my $kernel = copyToKernelsDir(Cwd::abs_path("$path/kernel")); my $kernel = copyToKernelsDir(Cwd::abs_path("$path/kernel"));
@ -366,10 +416,10 @@ sub addEntry {
# Make sure initrd is not world readable (won't work if /boot is FAT) # Make sure initrd is not world readable (won't work if /boot is FAT)
umask 0137; umask 0137;
my $initrdSecretsPathTemp = File::Temp::mktemp("$initrdSecretsPath.XXXXXXXX"); my $initrdSecretsPathTemp = File::Temp::mktemp("$initrdSecretsPath.XXXXXXXX");
system("$path/append-initrd-secrets", $initrdSecretsPathTemp) == 0 or die "failed to create initrd secrets\n"; system("$path/append-initrd-secrets", $initrdSecretsPathTemp) == 0 or die "failed to create initrd secrets: $!\n";
# Check whether any secrets were actually added # Check whether any secrets were actually added
if (-e $initrdSecretsPathTemp && ! -z _) { if (-e $initrdSecretsPathTemp && ! -z _) {
rename $initrdSecretsPathTemp, $initrdSecretsPath or die "failed to move initrd secrets into place\n"; rename $initrdSecretsPathTemp, $initrdSecretsPath or die "failed to move initrd secrets into place: $!\n";
$copied{$initrdSecretsPath} = 1; $copied{$initrdSecretsPath} = 1;
$initrd .= " " . ($grubBoot->path eq "/" ? "" : $grubBoot->path) . "/kernels/$initrdName-secrets"; $initrd .= " " . ($grubBoot->path eq "/" ? "" : $grubBoot->path) . "/kernels/$initrdName-secrets";
} else { } else {
@ -396,7 +446,7 @@ sub addEntry {
$conf .= " " . ($xen ? "module" : "kernel") . " $kernel $kernelParams\n"; $conf .= " " . ($xen ? "module" : "kernel") . " $kernel $kernelParams\n";
$conf .= " " . ($xen ? "module" : "initrd") . " $initrd\n\n"; $conf .= " " . ($xen ? "module" : "initrd") . " $initrd\n\n";
} else { } else {
$conf .= "menuentry \"$name\" {\n"; $conf .= "menuentry \"$name\" " . ($options||"") . " {\n";
$conf .= $grubBoot->search . "\n"; $conf .= $grubBoot->search . "\n";
if ($copyKernels == 0) { if ($copyKernels == 0) {
$conf .= $grubStore->search . "\n"; $conf .= $grubStore->search . "\n";
@ -413,7 +463,7 @@ sub addEntry {
# Add default entries. # Add default entries.
$conf .= "$extraEntries\n" if $extraEntriesBeforeNixOS; $conf .= "$extraEntries\n" if $extraEntriesBeforeNixOS;
addEntry("NixOS - Default", $defaultConfig); addEntry("NixOS - Default", $defaultConfig, "--unrestricted");
$conf .= "$extraEntries\n" unless $extraEntriesBeforeNixOS; $conf .= "$extraEntries\n" unless $extraEntriesBeforeNixOS;
@ -536,7 +586,7 @@ if (get("useOSProber") eq "true") {
} }
# Atomically switch to the new config # Atomically switch to the new config
rename $tmpFile, $confFile or die "cannot rename $tmpFile to $confFile\n"; rename $tmpFile, $confFile or die "cannot rename $tmpFile to $confFile: $!\n";
# Remove obsolete files from $bootPath/kernels. # Remove obsolete files from $bootPath/kernels.
@ -557,9 +607,12 @@ struct(GrubState => {
efi => '$', efi => '$',
devices => '$', devices => '$',
efiMountPoint => '$', efiMountPoint => '$',
extraGrubInstallArgs => '@',
}); });
# If you add something to the state file, only add it to the end
# because it is read line-by-line.
sub readGrubState { sub readGrubState {
my $defaultGrubState = GrubState->new(name => "", version => "", efi => "", devices => "", efiMountPoint => "" ); my $defaultGrubState = GrubState->new(name => "", version => "", efi => "", devices => "", efiMountPoint => "", extraGrubInstallArgs => () );
open FILE, "<$bootPath/grub/state" or return $defaultGrubState; open FILE, "<$bootPath/grub/state" or return $defaultGrubState;
local $/ = "\n"; local $/ = "\n";
my $name = <FILE>; my $name = <FILE>;
@ -572,24 +625,37 @@ sub readGrubState {
chomp($devices); chomp($devices);
my $efiMountPoint = <FILE>; my $efiMountPoint = <FILE>;
chomp($efiMountPoint); chomp($efiMountPoint);
# Historically, arguments in the state file were stored one per line, but that
# gets messy when values contain newlines, when structured arguments
# such as lists are needed (they would require a separator encoding), or, even
# worse, when we need to remove a setting in the future. Thus, the 6th line is
# a JSON object that can store structured data with named keys, and all new
# state should go in there.
my $jsonStateLine = <FILE>;
# For historical reasons we do not check the values above for un-definedness
# (that is, when the state file has too few lines and EOF is reached),
# because the above come from the first version of this logic and are thus
# guaranteed to be present.
$jsonStateLine = defined $jsonStateLine ? $jsonStateLine : '{}'; # empty JSON object
chomp($jsonStateLine);
if ($jsonStateLine eq "") {
$jsonStateLine = '{}'; # empty JSON object
}
my %jsonState = %{decode_json($jsonStateLine)};
my @extraGrubInstallArgs = exists($jsonState{'extraGrubInstallArgs'}) ? @{$jsonState{'extraGrubInstallArgs'}} : ();
close FILE; close FILE;
my $grubState = GrubState->new(name => $name, version => $version, efi => $efi, devices => $devices, efiMountPoint => $efiMountPoint ); my $grubState = GrubState->new(name => $name, version => $version, efi => $efi, devices => $devices, efiMountPoint => $efiMountPoint, extraGrubInstallArgs => \@extraGrubInstallArgs );
return $grubState return $grubState
} }
sub getDeviceTargets { my @deviceTargets = getList('devices');
my @devices = ();
foreach my $dev ($dom->findnodes('/expr/attrs/attr[@name = "devices"]/list/string/@value')) {
$dev = $dev->findvalue(".") or die;
push(@devices, $dev);
}
return @devices;
}
my @deviceTargets = getDeviceTargets();
my $prevGrubState = readGrubState(); my $prevGrubState = readGrubState();
my @prevDeviceTargets = split/,/, $prevGrubState->devices; my @prevDeviceTargets = split/,/, $prevGrubState->devices;
my @extraGrubInstallArgs = getList('extraGrubInstallArgs');
my @prevExtraGrubInstallArgs = @{$prevGrubState->extraGrubInstallArgs};
my $devicesDiffer = scalar (List::Compare->new( '-u', '-a', \@deviceTargets, \@prevDeviceTargets)->get_symmetric_difference()); my $devicesDiffer = scalar (List::Compare->new( '-u', '-a', \@deviceTargets, \@prevDeviceTargets)->get_symmetric_difference());
my $extraGrubInstallArgsDiffer = scalar (List::Compare->new( '-u', '-a', \@extraGrubInstallArgs, \@prevExtraGrubInstallArgs)->get_symmetric_difference());
my $nameDiffer = get("fullName") ne $prevGrubState->name; my $nameDiffer = get("fullName") ne $prevGrubState->name;
my $versionDiffer = get("fullVersion") ne $prevGrubState->version; my $versionDiffer = get("fullVersion") ne $prevGrubState->version;
my $efiDiffer = $efiTarget ne $prevGrubState->efi; my $efiDiffer = $efiTarget ne $prevGrubState->efi;
@ -598,25 +664,25 @@ if (($ENV{'NIXOS_INSTALL_GRUB'} // "") eq "1") {
warn "NIXOS_INSTALL_GRUB env var deprecated, use NIXOS_INSTALL_BOOTLOADER"; warn "NIXOS_INSTALL_GRUB env var deprecated, use NIXOS_INSTALL_BOOTLOADER";
$ENV{'NIXOS_INSTALL_BOOTLOADER'} = "1"; $ENV{'NIXOS_INSTALL_BOOTLOADER'} = "1";
} }
my $requireNewInstall = $devicesDiffer || $nameDiffer || $versionDiffer || $efiDiffer || $efiMountPointDiffer || (($ENV{'NIXOS_INSTALL_BOOTLOADER'} // "") eq "1"); my $requireNewInstall = $devicesDiffer || $extraGrubInstallArgsDiffer || $nameDiffer || $versionDiffer || $efiDiffer || $efiMountPointDiffer || (($ENV{'NIXOS_INSTALL_BOOTLOADER'} // "") eq "1");
# install a symlink so that grub can detect the boot drive # install a symlink so that grub can detect the boot drive
my $tmpDir = File::Temp::tempdir(CLEANUP => 1) or die "Failed to create temporary space"; my $tmpDir = File::Temp::tempdir(CLEANUP => 1) or die "Failed to create temporary space: $!";
symlink "$bootPath", "$tmpDir/boot" or die "Failed to symlink $tmpDir/boot"; symlink "$bootPath", "$tmpDir/boot" or die "Failed to symlink $tmpDir/boot: $!";
# install non-EFI GRUB # install non-EFI GRUB
if (($requireNewInstall != 0) && ($efiTarget eq "no" || $efiTarget eq "both")) { if (($requireNewInstall != 0) && ($efiTarget eq "no" || $efiTarget eq "both")) {
foreach my $dev (@deviceTargets) { foreach my $dev (@deviceTargets) {
next if $dev eq "nodev"; next if $dev eq "nodev";
print STDERR "installing the GRUB $grubVersion boot loader on $dev...\n"; print STDERR "installing the GRUB $grubVersion boot loader on $dev...\n";
my @command = ("$grub/sbin/grub-install", "--recheck", "--root-directory=$tmpDir", Cwd::abs_path($dev)); my @command = ("$grub/sbin/grub-install", "--recheck", "--root-directory=$tmpDir", Cwd::abs_path($dev), @extraGrubInstallArgs);
if ($forceInstall eq "true") { if ($forceInstall eq "true") {
push @command, "--force"; push @command, "--force";
} }
if ($grubTarget ne "") { if ($grubTarget ne "") {
push @command, "--target=$grubTarget"; push @command, "--target=$grubTarget";
} }
(system @command) == 0 or die "$0: installation of GRUB on $dev failed\n"; (system @command) == 0 or die "$0: installation of GRUB on $dev failed: $!\n";
} }
} }
@ -624,7 +690,7 @@ if (($requireNewInstall != 0) && ($efiTarget eq "no" || $efiTarget eq "both")) {
# install EFI GRUB # install EFI GRUB
if (($requireNewInstall != 0) && ($efiTarget eq "only" || $efiTarget eq "both")) { if (($requireNewInstall != 0) && ($efiTarget eq "only" || $efiTarget eq "both")) {
print STDERR "installing the GRUB $grubVersion EFI boot loader into $efiSysMountPoint...\n"; print STDERR "installing the GRUB $grubVersion EFI boot loader into $efiSysMountPoint...\n";
my @command = ("$grubEfi/sbin/grub-install", "--recheck", "--target=$grubTargetEfi", "--boot-directory=$bootPath", "--efi-directory=$efiSysMountPoint"); my @command = ("$grubEfi/sbin/grub-install", "--recheck", "--target=$grubTargetEfi", "--boot-directory=$bootPath", "--efi-directory=$efiSysMountPoint", @extraGrubInstallArgs);
if ($forceInstall eq "true") { if ($forceInstall eq "true") {
push @command, "--force"; push @command, "--force";
} }
@ -635,17 +701,29 @@ if (($requireNewInstall != 0) && ($efiTarget eq "only" || $efiTarget eq "both"))
push @command, "--removable" if $efiInstallAsRemovable eq "true"; push @command, "--removable" if $efiInstallAsRemovable eq "true";
} }
(system @command) == 0 or die "$0: installation of GRUB EFI into $efiSysMountPoint failed\n"; (system @command) == 0 or die "$0: installation of GRUB EFI into $efiSysMountPoint failed: $!\n";
} }
# update GRUB state file # update GRUB state file
if ($requireNewInstall != 0) { if ($requireNewInstall != 0) {
open FILE, ">$bootPath/grub/state" or die "cannot create $bootPath/grub/state: $!\n"; # Temp file for atomic rename.
my $stateFile = "$bootPath/grub/state";
my $stateFileTmp = $stateFile . ".tmp";
open FILE, ">$stateFileTmp" or die "cannot create $stateFileTmp: $!\n";
print FILE get("fullName"), "\n" or die; print FILE get("fullName"), "\n" or die;
print FILE get("fullVersion"), "\n" or die; print FILE get("fullVersion"), "\n" or die;
print FILE $efiTarget, "\n" or die; print FILE $efiTarget, "\n" or die;
print FILE join( ",", @deviceTargets ), "\n" or die; print FILE join( ",", @deviceTargets ), "\n" or die;
print FILE $efiSysMountPoint, "\n" or die; print FILE $efiSysMountPoint, "\n" or die;
my %jsonState = (
extraGrubInstallArgs => \@extraGrubInstallArgs
);
my $jsonStateLine = encode_json(\%jsonState);
print FILE $jsonStateLine, "\n" or die;
close FILE or die; close FILE or die;
# Atomically switch to the new state file
rename $stateFileTmp, $stateFile or die "cannot rename $stateFileTmp to $stateFile: $!\n";
} }

View file

@ -140,7 +140,7 @@ let
umount /crypt-ramfs 2>/dev/null umount /crypt-ramfs 2>/dev/null
''; '';
openCommand = name': { name, device, header, keyFile, keyFileSize, keyFileOffset, allowDiscards, yubikey, gpgCard, fido2, fallbackToPassword, ... }: assert name' == name; openCommand = name': { name, device, header, keyFile, keyFileSize, keyFileOffset, allowDiscards, yubikey, gpgCard, fido2, fallbackToPassword, preOpenCommands, postOpenCommands,... }: assert name' == name;
let let
csopen = "cryptsetup luksOpen ${device} ${name} ${optionalString allowDiscards "--allow-discards"} ${optionalString (header != null) "--header=${header}"}"; csopen = "cryptsetup luksOpen ${device} ${name} ${optionalString allowDiscards "--allow-discards"} ${optionalString (header != null) "--header=${header}"}";
cschange = "cryptsetup luksChangeKey ${device} ${optionalString (header != null) "--header=${header}"}"; cschange = "cryptsetup luksChangeKey ${device} ${optionalString (header != null) "--header=${header}"}";
@ -412,11 +412,17 @@ let
} }
''} ''}
# commands to run right before we mount our device
${preOpenCommands}
${if (luks.yubikeySupport && (yubikey != null)) || (luks.gpgSupport && (gpgCard != null)) || (luks.fido2Support && (fido2.credential != null)) then '' ${if (luks.yubikeySupport && (yubikey != null)) || (luks.gpgSupport && (gpgCard != null)) || (luks.fido2Support && (fido2.credential != null)) then ''
open_with_hardware open_with_hardware
'' else '' '' else ''
open_normally open_normally
''} ''}
# commands to run right after we mounted our device
${postOpenCommands}
''; '';
askPass = pkgs.writeScriptBin "cryptsetup-askpass" '' askPass = pkgs.writeScriptBin "cryptsetup-askpass" ''
@ -467,8 +473,6 @@ in
[ "aes" "aes_generic" "blowfish" "twofish" [ "aes" "aes_generic" "blowfish" "twofish"
"serpent" "cbc" "xts" "lrw" "sha1" "sha256" "sha512" "serpent" "cbc" "xts" "lrw" "sha1" "sha256" "sha512"
"af_alg" "algif_skcipher" "af_alg" "algif_skcipher"
(if pkgs.stdenv.hostPlatform.system == "x86_64-linux" then "aes_x86_64" else "aes_i586")
]; ];
description = '' description = ''
A list of cryptographic kernel modules needed to decrypt the root device(s). A list of cryptographic kernel modules needed to decrypt the root device(s).
@ -735,6 +739,30 @@ in
}; };
}); });
}; };
preOpenCommands = mkOption {
type = types.lines;
default = "";
example = ''
mkdir -p /tmp/persistent
mount -t zfs rpool/safe/persistent /tmp/persistent
'';
description = ''
Commands that should be run right before we try to mount our LUKS device.
This can be useful if the keys needed to open the drive are on another partition.
'';
};
postOpenCommands = mkOption {
type = types.lines;
default = "";
example = ''
umount /tmp/persistent
'';
description = ''
Commands that should be run right after we have mounted our LUKS device.
'';
};
}; };
})); }));
}; };
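A minimal sketch of the new hooks, mirroring the examples above; the device UUID, key file and ZFS dataset are placeholders:

  {
    boot.initrd.luks.devices.root = {
      device = "/dev/disk/by-uuid/XXXX-XXXX";
      keyFile = "/tmp/persistent/root.key";
      preOpenCommands = ''
        mkdir -p /tmp/persistent
        mount -t zfs rpool/safe/persistent /tmp/persistent
      '';
      postOpenCommands = ''
        umount /tmp/persistent
      '';
    };
  }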

View file

@ -28,7 +28,7 @@ with lib;
Any additional configuration to be appended to the generated Any additional configuration to be appended to the generated
<filename>modprobe.conf</filename>. This is typically used to <filename>modprobe.conf</filename>. This is typically used to
specify module options. See specify module options. See
<citerefentry><refentrytitle>modprobe.conf</refentrytitle> <citerefentry><refentrytitle>modprobe.d</refentrytitle>
<manvolnum>5</manvolnum></citerefentry> for details. <manvolnum>5</manvolnum></citerefentry> for details.
''; '';
type = types.lines; type = types.lines;

View file

@ -302,7 +302,7 @@ let
checkDhcpV6 = checkUnitConfig "DHCPv6" [ checkDhcpV6 = checkUnitConfig "DHCPv6" [
(assertOnlyFields [ (assertOnlyFields [
"UseDns" "UseNTP" "RapidCommit" "ForceDHCPv6PDOtherInformation" "UseDNS" "UseNTP" "RapidCommit" "ForceDHCPv6PDOtherInformation"
"PrefixDelegationHint" "PrefixDelegationHint"
]) ])
(assertValueOneOf "UseDNS" boolValues) (assertValueOneOf "UseDNS" boolValues)
@ -488,7 +488,7 @@ let
vlanConfig = mkOption { vlanConfig = mkOption {
default = {}; default = {};
example = { Id = "4"; }; example = { Id = 4; };
type = types.addCheck (types.attrsOf unitOption) checkVlan; type = types.addCheck (types.attrsOf unitOption) checkVlan;
description = '' description = ''
Each attribute in this set specifies an option in the Each attribute in this set specifies an option in the

Some files were not shown because too many files have changed in this diff.