diff --git a/third_party/nixpkgs/.github/CODEOWNERS b/third_party/nixpkgs/.github/CODEOWNERS index 6344ca3902..47069c9a79 100644 --- a/third_party/nixpkgs/.github/CODEOWNERS +++ b/third_party/nixpkgs/.github/CODEOWNERS @@ -11,9 +11,6 @@ # This also holds true for GitHub teams. Since almost none of our teams have write # permissions, you need to list all members of the team with commit access individually. -# This file -/.github/CODEOWNERS @edolstra - # GitHub actions /.github/workflows @NixOS/Security @Mic92 @zowoq /.github/workflows/merge-staging @FRidh @@ -22,12 +19,12 @@ /.editorconfig @Mic92 @zowoq # Libraries -/lib @edolstra @infinisil +/lib @infinisil /lib/systems @alyssais @ericson2314 @amjoseph-nixpkgs -/lib/generators.nix @edolstra @Profpatsch -/lib/cli.nix @edolstra @Profpatsch -/lib/debug.nix @edolstra @Profpatsch -/lib/asserts.nix @edolstra @Profpatsch +/lib/generators.nix @infinisil @Profpatsch +/lib/cli.nix @infinisil @Profpatsch +/lib/debug.nix @infinisil @Profpatsch +/lib/asserts.nix @infinisil @Profpatsch /lib/path.* @infinisil @fricklerhandwerk /lib/fileset @infinisil /doc/functions/fileset.section.md @infinisil @@ -48,6 +45,8 @@ /pkgs/build-support/setup-hooks/auto-patchelf.sh @layus /pkgs/build-support/setup-hooks/auto-patchelf.py @layus /pkgs/pkgs-lib @infinisil +## Format generators/serializers +/pkgs/pkgs-lib/formats/libconfig @ckiee # pkgs/by-name /pkgs/test/nixpkgs-check-by-name @infinisil @@ -59,7 +58,7 @@ /pkgs/build-support/writers @lassulus @Profpatsch # Nixpkgs make-disk-image -/doc/builders/images/makediskimage.section.md @raitobezarius +/doc/build-helpers/images/makediskimage.section.md @raitobezarius /nixos/lib/make-disk-image.nix @raitobezarius # Nixpkgs documentation @@ -116,7 +115,6 @@ /maintainers/scripts/update-python-libraries @FRidh /pkgs/development/interpreters/python @FRidh /doc/languages-frameworks/python.section.md @FRidh @mweinelt -/pkgs/development/tools/poetry2nix @adisbladis /pkgs/development/interpreters/python/hooks @FRidh @jonringer # Haskell @@ -149,6 +147,8 @@ # C compilers /pkgs/development/compilers/gcc @amjoseph-nixpkgs /pkgs/development/compilers/llvm @RaitoBezarius +/pkgs/development/compilers/emscripten @raitobezarius +/doc/languages-frameworks/emscripten.section.md @raitobezarius # Audio /nixos/modules/services/audio/botamusique.nix @mweinelt @@ -216,7 +216,7 @@ pkgs/development/python-modules/buildcatrust/ @ajs124 @lukegb @mweinelt /nixos/tests/knot.nix @mweinelt # Web servers -/doc/builders/packages/nginx.section.md @raitobezarius +/doc/packages/nginx.section.md @raitobezarius /pkgs/servers/http/nginx/ @raitobezarius /nixos/modules/services/web-servers/nginx/ @raitobezarius @@ -269,7 +269,7 @@ pkgs/development/python-modules/buildcatrust/ @ajs124 @lukegb @mweinelt # Docker tools /pkgs/build-support/docker @roberth /nixos/tests/docker-tools* @roberth -/doc/builders/images/dockertools.section.md @roberth +/doc/build-helpers/images/dockertools.section.md @roberth # Blockchains /pkgs/applications/blockchains @mmahut @RaghavSood diff --git a/third_party/nixpkgs/.github/ISSUE_TEMPLATE/unreproducible_package.md b/third_party/nixpkgs/.github/ISSUE_TEMPLATE/unreproducible_package.md index a868c26ca5..8046e809a2 100644 --- a/third_party/nixpkgs/.github/ISSUE_TEMPLATE/unreproducible_package.md +++ b/third_party/nixpkgs/.github/ISSUE_TEMPLATE/unreproducible_package.md @@ -7,25 +7,81 @@ assignees: '' --- -Building this package twice does not produce the bit-by-bit identical result each time, making it harder to detect CI breaches. 
You can read more about this at https://reproducible-builds.org/ . + + +Building this package multiple times does not yield bit-by-bit identical +results, complicating the detection of Continuous Integration (CI) breaches. For +more information on this issue, visit +[reproducible-builds.org](https://reproducible-builds.org/). + +Fixing bit-by-bit reproducibility also has additional advantages, such as +avoiding hard-to-reproduce bugs, making content-addressed storage more effective +and reducing rebuilds in such systems. ### Steps To Reproduce -``` -nix-build '<nixpkgs>' -A ... --check --keep-failed -``` +In the following steps, replace `<package>` with the canonical name of the +package. -You can use `diffoscope` to analyze the differences in the output of the two builds. +#### 1. Build the package -To view the build log of the build that produced the artifact in the binary cache: +This step will build the package. Specific arguments are passed to the command +to keep the build artifacts so we can compare them in case of differences. + +Execute the following command: ``` -nix-store --read-log $(nix-instantiate '<nixpkgs>' -A ...) +nix-build '<nixpkgs>' -A <package> && nix-build '<nixpkgs>' -A <package> --check --keep-failed +``` + +Or using the new command line style: + +``` +nix build nixpkgs#<package> && nix build nixpkgs#<package> --rebuild --keep-failed +``` + +#### 2. Compare the build artifacts + +If the previous command completes successfully, no differences were found and +there's nothing to do: builds are reproducible. +If it terminates with the error message `error: derivation '<drv path>' may not be +deterministic: output '<path>' differs from '<path>'`, use `diffoscope` to investigate +the discrepancies between the two build outputs. You may need to add the +`--exclude-directory-metadata recursive` option to ignore files and directories +metadata (*e.g. timestamp*) differences. + +``` +nix run nixpkgs#diffoscopeMinimal -- --exclude-directory-metadata recursive +``` + +#### 3. Examine the build log + +To examine the build log, use: + +``` +nix-store --read-log $(nix-instantiate '<nixpkgs>' -A <package>) +``` + +Or with the new command line style: + +``` +nix log $(nix path-info --derivation nixpkgs#<package>) ``` ### Additional context -(please share the relevant fragment of the diffoscope output here, -and any additional analysis you may have done) +(please share the relevant fragment of the diffoscope output here, and any +additional analysis you may have done) diff --git a/third_party/nixpkgs/.github/PULL_REQUEST_TEMPLATE.md b/third_party/nixpkgs/.github/PULL_REQUEST_TEMPLATE.md index 4517080bb3..a7d8a17865 100644 --- a/third_party/nixpkgs/.github/PULL_REQUEST_TEMPLATE.md +++ b/third_party/nixpkgs/.github/PULL_REQUEST_TEMPLATE.md @@ -14,7 +14,9 @@ For new packages please briefly describe the package or provide a link to its ho - [ ] aarch64-linux - [ ] x86_64-darwin - [ ] aarch64-darwin -- [ ] For non-Linux: Is `sandbox = true` set in `nix.conf`? (See [Nix manual](https://nixos.org/manual/nix/stable/command-ref/conf-file.html)) +- For non-Linux: Is sandboxing enabled in `nix.conf`? 
(See [Nix manual](https://nixos.org/manual/nix/stable/command-ref/conf-file.html)) + - [ ] `sandbox = relaxed` + - [ ] `sandbox = true` - [ ] Tested, as applicable: - [NixOS test(s)](https://nixos.org/manual/nixos/unstable/index.html#sec-nixos-tests) (look inside [nixos/tests](https://github.com/NixOS/nixpkgs/blob/master/nixos/tests)) - and/or [package tests](https://nixos.org/manual/nixpkgs/unstable/#sec-package-tests) diff --git a/third_party/nixpkgs/.github/labeler.yml b/third_party/nixpkgs/.github/labeler.yml index c05c496cb1..5822603122 100644 --- a/third_party/nixpkgs/.github/labeler.yml +++ b/third_party/nixpkgs/.github/labeler.yml @@ -37,6 +37,11 @@ "6.topic: fetch": - pkgs/build-support/fetch*/**/* +"6.topic: flakes": + - '**/flake.nix' + - lib/systems/flake-systems.nix + - nixos/modules/config/nix-flakes.nix + "6.topic: GNOME": - doc/languages-frameworks/gnome.section.md - nixos/modules/services/desktops/gnome/**/* diff --git a/third_party/nixpkgs/.github/workflows/backport.yml b/third_party/nixpkgs/.github/workflows/backport.yml index d174203238..9343e29d59 100644 --- a/third_party/nixpkgs/.github/workflows/backport.yml +++ b/third_party/nixpkgs/.github/workflows/backport.yml @@ -24,7 +24,7 @@ jobs: with: ref: ${{ github.event.pull_request.head.sha }} - name: Create backport PRs - uses: korthout/backport-action@v1.3.1 + uses: korthout/backport-action@v2.1.1 with: # Config README: https://github.com/korthout/backport-action#backport-action copy_labels_pattern: 'severity:\ssecurity' diff --git a/third_party/nixpkgs/CONTRIBUTING.md b/third_party/nixpkgs/CONTRIBUTING.md index 32201333c3..0270094961 100644 --- a/third_party/nixpkgs/CONTRIBUTING.md +++ b/third_party/nixpkgs/CONTRIBUTING.md @@ -322,6 +322,8 @@ All the review template samples provided in this section are generic and meant a To get more information about how to review specific parts of Nixpkgs, refer to the documents linked to in the [overview section][overview]. +If a pull request contains documentation changes that might require feedback from the documentation team, ping @NixOS/documentation-team on the pull request. + If you consider having enough knowledge and experience in a topic and would like to be a long-term reviewer for related submissions, please contact the current reviewers for that topic. They will give you information about the reviewing process. The main reviewers for a topic can be hard to find as there is no list, but checking past pull requests to see who reviewed or git-blaming the code to see who committed to that topic can give some hints. Container system, boot system and library changes are some examples of the pull requests fitting this category. @@ -352,8 +354,8 @@ In a case a contributor definitively leaves the Nix community, they should creat # Flow of merged pull requests -After a pull requests is merged, it eventually makes it to the [official Hydra CI](https://hydra.nixos.org/). -Hydra regularly evaluates and builds Nixpkgs, updating [the official channels](http://channels.nixos.org/) when specific Hydra jobs succeeded. +After a pull request is merged, it eventually makes it to the [official Hydra CI](https://hydra.nixos.org/). +Hydra regularly evaluates and builds Nixpkgs, updating [the official channels](https://channels.nixos.org/) when specific Hydra jobs succeeded. See [Nix Channel Status](https://status.nixos.org/) for the current channels and their state. 
Here's a brief overview of the main Git branches and what channels they're used for: @@ -465,7 +467,7 @@ Is the change [acceptable for releases][release-acceptable] and do you wish to h - No: Use the `master` branch, do not backport the pull request. - Yes: Can the change be implemented the same way on the `master` and release branches? For example, a packages major version might differ between the `master` and release branches, such that separate security patches are required. - - Yes: Use the `master` branch and [backport the pull request](#backporting-changes). + - Yes: Use the `master` branch and [backport the pull request](#how-to-backport-pull-requests). - No: Create separate pull requests to the `master` and `release-XX.YY` branches. Furthermore, if the change causes a [mass rebuild][mass-rebuild], use the appropriate staging branch instead: @@ -512,34 +514,19 @@ To get a sense for what changes are considered mass rebuilds, see [previously me - If you have commits `pkg-name: oh, forgot to insert whitespace`: squash commits in this case. Use `git rebase -i`. -- Format the commit messages in the following way: +- For consistency, there should not be a period at the end of the commit message's summary line (the first line of the commit message). - ``` - (pkg-name | nixos/): (from -> to | init at version | refactor | etc) - - (Motivation for change. Link to release notes. Additional information.) - ``` - - For consistency, there should not be a period at the end of the commit message's summary line (the first line of the commit message). - - Examples: - - * nginx: init at 2.0.1 - * firefox: 54.0.1 -> 55.0 - - https://www.mozilla.org/en-US/firefox/55.0/releasenotes/ - * nixos/hydra: add bazBaz option - - Dual baz behavior is needed to do foo. - * nixos/nginx: refactor config generation - - The old config generation system used impure shell scripts and could break in specific circumstances (see #1234). - - When adding yourself as maintainer, in the same pull request, make a separate +- When adding yourself as maintainer in the same pull request, make a separate commit with the message `maintainers: add `. Add the commit before those making changes to the package or module. See [Nixpkgs Maintainers](./maintainers/README.md) for details. +- Make sure you read about any commit conventions specific to the area you're touching. See: + - [Commit conventions](./pkgs/README.md#commit-conventions) for changes to `pkgs`. + - [Commit conventions](./lib/README.md#commit-conventions) for changes to `lib`. + - [Commit conventions](./nixos/README.md#commit-conventions) for changes to `nixos`. + - [Commit conventions](./doc/README.md#commit-conventions) for changes to `doc`, the Nixpkgs manual. + ### Writing good commit messages In addition to writing properly formatted commit messages, it's important to include relevant information so other developers can later understand *why* a change was made. While this information usually can be found by digging code, mailing list/Discourse archives, pull request discussions or upstream changes, it may require a lot of work. @@ -565,7 +552,7 @@ Names of files and directories should be in lowercase, with dashes between words - Do not use tab characters, i.e. configure your editor to use soft tabs. For instance, use `(setq-default indent-tabs-mode nil)` in Emacs. Everybody has different tab settings so it’s asking for trouble. -- Use `lowerCamelCase` for variable names, not `UpperCamelCase`. 
Note, this rule does not apply to package attribute names, which instead follow the rules in [](#sec-package-naming). +- Use `lowerCamelCase` for variable names, not `UpperCamelCase`. Note, this rule does not apply to package attribute names, which instead follow the rules in [package naming](./pkgs/README.md#package-naming). - Function calls with attribute set arguments are written as diff --git a/third_party/nixpkgs/doc/README.md b/third_party/nixpkgs/doc/README.md index 3f9aff1a38..9dee2d30d7 100644 --- a/third_party/nixpkgs/doc/README.md +++ b/third_party/nixpkgs/doc/README.md @@ -114,3 +114,24 @@ pear watermelon : green fruit with red flesh ``` + +## Commit conventions + +- Make sure you read about the [commit conventions](../CONTRIBUTING.md#commit-conventions) common to Nixpkgs as a whole. + +- If creating a commit purely for documentation changes, format the commit message in the following way: + + ``` + doc: (documentation summary) + + (Motivation for change, relevant links, additional information.) + ``` + + Examples: + + * doc: update the kernel config documentation to use `nix-shell` + * doc: add information about `nix-update-script` + + Closes #216321. + +- If the commit contains more than just documentation changes, follow the commit message format relevant for the rest of the changes. diff --git a/third_party/nixpkgs/doc/build-helpers.md b/third_party/nixpkgs/doc/build-helpers.md new file mode 100644 index 0000000000..06737e1667 --- /dev/null +++ b/third_party/nixpkgs/doc/build-helpers.md @@ -0,0 +1,28 @@ +# Build helpers {#part-builders} + +A build helper is a function that produces derivations. + +:::{.warning} +This is not to be confused with the [`builder` argument of the Nix `derivation` primitive](https://nixos.org/manual/nix/unstable/language/derivations.html), which refers to the executable that produces the build result, or [remote builder](https://nixos.org/manual/nix/stable/advanced-topics/distributed-builds.html), which refers to a remote machine that could run such an executable. +::: + +Such a function is usually designed to abstract over a typical workflow for a given programming language or framework. +This allows declaring a build recipe by setting a limited number of options relevant to the particular use case instead of using the `derivation` function directly. + +[`stdenv.mkDerivation`](#part-stdenv) is the most widely used build helper, and serves as a basis for many others. +In addition, it offers various options to customize parts of the builds. + +There is no uniform interface for build helpers. +[Trivial build helpers](#chap-trivial-builders) and [fetchers](#chap-pkgs-fetchers) have various input types for convenience. +[Language- or framework-specific build helpers](#chap-language-support) usually follow the style of `stdenv.mkDerivation`, which accepts an attribute set or a fixed-point function taking an attribute set. 
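As a rough sketch of the fixed-point style (the package name, URL, and hash below are placeholders rather than a real package), the final attribute set is passed back in as `finalAttrs`, so later attributes can refer to it:

```nix
{ lib, stdenv, fetchurl }:

stdenv.mkDerivation (finalAttrs: {
  pname = "example";   # hypothetical package name
  version = "1.2.3";

  # finalAttrs refers to the final, merged attribute set, so an override of
  # `version` also changes the URL below.
  src = fetchurl {
    url = "https://example.org/example-${finalAttrs.version}.tar.gz";
    hash = lib.fakeHash;   # replace with the real hash on the first build
  };
})
```

The plain attribute-set form works the same way, just without the self-reference.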
+ +```{=include=} chapters +build-helpers/fetchers.chapter.md +build-helpers/trivial-build-helpers.chapter.md +build-helpers/testers.chapter.md +build-helpers/special.md +build-helpers/images.md +hooks/index.md +languages-frameworks/index.md +packages/index.md +``` diff --git a/third_party/nixpkgs/doc/builders/fetchers.chapter.md b/third_party/nixpkgs/doc/build-helpers/fetchers.chapter.md similarity index 82% rename from third_party/nixpkgs/doc/builders/fetchers.chapter.md rename to third_party/nixpkgs/doc/build-helpers/fetchers.chapter.md index ba7b1b1901..7bd1bbd6de 100644 --- a/third_party/nixpkgs/doc/builders/fetchers.chapter.md +++ b/third_party/nixpkgs/doc/build-helpers/fetchers.chapter.md @@ -1,13 +1,28 @@ # Fetchers {#chap-pkgs-fetchers} Building software with Nix often requires downloading source code and other files from the internet. -`nixpkgs` provides *fetchers* for different protocols and services. Fetchers are functions that simplify downloading files. +To this end, Nixpkgs provides *fetchers*: functions to obtain remote sources via various protocols and services. + +Nixpkgs fetchers differ from built-in fetchers such as [`builtins.fetchTarball`](https://nixos.org/manual/nix/stable/language/builtins.html#builtins-fetchTarball): +- A built-in fetcher will download and cache files at evaluation time and produce a [store path](https://nixos.org/manual/nix/stable/glossary#gloss-store-path). + A Nixpkgs fetcher will create a ([fixed-output](https://nixos.org/manual/nix/stable/glossary#gloss-fixed-output-derivation)) [derivation](https://nixos.org/manual/nix/stable/language/derivations), and files are downloaded at build time. +- Built-in fetchers will invalidate their cache after [`tarball-ttl`](https://nixos.org/manual/nix/stable/command-ref/conf-file#conf-tarball-ttl) expires, and will require network activity to check if the cache entry is up to date. + Nixpkgs fetchers only re-download if the specified hash changes or the store object is not otherwise available. +- Built-in fetchers do not use [substituters](https://nixos.org/manual/nix/stable/command-ref/conf-file#conf-substituters). + Derivations produced by Nixpkgs fetchers will use any configured binary cache transparently. + +This significantly reduces the time needed to evaluate the entirety of Nixpkgs, and allows [Hydra](https://nixos.org/hydra) to retain and re-distribute sources used by Nixpkgs in the [public binary cache](https://cache.nixos.org). +For these reasons, built-in fetchers are not allowed in Nixpkgs source code. + +The following table shows an overview of the differences: + +| Fetchers | Download | Output | Cache | Re-download when | +|-|-|-|-|-| +| `builtins.fetch*` | evaluation time | store path | `/nix/store`, `~/.cache/nix` | `tarball-ttl` expires, cache miss in `~/.cache/nix`, output store object not in local store | +| `pkgs.fetch*` | build time | derivation | `/nix/store`, substituters | output store object not available | ## Caveats {#chap-pkgs-fetchers-caveats} -Fetchers create [fixed output derivations](https://nixos.org/manual/nix/stable/#fixed-output-drvs) from downloaded files. -Nix can reuse the downloaded files via the hash of the resulting derivation. - The fact that the hash belongs to the Nix derivation output and not the file itself can lead to confusion. For example, consider the following fetcher: @@ -243,21 +258,21 @@ or *** ``` -## `fetchFromBittorrent` {#fetchfrombittorrent} +## `fetchtorrent` {#fetchtorrent} -`fetchFromBittorrent` expects two arguments. 
`url` which can either be a Magnet URI (Magnet Link) such as `magnet:?xt=urn:btih:dd8255ecdc7ca55fb0bbf81323d87062db1f6d1c` or an HTTP URL pointing to a `.torrent` file. It can also take a `config` argument which will craft a `settings.json` configuration file and give it to `transmission`, the underlying program that is performing the fetch. The available config options for `transmission` can be found [here](https://github.com/transmission/transmission/blob/main/docs/Editing-Configuration-Files.md#options) +`fetchtorrent` expects two arguments. `url` which can either be a Magnet URI (Magnet Link) such as `magnet:?xt=urn:btih:dd8255ecdc7ca55fb0bbf81323d87062db1f6d1c` or an HTTP URL pointing to a `.torrent` file. It can also take a `config` argument which will craft a `settings.json` configuration file and give it to `transmission`, the underlying program that is performing the fetch. The available config options for `transmission` can be found [here](https://github.com/transmission/transmission/blob/main/docs/Editing-Configuration-Files.md#options) ``` -{ fetchFromBittorrent }: +{ fetchtorrent }: -fetchFromBittorrent { +fetchtorrent { config = { peer-limit-global = 100; }; url = "magnet:?xt=urn:btih:dd8255ecdc7ca55fb0bbf81323d87062db1f6d1c"; sha256 = ""; } ``` -### Parameters {#fetchfrombittorrent-parameters} +### Parameters {#fetchtorrent-parameters} - `url`: Magnet URI (Magnet Link) such as `magnet:?xt=urn:btih:dd8255ecdc7ca55fb0bbf81323d87062db1f6d1c` or an HTTP URL pointing to a `.torrent` file. diff --git a/third_party/nixpkgs/doc/builders/images.md b/third_party/nixpkgs/doc/build-helpers/images.md similarity index 100% rename from third_party/nixpkgs/doc/builders/images.md rename to third_party/nixpkgs/doc/build-helpers/images.md diff --git a/third_party/nixpkgs/doc/builders/images/appimagetools.section.md b/third_party/nixpkgs/doc/build-helpers/images/appimagetools.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/images/appimagetools.section.md rename to third_party/nixpkgs/doc/build-helpers/images/appimagetools.section.md diff --git a/third_party/nixpkgs/doc/builders/images/binarycache.section.md b/third_party/nixpkgs/doc/build-helpers/images/binarycache.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/images/binarycache.section.md rename to third_party/nixpkgs/doc/build-helpers/images/binarycache.section.md diff --git a/third_party/nixpkgs/doc/builders/images/dockertools.section.md b/third_party/nixpkgs/doc/build-helpers/images/dockertools.section.md similarity index 99% rename from third_party/nixpkgs/doc/builders/images/dockertools.section.md rename to third_party/nixpkgs/doc/build-helpers/images/dockertools.section.md index 3ac4f224b5..42d6e297f5 100644 --- a/third_party/nixpkgs/doc/builders/images/dockertools.section.md +++ b/third_party/nixpkgs/doc/build-helpers/images/dockertools.section.md @@ -275,7 +275,7 @@ pullImage { `nix-prefetch-docker` command can be used to get required image parameters: ```ShellSession -$ nix run nixpkgs.nix-prefetch-docker -c nix-prefetch-docker --image-name mysql --image-tag 5 +$ nix run nixpkgs#nix-prefetch-docker -- --image-name mysql --image-tag 5 ``` Since a given `imageName` may transparently refer to a manifest list of images which support multiple architectures and/or operating systems, you can supply the `--os` and `--arch` arguments to specify exactly which image you want. By default it will match the OS and architecture of the host the command is run on. 
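For orientation, the attribute set printed by `nix-prefetch-docker` is typically pasted into a `dockerTools.pullImage` call along the following lines; this is only a sketch, and the digest and hash are placeholders for the values the tool prints:

```nix
{ dockerTools, lib }:

dockerTools.pullImage {
  imageName = "mysql";
  finalImageName = "mysql";
  finalImageTag = "5";
  # Placeholders: use the imageDigest and sha256 printed by nix-prefetch-docker.
  imageDigest = "sha256:0000000000000000000000000000000000000000000000000000000000000000";
  sha256 = lib.fakeSha256;
}
```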
diff --git a/third_party/nixpkgs/doc/builders/images/makediskimage.section.md b/third_party/nixpkgs/doc/build-helpers/images/makediskimage.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/images/makediskimage.section.md rename to third_party/nixpkgs/doc/build-helpers/images/makediskimage.section.md diff --git a/third_party/nixpkgs/doc/builders/images/ocitools.section.md b/third_party/nixpkgs/doc/build-helpers/images/ocitools.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/images/ocitools.section.md rename to third_party/nixpkgs/doc/build-helpers/images/ocitools.section.md diff --git a/third_party/nixpkgs/doc/builders/images/portableservice.section.md b/third_party/nixpkgs/doc/build-helpers/images/portableservice.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/images/portableservice.section.md rename to third_party/nixpkgs/doc/build-helpers/images/portableservice.section.md diff --git a/third_party/nixpkgs/doc/builders/images/snaptools.section.md b/third_party/nixpkgs/doc/build-helpers/images/snaptools.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/images/snaptools.section.md rename to third_party/nixpkgs/doc/build-helpers/images/snaptools.section.md diff --git a/third_party/nixpkgs/doc/builders/special.md b/third_party/nixpkgs/doc/build-helpers/special.md similarity index 56% rename from third_party/nixpkgs/doc/builders/special.md rename to third_party/nixpkgs/doc/build-helpers/special.md index 6d07fa87f3..f88648207f 100644 --- a/third_party/nixpkgs/doc/builders/special.md +++ b/third_party/nixpkgs/doc/build-helpers/special.md @@ -1,11 +1,10 @@ -# Special builders {#chap-special} +# Special build helpers {#chap-special} -This chapter describes several special builders. +This chapter describes several special build helpers. 
```{=include=} sections special/fhs-environments.section.md special/makesetuphook.section.md special/mkshell.section.md -special/darwin-builder.section.md special/vm-tools.section.md ``` diff --git a/third_party/nixpkgs/doc/builders/special/fhs-environments.section.md b/third_party/nixpkgs/doc/build-helpers/special/fhs-environments.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/special/fhs-environments.section.md rename to third_party/nixpkgs/doc/build-helpers/special/fhs-environments.section.md diff --git a/third_party/nixpkgs/doc/builders/special/makesetuphook.section.md b/third_party/nixpkgs/doc/build-helpers/special/makesetuphook.section.md similarity index 91% rename from third_party/nixpkgs/doc/builders/special/makesetuphook.section.md rename to third_party/nixpkgs/doc/build-helpers/special/makesetuphook.section.md index eb04241213..e83164b7eb 100644 --- a/third_party/nixpkgs/doc/builders/special/makesetuphook.section.md +++ b/third_party/nixpkgs/doc/build-helpers/special/makesetuphook.section.md @@ -1,6 +1,6 @@ # pkgs.makeSetupHook {#sec-pkgs.makeSetupHook} -`pkgs.makeSetupHook` is a builder that produces hooks that go in to `nativeBuildInputs` +`pkgs.makeSetupHook` is a build helper that produces hooks that go in to `nativeBuildInputs` ## Usage {#sec-pkgs.makeSetupHook-usage} diff --git a/third_party/nixpkgs/doc/builders/special/mkshell.section.md b/third_party/nixpkgs/doc/build-helpers/special/mkshell.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/special/mkshell.section.md rename to third_party/nixpkgs/doc/build-helpers/special/mkshell.section.md diff --git a/third_party/nixpkgs/doc/builders/special/vm-tools.section.md b/third_party/nixpkgs/doc/build-helpers/special/vm-tools.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/special/vm-tools.section.md rename to third_party/nixpkgs/doc/build-helpers/special/vm-tools.section.md diff --git a/third_party/nixpkgs/doc/builders/testers.chapter.md b/third_party/nixpkgs/doc/build-helpers/testers.chapter.md similarity index 100% rename from third_party/nixpkgs/doc/builders/testers.chapter.md rename to third_party/nixpkgs/doc/build-helpers/testers.chapter.md diff --git a/third_party/nixpkgs/doc/builders/trivial-builders.chapter.md b/third_party/nixpkgs/doc/build-helpers/trivial-build-helpers.chapter.md similarity index 99% rename from third_party/nixpkgs/doc/builders/trivial-builders.chapter.md rename to third_party/nixpkgs/doc/build-helpers/trivial-build-helpers.chapter.md index 2cb1f2debc..a0cda86a66 100644 --- a/third_party/nixpkgs/doc/builders/trivial-builders.chapter.md +++ b/third_party/nixpkgs/doc/build-helpers/trivial-build-helpers.chapter.md @@ -1,4 +1,4 @@ -# Trivial builders {#chap-trivial-builders} +# Trivial build helpers {#chap-trivial-builders} Nixpkgs provides a couple of functions that help with building derivations. The most important one, `stdenv.mkDerivation`, has already been documented above. The following functions wrap `stdenv.mkDerivation`, making it easier to use in certain cases. 
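As one small illustration of the kind of wrapper this chapter covers (the script name and contents are made up for the example), `writeShellScriptBin` produces a derivation containing a single executable script:

```nix
{ writeShellScriptBin }:

# The result provides $out/bin/greet, an executable shell script.
writeShellScriptBin "greet" ''
  echo "Hello from a trivial build helper"
''
```

The resulting derivation can be put into a development shell or referenced wherever a store path is expected.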
diff --git a/third_party/nixpkgs/doc/builders.md b/third_party/nixpkgs/doc/builders.md deleted file mode 100644 index 2e95942240..0000000000 --- a/third_party/nixpkgs/doc/builders.md +++ /dev/null @@ -1,12 +0,0 @@ -# Builders {#part-builders} - -```{=include=} chapters -builders/fetchers.chapter.md -builders/trivial-builders.chapter.md -builders/testers.chapter.md -builders/special.md -builders/images.md -hooks/index.md -languages-frameworks/index.md -builders/packages/index.md -``` diff --git a/third_party/nixpkgs/doc/default.nix b/third_party/nixpkgs/doc/default.nix index 18e12c1a8a..61bbd2ba8d 100644 --- a/third_party/nixpkgs/doc/default.nix +++ b/third_party/nixpkgs/doc/default.nix @@ -23,6 +23,7 @@ let { name = "sources"; description = "source filtering functions"; } { name = "cli"; description = "command-line serialization functions"; } { name = "gvariant"; description = "GVariant formatted string serialization functions"; } + { name = "customisation"; description = "Functions to customise (derivation-related) functions, derivatons, or attribute sets"; } ]; }; diff --git a/third_party/nixpkgs/doc/functions/fileset.section.md b/third_party/nixpkgs/doc/functions/fileset.section.md index 08b9ba9eae..c42337feab 100644 --- a/third_party/nixpkgs/doc/functions/fileset.section.md +++ b/third_party/nixpkgs/doc/functions/fileset.section.md @@ -6,11 +6,8 @@ The [`lib.fileset`](#sec-functions-library-fileset) library allows you to work w A file set is a mathematical set of local files that can be added to the Nix store for use in Nix derivations. File sets are easy and safe to use, providing obvious and composable semantics with good error messages to prevent mistakes. -These sections apply to the entire library. See the [function reference](#sec-functions-library-fileset) for function-specific documentation. -The file set library is currently somewhat limited but is being expanded to include more functions over time. - ## Implicit coercion from paths to file sets {#sec-fileset-path-coercion} All functions accepting file sets as arguments can also accept [paths](https://nixos.org/manual/nix/stable/language/values.html#type-path) as arguments. diff --git a/third_party/nixpkgs/doc/hooks/autopatchelf.section.md b/third_party/nixpkgs/doc/hooks/autopatchelf.section.md index 008a90d461..995204b902 100644 --- a/third_party/nixpkgs/doc/hooks/autopatchelf.section.md +++ b/third_party/nixpkgs/doc/hooks/autopatchelf.section.md @@ -6,6 +6,6 @@ You can also specify a `runtimeDependencies` variable which lists dependencies t In certain situations you may want to run the main command (`autoPatchelf`) of the setup hook on a file or a set of directories instead of unconditionally patching all outputs. This can be done by setting the `dontAutoPatchelf` environment variable to a non-empty value. -By default `autoPatchelf` will fail as soon as any ELF file requires a dependency which cannot be resolved via the given build inputs. In some situations you might prefer to just leave missing dependencies unpatched and continue to patch the rest. This can be achieved by setting the `autoPatchelfIgnoreMissingDeps` environment variable to a non-empty value. `autoPatchelfIgnoreMissingDeps` can be set to a list like `autoPatchelfIgnoreMissingDeps = [ "libcuda.so.1" "libcudart.so.1" ];` or to simply `[ "*" ]` to ignore all missing dependencies. +By default `autoPatchelf` will fail as soon as any ELF file requires a dependency which cannot be resolved via the given build inputs. 
In some situations you might prefer to just leave missing dependencies unpatched and continue to patch the rest. This can be achieved by setting the `autoPatchelfIgnoreMissingDeps` environment variable to a non-empty value. `autoPatchelfIgnoreMissingDeps` can be set to a list like `autoPatchelfIgnoreMissingDeps = [ "libcuda.so.1" "libcudart.so.1" ];` or to `[ "*" ]` to ignore all missing dependencies. The `autoPatchelf` command also recognizes a `--no-recurse` command line flag, which prevents it from recursing into subdirectories. diff --git a/third_party/nixpkgs/doc/hooks/index.md b/third_party/nixpkgs/doc/hooks/index.md index 363d627e52..1534ef85cc 100644 --- a/third_party/nixpkgs/doc/hooks/index.md +++ b/third_party/nixpkgs/doc/hooks/index.md @@ -25,7 +25,6 @@ perl.section.md pkg-config.section.md postgresql-test-hook.section.md python.section.md -qt-4.section.md scons.section.md tetex-tex-live.section.md unzip.section.md diff --git a/third_party/nixpkgs/doc/hooks/meson.section.md b/third_party/nixpkgs/doc/hooks/meson.section.md index fd7779e646..3a7fb50320 100644 --- a/third_party/nixpkgs/doc/hooks/meson.section.md +++ b/third_party/nixpkgs/doc/hooks/meson.section.md @@ -1,25 +1,83 @@ # Meson {#meson} -Overrides the configure phase to run meson to generate Ninja files. To run these files, you should accompany Meson with ninja. By default, `enableParallelBuilding` is enabled as Meson supports parallel building almost everywhere. +[Meson](https://mesonbuild.com/) is an open source meta build system meant to be +fast and user-friendly. -## Variables controlling Meson {#variables-controlling-meson} +In Nixpkgs, meson comes with a setup hook that overrides the configure, check, +and install phases. -### `mesonFlags` {#mesonflags} +Being a meta build system, meson needs an accompanying backend. In the context +of Nixpkgs, the typical companion backend is [Ninja](#ninja), that provides a +setup hook registering ninja-based build and install phases. -Controls the flags passed to meson. +## Variables controlling Meson {#meson-variables-controlling} -### `mesonBuildType` {#mesonbuildtype} +### Meson Exclusive Variables {#meson-exclusive-variables} -Which [`--buildtype`](https://mesonbuild.com/Builtin-options.html#core-options) to pass to Meson. We default to `plain`. +#### `mesonFlags` {#meson-flags} -### `mesonAutoFeatures` {#mesonautofeatures} +Controls the flags passed to `meson setup` during configure phase. -What value to set [`-Dauto_features=`](https://mesonbuild.com/Builtin-options.html#core-options) to. We default to `enabled`. +#### `mesonWrapMode` {#meson-wrap-mode} -### `mesonWrapMode` {#mesonwrapmode} +Which value is passed as +[`-Dwrap_mode=`](https://mesonbuild.com/Builtin-options.html#core-options) +to. In Nixpkgs the default value is `nodownload`, so that no subproject will be +downloaded (since network access is already disabled during deployment in +Nixpkgs). -What value to set [`-Dwrap_mode=`](https://mesonbuild.com/Builtin-options.html#core-options) to. We default to `nodownload` as we disallow network access. +Note: Meson allows pre-population of subprojects that would otherwise be +downloaded. -### `dontUseMesonConfigure` {#dontusemesonconfigure} +#### `mesonBuildType` {#meson-build-type} -Disables using Meson’s `configurePhase`. +Which value is passed as +[`--buildtype`](https://mesonbuild.com/Builtin-options.html#core-options) to +`meson setup` during configure phase. In Nixpkgs the default value is `plain`. 
+ +#### `mesonAutoFeatures` {#meson-auto-features} + +Which value is passed as +[`-Dauto_features=`](https://mesonbuild.com/Builtin-options.html#core-options) +to `meson setup` during configure phase. In Nixpkgs the default value is +`enabled`, meaning that every feature declared as "auto" by the meson scripts +will be enabled. + +#### `mesonCheckFlags` {#meson-check-flags} + +Controls the flags passed to `meson test` during check phase. + +#### `mesonInstallFlags` {#meson-install-flags} + +Controls the flags passed to `meson install` during install phase. + +#### `mesonInstallTags` {#meson-install-tags} + +A list of installation tags passed to Meson's commandline option +[`--tags`](https://mesonbuild.com/Installing.html#installation-tags) during +install phase. + +Note: `mesonInstallTags` should be a list of strings, that will be converted to +a comma-separated string that is recognized to `--tags`. +Example: `mesonInstallTags = [ "emulator" "assembler" ];` will be converted to +`--tags emulator,assembler`. + +#### `dontUseMesonConfigure` {#dont-use-meson-configure} + +When set to true, don't use the predefined `mesonConfigurePhase`. + +#### `dontUseMesonCheck` {#dont-use-meson-check} + +When set to true, don't use the predefined `mesonCheckPhase`. + +#### `dontUseMesonInstall` {#dont-use-meson-install} + +When set to true, don't use the predefined `mesonInstallPhase`. + +### Honored variables {#meson-honored-variables} + +The following variables commonly used by `stdenv.mkDerivation` are honored by +Meson setup hook. + +- `prefixKey` +- `enableParallelBuilding` diff --git a/third_party/nixpkgs/doc/hooks/ninja.section.md b/third_party/nixpkgs/doc/hooks/ninja.section.md index 4b0e33feb5..bbc9481088 100644 --- a/third_party/nixpkgs/doc/hooks/ninja.section.md +++ b/third_party/nixpkgs/doc/hooks/ninja.section.md @@ -1,3 +1,5 @@ # ninja {#ninja} Overrides the build, install, and check phase to run ninja instead of make. You can disable this behavior with the `dontUseNinjaBuild`, `dontUseNinjaInstall`, and `dontUseNinjaCheck`, respectively. Parallel building is enabled by default in Ninja. + +Note that if the [Meson setup hook](#meson) is also active, Ninja's install and check phases will be disabled in favor of Meson's. diff --git a/third_party/nixpkgs/doc/hooks/qt-4.section.md b/third_party/nixpkgs/doc/hooks/qt-4.section.md deleted file mode 100644 index 4b704df495..0000000000 --- a/third_party/nixpkgs/doc/hooks/qt-4.section.md +++ /dev/null @@ -1,3 +0,0 @@ -# Qt 4 {#qt-4} - -Sets the `QTDIR` environment variable to Qt’s path. diff --git a/third_party/nixpkgs/doc/languages-frameworks/agda.section.md b/third_party/nixpkgs/doc/languages-frameworks/agda.section.md index ff3d70ef0c..cb1f12eec2 100644 --- a/third_party/nixpkgs/doc/languages-frameworks/agda.section.md +++ b/third_party/nixpkgs/doc/languages-frameworks/agda.section.md @@ -146,7 +146,7 @@ agdaPackages.mkDerivation { ### Building Agda packages {#building-agda-packages} -The default build phase for `agdaPackages.mkDerivation` simply runs `agda` on the `Everything.agda` file. +The default build phase for `agdaPackages.mkDerivation` runs `agda` on the `Everything.agda` file. If something else is needed to build the package (e.g. `make`) then the `buildPhase` should be overridden. Additionally, a `preBuild` or `configurePhase` can be used if there are steps that need to be done prior to checking the `Everything.agda` file. `agda` and the Agda libraries contained in `buildInputs` are made available during the build phase. 
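A sketch of such a `buildPhase` override might look as follows; the package name, source, and library list are hypothetical and only serve to show where the override goes:

```nix
{ agdaPackages }:

agdaPackages.mkDerivation {
  pname = "my-agda-library";   # hypothetical
  version = "0.1.0";
  src = ./.;                   # placeholder source

  # Agda libraries listed here are available while building.
  buildInputs = [ agdaPackages.standard-library ];

  # Build with make instead of checking Everything.agda directly.
  buildPhase = ''
    runHook preBuild
    make
    runHook postBuild
  '';

  meta.description = "A hypothetical Agda library used as an example";
}
```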
@@ -250,7 +250,7 @@ Usually, the maintainers will answer within a week or two with a new release. Bumping the version of that reverse dependency should be a further commit on your PR. In the rare case that a new release is not to be expected within an acceptable time, -simply mark the broken package as broken by setting `meta.broken = true;`. +mark the broken package as broken by setting `meta.broken = true;`. This will exclude it from the build test. It can be added later when it is fixed, and does not hinder the advancement of the whole package set in the meantime. diff --git a/third_party/nixpkgs/doc/languages-frameworks/beam.section.md b/third_party/nixpkgs/doc/languages-frameworks/beam.section.md index 2cb4863fc5..1e83d4b93c 100644 --- a/third_party/nixpkgs/doc/languages-frameworks/beam.section.md +++ b/third_party/nixpkgs/doc/languages-frameworks/beam.section.md @@ -44,7 +44,7 @@ There is also a `buildMix` helper, whose behavior is closer to that of `buildErl ## How to Install BEAM Packages {#how-to-install-beam-packages} -BEAM builders are not registered at the top level, simply because they are not relevant to the vast majority of Nix users. +BEAM builders are not registered at the top level, because they are not relevant to the vast majority of Nix users. To use any of those builders into your environment, refer to them by their attribute path under `beamPackages`, e.g. `beamPackages.rebar3`: ::: {.example #ex-beam-ephemeral-shell} diff --git a/third_party/nixpkgs/doc/languages-frameworks/dart.section.md b/third_party/nixpkgs/doc/languages-frameworks/dart.section.md index b00327b78e..9da43714a1 100644 --- a/third_party/nixpkgs/doc/languages-frameworks/dart.section.md +++ b/third_party/nixpkgs/doc/languages-frameworks/dart.section.md @@ -8,10 +8,12 @@ It fetches its Dart dependencies automatically through `fetchDartDeps`, and (thr If you are packaging a Flutter desktop application, use [`buildFlutterApplication`](#ssec-dart-flutter) instead. -`vendorHash`: is the hash of the output of the dependency fetcher derivation. To obtain it, simply set it to `lib.fakeHash` (or omit it) and run the build ([more details here](#sec-source-hashes)). +`vendorHash`: is the hash of the output of the dependency fetcher derivation. To obtain it, set it to `lib.fakeHash` (or omit it) and run the build ([more details here](#sec-source-hashes)). If the upstream source is missing a `pubspec.lock` file, you'll have to vendor one and specify it using `pubspecLockFile`. If it is needed, one will be generated for you and printed when attempting to build the derivation. +The `depsListFile` must always be provided when packaging in Nixpkgs. It will be generated and printed if the derivation is attempted to be built without one. Alternatively, `autoDepsList` may be set to `true` only when outside of Nixpkgs, as it relies on import-from-derivation. + The `dart` commands run can be overridden through `pubGetScript` and `dartCompileCommand`, you can also add flags using `dartCompileFlags` or `dartJitFlags`. Dart supports multiple [outputs types](https://dart.dev/tools/dart-compile#types-of-output), you can choose between them using `dartOutputType` (defaults to `exe`). If you want to override the binaries path or the source path they come from, you can use `dartEntryPoints`. Outputs that require a runtime will automatically be wrapped with the relevant runtime (`dartaotruntime` for `aot-snapshot`, `dart run` for `jit-snapshot` and `kernel`, `node` for `js`), this can be overridden through `dartRuntimeCommand`. 
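To tie these options together, here is a hedged sketch (package name, source, and hash are placeholders; the attribute names are the ones described above):

```nix
{ buildDartApplication, lib }:

buildDartApplication {
  pname = "example-dart-tool";   # hypothetical
  version = "1.0.0";
  src = ./.;                     # placeholder source

  pubspecLockFile = ./pubspec.lock;
  depsListFile = ./deps.json;
  vendorHash = lib.fakeHash;     # replace after the first build attempt

  # Produce an AOT snapshot instead of the default `exe`; outputs that need a
  # runtime are wrapped automatically, and the wrapper command can be changed
  # with dartRuntimeCommand if necessary.
  dartOutputType = "aot-snapshot";
}
```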
@@ -31,6 +33,7 @@ buildDartApplication rec { }; pubspecLockFile = ./pubspec.lock; + depsListFile = ./deps.json; vendorHash = "sha256-Atm7zfnDambN/BmmUf4BG0yUz/y6xWzf0reDw3Ad41s="; } ``` @@ -39,9 +42,7 @@ buildDartApplication rec { The function `buildFlutterApplication` builds Flutter applications. -The deps.json file must always be provided when packaging in Nixpkgs. It will be generated and printed if the derivation is attempted to be built without one. Alternatively, `autoDepsList` may be set to `true` when outside of Nixpkgs, as it relies on import-from-derivation. - -A `pubspec.lock` file must be available. See the [Dart documentation](#ssec-dart-applications) for more details. +See the [Dart documentation](#ssec-dart-applications) for more details on required files and arguments. ```nix { flutter, fetchFromGitHub }: diff --git a/third_party/nixpkgs/doc/languages-frameworks/dhall.section.md b/third_party/nixpkgs/doc/languages-frameworks/dhall.section.md index 7322a61687..83567ab17a 100644 --- a/third_party/nixpkgs/doc/languages-frameworks/dhall.section.md +++ b/third_party/nixpkgs/doc/languages-frameworks/dhall.section.md @@ -323,7 +323,7 @@ $ nix-shell -p haskellPackages.dhall-nixpkgs nix-prefetch-git ``` :::{.note} -`nix-prefetch-git` has to be in `$PATH` for `dhall-to-nixpkgs` to work. +`nix-prefetch-git` is added to the `nix-shell -p` invocation above, because it has to be in `$PATH` for `dhall-to-nixpkgs` to work. ::: The utility takes care of automatically detecting remote imports and converting diff --git a/third_party/nixpkgs/doc/languages-frameworks/dotnet.section.md b/third_party/nixpkgs/doc/languages-frameworks/dotnet.section.md index 9ba0fef2a2..978ec07cb9 100644 --- a/third_party/nixpkgs/doc/languages-frameworks/dotnet.section.md +++ b/third_party/nixpkgs/doc/languages-frameworks/dotnet.section.md @@ -138,7 +138,9 @@ in buildDotnetModule rec { src = ./.; projectFile = "src/project.sln"; - nugetDeps = ./deps.nix; # File generated with `nix-build -A package.passthru.fetch-deps`. + # File generated with `nix-build -A package.passthru.fetch-deps`. + # To run fetch-deps when this file does not yet exist, set nugetDeps to null + nugetDeps = ./deps.nix; projectReferences = [ referencedProject ]; # `referencedProject` must contain `nupkg` in the folder structure. diff --git a/third_party/nixpkgs/doc/languages-frameworks/emscripten.section.md b/third_party/nixpkgs/doc/languages-frameworks/emscripten.section.md index 5f93dd5ff3..20d358f2e9 100644 --- a/third_party/nixpkgs/doc/languages-frameworks/emscripten.section.md +++ b/third_party/nixpkgs/doc/languages-frameworks/emscripten.section.md @@ -2,168 +2,159 @@ [Emscripten](https://github.com/kripken/emscripten): An LLVM-to-JavaScript Compiler -This section of the manual covers how to use `emscripten` in nixpkgs. +If you want to work with `emcc`, `emconfigure` and `emmake` as you are used to from Ubuntu and similar distributions, -Minimal requirements: - -* nix -* nixpkgs - -Modes of use of `emscripten`: - -* **Imperative usage** (on the command line): - - If you want to work with `emcc`, `emconfigure` and `emmake` as you are used to from Ubuntu and similar distributions you can use these commands: - - * `nix-env -f "" -iA emscripten` - * `nix-shell -p emscripten` - -* **Declarative usage**: - - This mode is far more power full since this makes use of `nix` for dependency management of emscripten libraries and targets by using the `mkDerivation` which is implemented by `pkgs.emscriptenStdenv` and `pkgs.buildEmscriptenPackage`. 
The source for the packages is in `pkgs/top-level/emscripten-packages.nix` and the abstraction behind it in `pkgs/development/em-modules/generic/default.nix`. From the root of the nixpkgs repository: - * build and install all packages: - * `nix-env -iA emscriptenPackages` - - * dev-shell for zlib implementation hacking: - * `nix-shell -A emscriptenPackages.zlib` - -## Imperative usage {#imperative-usage} +```console +nix-shell -p emscripten +``` A few things to note: * `export EMCC_DEBUG=2` is nice for debugging -* `~/.emscripten`, the build artifact cache sometimes creates issues and needs to be removed from time to time +* The build artifact cache in `~/.emscripten` sometimes creates issues and needs to be removed from time to time -## Declarative usage {#declarative-usage} +## Examples {#declarative-usage} Let's see two different examples from `pkgs/top-level/emscripten-packages.nix`: * `pkgs.zlib.override` * `pkgs.buildEmscriptenPackage` -Both are interesting concepts. +A special requirement of the `pkgs.buildEmscriptenPackage` is the `doCheck = true`. +This means each Emscripten package requires that a [`checkPhase`](#ssec-check-phase) is implemented. -A special requirement of the `pkgs.buildEmscriptenPackage` is the `doCheck = true` is a default meaning that each emscriptenPackage requires a `checkPhase` implemented. +* Use `export EMCC_DEBUG=2` from within a phase to get more detailed debug output what is going wrong. +* The cache at `~/.emscripten` requires to set `HOME=$TMPDIR` in individual phases. + This makes compilation slower but also more deterministic. -* Use `export EMCC_DEBUG=2` from within a emscriptenPackage's `phase` to get more detailed debug output what is going wrong. -* ~/.emscripten cache is requiring us to set `HOME=$TMPDIR` in individual phases. This makes compilation slower but also makes it more deterministic. +::: {.example #usage-1-pkgs.zlib.override} -### Usage 1: pkgs.zlib.override {#usage-1-pkgs.zlib.override} +# Using `pkgs.zlib.override {}` -This example uses `zlib` from nixpkgs but instead of compiling **C** to **ELF** it compiles **C** to **JS** since we were using `pkgs.zlib.override` and changed stdenv to `pkgs.emscriptenStdenv`. A few adaptions and hacks were set in place to make it working. One advantage is that when `pkgs.zlib` is updated, it will automatically update this package as well. However, this can also be the downside... +This example uses `zlib` from Nixpkgs, but instead of compiling **C** to **ELF** it compiles **C** to **JavaScript** since we were using `pkgs.zlib.override` and changed `stdenv` to `pkgs.emscriptenStdenv`. -See the `zlib` example: +A few adaptions and hacks were put in place to make it work. +One advantage is that when `pkgs.zlib` is updated, it will automatically update this package as well. - zlib = (pkgs.zlib.override { - stdenv = pkgs.emscriptenStdenv; - }).overrideAttrs - (old: rec { - buildInputs = old.buildInputs ++ [ pkg-config ]; - # we need to reset this setting! - env = (old.env or { }) // { NIX_CFLAGS_COMPILE = ""; }; - configurePhase = '' - # FIXME: Some tests require writing at $HOME - HOME=$TMPDIR - runHook preConfigure - #export EMCC_DEBUG=2 - emconfigure ./configure --prefix=$out --shared +```nix +(pkgs.zlib.override { + stdenv = pkgs.emscriptenStdenv; +}).overrideAttrs +(old: rec { + buildInputs = old.buildInputs ++ [ pkg-config ]; + # we need to reset this setting! 
+ env = (old.env or { }) // { NIX_CFLAGS_COMPILE = ""; }; + configurePhase = '' + # FIXME: Some tests require writing at $HOME + HOME=$TMPDIR + runHook preConfigure - runHook postConfigure - ''; - dontStrip = true; - outputs = [ "out" ]; - buildPhase = '' - emmake make - ''; - installPhase = '' - emmake make install - ''; - checkPhase = '' - echo "================= testing zlib using node =================" + #export EMCC_DEBUG=2 + emconfigure ./configure --prefix=$out --shared - echo "Compiling a custom test" - set -x - emcc -O2 -s EMULATE_FUNCTION_POINTER_CASTS=1 test/example.c -DZ_SOLO \ - libz.so.${old.version} -I . -o example.js + runHook postConfigure + ''; + dontStrip = true; + outputs = [ "out" ]; + buildPhase = '' + emmake make + ''; + installPhase = '' + emmake make install + ''; + checkPhase = '' + echo "================= testing zlib using node =================" - echo "Using node to execute the test" - ${pkgs.nodejs}/bin/node ./example.js + echo "Compiling a custom test" + set -x + emcc -O2 -s EMULATE_FUNCTION_POINTER_CASTS=1 test/example.c -DZ_SOLO \ + libz.so.${old.version} -I . -o example.js - set +x - if [ $? -ne 0 ]; then - echo "test failed for some reason" - exit 1; - else - echo "it seems to work! very good." - fi - echo "================= /testing zlib using node =================" - ''; + echo "Using node to execute the test" + ${pkgs.nodejs}/bin/node ./example.js - postPatch = pkgs.lib.optionalString pkgs.stdenv.isDarwin '' - substituteInPlace configure \ - --replace '/usr/bin/libtool' 'ar' \ - --replace 'AR="libtool"' 'AR="ar"' \ - --replace 'ARFLAGS="-o"' 'ARFLAGS="-r"' - ''; - }); + set +x + if [ $? -ne 0 ]; then + echo "test failed for some reason" + exit 1; + else + echo "it seems to work! very good." + fi + echo "================= /testing zlib using node =================" + ''; -### Usage 2: pkgs.buildEmscriptenPackage {#usage-2-pkgs.buildemscriptenpackage} + postPatch = pkgs.lib.optionalString pkgs.stdenv.isDarwin '' + substituteInPlace configure \ + --replace '/usr/bin/libtool' 'ar' \ + --replace 'AR="libtool"' 'AR="ar"' \ + --replace 'ARFLAGS="-o"' 'ARFLAGS="-r"' + ''; +}) +``` -This `xmlmirror` example features a emscriptenPackage which is defined completely from this context and no `pkgs.zlib.override` is used. +:::{.example #usage-2-pkgs.buildemscriptenpackage} - xmlmirror = pkgs.buildEmscriptenPackage rec { - name = "xmlmirror"; +# Using `pkgs.buildEmscriptenPackage {}` - buildInputs = [ pkg-config autoconf automake libtool gnumake libxml2 nodejs openjdk json_c ]; - nativeBuildInputs = [ pkg-config zlib ]; +This `xmlmirror` example features an Emscripten package that is defined completely from this context and no `pkgs.zlib.override` is used. 
- src = pkgs.fetchgit { - url = "https://gitlab.com/odfplugfest/xmlmirror.git"; - rev = "4fd7e86f7c9526b8f4c1733e5c8b45175860a8fd"; - hash = "sha256-i+QgY+5PYVg5pwhzcDnkfXAznBg3e8sWH2jZtixuWsk="; - }; +```nix +pkgs.buildEmscriptenPackage rec { + name = "xmlmirror"; - configurePhase = '' - rm -f fastXmlLint.js* - # a fix for ERROR:root:For asm.js, TOTAL_MEMORY must be a multiple of 16MB, was 234217728 - # https://gitlab.com/odfplugfest/xmlmirror/issues/8 - sed -e "s/TOTAL_MEMORY=234217728/TOTAL_MEMORY=268435456/g" -i Makefile.emEnv - # https://github.com/kripken/emscripten/issues/6344 - # https://gitlab.com/odfplugfest/xmlmirror/issues/9 - sed -e "s/\$(JSONC_LDFLAGS) \$(ZLIB_LDFLAGS) \$(LIBXML20_LDFLAGS)/\$(JSONC_LDFLAGS) \$(LIBXML20_LDFLAGS) \$(ZLIB_LDFLAGS) /g" -i Makefile.emEnv - # https://gitlab.com/odfplugfest/xmlmirror/issues/11 - sed -e "s/-o fastXmlLint.js/-s EXTRA_EXPORTED_RUNTIME_METHODS='[\"ccall\", \"cwrap\"]' -o fastXmlLint.js/g" -i Makefile.emEnv - ''; + buildInputs = [ pkg-config autoconf automake libtool gnumake libxml2 nodejs openjdk json_c ]; + nativeBuildInputs = [ pkg-config zlib ]; - buildPhase = '' - HOME=$TMPDIR - make -f Makefile.emEnv - ''; + src = pkgs.fetchgit { + url = "https://gitlab.com/odfplugfest/xmlmirror.git"; + rev = "4fd7e86f7c9526b8f4c1733e5c8b45175860a8fd"; + hash = "sha256-i+QgY+5PYVg5pwhzcDnkfXAznBg3e8sWH2jZtixuWsk="; + }; - outputs = [ "out" "doc" ]; + configurePhase = '' + rm -f fastXmlLint.js* + # a fix for ERROR:root:For asm.js, TOTAL_MEMORY must be a multiple of 16MB, was 234217728 + # https://gitlab.com/odfplugfest/xmlmirror/issues/8 + sed -e "s/TOTAL_MEMORY=234217728/TOTAL_MEMORY=268435456/g" -i Makefile.emEnv + # https://github.com/kripken/emscripten/issues/6344 + # https://gitlab.com/odfplugfest/xmlmirror/issues/9 + sed -e "s/\$(JSONC_LDFLAGS) \$(ZLIB_LDFLAGS) \$(LIBXML20_LDFLAGS)/\$(JSONC_LDFLAGS) \$(LIBXML20_LDFLAGS) \$(ZLIB_LDFLAGS) /g" -i Makefile.emEnv + # https://gitlab.com/odfplugfest/xmlmirror/issues/11 + sed -e "s/-o fastXmlLint.js/-s EXTRA_EXPORTED_RUNTIME_METHODS='[\"ccall\", \"cwrap\"]' -o fastXmlLint.js/g" -i Makefile.emEnv + ''; - installPhase = '' - mkdir -p $out/share - mkdir -p $doc/share/${name} + buildPhase = '' + HOME=$TMPDIR + make -f Makefile.emEnv + ''; - cp Demo* $out/share - cp -R codemirror-5.12 $out/share - cp fastXmlLint.js* $out/share - cp *.xsd $out/share - cp *.js $out/share - cp *.xhtml $out/share - cp *.html $out/share - cp *.json $out/share - cp *.rng $out/share - cp README.md $doc/share/${name} - ''; - checkPhase = '' + outputs = [ "out" "doc" ]; - ''; - }; + installPhase = '' + mkdir -p $out/share + mkdir -p $doc/share/${name} -### Declarative debugging {#declarative-debugging} + cp Demo* $out/share + cp -R codemirror-5.12 $out/share + cp fastXmlLint.js* $out/share + cp *.xsd $out/share + cp *.js $out/share + cp *.xhtml $out/share + cp *.html $out/share + cp *.json $out/share + cp *.rng $out/share + cp README.md $doc/share/${name} + ''; + checkPhase = '' + + ''; +} +``` + +::: + +## Debugging {#declarative-debugging} Use `nix-shell -I nixpkgs=/some/dir/nixpkgs -A emscriptenPackages.libz` and from there you can go trough the individual steps. This makes it easy to build a good `unit test` or list the files of the project. @@ -174,9 +165,3 @@ Use `nix-shell -I nixpkgs=/some/dir/nixpkgs -A emscriptenPackages.libz` and from 5. `configurePhase` 6. `buildPhase` 7. ... happy hacking... 
- -## Summary {#summary} - -Using this toolchain makes it easy to leverage `nix` from NixOS, MacOSX or even Windows (WSL+ubuntu+nix). This toolchain is reproducible, behaves like the rest of the packages from nixpkgs and contains a set of well working examples to learn and adapt from. - -If in trouble, ask the maintainers. diff --git a/third_party/nixpkgs/doc/languages-frameworks/go.section.md b/third_party/nixpkgs/doc/languages-frameworks/go.section.md index 7fd38a7d21..884ebcebf7 100644 --- a/third_party/nixpkgs/doc/languages-frameworks/go.section.md +++ b/third_party/nixpkgs/doc/languages-frameworks/go.section.md @@ -18,9 +18,9 @@ In the following is an example expression using `buildGoModule`, the following a To avoid updating this field when dependencies change, run `go mod vendor` in your source repo and set `vendorHash = null;` - To obtain the actual hash, set `vendorHash = lib.fakeSha256;` and run the build ([more details here](#sec-source-hashes)). + To obtain the actual hash, set `vendorHash = lib.fakeHash;` and run the build ([more details here](#sec-source-hashes)). - `proxyVendor`: Fetches (go mod download) and proxies the vendor directory. This is useful if your code depends on c code and go mod tidy does not include the needed sources to build or if any dependency has case-insensitive conflicts which will produce platform-dependent `vendorHash` checksums. -- `modPostBuild`: Shell commands to run after the build of the goModules executes `go mod vendor`, and before calculating fixed output derivation's `vendorHash` (or `vendorSha256`). Note that if you change this attribute, you need to update `vendorHash` (or `vendorSha256`) attribute. +- `modPostBuild`: Shell commands to run after the build of the goModules executes `go mod vendor`, and before calculating fixed output derivation's `vendorHash`. Note that if you change this attribute, you need to update `vendorHash` attribute. ```nix pet = buildGoModule rec { diff --git a/third_party/nixpkgs/doc/languages-frameworks/haskell.section.md b/third_party/nixpkgs/doc/languages-frameworks/haskell.section.md index 6b9ce32d17..b0b5f5c3bb 100644 --- a/third_party/nixpkgs/doc/languages-frameworks/haskell.section.md +++ b/third_party/nixpkgs/doc/languages-frameworks/haskell.section.md @@ -177,7 +177,7 @@ exactly one version. Those versions need to satisfy all the version constraints given in the `.cabal` file of your package and all its dependencies. The [Haskell builder in nixpkgs](#haskell-mkderivation) does no such thing. -It will simply take as input packages with names off the desired dependencies +It will take as input packages with names off the desired dependencies and just check whether they fulfill the version bounds and fail if they don’t (by default, see `jailbreak` to circumvent this). @@ -780,7 +780,7 @@ there instead. The top level `pkgs.haskell-language-server` attribute is just a convenience wrapper to make it possible to install HLS for multiple GHC versions at the same time. If you know, that you only use one GHC version, e.g., in a project -specific `nix-shell` you can simply use +specific `nix-shell` you can use `pkgs.haskellPackages.haskell-language-server` or `pkgs.haskell.packages.*.haskell-language-server` from the package set you use. 
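For example, a sketch of a project shell pinned to a single GHC package set; the `ghc94` attribute is only an example and should match the compiler the project actually uses:

```nix
{ pkgs }:

pkgs.mkShell {
  packages = [
    # Compiler and language server come from the same package set,
    # so their versions match.
    pkgs.haskell.packages.ghc94.ghc
    pkgs.haskell.packages.ghc94.haskell-language-server
  ];
}
```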
diff --git a/third_party/nixpkgs/doc/languages-frameworks/javascript.section.md b/third_party/nixpkgs/doc/languages-frameworks/javascript.section.md index f35fd83cc5..152974b465 100644 --- a/third_party/nixpkgs/doc/languages-frameworks/javascript.section.md +++ b/third_party/nixpkgs/doc/languages-frameworks/javascript.section.md @@ -13,7 +13,7 @@ If you find you are lacking inspiration for packing javascript applications, the ### Github {#javascript-finding-examples-github} - Searching Nix files for `mkYarnPackage`: -- Searching just `flake.nix` files for `mkYarnPackage`: +- Searching just `flake.nix` files for `mkYarnPackage`: ### Gitlab {#javascript-finding-examples-gitlab} @@ -209,6 +209,8 @@ In the default `installPhase` set by `buildNpmPackage`, it uses `npm pack --json * `npmPackFlags`: Flags to pass to `npm pack`. * `npmPruneFlags`: Flags to pass to `npm prune`. Defaults to the value of `npmInstallFlags`. * `makeWrapperArgs`: Flags to pass to `makeWrapper`, added to executable calling the generated `.js` with `node` as an interpreter. These scripts are defined in `package.json`. +* `nodejs`: The `nodejs` package to build against, using the corresponding `npm` shipped with that version of `node`. Defaults to `pkgs.nodejs`. +* `npmDeps`: The dependencies used to build the npm package. Especially useful to not have to recompute workspace depedencies. #### prefetch-npm-deps {#javascript-buildNpmPackage-prefetch-npm-deps} diff --git a/third_party/nixpkgs/doc/languages-frameworks/lisp.section.md b/third_party/nixpkgs/doc/languages-frameworks/lisp.section.md index 8712c34120..09193093b0 100644 --- a/third_party/nixpkgs/doc/languages-frameworks/lisp.section.md +++ b/third_party/nixpkgs/doc/languages-frameworks/lisp.section.md @@ -66,7 +66,7 @@ buildPhase = '' To save some work of writing Nix expressions, there is a script that imports all the packages distributed by Quicklisp into `imported.nix`. This works by parsing its `releases.txt` and `systems.txt` files, which are published every couple of -months on [quicklisp.org](http://beta.quicklisp.org/dist/quicklisp.txt). +months on [quicklisp.org](https://beta.quicklisp.org/dist/quicklisp.txt). The import process is implemented in the `import` directory as Common Lisp code in the `org.lispbuilds.nix` ASDF system. To run the script, one can @@ -268,7 +268,7 @@ getting an environment variable for `ext:getenv`. This will load the ### Loading systems {#lisp-loading-systems} -There, you can simply use `asdf:load-system`. This works by setting the right +There, you can use `asdf:load-system`. This works by setting the right values for the `CL_SOURCE_REGISTRY`/`ASDF_OUTPUT_TRANSLATIONS` environment variables, so that systems are found in the Nix store and pre-compiled FASLs are loaded. 
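Such an environment comes from the `withPackages` wrapper covered by this section. The sketch below assumes the `alexandria` system is packaged in `sbcl.pkgs` in the nixpkgs revision in use.

```nix
with import <nixpkgs> { };

# An SBCL whose ASDF sees the bundled systems; inside the resulting REPL,
# (asdf:load-system "alexandria") picks up the pre-compiled FASLs from the Nix store.
sbcl.withPackages (ps: [ ps.alexandria ])
```

Running the resulting `sbcl` and evaluating `(asdf:load-system "alexandria")` should then succeed without any further configuration.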
diff --git a/third_party/nixpkgs/doc/languages-frameworks/lua.section.md b/third_party/nixpkgs/doc/languages-frameworks/lua.section.md index c5049326a7..310ea88a86 100644 --- a/third_party/nixpkgs/doc/languages-frameworks/lua.section.md +++ b/third_party/nixpkgs/doc/languages-frameworks/lua.section.md @@ -134,11 +134,11 @@ The site proposes two types of packages, the `rockspec` and the `src.rock` Luarocks-based packages are generated in [pkgs/development/lua-modules/generated-packages.nix](https://github.com/NixOS/nixpkgs/tree/master/pkgs/development/lua-modules/generated-packages.nix) from the whitelist maintainers/scripts/luarocks-packages.csv and updated by running -the script -[maintainers/scripts/update-luarocks-packages](https://github.com/NixOS/nixpkgs/tree/master/maintainers/scripts/update-luarocks-packages): +the package `luarocks-packages-updater`: ```sh -./maintainers/scripts/update-luarocks-packages update + +nix-shell -p luarocks-packages-updater --run luarocks-packages-updater ``` [luarocks2nix](https://github.com/nix-community/luarocks) is a tool capable of generating nix derivations from both rockspec and src.rock (and favors the src.rock). diff --git a/third_party/nixpkgs/doc/languages-frameworks/maven.section.md b/third_party/nixpkgs/doc/languages-frameworks/maven.section.md index 7e287a097c..b86733a758 100644 --- a/third_party/nixpkgs/doc/languages-frameworks/maven.section.md +++ b/third_party/nixpkgs/doc/languages-frameworks/maven.section.md @@ -53,7 +53,7 @@ After setting `maven.buildMavenPackage`, we then do standard Java `.jar` install Maven defines default versions for its core plugins, e.g. `maven-compiler-plugin`. If your project does not override these versions, an upgrade of Maven will change the version of the used plugins, and therefore the derivation and hash. -When `maven` is upgraded, `mvnHash` for the derivation must be updated as well: otherwise, the project will simply be built on the derivation of old plugins, and fail because the requested plugins are missing. +When `maven` is upgraded, `mvnHash` for the derivation must be updated as well: otherwise, the project will be built on the derivation of old plugins, and fail because the requested plugins are missing. This clearly prevents automatic upgrades of Maven: a manual effort must be made throughout nixpkgs by any maintainer wishing to push the upgrades. diff --git a/third_party/nixpkgs/doc/languages-frameworks/php.section.md b/third_party/nixpkgs/doc/languages-frameworks/php.section.md index 377e3947b2..154d8174f9 100644 --- a/third_party/nixpkgs/doc/languages-frameworks/php.section.md +++ b/third_party/nixpkgs/doc/languages-frameworks/php.section.md @@ -58,7 +58,7 @@ php.withExtensions ({ enabled, all }: ++ [ all.imagick ]) ``` -To build your list of extensions from the ground up, you can simply +To build your list of extensions from the ground up, you can ignore `enabled`: ```nix @@ -140,7 +140,7 @@ Example of building `composer` with additional extensions: ### Overriding PHP packages {#ssec-php-user-guide-overriding-packages} `php-packages.nix` form a scope, allowing us to override the packages defined -within. For example, to apply a patch to a `mysqlnd` extension, you can simply +within. For example, to apply a patch to a `mysqlnd` extension, you can pass an overlay-style function to `php`’s `packageOverrides` argument: ```nix @@ -191,7 +191,7 @@ using the `bin` attribute in `composer.json`, these binaries will be automatically linked and made accessible in the derivation. 
In this context, "binaries" refer to PHP scripts that are intended to be executable. -To use the helper effectively, simply add the `vendorHash` attribute, which +To use the helper effectively, add the `vendorHash` attribute, which enables the wrapper to handle the heavy lifting. Internally, the helper operates in three stages: diff --git a/third_party/nixpkgs/doc/languages-frameworks/python.section.md b/third_party/nixpkgs/doc/languages-frameworks/python.section.md index 40236d141d..19d4496eef 100644 --- a/third_party/nixpkgs/doc/languages-frameworks/python.section.md +++ b/third_party/nixpkgs/doc/languages-frameworks/python.section.md @@ -9,9 +9,10 @@ | python27 | python2, python | CPython 2.7 | | python38 | | CPython 3.8 | | python39 | | CPython 3.9 | -| python310 | python3 | CPython 3.10 | -| python311 | | CPython 3.11 | +| python310 | | CPython 3.10 | +| python311 | python3 | CPython 3.11 | | python312 | | CPython 3.12 | +| python313 | | CPython 3.13 | | pypy27 | pypy2, pypy | PyPy2.7 | | pypy39 | pypy3 | PyPy 3.9 | @@ -63,12 +64,14 @@ sets are * `pkgs.python39Packages` * `pkgs.python310Packages` * `pkgs.python311Packages` +* `pkgs.python312Packages` +* `pkgs.python313Packages` * `pkgs.pypyPackages` and the aliases * `pkgs.python2Packages` pointing to `pkgs.python27Packages` -* `pkgs.python3Packages` pointing to `pkgs.python310Packages` +* `pkgs.python3Packages` pointing to `pkgs.python311Packages` * `pkgs.pythonPackages` pointing to `pkgs.python2Packages` #### `buildPythonPackage` function {#buildpythonpackage-function} @@ -141,7 +144,7 @@ buildPythonPackage rec { The `buildPythonPackage` mainly does four things: -* In the [`buildPhase`](#build-phase), it calls `${python.pythonForBuild.interpreter} setup.py bdist_wheel` to +* In the [`buildPhase`](#build-phase), it calls `${python.pythonOnBuildForHost.interpreter} setup.py bdist_wheel` to build a wheel binary zipfile. * In the [`installPhase`](#ssec-install-phase), it installs the wheel file using `pip install *.whl`. * In the [`postFixup`](#var-stdenv-postFixup) phase, the `wrapPythonPrograms` bash function is called to @@ -261,7 +264,7 @@ python3MyBlas = pkgs.python3.override { ``` This is particularly useful for numpy and scipy users who want to gain speed with other blas implementations. -Note that using simply `scipy = super.scipy.override { blas = super.pkgs.mkl; };` will likely result in +Note that using `scipy = super.scipy.override { blas = super.pkgs.mkl; };` will likely result in compilation issues, because scipy dependencies need to use the same blas implementation as well. #### `buildPythonApplication` function {#buildpythonapplication-function} @@ -277,16 +280,16 @@ the packages with the version of the interpreter. Because this is irrelevant for applications, the prefix is omitted. 
When packaging a Python application with [`buildPythonApplication`](#buildpythonapplication-function), it should be -called with `callPackage` and passed `python` or `pythonPackages` (possibly +called with `callPackage` and passed `python3` or `python3Packages` (possibly specifying an interpreter version), like this: ```nix { lib -, python3 +, python3Packages , fetchPypi }: -python3.pkgs.buildPythonApplication rec { +python3Packages.buildPythonApplication rec { pname = "luigi"; version = "2.7.9"; pyproject = true; @@ -297,13 +300,13 @@ python3.pkgs.buildPythonApplication rec { }; nativeBuildInputs = [ - python3.pkgs.setuptools - python3.pkgs.wheel + python3Packages.setuptools + python3Packages.wheel ]; - propagatedBuildInputs = with python3.pkgs; [ - tornado - python-daemon + propagatedBuildInputs = [ + python3Packages.tornado + python3Packages.python-daemon ]; meta = with lib; { @@ -319,7 +322,7 @@ luigi = callPackage ../applications/networking/cluster/luigi { }; ``` Since the package is an application, a consumer doesn't need to care about -Python versions or modules, which is why they don't go in `pythonPackages`. +Python versions or modules, which is why they don't go in `python3Packages`. #### `toPythonApplication` function {#topythonapplication-function} @@ -335,7 +338,7 @@ the attribute in `python-packages.nix`, and the `toPythonApplication` shall be applied to the reference: ```nix -youtube-dl = with pythonPackages; toPythonApplication youtube-dl; +youtube-dl = with python3Packages; toPythonApplication youtube-dl; ``` #### `toPythonModule` function {#topythonmodule-function} @@ -364,8 +367,8 @@ Saving the following as `default.nix` ```nix with import {}; -python.buildEnv.override { - extraLibs = [ pythonPackages.pyramid ]; +python3.buildEnv.override { + extraLibs = [ python3Packages.pyramid ]; ignoreCollisions = true; } ``` @@ -430,7 +433,7 @@ python3.withPackages (ps: [ ps.pyramid ]) Now, `ps` is set to `python3Packages`, matching the version of the interpreter. -As [`python.withPackages`](#python.withpackages-function) simply uses [`python.buildEnv`](#python.buildenv-function) under the hood, it also +As [`python.withPackages`](#python.withpackages-function) uses [`python.buildEnv`](#python.buildenv-function) under the hood, it also supports the `env` attribute. The `shell.nix` file from the previous section can thus be also written like this: @@ -495,9 +498,9 @@ Given a `default.nix`: ```nix with import {}; -pythonPackages.buildPythonPackage { +python3Packages.buildPythonPackage { name = "myproject"; - buildInputs = with pythonPackages; [ pyramid ]; + buildInputs = with python3Packages; [ pyramid ]; src = ./.; } @@ -509,7 +512,7 @@ the package would be built with `nix-build`. Shortcut to setup environments with C headers/libraries and Python packages: ```shell -nix-shell -p pythonPackages.pyramid zlib libjpeg git +nix-shell -p python3Packages.pyramid zlib libjpeg git ``` ::: {.note} @@ -524,7 +527,7 @@ There is a boolean value `lib.inNixShell` set to `true` if nix-shell is invoked. Several versions of the Python interpreter are available on Nix, as well as a high amount of packages. The attribute `python3` refers to the default -interpreter, which is currently CPython 3.10. The attribute `python` refers to +interpreter, which is currently CPython 3.11. The attribute `python` refers to CPython 2.7 for backwards-compatibility. It is also possible to refer to specific versions, e.g. `python311` refers to CPython 3.11, and `pypy` refers to the default PyPy interpreter. 
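For illustration, a small sketch that uses only attributes named above, placing the default alias next to an explicitly pinned interpreter:

```nix
with import <nixpkgs> { };

{
  # Follows the python3 alias (currently CPython 3.11).
  defaultEnv = python3.withPackages (ps: [ ps.numpy ]);

  # Pinned to a specific interpreter version, independent of the alias.
  pinnedEnv = python311.withPackages (ps: [ ps.numpy ]);
}
```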
@@ -542,7 +545,7 @@ however, are in separate sets, with one set per interpreter version. The interpreters have several common attributes. One of these attributes is `pkgs`, which is a package set of Python libraries for this specific interpreter. E.g., the `toolz` package corresponding to the default interpreter -is `python.pkgs.toolz`, and the CPython 3.11 version is `python311.pkgs.toolz`. +is `python3.pkgs.toolz`, and the CPython 3.11 version is `python311.pkgs.toolz`. The main package set contains aliases to these package sets, e.g. `pythonPackages` refers to `python.pkgs` and `python311Packages` to `python311.pkgs`. @@ -679,7 +682,7 @@ b = np.array([3,4]) print(f"The dot product of {a} and {b} is: {np.dot(a, b)}") ``` -Then we simply execute it, without requiring any environment setup at all! +Then we execute it, without requiring any environment setup at all! ```sh $ ./foo.py @@ -1681,7 +1684,7 @@ of such package using the feature is `pkgs/tools/X11/xpra/default.nix`. As workaround install it as an extra `preInstall` step: ```shell -${python.pythonForBuild.interpreter} setup.py install_data --install-dir=$out --root=$out +${python.pythonOnBuildForHost.interpreter} setup.py install_data --install-dir=$out --root=$out sed -i '/ = data\_files/d' setup.py ``` @@ -1710,7 +1713,7 @@ This is an example of a `default.nix` for a `nix-shell`, which allows to consume a virtual environment created by `venv`, and install Python modules through `pip` the traditional way. -Create this `default.nix` file, together with a `requirements.txt` and simply +Create this `default.nix` file, together with a `requirements.txt` and execute `nix-shell`. ```nix @@ -1834,7 +1837,7 @@ If you need to change a package's attribute(s) from `configuration.nix` you coul }; ``` -`pythonPackages.twisted` is now globally overridden. +`python3Packages.twisted` is now globally overridden. All packages and also all NixOS services that reference `twisted` (such as `services.buildbot-worker`) now use the new definition. Note that `python-super` refers to the old package set and `python-self` @@ -1844,7 +1847,7 @@ To modify only a Python package set instead of a whole Python derivation, use this snippet: ```nix - myPythonPackages = pythonPackages.override { + myPythonPackages = python3Packages.override { overrides = self: super: { twisted = ...; }; @@ -2024,7 +2027,9 @@ The following rules are desired to be respected: disabled individually. Try to avoid disabling the tests altogether. In any case, when you disable tests, leave a comment explaining why. * Commit names of Python libraries should reflect that they are Python - libraries, so write for example `pythonPackages.numpy: 1.11 -> 1.12`. + libraries, so write for example `python311Packages.numpy: 1.11 -> 1.12`. + It is highly recommended to specify the current default version to enable + automatic build by ofborg. * Attribute names in `python-packages.nix` as well as `pname`s should match the library's name on PyPI, but be normalized according to [PEP 0503](https://www.python.org/dev/peps/pep-0503/#normalized-names). This means diff --git a/third_party/nixpkgs/doc/languages-frameworks/ruby.section.md b/third_party/nixpkgs/doc/languages-frameworks/ruby.section.md index d3b896686c..920c84eee6 100644 --- a/third_party/nixpkgs/doc/languages-frameworks/ruby.section.md +++ b/third_party/nixpkgs/doc/languages-frameworks/ruby.section.md @@ -94,7 +94,7 @@ $ bundle lock $ bundix ``` -If you already have a `Gemfile.lock`, you can simply run `bundix` and it will work the same. 
+If you already have a `Gemfile.lock`, you can run `bundix` and it will work the same. To update the gems in your `Gemfile.lock`, you may use the `bundix -l` flag, which will create a new `Gemfile.lock` in case the `Gemfile` has a more recent time of modification. @@ -251,7 +251,7 @@ source 'https://rubygems.org' do end ``` -If you want to package a specific version, you can use the standard Gemfile syntax for that, e.g. `gem 'mdl', '0.5.0'`, but if you want the latest stable version anyway, it's easier to update by simply running the `bundle lock` and `bundix` steps again. +If you want to package a specific version, you can use the standard Gemfile syntax for that, e.g. `gem 'mdl', '0.5.0'`, but if you want the latest stable version anyway, it's easier to update by running the `bundle lock` and `bundix` steps again. Now you can also make a `default.nix` that looks like this: diff --git a/third_party/nixpkgs/doc/languages-frameworks/rust.section.md b/third_party/nixpkgs/doc/languages-frameworks/rust.section.md index 3bd8e1c765..d18b048b91 100644 --- a/third_party/nixpkgs/doc/languages-frameworks/rust.section.md +++ b/third_party/nixpkgs/doc/languages-frameworks/rust.section.md @@ -939,3 +939,68 @@ Fenix also has examples with `buildRustPackage`, [crane](https://github.com/ipetkov/crane), [naersk](https://github.com/nix-community/naersk), and cross compilation in its [Examples](https://github.com/nix-community/fenix#examples) section. + +## Using `git bisect` on the Rust compiler {#using-git-bisect-on-the-rust-compiler} + +Sometimes an upgrade of the Rust compiler (`rustc`) will break a +downstream package. In these situations, being able to `git bisect` +the `rustc` version history to find the offending commit is quite +useful. Nixpkgs makes it easy to do this. + +First, roll back your nixpkgs to a commit in which its `rustc` used +*the most recent one which doesn't have the problem.* You'll need +to do this because of `rustc`'s extremely aggressive +version-pinning. + +Next, add the following overlay, updating the Rust version to the +one in your rolled-back nixpkgs, and replacing `/git/scratch/rust` +with the path into which you have `git clone`d the `rustc` git +repository: + +```nix + (final: prev: /*lib.optionalAttrs prev.stdenv.targetPlatform.isAarch64*/ { + rust_1_72 = + lib.updateManyAttrsByPath [{ + path = [ "packages" "stable" ]; + update = old: old.overrideScope(final: prev: { + rustc = prev.rustc.overrideAttrs (_: { + src = lib.cleanSource /git/scratch/rust; + # do *not* put passthru.isReleaseTarball=true here + }); + }); + }] + prev.rust_1_72; + }) +``` + +If the problem you're troubleshooting only manifests when +cross-compiling you can uncomment the `lib.optionalAttrs` in the +example above, and replace `isAarch64` with the target that is +having problems. This will speed up your bisect quite a bit, since +the host compiler won't need to be rebuilt. + +Now, you can start a `git bisect` in the directory where you checked +out the `rustc` source code. It is recommended to select the +endpoint commits by searching backwards from `origin/master` for the +*commits which added the release notes for the versions in +question.* If you set the endpoints to commits on the release +branches (i.e. the release tags), git-bisect will often get confused +by the complex merge-commit structures it will need to traverse. 
+ +The command loop you'll want to use for bisecting looks like this: + +```bash +git bisect {good,bad} # depending on result of last build +git submodule update --init +CARGO_NET_OFFLINE=false cargo vendor \ + --sync ./src/tools/cargo/Cargo.toml \ + --sync ./src/tools/rust-analyzer/Cargo.toml \ + --sync ./compiler/rustc_codegen_cranelift/Cargo.toml \ + --sync ./src/bootstrap/Cargo.toml +nix-build $NIXPKGS -A package-broken-by-rust-changes +``` + +The `git submodule update --init` and `cargo vendor` commands above +require network access, so they can't be performed from within the +`rustc` derivation, unfortunately. + diff --git a/third_party/nixpkgs/doc/languages-frameworks/swift.section.md b/third_party/nixpkgs/doc/languages-frameworks/swift.section.md index 1cc452cc9b..213d444f49 100644 --- a/third_party/nixpkgs/doc/languages-frameworks/swift.section.md +++ b/third_party/nixpkgs/doc/languages-frameworks/swift.section.md @@ -32,7 +32,7 @@ look for the following directories: (If not targeting macOS, replace `macosx` with the Xcode platform name.) - On other platforms: `lib/swift/linux/x86_64` (Where `linux` and `x86_64` are from lowercase `uname -sm`.) -- For convenience, Nixpkgs also adds simply `lib/swift` to the search path. +- For convenience, Nixpkgs also adds `lib/swift` to the search path. This can save a bit of work packaging Swift modules, because many Nix builds will produce output for just one target any way. @@ -123,7 +123,7 @@ swiftpmFlags = [ "--disable-dead-strip" ]; The default `buildPhase` already passes `-j` for parallel building. -If these two customization options are insufficient, simply provide your own +If these two customization options are insufficient, provide your own `buildPhase` that invokes `swift build`. ### Running tests {#ssec-swiftpm-running-tests} diff --git a/third_party/nixpkgs/doc/languages-frameworks/texlive.section.md b/third_party/nixpkgs/doc/languages-frameworks/texlive.section.md index a4c81daa54..2ba846dc49 100644 --- a/third_party/nixpkgs/doc/languages-frameworks/texlive.section.md +++ b/third_party/nixpkgs/doc/languages-frameworks/texlive.section.md @@ -2,6 +2,46 @@ Since release 15.09 there is a new TeX Live packaging that lives entirely under attribute `texlive`. +## User's guide (experimental new interface) {#sec-language-texlive-user-guide-experimental} + +Release 23.11 ships with a new interface that will eventually replace `texlive.combine`. + +- For basic usage, use some of the prebuilt environments available at the top level, such as `texliveBasic`, `texliveSmall`. For the full list of prebuilt environments, inspect `texlive.schemes`. + +- Packages cannot be used directly but must be assembled in an environment. To create or add packages to an environment, use + ```nix + texliveSmall.withPackages (ps: with ps; [ collection-langkorean algorithms cm-super ]) + ``` + The function `withPackages` can be called multiple times to add more packages. + + - **Note.** Within Nixpkgs, packages should only use prebuilt environments as inputs, such as `texliveSmall` or `texliveInfraOnly`, and should not depend directly on `texlive`. Further dependencies should be added by calling `withPackages`. This is to ensure that there is a consistent and simple way to override the inputs. + +- `texlive.withPackages` uses the same logic as `buildEnv`. Only parts of a package are installed in an environment: its 'runtime' files (`tex` output), binaries (`out` output), and support files (`tlpkg` output). 
Moreover, man and info pages are assembled into separate `man` and `info` outputs. To add only the TeX files of a package, or its documentation (`texdoc` output), just specify the outputs: + ```nix + texlive.withPackages (ps: with ps; [ + texdoc # recommended package to navigate the documentation + perlPackages.LaTeXML.tex # tex files of LaTeXML, omit binaries + cm-super + cm-super.texdoc # documentation of cm-super + ]) + ``` + +- All packages distributed by TeX Live, which contains most of CTAN, are available and can be found under `texlive.pkgs`: + ```ShellSession + $ nix repl + nix-repl> :l + nix-repl> texlive.pkgs.[TAB] + ``` + Note that the packages in `texlive.pkgs` are only provided for search purposes and must not be used directly. + +- **Experimental and subject to change without notice:** to add the documentation for all packages in the environment, use + ```nix + texliveSmall.__overrideTeXConfig { withDocs = true; } + ``` + This can be applied before or after calling `withPackages`. + + The function currently support the parameters `withDocs`, `withSources`, and `requireTeXPackages`. + ## User's guide {#sec-language-texlive-user-guide} - For basic usage just pull `texlive.combined.scheme-basic` for an environment with basic LaTeX support. @@ -38,6 +78,24 @@ Since release 15.09 there is a new TeX Live packaging that lives entirely under - Note that the wrapper assumes that the result has a chance to be useful. For example, the core executables should be present, as well as some core data files. The supported way of ensuring this is by including some scheme, for example `scheme-basic`, into the combination. +- TeX Live packages are also available under `texlive.pkgs` as derivations with outputs `out`, `tex`, `texdoc`, `texsource`, `tlpkg`, `man`, `info`. They cannot be installed outside of `texlive.combine` but are available for other uses. To repackage a font, for instance, use + + ```nix + stdenvNoCC.mkDerivation rec { + src = texlive.pkgs.iwona; + + inherit (src) pname version; + + installPhase = '' + runHook preInstall + install -Dm644 fonts/opentype/nowacki/iwona/*.otf -t $out/share/fonts/opentype + runHook postInstall + ''; + } + ``` + + See `biber`, `iwona` for complete examples. + ## Custom packages {#sec-language-texlive-custom-packages} You may find that you need to use an external TeX package. A derivation for such package has to provide the contents of the "texmf" directory in its output and provide the appropriate `tlType` attribute (one of `"run"`, `"bin"`, `"doc"`, `"source"`). Dependencies on other TeX packages can be listed in the attribute `tlDeps`. 
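As an illustration, here is a minimal, hypothetical sketch of such a derivation. The `tlType` and `tlDeps` attributes follow the conventions just described; the package name, version, and file layout are made up, and the attributes are exposed via `passthru` on the assumption that the combining machinery reads them from there.

```nix
{ stdenvNoCC, texlive }:

stdenvNoCC.mkDerivation {
  pname = "latex-mystyle";   # hypothetical package
  version = "1.0";
  src = ./mystyle;           # assumed to contain mystyle.sty

  # Conventions described above:
  passthru.tlType = "run";
  passthru.tlDeps = with texlive; [ latex ];

  dontBuild = true;

  installPhase = ''
    runHook preInstall
    # $out holds the contents of a "texmf" tree.
    install -Dm644 *.sty -t "$out/tex/latex/mystyle"
    runHook postInstall
  '';
}
```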
diff --git a/third_party/nixpkgs/doc/manual.md.in b/third_party/nixpkgs/doc/manual.md.in index 6b8d351380..52971ff526 100644 --- a/third_party/nixpkgs/doc/manual.md.in +++ b/third_party/nixpkgs/doc/manual.md.in @@ -9,7 +9,7 @@ preface.chapter.md using-nixpkgs.md lib.md stdenv.md -builders.md +build-helpers.md development.md contributing.md ``` diff --git a/third_party/nixpkgs/doc/builders/packages/cataclysm-dda.section.md b/third_party/nixpkgs/doc/packages/cataclysm-dda.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/packages/cataclysm-dda.section.md rename to third_party/nixpkgs/doc/packages/cataclysm-dda.section.md diff --git a/third_party/nixpkgs/doc/builders/packages/citrix.section.md b/third_party/nixpkgs/doc/packages/citrix.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/packages/citrix.section.md rename to third_party/nixpkgs/doc/packages/citrix.section.md diff --git a/third_party/nixpkgs/doc/builders/special/darwin-builder.section.md b/third_party/nixpkgs/doc/packages/darwin-builder.section.md similarity index 87% rename from third_party/nixpkgs/doc/builders/special/darwin-builder.section.md rename to third_party/nixpkgs/doc/packages/darwin-builder.section.md index e37fabe01a..89c2445667 100644 --- a/third_party/nixpkgs/doc/builders/special/darwin-builder.section.md +++ b/third_party/nixpkgs/doc/packages/darwin-builder.section.md @@ -1,10 +1,10 @@ # darwin.linux-builder {#sec-darwin-builder} -`darwin.linux-builder` provides a way to bootstrap a Linux builder on a macOS machine. +`darwin.linux-builder` provides a way to bootstrap a Linux remote builder on a macOS machine. This requires macOS version 12.4 or later. -The builder runs on host port 31022 by default. +The remote builder runs on host port 31022 by default. You can change it by overriding `virtualisation.darwin-builder.hostPort`. See the [example](#sec-darwin-builder-example-flake). @@ -15,7 +15,7 @@ words, your `/etc/nix/nix.conf` should have something like: extra-trusted-users = ``` -To launch the builder, run the following flake: +To launch the remote builder, run the following flake: ```ShellSession $ nix run nixpkgs#darwin.linux-builder @@ -57,7 +57,7 @@ builders = ssh-ng://builder@linux-builder ${ARCH}-linux /etc/nix/builder_ed25519 builders-use-substitutes = true ``` -To allow Nix to connect to a builder not running on port 22, you will also need to create a new file at `/etc/ssh/ssh_config.d/100-linux-builder.conf`: +To allow Nix to connect to a remote builder not running on port 22, you will also need to create a new file at `/etc/ssh/ssh_config.d/100-linux-builder.conf`: ``` Host linux-builder @@ -130,11 +130,11 @@ $ sudo launchctl kickstart -k system/org.nixos.nix-daemon } ``` -## Reconfiguring the builder {#sec-darwin-builder-reconfiguring} +## Reconfiguring the remote builder {#sec-darwin-builder-reconfiguring} -Initially you should not change the builder configuration else you will not be -able to use the binary cache. However, after you have the builder running locally -you may use it to build a modified builder with additional storage or memory. +Initially you should not change the remote builder configuration else you will not be +able to use the binary cache. However, after you have the remote builder running locally +you may use it to build a modified remote builder with additional storage or memory. To do this, you just need to set the `virtualisation.darwin-builder.*` parameters as in the example below and rebuild. 
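Such settings would look roughly like the sketch below. Only `hostPort` is named earlier in this section; `diskSize` and `memorySize` are assumed option names for the storage and memory knobs mentioned here, so check the module for the exact names and units before relying on them.

```nix
{
  virtualisation.darwin-builder = {
    hostPort = 31022;       # documented above
    diskSize = 40 * 1024;   # assumption: VM disk size
    memorySize = 8 * 1024;  # assumption: VM memory
  };
}
```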
diff --git a/third_party/nixpkgs/doc/builders/packages/dlib.section.md b/third_party/nixpkgs/doc/packages/dlib.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/packages/dlib.section.md rename to third_party/nixpkgs/doc/packages/dlib.section.md diff --git a/third_party/nixpkgs/doc/builders/packages/eclipse.section.md b/third_party/nixpkgs/doc/packages/eclipse.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/packages/eclipse.section.md rename to third_party/nixpkgs/doc/packages/eclipse.section.md diff --git a/third_party/nixpkgs/doc/builders/packages/elm.section.md b/third_party/nixpkgs/doc/packages/elm.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/packages/elm.section.md rename to third_party/nixpkgs/doc/packages/elm.section.md diff --git a/third_party/nixpkgs/doc/builders/packages/emacs.section.md b/third_party/nixpkgs/doc/packages/emacs.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/packages/emacs.section.md rename to third_party/nixpkgs/doc/packages/emacs.section.md diff --git a/third_party/nixpkgs/doc/builders/packages/etc-files.section.md b/third_party/nixpkgs/doc/packages/etc-files.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/packages/etc-files.section.md rename to third_party/nixpkgs/doc/packages/etc-files.section.md diff --git a/third_party/nixpkgs/doc/builders/packages/firefox.section.md b/third_party/nixpkgs/doc/packages/firefox.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/packages/firefox.section.md rename to third_party/nixpkgs/doc/packages/firefox.section.md diff --git a/third_party/nixpkgs/doc/builders/packages/fish.section.md b/third_party/nixpkgs/doc/packages/fish.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/packages/fish.section.md rename to third_party/nixpkgs/doc/packages/fish.section.md diff --git a/third_party/nixpkgs/doc/builders/packages/fuse.section.md b/third_party/nixpkgs/doc/packages/fuse.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/packages/fuse.section.md rename to third_party/nixpkgs/doc/packages/fuse.section.md diff --git a/third_party/nixpkgs/doc/builders/packages/ibus.section.md b/third_party/nixpkgs/doc/packages/ibus.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/packages/ibus.section.md rename to third_party/nixpkgs/doc/packages/ibus.section.md diff --git a/third_party/nixpkgs/doc/builders/packages/index.md b/third_party/nixpkgs/doc/packages/index.md similarity index 95% rename from third_party/nixpkgs/doc/builders/packages/index.md rename to third_party/nixpkgs/doc/packages/index.md index 1f44357024..1f45018ffc 100644 --- a/third_party/nixpkgs/doc/builders/packages/index.md +++ b/third_party/nixpkgs/doc/packages/index.md @@ -4,6 +4,7 @@ This chapter contains information about how to use and maintain the Nix expressi ```{=include=} sections citrix.section.md +darwin-builder.section.md dlib.section.md eclipse.section.md elm.section.md diff --git a/third_party/nixpkgs/doc/builders/packages/kakoune.section.md b/third_party/nixpkgs/doc/packages/kakoune.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/packages/kakoune.section.md rename to third_party/nixpkgs/doc/packages/kakoune.section.md diff --git a/third_party/nixpkgs/doc/builders/packages/linux.section.md b/third_party/nixpkgs/doc/packages/linux.section.md similarity index 100% rename from 
third_party/nixpkgs/doc/builders/packages/linux.section.md rename to third_party/nixpkgs/doc/packages/linux.section.md diff --git a/third_party/nixpkgs/doc/builders/packages/locales.section.md b/third_party/nixpkgs/doc/packages/locales.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/packages/locales.section.md rename to third_party/nixpkgs/doc/packages/locales.section.md diff --git a/third_party/nixpkgs/doc/builders/packages/nginx.section.md b/third_party/nixpkgs/doc/packages/nginx.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/packages/nginx.section.md rename to third_party/nixpkgs/doc/packages/nginx.section.md diff --git a/third_party/nixpkgs/doc/builders/packages/opengl.section.md b/third_party/nixpkgs/doc/packages/opengl.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/packages/opengl.section.md rename to third_party/nixpkgs/doc/packages/opengl.section.md diff --git a/third_party/nixpkgs/doc/builders/packages/shell-helpers.section.md b/third_party/nixpkgs/doc/packages/shell-helpers.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/packages/shell-helpers.section.md rename to third_party/nixpkgs/doc/packages/shell-helpers.section.md diff --git a/third_party/nixpkgs/doc/builders/packages/steam.section.md b/third_party/nixpkgs/doc/packages/steam.section.md similarity index 89% rename from third_party/nixpkgs/doc/builders/packages/steam.section.md rename to third_party/nixpkgs/doc/packages/steam.section.md index 25728aa52a..a1e88b0d97 100644 --- a/third_party/nixpkgs/doc/builders/packages/steam.section.md +++ b/third_party/nixpkgs/doc/packages/steam.section.md @@ -11,7 +11,7 @@ Nix problems and constraints: - The `steam.sh` script in `$HOME` cannot be patched, as it is checked and rewritten by steam. - The steam binary cannot be patched, it's also checked. -The current approach to deploy Steam in NixOS is composing a FHS-compatible chroot environment, as documented [here](http://sandervanderburg.blogspot.nl/2013/09/composing-fhs-compatible-chroot.html). This allows us to have binaries in the expected paths without disrupting the system, and to avoid patching them to work in a non FHS environment. +The current approach to deploy Steam in NixOS is composing a FHS-compatible chroot environment, as documented [here](https://sandervanderburg.blogspot.com/2013/09/composing-fhs-compatible-chroot.html). This allows us to have binaries in the expected paths without disrupting the system, and to avoid patching them to work in a non FHS environment. ## How to play {#sec-steam-play} diff --git a/third_party/nixpkgs/doc/builders/packages/urxvt.section.md b/third_party/nixpkgs/doc/packages/urxvt.section.md similarity index 97% rename from third_party/nixpkgs/doc/builders/packages/urxvt.section.md rename to third_party/nixpkgs/doc/packages/urxvt.section.md index 507feaa6fd..7aff0997dd 100644 --- a/third_party/nixpkgs/doc/builders/packages/urxvt.section.md +++ b/third_party/nixpkgs/doc/packages/urxvt.section.md @@ -34,7 +34,7 @@ $ nix repl map (p: p.name) pkgs.rxvt-unicode.plugins ``` -Alternatively, if your shell is bash or zsh and have completion enabled, simply type `nixpkgs.rxvt-unicode.plugins.`. +Alternatively, if your shell is bash or zsh and have completion enabled, type `nixpkgs.rxvt-unicode.plugins.`. In addition to `plugins` the options `extraDeps` and `perlDeps` can be used to install extra packages. 
`extraDeps` can be used, for example, to provide `xsel` (a clipboard manager) to the clipboard plugin, without installing it globally: diff --git a/third_party/nixpkgs/doc/builders/packages/weechat.section.md b/third_party/nixpkgs/doc/packages/weechat.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/packages/weechat.section.md rename to third_party/nixpkgs/doc/packages/weechat.section.md diff --git a/third_party/nixpkgs/doc/builders/packages/xorg.section.md b/third_party/nixpkgs/doc/packages/xorg.section.md similarity index 100% rename from third_party/nixpkgs/doc/builders/packages/xorg.section.md rename to third_party/nixpkgs/doc/packages/xorg.section.md diff --git a/third_party/nixpkgs/doc/stdenv/stdenv.chapter.md b/third_party/nixpkgs/doc/stdenv/stdenv.chapter.md index 366c519751..26c43bd9e9 100644 --- a/third_party/nixpkgs/doc/stdenv/stdenv.chapter.md +++ b/third_party/nixpkgs/doc/stdenv/stdenv.chapter.md @@ -101,25 +101,62 @@ genericBuild ### Building a `stdenv` package in `nix-shell` {#sec-building-stdenv-package-in-nix-shell} -To build a `stdenv` package in a [`nix-shell`](https://nixos.org/manual/nix/unstable/command-ref/nix-shell.html), use +To build a `stdenv` package in a [`nix-shell`](https://nixos.org/manual/nix/unstable/command-ref/nix-shell.html), enter a shell, find the [phases](#sec-stdenv-phases) you wish to build, then invoke `genericBuild` manually: + +Go to an empty directory, invoke `nix-shell` with the desired package, and from inside the shell, set the output variables to a writable directory: ```bash +cd "$(mktemp -d)" nix-shell '' -A some_package -eval "${unpackPhase:-unpackPhase}" -cd $sourceRoot -eval "${patchPhase:-patchPhase}" -eval "${configurePhase:-configurePhase}" -eval "${buildPhase:-buildPhase}" +export out=$(pwd)/out +``` + +Next, invoke the desired parts of the build. +First, run the phases that generate a working copy of the sources, which will change directory to the sources for you: + +```bash +phases="${prePhases[*]:-} unpackPhase patchPhase" genericBuild +``` + +Then, run more phases up until the failure is reached. +For example, if the failure is in the build phase, the following phases would be required: + +```bash +phases="${preConfigurePhases[*]:-} configurePhase ${preBuildPhases[*]:-} buildPhase" genericBuild +``` + +Re-run a single phase as many times as necessary to examine the failure like so: + +```bash +phases="buildPhase" genericBuild ``` To modify a [phase](#sec-stdenv-phases), first print it with +```bash +echo "$buildPhase" +``` + +Or, if that is empty, for instance, if it is using a function: + ```bash type buildPhase ``` then change it in a text editor, and paste it back to the terminal. +::: {.note} +This method may have some inconsistencies in environment variables and behaviour compared to a normal build within the [Nix build sandbox](https://nixos.org/manual/nix/unstable/language/derivations#builder-execution). +The following is a non-exhaustive list of such differences: + +- `TMP`, `TMPDIR`, and similar variables likely point to non-empty directories that the build might conflict with files in. +- Output store paths are not writable, so the variables for outputs need to be overridden to writable paths. +- Other environment variables may be inconsistent with a `nix-build` either due to `nix-shell`'s initialization script or due to the use of `nix-shell` without the `--pure` option. 
+ +If the build fails differently inside the shell than in the sandbox, consider using [`breakpointHook`](#breakpointhook) and invoking `nix-build` instead. +The [`--keep-failed`](https://nixos.org/manual/nix/unstable/command-ref/conf-file#opt--keep-failed) option for `nix-build` may also be useful to examine the build directory of a failed build. +::: + ## Tools provided by `stdenv` {#sec-tools-of-stdenv} The standard environment provides the following packages: @@ -282,7 +319,7 @@ let f(h, h + 1, i) = i + (if i <= 0 then h else h) let f(h, h + 1, i) = i + h ``` -This is where “sum-like” comes in from above: We can just sum all of the host offsets to get the host offset of the transitive dependency. The target offset is the transitive dependency is simply the host offset + 1, just as it was with the dependencies composed to make this transitive one; it can be ignored as it doesn’t add any new information. +This is where “sum-like” comes in from above: We can just sum all of the host offsets to get the host offset of the transitive dependency. The target offset is the transitive dependency is the host offset + 1, just as it was with the dependencies composed to make this transitive one; it can be ignored as it doesn’t add any new information. Because of the bounds checks, the uncommon cases are `h = t` and `h + 2 = t`. In the former case, the motivation for `mapOffset` is that since its host and target platforms are the same, no transitive dependency of it should be able to “discover” an offset greater than its reduced target offsets. `mapOffset` effectively “squashes” all its transitive dependencies’ offsets so that none will ever be greater than the target offset of the original `h = t` package. In the other case, `h + 1` is skipped over between the host and target offsets. Instead of squashing the offsets, we need to “rip” them apart so no transitive dependencies’ offset is that one. @@ -491,7 +528,7 @@ If the returned array contains exactly one object (e.g. `[{}]`), all values are ``` ::: -### Recursive attributes in `mkDerivation` {#mkderivation-recursive-attributes} +### Fixed-point arguments of `mkDerivation` {#mkderivation-recursive-attributes} If you pass a function to `mkDerivation`, it will receive as its argument the final arguments, including the overrides when reinvoked via `overrideAttrs`. For example: @@ -612,7 +649,7 @@ Zip files are unpacked using `unzip`. However, `unzip` is not in the standard en #### Directories in the Nix store {#directories-in-the-nix-store} -These are simply copied to the current directory. The hash part of the file name is stripped, e.g. `/nix/store/1wydxgby13cz...-my-sources` would be copied to `my-sources`. +These are copied to the current directory. The hash part of the file name is stripped, e.g. `/nix/store/1wydxgby13cz...-my-sources` would be copied to `my-sources`. Additional file types can be supported by setting the `unpackCmd` variable (see below). @@ -751,7 +788,7 @@ Hook executed at the end of the configure phase. ### The build phase {#build-phase} -The build phase is responsible for actually building the package (e.g. compiling it). The default `buildPhase` simply calls `make` if a file named `Makefile`, `makefile` or `GNUmakefile` exists in the current directory (or the `makefile` is explicitly set); otherwise it does nothing. +The build phase is responsible for actually building the package (e.g. compiling it). 
The default `buildPhase` calls `make` if a file named `Makefile`, `makefile` or `GNUmakefile` exists in the current directory (or the `makefile` is explicitly set); otherwise it does nothing. #### Variables controlling the build phase {#variables-controlling-the-build-phase} @@ -1280,7 +1317,7 @@ Nix itself considers a build-time dependency as merely something that should pre In order to alleviate this burden, the setup hook mechanism was written, where any package can include a shell script that \[by convention rather than enforcement by Nix\], any downstream reverse-dependency will source as part of its build process. That allows the downstream dependency to merely specify its dependencies, and lets those dependencies effectively initialize themselves. No boilerplate mirroring the list of dependencies is needed. -The setup hook mechanism is a bit of a sledgehammer though: a powerful feature with a broad and indiscriminate area of effect. The combination of its power and implicit use may be expedient, but isn’t without costs. Nix itself is unchanged, but the spirit of added dependencies being effect-free is violated even if the latter isn’t. For example, if a derivation path is mentioned more than once, Nix itself doesn’t care and simply makes sure the dependency derivation is already built just the same—depending is just needing something to exist, and needing is idempotent. However, a dependency specified twice will have its setup hook run twice, and that could easily change the build environment (though a well-written setup hook will therefore strive to be idempotent so this is in fact not observable). More broadly, setup hooks are anti-modular in that multiple dependencies, whether the same or different, should not interfere and yet their setup hooks may well do so. +The setup hook mechanism is a bit of a sledgehammer though: a powerful feature with a broad and indiscriminate area of effect. The combination of its power and implicit use may be expedient, but isn’t without costs. Nix itself is unchanged, but the spirit of added dependencies being effect-free is violated even if the latter isn’t. For example, if a derivation path is mentioned more than once, Nix itself doesn’t care and makes sure the dependency derivation is already built just the same—depending is just needing something to exist, and needing is idempotent. However, a dependency specified twice will have its setup hook run twice, and that could easily change the build environment (though a well-written setup hook will therefore strive to be idempotent so this is in fact not observable). More broadly, setup hooks are anti-modular in that multiple dependencies, whether the same or different, should not interfere and yet their setup hooks may well do so. The most typical use of the setup hook is actually to add other hooks which are then run (i.e. after all the setup hooks) on each dependency. For example, the C compiler wrapper’s setup hook feeds itself flags for each dependency that contains relevant libraries and headers. This is done by defining a bash function, and appending its name to one of `envBuildBuildHooks`, `envBuildHostHooks`, `envBuildTargetHooks`, `envHostHostHooks`, `envHostTargetHooks`, or `envTargetTargetHooks`. These 6 bash variables correspond to the 6 sorts of dependencies by platform (there’s 12 total but we ignore the propagated/non-propagated axis). 
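From the packaging side, a derivation advertises such a script through the `setupHook` attribute. The following is a minimal sketch, assuming a local `my-setup-hook.sh` that defines a bash function and appends its name to one of the `env*Hooks` arrays listed above:

```nix
{ stdenv }:

stdenv.mkDerivation {
  pname = "my-flag-provider";   # hypothetical package
  version = "0.1";
  src = ./.;

  # Installed under $out/nix-support/setup-hook and sourced by downstream
  # builds that list this package among their dependencies.
  setupHook = ./my-setup-hook.sh;
}
```

The hook script itself is ordinary bash; the paragraphs above describe what a well-behaved hook should and should not do.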
diff --git a/third_party/nixpkgs/doc/using/overlays.chapter.md b/third_party/nixpkgs/doc/using/overlays.chapter.md index 6ee52215a4..1bec6586f2 100644 --- a/third_party/nixpkgs/doc/using/overlays.chapter.md +++ b/third_party/nixpkgs/doc/using/overlays.chapter.md @@ -77,7 +77,7 @@ In Nixpkgs, we have multiple implementations of the BLAS/LAPACK numerical linear The Nixpkgs attribute is `openblas` for ILP64 (integer width = 64 bits) and `openblasCompat` for LP64 (integer width = 32 bits). `openblasCompat` is the default. -- [LAPACK reference](http://www.netlib.org/lapack/) (also provides BLAS and CBLAS) +- [LAPACK reference](https://www.netlib.org/lapack/) (also provides BLAS and CBLAS) The Nixpkgs attribute is `lapack-reference`. @@ -156,7 +156,7 @@ All programs that are built with [MPI](https://en.wikipedia.org/wiki/Message_Pas - [MVAPICH](https://mvapich.cse.ohio-state.edu/), attribute name `mvapich` -To provide MPI enabled applications that use `MPICH`, instead of the default `Open MPI`, simply use the following overlay: +To provide MPI enabled applications that use `MPICH`, instead of the default `Open MPI`, use the following overlay: ```nix self: super: diff --git a/third_party/nixpkgs/lib/README.md b/third_party/nixpkgs/lib/README.md index 627086843d..220940bc21 100644 --- a/third_party/nixpkgs/lib/README.md +++ b/third_party/nixpkgs/lib/README.md @@ -74,3 +74,23 @@ path/tests/prop.sh # Run the lib.fileset tests fileset/tests.sh ``` + +## Commit conventions + +- Make sure you read about the [commit conventions](../CONTRIBUTING.md#commit-conventions) common to Nixpkgs as a whole. + +- Format the commit messages in the following way: + + ``` + lib.(section): (init | add additional argument | refactor | etc) + + (Motivation for change. Additional information.) + ``` + + Examples: + + * lib.getExe': check arguments + * lib.fileset: Add an additional argument in the design docs + + Closes #264537 + diff --git a/third_party/nixpkgs/lib/asserts.nix b/third_party/nixpkgs/lib/asserts.nix index 98e0b490ac..8d0a621f4c 100644 --- a/third_party/nixpkgs/lib/asserts.nix +++ b/third_party/nixpkgs/lib/asserts.nix @@ -50,4 +50,33 @@ rec { lib.generators.toPretty {} xs}, but is: ${ lib.generators.toPretty {} val}"; + /* Specialized `assertMsg` for checking if every one of `vals` is one of the elements + of the list `xs`. Useful for checking lists of supported attributes. + + Example: + let sslLibraries = [ "libressl" "bearssl" ]; + in assertEachOneOf "sslLibraries" sslLibraries [ "openssl" "bearssl" ] + stderr> error: each element in sslLibraries must be one of [ + stderr> "openssl" + stderr> "bearssl" + stderr> ], but is: [ + stderr> "libressl" + stderr> "bearssl" + stderr> ] + + Type: + assertEachOneOf :: String -> List ComparableVal -> List ComparableVal -> Bool + */ + assertEachOneOf = + # The name of the variable the user entered `val` into, for inclusion in the error message + name: + # The list of values of what the user provided, to be compared against the values in `xs` + vals: + # The list of valid values + xs: + assertMsg + (lib.all (val: lib.elem val xs) vals) + "each element in ${name} must be one of ${ + lib.generators.toPretty {} xs}, but is: ${ + lib.generators.toPretty {} vals}"; } diff --git a/third_party/nixpkgs/lib/customisation.nix b/third_party/nixpkgs/lib/customisation.nix index 5ef4f29e6f..08fc5db061 100644 --- a/third_party/nixpkgs/lib/customisation.nix +++ b/third_party/nixpkgs/lib/customisation.nix @@ -13,16 +13,7 @@ rec { scenarios (e.g. in ~/.config/nixpkgs/config.nix). 
For instance, if you want to "patch" the derivation returned by a package function in Nixpkgs to build another version than what the - function itself provides, you can do something like this: - - mySed = overrideDerivation pkgs.gnused (oldAttrs: { - name = "sed-4.2.2-pre"; - src = fetchurl { - url = ftp://alpha.gnu.org/gnu/sed/sed-4.2.2-pre.tar.bz2; - hash = "sha256-MxBJRcM2rYzQYwJ5XKxhXTQByvSg5jZc5cSHEZoB2IY="; - }; - patches = []; - }); + function itself provides. For another application, see build-support/vm, where this function is used to build arbitrary derivations inside a QEMU @@ -35,6 +26,19 @@ rec { You should in general prefer `drv.overrideAttrs` over this function; see the nixpkgs manual for more information on overriding. + + Example: + mySed = overrideDerivation pkgs.gnused (oldAttrs: { + name = "sed-4.2.2-pre"; + src = fetchurl { + url = ftp://alpha.gnu.org/gnu/sed/sed-4.2.2-pre.tar.bz2; + hash = "sha256-MxBJRcM2rYzQYwJ5XKxhXTQByvSg5jZc5cSHEZoB2IY="; + }; + patches = []; + }); + + Type: + overrideDerivation :: Derivation -> ( Derivation -> AttrSet ) -> Derivation */ overrideDerivation = drv: f: let @@ -55,6 +59,10 @@ rec { injects `override` attribute which can be used to override arguments of the function. + Please refer to documentation on [`.overrideDerivation`](#sec-pkg-overrideDerivation) to learn about `overrideDerivation` and caveats + related to its use. + + Example: nix-repl> x = {a, b}: { result = a + b; } nix-repl> y = lib.makeOverridable x { a = 1; b = 2; } @@ -65,23 +73,25 @@ rec { nix-repl> y.override { a = 10; } { override = «lambda»; overrideDerivation = «lambda»; result = 12; } - Please refer to "Nixpkgs Contributors Guide" section - ".overrideDerivation" to learn about `overrideDerivation` and caveats - related to its use. + Type: + makeOverridable :: (AttrSet -> a) -> AttrSet -> a */ - makeOverridable = f: lib.setFunctionArgs - (origArgs: let + makeOverridable = f: + let + # Creates a functor with the same arguments as f + mirrorArgs = lib.mirrorFunctionArgs f; + in + mirrorArgs (origArgs: + let result = f origArgs; - # Creates a functor with the same arguments as f - copyArgs = g: lib.setFunctionArgs g (lib.functionArgs f); # Changes the original arguments with (potentially a function that returns) a set of new attributes overrideWith = newArgs: origArgs // (if lib.isFunction newArgs then newArgs origArgs else newArgs); # Re-call the function but with different arguments - overrideArgs = copyArgs (newArgs: makeOverridable f (overrideWith newArgs)); + overrideArgs = mirrorArgs (newArgs: makeOverridable f (overrideWith newArgs)); # Change the result of the function call by applying g to it - overrideResult = g: makeOverridable (copyArgs (args: g (f args))) origArgs; + overrideResult = g: makeOverridable (mirrorArgs (args: g (f args))) origArgs; in if builtins.isAttrs result then result // { @@ -95,8 +105,7 @@ rec { lib.setFunctionArgs result (lib.functionArgs result) // { override = overrideArgs; } - else result) - (lib.functionArgs f); + else result); /* Call the package function in the file `fn` with the required @@ -105,20 +114,29 @@ rec { `autoArgs`. This function is intended to be partially parameterised, e.g., + ```nix callPackage = callPackageWith pkgs; pkgs = { libfoo = callPackage ./foo.nix { }; libbar = callPackage ./bar.nix { }; }; + ``` If the `libbar` function expects an argument named `libfoo`, it is automatically passed as an argument. Overrides or missing arguments can be supplied in `args`, e.g. 
+ ```nix libbar = callPackage ./bar.nix { libfoo = null; enableX11 = true; }; + ``` + + + + Type: + callPackageWith :: AttrSet -> ((AttrSet -> a) | Path) -> AttrSet -> a */ callPackageWith = autoArgs: fn: args: let @@ -129,7 +147,7 @@ rec { # This includes automatic ones and ones passed explicitly allArgs = builtins.intersectAttrs fargs autoArgs // args; - # A list of argument names that the function requires, but + # a list of argument names that the function requires, but # wouldn't be passed to it missingArgs = lib.attrNames # Filter out arguments that have a default value @@ -176,7 +194,11 @@ rec { /* Like callPackage, but for a function that returns an attribute set of derivations. The override function is added to the - individual attributes. */ + individual attributes. + + Type: + callPackagesWith :: AttrSet -> ((AttrSet -> AttrSet) | Path) -> AttrSet -> AttrSet + */ callPackagesWith = autoArgs: fn: args: let f = if lib.isFunction fn then fn else import fn; @@ -193,7 +215,11 @@ rec { /* Add attributes to each output of a derivation without changing - the derivation itself and check a given condition when evaluating. */ + the derivation itself and check a given condition when evaluating. + + Type: + extendDerivation :: Bool -> Any -> Derivation -> Derivation + */ extendDerivation = condition: passthru: drv: let outputs = drv.outputs or [ "out" ]; @@ -227,7 +253,11 @@ rec { /* Strip a derivation of all non-essential attributes, returning only those needed by hydra-eval-jobs. Also strictly evaluate the result to ensure that there are no thunks kept alive to prevent - garbage collection. */ + garbage collection. + + Type: + hydraJob :: (Derivation | Null) -> (Derivation | Null) + */ hydraJob = drv: let outputs = drv.outputs or ["out"]; @@ -265,7 +295,11 @@ rec { called with the overridden packages. The package sets may be hierarchical: the packages in the set are called with the scope provided by `newScope` and the set provides a `newScope` attribute - which can form the parent scope for later package sets. */ + which can form the parent scope for later package sets. + + Type: + makeScope :: (AttrSet -> ((AttrSet -> a) | Path) -> AttrSet -> a) -> (AttrSet -> AttrSet) -> AttrSet + */ makeScope = newScope: f: let self = f self // { newScope = scope: newScope (self // scope); @@ -287,13 +321,48 @@ rec { { inherit otherSplices keep extra f; }; /* Like makeScope, but aims to support cross compilation. It's still ugly, but - hopefully it helps a little bit. */ + hopefully it helps a little bit. + + Type: + makeScopeWithSplicing' :: + { splicePackages :: Splice -> AttrSet + , newScope :: AttrSet -> ((AttrSet -> a) | Path) -> AttrSet -> a + } + -> { otherSplices :: Splice, keep :: AttrSet -> AttrSet, extra :: AttrSet -> AttrSet } + -> AttrSet + + Splice :: + { pkgsBuildBuild :: AttrSet + , pkgsBuildHost :: AttrSet + , pkgsBuildTarget :: AttrSet + , pkgsHostHost :: AttrSet + , pkgsHostTarget :: AttrSet + , pkgsTargetTarget :: AttrSet + } + */ makeScopeWithSplicing' = { splicePackages , newScope }: { otherSplices + # Attrs from `self` which won't be spliced. + # Avoid using keep, it's only used for a python hook workaround, added in PR #104201. + # ex: `keep = (self: { inherit (self) aAttr; })` , keep ? (_self: {}) + # Additional attrs to add to the sets `callPackage`. + # When the package is from a subset (but not a subset within a package IS #211340) + # within `spliced0` it will be spliced. 
+ # When using an package outside the set but it's available from `pkgs`, use the package from `pkgs.__splicedPackages`. + # If the package is not available within the set or in `pkgs`, such as a package in a let binding, it will not be spliced + # ex: + # ``` + # nix-repl> darwin.apple_sdk.frameworks.CoreFoundation + # «derivation ...CoreFoundation-11.0.0.drv» + # nix-repl> darwin.CoreFoundation + # error: attribute 'CoreFoundation' missing + # nix-repl> darwin.callPackage ({ CoreFoundation }: CoreFoundation) { } + # «derivation ...CoreFoundation-11.0.0.drv» + # ``` , extra ? (_spliced0: {}) , f }: diff --git a/third_party/nixpkgs/lib/default.nix b/third_party/nixpkgs/lib/default.nix index fe737a125e..a2958e561c 100644 --- a/third_party/nixpkgs/lib/default.nix +++ b/third_party/nixpkgs/lib/default.nix @@ -74,7 +74,7 @@ let importJSON importTOML warn warnIf warnIfNot throwIf throwIfNot checkListOfEnum info showWarnings nixpkgsVersion version isInOldestRelease mod compare splitByAndCompare - functionArgs setFunctionArgs isFunction toFunction + functionArgs setFunctionArgs isFunction toFunction mirrorFunctionArgs toHexString toBaseDigits inPureEvalMode; inherit (self.fixedPoints) fix fix' converge extends composeExtensions composeManyExtensions makeExtensible makeExtensibleWithCustomName; @@ -92,7 +92,7 @@ let concatMap flatten remove findSingle findFirst any all count optional optionals toList range replicate partition zipListsWith zipLists reverseList listDfs toposort sort naturalSort compareLists take - drop sublist last init crossLists unique intersectLists + drop sublist last init crossLists unique allUnique intersectLists subtractLists mutuallyExclusive groupBy groupBy'; inherit (self.strings) concatStrings concatMapStrings concatImapStrings intersperse concatStringsSep concatMapStringsSep diff --git a/third_party/nixpkgs/lib/fileset/README.md b/third_party/nixpkgs/lib/fileset/README.md index ebe13f08fd..14b6877a90 100644 --- a/third_party/nixpkgs/lib/fileset/README.md +++ b/third_party/nixpkgs/lib/fileset/README.md @@ -225,6 +225,9 @@ Arguments: This use case makes little sense for files that are already in the store. This should be a separate abstraction as e.g. `pkgs.drvLayout` instead, which could have a similar interface but be specific to derivations. Additional capabilities could be supported that can't be done at evaluation time, such as renaming files, creating new directories, setting executable bits, etc. +- (+) An API for filtering/transforming Nix store paths could be much more powerful, + because it's not limited to just what is possible at evaluation time with `builtins.path`. + Operations such as moving and adding files would be supported. ### Single files @@ -235,11 +238,22 @@ Arguments: And it would be unclear how the library should behave if the one file wouldn't be added to the store: `toSource { root = ./file.nix; fileset = ; }` has no reasonable result because returing an empty store path wouldn't match the file type, and there's no way to have an empty file store path, whatever that would mean. +### `fileFilter` takes a path + +The `fileFilter` function takes a path, and not a file set, as its second argument. 
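+
+As a quick sketch of the composition pattern weighed in the arguments below
+(hypothetical usage with a `./src` subdirectory, assuming `lib` and `lib.fileset` are in scope):
+
+```nix
+let
+  fs = lib.fileset;
+  # All Nix files anywhere under the project root
+  nixFiles = fs.fileFilter (file: lib.hasSuffix ".nix" file.name) ./.;
+in
+# Restrict an existing file set (here simply ./src) to those Nix files,
+# instead of passing the file set to fileFilter directly
+fs.intersection ./src nixFiles
+```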
+ +- (-) Makes it harder to compose functions, since the file set type, the return value, can't be passed to the function itself like `fileFilter predicate fileset` + - (+) It's still possible to use `intersection` to filter on file sets: `intersection fileset (fileFilter predicate ./.)` + - (-) This does need an extra `./.` argument that's not obvious + - (+) This could always be `/.` or the project directory, `intersection` will make it lazy +- (+) In the future this will allow `fileFilter` to support a predicate property like `subpath` and/or `components` in a reproducible way. + This wouldn't be possible if it took a file set, because file sets don't have a predictable absolute path. + - (-) What about the base path? + - (+) That can change depending on which files are included, so if it's used for `fileFilter` + it would change the `subpath`/`components` value depending on which files are included. +- (+) If necessary, this restriction can be relaxed later, the opposite wouldn't be possible + ## To update in the future Here's a list of places in the library that need to be updated in the future: -- > The file set library is currently somewhat limited but is being expanded to include more functions over time. - - in [the manual](../../doc/functions/fileset.section.md) -- If/Once a function to convert `lib.sources` values into file sets exists, the `_coerce` and `toSource` functions should be updated to mention that function in the error when such a value is passed - If/Once a function exists that can optionally include a path depending on whether it exists, the error message for the path not existing in `_coerce` should mention the new function diff --git a/third_party/nixpkgs/lib/fileset/default.nix b/third_party/nixpkgs/lib/fileset/default.nix index 7bd7016703..15af0813ee 100644 --- a/third_party/nixpkgs/lib/fileset/default.nix +++ b/third_party/nixpkgs/lib/fileset/default.nix @@ -3,19 +3,27 @@ let inherit (import ./internal.nix { inherit lib; }) _coerce + _singleton _coerceMany _toSourceFilter + _fromSourceFilter _unionMany + _fileFilter _printFileset _intersection + _difference + _mirrorStorePath + _fetchGitSubmodulesMinver ; inherit (builtins) + isBool isList isPath pathExists seq typeOf + nixVersion ; inherit (lib.lists) @@ -30,6 +38,7 @@ let inherit (lib.strings) isStringLike + versionOlder ; inherit (lib.filesystem) @@ -41,7 +50,9 @@ let ; inherit (lib.trivial) + isFunction pipe + inPureEvalMode ; in { @@ -119,11 +130,10 @@ in { Paths in [strings](https://nixos.org/manual/nix/stable/language/values.html#type-string), including Nix store paths, cannot be passed as `root`. `root` has to be a directory. - -:::{.note} -Changing `root` only affects the directory structure of the resulting store path, it does not change which files are added to the store. -The only way to change which files get added to the store is by changing the `fileset` attribute. -::: + :::{.note} + Changing `root` only affects the directory structure of the resulting store path, it does not change which files are added to the store. + The only way to change which files get added to the store is by changing the `fileset` attribute. + ::: */ root, /* @@ -132,10 +142,9 @@ The only way to change which files get added to the store is by changing the `fi This argument can also be a path, which gets [implicitly coerced to a file set](#sec-fileset-path-coercion). - -:::{.note} -If a directory does not recursively contain any file, it is omitted from the store path contents. 
-::: + :::{.note} + If a directory does not recursively contain any file, it is omitted from the store path contents. + ::: */ fileset, @@ -151,9 +160,14 @@ If a directory does not recursively contain any file, it is omitted from the sto sourceFilter = _toSourceFilter fileset; in if ! isPath root then - if isStringLike root then + if root ? _isLibCleanSourceWith then throw '' - lib.fileset.toSource: `root` ("${toString root}") is a string-like value, but it should be a path instead. + lib.fileset.toSource: `root` is a `lib.sources`-based value, but it should be a path instead. + To use a `lib.sources`-based value, convert it to a file set using `lib.fileset.fromSource` and pass it as `fileset`. + Note that this only works for sources created from paths.'' + else if isStringLike root then + throw '' + lib.fileset.toSource: `root` (${toString root}) is a string-like value, but it should be a path instead. Paths in strings are not supported by `lib.fileset`, use `lib.sources` or derivations instead.'' else throw '' @@ -162,13 +176,13 @@ If a directory does not recursively contain any file, it is omitted from the sto # See also ../path/README.md else if ! fileset._internalIsEmptyWithoutBase && rootFilesystemRoot != filesetFilesystemRoot then throw '' - lib.fileset.toSource: Filesystem roots are not the same for `fileset` and `root` ("${toString root}"): - `root`: root "${toString rootFilesystemRoot}" - `fileset`: root "${toString filesetFilesystemRoot}" - Different roots are not supported.'' + lib.fileset.toSource: Filesystem roots are not the same for `fileset` and `root` (${toString root}): + `root`: Filesystem root is "${toString rootFilesystemRoot}" + `fileset`: Filesystem root is "${toString filesetFilesystemRoot}" + Different filesystem roots are not supported.'' else if ! pathExists root then throw '' - lib.fileset.toSource: `root` (${toString root}) does not exist.'' + lib.fileset.toSource: `root` (${toString root}) is a path that does not exist.'' else if pathType root != "directory" then throw '' lib.fileset.toSource: `root` (${toString root}) is a file, but it should be a directory instead. Potential solutions: @@ -180,13 +194,82 @@ If a directory does not recursively contain any file, it is omitted from the sto - Set `root` to ${toString fileset._internalBase} or any directory higher up. This changes the layout of the resulting store path. - Set `fileset` to a file set that cannot contain files outside the `root` (${toString root}). This could change the files included in the result.'' else - builtins.seq sourceFilter + seq sourceFilter cleanSourceWith { name = "source"; src = root; filter = sourceFilter; }; + /* + Create a file set with the same files as a `lib.sources`-based value. + This does not import any of the files into the store. + + This can be used to gradually migrate from `lib.sources`-based filtering to `lib.fileset`. + + A file set can be turned back into a source using [`toSource`](#function-library-lib.fileset.toSource). + + :::{.note} + File sets cannot represent empty directories. + Turning the result of this function back into a source using `toSource` will therefore not preserve empty directories. 
+ ::: + + Type: + fromSource :: SourceLike -> FileSet + + Example: + # There's no cleanSource-like function for file sets yet, + # but we can just convert cleanSource to a file set and use it that way + toSource { + root = ./.; + fileset = fromSource (lib.sources.cleanSource ./.); + } + + # Keeping a previous sourceByRegex (which could be migrated to `lib.fileset.unions`), + # but removing a subdirectory using file set functions + difference + (fromSource (lib.sources.sourceByRegex ./. [ + "^README\.md$" + # This regex includes everything in ./doc + "^doc(/.*)?$" + ]) + ./doc/generated + + # Use cleanSource, but limit it to only include ./Makefile and files under ./src + intersection + (fromSource (lib.sources.cleanSource ./.)) + (unions [ + ./Makefile + ./src + ]); + */ + fromSource = source: + let + # This function uses `._isLibCleanSourceWith`, `.origSrc` and `.filter`, + # which are technically internal to lib.sources, + # but we'll allow this since both libraries are in the same code base + # and this function is a bridge between them. + isFiltered = source ? _isLibCleanSourceWith; + path = if isFiltered then source.origSrc else source; + in + # We can only support sources created from paths + if ! isPath path then + if isStringLike path then + throw '' + lib.fileset.fromSource: The source origin of the argument is a string-like value ("${toString path}"), but it should be a path instead. + Sources created from paths in strings cannot be turned into file sets, use `lib.sources` or derivations instead.'' + else + throw '' + lib.fileset.fromSource: The source origin of the argument is of type ${typeOf path}, but it should be a path instead.'' + else if ! pathExists path then + throw '' + lib.fileset.fromSource: The source origin (${toString path}) of the argument does not exist.'' + else if isFiltered then + _fromSourceFilter path source.filter + else + # If there's no filter, no need to run the expensive conversion, all subpaths will be included + _singleton path; + /* The file set containing all files that are in either of two given file sets. This is the same as [`unions`](#function-library-lib.fileset.unions), @@ -220,11 +303,11 @@ If a directory does not recursively contain any file, it is omitted from the sto _unionMany (_coerceMany "lib.fileset.union" [ { - context = "first argument"; + context = "First argument"; value = fileset1; } { - context = "second argument"; + context = "Second argument"; value = fileset2; } ]); @@ -266,18 +349,79 @@ If a directory does not recursively contain any file, it is omitted from the sto # which get [implicitly coerced to file sets](#sec-fileset-path-coercion). filesets: if ! isList filesets then - throw "lib.fileset.unions: Expected argument to be a list, but got a ${typeOf filesets}." + throw '' + lib.fileset.unions: Argument is of type ${typeOf filesets}, but it should be a list instead.'' else pipe filesets [ # Annotate the elements with context, used by _coerceMany for better errors (imap0 (i: el: { - context = "element ${toString i}"; + context = "Element ${toString i}"; value = el; })) (_coerceMany "lib.fileset.unions") _unionMany ]; + /* + Filter a file set to only contain files matching some predicate. + + Type: + fileFilter :: + ({ + name :: String, + type :: String, + ... + } -> Bool) + -> Path + -> FileSet + + Example: + # Include all regular `default.nix` files in the current directory + fileFilter (file: file.name == "default.nix") ./. + + # Include all non-Nix files from the current directory + fileFilter (file: ! 
hasSuffix ".nix" file.name) ./. + + # Include all files that start with a "." in the current directory + fileFilter (file: hasPrefix "." file.name) ./. + + # Include all regular files (not symlinks or others) in the current directory + fileFilter (file: file.type == "regular") ./. + */ + fileFilter = + /* + The predicate function to call on all files contained in given file set. + A file is included in the resulting file set if this function returns true for it. + + This function is called with an attribute set containing these attributes: + + - `name` (String): The name of the file + + - `type` (String, one of `"regular"`, `"symlink"` or `"unknown"`): The type of the file. + This matches result of calling [`builtins.readFileType`](https://nixos.org/manual/nix/stable/language/builtins.html#builtins-readFileType) on the file's path. + + Other attributes may be added in the future. + */ + predicate: + # The path whose files to filter + path: + if ! isFunction predicate then + throw '' + lib.fileset.fileFilter: First argument is of type ${typeOf predicate}, but it should be a function instead.'' + else if ! isPath path then + if path._type or "" == "fileset" then + throw '' + lib.fileset.fileFilter: Second argument is a file set, but it should be a path instead. + If you need to filter files in a file set, use `intersection fileset (fileFilter pred ./.)` instead.'' + else + throw '' + lib.fileset.fileFilter: Second argument is of type ${typeOf path}, but it should be a path instead.'' + else if ! pathExists path then + throw '' + lib.fileset.fileFilter: Second argument (${toString path}) is a path that does not exist.'' + else + _fileFilter predicate path; + /* The file set containing all files that are in both of two given file sets. See also [Intersection (set theory)](https://en.wikipedia.org/wiki/Intersection_(set_theory)). @@ -304,11 +448,11 @@ If a directory does not recursively contain any file, it is omitted from the sto let filesets = _coerceMany "lib.fileset.intersection" [ { - context = "first argument"; + context = "First argument"; value = fileset1; } { - context = "second argument"; + context = "Second argument"; value = fileset2; } ]; @@ -317,6 +461,58 @@ If a directory does not recursively contain any file, it is omitted from the sto (elemAt filesets 0) (elemAt filesets 1); + /* + The file set containing all files from the first file set that are not in the second file set. + See also [Difference (set theory)](https://en.wikipedia.org/wiki/Complement_(set_theory)#Relative_complement). + + The given file sets are evaluated as lazily as possible, + with the first argument being evaluated first if needed. + + Type: + union :: FileSet -> FileSet -> FileSet + + Example: + # Create a file set containing all files from the current directory, + # except ones under ./tests + difference ./. ./tests + + let + # A set of Nix-related files + nixFiles = unions [ ./default.nix ./nix ./tests/default.nix ]; + in + # Create a file set containing all files under ./tests, except ones in `nixFiles`, + # meaning only without ./tests/default.nix + difference ./tests nixFiles + */ + difference = + # The positive file set. + # The result can only contain files that are also in this file set. + # + # This argument can also be a path, + # which gets [implicitly coerced to a file set](#sec-fileset-path-coercion). + positive: + # The negative file set. + # The result will never contain files that are also in this file set. 
+ # + # This argument can also be a path, + # which gets [implicitly coerced to a file set](#sec-fileset-path-coercion). + negative: + let + filesets = _coerceMany "lib.fileset.difference" [ + { + context = "First argument (positive set)"; + value = positive; + } + { + context = "Second argument (negative set)"; + value = negative; + } + ]; + in + _difference + (elemAt filesets 0) + (elemAt filesets 1); + /* Incrementally evaluate and trace a file set in a pretty way. This function is only intended for debugging purposes. @@ -352,7 +548,7 @@ If a directory does not recursively contain any file, it is omitted from the sto let # "fileset" would be a better name, but that would clash with the argument name, # and we cannot change that because of https://github.com/nix-community/nixdoc/issues/76 - actualFileset = _coerce "lib.fileset.trace: argument" fileset; + actualFileset = _coerce "lib.fileset.trace: Argument" fileset; in seq (_printFileset actualFileset) @@ -399,11 +595,118 @@ If a directory does not recursively contain any file, it is omitted from the sto let # "fileset" would be a better name, but that would clash with the argument name, # and we cannot change that because of https://github.com/nix-community/nixdoc/issues/76 - actualFileset = _coerce "lib.fileset.traceVal: argument" fileset; + actualFileset = _coerce "lib.fileset.traceVal: Argument" fileset; in seq (_printFileset actualFileset) # We could also return the original fileset argument here, # but that would then duplicate work for consumers of the fileset, because then they have to coerce it again actualFileset; + + /* + Create a file set containing all [Git-tracked files](https://git-scm.com/book/en/v2/Git-Basics-Recording-Changes-to-the-Repository) in a repository. + + This function behaves like [`gitTrackedWith { }`](#function-library-lib.fileset.gitTrackedWith) - using the defaults. + + Type: + gitTracked :: Path -> FileSet + + Example: + # Include all files tracked by the Git repository in the current directory + gitTracked ./. + + # Include only files tracked by the Git repository in the parent directory + # that are also in the current directory + intersection ./. (gitTracked ../.) + */ + gitTracked = + /* + The [path](https://nixos.org/manual/nix/stable/language/values#type-path) to the working directory of a local Git repository. + This directory must contain a `.git` file or subdirectory. + */ + path: + # See the gitTrackedWith implementation for more explanatory comments + let + fetchResult = builtins.fetchGit path; + in + if inPureEvalMode then + throw "lib.fileset.gitTracked: This function is currently not supported in pure evaluation mode, since it currently relies on `builtins.fetchGit`. See https://github.com/NixOS/nix/issues/9292." + else if ! isPath path then + throw "lib.fileset.gitTracked: Expected the argument to be a path, but it's a ${typeOf path} instead." + else if ! pathExists (path + "/.git") then + throw "lib.fileset.gitTracked: Expected the argument (${toString path}) to point to a local working tree of a Git repository, but it's not." + else + _mirrorStorePath path fetchResult.outPath; + + /* + Create a file set containing all [Git-tracked files](https://git-scm.com/book/en/v2/Git-Basics-Recording-Changes-to-the-Repository) in a repository. + The first argument allows configuration with an attribute set, + while the second argument is the path to the Git working tree. + If you don't need the configuration, + you can use [`gitTracked`](#function-library-lib.fileset.gitTracked) instead. 
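+
+    For instance (a sketch, assuming `./.` is a local Git working tree; note the
+    store-import caveat in the warning below), this can be combined with
+    [`toSource`](#function-library-lib.fileset.toSource):
+
+      toSource {
+        root = ./.;
+        fileset = gitTrackedWith { recurseSubmodules = true; } ./.;
+      }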
+ + This is equivalent to the result of [`unions`](#function-library-lib.fileset.unions) on all files returned by [`git ls-files`](https://git-scm.com/docs/git-ls-files) + (which uses [`--cached`](https://git-scm.com/docs/git-ls-files#Documentation/git-ls-files.txt--c) by default). + + :::{.warning} + Currently this function is based on [`builtins.fetchGit`](https://nixos.org/manual/nix/stable/language/builtins.html#builtins-fetchGit) + As such, this function causes all Git-tracked files to be unnecessarily added to the Nix store, + without being re-usable by [`toSource`](#function-library-lib.fileset.toSource). + + This may change in the future. + ::: + + Type: + gitTrackedWith :: { recurseSubmodules :: Bool ? false } -> Path -> FileSet + + Example: + # Include all files tracked by the Git repository in the current directory + # and any submodules under it + gitTracked { recurseSubmodules = true; } ./. + */ + gitTrackedWith = + { + /* + (optional, default: `false`) Whether to recurse into [Git submodules](https://git-scm.com/book/en/v2/Git-Tools-Submodules) to also include their tracked files. + + If `true`, this is equivalent to passing the [--recurse-submodules](https://git-scm.com/docs/git-ls-files#Documentation/git-ls-files.txt---recurse-submodules) flag to `git ls-files`. + */ + recurseSubmodules ? false, + }: + /* + The [path](https://nixos.org/manual/nix/stable/language/values#type-path) to the working directory of a local Git repository. + This directory must contain a `.git` file or subdirectory. + */ + path: + let + # This imports the files unnecessarily, which currently can't be avoided + # because `builtins.fetchGit` is the only function exposing which files are tracked by Git. + # With the [lazy trees PR](https://github.com/NixOS/nix/pull/6530), + # the unnecessarily import could be avoided. + # However a simpler alternative still would be [a builtins.gitLsFiles](https://github.com/NixOS/nix/issues/2944). + fetchResult = builtins.fetchGit { + url = path; + + # This is the only `fetchGit` parameter that makes sense in this context. + # We can't just pass `submodules = recurseSubmodules` here because + # this would fail for Nix versions that don't support `submodules`. + ${if recurseSubmodules then "submodules" else null} = true; + }; + in + if inPureEvalMode then + throw "lib.fileset.gitTrackedWith: This function is currently not supported in pure evaluation mode, since it currently relies on `builtins.fetchGit`. See https://github.com/NixOS/nix/issues/9292." + else if ! isBool recurseSubmodules then + throw "lib.fileset.gitTrackedWith: Expected the attribute `recurseSubmodules` of the first argument to be a boolean, but it's a ${typeOf recurseSubmodules} instead." + else if recurseSubmodules && versionOlder nixVersion _fetchGitSubmodulesMinver then + throw "lib.fileset.gitTrackedWith: Setting the attribute `recurseSubmodules` to `true` is only supported for Nix version ${_fetchGitSubmodulesMinver} and after, but Nix version ${nixVersion} is used." + else if ! isPath path then + throw "lib.fileset.gitTrackedWith: Expected the second argument to be a path, but it's a ${typeOf path} instead." + # We can identify local working directories by checking for .git, + # see https://git-scm.com/docs/gitrepository-layout#_description. + # Note that `builtins.fetchGit` _does_ work for bare repositories (where there's no `.git`), + # even though `git ls-files` wouldn't return any files in that case. + else if ! 
pathExists (path + "/.git") then + throw "lib.fileset.gitTrackedWith: Expected the second argument (${toString path}) to point to a local working tree of a Git repository, but it's not." + else + _mirrorStorePath path fetchResult.outPath; } diff --git a/third_party/nixpkgs/lib/fileset/internal.nix b/third_party/nixpkgs/lib/fileset/internal.nix index 9892172955..0769e654c8 100644 --- a/third_party/nixpkgs/lib/fileset/internal.nix +++ b/third_party/nixpkgs/lib/fileset/internal.nix @@ -7,7 +7,6 @@ let isString pathExists readDir - seq split trace typeOf @@ -17,7 +16,6 @@ let attrNames attrValues mapAttrs - setAttrByPath zipAttrsWith ; @@ -28,7 +26,6 @@ let inherit (lib.lists) all commonPrefix - drop elemAt filter findFirst @@ -170,7 +167,12 @@ rec { else value else if ! isPath value then - if isStringLike value then + if value ? _isLibCleanSourceWith then + throw '' + ${context} is a `lib.sources`-based value, but it should be a file set or a path instead. + To convert a `lib.sources`-based value to a file set you can use `lib.fileset.fromSource`. + Note that this only works for sources created from paths.'' + else if isStringLike value then throw '' ${context} ("${toString value}") is a string-like value, but it should be a file set or a path instead. Paths represented as strings are not supported by `lib.fileset`, use `lib.sources` or derivations instead.'' @@ -179,7 +181,7 @@ rec { ${context} is of type ${typeOf value}, but it should be a file set or a path instead.'' else if ! pathExists value then throw '' - ${context} (${toString value}) does not exist.'' + ${context} (${toString value}) is a path that does not exist.'' else _singleton value; @@ -208,9 +210,9 @@ rec { if firstWithBase != null && differentIndex != null then throw '' ${functionContext}: Filesystem roots are not the same: - ${(head list).context}: root "${toString firstBaseRoot}" - ${(elemAt list differentIndex).context}: root "${toString (elemAt filesets differentIndex)._internalBaseRoot}" - Different roots are not supported.'' + ${(head list).context}: Filesystem root is "${toString firstBaseRoot}" + ${(elemAt list differentIndex).context}: Filesystem root is "${toString (elemAt filesets differentIndex)._internalBaseRoot}" + Different filesystem roots are not supported.'' else filesets; @@ -424,7 +426,7 @@ rec { # Filter suited when there's some files # This can't be used for when there's no files, because the base directory is always included nonEmpty = - path: _: + path: type: let # Add a slash to the path string, turning "/foo" to "/foo/", # making sure to not have any false prefix matches below. @@ -433,25 +435,37 @@ rec { # meaning this function can never receive "/" as an argument pathSlash = path + "/"; in - # Same as `hasPrefix pathSlash baseString`, but more efficient. - # With base /foo/bar we need to include /foo: - # hasPrefix "/foo/" "/foo/bar/" - if substring 0 (stringLength pathSlash) baseString == pathSlash then - true - # Same as `! hasPrefix baseString pathSlash`, but more efficient. - # With base /foo/bar we need to exclude /baz - # ! hasPrefix "/baz/" "/foo/bar/" - else if substring 0 baseLength pathSlash != baseString then - false - else - # Same as `removePrefix baseString path`, but more efficient. - # From the above code we know that hasPrefix baseString pathSlash holds, so this is safe. - # We don't use pathSlash here because we only needed the trailing slash for the prefix matching. 
- # With base /foo and path /foo/bar/baz this gives - # inTree (split "/" (removePrefix "/foo/" "/foo/bar/baz")) - # == inTree (split "/" "bar/baz") - # == inTree [ "bar" "baz" ] - inTree (split "/" (substring baseLength (-1) path)); + ( + # Same as `hasPrefix pathSlash baseString`, but more efficient. + # With base /foo/bar we need to include /foo: + # hasPrefix "/foo/" "/foo/bar/" + if substring 0 (stringLength pathSlash) baseString == pathSlash then + true + # Same as `! hasPrefix baseString pathSlash`, but more efficient. + # With base /foo/bar we need to exclude /baz + # ! hasPrefix "/baz/" "/foo/bar/" + else if substring 0 baseLength pathSlash != baseString then + false + else + # Same as `removePrefix baseString path`, but more efficient. + # From the above code we know that hasPrefix baseString pathSlash holds, so this is safe. + # We don't use pathSlash here because we only needed the trailing slash for the prefix matching. + # With base /foo and path /foo/bar/baz this gives + # inTree (split "/" (removePrefix "/foo/" "/foo/bar/baz")) + # == inTree (split "/" "bar/baz") + # == inTree [ "bar" "baz" ] + inTree (split "/" (substring baseLength (-1) path)) + ) + # This is a way have an additional check in case the above is true without any significant performance cost + && ( + # This relies on the fact that Nix only distinguishes path types "directory", "regular", "symlink" and "unknown", + # so everything except "unknown" is allowed, seems reasonable to rely on that + type != "unknown" + || throw '' + lib.fileset.toSource: `fileset` contains a file that cannot be added to the store: ${path} + This file is neither a regular file nor a symlink, the only file types supported by the Nix store. + Therefore the file set cannot be added to the Nix store as is. Make sure to not include that file to avoid this error.'' + ); in # Special case because the code below assumes that the _internalBase is always included in the result # which shouldn't be done when we have no files at all in the base @@ -461,6 +475,59 @@ rec { else nonEmpty; + # Turn a builtins.filterSource-based source filter on a root path into a file set + # containing only files included by the filter. + # The filter is lazily called as necessary to determine whether paths are included + # Type: Path -> (String -> String -> Bool) -> fileset + _fromSourceFilter = root: sourceFilter: + let + # During the recursion we need to track both: + # - The path value such that we can safely call `readDir` on it + # - The path string value such that we can correctly call the `filter` with it + # + # While we could just recurse with the path value, + # this would then require converting it to a path string for every path, + # which is a fairly expensive operation + + # Create a file set from a directory entry + fromDirEntry = path: pathString: type: + # The filter needs to run on the path as a string + if ! 
sourceFilter pathString type then + null + else if type == "directory" then + fromDir path pathString + else + type; + + # Create a file set from a directory + fromDir = path: pathString: + mapAttrs + # This looks a bit funny, but we need both the path-based and the path string-based values + (name: fromDirEntry (path + "/${name}") (pathString + "/${name}")) + # We need to readDir on the path value, because reading on a path string + # would be unspecified if there are multiple filesystem roots + (readDir path); + + rootPathType = pathType root; + + # We need to convert the path to a string to imitate what builtins.path calls the filter function with. + # We don't want to rely on `toString` for this though because it's not very well defined, see ../path/README.md + # So instead we use `lib.path.splitRoot` to safely deconstruct the path into its filesystem root and subpath + # We don't need the filesystem root though, builtins.path doesn't expose that in any way to the filter. + # So we only need the components, which we then turn into a string as one would expect. + rootString = "/" + concatStringsSep "/" (components (splitRoot root).subpath); + in + if rootPathType == "directory" then + # We imitate builtins.path not calling the filter on the root path + _create root (fromDir root rootString) + else + # Direct files are always included by builtins.path without calling the filter + # But we need to lift up the base path to its parent to satisfy the base path invariant + _create (dirOf root) + { + ${baseNameOf root} = rootPathType; + }; + # Transforms the filesetTree of a file set to a shorter base path, e.g. # _shortenTreeBase [ "foo" ] (_create /foo/bar null) # => { bar = null; } @@ -638,4 +705,147 @@ rec { else # In all other cases it's the rhs rhs; + + # Compute the set difference between two file sets. + # The filesets must already be coerced and validated to be in the same filesystem root. + # Type: Fileset -> Fileset -> Fileset + _difference = positive: negative: + let + # The common base components prefix, e.g. + # (/foo/bar, /foo/bar/baz) -> /foo/bar + # (/foo/bar, /foo/baz) -> /foo + commonBaseComponentsLength = + # TODO: Have a `lib.lists.commonPrefixLength` function such that we don't need the list allocation from commonPrefix here + length ( + commonPrefix + positive._internalBaseComponents + negative._internalBaseComponents + ); + + # We need filesetTree's with the same base to be able to compute the difference between them + # This here is the filesetTree from the negative file set, but for a base path that matches the positive file set. + # Examples: + # For `difference /foo /foo/bar`, `negativeTreeWithPositiveBase = { bar = "directory"; }` + # because under the base path of `/foo`, only `bar` from the negative file set is included + # For `difference /foo/bar /foo`, `negativeTreeWithPositiveBase = "directory"` + # because under the base path of `/foo/bar`, everything from the negative file set is included + # For `difference /foo /bar`, `negativeTreeWithPositiveBase = null` + # because under the base path of `/foo`, nothing from the negative file set is included + negativeTreeWithPositiveBase = + if commonBaseComponentsLength == length positive._internalBaseComponents then + # The common prefix is the same as the positive base path, so the second path is equal or longer. + # We need to _shorten_ the negative filesetTree to the same base path as the positive one + # E.g. 
for `difference /foo /foo/bar` the common prefix is /foo, equal to the positive file set's base + # So we need to shorten the base of the tree for the negative argument from /foo/bar to just /foo + _shortenTreeBase positive._internalBaseComponents negative + else if commonBaseComponentsLength == length negative._internalBaseComponents then + # The common prefix is the same as the negative base path, so the first path is longer. + # We need to lengthen the negative filesetTree to the same base path as the positive one. + # E.g. for `difference /foo/bar /foo` the common prefix is /foo, equal to the negative file set's base + # So we need to lengthen the base of the tree for the negative argument from /foo to /foo/bar + _lengthenTreeBase positive._internalBaseComponents negative + else + # The common prefix is neither the first nor the second path. + # This means there's no overlap between the two file sets, + # and nothing from the negative argument should get removed from the positive one + # E.g for `difference /foo /bar`, we remove nothing to get the same as `/foo` + null; + + resultingTree = + _differenceTree + positive._internalBase + positive._internalTree + negativeTreeWithPositiveBase; + in + # If the first file set is empty, we can never have any files in the result + if positive._internalIsEmptyWithoutBase then + _emptyWithoutBase + # If the second file set is empty, nothing gets removed, so the result is just the first file set + else if negative._internalIsEmptyWithoutBase then + positive + else + # We use the positive file set base for the result, + # because only files from the positive side may be included, + # which is what base path is for + _create positive._internalBase resultingTree; + + # Computes the set difference of two filesetTree's + # Type: Path -> filesetTree -> filesetTree + _differenceTree = path: lhs: rhs: + # If the lhs doesn't have any files, or the right hand side includes all files + if lhs == null || isString rhs then + # The result will always be empty + null + # If the right hand side has no files + else if rhs == null then + # The result is always the left hand side, because nothing gets removed + lhs + else + # Otherwise we always have two attribute sets to recurse into + mapAttrs (name: lhsValue: + _differenceTree (path + "/${name}") lhsValue (rhs.${name} or null) + ) (_directoryEntries path lhs); + + # Filters all files in a path based on a predicate + # Type: ({ name, type, ... } -> Bool) -> Path -> FileSet + _fileFilter = predicate: root: + let + # Check the predicate for a single file + # Type: String -> String -> filesetTree + fromFile = name: type: + if + predicate { + inherit name type; + # To ensure forwards compatibility with more arguments being added in the future, + # adding an attribute which can't be deconstructed :) + "lib.fileset.fileFilter: The predicate function passed as the first argument must be able to handle extra attributes for future compatibility. If you're using `{ name, file }:`, use `{ name, file, ... }:` instead." = null; + } + then + type + else + null; + + # Check the predicate for all files in a directory + # Type: Path -> filesetTree + fromDir = path: + mapAttrs (name: type: + if type == "directory" then + fromDir (path + "/${name}") + else + fromFile name type + ) (readDir path); + + rootType = pathType root; + in + if rootType == "directory" then + _create root (fromDir root) + else + # Single files are turned into a directory containing that file or nothing. 
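+      # Illustrative: for a single file /foo/a this evaluates to
+      #   _create /foo { a = <type>; }   if the predicate returns true, and
+      #   _create /foo { a = null; }     if it returns false.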
+ _create (dirOf root) { + ${baseNameOf root} = + fromFile (baseNameOf root) rootType; + }; + + # Support for `builtins.fetchGit` with `submodules = true` was introduced in 2.4 + # https://github.com/NixOS/nix/commit/55cefd41d63368d4286568e2956afd535cb44018 + _fetchGitSubmodulesMinver = "2.4"; + + # Mirrors the contents of a Nix store path relative to a local path as a file set. + # Some notes: + # - The store path is read at evaluation time. + # - The store path must not include files that don't exist in the respective local path. + # + # Type: Path -> String -> FileSet + _mirrorStorePath = localPath: storePath: + let + recurse = focusedStorePath: + mapAttrs (name: type: + if type == "directory" then + recurse (focusedStorePath + "/${name}") + else + type + ) (builtins.readDir focusedStorePath); + in + _create localPath + (recurse storePath); } diff --git a/third_party/nixpkgs/lib/fileset/tests.sh b/third_party/nixpkgs/lib/fileset/tests.sh index 529f23ae88..3c88ebdd05 100755 --- a/third_party/nixpkgs/lib/fileset/tests.sh +++ b/third_party/nixpkgs/lib/fileset/tests.sh @@ -1,5 +1,7 @@ #!/usr/bin/env bash # shellcheck disable=SC2016 +# shellcheck disable=SC2317 +# shellcheck disable=SC2192 # Tests lib.fileset # Run: @@ -41,15 +43,29 @@ crudeUnquoteJSON() { cut -d \" -f2 } -prefixExpression='let - lib = import ; - internal = import { - inherit lib; - }; -in -with lib; -with internal; -with lib.fileset;' +prefixExpression() { + echo 'let + lib = + (import ) + ' + if [[ "${1:-}" == "--simulate-pure-eval" ]]; then + echo ' + .extend (final: prev: { + trivial = prev.trivial // { + inPureEvalMode = true; + }; + })' + fi + echo ' + ; + internal = import { + inherit lib; + }; + in + with lib; + with internal; + with lib.fileset;' +} # Check that two nix expression successfully evaluate to the same value. # The expressions have `lib.fileset` in scope. @@ -58,7 +74,7 @@ expectEqual() { local actualExpr=$1 local expectedExpr=$2 if actualResult=$(nix-instantiate --eval --strict --show-trace 2>"$tmp"/actualStderr \ - --expr "$prefixExpression ($actualExpr)"); then + --expr "$(prefixExpression) ($actualExpr)"); then actualExitCode=$? else actualExitCode=$? @@ -66,7 +82,7 @@ expectEqual() { actualStderr=$(< "$tmp"/actualStderr) if expectedResult=$(nix-instantiate --eval --strict --show-trace 2>"$tmp"/expectedStderr \ - --expr "$prefixExpression ($expectedExpr)"); then + --expr "$(prefixExpression) ($expectedExpr)"); then expectedExitCode=$? else expectedExitCode=$? @@ -93,8 +109,9 @@ expectEqual() { # Usage: expectStorePath NIX expectStorePath() { local expr=$1 - if ! result=$(nix-instantiate --eval --strict --json --read-write-mode --show-trace \ - --expr "$prefixExpression ($expr)"); then + if ! result=$(nix-instantiate --eval --strict --json --read-write-mode --show-trace 2>"$tmp"/stderr \ + --expr "$(prefixExpression) ($expr)"); then + cat "$tmp/stderr" >&2 die "$expr failed to evaluate, but it was expected to succeed" fi # This is safe because we assume to get back a store path in a string @@ -106,10 +123,16 @@ expectStorePath() { # The expression has `lib.fileset` in scope. 
# Usage: expectFailure NIX REGEX expectFailure() { + if [[ "$1" == "--simulate-pure-eval" ]]; then + maybePure="--simulate-pure-eval" + shift + else + maybePure="" + fi local expr=$1 local expectedErrorRegex=$2 if result=$(nix-instantiate --eval --strict --read-write-mode --show-trace 2>"$tmp/stderr" \ - --expr "$prefixExpression $expr"); then + --expr "$(prefixExpression $maybePure) $expr"); then die "$expr evaluated successfully to $result, but it was expected to fail" fi stderr=$(<"$tmp/stderr") @@ -126,12 +149,12 @@ expectTrace() { local expectedTrace=$2 nix-instantiate --eval --show-trace >/dev/null 2>"$tmp"/stderrTrace \ - --expr "$prefixExpression trace ($expr)" || true + --expr "$(prefixExpression) trace ($expr)" || true actualTrace=$(sed -n 's/^trace: //p' "$tmp/stderrTrace") nix-instantiate --eval --show-trace >/dev/null 2>"$tmp"/stderrTraceVal \ - --expr "$prefixExpression traceVal ($expr)" || true + --expr "$(prefixExpression) traceVal ($expr)" || true actualTraceVal=$(sed -n 's/^trace: //p' "$tmp/stderrTraceVal") @@ -224,23 +247,17 @@ withFileMonitor() { fi } -# Check whether a file set includes/excludes declared paths as expected, usage: + +# Create the tree structure declared in the tree variable, usage: # # tree=( -# [a/b] =1 # Declare that file a/b should exist and expect it to be included in the store path -# [c/a] = # Declare that file c/a should exist and expect it to be excluded in the store path -# [c/d/]= # Declare that directory c/d/ should exist and expect it to be excluded in the store path +# [a/b] = # Declare that file a/b should exist +# [c/a] = # Declare that file c/a should exist +# [c/d/]= # Declare that directory c/d/ should exist # ) -# checkFileset './a' # Pass the fileset as the argument +# createTree declare -A tree -checkFileset() { - # New subshell so that we can have a separate trap handler, see `trap` below - local fileset=$1 - - # Process the tree into separate arrays for included paths, excluded paths and excluded files. - local -a included=() - local -a excluded=() - local -a excludedFiles=() +createTree() { # Track which paths need to be created local -a dirsToCreate=() local -a filesToCreate=() @@ -248,24 +265,9 @@ checkFileset() { # If keys end with a `/` we treat them as directories, otherwise files if [[ "$p" =~ /$ ]]; then dirsToCreate+=("$p") - isFile= else filesToCreate+=("$p") - isFile=1 fi - case "${tree[$p]}" in - 1) - included+=("$p") - ;; - 0) - excluded+=("$p") - if [[ -n "$isFile" ]]; then - excludedFiles+=("$p") - fi - ;; - *) - die "Unsupported tree value: ${tree[$p]}" - esac done # Create all the necessary paths. @@ -280,6 +282,43 @@ checkFileset() { mkdir -p "${parentsToCreate[@]}" touch "${filesToCreate[@]}" fi +} + +# Check whether a file set includes/excludes declared paths as expected, usage: +# +# tree=( +# [a/b] =1 # Declare that file a/b should exist and expect it to be included in the store path +# [c/a] = # Declare that file c/a should exist and expect it to be excluded in the store path +# [c/d/]= # Declare that directory c/d/ should exist and expect it to be excluded in the store path +# ) +# checkFileset './a' # Pass the fileset as the argument +checkFileset() { + # New subshell so that we can have a separate trap handler, see `trap` below + local fileset=$1 + + # Create the tree + createTree + + # Process the tree into separate arrays for included paths, excluded paths and excluded files. 
+ local -a included=() + local -a excluded=() + local -a excludedFiles=() + for p in "${!tree[@]}"; do + case "${tree[$p]}" in + 1) + included+=("$p") + ;; + 0) + excluded+=("$p") + # If keys end with a `/` we treat them as directories, otherwise files + if [[ ! "$p" =~ /$ ]]; then + excludedFiles+=("$p") + fi + ;; + *) + die "Unsupported tree value: ${tree[$p]}" + esac + done expression="toSource { root = ./.; fileset = $fileset; }" @@ -318,9 +357,13 @@ checkFileset() { #### Error messages ##### # Absolute paths in strings cannot be passed as `root` -expectFailure 'toSource { root = "/nix/store/foobar"; fileset = ./.; }' 'lib.fileset.toSource: `root` \("/nix/store/foobar"\) is a string-like value, but it should be a path instead. +expectFailure 'toSource { root = "/nix/store/foobar"; fileset = ./.; }' 'lib.fileset.toSource: `root` \(/nix/store/foobar\) is a string-like value, but it should be a path instead. \s*Paths in strings are not supported by `lib.fileset`, use `lib.sources` or derivations instead.' +expectFailure 'toSource { root = cleanSourceWith { src = ./.; }; fileset = ./.; }' 'lib.fileset.toSource: `root` is a `lib.sources`-based value, but it should be a path instead. +\s*To use a `lib.sources`-based value, convert it to a file set using `lib.fileset.fromSource` and pass it as `fileset`. +\s*Note that this only works for sources created from paths.' + # Only paths are accepted as `root` expectFailure 'toSource { root = 10; fileset = ./.; }' 'lib.fileset.toSource: `root` is of type int, but it should be a path instead.' @@ -328,21 +371,21 @@ expectFailure 'toSource { root = 10; fileset = ./.; }' 'lib.fileset.toSource: `r mkdir -p {foo,bar}/mock-root expectFailure 'with ((import ).extend (import )).fileset; toSource { root = ./foo/mock-root; fileset = ./bar/mock-root; } -' 'lib.fileset.toSource: Filesystem roots are not the same for `fileset` and `root` \("'"$work"'/foo/mock-root"\): -\s*`root`: root "'"$work"'/foo/mock-root" -\s*`fileset`: root "'"$work"'/bar/mock-root" -\s*Different roots are not supported.' -rm -rf * +' 'lib.fileset.toSource: Filesystem roots are not the same for `fileset` and `root` \('"$work"'/foo/mock-root\): +\s*`root`: Filesystem root is "'"$work"'/foo/mock-root" +\s*`fileset`: Filesystem root is "'"$work"'/bar/mock-root" +\s*Different filesystem roots are not supported.' +rm -rf -- * # `root` needs to exist -expectFailure 'toSource { root = ./a; fileset = ./.; }' 'lib.fileset.toSource: `root` \('"$work"'/a\) does not exist.' +expectFailure 'toSource { root = ./a; fileset = ./.; }' 'lib.fileset.toSource: `root` \('"$work"'/a\) is a path that does not exist.' # `root` needs to be a file touch a expectFailure 'toSource { root = ./a; fileset = ./a; }' 'lib.fileset.toSource: `root` \('"$work"'/a\) is a file, but it should be a directory instead. Potential solutions: \s*- If you want to import the file into the store _without_ a containing directory, use string interpolation or `builtins.path` instead of this function. \s*- If you want to import the file into the store _with_ a containing directory, set `root` to the containing directory, such as '"$work"', and set `fileset` to the file path.' 
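
# Illustrative (not part of the test suite): the `lib.sources`-based `root` error
# tested above suggests the following migration in plain Nix, mirroring the
# `fromSource` documentation:
#   lib.fileset.toSource {
#     root = ./.;
#     fileset = lib.fileset.fromSource (lib.sources.cleanSource ./.);
#   }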
-rm -rf * +rm -rf -- * # The fileset argument should be evaluated, even if the directory is empty expectFailure 'toSource { root = ./.; fileset = abort "This should be evaluated"; }' 'evaluation aborted with the following error message: '\''This should be evaluated'\' @@ -352,15 +395,25 @@ mkdir a expectFailure 'toSource { root = ./a; fileset = ./.; }' 'lib.fileset.toSource: `fileset` could contain files in '"$work"', which is not under the `root` \('"$work"'/a\). Potential solutions: \s*- Set `root` to '"$work"' or any directory higher up. This changes the layout of the resulting store path. \s*- Set `fileset` to a file set that cannot contain files outside the `root` \('"$work"'/a\). This could change the files included in the result.' -rm -rf * +rm -rf -- * + +# non-regular and non-symlink files cannot be added to the Nix store +mkfifo a +expectFailure 'toSource { root = ./.; fileset = ./a; }' 'lib.fileset.toSource: `fileset` contains a file that cannot be added to the store: '"$work"'/a +\s*This file is neither a regular file nor a symlink, the only file types supported by the Nix store. +\s*Therefore the file set cannot be added to the Nix store as is. Make sure to not include that file to avoid this error.' +rm -rf -- * # Path coercion only works for paths expectFailure 'toSource { root = ./.; fileset = 10; }' 'lib.fileset.toSource: `fileset` is of type int, but it should be a file set or a path instead.' expectFailure 'toSource { root = ./.; fileset = "/some/path"; }' 'lib.fileset.toSource: `fileset` \("/some/path"\) is a string-like value, but it should be a file set or a path instead. \s*Paths represented as strings are not supported by `lib.fileset`, use `lib.sources` or derivations instead.' +expectFailure 'toSource { root = ./.; fileset = cleanSourceWith { src = ./.; }; }' 'lib.fileset.toSource: `fileset` is a `lib.sources`-based value, but it should be a file set or a path instead. +\s*To convert a `lib.sources`-based value to a file set you can use `lib.fileset.fromSource`. +\s*Note that this only works for sources created from paths.' # Path coercion errors for non-existent paths -expectFailure 'toSource { root = ./.; fileset = ./a; }' 'lib.fileset.toSource: `fileset` \('"$work"'/a\) does not exist.' +expectFailure 'toSource { root = ./.; fileset = ./a; }' 'lib.fileset.toSource: `fileset` \('"$work"'/a\) is a path that does not exist.' # File sets cannot be evaluated directly expectFailure 'union ./. ./.' 'lib.fileset: Directly evaluating a file set is not supported. @@ -483,26 +536,26 @@ mkdir -p {foo,bar}/mock-root expectFailure 'with ((import ).extend (import )).fileset; toSource { root = ./.; fileset = union ./foo/mock-root ./bar/mock-root; } ' 'lib.fileset.union: Filesystem roots are not the same: -\s*first argument: root "'"$work"'/foo/mock-root" -\s*second argument: root "'"$work"'/bar/mock-root" -\s*Different roots are not supported.' +\s*First argument: Filesystem root is "'"$work"'/foo/mock-root" +\s*Second argument: Filesystem root is "'"$work"'/bar/mock-root" +\s*Different filesystem roots are not supported.' expectFailure 'with ((import ).extend (import )).fileset; toSource { root = ./.; fileset = unions [ ./foo/mock-root ./bar/mock-root ]; } ' 'lib.fileset.unions: Filesystem roots are not the same: -\s*element 0: root "'"$work"'/foo/mock-root" -\s*element 1: root "'"$work"'/bar/mock-root" -\s*Different roots are not supported.' 
-rm -rf * +\s*Element 0: Filesystem root is "'"$work"'/foo/mock-root" +\s*Element 1: Filesystem root is "'"$work"'/bar/mock-root" +\s*Different filesystem roots are not supported.' +rm -rf -- * # Coercion errors show the correct context -expectFailure 'toSource { root = ./.; fileset = union ./a ./.; }' 'lib.fileset.union: first argument \('"$work"'/a\) does not exist.' -expectFailure 'toSource { root = ./.; fileset = union ./. ./b; }' 'lib.fileset.union: second argument \('"$work"'/b\) does not exist.' -expectFailure 'toSource { root = ./.; fileset = unions [ ./a ./. ]; }' 'lib.fileset.unions: element 0 \('"$work"'/a\) does not exist.' -expectFailure 'toSource { root = ./.; fileset = unions [ ./. ./b ]; }' 'lib.fileset.unions: element 1 \('"$work"'/b\) does not exist.' +expectFailure 'toSource { root = ./.; fileset = union ./a ./.; }' 'lib.fileset.union: First argument \('"$work"'/a\) is a path that does not exist.' +expectFailure 'toSource { root = ./.; fileset = union ./. ./b; }' 'lib.fileset.union: Second argument \('"$work"'/b\) is a path that does not exist.' +expectFailure 'toSource { root = ./.; fileset = unions [ ./a ./. ]; }' 'lib.fileset.unions: Element 0 \('"$work"'/a\) is a path that does not exist.' +expectFailure 'toSource { root = ./.; fileset = unions [ ./. ./b ]; }' 'lib.fileset.unions: Element 1 \('"$work"'/b\) is a path that does not exist.' # unions needs a list -expectFailure 'toSource { root = ./.; fileset = unions null; }' 'lib.fileset.unions: Expected argument to be a list, but got a null.' +expectFailure 'toSource { root = ./.; fileset = unions null; }' 'lib.fileset.unions: Argument is of type null, but it should be a list instead.' # The tree of later arguments should not be evaluated if a former argument already includes all files tree=() @@ -596,14 +649,14 @@ mkdir -p {foo,bar}/mock-root expectFailure 'with ((import ).extend (import )).fileset; toSource { root = ./.; fileset = intersection ./foo/mock-root ./bar/mock-root; } ' 'lib.fileset.intersection: Filesystem roots are not the same: -\s*first argument: root "'"$work"'/foo/mock-root" -\s*second argument: root "'"$work"'/bar/mock-root" -\s*Different roots are not supported.' +\s*First argument: Filesystem root is "'"$work"'/foo/mock-root" +\s*Second argument: Filesystem root is "'"$work"'/bar/mock-root" +\s*Different filesystem roots are not supported.' rm -rf -- * # Coercion errors show the correct context -expectFailure 'toSource { root = ./.; fileset = intersection ./a ./.; }' 'lib.fileset.intersection: first argument \('"$work"'/a\) does not exist.' -expectFailure 'toSource { root = ./.; fileset = intersection ./. ./b; }' 'lib.fileset.intersection: second argument \('"$work"'/b\) does not exist.' +expectFailure 'toSource { root = ./.; fileset = intersection ./a ./.; }' 'lib.fileset.intersection: First argument \('"$work"'/a\) is a path that does not exist.' +expectFailure 'toSource { root = ./.; fileset = intersection ./. ./b; }' 'lib.fileset.intersection: Second argument \('"$work"'/b\) is a path that does not exist.' # The tree of later arguments should not be evaluated if a former argument already excludes all files tree=( @@ -677,6 +730,191 @@ tree=( ) checkFileset 'intersection (unions [ ./a/b ./c/d ./c/e ]) (unions [ ./a ./c/d/f ./c/e ])' +## Difference + +# Subtracting something from itself results in nothing +tree=( + [a]=0 +) +checkFileset 'difference ./. ./.' + +# The tree of the second argument should not be evaluated if not needed +checkFileset 'difference _emptyWithoutBase (_create ./. 
(abort "This should not be used!"))' +checkFileset 'difference (_create ./. null) (_create ./. (abort "This should not be used!"))' + +# Subtracting nothing gives the same thing back +tree=( + [a]=1 +) +checkFileset 'difference ./. _emptyWithoutBase' +checkFileset 'difference ./. (_create ./. null)' + +# Subtracting doesn't influence the base path +mkdir a b +touch {a,b}/x +expectEqual 'toSource { root = ./a; fileset = difference ./a ./b; }' 'toSource { root = ./a; fileset = ./a; }' +rm -rf -- * + +# Also not the other way around +mkdir a +expectFailure 'toSource { root = ./a; fileset = difference ./. ./a; }' 'lib.fileset.toSource: `fileset` could contain files in '"$work"', which is not under the `root` \('"$work"'/a\). Potential solutions: +\s*- Set `root` to '"$work"' or any directory higher up. This changes the layout of the resulting store path. +\s*- Set `fileset` to a file set that cannot contain files outside the `root` \('"$work"'/a\). This could change the files included in the result.' +rm -rf -- * + +# Difference actually works +# We test all combinations of ./., ./a, ./a/x and ./b +tree=( + [a/x]=0 + [a/y]=0 + [b]=0 + [c]=0 +) +checkFileset 'difference ./. ./.' +checkFileset 'difference ./a ./.' +checkFileset 'difference ./a/x ./.' +checkFileset 'difference ./b ./.' +checkFileset 'difference ./a ./a' +checkFileset 'difference ./a/x ./a' +checkFileset 'difference ./a/x ./a/x' +checkFileset 'difference ./b ./b' +tree=( + [a/x]=0 + [a/y]=0 + [b]=1 + [c]=1 +) +checkFileset 'difference ./. ./a' +tree=( + [a/x]=1 + [a/y]=1 + [b]=0 + [c]=0 +) +checkFileset 'difference ./a ./b' +tree=( + [a/x]=1 + [a/y]=0 + [b]=0 + [c]=0 +) +checkFileset 'difference ./a/x ./b' +tree=( + [a/x]=0 + [a/y]=1 + [b]=0 + [c]=0 +) +checkFileset 'difference ./a ./a/x' +tree=( + [a/x]=0 + [a/y]=0 + [b]=1 + [c]=0 +) +checkFileset 'difference ./b ./a' +checkFileset 'difference ./b ./a/x' +tree=( + [a/x]=0 + [a/y]=1 + [b]=1 + [c]=1 +) +checkFileset 'difference ./. ./a/x' +tree=( + [a/x]=1 + [a/y]=1 + [b]=0 + [c]=1 +) +checkFileset 'difference ./. ./b' + +## File filter + +# The first argument needs to be a function +expectFailure 'fileFilter null (abort "this is not needed")' 'lib.fileset.fileFilter: First argument is of type null, but it should be a function instead.' + +# The second argument needs to be an existing path +expectFailure 'fileFilter (file: abort "this is not needed") _emptyWithoutBase' 'lib.fileset.fileFilter: Second argument is a file set, but it should be a path instead. +\s*If you need to filter files in a file set, use `intersection fileset \(fileFilter pred \./\.\)` instead.' +expectFailure 'fileFilter (file: abort "this is not needed") null' 'lib.fileset.fileFilter: Second argument is of type null, but it should be a path instead.' +expectFailure 'fileFilter (file: abort "this is not needed") ./a' 'lib.fileset.fileFilter: Second argument \('"$work"'/a\) is a path that does not exist.' + +# The predicate is not called when there's no files +tree=() +checkFileset 'fileFilter (file: abort "this is not needed") ./.' + +# The predicate must be able to handle extra attributes +touch a +expectFailure 'toSource { root = ./.; fileset = fileFilter ({ name, type }: true) ./.; }' 'called with unexpected argument '\''"lib.fileset.fileFilter: The predicate function passed as the first argument must be able to handle extra attributes for future compatibility. If you'\''re using `\{ name, file \}:`, use `\{ name, file, ... 
\}:` instead."'\' +rm -rf -- * + +# .name is the name, and it works correctly, even recursively +tree=( + [a]=1 + [b]=0 + [c/a]=1 + [c/b]=0 + [d/c/a]=1 + [d/c/b]=0 +) +checkFileset 'fileFilter (file: file.name == "a") ./.' +tree=( + [a]=0 + [b]=1 + [c/a]=0 + [c/b]=1 + [d/c/a]=0 + [d/c/b]=1 +) +checkFileset 'fileFilter (file: file.name != "a") ./.' + +# `.type` is the file type +mkdir d +touch d/a +ln -s d/b d/b +mkfifo d/c +expectEqual \ + 'toSource { root = ./.; fileset = fileFilter (file: file.type == "regular") ./.; }' \ + 'toSource { root = ./.; fileset = ./d/a; }' +expectEqual \ + 'toSource { root = ./.; fileset = fileFilter (file: file.type == "symlink") ./.; }' \ + 'toSource { root = ./.; fileset = ./d/b; }' +expectEqual \ + 'toSource { root = ./.; fileset = fileFilter (file: file.type == "unknown") ./.; }' \ + 'toSource { root = ./.; fileset = ./d/c; }' +expectEqual \ + 'toSource { root = ./.; fileset = fileFilter (file: file.type != "regular") ./.; }' \ + 'toSource { root = ./.; fileset = union ./d/b ./d/c; }' +expectEqual \ + 'toSource { root = ./.; fileset = fileFilter (file: file.type != "symlink") ./.; }' \ + 'toSource { root = ./.; fileset = union ./d/a ./d/c; }' +expectEqual \ + 'toSource { root = ./.; fileset = fileFilter (file: file.type != "unknown") ./.; }' \ + 'toSource { root = ./.; fileset = union ./d/a ./d/b; }' +rm -rf -- * + +# It's lazy +tree=( + [b]=1 + [c/a]=1 +) +# Note that union evaluates the first argument first if necessary, that's why we can use ./c/a here +checkFileset 'union ./c/a (fileFilter (file: assert file.name != "a"; true) ./.)' +# but here we need to use ./c +checkFileset 'union (fileFilter (file: assert file.name != "a"; true) ./.) ./c' + +# Make sure single files are filtered correctly +tree=( + [a]=1 + [b]=0 +) +checkFileset 'fileFilter (file: assert file.name == "a"; true) ./a' +tree=( + [a]=0 + [b]=0 +) +checkFileset 'fileFilter (file: assert file.name == "a"; false) ./a' ## Tracing @@ -823,6 +1061,390 @@ touch 0 "${filesToCreate[@]}" expectTrace 'unions (mapAttrsToList (n: _: ./. + "/${n}") (removeAttrs (builtins.readDir ./.) [ "0" ]))' "$expectedTrace" rm -rf -- * +## lib.fileset.fromSource + +# Check error messages +expectFailure 'fromSource null' 'lib.fileset.fromSource: The source origin of the argument is of type null, but it should be a path instead.' + +expectFailure 'fromSource (lib.cleanSource "")' 'lib.fileset.fromSource: The source origin of the argument is a string-like value \(""\), but it should be a path instead. +\s*Sources created from paths in strings cannot be turned into file sets, use `lib.sources` or derivations instead.' + +expectFailure 'fromSource (lib.cleanSource null)' 'lib.fileset.fromSource: The source origin of the argument is of type null, but it should be a path instead.' + +# fromSource on a path works and is the same as coercing that path +mkdir a +touch a/b c +expectEqual 'trace (fromSource ./.) null' 'trace ./. 
null' +rm -rf -- * + +# Check that converting to a file set doesn't read the included files +mkdir a +touch a/b +run() { + expectEqual "trace (fromSource (lib.cleanSourceWith { src = ./a; })) null" "builtins.trace \"$work/a (all files in directory)\" null" + rm a/b +} +withFileMonitor run a/b +rm -rf -- * + +# Check that converting to a file set doesn't read entries for directories that are filtered out +mkdir -p a/b +touch a/b/c +run() { + expectEqual "trace (fromSource (lib.cleanSourceWith { + src = ./a; + filter = pathString: type: false; + })) null" "builtins.trace \"(empty)\" null" + rm a/b/c + rmdir a/b +} +withFileMonitor run a/b +rm -rf -- * + +# The filter is not needed on empty directories +expectEqual 'trace (fromSource (lib.cleanSourceWith { + src = ./.; + filter = abort "filter should not be needed"; +})) null' 'trace _emptyWithoutBase null' + +# Single files also work +touch a b +expectEqual 'trace (fromSource (cleanSourceWith { src = ./a; })) null' 'trace ./a null' +rm -rf -- * + +# For a tree assigning each subpath true/false, +# check whether a source filter with those results includes the same files +# as a file set created using fromSource. Usage: +# +# tree=( +# [a]=1 # ./a is a file and the filter should return true for it +# [b/]=0 # ./b is a directory and the filter should return false for it +# ) +# checkSource +checkSource() { + createTree + + # Serialise the tree as JSON (there's only minimal savings with jq, + # and we don't need to handle escapes) + { + echo "{" + first=1 + for p in "${!tree[@]}"; do + if [[ -z "$first" ]]; then + echo "," + else + first= + fi + echo "\"$p\":" + case "${tree[$p]}" in + 1) + echo "true" + ;; + 0) + echo "false" + ;; + *) + die "Unsupported tree value: ${tree[$p]}" + esac + done + echo "}" + } > "$tmp/tree.json" + + # An expression to create a source value with a filter matching the tree + sourceExpr=' + let + tree = importJSON '"$tmp"'/tree.json; + in + cleanSourceWith { + src = ./.; + filter = + pathString: type: + let + stripped = removePrefix (toString ./. + "/") pathString; + key = stripped + optionalString (type == "directory") "/"; + in + tree.${key} or + (throw "tree key ${key} missing"); + } + ' + + filesetExpr=' + toSource { + root = ./.; + fileset = fromSource ('"$sourceExpr"'); + } + ' + + # Turn both into store paths + sourceStorePath=$(expectStorePath "$sourceExpr") + filesetStorePath=$(expectStorePath "$filesetExpr") + + # Loop through each path in the tree + while IFS= read -r -d $'\0' subpath; do + if [[ ! -e "$sourceStorePath"/"$subpath" ]]; then + # If it's not in the source store path, it's also not in the file set store path + if [[ -e "$filesetStorePath"/"$subpath" ]]; then + die "The store path $sourceStorePath created by $expr doesn't contain $subpath, but the corresponding store path $filesetStorePath created via fromSource does contain $subpath" + fi + elif [[ -z "$(find "$sourceStorePath"/"$subpath" -type f)" ]]; then + # If it's an empty directory in the source store path, it shouldn't be in the file set store path + if [[ -e "$filesetStorePath"/"$subpath" ]]; then + die "The store path $sourceStorePath created by $expr contains the path $subpath without any files, but the corresponding store path $filesetStorePath created via fromSource didn't omit it" + fi + else + # If it's non-empty directory or a file, it should be in the file set store path + if [[ ! 
-e "$filesetStorePath"/"$subpath" ]]; then + die "The store path $sourceStorePath created by $expr contains the non-empty path $subpath, but the corresponding store path $filesetStorePath created via fromSource doesn't include it" + fi + fi + done < <(find . -mindepth 1 -print0) + + rm -rf -- * +} + +# Check whether the filter is evaluated correctly +tree=( + [a]= + [b/]= + [b/c]= + [b/d]= + [e/]= + [e/e/]= +) +# We fill out the above tree values with all possible combinations of 0 and 1 +# Then check whether a filter based on those return values gets turned into the corresponding file set +for i in $(seq 0 $((2 ** ${#tree[@]} - 1 ))); do + for p in "${!tree[@]}"; do + tree[$p]=$(( i % 2 )) + (( i /= 2 )) || true + done + checkSource +done + +# The filter is called with the same arguments in the same order +mkdir a e +touch a/b a/c d e +expectEqual ' + trace (fromSource (cleanSourceWith { + src = ./.; + filter = pathString: type: builtins.trace "${pathString} ${toString type}" true; + })) null +' ' + builtins.seq (cleanSourceWith { + src = ./.; + filter = pathString: type: builtins.trace "${pathString} ${toString type}" true; + }).outPath + builtins.trace "'"$work"' (all files in directory)" + null +' +rm -rf -- * + +# Test that if a directory is not included, the filter isn't called on its contents +mkdir a b +touch a/c b/d +expectEqual 'trace (fromSource (cleanSourceWith { + src = ./.; + filter = pathString: type: + if pathString == toString ./a then + false + else if pathString == toString ./b then + true + else if pathString == toString ./b/d then + true + else + abort "This filter should not be called with path ${pathString}"; +})) null' 'trace (_create ./. { b = "directory"; }) null' +rm -rf -- * + +# The filter is called lazily: +# If a later say intersection removes a part of the tree, the filter won't run on it +mkdir a d +touch a/{b,c} d/e +expectEqual 'trace (intersection ./a (fromSource (lib.cleanSourceWith { + src = ./.; + filter = pathString: type: + if pathString == toString ./a || pathString == toString ./a/b then + true + else if pathString == toString ./a/c then + false + else + abort "filter should not be called on ${pathString}"; +}))) null' 'trace ./a/b null' +rm -rf -- * + +## lib.fileset.gitTracked/gitTrackedWith + +# The first/second argument has to be a path +expectFailure 'gitTracked null' 'lib.fileset.gitTracked: Expected the argument to be a path, but it'\''s a null instead.' +expectFailure 'gitTrackedWith {} null' 'lib.fileset.gitTrackedWith: Expected the second argument to be a path, but it'\''s a null instead.' + +# The path has to contain a .git directory +expectFailure 'gitTracked ./.' 'lib.fileset.gitTracked: Expected the argument \('"$work"'\) to point to a local working tree of a Git repository, but it'\''s not.' +expectFailure 'gitTrackedWith {} ./.' 'lib.fileset.gitTrackedWith: Expected the second argument \('"$work"'\) to point to a local working tree of a Git repository, but it'\''s not.' + +# recurseSubmodules has to be a boolean +expectFailure 'gitTrackedWith { recurseSubmodules = null; } ./.' 'lib.fileset.gitTrackedWith: Expected the attribute `recurseSubmodules` of the first argument to be a boolean, but it'\''s a null instead.' 
+ +# recurseSubmodules = true is not supported on all Nix versions +if [[ "$(nix-instantiate --eval --expr "$(prefixExpression) (versionAtLeast builtins.nixVersion _fetchGitSubmodulesMinver)")" == true ]]; then + fetchGitSupportsSubmodules=1 +else + fetchGitSupportsSubmodules= + expectFailure 'gitTrackedWith { recurseSubmodules = true; } ./.' 'lib.fileset.gitTrackedWith: Setting the attribute `recurseSubmodules` to `true` is only supported for Nix version 2.4 and after, but Nix version [0-9.]+ is used.' +fi + +# Checks that `gitTrackedWith` contains the same files as `git ls-files` +# for the current working directory. +# If --recurse-submodules is passed, the flag is passed through to `git ls-files` +# and as `recurseSubmodules` to `gitTrackedWith` +checkGitTrackedWith() { + if [[ "${1:-}" == "--recurse-submodules" ]]; then + gitLsFlags="--recurse-submodules" + gitTrackedArg="{ recurseSubmodules = true; }" + else + gitLsFlags="" + gitTrackedArg="{ }" + fi + + # All files listed by `git ls-files` + expectedFiles=() + while IFS= read -r -d $'\0' file; do + # If there are submodules but --recurse-submodules isn't passed, + # `git ls-files` lists them as empty directories, + # we need to filter that out since we only want to check/count files + if [[ -f "$file" ]]; then + expectedFiles+=("$file") + fi + done < <(git ls-files -z $gitLsFlags) + + storePath=$(expectStorePath 'toSource { root = ./.; fileset = gitTrackedWith '"$gitTrackedArg"' ./.; }') + + # Check that each expected file is also in the store path with the same content + for expectedFile in "${expectedFiles[@]}"; do + if [[ ! -e "$storePath"/"$expectedFile" ]]; then + die "Expected file $expectedFile to exist in $storePath, but it doesn't.\nGit status:\n$(git status)\nStore path contents:\n$(find "$storePath")" + fi + if ! diff "$expectedFile" "$storePath"/"$expectedFile"; then + die "Expected file $expectedFile to have the same contents as in $storePath, but it doesn't.\nGit status:\n$(git status)\nStore path contents:\n$(find "$storePath")" + fi + done + + # This is a cheap way to verify the inverse: That all files in the store path are also expected + # We just count the number of files in both and verify they're the same + actualFileCount=$(find "$storePath" -type f -printf . | wc -c) + if [[ "${#expectedFiles[@]}" != "$actualFileCount" ]]; then + die "Expected ${#expectedFiles[@]} files in $storePath, but got $actualFileCount.\nGit status:\n$(git status)\nStore path contents:\n$(find "$storePath")" + fi +} + + +# Runs checkGitTrackedWith with and without --recurse-submodules +# Allows testing both variants together +checkGitTracked() { + checkGitTrackedWith + if [[ -n "$fetchGitSupportsSubmodules" ]]; then + checkGitTrackedWith --recurse-submodules + fi +} + +createGitRepo() { + git init -q "$1" + # Only repo-local config + git -C "$1" config user.name "Nixpkgs" + git -C "$1" config user.email "nixpkgs@nixos.org" + # Get at least a HEAD commit, needed for older Nix versions + git -C "$1" commit -q --allow-empty -m "Empty commit" +} + +# Check the error message for pure eval mode +createGitRepo . +expectFailure --simulate-pure-eval 'toSource { root = ./.; fileset = gitTracked ./.; }' 'lib.fileset.gitTracked: This function is currently not supported in pure evaluation mode, since it currently relies on `builtins.fetchGit`. See https://github.com/NixOS/nix/issues/9292.' 
+expectFailure --simulate-pure-eval 'toSource { root = ./.; fileset = gitTrackedWith {} ./.; }' 'lib.fileset.gitTrackedWith: This function is currently not supported in pure evaluation mode, since it currently relies on `builtins.fetchGit`. See https://github.com/NixOS/nix/issues/9292.' +rm -rf -- * + +# Go through all stages of Git files +# See https://www.git-scm.com/book/en/v2/Git-Basics-Recording-Changes-to-the-Repository + +# Empty repository +createGitRepo . +checkGitTracked + +# Untracked file +echo a > a +checkGitTracked + +# Staged file +git add a +checkGitTracked + +# Committed file +git commit -q -m "Added a" +checkGitTracked + +# Edited file +echo b > a +checkGitTracked + +# Removed file +git rm -f -q a +checkGitTracked + +rm -rf -- * + +# gitignored file +createGitRepo . +echo a > .gitignore +touch a +git add -A +checkGitTracked + +# Add it regardless (needs -f) +git add -f a +checkGitTracked +rm -rf -- * + +# Directory +createGitRepo . +mkdir -p d1/d2/d3 +touch d1/d2/d3/a +git add d1 +checkGitTracked +rm -rf -- * + +# Submodules +createGitRepo . +createGitRepo sub + +# Untracked submodule +git -C sub commit -q --allow-empty -m "Empty commit" +checkGitTracked + +# Tracked submodule +git submodule add ./sub sub >/dev/null +checkGitTracked + +# Untracked file +echo a > sub/a +checkGitTracked + +# Staged file +git -C sub add a +checkGitTracked + +# Committed file +git -C sub commit -q -m "Add a" +checkGitTracked + +# Changed file +echo b > sub/b +checkGitTracked + +# Removed file +git -C sub rm -f -q a +checkGitTracked + +rm -rf -- * + # TODO: Once we have combinators and a property testing library, derive property tests from https://en.wikipedia.org/wiki/Algebra_of_sets echo >&2 tests ok diff --git a/third_party/nixpkgs/lib/fixed-points.nix b/third_party/nixpkgs/lib/fixed-points.nix index 3444e95e15..3b5fdc9e8e 100644 --- a/third_party/nixpkgs/lib/fixed-points.nix +++ b/third_party/nixpkgs/lib/fixed-points.nix @@ -45,7 +45,7 @@ rec { } ``` - This is where `fix` comes in, it contains the syntactic that's not in `f` anymore. + This is where `fix` comes in, it contains the syntactic recursion that's not in `f` anymore. 
```nix nix-repl> fix = f: diff --git a/third_party/nixpkgs/lib/licenses.nix b/third_party/nixpkgs/lib/licenses.nix index d9555ca66c..ad6922498a 100644 --- a/third_party/nixpkgs/lib/licenses.nix +++ b/third_party/nixpkgs/lib/licenses.nix @@ -516,17 +516,17 @@ in mkLicense lset) ({ generaluser = { fullName = "GeneralUser GS License v2.0"; - url = "http://www.schristiancollins.com/generaluser.php"; # license included in sources + url = "https://www.schristiancollins.com/generaluser.php"; # license included in sources }; gfl = { fullName = "GUST Font License"; - url = "http://www.gust.org.pl/fonts/licenses/GUST-FONT-LICENSE.txt"; + url = "https://www.gust.org.pl/projects/e-foundry/licenses/GUST-FONT-LICENSE.txt"; }; gfsl = { fullName = "GUST Font Source License"; - url = "http://www.gust.org.pl/fonts/licenses/GUST-FONT-SOURCE-LICENSE.txt"; + url = "https://www.gust.org.pl/projects/e-foundry/licenses/GUST-FONT-SOURCE-LICENSE.txt"; }; gpl1Only = { @@ -613,7 +613,7 @@ in mkLicense lset) ({ info-zip = { spdxId = "Info-ZIP"; fullName = "Info-ZIP License"; - url = "http://www.info-zip.org/pub/infozip/license.html"; + url = "https://infozip.sourceforge.net/license.html"; }; inria-compcert = { @@ -877,6 +877,21 @@ in mkLicense lset) ({ fullName = "Non-Profit Open Software License 3.0"; }; + nvidiaCuda = { + shortName = "CUDA EULA"; + fullName = "CUDA Toolkit End User License Agreement (EULA)"; + url = "https://docs.nvidia.com/cuda/eula/index.html#cuda-toolkit-supplement-license-agreement"; + free = false; + }; + + nvidiaCudaRedist = { + shortName = "CUDA EULA"; + fullName = "CUDA Toolkit End User License Agreement (EULA)"; + url = "https://docs.nvidia.com/cuda/eula/index.html#cuda-toolkit-supplement-license-agreement"; + free = false; + redistributable = true; + }; + obsidian = { fullName = "Obsidian End User Agreement"; url = "https://obsidian.md/eula"; @@ -1167,7 +1182,7 @@ in mkLicense lset) ({ xfig = { fullName = "xfig"; - url = "http://mcj.sourceforge.net/authors.html#xfig"; # https is broken + url = "https://mcj.sourceforge.net/authors.html#xfig"; }; zlib = { diff --git a/third_party/nixpkgs/lib/lists.nix b/third_party/nixpkgs/lib/lists.nix index 3835e3ba69..15047f488f 100644 --- a/third_party/nixpkgs/lib/lists.nix +++ b/third_party/nixpkgs/lib/lists.nix @@ -821,6 +821,19 @@ rec { */ unique = foldl' (acc: e: if elem e acc then acc else acc ++ [ e ]) []; + /* Check if list contains only unique elements. O(n^2) complexity. + + Type: allUnique :: [a] -> bool + + Example: + allUnique [ 3 2 3 4 ] + => false + allUnique [ 3 2 4 1 ] + => true + */ + allUnique = list: (length (unique list) == length list); + + /* Intersects list 'e' and another list. O(nm) complexity. 
Example: diff --git a/third_party/nixpkgs/lib/meta.nix b/third_party/nixpkgs/lib/meta.nix index 44730a7155..2e817c4232 100644 --- a/third_party/nixpkgs/lib/meta.nix +++ b/third_party/nixpkgs/lib/meta.nix @@ -162,5 +162,12 @@ rec { getExe' pkgs.imagemagick "convert" => "/nix/store/5rs48jamq7k6sal98ymj9l4k2bnwq515-imagemagick-7.1.1-15/bin/convert" */ - getExe' = x: y: "${lib.getBin x}/bin/${y}"; + getExe' = x: y: + assert lib.assertMsg (lib.isDerivation x) + "lib.meta.getExe': The first argument is of type ${builtins.typeOf x}, but it should be a derivation instead."; + assert lib.assertMsg (lib.isString y) + "lib.meta.getExe': The second argument is of type ${builtins.typeOf y}, but it should be a string instead."; + assert lib.assertMsg (builtins.length (lib.splitString "/" y) == 1) + "lib.meta.getExe': The second argument \"${y}\" is a nested path with a \"/\" character, but it should just be the name of the executable instead."; + "${lib.getBin x}/bin/${y}"; } diff --git a/third_party/nixpkgs/lib/strings.nix b/third_party/nixpkgs/lib/strings.nix index 628669d86b..695aaaacd3 100644 --- a/third_party/nixpkgs/lib/strings.nix +++ b/third_party/nixpkgs/lib/strings.nix @@ -144,6 +144,20 @@ rec { */ concatLines = concatMapStrings (s: s + "\n"); + /* + Replicate a string n times, + and concatenate the parts into a new string. + + Type: replicate :: int -> string -> string + + Example: + replicate 3 "v" + => "vvv" + replicate 5 "hello" + => "hellohellohellohellohello" + */ + replicate = n: s: concatStrings (lib.lists.replicate n s); + /* Construct a Unix-style, colon-separated search path consisting of the given `subDir` appended to each of the given paths. diff --git a/third_party/nixpkgs/lib/systems/default.nix b/third_party/nixpkgs/lib/systems/default.nix index 2790ea08d9..ada8c66e36 100644 --- a/third_party/nixpkgs/lib/systems/default.nix +++ b/third_party/nixpkgs/lib/systems/default.nix @@ -43,6 +43,10 @@ rec { elaborate = args': let args = if lib.isString args' then { system = args'; } else args'; + + # TODO: deprecate args.rustc in favour of args.rust after 23.05 is EOL. + rust = assert !(args ? rust && args ? rustc); args.rust or args.rustc or {}; + final = { # Prefer to parse `config` as it is strictly more informative. parsed = parse.mkSystemFromString (if args ? config then args.config else args.system); @@ -159,9 +163,101 @@ rec { ({ linux-kernel = args.linux-kernel or {}; gcc = args.gcc or {}; - rustc = args.rustc or {}; } // platforms.select final) - linux-kernel gcc rustc; + linux-kernel gcc; + + # TODO: remove after 23.05 is EOL, with an error pointing to the rust.* attrs. + rustc = args.rustc or {}; + + rust = rust // { + # Once args.rustc.platform.target-family is deprecated and + # removed, there will no longer be any need to modify any + # values from args.rust.platform, so we can drop all the + # "args ? rust" etc. checks, and merge args.rust.platform in + # /after/. + platform = rust.platform or {} // { + # https://doc.rust-lang.org/reference/conditional-compilation.html#target_arch + arch = + /**/ if rust ? platform then rust.platform.arch + else if final.isAarch32 then "arm" + else if final.isMips64 then "mips64" # never add "el" suffix + else if final.isPower64 then "powerpc64" # never add "le" suffix + else final.parsed.cpu.name; + + # https://doc.rust-lang.org/reference/conditional-compilation.html#target_os + os = + /**/ if rust ? 
platform then rust.platform.os or "none" + else if final.isDarwin then "macos" + else final.parsed.kernel.name; + + # https://doc.rust-lang.org/reference/conditional-compilation.html#target_family + target-family = + /**/ if args ? rust.platform.target-family then args.rust.platform.target-family + else if args ? rustc.platform.target-family + then + ( + # Since https://github.com/rust-lang/rust/pull/84072 + # `target-family` is a list instead of single value. + let + f = args.rustc.platform.target-family; + in + if builtins.isList f then f else [ f ] + ) + else lib.optional final.isUnix "unix" + ++ lib.optional final.isWindows "windows"; + + # https://doc.rust-lang.org/reference/conditional-compilation.html#target_vendor + vendor = let + inherit (final.parsed) vendor; + in rust.platform.vendor or { + "w64" = "pc"; + }.${vendor.name} or vendor.name; + }; + + # The name of the rust target, even if it is custom. Adjustments are + # because rust has slightly different naming conventions than we do. + rustcTarget = let + inherit (final.parsed) cpu kernel abi; + cpu_ = rust.platform.arch or { + "armv7a" = "armv7"; + "armv7l" = "armv7"; + "armv6l" = "arm"; + "armv5tel" = "armv5te"; + "riscv64" = "riscv64gc"; + }.${cpu.name} or cpu.name; + vendor_ = final.rust.platform.vendor; + in rust.config + or "${cpu_}-${vendor_}-${kernel.name}${lib.optionalString (abi.name != "unknown") "-${abi.name}"}"; + + # The name of the rust target if it is standard, or the json file + # containing the custom target spec. + rustcTargetSpec = + /**/ if rust ? platform + then builtins.toFile (final.rust.rustcTarget + ".json") (builtins.toJSON rust.platform) + else final.rust.rustcTarget; + + # The name of the rust target if it is standard, or the + # basename of the file containing the custom target spec, + # without the .json extension. + # + # This is the name used by Cargo for target subdirectories. + cargoShortTarget = + lib.removeSuffix ".json" (baseNameOf "${final.rust.rustcTargetSpec}"); + + # When used as part of an environment variable name, triples are + # uppercased and have all hyphens replaced by underscores: + # + # https://github.com/rust-lang/cargo/pull/9169 + # https://github.com/rust-lang/cargo/issues/8285#issuecomment-634202431 + cargoEnvVarTarget = + lib.strings.replaceStrings ["-"] ["_"] + (lib.strings.toUpper final.rust.cargoShortTarget); + + # True if the target is no_std + # https://github.com/rust-lang/rust/blob/2e44c17c12cec45b6a682b1e53a04ac5b5fcc9d2/src/bootstrap/config.rs#L415-L421 + isNoStdTarget = + builtins.any (t: lib.hasInfix t final.rust.rustcTarget) ["-none" "nvptx" "switch" "-uefi"]; + }; linuxArch = if final.isAarch32 then "arm" diff --git a/third_party/nixpkgs/lib/systems/examples.nix b/third_party/nixpkgs/lib/systems/examples.nix index 0e704b7d7d..75578b9749 100644 --- a/third_party/nixpkgs/lib/systems/examples.nix +++ b/third_party/nixpkgs/lib/systems/examples.nix @@ -115,6 +115,7 @@ rec { }; gnu64 = { config = "x86_64-unknown-linux-gnu"; }; + gnu64_simplekernel = gnu64 // platforms.pc_simplekernel; # see test/cross/default.nix gnu32 = { config = "i686-unknown-linux-gnu"; }; musl64 = { config = "x86_64-unknown-linux-musl"; }; diff --git a/third_party/nixpkgs/lib/systems/inspect.nix b/third_party/nixpkgs/lib/systems/inspect.nix index 022e459c39..073df78797 100644 --- a/third_party/nixpkgs/lib/systems/inspect.nix +++ b/third_party/nixpkgs/lib/systems/inspect.nix @@ -100,6 +100,32 @@ rec { ]; }; + # given two patterns, return a pattern which is their logical AND. 
+ # Since a pattern is a list-of-disjuncts, this needs to + patternLogicalAnd = pat1_: pat2_: + let + # patterns can be either a list or a (bare) singleton; turn + # them into singletons for uniform handling + pat1 = lib.toList pat1_; + pat2 = lib.toList pat2_; + in + lib.concatMap (attr1: + map (attr2: + lib.recursiveUpdateUntil + (path: subattr1: subattr2: + if (builtins.intersectAttrs subattr1 subattr2) == {} || subattr1 == subattr2 + then true + else throw '' + pattern conflict at path ${toString path}: + ${builtins.toJSON subattr1} + ${builtins.toJSON subattr2} + '') + attr1 + attr2 + ) + pat2) + pat1; + matchAnyAttrs = patterns: if builtins.isList patterns then attrs: any (pattern: matchAttrs pattern attrs) patterns else matchAttrs patterns; diff --git a/third_party/nixpkgs/lib/systems/parse.nix b/third_party/nixpkgs/lib/systems/parse.nix index 34bfd94b3c..b69ad669e1 100644 --- a/third_party/nixpkgs/lib/systems/parse.nix +++ b/third_party/nixpkgs/lib/systems/parse.nix @@ -29,6 +29,15 @@ let assert type.check value; setType type.name ({ inherit name; } // value)); + # gnu-config will ignore the portion of a triple matching the + # regex `e?abi.*$` when determining the validity of a triple. In + # other words, `i386-linuxabichickenlips` is a valid triple. + removeAbiSuffix = x: + let match = builtins.match "(.*)e?abi.*" x; + in if match==null + then x + else lib.elemAt match 0; + in rec { @@ -466,7 +475,7 @@ rec { else vendors.unknown; kernel = if hasPrefix "darwin" args.kernel then getKernel "darwin" else if hasPrefix "netbsd" args.kernel then getKernel "netbsd" - else getKernel args.kernel; + else getKernel (removeAbiSuffix args.kernel); abi = /**/ if args ? abi then getAbi args.abi else if isLinux parsed || isWindows parsed then diff --git a/third_party/nixpkgs/lib/tests/filesystem.sh b/third_party/nixpkgs/lib/tests/filesystem.sh index cfd333d000..7e7e03bc66 100755 --- a/third_party/nixpkgs/lib/tests/filesystem.sh +++ b/third_party/nixpkgs/lib/tests/filesystem.sh @@ -64,8 +64,14 @@ expectSuccess "pathType $PWD/directory" '"directory"' expectSuccess "pathType $PWD/regular" '"regular"' expectSuccess "pathType $PWD/symlink" '"symlink"' expectSuccess "pathType $PWD/fifo" '"unknown"' -# Different errors depending on whether the builtins.readFilePath primop is available or not -expectFailure "pathType $PWD/non-existent" "error: (evaluation aborted with the following error message: 'lib.filesystem.pathType: Path $PWD/non-existent does not exist.'|getting status of '$PWD/non-existent': No such file or directory)" + +# Only check error message when a Nixpkgs-specified error is thrown, +# which is only the case when `readFileType` is not available +# and the fallback implementation needs to be used. +if [[ "$(nix-instantiate --eval --expr 'builtins ? readFileType')" == false ]]; then + expectFailure "pathType $PWD/non-existent" \ + "error: evaluation aborted with the following error message: 'lib.filesystem.pathType: Path $PWD/non-existent does not exist.'" +fi expectSuccess "pathIsDirectory /." 
"true" expectSuccess "pathIsDirectory $PWD/directory" "true" diff --git a/third_party/nixpkgs/lib/tests/misc.nix b/third_party/nixpkgs/lib/tests/misc.nix index 2e7fda2b1f..06cb5e763e 100644 --- a/third_party/nixpkgs/lib/tests/misc.nix +++ b/third_party/nixpkgs/lib/tests/misc.nix @@ -191,6 +191,11 @@ runTests { expected = "a\nb\nc\n"; }; + testReplicateString = { + expr = strings.replicate 5 "hello"; + expected = "hellohellohellohellohello"; + }; + testSplitStringsSimple = { expr = strings.splitString "." "a.b.c.d"; expected = [ "a" "b" "c" "d" ]; @@ -721,6 +726,15 @@ runTests { expected = 7; }; + testAllUnique_true = { + expr = allUnique [ 3 2 4 1 ]; + expected = true; + }; + testAllUnique_false = { + expr = allUnique [ 3 2 3 4 ]; + expected = false; + }; + # ATTRSETS testConcatMapAttrs = { @@ -1906,4 +1920,32 @@ runTests { expr = (with types; either int (listOf (either bool str))).description; expected = "signed integer or list of (boolean or string)"; }; + +# Meta + testGetExe'Output = { + expr = getExe' { + type = "derivation"; + out = "somelonghash"; + bin = "somelonghash"; + } "executable"; + expected = "somelonghash/bin/executable"; + }; + + testGetExeOutput = { + expr = getExe { + type = "derivation"; + out = "somelonghash"; + bin = "somelonghash"; + meta.mainProgram = "mainProgram"; + }; + expected = "somelonghash/bin/mainProgram"; + }; + + testGetExe'FailureFirstArg = testingThrow ( + getExe' "not a derivation" "executable" + ); + + testGetExe'FailureSecondArg = testingThrow ( + getExe' { type = "derivation"; } "dir/executable" + ); } diff --git a/third_party/nixpkgs/lib/tests/release.nix b/third_party/nixpkgs/lib/tests/release.nix index c8d6b81012..6e5b071173 100644 --- a/third_party/nixpkgs/lib/tests/release.nix +++ b/third_party/nixpkgs/lib/tests/release.nix @@ -25,11 +25,13 @@ let ]; nativeBuildInputs = [ nix + pkgs.gitMinimal ] ++ lib.optional pkgs.stdenv.isLinux pkgs.inotify-tools; strictDeps = true; } '' datadir="${nix}/share" export TEST_ROOT=$(pwd)/test-tmp + export HOME=$(mktemp -d) export NIX_BUILD_HOOK= export NIX_CONF_DIR=$TEST_ROOT/etc export NIX_LOCALSTATE_DIR=$TEST_ROOT/var diff --git a/third_party/nixpkgs/lib/trivial.nix b/third_party/nixpkgs/lib/trivial.nix index c23fc6070b..a89c1aa25b 100644 --- a/third_party/nixpkgs/lib/trivial.nix +++ b/third_party/nixpkgs/lib/trivial.nix @@ -448,6 +448,40 @@ rec { isFunction = f: builtins.isFunction f || (f ? __functor && isFunction (f.__functor f)); + /* + `mirrorFunctionArgs f g` creates a new function `g'` with the same behavior as `g` (`g' x == g x`) + but its function arguments mirroring `f` (`lib.functionArgs g' == lib.functionArgs f`). + + Type: + mirrorFunctionArgs :: (a -> b) -> (a -> c) -> (a -> c) + + Example: + addab = {a, b}: a + b + addab { a = 2; b = 4; } + => 6 + lib.functionArgs addab + => { a = false; b = false; } + addab1 = attrs: addab attrs + 1 + addab1 { a = 2; b = 4; } + => 7 + lib.functionArgs addab1 + => { } + addab1' = lib.mirrorFunctionArgs addab addab1 + addab1' { a = 2; b = 4; } + => 7 + lib.functionArgs addab1' + => { a = false; b = false; } + */ + mirrorFunctionArgs = + # Function to provide the argument metadata + f: + let + fArgs = functionArgs f; + in + # Function to set the argument metadata to + g: + setFunctionArgs g fArgs; + /* Turns any non-callable values into constant functions. Returns callable values as is. 
diff --git a/third_party/nixpkgs/maintainers/maintainer-list.nix b/third_party/nixpkgs/maintainers/maintainer-list.nix index 6c834689f3..4a75aed9a6 100644 --- a/third_party/nixpkgs/maintainers/maintainer-list.nix +++ b/third_party/nixpkgs/maintainers/maintainer-list.nix @@ -371,6 +371,15 @@ githubId = 124545; name = "Anthony Cowley"; }; + acuteenvy = { + matrix = "@acuteenvy:matrix.org"; + github = "acuteenvy"; + githubId = 126529524; + name = "Lena"; + keys = [{ + fingerprint = "CE85 54F7 B9BC AC0D D648 5661 AB5F C04C 3C94 443F"; + }]; + }; adamcstephens = { email = "happy.plan4249@valkor.net"; matrix = "@adam:valkor.net"; @@ -446,6 +455,13 @@ githubId = 25236206; name = "Adrian Dole"; }; + adriangl = { + email = "adrian@lauterer.it"; + matrix = "@adriangl:pvv.ntnu.no"; + github = "adrlau"; + githubId = 25004152; + name = "Adrian Gunnar Lauterer"; + }; AdsonCicilioti = { name = "Adson Cicilioti"; email = "adson.cicilioti@live.com"; @@ -533,6 +549,12 @@ githubId = 732652; name = "Andreas Herrmann"; }; + ahoneybun = { + email = "aaron@system76.com"; + github = "ahoneybun"; + githubId = 4884946; + name = "Aaron Honeycutt"; + }; ahrzb = { email = "ahrzb5@gmail.com"; github = "ahrzb"; @@ -1274,6 +1296,9 @@ github = "antonmosich"; githubId = 27223336; name = "Anton Mosich"; + keys = [ { + fingerprint = "F401 287C 324F 0A1C B321 657B 9B96 97B8 FB18 7D14"; + } ]; }; antono = { email = "self@antono.info"; @@ -1383,6 +1408,12 @@ githubId = 59743220; name = "Vinícius Müller"; }; + arcuru = { + email = "patrick@jackson.dev"; + github = "arcuru"; + githubId = 160646; + name = "Patrick Jackson"; + }; ardumont = { email = "eniotna.t@gmail.com"; github = "ardumont"; @@ -1395,6 +1426,12 @@ githubId = 58516559; name = "Alexander Rezvov"; }; + argrat = { + email = "n.bertazzo@protonmail.com"; + github = "argrat"; + githubId = 98821629; + name = "Nicolò Bertazzo"; + }; arian-d = { email = "arianxdehghani@gmail.com"; github = "arian-d"; @@ -1762,12 +1799,6 @@ githubId = 1217745; name = "Aldwin Vlasblom"; }; - aveltras = { - email = "romain.viallard@outlook.fr"; - github = "aveltras"; - githubId = 790607; - name = "Romain Viallard"; - }; averelld = { email = "averell+nixos@rxd4.com"; github = "averelld"; @@ -2608,12 +2639,6 @@ githubId = 200617; name = "Ben Sima"; }; - bstrik = { - email = "dutchman55@gmx.com"; - github = "bstrik"; - githubId = 7716744; - name = "Berno Strik"; - }; btlvr = { email = "btlvr@protonmail.com"; github = "btlvr"; @@ -2766,6 +2791,12 @@ githubId = 7435854; name = "Victor Calvert"; }; + camelpunch = { + email = "me@andrewbruce.net"; + github = "camelpunch"; + githubId = 141733; + name = "Andrew Bruce"; + }; cameronfyfe = { email = "cameron.j.fyfe@gmail.com"; github = "cameronfyfe"; @@ -3045,6 +3076,9 @@ email = "chayleaf-nix@pavluk.org"; github = "chayleaf"; githubId = 9590981; + keys = [{ + fingerprint = "4314 3701 154D 9E5F 7051 7ECF 7817 1AD4 6227 E68E"; + }]; matrix = "@chayleaf:matrix.pavluk.org"; name = "Anna Pavlyuk"; }; @@ -3054,6 +3088,12 @@ githubId = 1689801; name = "Mikhail Chekan"; }; + chen = { + email = "i@cuichen.cc"; + github = "cu1ch3n"; + githubId = 80438676; + name = "Chen Cui"; + }; ChengCat = { email = "yu@cheng.cat"; github = "ChengCat"; @@ -3667,6 +3707,12 @@ githubId = 1222362; name = "Matías Lang"; }; + criyle = { + email = "i+nixos@goj.ac"; + name = "Yang Gao"; + githubId = 6821729; + github = "criyle"; + }; CRTified = { email = "carl.schneider+nixos@rub.de"; matrix = "@schnecfk:ruhr-uni-bochum.de"; @@ -3677,6 +3723,15 @@ fingerprint = "2017 E152 BB81 5C16 
955C E612 45BC C1E2 709B 1788"; }]; }; + Cryolitia = { + name = "Beiyan Cryolitia"; + email = "Cryolitia@gmail.com"; + github = "Cryolitia"; + githubId = 23723294; + keys = [{ + fingerprint = "1C3C 6547 538D 7152 310C 0EEA 84DD 0C01 30A5 4DF7"; + }]; + }; cryptix = { email = "cryptix@riseup.net"; github = "cryptix"; @@ -3880,12 +3935,25 @@ githubId = 50051176; name = "Daniel Rolls"; }; + danielsidhion = { + email = "nixpkgs@sidhion.com"; + github = "DanielSidhion"; + githubId = 160084; + name = "Daniel Sidhion"; + }; daniyalsuri6 = { email = "daniyal.suri@gmail.com"; github = "daniyalsuri6"; githubId = 107034852; name = "Daniyal Suri"; }; + dannixon = { + email = "dan@dan-nixon.com"; + github = "DanNixon"; + githubId = 4037377; + name = "Dan Nixon"; + matrix = "@dannixon:matrix.org"; + }; dansbandit = { github = "dansbandit"; githubId = 4530687; @@ -4174,6 +4242,12 @@ githubId = 12224254; name = "Delta"; }; + delta231 = { + email = "swstkbaranwal@gmail.com"; + github = "Delta456"; + githubId = 28479139; + name = "Swastik Baranwal"; + }; deltadelta = { email = "contact@libellules.eu"; name = "Dara Ly"; @@ -4192,6 +4266,12 @@ githubId = 5503422; name = "Dmitriy Demin"; }; + demine = { + email = "riches_tweaks0o@icloud.com"; + github = "demine0"; + githubId = 51992962; + name = "Nikita Demin"; + }; demize = { email = "johannes@kyriasis.com"; github = "kyrias"; @@ -4510,6 +4590,12 @@ githubId = 1708810; name = "Daniel Vianna"; }; + dmytrokyrychuk = { + email = "dmytro@kyrych.uk"; + github = "dmytrokyrychuk"; + githubId = 699961; + name = "Dmytro Kyrychuk"; + }; dnr = { email = "dnr@dnr.im"; github = "dnr"; @@ -5278,6 +5364,13 @@ fingerprint = "F178 B4B4 6165 6D1B 7C15 B55D 4029 3358 C7B9 326B"; }]; }; + ericthemagician = { + email = "eric@ericyen.com"; + matrix = "@eric:jupiterbroadcasting.com"; + github = "EricTheMagician"; + githubId = 323436; + name = "Eric Yen"; + }; erikarvstedt = { email = "erik.arvstedt@gmail.com"; matrix = "@erikarvstedt:matrix.org"; @@ -5938,6 +6031,11 @@ githubId = 119691; name = "Michael Gough"; }; + franciscod = { + github = "franciscod"; + githubId = 726447; + name = "Francisco Demartino"; + }; franzmondlichtmann = { name = "Franz Schroepf"; email = "franz-schroepf@t-online.de"; @@ -6014,6 +6112,10 @@ github = "frogamic"; githubId = 10263813; name = "Dominic Shelton"; + matrix = "@frogamic:beeper.com"; + keys = [{ + fingerprint = "779A 7CA8 D51C C53A 9C51 43F7 AAE0 70F0 67EC 00A5"; + }]; }; frontsideair = { email = "photonia@gmail.com"; @@ -6080,7 +6182,7 @@ }; fugi = { email = "me@fugi.dev"; - github = "FugiMuffi"; + github = "fugidev"; githubId = 21362942; name = "Fugi"; }; @@ -6189,6 +6291,16 @@ githubId = 45048741; name = "Alwanga Oyango"; }; + galaxy = { + email = "galaxy@dmc.chat"; + matrix = "@galaxy:mozilla.org"; + name = "The Galaxy"; + github = "ga1aksy"; + githubId = 148551648; + keys = [{ + fingerprint = "48CA 3873 9E9F CA8E 76A0 835A E3DE CF85 4212 E1EA"; + }]; + }; gal_bolle = { email = "florent.becker@ens-lyon.org"; github = "FlorentBecker"; @@ -6437,6 +6549,12 @@ githubId = 1713676; name = "Luis G. 
Torres"; }; + giomf = { + email = "giomf@mailbox.org"; + github = "giomf"; + githubId = 35076723; + name = "Guillaume Fournier"; + }; giorgiga = { email = "giorgio.gallo@bitnic.it"; github = "giorgiga"; @@ -6621,6 +6739,12 @@ githubId = 4656860; name = "Gaute Ravndal"; }; + gray-heron = { + email = "ave+nix@cezar.info"; + github = "gray-heron"; + githubId = 7032646; + name = "Cezary Siwek"; + }; graysonhead = { email = "grayson@graysonhead.net"; github = "graysonhead"; @@ -7218,6 +7342,7 @@ }; hubble = { name = "Hubble the Wolverine"; + email = "hubblethewolverine@gmail.com"; matrix = "@hubofeverything:bark.lgbt"; github = "the-furry-hubofeverything"; githubId = 53921912; @@ -7392,6 +7517,13 @@ githubId = 1550265; name = "Dominic Steinitz"; }; + iFreilicht = { + github = "iFreilicht"; + githubId = 9742635; + matrix = "@ifreilicht:matrix.org"; + email = "nixpkgs@mail.felix-uhl.de"; + name = "Felix Uhl"; + }; ifurther = { github = "ifurther"; githubId = 55025025; @@ -7421,6 +7553,12 @@ githubId = 25505957; name = "Ilian"; }; + iliayar = { + email = "iliayar3@gmail.com"; + github = "iliayar"; + githubId = 17529355; + name = "Ilya Yaroshevskiy"; + }; ilikeavocadoes = { email = "ilikeavocadoes@hush.com"; github = "ilikeavocadoes"; @@ -7589,6 +7727,12 @@ githubId = 88038050; name = "Souvik Sen"; }; + iogamaster = { + email = "iogamastercode+nixpkgs@gmail.com"; + name = "IogaMaster"; + github = "iogamaster"; + githubId = 67164465; + }; ionutnechita = { email = "ionut_n2001@yahoo.com"; github = "ionutnechita"; @@ -7833,6 +7977,12 @@ githubId = 2212681; name = "Jakub Grzgorz Sokołowski"; }; + jakuzure = { + email = "shin@posteo.jp"; + github = "jakuzure"; + githubId = 11823547; + name = "jakuzure"; + }; jali-clarke = { email = "jinnah.ali-clarke@outlook.com"; name = "Jinnah Ali-Clarke"; @@ -7893,6 +8043,12 @@ githubId = 488556; name = "Javier Aguirre"; }; + javimerino = { + email = "merino.jav@gmail.com"; + name = "Javi Merino"; + github = "JaviMerino"; + githubId = 44926; + }; jayesh-bhoot = { name = "Jayesh Bhoot"; email = "jb@jayeshbhoot.com"; @@ -8152,6 +8308,15 @@ githubId = 18501; name = "Julien Langlois"; }; + jfly = { + name = "Jeremy Fleischman"; + email = "jeremyfleischman@gmail.com"; + github = "jfly"; + githubId = 277474; + keys = [{ + fingerprint = "F1F1 3395 8E8E 9CC4 D9FC 9647 1931 9CD8 416A 642B"; + }]; + }; jfrankenau = { email = "johannes@frankenau.net"; github = "jfrankenau"; @@ -8289,6 +8454,12 @@ githubId = 3081095; name = "Jürgen Keck"; }; + jl178 = { + email = "jeredlittle1996@gmail.com"; + github = "jl178"; + githubId = 72664723; + name = "Jered Little"; + }; jlamur = { email = "contact@juleslamur.fr"; github = "jlamur"; @@ -9737,6 +9908,11 @@ }]; name = "Joseph LaFreniere"; }; + lagoja = { + github = "Lagoja"; + githubId =750845; + name = "John Lago"; + }; laikq = { email = "gwen@quasebarth.de"; github = "laikq"; @@ -10388,6 +10564,12 @@ githubId = 2487922; name = "Lars Jellema"; }; + ludat = { + email = "lucas6246@gmail.com"; + github = "ludat"; + githubId = 4952044; + name = "Lucas David Traverso"; + }; ludo = { email = "ludo@gnu.org"; github = "civodul"; @@ -10922,6 +11104,12 @@ githubId = 29855073; name = "Michael Colicchia"; }; + massimogengarelli = { + email = "massimo.gengarelli@gmail.com"; + github = "massix"; + githubId = 585424; + name = "Massimo Gengarelli"; + }; matejc = { email = "cotman.matej@gmail.com"; github = "matejc"; @@ -11043,6 +11231,12 @@ githubId = 11810057; name = "Matt Snider"; }; + matusf = { + email = "matus.ferech@gmail.com"; + github = 
"matusf"; + githubId = 18228995; + name = "Matúš Ferech"; + }; maurer = { email = "matthew.r.maurer+nix@gmail.com"; github = "maurer"; @@ -11554,6 +11748,12 @@ githubId = 34864484; name = "Mikael Fangel"; }; + mikecm = { + email = "mikecmcleod@gmail.com"; + github = "MaxwellDupre"; + githubId = 14096356; + name = "Michael McLeod"; + }; mikefaille = { email = "michael@faille.io"; github = "mikefaille"; @@ -11666,6 +11866,13 @@ githubId = 149558; name = "Merlin Gaillard"; }; + mirkolenz = { + name = "Mirko Lenz"; + email = "mirko@mirkolenz.com"; + matrix = "@mlenz:matrix.org"; + github = "mirkolenz"; + githubId = 5160954; + }; mirrexagon = { email = "mirrexagon@mirrexagon.com"; github = "mirrexagon"; @@ -12033,12 +12240,30 @@ github = "MrTarantoga"; githubId = 53876219; }; + mrtnvgr = { + name = "Egor Martynov"; + github = "mrtnvgr"; + githubId = 48406064; + keys = [{ + fingerprint = "6FAD DB43 D5A5 FE52 6835 0943 5B33 79E9 81EF 48B1"; + }]; + }; mrVanDalo = { email = "contact@ingolf-wagner.de"; github = "mrVanDalo"; githubId = 839693; name = "Ingolf Wanger"; }; + msanft = { + email = "moritz.sanft@outlook.de"; + matrix = "@msanft:matrix.org"; + name = "Moritz Sanft"; + github = "msanft"; + githubId = 58110325; + keys = [{ + fingerprint = "3CAC 1D21 3D97 88FF 149A E116 BB8B 30F5 A024 C31C"; + }]; + }; mschristiansen = { email = "mikkel@rheosystems.com"; github = "mschristiansen"; @@ -12269,6 +12494,11 @@ fingerprint = "9E6A 25F2 C1F2 9D76 ED00 1932 1261 173A 01E1 0298"; }]; }; + nadir-ishiguro = { + github = "nadir-ishiguro"; + githubId = 23151917; + name = "nadir-ishiguro"; + }; nadrieril = { email = "nadrieril@gmail.com"; github = "Nadrieril"; @@ -12312,6 +12542,11 @@ githubId = 6709831; name = "Jake Hill"; }; + nasageek = { + github = "NasaGeek"; + githubId = 474937; + name = "Chris Roberts"; + }; nasirhm = { email = "nasirhussainm14@gmail.com"; github = "nasirhm"; @@ -12702,13 +12937,6 @@ fingerprint = "9B1A 7906 5D2F 2B80 6C8A 5A1C 7D2A CDAF 4653 CF28"; }]; }; - ninjatrappeur = { - email = "felix@alternativebit.fr"; - matrix = "@ninjatrappeur:matrix.org"; - github = "NinjaTrappeur"; - githubId = 1219785; - name = "Félix Baylac-Jacqué"; - }; nintron = { email = "nintron@sent.com"; github = "Nintron27"; @@ -12748,6 +12976,11 @@ githubId = 66913205; name = "Rick Sanchez"; }; + nix-julia = { + name = "nix-julia"; + github = "nix-julia"; + githubId = 149073815; + }; nixy = { email = "nixy@nixy.moe"; github = "nixy"; @@ -13251,6 +13484,15 @@ githubId = 75299; name = "Malcolm Matalka"; }; + orhun = { + email = "orhunparmaksiz@gmail.com"; + github = "orhun"; + githubId = 24392180; + name = "Orhun Parmaksız"; + keys = [{ + fingerprint = "165E 0FF7 C48C 226E 1EC3 63A7 F834 2482 4B3E 4B90"; + }]; + }; orichter = { email = "richter-oliver@gmx.net"; github = "ORichterSec"; @@ -13495,12 +13737,6 @@ githubId = 6931743; name = "pasqui23"; }; - patricksjackson = { - email = "patrick@jackson.dev"; - github = "patricksjackson"; - githubId = 160646; - name = "Patrick Jackson"; - }; patryk27 = { email = "pwychowaniec@pm.me"; github = "Patryk27"; @@ -13522,6 +13758,11 @@ githubId = 15645854; name = "Brad Christensen"; }; + paumr = { + github = "paumr"; + name = "Michael Bergmeister"; + githubId = 53442728; + }; paveloom = { email = "paveloom@riseup.net"; github = "paveloom"; @@ -13583,6 +13824,7 @@ pbsds = { name = "Peder Bergebakken Sundt"; email = "pbsds@hotmail.com"; + matrix = "@pederbs:pvv.ntnu.no"; github = "pbsds"; githubId = 140964; }; @@ -13634,6 +13876,12 @@ githubId = 152312; name = "Periklis 
Tsirakidis"; }; + perstark = { + email = "perstark.se@gmail.com"; + github = "perstarkse"; + githubId = 63069986; + name = "Per Stark"; + }; petercommand = { email = "petercommand@gmail.com"; github = "petercommand"; @@ -13751,6 +13999,12 @@ githubId = 9267430; name = "Philipp Mildenberger"; }; + philiptaron = { + email = "philip.taron@gmail.com"; + github = "philiptaron"; + githubId = 43863; + name = "Philip Taron"; + }; phip1611 = { email = "phip1611@gmail.com"; github = "phip1611"; @@ -13787,6 +14041,13 @@ githubId = 627831; name = "Hoang Xuan Phu"; }; + picnoir = { + email = "felix@alternativebit.fr"; + matrix = "@picnoir:alternativebit.fr"; + github = "picnoir"; + githubId = 1219785; + name = "Félix Baylac-Jacqué"; + }; piegames = { name = "piegames"; email = "nix@piegames.de"; @@ -13902,6 +14163,12 @@ githubId = 610615; name = "Chih-Mao Chen"; }; + pks = { + email = "ps@pks.im"; + github = "pks-t"; + githubId = 4056630; + name = "Patrick Steinhardt"; + }; plabadens = { name = "Pierre Labadens"; email = "labadens.pierre+nixpkgs@gmail.com"; @@ -13938,12 +14205,25 @@ githubId = 7839004; name = "Dmitriy Pleshevskiy"; }; + pluiedev = { + email = "hi@pluie.me"; + github = "pluiedev"; + githubId = 22406910; + name = "Leah Amelia Chen"; + }; plumps = { email = "maks.bronsky@web.de"; github = "plumps"; githubId = 13000278; name = "Maksim Bronsky"; }; + plusgut = { + name = "Carlo Jeske"; + email = "carlo.jeske+nixpkgs@webentwickler2-0.de"; + github = "plusgut"; + githubId = 277935; + matrix = "@plusgut5:matrix.org"; + }; PlushBeaver = { name = "Dmitry Kozlyuk"; email = "dmitry.kozliuk+nixpkgs@gmail.com"; @@ -14436,7 +14716,7 @@ }; quantenzitrone = { email = "quantenzitrone@protonmail.com"; - github = "Quantenzitrone"; + github = "quantenzitrone"; githubId = 74491719; matrix = "@quantenzitrone:matrix.org"; name = "quantenzitrone"; @@ -14630,6 +14910,12 @@ githubId = 145816; name = "David McKay"; }; + rayslash = { + email = "stevemathewjoy@tutanota.com"; + github = "rayslash"; + githubId = 45141270; + name = "Steve Mathew Joy"; + }; razvan = { email = "razvan.panda@gmail.com"; github = "freeman42x"; @@ -14804,6 +15090,12 @@ githubId = 165283; name = "Alexey Kutepov"; }; + rexxDigital = { + email = "joellarssonpriv@gmail.com"; + github = "rexxDigital"; + githubId = 44014925; + name = "Rexx Larsson"; + }; rgnns = { email = "jglievano@gmail.com"; github = "rgnns"; @@ -16042,6 +16334,12 @@ fingerprint = "AB63 4CD9 3322 BD42 6231 F764 C404 1EA6 B326 33DE"; }]; }; + shivaraj-bh = { + email = "sbh69840@gmail.com"; + name = "Shivaraj B H"; + github = "shivaraj-bh"; + githubId = 23645788; + }; shlevy = { email = "shea@shealevy.com"; github = "shlevy"; @@ -16260,11 +16558,10 @@ githubId = 158321; name = "Stewart Mackenzie"; }; - skeidel = { - email = "svenkeidel@gmail.com"; - github = "svenkeidel"; - githubId = 266500; - name = "Sven Keidel"; + skovati = { + github = "skovati"; + githubId = 49844593; + name = "skovati"; }; skykanin = { github = "skykanin"; @@ -16386,6 +16683,16 @@ github = "SnO2WMaN"; githubId = 15155608; }; + snowflake = { + email = "snowflake@pissmail.com"; + name = "Snowflake"; + github = "snf1k"; + githubId = 149651684; + matrix = "@snowflake:mozilla.org"; + keys = [{ + fingerprint = "8223 7B6F 2FF4 8F16 B652 6CA3 934F 9E5F 9701 2C0B"; + }]; + }; snpschaaf = { email = "philipe.schaaf@secunet.com"; name = "Philippe Schaaf"; @@ -17017,6 +17324,12 @@ githubId = 7075751; name = "Patrick Hilhorst"; }; + sysedwinistrator = { + email = "edwin.mowen@gmail.com"; + github = "sysedwinistrator"; 
+ githubId = 71331875; + name = "Edwin Mackenzie-Owen"; + }; szczyp = { email = "qb@szczyp.com"; github = "Szczyp"; @@ -17142,6 +17455,12 @@ githubId = 1901799; name = "Nathan van Doorn"; }; + taranarmo = { + email = "taranarmo@gmail.com"; + github = "taranarmo"; + githubId = 11619234; + name = "Sergey Volkov"; + }; tari = { email = "peter@taricorp.net"; github = "tari"; @@ -17784,6 +18103,12 @@ githubId = 858790; name = "Tobias Mayer"; }; + tochiaha = { + email = "tochiahan@proton.me"; + github = "Tochiaha"; + githubId = 74688871; + name = "Tochukwu Ahanonu"; + }; tokudan = { email = "git@danielfrank.net"; github = "tokudan"; @@ -17829,6 +18154,10 @@ githubId = 13155277; name = "Tom Houle"; }; + tomkoid = { + email = "tomaszierl@outlook.com"; + name = "Tomkoid"; + }; tomodachi94 = { email = "tomodachi94+nixpkgs@protonmail.com"; matrix = "@tomodachi94:matrix.org"; @@ -17984,6 +18313,12 @@ githubId = 15064765; name = "tshaynik"; }; + tsowell = { + email = "tom@ldtlb.com"; + github = "tsowell"; + githubId = 4044033; + name = "Thomas Sowell"; + }; ttuegel = { email = "ttuegel@mailbox.org"; github = "ttuegel"; @@ -18608,6 +18943,12 @@ githubId = 7038383; name = "Vojta Káně"; }; + volfyd = { + email = "lb.nix@lisbethmail.com"; + github = "volfyd"; + githubId = 3578382; + name = "Leif Huhn"; + }; volhovm = { email = "volhovm.cs@gmail.com"; github = "volhovm"; @@ -18713,6 +19054,13 @@ fingerprint = "47F7 009E 3AE3 1DA7 988E 12E1 8C9B 0A8F C0C0 D862"; }]; }; + wamirez = { + email = "wamirez@protonmail.com"; + matrix = "@wamirez:matrix.org"; + github = "wamirez"; + githubId = 24505474; + name = "Daniel Ramirez"; + }; wamserma = { name = "Markus S. Wamser"; email = "github-dev@mail2013.wamser.eu"; @@ -18828,6 +19176,12 @@ fingerprint = "640B EDDE 9734 310A BFA3 B257 52ED AE6A 3995 AFAB"; }]; }; + whiteley = { + email = "mattwhiteley@gmail.com"; + github = "whiteley"; + githubId = 2215; + name = "Matt Whiteley"; + }; WhittlesJr = { email = "alex.joseph.whitt@gmail.com"; github = "WhittlesJr"; @@ -18941,11 +19295,11 @@ githubId = 168610; name = "Ricardo M. 
Correia"; }; - wjlroe = { - email = "willroe@gmail.com"; - github = "wjlroe"; - githubId = 43315; - name = "William Roe"; + wladmis = { + email = "dev@wladmis.org"; + github = "wladmis"; + githubId = 5000261; + name = "Wladmis"; }; wldhx = { email = "wldhx+nixpkgs@wldhx.me"; @@ -19125,11 +19479,11 @@ name = "Uli Baum"; }; xfix = { - email = "konrad@borowski.pw"; + email = "kamila@borowska.pw"; matrix = "@xfix:matrix.org"; github = "xfix"; githubId = 1297598; - name = "Konrad Borowski"; + name = "Kamila Borowska"; }; xfnw = { email = "xfnw+nixos@riseup.net"; @@ -19231,6 +19585,12 @@ github = "yanganto"; githubId = 10803111; }; + yannip = { + email = "yPapandreou7@gmail.com"; + github = "YanniPapandreou"; + githubId = 15948162; + name = "Yanni Papandreou"; + }; yarny = { github = "Yarny0"; githubId = 41838844; @@ -19275,6 +19635,13 @@ fingerprint = "FD0A C425 9EF5 4084 F99F 9B47 2ACC 9749 7C68 FAD4"; }]; }; + YellowOnion = { + name = "Daniel Hill"; + email = "daniel@gluo.nz"; + github = "YellowOnion"; + githubId = 364160; + matrix = "@woobilicious:matrix.org"; + }; yesbox = { email = "jesper.geertsen.jonsson@gmail.com"; github = "yesbox"; @@ -19336,6 +19703,11 @@ github = "ymeister"; githubId = 47071325; }; + ymstnt = { + name = "YMSTNT"; + github = "ymstnt"; + githubId = 21342713; + }; yoavlavi = { email = "yoav@yoavlavi.com"; github = "yoav-lavi"; @@ -19415,6 +19787,12 @@ fingerprint = "85F8 E850 F8F2 F823 F934 535B EC50 6589 9AEA AF4C"; }]; }; + yunfachi = { + email = "yunfachi@gmail.com"; + github = "yunfachi"; + githubId = 73419713; + name = "Yunfachi"; + }; yureien = { email = "contact@sohamsen.me"; github = "Yureien"; @@ -19655,6 +20033,12 @@ github = "zmitchell"; githubId = 10246891; }; + znaniye = { + email = "zn4niye@proton.me"; + github = "znaniye"; + githubId = 134703788; + name = "Samuel Silva"; + }; znewman01 = { email = "znewman01@gmail.com"; github = "znewman01"; diff --git a/third_party/nixpkgs/maintainers/scripts/haskell/hydra-report.hs b/third_party/nixpkgs/maintainers/scripts/haskell/hydra-report.hs index 5573e5e5af..2ce3ecb2ae 100755 --- a/third_party/nixpkgs/maintainers/scripts/haskell/hydra-report.hs +++ b/third_party/nixpkgs/maintainers/scripts/haskell/hydra-report.hs @@ -187,7 +187,7 @@ getBuildReports opt = runReq defaultHttpConfig do getEvalBuilds :: HydraSlownessWorkaroundFlag -> Int -> Req (Seq Build) getEvalBuilds NoHydraSlownessWorkaround id = - hydraJSONQuery (responseTimeout 900000000) ["eval", showT id, "builds"] + hydraJSONQuery mempty ["eval", showT id, "builds"] getEvalBuilds HydraSlownessWorkaround id = do Eval{builds} <- hydraJSONQuery mempty [ "eval", showT id ] forM builds $ \buildId -> do @@ -195,14 +195,15 @@ getEvalBuilds HydraSlownessWorkaround id = do hydraJSONQuery mempty [ "build", showT buildId ] hydraQuery :: HttpResponse a => Proxy a -> Option 'Https -> [Text] -> Req (HttpResponseBody a) -hydraQuery responseType option query = - responseBody - <$> req - GET - (foldl' (/:) (https "hydra.nixos.org") query) - NoReqBody - responseType - (header "User-Agent" "hydra-report.hs/v1 (nixpkgs;maintainers/scripts/haskell) pls fix https://github.com/NixOS/nixos-org-configurations/issues/270" <> option) +hydraQuery responseType option query = do + let customHeaderOpt = + header + "User-Agent" + "hydra-report.hs/v1 (nixpkgs;maintainers/scripts/haskell) pls fix https://github.com/NixOS/nixos-org-configurations/issues/270" + customTimeoutOpt = responseTimeout 900_000_000 -- 15 minutes + opts = customHeaderOpt <> customTimeoutOpt <> option + url = foldl' (/:) 
(https "hydra.nixos.org") query + responseBody <$> req GET url NoReqBody responseType opts hydraJSONQuery :: FromJSON a => Option 'Https -> [Text] -> Req a hydraJSONQuery = hydraQuery jsonResponse diff --git a/third_party/nixpkgs/maintainers/scripts/haskell/upload-nixos-package-list-to-hackage.sh b/third_party/nixpkgs/maintainers/scripts/haskell/upload-nixos-package-list-to-hackage.sh index 86fecbc3d8..9130941a53 100755 --- a/third_party/nixpkgs/maintainers/scripts/haskell/upload-nixos-package-list-to-hackage.sh +++ b/third_party/nixpkgs/maintainers/scripts/haskell/upload-nixos-package-list-to-hackage.sh @@ -39,5 +39,5 @@ fi package_list="$(nix-build -A haskell.package-list)/nixos-hackage-packages.csv" username=$(grep "^username:" "$CABAL_DIR/config" | sed "s/^username: //") password_command=$(grep "^password-command:" "$CABAL_DIR/config" | sed "s/^password-command: //") -curl -u "$username:$($password_command | head -n1)" --digest -H "Content-type: text/csv" -T "$package_list" http://hackage.haskell.org/distro/NixOS/packages.csv +curl -u "$username:$($password_command | head -n1)" --digest -H "Content-type: text/csv" -T "$package_list" https://hackage.haskell.org/distro/NixOS/packages.csv echo diff --git a/third_party/nixpkgs/maintainers/scripts/luarocks-packages.csv b/third_party/nixpkgs/maintainers/scripts/luarocks-packages.csv index 5897948a9f..78cfca24d9 100644 --- a/third_party/nixpkgs/maintainers/scripts/luarocks-packages.csv +++ b/third_party/nixpkgs/maintainers/scripts/luarocks-packages.csv @@ -1,9 +1,9 @@ name,src,ref,server,version,luaversion,maintainers alt-getopt,,,,,,arobyn bit32,,,,5.3.0-1,5.1,lblasc -argparse,https://github.com/luarocks/argparse.git,,,,, -basexx,https://github.com/teto/basexx.git,,,,, -binaryheap,https://github.com/Tieske/binaryheap.lua,,,,,vcunat +argparse,,,,,, +basexx,,,,,, +binaryheap,,,,,,vcunat busted,,,,,, cassowary,,,,,,marsam alerque cldr,,,,,,alerque @@ -12,8 +12,7 @@ cosmo,,,,,,marsam coxpcall,,,,1.17.0-1,, cqueues,,,,,,vcunat cyan,,,,,, -cyrussasl,https://github.com/JorjBauer/lua-cyrussasl.git,,,,, -digestif,https://github.com/astoff/digestif.git,,,0.2-1,5.3, +digestif,https://github.com/astoff/digestif.git,,,,5.3, dkjson,,,,,, fennel,,,,,,misterio77 fifo,,,,,, @@ -24,7 +23,7 @@ http,,,,0.3-0,,vcunat inspect,,,,,, jsregexp,,,,,, ldbus,,,http://luarocks.org/dev,,, -ldoc,https://github.com/stevedonovan/LDoc.git,,,,, +ldoc,,,,,, lgi,,,,,, linenoise,https://github.com/hoelzro/lua-linenoise.git,,,,, ljsyscall,,,,,5.1,lblasc @@ -40,7 +39,7 @@ lrexlib-posix,,,,,, lua-cjson,,,,,, lua-cmsgpack,,,,,, lua-curl,,,,,, -lua-iconv,,,,,, +lua-ffi-zlib,,,,,, lua-lsp,,,,,, lua-messagepack,,,,,, lua-protobuf,,,,,,lockejan @@ -49,6 +48,7 @@ lua-resty-jwt,,,,,, lua-resty-openidc,,,,,, lua-resty-openssl,,,,,, lua-resty-session,,,,,, +lua-rtoml,https://github.com/lblasc/lua-rtoml,,,,,lblasc lua-subprocess,https://github.com/0x0ade/lua-subprocess,,,,5.1,scoder12 lua-term,,,,,, lua-toml,,,,,, @@ -82,29 +82,30 @@ luaunit,,,,,,lockejan luautf8,,,,,,pstn luazip,,,,,, lua-yajl,,,,,,pstn +lua-iconv,,,,7.0.0,, luuid,,,,,, luv,,,,1.44.2-1,, lush.nvim,https://github.com/rktjmp/lush.nvim,,,,,teto lyaml,,,,,,lblasc -magick,,,,,,donovanglover +magick,,,,,5.1,donovanglover markdown,,,,,, mediator_lua,,,,,, middleclass,,,,,, mpack,,,,,, moonscript,https://github.com/leafo/moonscript.git,dev-1,,,,arobyn -nui-nvim,,,,,,mrcjkb +nui.nvim,,,,,,mrcjkb nvim-client,https://github.com/neovim/lua-client.git,,,,, nvim-cmp,https://github.com/hrsh7th/nvim-cmp,,,,, 
penlight,https://github.com/lunarmodules/Penlight.git,,,,,alerque plenary.nvim,https://github.com/nvim-lua/plenary.nvim.git,,,,5.1, rapidjson,https://github.com/xpol/lua-rapidjson.git,,,,, rest.nvim,,,,,5.1,teto -readline,,,,,, +rustaceanvim,,,,,,mrcjkb say,https://github.com/Olivine-Labs/say.git,,,,, serpent,,,,,,lockejan sqlite,,,,,, std._debug,https://github.com/lua-stdlib/_debug.git,,,,, -std.normalize,https://github.com/lua-stdlib/normalize.git,,,,, +std.normalize,,,,,, stdlib,,,,41.2.2,,vyp teal-language-server,,,http://luarocks.org/dev,,, telescope.nvim,,,,,5.1, diff --git a/third_party/nixpkgs/maintainers/scripts/pluginupdate.py b/third_party/nixpkgs/maintainers/scripts/pluginupdate.py index 5ceaab8db9..cc0f4ef742 100644 --- a/third_party/nixpkgs/maintainers/scripts/pluginupdate.py +++ b/third_party/nixpkgs/maintainers/scripts/pluginupdate.py @@ -26,7 +26,7 @@ import urllib.parse import urllib.request import xml.etree.ElementTree as ET from dataclasses import asdict, dataclass -from datetime import datetime +from datetime import UTC, datetime from functools import wraps from multiprocessing.dummy import Pool from pathlib import Path @@ -468,6 +468,7 @@ class Editor: "--input-names", "-i", dest="input_file", + type=Path, default=self.default_in, help="A list of plugins in the form owner/repo", ) @@ -476,6 +477,7 @@ class Editor: "-o", dest="outfile", default=self.default_out, + type=Path, help="Filename to save generated nix code", ) common.add_argument( @@ -786,8 +788,16 @@ def update_plugins(editor: Editor, args): autocommit = not args.no_commit if autocommit: - editor.nixpkgs_repo = git.Repo(editor.root, search_parent_directories=True) - commit(editor.nixpkgs_repo, f"{editor.attr_path}: update", [args.outfile]) + try: + repo = git.Repo(os.getcwd()) + updated = datetime.now(tz=UTC).strftime('%Y-%m-%d') + print(args.outfile) + commit(repo, + f"{editor.attr_path}: update on {updated}", [args.outfile] + ) + except git.InvalidGitRepositoryError as e: + print(f"Not in a git repository: {e}", file=sys.stderr) + sys.exit(1) if redirects: update() diff --git a/third_party/nixpkgs/maintainers/scripts/update-luarocks-shell.nix b/third_party/nixpkgs/maintainers/scripts/update-luarocks-shell.nix deleted file mode 100644 index 346b0319b0..0000000000 --- a/third_party/nixpkgs/maintainers/scripts/update-luarocks-shell.nix +++ /dev/null @@ -1,13 +0,0 @@ -{ nixpkgs ? import ../.. 
{ } -}: -with nixpkgs; -let - pyEnv = python3.withPackages(ps: [ ps.gitpython ]); -in -mkShell { - packages = [ - pyEnv - luarocks-nix - nix-prefetch-scripts - ]; -} diff --git a/third_party/nixpkgs/maintainers/team-list.nix b/third_party/nixpkgs/maintainers/team-list.nix index b8811da002..3ad43f2a34 100644 --- a/third_party/nixpkgs/maintainers/team-list.nix +++ b/third_party/nixpkgs/maintainers/team-list.nix @@ -324,12 +324,16 @@ with lib.maintainers; { geospatial = { members = [ imincik - sikmir nh2 + sikmir willcohen ]; + githubTeams = [ + "geospatial" + ]; scope = "Maintain geospatial packages."; shortName = "Geospatial"; + enableFeatureFreezePing = true; }; gitlab = { @@ -350,6 +354,7 @@ with lib.maintainers; { mic92 zowoq qbit + mfrw ]; githubTeams = [ "golang" @@ -406,7 +411,6 @@ with lib.maintainers; { home-assistant = { members = [ fab - globin hexa mic92 ]; @@ -430,6 +434,7 @@ with lib.maintainers; { members = [ cleeyv ryantm + lassulus ]; scope = "Maintain Jitsi."; shortName = "Jitsi"; @@ -439,6 +444,7 @@ with lib.maintainers; { members = [ GaetanLepage natsukium + thomasjm ]; scope = "Maintain Jupyter and related packages."; shortName = "Jupyter"; @@ -611,6 +617,7 @@ with lib.maintainers; { minimal-bootstrap = { members = [ + alejandrosame artturin emilytrau ericson2314 @@ -740,7 +747,6 @@ with lib.maintainers; { aanderse drupol etu - globin ma27 talyz ]; @@ -931,7 +937,6 @@ with lib.maintainers; { wdz = { members = [ n0emis - netali vidister johannwagner yuka diff --git a/third_party/nixpkgs/nixos/README.md b/third_party/nixpkgs/nixos/README.md index b3cd9d234f..07e82bf0ad 100644 --- a/third_party/nixpkgs/nixos/README.md +++ b/third_party/nixpkgs/nixos/README.md @@ -8,6 +8,27 @@ https://nixos.org/nixos and in the manual in doc/manual. You can add new module to your NixOS configuration file (usually it’s `/etc/nixos/configuration.nix`). And do `sudo nixos-rebuild test -I nixpkgs= --fast`. +## Commit conventions + +- Make sure you read about the [commit conventions](../CONTRIBUTING.md#commit-conventions) common to Nixpkgs as a whole. + +- Format the commit messages in the following way: + + ``` + nixos/(module): (init module | add setting | refactor | etc) + + (Motivation for change. Link to release notes. Additional information.) + ``` + + Examples: + + * nixos/hydra: add bazBaz option + + Dual baz behavior is needed to do foo. + * nixos/nginx: refactor config generation + + The old config generation system used impure shell scripts and could break in specific circumstances (see #1234). + ## Reviewing contributions When changing the bootloader installation process, extra care must be taken. Grub installations cannot be rolled back, hence changes may break people’s installations forever. For any non-trivial change to the bootloader please file a PR asking for review, especially from \@edolstra. @@ -21,12 +42,14 @@ Reviewing process: - Ensure that the module maintainers are notified. - [CODEOWNERS](https://help.github.com/articles/about-codeowners/) will make GitHub notify users based on the submitted changes, but it can happen that it misses some of the package maintainers. - Ensure that the module tests, if any, are succeeding. + - You may invoke OfBorg with `@ofborg test ` to build `nixosTests.` - Ensure that the introduced options are correct. - Type should be appropriate (string related types differs in their merging capabilities, `loaOf` and `string` types are deprecated). - Description, default and example should be provided. 
- Ensure that option changes are backward compatible. - - `mkRenamedOptionModuleWith` provides a way to make option changes backward compatible. -- Ensure that removed options are declared with `mkRemovedOptionModule` + - `mkRenamedOptionModuleWith` provides a way to make renamed option backward compatible. + - Use `lib.versionAtLeast config.system.stateVersion "23.11"` on backward incompatible changes which may corrupt, change or update the state stored on existing setups. +- Ensure that removed options are declared with `mkRemovedOptionModule`. - Ensure that changes that are not backward compatible are mentioned in release notes. - Ensure that documentations affected by the change is updated. @@ -55,6 +78,7 @@ New modules submissions introduce a new module to NixOS. Reviewing process: +- Ensure that all file paths [fit the guidelines](../CONTRIBUTING.md#file-naming-and-organisation). - Ensure that the module tests, if any, are succeeding. - Ensure that the introduced options are correct. - Type should be appropriate (string related types differs in their merging capabilities, `loaOf` and `string` types are deprecated). @@ -76,9 +100,9 @@ Sample template for a new module review is provided below. - [ ] options have default - [ ] options have example - [ ] options have descriptions -- [ ] No unneeded package is added to environment.systemPackages -- [ ] meta.maintainers is set -- [ ] module documentation is declared in meta.doc +- [ ] No unneeded package is added to `environment.systemPackages` +- [ ] `meta.maintainers` is set +- [ ] module documentation is declared in `meta.doc` ##### Possible improvements diff --git a/third_party/nixpkgs/nixos/doc/manual/configuration/declarative-packages.section.md b/third_party/nixpkgs/nixos/doc/manual/configuration/declarative-packages.section.md index 02eaa56192..480e250da8 100644 --- a/third_party/nixpkgs/nixos/doc/manual/configuration/declarative-packages.section.md +++ b/third_party/nixpkgs/nixos/doc/manual/configuration/declarative-packages.section.md @@ -37,7 +37,7 @@ Note: the `nixos` prefix tells us that we want to get the package from the `nixos` channel and works only in CLI tools. In declarative configuration use `pkgs` prefix (variable). -To "uninstall" a package, simply remove it from +To "uninstall" a package, remove it from [](#opt-environment.systemPackages) and run `nixos-rebuild switch`. ```{=include=} sections diff --git a/third_party/nixpkgs/nixos/doc/manual/configuration/modularity.section.md b/third_party/nixpkgs/nixos/doc/manual/configuration/modularity.section.md index 2eff153879..f4a566d669 100644 --- a/third_party/nixpkgs/nixos/doc/manual/configuration/modularity.section.md +++ b/third_party/nixpkgs/nixos/doc/manual/configuration/modularity.section.md @@ -36,8 +36,8 @@ Here, we include two modules from the same directory, `vpn.nix` and Note that both `configuration.nix` and `kde.nix` define the option [](#opt-environment.systemPackages). When multiple modules define an option, NixOS will try to *merge* the definitions. In the case of -[](#opt-environment.systemPackages), that's easy: the lists of -packages can simply be concatenated. The value in `configuration.nix` is +[](#opt-environment.systemPackages) the lists of packages will be +concatenated. The value in `configuration.nix` is merged last, so for list-type options, it will appear at the end of the merged list. 
If you want it to appear first, you can use `mkBefore`: diff --git a/third_party/nixpkgs/nixos/doc/manual/configuration/subversion.chapter.md b/third_party/nixpkgs/nixos/doc/manual/configuration/subversion.chapter.md index 84f9c27033..ff870f5c40 100644 --- a/third_party/nixpkgs/nixos/doc/manual/configuration/subversion.chapter.md +++ b/third_party/nixpkgs/nixos/doc/manual/configuration/subversion.chapter.md @@ -2,7 +2,7 @@ [Subversion](https://subversion.apache.org/) is a centralized version-control system. It can use a [variety of -protocols](http://svnbook.red-bean.com/en/1.7/svn-book.html#svn.serverconfig.choosing) +protocols](https://svnbook.red-bean.com/en/1.7/svn-book.html#svn.serverconfig.choosing) for communication between client and server. ## Subversion inside Apache HTTP {#module-services-subversion-apache-httpd} @@ -14,7 +14,7 @@ for communication. For more information on the general setup, please refer to the [the appropriate section of the Subversion -book](http://svnbook.red-bean.com/en/1.7/svn-book.html#svn.serverconfig.httpd). +book](https://svnbook.red-bean.com/en/1.7/svn-book.html#svn.serverconfig.httpd). To configure, include in `/etc/nixos/configuration.nix` code to activate Apache HTTP, setting [](#opt-services.httpd.adminAddr) diff --git a/third_party/nixpkgs/nixos/doc/manual/configuration/x-windows.chapter.md b/third_party/nixpkgs/nixos/doc/manual/configuration/x-windows.chapter.md index 5a870a46cb..0451e4d252 100644 --- a/third_party/nixpkgs/nixos/doc/manual/configuration/x-windows.chapter.md +++ b/third_party/nixpkgs/nixos/doc/manual/configuration/x-windows.chapter.md @@ -208,7 +208,7 @@ qt.style = "gtk2"; It is possible to install custom [ XKB ](https://en.wikipedia.org/wiki/X_keyboard_extension) keyboard layouts -using the option `services.xserver.extraLayouts`. +using the option `services.xserver.xkb.extraLayouts`. As a first example, we are going to create a layout based on the basic US layout, with an additional layer to type some greek symbols by @@ -235,7 +235,7 @@ xkb_symbols "us-greek" A minimal layout specification must include the following: ```nix -services.xserver.extraLayouts.us-greek = { +services.xserver.xkb.extraLayouts.us-greek = { description = "US layout with alt-gr greek"; languages = [ "eng" ]; symbolsFile = /yourpath/symbols/us-greek; @@ -298,7 +298,7 @@ xkb_symbols "media" As before, to install the layout do ```nix -services.xserver.extraLayouts.media = { +services.xserver.xkb.extraLayouts.media = { description = "Multimedia keys remapping"; languages = [ "eng" ]; symbolsFile = /path/to/media-key; diff --git a/third_party/nixpkgs/nixos/doc/manual/configuration/xfce.chapter.md b/third_party/nixpkgs/nixos/doc/manual/configuration/xfce.chapter.md index a80be2b523..9ec4a51d6e 100644 --- a/third_party/nixpkgs/nixos/doc/manual/configuration/xfce.chapter.md +++ b/third_party/nixpkgs/nixos/doc/manual/configuration/xfce.chapter.md @@ -28,7 +28,7 @@ manually (system wide), put them into your Thunar (the Xfce file manager) is automatically enabled when Xfce is enabled. To enable Thunar without enabling Xfce, use the configuration -option [](#opt-programs.thunar.enable) instead of simply adding +option [](#opt-programs.thunar.enable) instead of adding `pkgs.xfce.thunar` to [](#opt-environment.systemPackages). 
If you'd like to add extra plugins to Thunar, add them to diff --git a/third_party/nixpkgs/nixos/doc/manual/development/activation-script.section.md b/third_party/nixpkgs/nixos/doc/manual/development/activation-script.section.md index c339258c6d..cc317a6a01 100644 --- a/third_party/nixpkgs/nixos/doc/manual/development/activation-script.section.md +++ b/third_party/nixpkgs/nixos/doc/manual/development/activation-script.section.md @@ -69,4 +69,4 @@ do: `/etc/group` and `/etc/shadow`. This also creates home directories - `usrbinenv` creates `/usr/bin/env` - `var` creates some directories in `/var` that are not service-specific -- `wrappers` creates setuid wrappers like `ping` and `sudo` +- `wrappers` creates setuid wrappers like `sudo` diff --git a/third_party/nixpkgs/nixos/doc/manual/development/non-switchable-systems.section.md b/third_party/nixpkgs/nixos/doc/manual/development/non-switchable-systems.section.md new file mode 100644 index 0000000000..87bb46c789 --- /dev/null +++ b/third_party/nixpkgs/nixos/doc/manual/development/non-switchable-systems.section.md @@ -0,0 +1,21 @@ +# Non Switchable Systems {#sec-non-switchable-system} + +In certain systems, most notably image based appliances, updates are handled +outside the system. This means that you do not need to rebuild your +configuration on the system itself anymore. + +If you want to build such a system, you can use the `image-based-appliance` +profile: + +```nix +{ modulesPath, ... }: { + imports = [ "${modulesPath}/profiles/image-based-appliance.nix" ]; +} +``` + +The most notable deviation of this profile from a standard NixOS configuration is that after building it, you cannot switch *to* the configuration anymore. +The profile sets `config.system.switch.enable = false;`, which excludes +`switch-to-configuration`, the central script called by `nixos-rebuild`, from +your system. Removing this script makes the image lighter and slightly more +secure. diff --git a/third_party/nixpkgs/nixos/doc/manual/development/running-nixos-tests-interactively.section.md b/third_party/nixpkgs/nixos/doc/manual/development/running-nixos-tests-interactively.section.md index 54002941d6..4b8385d7e0 100644 --- a/third_party/nixpkgs/nixos/doc/manual/development/running-nixos-tests-interactively.section.md +++ b/third_party/nixpkgs/nixos/doc/manual/development/running-nixos-tests-interactively.section.md @@ -57,6 +57,27 @@ using: Once the connection is established, you can enter commands in the socat terminal where socat is running. +## Port forwarding to NixOS test VMs {#sec-nixos-test-port-forwarding} + +If your test has only a single VM, you may use e.g. + +```ShellSession +$ QEMU_NET_OPTS="hostfwd=tcp:127.0.0.1:2222-:22" ./result/bin/nixos-test-driver +``` + +to port-forward a port in the VM (here `22`) to the host machine (here port `2222`). + +This naturally does not work when multiple machines are involved, +since a single port on the host cannot forward to multiple VMs. + +If the test defines multiple machines, you may opt to _temporarily_ set +`virtualisation.forwardPorts` in the test definition for debugging. + +Such port forwardings connect via the VM's virtual network interface. +Thus they cannot connect to ports that are only bound to the VM's +loopback interface (`127.0.0.1`), and the VM's NixOS firewall +must be configured to allow these connections.
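For the multi-machine case, a minimal sketch of such a temporary forwarding follows (not part of the patch above; the node name `machine` and the port numbers are only illustrative, and it assumes the usual `virtualisation.forwardPorts` submodule attributes `from`, `host.port` and `guest.port`):

```nix
{
  # Inside a NixOS test definition: forward host port 2222 to guest port 22
  # for the node called "machine" while debugging, then remove it again.
  nodes.machine = { ... }: {
    virtualisation.forwardPorts = [
      { from = "host"; host.port = 2222; guest.port = 22; }
    ];
  };
}
```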
+ ## Reuse VM state {#sec-nixos-test-reuse-vm-state} You can re-use the VM states coming from a previous run by setting the diff --git a/third_party/nixpkgs/nixos/doc/manual/development/settings-options.section.md b/third_party/nixpkgs/nixos/doc/manual/development/settings-options.section.md index 5060dd98f5..3a4800742b 100644 --- a/third_party/nixpkgs/nixos/doc/manual/development/settings-options.section.md +++ b/third_party/nixpkgs/nixos/doc/manual/development/settings-options.section.md @@ -58,7 +58,7 @@ have a predefined type and string generator already declared under and returning a set with YAML-specific attributes `type` and `generate` as specified [below](#pkgs-formats-result). -`pkgs.formats.ini` { *`listsAsDuplicateKeys`* ? false, *`listToValue`* ? null, \... } +`pkgs.formats.ini` { *`listsAsDuplicateKeys`* ? false, *`listToValue`* ? null, \.\.\. } : A function taking an attribute set with values diff --git a/third_party/nixpkgs/nixos/doc/manual/development/what-happens-during-a-system-switch.chapter.md b/third_party/nixpkgs/nixos/doc/manual/development/what-happens-during-a-system-switch.chapter.md index 5d6d67f1aa..ccadb819e0 100644 --- a/third_party/nixpkgs/nixos/doc/manual/development/what-happens-during-a-system-switch.chapter.md +++ b/third_party/nixpkgs/nixos/doc/manual/development/what-happens-during-a-system-switch.chapter.md @@ -44,6 +44,10 @@ of actions is always the same: - Inspect what changed during these actions and print units that failed and that were newly started +By default, some units are filtered from the outputs to make it less spammy. +This can be disabled for development or testing by setting the environment variable +`STC_DISPLAY_ALL_UNITS=1` + Most of these actions are either self-explaining but some of them have to do with our units or the activation script. For this reason, these topics are explained in the next sections. @@ -51,4 +55,5 @@ explained in the next sections. ```{=include=} sections unit-handling.section.md activation-script.section.md +non-switchable-systems.section.md ``` diff --git a/third_party/nixpkgs/nixos/doc/manual/development/writing-documentation.chapter.md b/third_party/nixpkgs/nixos/doc/manual/development/writing-documentation.chapter.md index 8cb6823d09..3d9bd318cf 100644 --- a/third_party/nixpkgs/nixos/doc/manual/development/writing-documentation.chapter.md +++ b/third_party/nixpkgs/nixos/doc/manual/development/writing-documentation.chapter.md @@ -33,13 +33,13 @@ symlink at `./result/share/doc/nixos/index.html`. ## Editing DocBook XML {#sec-writing-docs-editing-docbook-xml} For general information on how to write in DocBook, see [DocBook 5: The -Definitive Guide](http://www.docbook.org/tdg5/en/html/docbook.html). +Definitive Guide](https://tdg.docbook.org/tdg/5.1/). Emacs nXML Mode is very helpful for editing DocBook XML because it validates the document as you write, and precisely locates errors. To use it, see [](#sec-emacs-docbook-xml). -[Pandoc](http://pandoc.org) can generate DocBook XML from a multitude of +[Pandoc](https://pandoc.org/) can generate DocBook XML from a multitude of formats, which makes a good starting point. Here is an example of Pandoc invocation to convert GitHub-Flavoured MarkDown to DocBook 5 XML: @@ -50,7 +50,7 @@ pandoc -f markdown_github -t docbook5 docs.md -o my-section.md Pandoc can also quickly convert a single `section.xml` to HTML, which is helpful when drafting. -Sometimes writing valid DocBook is simply too difficult. In this case, +Sometimes writing valid DocBook is too difficult. 
In this case, submit your documentation updates in a [GitHub Issue](https://github.com/NixOS/nixpkgs/issues/new) and someone will handle the conversion to XML for you. @@ -62,9 +62,9 @@ topic from scratch. Keep the following guidelines in mind when you create and add a topic: -- The NixOS [`book`](http://www.docbook.org/tdg5/en/html/book.html) +- The NixOS [`book`](https://tdg.docbook.org/tdg/5.0/book.html) element is in `nixos/doc/manual/manual.xml`. It includes several - [`parts`](http://www.docbook.org/tdg5/en/html/book.html) which are in + [`parts`](https://tdg.docbook.org/tdg/5.0/book.html) which are in subdirectories. - Store the topic file in the same directory as the `part` to which it diff --git a/third_party/nixpkgs/nixos/modules/image/repart.md b/third_party/nixpkgs/nixos/doc/manual/installation/building-images-via-systemd-repart.chapter.md similarity index 100% rename from third_party/nixpkgs/nixos/modules/image/repart.md rename to third_party/nixpkgs/nixos/doc/manual/installation/building-images-via-systemd-repart.chapter.md diff --git a/third_party/nixpkgs/nixos/doc/manual/installation/changing-config.chapter.md b/third_party/nixpkgs/nixos/doc/manual/installation/changing-config.chapter.md index 11b49ccb1f..12abf90b71 100644 --- a/third_party/nixpkgs/nixos/doc/manual/installation/changing-config.chapter.md +++ b/third_party/nixpkgs/nixos/doc/manual/installation/changing-config.chapter.md @@ -89,7 +89,7 @@ guest. For instance, the following will forward host port 2222 to guest port 22 (SSH): ```ShellSession -$ QEMU_NET_OPTS="hostfwd=tcp::2222-:22" ./result/bin/run-*-vm +$ QEMU_NET_OPTS="hostfwd=tcp:127.0.0.1:2222-:22" ./result/bin/run-*-vm ``` allowing you to log in via SSH (assuming you have set the appropriate @@ -98,3 +98,8 @@ passwords or SSH authorized keys): ```ShellSession $ ssh -p 2222 localhost ``` + +Such port forwardings connect via the VM's virtual network interface. +Thus they cannot connect to ports that are only bound to the VM's +loopback interface (`127.0.0.1`), and the VM's NixOS firewall +must be configured to allow these connections. diff --git a/third_party/nixpkgs/nixos/doc/manual/installation/installation.md b/third_party/nixpkgs/nixos/doc/manual/installation/installation.md index 1405942566..f3b1773d86 100644 --- a/third_party/nixpkgs/nixos/doc/manual/installation/installation.md +++ b/third_party/nixpkgs/nixos/doc/manual/installation/installation.md @@ -8,4 +8,5 @@ installing.chapter.md changing-config.chapter.md upgrading.chapter.md building-nixos.chapter.md +building-images-via-systemd-repart.chapter.md ``` diff --git a/third_party/nixpkgs/nixos/doc/manual/installation/installing-pxe.section.md b/third_party/nixpkgs/nixos/doc/manual/installation/installing-pxe.section.md index 4fbd6525f8..c1cad99d39 100644 --- a/third_party/nixpkgs/nixos/doc/manual/installation/installing-pxe.section.md +++ b/third_party/nixpkgs/nixos/doc/manual/installation/installing-pxe.section.md @@ -4,7 +4,7 @@ Advanced users may wish to install NixOS using an existing PXE or iPXE setup. These instructions assume that you have an existing PXE or iPXE -infrastructure and simply want to add the NixOS installer as another +infrastructure and want to add the NixOS installer as another option. 
To build the necessary files from your current version of nixpkgs, you can run: diff --git a/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-1509.section.md b/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-1509.section.md index 1422ae4c29..f47d130081 100644 --- a/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-1509.section.md +++ b/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-1509.section.md @@ -2,7 +2,7 @@ In addition to numerous new and upgraded packages, this release has the following highlights: -- The [Haskell](http://haskell.org/) packages infrastructure has been re-designed from the ground up ("Haskell NG"). NixOS now distributes the latest version of every single package registered on [Hackage](http://hackage.haskell.org/) \-- well in excess of 8,000 Haskell packages. Detailed instructions on how to use that infrastructure can be found in the [User's Guide to the Haskell Infrastructure](https://nixos.org/nixpkgs/manual/#users-guide-to-the-haskell-infrastructure). Users migrating from an earlier release may find helpful information below, in the list of backwards-incompatible changes. Furthermore, we distribute 51(!) additional Haskell package sets that provide every single [LTS Haskell](http://www.stackage.org/) release since version 0.0 as well as the most recent [Stackage Nightly](http://www.stackage.org/) snapshot. The announcement ["Full Stackage Support in Nixpkgs"](https://nixos.org/nix-dev/2015-September/018138.html) gives additional details. +- The [Haskell](http://haskell.org/) packages infrastructure has been re-designed from the ground up ("Haskell NG"). NixOS now distributes the latest version of every single package registered on [Hackage](http://hackage.haskell.org/) -- well in excess of 8,000 Haskell packages. Detailed instructions on how to use that infrastructure can be found in the [User's Guide to the Haskell Infrastructure](https://nixos.org/nixpkgs/manual/#users-guide-to-the-haskell-infrastructure). Users migrating from an earlier release may find helpful information below, in the list of backwards-incompatible changes. Furthermore, we distribute 51(!) additional Haskell package sets that provide every single [LTS Haskell](http://www.stackage.org/) release since version 0.0 as well as the most recent [Stackage Nightly](http://www.stackage.org/) snapshot. The announcement ["Full Stackage Support in Nixpkgs"](https://nixos.org/nix-dev/2015-September/018138.html) gives additional details. - Nix has been updated to version 1.10, which among other improvements enables cryptographic signatures on binary caches for improved security. @@ -178,7 +178,7 @@ The new option `system.stateVersion` ensures that certain configuration changes - Nix now requires binary caches to be cryptographically signed. If you have unsigned binary caches that you want to continue to use, you should set `nix.requireSignedBinaryCaches = false`. -- Steam now doesn't need root rights to work. Instead of using `*-steam-chrootenv`, you should now just run `steam`. `steamChrootEnv` package was renamed to `steam`, and old `steam` package \-- to `steamOriginal`. +- Steam now doesn't need root rights to work. Instead of using `*-steam-chrootenv`, you should now just run `steam`. `steamChrootEnv` package was renamed to `steam`, and old `steam` package -- to `steamOriginal`. - CMPlayer has been renamed to bomi upstream. 
Package `cmplayer` was accordingly renamed to `bomi` diff --git a/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-1609.section.md b/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-1609.section.md index ad3478d0ca..0cbabf58ca 100644 --- a/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-1609.section.md +++ b/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-1609.section.md @@ -46,7 +46,7 @@ When upgrading from a previous release, please be aware of the following incompa Other notable improvements: -- Revamped grsecurity/PaX support. There is now only a single general-purpose distribution kernel and the configuration interface has been streamlined. Desktop users should be able to simply set +- Revamped grsecurity/PaX support. There is now only a single general-purpose distribution kernel and the configuration interface has been streamlined. Desktop users should be able to set ```nix { diff --git a/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-1909.section.md b/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-1909.section.md index 22cef05d4f..2bd04f8dd4 100644 --- a/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-1909.section.md +++ b/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-1909.section.md @@ -198,7 +198,7 @@ When upgrading from a previous release, please be aware of the following incompa For nginx, the dependencies are still automatically managed when `services.nginx.virtualhosts..enableACME` is enabled just like before. What changed is that nginx now directly depends on the specific certificates that it needs, instead of depending on the catch-all `acme-certificates.target`. This target unit was also removed from the codebase. This will mean nginx will no longer depend on certificates it isn't explicitly managing and fixes a bug with certificate renewal ordering racing with nginx restarting which could lead to nginx getting in a broken state as described at [NixOS/nixpkgs\#60180](https://github.com/NixOS/nixpkgs/issues/60180). -- The old deprecated `emacs` package sets have been dropped. What used to be called `emacsPackagesNg` is now simply called `emacsPackages`. +- The old deprecated `emacs` package sets have been dropped. What used to be called `emacsPackagesNg` is now called `emacsPackages`. - `services.xserver.desktopManager.xterm` is now disabled by default if `stateVersion` is 19.09 or higher. Previously the xterm desktopManager was enabled when xserver was enabled, but it isn't useful for all people so it didn't make sense to have any desktopManager enabled default. diff --git a/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2003.section.md b/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2003.section.md index 76cee8858e..695f8a2c95 100644 --- a/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2003.section.md +++ b/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2003.section.md @@ -482,7 +482,7 @@ When upgrading from a previous release, please be aware of the following incompa - If you use `postgresql` on a different server, you don't need to change anything as well since this module was never designed to configure remote databases. - - If you use `postgresql` and configured your synapse initially on `19.09` or older, you simply need to enable postgresql-support explicitly: + - If you use `postgresql` and configured your synapse initially on `19.09` or older, you need to enable postgresql-support explicitly: ```nix { ... 
}: { diff --git a/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2009.section.md b/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2009.section.md index 6bb75a04b3..eac02a8ff4 100644 --- a/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2009.section.md +++ b/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2009.section.md @@ -422,7 +422,7 @@ When upgrading from a previous release, please be aware of the following incompa - The `systemd-networkd` option `systemd.network.networks._name_.dhcpConfig` has been renamed to [systemd.network.networks._name_.dhcpV4Config](options.html#opt-systemd.network.networks._name_.dhcpV4Config) following upstream systemd's documentation change. See systemd.network 5 for details. -- In the `picom` module, several options that accepted floating point numbers encoded as strings (for example [services.picom.activeOpacity](options.html#opt-services.picom.activeOpacity)) have been changed to the (relatively) new native `float` type. To migrate your configuration simply remove the quotes around the numbers. +- In the `picom` module, several options that accepted floating point numbers encoded as strings (for example [services.picom.activeOpacity](options.html#opt-services.picom.activeOpacity)) have been changed to the (relatively) new native `float` type. To migrate your configuration remove the quotes around the numbers. - When using `buildBazelPackage` from Nixpkgs, `flat` hash mode is now used for dependencies instead of `recursive`. This is to better allow using hashed mirrors where needed. As a result, these hashes will have changed. diff --git a/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2211.section.md b/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2211.section.md index 37079c2096..1c73d0c979 100644 --- a/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2211.section.md +++ b/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2211.section.md @@ -14,7 +14,7 @@ In addition to numerous new and upgraded packages, this release includes the fol - Support for algorithms that `libxcrypt` [does not consider strong](https://github.com/besser82/libxcrypt/blob/v4.4.28/lib/hashes.conf#L41) are **deprecated** as of this release, and will be removed in NixOS 23.05. - This includes system login passwords. Given this, we **strongly encourage** all users to update their system passwords, as you will be unable to login if password hashes are not migrated by the time their support is removed. - When using `users.users..hashedPassword` to configure user passwords, run `mkpasswd`, and use the yescrypt hash that is provided as the new value. - - On the other hand, for interactively configured user passwords, simply re-set the passwords for all users with `passwd`. + - On the other hand, for interactively configured user passwords, re-set the passwords for all users with `passwd`. - This release introduces warnings for the use of deprecated hash algorithms for both methods of configuring passwords. To make sure you migrated correctly, run `nixos-rebuild switch`. - The NixOS documentation is now generated from markdown. While docbook is still part of the documentation build process, it's a big step towards the full migration. 
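As a hedged illustration of the rl-22.11 password guidance above: the option path `users.users.<name>.hashedPassword` is the one named in the notes, while the user name and hash below are placeholders for the yescrypt value that `mkpasswd` prints.

```nix
{
  # Placeholder user and hash; paste the real "$y$..." output of `mkpasswd` here.
  users.users.alice.hashedPassword = "$y$j9T$<salt>$<hash>";
}
```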
diff --git a/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2305.section.md b/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2305.section.md index 3d27d3fef8..21c798b3b4 100644 --- a/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2305.section.md +++ b/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2305.section.md @@ -611,7 +611,7 @@ If you are: - adding new rules with `*.rules` - running custom PulseAudio commands with `pulse.cmd` -Simply move the definitions into the drop-in. +Move the definitions into the drop-in. Note that the use of `context.exec` is not recommended and other methods of running your thing are likely a better option. @@ -660,5 +660,5 @@ If reloading the module is not an option, proceed to [Nuclear option](#sec-relea #### Nuclear option {#sec-release-23.05-migration-pipewire-nuclear} If all else fails, you can still manually copy the contents of the default configuration file -from `${pkgs.pipewire.lib}/share/pipewire` to `/etc/pipewire` and edit it to fully override the default. +from `${pkgs.pipewire}/share/pipewire` to `/etc/pipewire` and edit it to fully override the default. However, this should be done only as a last resort. Please talk to the Pipewire maintainers if you ever need to do this. diff --git a/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2311.section.md b/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2311.section.md index d74dc5b93c..5cb5fec230 100644 --- a/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2311.section.md +++ b/third_party/nixpkgs/nixos/doc/manual/release-notes/rl-2311.section.md @@ -4,6 +4,8 @@ - FoundationDB now defaults to major version 7. +- PostgreSQL now defaults to major version 15. + - Support for WiFi6 (IEEE 802.11ax) and WPA3-SAE-PK was enabled in the `hostapd` package, along with a significant rework of the hostapd module. - LXD now supports virtual machine instances to complement the existing container support @@ -24,16 +26,22 @@ - `root` and `wheel` are not given the ability to set (or preserve) arbitrary environment variables. +- [glibc](https://www.gnu.org/software/libc/) has been updated from version 2.37 to 2.38, see [the release notes](https://sourceware.org/glibc/wiki/Release/2.38) for what was changed. + [`sudo-rs`]: https://github.com/memorysafety/sudo-rs/ - All [ROCm](https://rocm.docs.amd.com/en/latest/) packages have been updated to 5.7.0. - [ROCm](https://rocm.docs.amd.com/en/latest/) package attribute sets are versioned: `rocmPackages` -> `rocmPackages_5`. +- `yarn-berry` has been updated to 4.0.1. This means that NodeJS versions less than `18.12` are no longer supported by it. More details at the [upstream changelog](https://github.com/yarnpkg/berry/blob/master/CHANGELOG.md). + - If the user has a custom shell enabled via `users.users.${USERNAME}.shell = ${CUSTOMSHELL}`, the assertion will require them to also set `programs.${CUSTOMSHELL}.enable = true`. This is generally safe behavior, but for anyone needing to opt out from the check `users.users.${USERNAME}.ignoreShellProgramCheck = true` will do the job. +- Cassandra now defaults to 4.x, updated from 3.11.x. + ## New Services {#sec-release-23.11-new-services} - [MCHPRS](https://github.com/MCHPR/MCHPRS), a multithreaded Minecraft server built for redstone. Available as [services.mchprs](#opt-services.mchprs.enable). @@ -68,6 +76,8 @@ - [LibreNMS](https://www.librenms.org), a auto-discovering PHP/MySQL/SNMP based network monitoring. Available as [services.librenms](#opt-services.librenms.enable). 
+- [Livebook](https://livebook.dev/), an interactive notebook with support for Elixir, graphs, machine learning, and more. + - [sitespeed-io](https://sitespeed.io), a tool that can generate metrics (timings, diagnostics) for websites. Available as [services.sitespeed-io](#opt-services.sitespeed-io.enable). - [stalwart-mail](https://stalw.art), an all-in-one email server (SMTP, IMAP, JMAP). Available as [services.stalwart-mail](#opt-services.stalwart-mail.enable). @@ -76,10 +86,14 @@ - [Jool](https://nicmx.github.io/Jool/en/index.html), a kernelspace NAT64 and SIIT implementation, providing translation between IPv4 and IPv6. Available as [networking.jool.enable](#opt-networking.jool.enable). +- [Home Assistant Satellite], a streaming audio satellite for Home Assistant voice pipelines, where you can reuse existing mic/speaker hardware. Available as [services.homeassistant-satellite](#opt-services.homeassistant-satellite.enable). + - [Apache Guacamole](https://guacamole.apache.org/), a cross-platform, clientless remote desktop gateway. Available as [services.guacamole-server](#opt-services.guacamole-server.enable) and [services.guacamole-client](#opt-services.guacamole-client.enable) services. - [pgBouncer](https://www.pgbouncer.org), a PostgreSQL connection pooler. Available as [services.pgbouncer](#opt-services.pgbouncer.enable). +- [Goss](https://goss.rocks/), a YAML based serverspec alternative tool for validating a server's configuration. Available as [services.goss](#opt-services.goss.enable). + - [trust-dns](https://trust-dns.org/), a Rust based DNS server built to be safe and secure from the ground up. Available as [services.trust-dns](#opt-services.trust-dns.enable). - [osquery](https://www.osquery.io/), a SQL powered operating system instrumentation, monitoring, and analytics. @@ -92,25 +106,41 @@ - hardware/infiniband.nix adds infiniband subnet manager support using an [opensm](https://github.com/linux-rdma/opensm) systemd-template service, instantiated on card guids. The module also adds kernel modules and cli tooling to help administrators debug and measure performance. Available as [hardware.infiniband.enable](#opt-hardware.infiniband.enable). +- [zwave-js](https://github.com/zwave-js/zwave-js-server), a small server wrapper around Z-Wave JS to access it via a WebSocket. Available as [services.zwave-js](#opt-services.zwave-js.enable). + - [Honk](https://humungus.tedunangst.com/r/honk), a complete ActivityPub server with minimal setup and support costs. Available as [services.honk](#opt-services.honk.enable). - [ferretdb](https://www.ferretdb.io/), an open-source proxy, converting the MongoDB 6.0+ wire protocol queries to PostgreSQL or SQLite. Available as [services.ferretdb](options.html#opt-services.ferretdb.enable). +- [MicroBin](https://microbin.eu/), a feature rich, performant and secure text and file sharing web application, a "paste bin". Available as [services.microbin](#opt-services.microbin.enable). + - [NNCP](http://www.nncpgo.org/). Added nncp-daemon and nncp-caller services. Configuration is set with [programs.nncp.settings](#opt-programs.nncp.settings) and the daemons are enabled at [services.nncp](#opt-services.nncp.caller.enable). +- [FastNetMon Advanced](https://fastnetmon.com/product-overview/), a commercial high performance DDoS detector / sensor. Available as [services.fastnetmon-advanced](#opt-services.fastnetmon-advanced.enable). + - [tuxedo-rs](https://github.com/AaronErhardt/tuxedo-rs), Rust utilities for interacting with hardware from TUXEDO Computers. 
+- [certspotter](https://github.com/SSLMate/certspotter), a certificate transparency log monitor. Available as [services.certspotter](#opt-services.certspotter.enable). + - [audiobookshelf](https://github.com/advplyr/audiobookshelf/), a self-hosted audiobook and podcast server. Available as [services.audiobookshelf](#opt-services.audiobookshelf.enable). - [ZITADEL](https://zitadel.com), a turnkey identity and access management platform. Available as [services.zitadel](#opt-services.zitadel.enable). +- [exportarr](https://github.com/onedr0p/exportarr), Prometheus Exporters for Bazarr, Lidarr, Prowlarr, Radarr, Readarr, and Sonarr. Available as [services.prometheus.exporters.exportarr-bazarr](#opt-services.prometheus.exporters.exportarr-bazarr.enable)/[services.prometheus.exporters.exportarr-lidarr](#opt-services.prometheus.exporters.exportarr-lidarr.enable)/[services.prometheus.exporters.exportarr-prowlarr](#opt-services.prometheus.exporters.exportarr-prowlarr.enable)/[services.prometheus.exporters.exportarr-radarr](#opt-services.prometheus.exporters.exportarr-radarr.enable)/[services.prometheus.exporters.exportarr-readarr](#opt-services.prometheus.exporters.exportarr-readarr.enable)/[services.prometheus.exporters.exportarr-sonarr](#opt-services.prometheus.exporters.exportarr-sonarr.enable). + - [netclient](https://github.com/gravitl/netclient), an automated WireGuard® Management Client. Available as [services.netclient](#opt-services.netclient.enable). - [trunk-ng](https://github.com/ctron/trunk), A fork of `trunk`: Build, bundle & ship your Rust WASM application to the web - [virt-manager](https://virt-manager.org/), an UI for managing virtual machines in libvirt, is now available as `programs.virt-manager`. +- [Soft Serve](https://github.com/charmbracelet/soft-serve), a tasty, self-hostable Git server for the command line. Available as [services.soft-serve](#opt-services.soft-serve.enable). + +- [Rosenpass](https://rosenpass.eu/), a service for post-quantum-secure VPNs with WireGuard. Available as [services.rosenpass](#opt-services.rosenpass.enable). + +- [c2FmZQ](https://github.com/c2FmZQ/c2FmZQ/), an application that can securely encrypt, store, and share files, including but not limited to pictures and videos. Available as [services.c2fmzq-server](#opt-services.c2fmzq-server.enable). + ## Backward Incompatibilities {#sec-release-23.11-incompatibilities} - `network-online.target` has been fixed to no longer time out for systems with `networking.useDHCP = true` and `networking.useNetworkd = true`. @@ -124,6 +154,8 @@ - The latest version of `clonehero` now stores custom content in `~/.clonehero`. See the [migration instructions](https://clonehero.net/2022/11/29/v23-to-v1-migration-instructions.html). Typically, these content files would exist along side the binary, but the previous build used a wrapper script that would store them in `~/.config/unity3d/srylain Inc_/Clone Hero`. +- `services.mastodon` doesn't support providing a TCP port to its `streaming` component anymore, as upstream implemented parallelization by running multiple instances instead of running multiple processes in one instance. Please create a PR if you are interested in this feature. + - The `services.hostapd` module was rewritten to support `passwordFile` like options, WPA3-SAE, and management of multiple interfaces. This breaks compatibility with older configurations. - `hostapd` is now started with additional systemd sandbox/hardening options for better security. 
- `services.hostapd.interface` was replaced with a per-radio and per-bss configuration scheme using [services.hostapd.radios](#opt-services.hostapd.radios). @@ -142,8 +174,16 @@ - `getent` has been moved from `glibc`'s `bin` output to its own dedicated output, reducing closure size for many dependents. Dependents using the `getent` alias should not be affected; others should move from using `glibc.bin` or `getBin glibc` to `getent` (which also improves compatibility with non-glibc platforms). +- `maintainers/scripts/update-luarocks-packages` is now a proper package + `luarocks-packages-updater` that can be run to maintain out-of-tree luarocks + packages + - The `users.users..passwordFile` has been renamed to `users.users..hashedPasswordFile` to avoid possible confusions. The option is in fact the file-based version of `hashedPassword`, not `password`, and expects a file containing the {manpage}`crypt(3)` hash of the user password. +- `chromiumBeta` and `chromiumDev` have been removed due to the lack of maintenance in nixpkgs. Consider using `chromium` instead. + +- `google-chrome-beta` and `google-chrome-dev` have been removed due to the lack of maintenance in nixpkgs. Consider using `google-chrome` instead. + - The `services.ananicy.extraRules` option now has the type of `listOf attrs` instead of `string`. - `buildVimPluginFrom2Nix` has been renamed to `buildVimPlugin`, which now @@ -151,6 +191,8 @@ - JACK tools (`jack_*` except `jack_control`) have moved from the `jack2` package to `jack-example-tools` +- The `waagent` service does provisioning now + - The `matrix-synapse` package & module have undergone some significant internal changes, for most setups no intervention is needed, though: - The option [`services.matrix-synapse.package`](#opt-services.matrix-synapse.package) is now read-only. For modifying the package, use an overlay which modifies `matrix-synapse-unwrapped` instead. More on that below. - The `enableSystemd` & `enableRedis` arguments have been removed and `matrix-synapse` has been renamed to `matrix-synapse-unwrapped`. Also, several optional dependencies (such as `psycopg2` or `authlib`) have been removed. @@ -222,14 +264,24 @@ - `baloo`, the file indexer/search engine used by KDE now has a patch to prevent files from constantly being reindexed when the device ids of the their underlying storage changes. This happens frequently when using btrfs or LVM. The patch has not yet been accepted upstream but it provides a significantly improved experience. When upgrading, reset baloo to get a clean index: `balooctl disable ; balooctl purge ; balooctl enable`. -- `services.ddclient` has been removed on the request of the upstream maintainer because it is unmaintained and has bugs. Please switch to a different software like `inadyn` or `knsupdate`. - - The `vlock` program from the `kbd` package has been moved into its own package output and should now be referenced explicitly as `kbd.vlock` or replaced with an alternative such as the standalone `vlock` package or `physlock`. - `fileSystems..autoFormat` now uses `systemd-makefs`, which does not accept formatting options. Therefore, `fileSystems..formatOptions` has been removed. - `fileSystems..autoResize` now uses `systemd-growfs` to resize the file system online in stage 2. This means that `f2fs` and `ext2` can no longer be auto resized, while `xfs` and `btrfs` now can be. 
+- `fuse3` has been updated from 3.11.0 to 3.16.2; see [ChangeLog.rst](https://github.com/libfuse/libfuse/blob/fuse-3.16.2/ChangeLog.rst#libfuse-3162-2023-10-10) for an overview of the changes. + + Unsupported mount options are no longer silently accepted [(since 3.15.0)](https://github.com/libfuse/libfuse/blob/fuse-3.16.2/ChangeLog.rst#libfuse-3150-2023-06-09). The [affected mount options](https://github.com/libfuse/libfuse/commit/dba6b3983af34f30de01cf532dff0b66f0ed6045) are: `atime`, `diratime`, `lazytime`, `nolazytime`, `relatime`, `norelatime`, `strictatime`. + + For example, + + ```bash + $ sshfs 127.0.0.1:/home/test/testdir /home/test/sshfs_mnt -o atime` + ``` + + would previously terminate successfully with the mount point established, now it outputs the error message ``fuse: unknown option(s): `-o atime'`` and terminates with exit status 1. + - `nixos-rebuild {switch,boot,test,dry-activate}` now runs the system activation inside `systemd-run`, creating an ephemeral systemd service and protecting the system switch against issues like network disconnections during remote (e.g. SSH) sessions. This has the side effect of running the switch in an isolated environment, that could possible break post-switch scripts that depends on things like environment variables being set. If you want to opt-out from this behavior for now, you may set the `NIXOS_SWITCH_USE_DIRTY_ENV` environment variable before running `nixos-rebuild`. However, keep in mind that this option will be removed in the future. - The `services.vaultwarden.config` option default value was changed to make Vaultwarden only listen on localhost, following the [secure defaults for most NixOS services](https://github.com/NixOS/nixpkgs/issues/100192). @@ -252,6 +304,8 @@ - Garage has been upgraded to 0.9.x. `services.garage.package` now needs to be explicitly set, so version upgrades can be done in a controlled fashion. For this, we expose `garage_x_y` attributes which can be set here. +- `voms` and `xrootd` now moves the `$out/etc` content to the `$etc` output instead of `$out/etc.orig`, when input argument `externalEtc` is not `null`. + - The `woodpecker-*` CI packages have been updated to 1.0.0. This release is wildly incompatible with the 0.15.X versions that were previously packaged. Please read [upstream's documentation](https://woodpecker-ci.org/docs/next/migrations#100) to learn how to update your CI configurations. - The Caddy module gained a new option named `services.caddy.enableReload` which is enabled by default. It allows reloading the service instead of restarting it, if only a config file has changed. This option must be disabled if you have turned off the [Caddy admin API](https://caddyserver.com/docs/caddyfile/options#admin). If you keep this option enabled, you should consider setting [`grace_period`](https://caddyserver.com/docs/caddyfile/options#grace-period) to a non-infinite value to prevent Caddy from delaying the reload indefinitely. @@ -264,13 +318,13 @@ - The default `kops` version is now 1.28.0 and support for 1.25 and older has been dropped. -- `pharo` has been updated to latest stable (PharoVM 10.0.5), which is compatible with the latest stable and oldstable images (Pharo 10 and 11). The VM in question is the 64bit Spur. The 32bit version has been dropped due to lack of maintenance. The Cog VM has been deleted because it is severily outdated. Finally, the `pharo-launcher` package has been deleted because it was not compatible with the newer VM, and due to lack of maintenance. 
+- `pharo` has been updated to latest stable (PharoVM 10.0.8), which is compatible with the latest stable and oldstable images (Pharo 10 and 11). The VM in question is the 64bit Spur. The 32bit version has been dropped due to lack of maintenance. The Cog VM has been deleted because it is severily outdated. Finally, the `pharo-launcher` package has been deleted because it was not compatible with the newer VM, and due to lack of maintenance. - Emacs mainline version 29 was introduced. This new version includes many major additions, most notably `tree-sitter` support (enabled by default) and the pgtk variant (useful for Wayland users), which is available under the attribute `emacs29-pgtk`. - Emacs macport version 29 was introduced. -- The option `services.networking.networkmanager.enableFccUnlock` was removed in favor of `networking.networkmanager.fccUnlockScripts`, which allows specifying unlock scripts explicitly. The previous option simply did enable all unlock scripts bundled with ModemManager, which is risky, and didn't allow using vendor-provided unlock scripts at all. +- The option `services.networking.networkmanager.enableFccUnlock` was removed in favor of `networking.networkmanager.fccUnlockScripts`, which allows specifying unlock scripts explicitly. The previous option enabled all unlock scripts bundled with ModemManager, which is risky, and didn't allow using vendor-provided unlock scripts at all. - The `html-proofer` package has been updated from major version 3 to major version 5, which includes [breaking changes](https://github.com/gjtorikian/html-proofer/blob/v5.0.8/UPGRADING.md). @@ -285,12 +339,14 @@ - Package `pash` was removed due to being archived upstream. Use `powershell` as an alternative. +- The option `services.plausible.releaseCookiePath` has been removed: Plausible does not use any distributed Erlang features, and does not plan to (see [discussion](https://github.com/NixOS/nixpkgs/pull/130297#issuecomment-1805851333)), so NixOS now disables them, and the Erlang cookie becomes unnecessary. You may delete the file that `releaseCookiePath` was set to. + - `security.sudo.extraRules` now includes `root`'s default rule, with ordering priority 400. This is functionally identical for users not specifying rule order, or relying on `mkBefore` and `mkAfter`, but may impact users calling `mkOrder n` with n ≤ 400. -- X keyboard extension (XKB) options have been reorganized into a single attribute set, `services.xserver.xkb`. Specifically, `services.xserver.layout` is now `services.xserver.xkb.layout`, `services.xserver.xkbModel` is now `services.xserver.xkb.model`, `services.xserver.xkbOptions` is now `services.xserver.xkb.options`, `services.xserver.xkbVariant` is now `services.xserver.xkb.variant`, and `services.xserver.xkbDir` is now `services.xserver.xkb.dir`. +- X keyboard extension (XKB) options have been reorganized into a single attribute set, `services.xserver.xkb`. Specifically, `services.xserver.layout` is now `services.xserver.xkb.layout`, `services.xserver.extraLayouts` is now `services.xserver.xkb.extraLayouts`, `services.xserver.xkbModel` is now `services.xserver.xkb.model`, `services.xserver.xkbOptions` is now `services.xserver.xkb.options`, `services.xserver.xkbVariant` is now `services.xserver.xkb.variant`, and `services.xserver.xkbDir` is now `services.xserver.xkb.dir`. 
- `networking.networkmanager.firewallBackend` was removed as NixOS is now using iptables-nftables-compat even when using iptables, therefore Networkmanager now uses the nftables backend unconditionally. @@ -301,18 +357,43 @@ - `rome` was removed because it is no longer maintained and is succeeded by `biome`. +- The `prometheus-knot-exporter` was migrated to a version maintained by CZ.NIC. Various metric names have changed, so checking existing rules is recommended. + - The `services.mtr-exporter.target` has been removed in favor of `services.mtr-exporter.jobs` which allows specifying multiple targets. +- `blender-with-packages` has been deprecated in favor of `blender.withPackages`, for example `blender.withPackages (ps: [ps.bpycv])`. It behaves similarly to `python3.withPackages`. + - Setting `nixpkgs.config` options while providing an external `pkgs` instance will now raise an error instead of silently ignoring the options. NixOS modules no longer set `nixpkgs.config` to accomodate this. This specifically affects `services.locate`, `services.xserver.displayManager.lightdm.greeters.tiny` and `programs.firefox` NixOS modules. No manual intervention should be required in most cases, however, configurations relying on those modules affecting packages outside the system environment should switch to explicit overlays. - `service.borgmatic.settings.location` and `services.borgmatic.configurations..location` are deprecated, please move your options out of sections to the global scope. +- `privacyidea` (and the corresponding `privacyidea-ldap-proxy`) has been removed from nixpkgs because it has severely outdated dependencies that became unmaintainable with nixpkgs' python package-set. + - `dagger` was removed because using a package called `dagger` and packaging it from source violates their trademark policy. - `win-virtio` package was renamed to `virtio-win` to be consistent with the upstream package name. +- `ps3netsrv` has been replaced with the webman-mod fork, the executable has been renamed from `ps3netsrv++` to `ps3netsrv` and cli parameters have changed. + +- `ssm-agent` package and module were renamed to `amazon-ssm-agent` to be consistent with the upstream package name. + +- `services.kea.{ctrl-agent,dhcp-ddns,dhcp,dhcp6}` now use separate runtime directories instead of `/run/kea` to work around the runtime directory being cleared on service start. + +- `mkDerivation` now rejects MD5 hashes. + +- The `junicode` font package has been updated to [major version 2](https://github.com/psb1558/Junicode-font/releases/tag/v2.001), which is now a font family. In particular, plain `Junicode.ttf` no longer exists. In addition, TrueType font files are now placed in `font/truetype` instead of `font/junicode-ttf`; this change does not affect use via `fonts.packages` NixOS option. + +- The `prayer` package as well as `services.prayer` have been removed because it's been unmaintained for several years and the author's website has vanished. + +- The `chrony` NixOS module now tracks the Real-Time Clock drift from the System Clock with `rtcfile` and automatically adjusts it with `rtcautotrim` when it exceeds the maximum error specified in `services.chrony.autotrimThreshold` (default 30 seconds). If you enabled `rtcsync` in `extraConfig`, you should remove RTC related options from `extraConfig`. 
If you do not want chrony configured to keep the RTC in check, you can set `services.chrony.enableRTCTrimming = false;` + ## Other Notable Changes {#sec-release-23.11-notable-changes} +- A new option `system.switch.enable` was added. By default, this option is + enabled. Disabling it makes the system unable to be reconfigured via + `nixos-rebuild`. This is good for image-based appliances where updates are + handled outside the image. + - The Cinnamon module now enables XDG desktop integration by default. If you are experiencing collisions related to xdg-desktop-portal-gtk you can safely remove `xdg.portal.extraPortals = [ pkgs.xdg-desktop-portal-gtk ];` from your NixOS configuration. - GNOME, Pantheon, Cinnamon module no longer forces Qt applications to use Adwaita style since it was buggy and is no longer maintained upstream (specifically, Cinnamon now defaults to the gtk2 style instead, following the default in Linux Mint). If you still want it, you can add the following options to your configuration but it will probably be eventually removed: @@ -337,20 +418,36 @@ - `jq` was updated to 1.7, its [first release in 5 years](https://github.com/jqlang/jq/releases/tag/jq-1.7). +- `zfs` was updated from 2.1.x to 2.2.0, [enabling newer kernel support and adding new features](https://github.com/openzfs/zfs/releases/tag/zfs-2.2.0). + +- Elixir now defaults to version + [v1.15](https://elixir-lang.org/blog/2023/06/19/elixir-v1-15-0-released/). + - A new option was added to the virtualisation module that enables specifying explicitly named network interfaces in QEMU VMs. The existing `virtualisation.vlans` is still supported for cases where the name of the network interface is irrelevant. - DocBook option documentation is no longer supported, all module documentation now uses markdown. +- `services.outline` can now be configured to use local filesystem storage instead of S3 storage using [services.outline.storage.storageType](#opt-services.outline.storage.storageType). + +- `paperwork` was updated to version 2.2. Documents scanned with this version will not be visible to previous versions if you downgrade. See the [upstream announcement](https://forum.openpaper.work/t/paperwork-2-2-testing-phase/316#important-switch-from-jpeg-to-png-for-new-pages-2) for details and workarounds. + - `buildGoModule` `go-modules` attrs have been renamed to `goModules`. - The `fonts.fonts` and `fonts.enableDefaultFonts` options have been renamed to `fonts.packages` and `fonts.enableDefaultPackages` respectively. +- The `services.sslh` module has been updated to follow [RFC 0042](https://github.com/NixOS/rfcs/blob/master/rfcs/0042-config-option.md). As such, several options have been moved to the freeform attribute set [services.sslh.settings](#opt-services.sslh.settings), which allows changing any of the settings in {manpage}`sslh(8)`. + In addition, the newly added option [services.sslh.method](#opt-services.sslh.method) allows switching between the {manpage}`fork(2)`, {manpage}`select(2)` and `libev`-based connection handling methods; see the [sslh docs](https://github.com/yrutschle/sslh/blob/master/doc/INSTALL.md#binaries) for a comparison. + - `pkgs.openvpn3` now optionally supports systemd-resolved. `programs.openvpn3` will automatically enable systemd-resolved support if `config.services.resolved.enable` is enabled. - `services.fail2ban.jails` can now be configured with attribute sets defining settings and filters instead of lines.
The string-based options `daemonConfig` and `extraSettings` have been replaced by `daemonSettings` and `jails.DEFAULT.settings` respectively, which use attribute sets. - The application firewall `opensnitch` now uses the process monitor method eBPF as default as recommended by upstream. The method can be changed with the setting [services.opensnitch.settings.ProcMonitorMethod](#opt-services.opensnitch.settings.ProcMonitorMethod). +- `services.hedgedoc` has been heavily refactored, reducing the amount of declared options in the module. Most of the options should still work without any changes. Some options have been deprecated, as they no longer have any effect. See [#244941](https://github.com/NixOS/nixpkgs/pull/244941) for more details. + +- The [services.woodpecker-server](#opt-services.woodpecker-server.environmentFile) type was changed to a list of paths to be more consistent with the woodpecker-agent module. + - The module [services.ankisyncd](#opt-services.ankisyncd.package) has been switched to [anki-sync-server-rs](https://github.com/ankicommunity/anki-sync-server-rs) from the old python version, which was difficult to update, had not been updated in a while, and did not support recent versions of anki. Unfortunately all servers supporting new clients (newer version of anki-sync-server, anki's built in sync server and this new rust package) do not support the older sync protocol that was used in the old server, so such old clients will also need updating and in particular the anki package in nixpkgs is also being updated in this release. The module update takes care of the new config syntax and the data itself (user login and cards) are compatible, so users of the module will be able to just log in again after updating both client and server without any extra action. @@ -390,6 +487,8 @@ The module update takes care of the new config syntax and the data itself (user - Suricata was upgraded from 6.0 to 7.0 and no longer considers HTTP/2 support as experimental, see [upstream release notes](https://forum.suricata.io/t/suricata-7-0-0-released/3715) for more details. +- Cloud support in the `netdata` package is now disabled by default. To enable it, use the `netdataCloud` package. + - `networking.nftables` now has the option `networking.nftables.table.` to create tables and have them be updated atomically, instead of flushing the ruleset.
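As a hedged illustration of the `services.fail2ban.jails` change described above (attribute sets instead of configuration lines), a jail might now be declared roughly as follows. Only `jails.<name>.settings` itself is taken from the note; the jail name and the individual settings are hypothetical examples and should be checked against the module:

```nix
{
  services.fail2ban = {
    enable = true;
    # Hypothetical sshd jail, expressed as an attribute set rather than the
    # previous multi-line string format; the keys mirror ordinary fail2ban
    # jail settings.
    jails.sshd.settings = {
      enabled = true;
      filter = "sshd";
      findtime = 600;
      maxretry = 5;
      bantime = 3600;
    };
  };
}
```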
+- `services.bitcoind` now properly respects the `enable` option. + +- The Home Assistant module now offers support for installing custom components and lovelace modules. Available at [`services.home-assistant.customComponents`](#opt-services.home-assistant.customComponents) and [`services.home-assistant.customLovelaceModules`](#opt-services.home-assistant.customLovelaceModules). + +- The argument `vendorSha256` of `buildGoModule` is deprecated. Use `vendorHash` instead. ([\#259999](https://github.com/NixOS/nixpkgs/pull/259999)) + ## Nixpkgs internals {#sec-release-23.11-nixpkgs-internals} - The use of `sourceRoot = "source";`, `sourceRoot = "source/subdir";`, and similar lines in package derivations using the default `unpackPhase` is deprecated as it requires `unpackPhase` to always produce a directory named "source". Use `sourceRoot = src.name`, `sourceRoot = "${src.name}/subdir";`, or `setSourceRoot = "sourceRoot=$(echo */subdir)";` or similar instead. @@ -473,3 +584,9 @@ The module update takes care of the new config syntax and the data itself (user - The `electron` packages now places its application files in `$out/libexec/electron` instead of `$out/lib/electron`. Packages using electron-builder will fail to build and need to be adjusted by changing `lib` to `libexec`. - `teleport` has been upgraded from major version 12 to major version 14. Please see upstream [upgrade instructions](https://goteleport.com/docs/management/operations/upgrading/) and release notes for versions [13](https://goteleport.com/docs/changelog/#1300-050823) and [14](https://goteleport.com/docs/changelog/#1400-092023). Note that Teleport does not officially support upgrades across more than one major version at a time. If you're running Teleport server components, it is recommended to first upgrade to an intermediate 13.x version by setting `services.teleport.package = pkgs.teleport_13`. Afterwards, this option can be removed to upgrade to the default version (14). + +- The Linux kernel module `msr` (see [`msr(4)`](https://man7.org/linux/man-pages/man4/msr.4.html)), which provides an interface to read and write the model-specific registers (MSRs) of an x86 CPU, can now be configured via `hardware.cpu.x86.msr`. + +- Docker now defaults to 24, as 20.10 will stop receiving security updates and bug fixes after [December 10, 2023](https://github.com/moby/moby/discussions/45104). + +- There is a new option for writing NixOS tests, `testing.initrdBackdoor`, which enables `backdoor.service` in the initrd. It requires `boot.initrd.systemd.enable` to be enabled. Boot will pause in stage 1 at `initrd.target`, and will listen for commands from the `Machine` Python interface, just like stage 2 normally does. This enables commands to be sent to test and debug stage 1. Use `machine.switch_root()` to leave stage 1 and proceed to stage 2. diff --git a/third_party/nixpkgs/nixos/lib/make-btrfs-fs.nix b/third_party/nixpkgs/nixos/lib/make-btrfs-fs.nix index 225666f9a5..277ff6a4dc 100644 --- a/third_party/nixpkgs/nixos/lib/make-btrfs-fs.nix +++ b/third_party/nixpkgs/nixos/lib/make-btrfs-fs.nix @@ -15,6 +15,8 @@ , volumeLabel , uuid ?
"44444444-4444-4444-8888-888888888888" , btrfs-progs +, libfaketime +, fakeroot }: let @@ -23,7 +25,7 @@ in pkgs.stdenv.mkDerivation { name = "btrfs-fs.img${lib.optionalString compressImage ".zst"}"; - nativeBuildInputs = [ btrfs-progs ] ++ lib.optional compressImage zstd; + nativeBuildInputs = [ btrfs-progs libfaketime fakeroot ] ++ lib.optional compressImage zstd; buildCommand = '' @@ -50,7 +52,7 @@ pkgs.stdenv.mkDerivation { cp ${sdClosureInfo}/registration ./rootImage/nix-path-registration touch $img - mkfs.btrfs -L ${volumeLabel} -U ${uuid} -r ./rootImage --shrink $img + faketime -f "1970-01-01 00:00:01" fakeroot mkfs.btrfs -L ${volumeLabel} -U ${uuid} -r ./rootImage --shrink $img if ! btrfs check $img; then echo "--- 'btrfs check' failed for BTRFS image ---" diff --git a/third_party/nixpkgs/nixos/lib/make-squashfs.nix b/third_party/nixpkgs/nixos/lib/make-squashfs.nix index b7c7078b73..4b6b567399 100644 --- a/third_party/nixpkgs/nixos/lib/make-squashfs.nix +++ b/third_party/nixpkgs/nixos/lib/make-squashfs.nix @@ -1,15 +1,22 @@ { lib, stdenv, squashfsTools, closureInfo +, fileName ? "squashfs" , # The root directory of the squashfs filesystem is filled with the # closures of the Nix store paths listed here. storeContents ? [] + # Pseudo files to be added to squashfs image +, pseudoFiles ? [] +, noStrip ? false , # Compression parameters. # For zstd compression you can use "zstd -Xcompression-level 6". comp ? "xz -Xdict-size 100%" }: +let + pseudoFilesArgs = lib.concatMapStrings (f: ''-p "${f}" '') pseudoFiles; +in stdenv.mkDerivation { - name = "squashfs.img"; + name = "${fileName}.img"; __structuredAttrs = true; nativeBuildInputs = [ squashfsTools ]; @@ -31,8 +38,8 @@ stdenv.mkDerivation { '' + '' # Generate the squashfs image. - mksquashfs nix-path-registration $(cat $closureInfo/store-paths) $out \ - -no-hardlinks -keep-as-directory -all-root -b 1048576 -comp ${comp} \ + mksquashfs nix-path-registration $(cat $closureInfo/store-paths) $out ${pseudoFilesArgs} \ + -no-hardlinks ${lib.optionalString noStrip "-no-strip"} -keep-as-directory -all-root -b 1048576 -comp ${comp} \ -processors $NIX_BUILD_CORES ''; } diff --git a/third_party/nixpkgs/nixos/lib/qemu-common.nix b/third_party/nixpkgs/nixos/lib/qemu-common.nix index 4fff2e0a6f..b946f62d93 100644 --- a/third_party/nixpkgs/nixos/lib/qemu-common.nix +++ b/third_party/nixpkgs/nixos/lib/qemu-common.nix @@ -40,6 +40,7 @@ rec { otherHostGuestMatrix = { aarch64-darwin = { aarch64-linux = "${qemuPkg}/bin/qemu-system-aarch64 -machine virt,gic-version=2,accel=hvf:tcg -cpu max"; + inherit (otherHostGuestMatrix.x86_64-darwin) x86_64-linux; }; x86_64-darwin = { x86_64-linux = "${qemuPkg}/bin/qemu-system-x86_64 -machine type=q35,accel=hvf:tcg -cpu max"; diff --git a/third_party/nixpkgs/nixos/lib/systemd-lib.nix b/third_party/nixpkgs/nixos/lib/systemd-lib.nix index 5669aae0bc..820ccbcbf7 100644 --- a/third_party/nixpkgs/nixos/lib/systemd-lib.nix +++ b/third_party/nixpkgs/nixos/lib/systemd-lib.nix @@ -20,12 +20,16 @@ in rec { pkgs.runCommand "unit-${mkPathSafeName name}" { preferLocalBuild = true; allowSubstitutes = false; - inherit (unit) text; + # unit.text can be null. But variables that are null listed in + # passAsFile are ignored by nix, resulting in no file being created, + # making the mv operation fail. 
+ text = optionalString (unit.text != null) unit.text; + passAsFile = [ "text" ]; } '' name=${shellEscape name} mkdir -p "$out/$(dirname -- "$name")" - echo -n "$text" > "$out/$name" + mv "$textPath" "$out/$name" '' else pkgs.runCommand "unit-${mkPathSafeName name}-disabled" @@ -372,24 +376,23 @@ in rec { serviceToUnit = name: def: { inherit (def) aliases wantedBy requiredBy enable overrideStrategy; - text = commonUnitText def + - '' - [Service] - ${let env = cfg.globalEnvironment // def.environment; - in concatMapStrings (n: - let s = optionalString (env.${n} != null) - "Environment=${builtins.toJSON "${n}=${env.${n}}"}\n"; - # systemd max line length is now 1MiB - # https://github.com/systemd/systemd/commit/e6dde451a51dc5aaa7f4d98d39b8fe735f73d2af - in if stringLength s >= 1048576 then throw "The value of the environment variable ‘${n}’ in systemd service ‘${name}.service’ is too long." else s) (attrNames env)} - ${if def ? reloadIfChanged && def.reloadIfChanged then '' - X-ReloadIfChanged=true - '' else if (def ? restartIfChanged && !def.restartIfChanged) then '' - X-RestartIfChanged=false - '' else ""} - ${optionalString (def ? stopIfChanged && !def.stopIfChanged) "X-StopIfChanged=false"} - ${attrsToSection def.serviceConfig} - ''; + text = commonUnitText def + '' + [Service] + '' + (let env = cfg.globalEnvironment // def.environment; + in concatMapStrings (n: + let s = optionalString (env.${n} != null) + "Environment=${builtins.toJSON "${n}=${env.${n}}"}\n"; + # systemd max line length is now 1MiB + # https://github.com/systemd/systemd/commit/e6dde451a51dc5aaa7f4d98d39b8fe735f73d2af + in if stringLength s >= 1048576 then throw "The value of the environment variable ‘${n}’ in systemd service ‘${name}.service’ is too long." else s) (attrNames env)) + + (if def ? reloadIfChanged && def.reloadIfChanged then '' + X-ReloadIfChanged=true + '' else if (def ? restartIfChanged && !def.restartIfChanged) then '' + X-RestartIfChanged=false + '' else "") + + optionalString (def ? stopIfChanged && !def.stopIfChanged) '' + X-StopIfChanged=false + '' + attrsToSection def.serviceConfig; }; socketToUnit = name: def: diff --git a/third_party/nixpkgs/nixos/lib/systemd-network-units.nix b/third_party/nixpkgs/nixos/lib/systemd-network-units.nix index 14ff0b3742..8bda1a8bfd 100644 --- a/third_party/nixpkgs/nixos/lib/systemd-network-units.nix +++ b/third_party/nixpkgs/nixos/lib/systemd-network-units.nix @@ -65,6 +65,9 @@ in { '' + optionalString (def.vrfConfig != { }) '' [VRF] ${attrsToSection def.vrfConfig} + '' + optionalString (def.wlanConfig != { }) '' + [WLAN] + ${attrsToSection def.wlanConfig} '' + optionalString (def.batmanAdvancedConfig != { }) '' [BatmanAdvanced] ${attrsToSection def.batmanAdvancedConfig} diff --git a/third_party/nixpkgs/nixos/lib/test-driver/default.nix b/third_party/nixpkgs/nixos/lib/test-driver/default.nix index 6e01e00b43..09d80deb85 100644 --- a/third_party/nixpkgs/nixos/lib/test-driver/default.nix +++ b/third_party/nixpkgs/nixos/lib/test-driver/default.nix @@ -11,6 +11,7 @@ , tesseract4 , vde2 , extraPythonPackages ? 
(_ : []) +, nixosTests }: python3Packages.buildPythonApplication { @@ -31,6 +32,10 @@ python3Packages.buildPythonApplication { ++ (lib.optionals enableOCR [ imagemagick_light tesseract4 ]) ++ extraPythonPackages python3Packages; + passthru.tests = { + inherit (nixosTests.nixos-test-driver) driver-timeout; + }; + doCheck = true; nativeCheckInputs = with python3Packages; [ mypy ruff black ]; checkPhase = '' diff --git a/third_party/nixpkgs/nixos/lib/test-driver/test_driver/__init__.py b/third_party/nixpkgs/nixos/lib/test-driver/test_driver/__init__.py index 371719d7a9..9daae1e941 100755 --- a/third_party/nixpkgs/nixos/lib/test-driver/test_driver/__init__.py +++ b/third_party/nixpkgs/nixos/lib/test-driver/test_driver/__init__.py @@ -76,6 +76,14 @@ def main() -> None: nargs="*", help="vlans to span by the driver", ) + arg_parser.add_argument( + "--global-timeout", + type=int, + metavar="GLOBAL_TIMEOUT", + action=EnvDefault, + envvar="globalTimeout", + help="Timeout in seconds for the whole test", + ) arg_parser.add_argument( "-o", "--output_directory", @@ -103,6 +111,7 @@ def main() -> None: args.testscript.read_text(), args.output_directory.resolve(), args.keep_vm_state, + args.global_timeout, ) as driver: if args.interactive: history_dir = os.getcwd() diff --git a/third_party/nixpkgs/nixos/lib/test-driver/test_driver/driver.py b/third_party/nixpkgs/nixos/lib/test-driver/test_driver/driver.py index 723c807178..786821b0cc 100644 --- a/third_party/nixpkgs/nixos/lib/test-driver/test_driver/driver.py +++ b/third_party/nixpkgs/nixos/lib/test-driver/test_driver/driver.py @@ -1,6 +1,8 @@ import os import re +import signal import tempfile +import threading from contextlib import contextmanager from pathlib import Path from typing import Any, Callable, ContextManager, Dict, Iterator, List, Optional, Union @@ -41,6 +43,8 @@ class Driver: vlans: List[VLan] machines: List[Machine] polling_conditions: List[PollingCondition] + global_timeout: int + race_timer: threading.Timer def __init__( self, @@ -49,9 +53,12 @@ class Driver: tests: str, out_dir: Path, keep_vm_state: bool = False, + global_timeout: int = 24 * 60 * 60 * 7, ): self.tests = tests self.out_dir = out_dir + self.global_timeout = global_timeout + self.race_timer = threading.Timer(global_timeout, self.terminate_test) tmp_dir = get_tmp_dir() @@ -82,6 +89,7 @@ class Driver: def __exit__(self, *_: Any) -> None: with rootlog.nested("cleanup"): + self.race_timer.cancel() for machine in self.machines: machine.release() @@ -144,6 +152,10 @@ class Driver: def run_tests(self) -> None: """Run the test script (for non-interactive test runs)""" + rootlog.info( + f"Test will time out and terminate in {self.global_timeout} seconds" + ) + self.race_timer.start() self.test_script() # TODO: Collect coverage data for machine in self.machines: @@ -161,6 +173,19 @@ class Driver: with rootlog.nested("wait for all VMs to finish"): for machine in self.machines: machine.wait_for_shutdown() + self.race_timer.cancel() + + def terminate_test(self) -> None: + # This will be usually running in another thread than + # the thread actually executing the test script. + with rootlog.nested("timeout reached; test terminating..."): + for machine in self.machines: + machine.release() + # As we cannot `sys.exit` from another thread + # We can at least force the main thread to get SIGTERM'ed. + # This will prevent any user who caught all the exceptions + # to swallow them and prevent itself from terminating. 
+ os.kill(os.getpid(), signal.SIGTERM) def create_machine(self, args: Dict[str, Any]) -> Machine: tmp_dir = get_tmp_dir() diff --git a/third_party/nixpkgs/nixos/lib/test-driver/test_driver/machine.py b/third_party/nixpkgs/nixos/lib/test-driver/test_driver/machine.py index 7ed001a1df..f430321bb6 100644 --- a/third_party/nixpkgs/nixos/lib/test-driver/test_driver/machine.py +++ b/third_party/nixpkgs/nixos/lib/test-driver/test_driver/machine.py @@ -19,6 +19,8 @@ from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple from test_driver.logger import rootlog +from .qmp import QMPSession + CHAR_TO_KEY = { "A": "shift-a", "N": "shift-n", @@ -144,6 +146,7 @@ class StartCommand: def cmd( self, monitor_socket_path: Path, + qmp_socket_path: Path, shell_socket_path: Path, allow_reboot: bool = False, ) -> str: @@ -167,6 +170,7 @@ class StartCommand: return ( f"{self._cmd}" + f" -qmp unix:{qmp_socket_path},server=on,wait=off" f" -monitor unix:{monitor_socket_path}" f" -chardev socket,id=shell,path={shell_socket_path}" f"{qemu_opts}" @@ -194,11 +198,14 @@ class StartCommand: state_dir: Path, shared_dir: Path, monitor_socket_path: Path, + qmp_socket_path: Path, shell_socket_path: Path, allow_reboot: bool, ) -> subprocess.Popen: return subprocess.Popen( - self.cmd(monitor_socket_path, shell_socket_path, allow_reboot), + self.cmd( + monitor_socket_path, qmp_socket_path, shell_socket_path, allow_reboot + ), stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, @@ -309,6 +316,7 @@ class Machine: shared_dir: Path state_dir: Path monitor_path: Path + qmp_path: Path shell_path: Path start_command: StartCommand @@ -317,6 +325,7 @@ class Machine: process: Optional[subprocess.Popen] pid: Optional[int] monitor: Optional[socket.socket] + qmp_client: Optional[QMPSession] shell: Optional[socket.socket] serial_thread: Optional[threading.Thread] @@ -352,6 +361,7 @@ class Machine: self.state_dir = self.tmp_dir / f"vm-state-{self.name}" self.monitor_path = self.state_dir / "monitor" + self.qmp_path = self.state_dir / "qmp" self.shell_path = self.state_dir / "shell" if (not self.keep_vm_state) and self.state_dir.exists(): self.cleanup_statedir() @@ -360,6 +370,7 @@ class Machine: self.process = None self.pid = None self.monitor = None + self.qmp_client = None self.shell = None self.serial_thread = None @@ -791,6 +802,28 @@ class Machine: with self.nested(f"waiting for TCP port {port} on {addr}"): retry(port_is_open, timeout) + def wait_for_open_unix_socket( + self, addr: str, is_datagram: bool = False, timeout: int = 900 + ) -> None: + """ + Wait until a process is listening on the given UNIX-domain socket + (default to a UNIX-domain stream socket). 
+ """ + + nc_flags = [ + "-z", + "-uU" if is_datagram else "-U", + ] + + def socket_is_open(_: Any) -> bool: + status, _ = self.execute(f"nc {' '.join(nc_flags)} {addr}") + return status == 0 + + with self.nested( + f"waiting for UNIX-domain {'datagram' if is_datagram else 'stream'} on '{addr}'" + ): + retry(socket_is_open, timeout) + def wait_for_closed_port( self, port: int, addr: str = "localhost", timeout: int = 900 ) -> None: @@ -1090,11 +1123,13 @@ class Machine: self.state_dir, self.shared_dir, self.monitor_path, + self.qmp_path, self.shell_path, allow_reboot, ) self.monitor, _ = monitor_socket.accept() self.shell, _ = shell_socket.accept() + self.qmp_client = QMPSession.from_path(self.qmp_path) # Store last serial console lines for use # of wait_for_console_text @@ -1243,3 +1278,19 @@ class Machine: def run_callbacks(self) -> None: for callback in self.callbacks: callback() + + def switch_root(self) -> None: + """ + Transition from stage 1 to stage 2. This requires the + machine to be configured with `testing.initrdBackdoor = true` + and `boot.initrd.systemd.enable = true`. + """ + self.wait_for_unit("initrd.target") + self.execute( + "systemctl isolate --no-block initrd-switch-root.target 2>/dev/null >/dev/null", + check_return=False, + check_output=False, + ) + self.wait_for_console_text(r"systemd\[1\]:.*Switching root\.") + self.connected = False + self.connect() diff --git a/third_party/nixpkgs/nixos/lib/test-driver/test_driver/qmp.py b/third_party/nixpkgs/nixos/lib/test-driver/test_driver/qmp.py new file mode 100644 index 0000000000..62ca6d7d5b --- /dev/null +++ b/third_party/nixpkgs/nixos/lib/test-driver/test_driver/qmp.py @@ -0,0 +1,98 @@ +import json +import logging +import os +import socket +from collections.abc import Iterator +from pathlib import Path +from queue import Queue +from typing import Any + +logger = logging.getLogger(__name__) + + +class QMPAPIError(RuntimeError): + def __init__(self, message: dict[str, Any]): + assert "error" in message, "Not an error message!" + try: + self.class_name = message["class"] + self.description = message["desc"] + # NOTE: Some errors can occur before the Server is able to read the + # id member; in these cases the id member will not be part of the + # error response, even if provided by the client. + self.transaction_id = message.get("id") + except KeyError: + raise RuntimeError("Malformed QMP API error response") + + def __str__(self) -> str: + return f"" + + +class QMPSession: + def __init__(self, sock: socket.socket) -> None: + self.sock = sock + self.results: Queue[dict[str, str]] = Queue() + self.pending_events: Queue[dict[str, Any]] = Queue() + self.reader = sock.makefile("r") + self.writer = sock.makefile("w") + # Make the reader non-blocking so we can kind of select on it. + os.set_blocking(self.reader.fileno(), False) + hello = self._wait_for_new_result() + logger.debug(f"Got greeting from QMP API: {hello}") + # The greeting message format is: + # { "QMP": { "version": json-object, "capabilities": json-array } } + assert "QMP" in hello, f"Unexpected result: {hello}" + self.send("qmp_capabilities") + + @classmethod + def from_path(cls, path: Path) -> "QMPSession": + sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) + sock.connect(str(path)) + return cls(sock) + + def __del__(self) -> None: + self.sock.close() + + def _wait_for_new_result(self) -> dict[str, str]: + assert self.results.empty(), "Results set is not empty, missed results!" 
+ while self.results.empty(): + self.read_pending_messages() + return self.results.get() + + def read_pending_messages(self) -> None: + line = self.reader.readline() + if not line: + return + evt_or_result = json.loads(line) + logger.debug(f"Received a message: {evt_or_result}") + + # It's a result + if "return" in evt_or_result or "QMP" in evt_or_result: + self.results.put(evt_or_result) + # It's an event + elif "event" in evt_or_result: + self.pending_events.put(evt_or_result) + else: + raise QMPAPIError(evt_or_result) + + def wait_for_event(self, timeout: int = 10) -> dict[str, Any]: + while self.pending_events.empty(): + self.read_pending_messages() + + return self.pending_events.get(timeout=timeout) + + def events(self, timeout: int = 10) -> Iterator[dict[str, Any]]: + while not self.pending_events.empty(): + yield self.pending_events.get(timeout=timeout) + + def send(self, cmd: str, args: dict[str, str] = {}) -> dict[str, str]: + self.read_pending_messages() + assert self.results.empty(), "Results set is not empty, missed results!" + data: dict[str, Any] = dict(execute=cmd) + if args != {}: + data["arguments"] = args + + logger.debug(f"Sending {data} to QMP...") + json.dump(data, self.writer) + self.writer.write("\n") + self.writer.flush() + return self._wait_for_new_result() diff --git a/third_party/nixpkgs/nixos/lib/testing-python.nix b/third_party/nixpkgs/nixos/lib/testing-python.nix index 4904ad6e35..f522235151 100644 --- a/third_party/nixpkgs/nixos/lib/testing-python.nix +++ b/third_party/nixpkgs/nixos/lib/testing-python.nix @@ -42,6 +42,7 @@ rec { , nodes ? {} , testScript , enableOCR ? false + , globalTimeout ? (60 * 60) , name ? "unnamed" , skipTypeCheck ? false # Skip linting (mainly intended for faster dev cycles) diff --git a/third_party/nixpkgs/nixos/lib/testing/driver.nix b/third_party/nixpkgs/nixos/lib/testing/driver.nix index cc97ca7208..b6f01c3819 100644 --- a/third_party/nixpkgs/nixos/lib/testing/driver.nix +++ b/third_party/nixpkgs/nixos/lib/testing/driver.nix @@ -94,6 +94,7 @@ let wrapProgram $out/bin/nixos-test-driver \ --set startScripts "''${vmStartScripts[*]}" \ --set testScript "$out/test-script" \ + --set globalTimeout "${toString config.globalTimeout}" \ --set vlans '${toString vlans}' \ ${lib.escapeShellArgs (lib.concatMap (arg: ["--add-flags" arg]) config.extraDriverArgs)} ''; @@ -123,6 +124,18 @@ in defaultText = "hostPkgs.qemu_test"; }; + globalTimeout = mkOption { + description = mdDoc '' + A global timeout for the complete test, expressed in seconds. + Beyond that timeout, every resource will be killed and released and the test will fail. + + By default, we use a 1 hour timeout. + ''; + type = types.int; + default = 60 * 60; + example = 10 * 60; + }; + enableOCR = mkOption { description = mdDoc '' Whether to enable Optical Character Recognition functionality for diff --git a/third_party/nixpkgs/nixos/lib/testing/nodes.nix b/third_party/nixpkgs/nixos/lib/testing/nodes.nix index a47d1c98ec..73e6d386fd 100644 --- a/third_party/nixpkgs/nixos/lib/testing/nodes.nix +++ b/third_party/nixpkgs/nixos/lib/testing/nodes.nix @@ -32,9 +32,6 @@ let key = "nodes.nix-pkgs"; config = optionalAttrs (!config.node.pkgsReadOnly) ( mkIf (!options.nixpkgs.pkgs.isDefined) { - # Ensure we do not use aliases. Ideally this is only set - # when the test framework is used by Nixpkgs NixOS tests. - nixpkgs.config.allowAliases = false; # TODO: switch to nixpkgs.hostPlatform and make sure containers-imperative test still evaluates. 
nixpkgs.system = hostPkgs.stdenv.hostPlatform.system; } diff --git a/third_party/nixpkgs/nixos/lib/testing/run.nix b/third_party/nixpkgs/nixos/lib/testing/run.nix index 0cd07d8afd..9440c1acdf 100644 --- a/third_party/nixpkgs/nixos/lib/testing/run.nix +++ b/third_party/nixpkgs/nixos/lib/testing/run.nix @@ -16,6 +16,15 @@ in ''; }; + rawTestDerivation = mkOption { + type = types.package; + description = mdDoc '' + Unfiltered version of `test`, for troubleshooting the test framework and `testBuildFailure` in the test framework's test suite. + This is not intended for general use. Use `test` instead. + ''; + internal = true; + }; + test = mkOption { type = types.package; # TODO: can the interactive driver be configured to access the network? @@ -29,25 +38,26 @@ in }; config = { + rawTestDerivation = hostPkgs.stdenv.mkDerivation { + name = "vm-test-run-${config.name}"; + + requiredSystemFeatures = [ "kvm" "nixos-test" ]; + + buildCommand = '' + mkdir -p $out + + # effectively mute the XMLLogger + export LOGFILE=/dev/null + + ${config.driver}/bin/nixos-test-driver -o $out + ''; + + passthru = config.passthru; + + meta = config.meta; + }; test = lib.lazyDerivation { # lazyDerivation improves performance when only passthru items and/or meta are used. - derivation = hostPkgs.stdenv.mkDerivation { - name = "vm-test-run-${config.name}"; - - requiredSystemFeatures = [ "kvm" "nixos-test" ]; - - buildCommand = '' - mkdir -p $out - - # effectively mute the XMLLogger - export LOGFILE=/dev/null - - ${config.driver}/bin/nixos-test-driver -o $out - ''; - - passthru = config.passthru; - - meta = config.meta; - }; + derivation = config.rawTestDerivation; inherit (config) passthru meta; }; diff --git a/third_party/nixpkgs/nixos/maintainers/scripts/azure-new/examples/basic/system.nix b/third_party/nixpkgs/nixos/maintainers/scripts/azure-new/examples/basic/system.nix index d283742701..d1044802e1 100644 --- a/third_party/nixpkgs/nixos/maintainers/scripts/azure-new/examples/basic/system.nix +++ b/third_party/nixpkgs/nixos/maintainers/scripts/azure-new/examples/basic/system.nix @@ -21,7 +21,6 @@ in virtualisation.azureImage.diskSize = 2500; - system.stateVersion = "20.03"; boot.kernelPackages = pkgs.linuxPackages_latest; # test user doesn't have a password diff --git a/third_party/nixpkgs/nixos/maintainers/scripts/lxd/lxd-container-image-inner.nix b/third_party/nixpkgs/nixos/maintainers/scripts/lxd/lxd-container-image-inner.nix index 7b743d170b..62a6e1f9aa 100644 --- a/third_party/nixpkgs/nixos/maintainers/scripts/lxd/lxd-container-image-inner.nix +++ b/third_party/nixpkgs/nixos/maintainers/scripts/lxd/lxd-container-image-inner.nix @@ -2,13 +2,13 @@ # your system. Help is available in the configuration.nix(5) man page # and in the NixOS manual (accessible by running ‘nixos-help’). -{ config, pkgs, lib, ... }: +{ config, pkgs, lib, modulesPath, ... }: { imports = [ # Include the default lxd configuration. - ../../../modules/virtualisation/lxc-container.nix + "${modulesPath}/modules/virtualisation/lxc-container.nix" # Include the container-specific autogenerated configuration. ./lxd.nix ]; @@ -16,5 +16,5 @@ networking.useDHCP = false; networking.interfaces.eth0.useDHCP = true; - system.stateVersion = "21.05"; # Did you read the comment? + system.stateVersion = "@stateVersion@"; # Did you read the comment? 
} diff --git a/third_party/nixpkgs/nixos/maintainers/scripts/lxd/lxd-container-image.nix b/third_party/nixpkgs/nixos/maintainers/scripts/lxd/lxd-container-image.nix index 3bd1320b2b..b77f9f5aab 100644 --- a/third_party/nixpkgs/nixos/maintainers/scripts/lxd/lxd-container-image.nix +++ b/third_party/nixpkgs/nixos/maintainers/scripts/lxd/lxd-container-image.nix @@ -13,11 +13,15 @@ }; # copy the config for nixos-rebuild - system.activationScripts.config = '' + system.activationScripts.config = let + config = pkgs.substituteAll { + src = ./lxd-container-image-inner.nix; + stateVersion = lib.trivial.release; + }; + in '' if [ ! -e /etc/nixos/configuration.nix ]; then mkdir -p /etc/nixos - cat ${./lxd-container-image-inner.nix} > /etc/nixos/configuration.nix - ${lib.getExe pkgs.gnused} 's|../../../modules/virtualisation/lxc-container.nix||g' -i /etc/nixos/configuration.nix + cp ${config} /etc/nixos/configuration.nix fi ''; diff --git a/third_party/nixpkgs/nixos/maintainers/scripts/lxd/lxd-virtual-machine-image-inner.nix b/third_party/nixpkgs/nixos/maintainers/scripts/lxd/lxd-virtual-machine-image-inner.nix index a8f2c63ac5..c1c50b32ff 100644 --- a/third_party/nixpkgs/nixos/maintainers/scripts/lxd/lxd-virtual-machine-image-inner.nix +++ b/third_party/nixpkgs/nixos/maintainers/scripts/lxd/lxd-virtual-machine-image-inner.nix @@ -2,13 +2,13 @@ # your system. Help is available in the configuration.nix(5) man page # and in the NixOS manual (accessible by running ‘nixos-help’). -{ config, pkgs, lib, ... }: +{ config, pkgs, lib, modulesPath, ... }: { imports = [ # Include the default lxd configuration. - ../../../modules/virtualisation/lxd-virtual-machine.nix + "${modulesPath}/virtualisation/lxd-virtual-machine.nix" # Include the container-specific autogenerated configuration. ./lxd.nix ]; @@ -16,5 +16,5 @@ networking.useDHCP = false; networking.interfaces.eth0.useDHCP = true; - system.stateVersion = "23.05"; # Did you read the comment? + system.stateVersion = "@stateVersion@"; # Did you read the comment? } diff --git a/third_party/nixpkgs/nixos/maintainers/scripts/lxd/lxd-virtual-machine-image.nix b/third_party/nixpkgs/nixos/maintainers/scripts/lxd/lxd-virtual-machine-image.nix index eb0d9217d4..0d96eea0e2 100644 --- a/third_party/nixpkgs/nixos/maintainers/scripts/lxd/lxd-virtual-machine-image.nix +++ b/third_party/nixpkgs/nixos/maintainers/scripts/lxd/lxd-virtual-machine-image.nix @@ -13,11 +13,15 @@ }; # copy the config for nixos-rebuild - system.activationScripts.config = '' + system.activationScripts.config = let + config = pkgs.substituteAll { + src = ./lxd-virtual-machine-image-inner.nix; + stateVersion = lib.trivial.release; + }; + in '' if [ ! -e /etc/nixos/configuration.nix ]; then mkdir -p /etc/nixos - cat ${./lxd-virtual-machine-image-inner.nix} > /etc/nixos/configuration.nix - ${lib.getExe pkgs.gnused} 's|../../../modules/virtualisation/lxd-virtual-machine.nix||g' -i /etc/nixos/configuration.nix + cp ${config} /etc/nixos/configuration.nix fi ''; diff --git a/third_party/nixpkgs/nixos/modules/config/fanout.nix b/third_party/nixpkgs/nixos/modules/config/fanout.nix new file mode 100644 index 0000000000..60ee145f19 --- /dev/null +++ b/third_party/nixpkgs/nixos/modules/config/fanout.nix @@ -0,0 +1,49 @@ +{ config, lib, pkgs, ... 
}: +let + cfg = config.services.fanout; + mknodCmds = n: lib.lists.imap0 (i: s: + "mknod /dev/fanout${builtins.toString i} c $MAJOR ${builtins.toString i}" + ) (lib.lists.replicate n ""); +in +{ + options.services.fanout = { + enable = lib.mkEnableOption (lib.mdDoc "fanout"); + fanoutDevices = lib.mkOption { + type = lib.types.int; + default = 1; + description = "Number of /dev/fanout devices"; + }; + bufferSize = lib.mkOption { + type = lib.types.int; + default = 16384; + description = "Size of /dev/fanout buffer in bytes"; + }; + }; + + config = lib.mkIf cfg.enable { + boot.extraModulePackages = [ config.boot.kernelPackages.fanout.out ]; + + boot.kernelModules = [ "fanout" ]; + + boot.extraModprobeConfig = '' + options fanout buffersize=${builtins.toString cfg.bufferSize} + ''; + + systemd.services.fanout = { + description = "Bring up /dev/fanout devices"; + script = '' + MAJOR=$(${pkgs.gnugrep}/bin/grep fanout /proc/devices | ${pkgs.gawk}/bin/awk '{print $1}') + ${lib.strings.concatLines (mknodCmds cfg.fanoutDevices)} + ''; + + wantedBy = [ "multi-user.target" ]; + + serviceConfig = { + Type = "oneshot"; + User = "root"; + RemainAfterExit = "yes"; + Restart = "no"; + }; + }; + }; +} diff --git a/third_party/nixpkgs/nixos/modules/config/iproute2.nix b/third_party/nixpkgs/nixos/modules/config/iproute2.nix index 8f49e7dbf7..78bd07d680 100644 --- a/third_party/nixpkgs/nixos/modules/config/iproute2.nix +++ b/third_party/nixpkgs/nixos/modules/config/iproute2.nix @@ -7,7 +7,7 @@ let in { options.networking.iproute2 = { - enable = mkEnableOption (lib.mdDoc "copy IP route configuration files"); + enable = mkEnableOption (lib.mdDoc "copying IP route configuration files"); rttablesExtraConfig = mkOption { type = types.lines; default = ""; @@ -18,15 +18,10 @@ in }; config = mkIf cfg.enable { - environment.etc."iproute2/bpf_pinning" = { mode = "0644"; text = fileContents "${pkgs.iproute2}/etc/iproute2/bpf_pinning"; }; - environment.etc."iproute2/ematch_map" = { mode = "0644"; text = fileContents "${pkgs.iproute2}/etc/iproute2/ematch_map"; }; - environment.etc."iproute2/group" = { mode = "0644"; text = fileContents "${pkgs.iproute2}/etc/iproute2/group"; }; - environment.etc."iproute2/nl_protos" = { mode = "0644"; text = fileContents "${pkgs.iproute2}/etc/iproute2/nl_protos"; }; - environment.etc."iproute2/rt_dsfield" = { mode = "0644"; text = fileContents "${pkgs.iproute2}/etc/iproute2/rt_dsfield"; }; - environment.etc."iproute2/rt_protos" = { mode = "0644"; text = fileContents "${pkgs.iproute2}/etc/iproute2/rt_protos"; }; - environment.etc."iproute2/rt_realms" = { mode = "0644"; text = fileContents "${pkgs.iproute2}/etc/iproute2/rt_realms"; }; - environment.etc."iproute2/rt_scopes" = { mode = "0644"; text = fileContents "${pkgs.iproute2}/etc/iproute2/rt_scopes"; }; - environment.etc."iproute2/rt_tables" = { mode = "0644"; text = (fileContents "${pkgs.iproute2}/etc/iproute2/rt_tables") - + (optionalString (cfg.rttablesExtraConfig != "") "\n\n${cfg.rttablesExtraConfig}"); }; + environment.etc."iproute2/rt_tables" = { + mode = "0644"; + text = (fileContents "${pkgs.iproute2}/lib/iproute2/rt_tables") + + (optionalString (cfg.rttablesExtraConfig != "") "\n\n${cfg.rttablesExtraConfig}"); + }; }; } diff --git a/third_party/nixpkgs/nixos/modules/config/mysql.nix b/third_party/nixpkgs/nixos/modules/config/mysql.nix index 2f13c56f2a..95c9ba7666 100644 --- a/third_party/nixpkgs/nixos/modules/config/mysql.nix +++ b/third_party/nixpkgs/nixos/modules/config/mysql.nix @@ -429,11 +429,11 @@ in ''; }; - # Activation 
script to append the password from the password file + # preStart script to append the password from the password file # to the configuration files. It also fixes the owner of the # libnss-mysql-root.cfg because it is changed to root after the # password is appended. - system.activationScripts.mysql-auth-passwords = '' + systemd.services.mysql.preStart = '' if [[ -r ${cfg.passwordFile} ]]; then org_umask=$(umask) umask 0077 diff --git a/third_party/nixpkgs/nixos/modules/config/nix-channel.nix b/third_party/nixpkgs/nixos/modules/config/nix-channel.nix index 3f8e088ede..a7ca7a5c74 100644 --- a/third_party/nixpkgs/nixos/modules/config/nix-channel.nix +++ b/third_party/nixpkgs/nixos/modules/config/nix-channel.nix @@ -97,12 +97,8 @@ in nix.settings.nix-path = mkIf (! cfg.channel.enable) (mkDefault ""); - system.activationScripts.nix-channel = mkIf cfg.channel.enable - (stringAfter [ "etc" "users" ] '' - # Subscribe the root user to the NixOS channel by default. - if [ ! -e "/root/.nix-channels" ]; then - echo "${config.system.defaultChannel} nixos" > "/root/.nix-channels" - fi - ''); + systemd.tmpfiles.rules = lib.mkIf cfg.channel.enable [ + ''f /root/.nix-channels - - - - ${config.system.defaultChannel} nixos\n'' + ]; }; } diff --git a/third_party/nixpkgs/nixos/modules/config/qt.nix b/third_party/nixpkgs/nixos/modules/config/qt.nix index 2b09281e46..f82b7ab85a 100644 --- a/third_party/nixpkgs/nixos/modules/config/qt.nix +++ b/third_party/nixpkgs/nixos/modules/config/qt.nix @@ -1,121 +1,154 @@ { config, lib, pkgs, ... }: -with lib; - let - cfg = config.qt; - isQGnome = cfg.platformTheme == "gnome" && builtins.elem cfg.style ["adwaita" "adwaita-dark"]; - isQtStyle = cfg.platformTheme == "gtk2" && !(builtins.elem cfg.style ["adwaita" "adwaita-dark"]); - isQt5ct = cfg.platformTheme == "qt5ct"; - isLxqt = cfg.platformTheme == "lxqt"; - isKde = cfg.platformTheme == "kde"; + platformPackages = with pkgs; { + gnome = [ qgnomeplatform qgnomeplatform-qt6 ]; + gtk2 = [ libsForQt5.qtstyleplugins qt6Packages.qt6gtk2 ]; + kde = [ libsForQt5.plasma-integration libsForQt5.systemsettings ]; + lxqt = [ lxqt.lxqt-qtplugin lxqt.lxqt-config ]; + qt5ct = [ libsForQt5.qt5ct qt6Packages.qt6ct ]; + }; - packages = - if isQGnome then [ - pkgs.qgnomeplatform - pkgs.adwaita-qt - pkgs.qgnomeplatform-qt6 - pkgs.adwaita-qt6 - ] - else if isQtStyle then [ pkgs.libsForQt5.qtstyleplugins pkgs.qt6Packages.qt6gtk2 ] - else if isQt5ct then [ pkgs.libsForQt5.qt5ct pkgs.qt6Packages.qt6ct ] - else if isLxqt then [ pkgs.lxqt.lxqt-qtplugin pkgs.lxqt.lxqt-config ] - else if isKde then [ pkgs.libsForQt5.plasma-integration pkgs.libsForQt5.systemsettings ] - else throw "`qt.platformTheme` ${cfg.platformTheme} and `qt.style` ${cfg.style} are not compatible."; + stylePackages = with pkgs; { + bb10bright = [ libsForQt5.qtstyleplugins ]; + bb10dark = [ libsForQt5.qtstyleplugins ]; + cde = [ libsForQt5.qtstyleplugins ]; + cleanlooks = [ libsForQt5.qtstyleplugins ]; + gtk2 = [ libsForQt5.qtstyleplugins qt6Packages.qt6gtk2 ]; + motif = [ libsForQt5.qtstyleplugins ]; + plastique = [ libsForQt5.qtstyleplugins ]; + adwaita = [ adwaita-qt adwaita-qt6 ]; + adwaita-dark = [ adwaita-qt adwaita-qt6 ]; + adwaita-highcontrast = [ adwaita-qt adwaita-qt6 ]; + adwaita-highcontrastinverse = [ adwaita-qt adwaita-qt6 ]; + + breeze = [ libsForQt5.breeze-qt5 ]; + + kvantum = [ libsForQt5.qtstyleplugin-kvantum qt6Packages.qtstyleplugin-kvantum ]; + }; in - { - meta.maintainers = [ maintainers.romildo ]; + meta.maintainers = with lib.maintainers; [ romildo 
thiagokokada ]; imports = [ - (mkRenamedOptionModule ["qt5" "enable" ] ["qt" "enable" ]) - (mkRenamedOptionModule ["qt5" "platformTheme" ] ["qt" "platformTheme" ]) - (mkRenamedOptionModule ["qt5" "style" ] ["qt" "style" ]) + (lib.mkRenamedOptionModule [ "qt5" "enable" ] [ "qt" "enable" ]) + (lib.mkRenamedOptionModule [ "qt5" "platformTheme" ] [ "qt" "platformTheme" ]) + (lib.mkRenamedOptionModule [ "qt5" "style" ] [ "qt" "style" ]) ]; options = { qt = { + enable = lib.mkEnableOption "" // { + description = lib.mdDoc '' + Whether to enable Qt configuration, including theming. - enable = mkEnableOption (lib.mdDoc "Qt theming configuration"); + Enabling this option is necessary for Qt plugins to work in the + installed profiles (e.g.: `nix-env -i` or `environment.systemPackages`). + ''; + }; - platformTheme = mkOption { - type = types.enum [ - "gtk2" - "gnome" - "lxqt" - "qt5ct" - "kde" - ]; + platformTheme = lib.mkOption { + type = with lib.types; nullOr (enum (lib.attrNames platformPackages)); + default = null; example = "gnome"; relatedPackages = [ "qgnomeplatform" "qgnomeplatform-qt6" - ["libsForQt5" "qtstyleplugins"] - ["libsForQt5" "qt5ct"] - ["lxqt" "lxqt-qtplugin"] - ["libsForQt5" "plasma-integration"] + [ "libsForQt5" "plasma-integration" ] + [ "libsForQt5" "qt5ct" ] + [ "libsForQt5" "qtstyleplugins" ] + [ "libsForQt5" "systemsettings" ] + [ "lxqt" "lxqt-config" ] + [ "lxqt" "lxqt-qtplugin" ] + [ "qt6Packages" "qt6ct" ] + [ "qt6Packages" "qt6gtk2" ] ]; description = lib.mdDoc '' Selects the platform theme to use for Qt applications. The options are - - `gtk`: Use GTK theme with [qtstyleplugins](https://github.com/qt/qtstyleplugins) - `gnome`: Use GNOME theme with [qgnomeplatform](https://github.com/FedoraQt/QGnomePlatform) + - `gtk2`: Use GTK theme with [qtstyleplugins](https://github.com/qt/qtstyleplugins) + - `kde`: Use Qt settings from Plasma. - `lxqt`: Use LXQt style set using the [lxqt-config-appearance](https://github.com/lxqt/lxqt-config) application. - `qt5ct`: Use Qt style set using the [qt5ct](https://sourceforge.net/projects/qt5ct/) - application. - - `kde`: Use Qt settings from Plasma. + and [qt6ct](https://github.com/trialuser02/qt6ct) applications. ''; }; - style = mkOption { - type = types.enum [ - "adwaita" - "adwaita-dark" - "cleanlooks" - "gtk2" - "motif" - "plastique" - ]; + style = lib.mkOption { + type = with lib.types; nullOr (enum (lib.attrNames stylePackages)); + default = null; example = "adwaita"; relatedPackages = [ "adwaita-qt" "adwaita-qt6" - ["libsForQt5" "qtstyleplugins"] - ["qt6Packages" "qt6gtk2"] + [ "libsForQt5" "breeze-qt5" ] + [ "libsForQt5" "qtstyleplugin-kvantum" ] + [ "libsForQt5" "qtstyleplugins" ] + [ "qt6Packages" "qt6gtk2" ] + [ "qt6Packages" "qtstyleplugin-kvantum" ] ]; description = lib.mdDoc '' Selects the style to use for Qt applications. 
The options are - - `adwaita`, `adwaita-dark`: Use Adwaita Qt style with + - `adwaita`, `adwaita-dark`, `adwaita-highcontrast`, `adawaita-highcontrastinverse`: + Use Adwaita Qt style with [adwaita](https://github.com/FedoraQt/adwaita-qt) - - `cleanlooks`, `gtk2`, `motif`, `plastique`: Use styles from + - `breeze`: Use the Breeze style from + [breeze](https://github.com/KDE/breeze) + - `bb10bright`, `bb10dark`, `cleanlooks`, `gtk2`, `motif`, `plastique`: + Use styles from [qtstyleplugins](https://github.com/qt/qtstyleplugins) + - `kvantum`: Use styles from + [kvantum](https://github.com/tsujan/Kvantum) ''; }; }; }; - config = mkIf cfg.enable { + config = lib.mkIf cfg.enable { + assertions = + let + gnomeStyles = [ + "adwaita" + "adwaita-dark" + "adwaita-highcontrast" + "adwaita-highcontrastinverse" + "breeze" + ]; + in + [ + { + assertion = cfg.platformTheme == "gnome" -> (builtins.elem cfg.style gnomeStyles); + message = '' + `qt.platformTheme` "gnome" must have `qt.style` set to a theme that supports both Qt and Gtk, + for example: ${lib.concatStringsSep ", " gnomeStyles}. + ''; + } + ]; environment.variables = { - QT_QPA_PLATFORMTHEME = cfg.platformTheme; - QT_STYLE_OVERRIDE = mkIf (! (isQt5ct || isLxqt || isKde)) cfg.style; + QT_QPA_PLATFORMTHEME = lib.mkIf (cfg.platformTheme != null) cfg.platformTheme; + QT_STYLE_OVERRIDE = lib.mkIf (cfg.style != null) cfg.style; }; - environment.profileRelativeSessionVariables = let - qtVersions = with pkgs; [ qt5 qt6 ]; - in { - QT_PLUGIN_PATH = map (qt: "/${qt.qtbase.qtPluginPrefix}") qtVersions; - QML2_IMPORT_PATH = map (qt: "/${qt.qtbase.qtQmlPrefix}") qtVersions; - }; - - environment.systemPackages = packages; + environment.profileRelativeSessionVariables = + let + qtVersions = with pkgs; [ qt5 qt6 ]; + in + { + QT_PLUGIN_PATH = map (qt: "/${qt.qtbase.qtPluginPrefix}") qtVersions; + QML2_IMPORT_PATH = map (qt: "/${qt.qtbase.qtQmlPrefix}") qtVersions; + }; + environment.systemPackages = + lib.optionals (cfg.platformTheme != null) (platformPackages.${cfg.platformTheme}) + ++ lib.optionals (cfg.style != null) (stylePackages.${cfg.style}); }; } diff --git a/third_party/nixpkgs/nixos/modules/config/stevenblack.nix b/third_party/nixpkgs/nixos/modules/config/stevenblack.nix index 07a0aa339a..30ef7ff259 100644 --- a/third_party/nixpkgs/nixos/modules/config/stevenblack.nix +++ b/third_party/nixpkgs/nixos/modules/config/stevenblack.nix @@ -15,7 +15,7 @@ let in { options.networking.stevenblack = { - enable = mkEnableOption (mdDoc "Enable the stevenblack hosts file blocklist"); + enable = mkEnableOption (mdDoc "the stevenblack hosts file blocklist"); block = mkOption { type = types.listOf (types.enum [ "fakenews" "gambling" "porn" "social" ]); diff --git a/third_party/nixpkgs/nixos/modules/config/terminfo.nix b/third_party/nixpkgs/nixos/modules/config/terminfo.nix index d1dbc4e0d0..ebd1aaea8f 100644 --- a/third_party/nixpkgs/nixos/modules/config/terminfo.nix +++ b/third_party/nixpkgs/nixos/modules/config/terminfo.nix @@ -16,10 +16,7 @@ with lib; }; security.sudo.keepTerminfo = mkOption { - default = config.security.sudo.package.pname != "sudo-rs"; - defaultText = literalMD '' - `true` unless using `sudo-rs` - ''; + default = true; type = types.bool; description = lib.mdDoc '' Whether to preserve the `TERMINFO` and `TERMINFO_DIRS` diff --git a/third_party/nixpkgs/nixos/modules/config/users-groups.nix b/third_party/nixpkgs/nixos/modules/config/users-groups.nix index 97268a8d83..39aac9fb82 100644 --- a/third_party/nixpkgs/nixos/modules/config/users-groups.nix 
+++ b/third_party/nixpkgs/nixos/modules/config/users-groups.nix @@ -153,7 +153,7 @@ let {file}`pam_mount.conf.xml`. Useful attributes might include `path`, `options`, `fstype`, and `server`. - See + See for more information. ''; }; @@ -606,6 +606,14 @@ in { defaultText = literalExpression "config.users.users.\${name}.group"; default = cfg.users.${name}.group; }; + options.shell = mkOption { + type = types.passwdEntry types.path; + description = '' + The path to the user's shell in initrd. + ''; + default = "${pkgs.shadow}/bin/nologin"; + defaultText = literalExpression "\${pkgs.shadow}/bin/nologin"; + }; })); }; @@ -750,17 +758,20 @@ in { boot.initrd.systemd = lib.mkIf config.boot.initrd.systemd.enable { contents = { "/etc/passwd".text = '' - ${lib.concatStringsSep "\n" (lib.mapAttrsToList (n: { uid, group }: let + ${lib.concatStringsSep "\n" (lib.mapAttrsToList (n: { uid, group, shell }: let g = config.boot.initrd.systemd.groups.${group}; - in "${n}:x:${toString uid}:${toString g.gid}::/var/empty:") config.boot.initrd.systemd.users)} + in "${n}:x:${toString uid}:${toString g.gid}::/var/empty:${shell}") config.boot.initrd.systemd.users)} ''; "/etc/group".text = '' ${lib.concatStringsSep "\n" (lib.mapAttrsToList (n: { gid }: "${n}:x:${toString gid}:") config.boot.initrd.systemd.groups)} ''; + "/etc/shells".text = lib.concatStringsSep "\n" (lib.unique (lib.mapAttrsToList (_: u: u.shell) config.boot.initrd.systemd.users)) + "\n"; }; + storePaths = [ "${pkgs.shadow}/bin/nologin" ]; + users = { - root = {}; + root = { shell = lib.mkDefault "/bin/bash"; }; nobody = {}; }; diff --git a/third_party/nixpkgs/nixos/modules/hardware/all-firmware.nix b/third_party/nixpkgs/nixos/modules/hardware/all-firmware.nix index 08141bb0e8..6f58e848b3 100644 --- a/third_party/nixpkgs/nixos/modules/hardware/all-firmware.nix +++ b/third_party/nixpkgs/nixos/modules/hardware/all-firmware.nix @@ -18,29 +18,16 @@ in { options = { - hardware.enableAllFirmware = mkOption { - default = false; - type = types.bool; - description = lib.mdDoc '' - Turn on this option if you want to enable all the firmware. - ''; - }; + hardware.enableAllFirmware = mkEnableOption "all firmware regardless of license"; - hardware.enableRedistributableFirmware = mkOption { + hardware.enableRedistributableFirmware = mkEnableOption "firmware with a license allowing redistribution" // { default = config.hardware.enableAllFirmware; defaultText = lib.literalExpression "config.hardware.enableAllFirmware"; - type = types.bool; - description = lib.mdDoc '' - Turn on this option if you want to enable all the firmware with a license allowing redistribution. - ''; }; - hardware.wirelessRegulatoryDatabase = mkOption { - default = false; - type = types.bool; - description = lib.mdDoc '' - Load the wireless regulatory database at boot. 
- ''; + hardware.wirelessRegulatoryDatabase = mkEnableOption "loading the wireless regulatory database at boot" // { + default = cfg.enableRedistributableFirmware || cfg.enableAllFirmware; + defaultText = literalMD "Enabled if proprietary firmware is allowed via {option}`enableRedistributableFirmware` or {option}`enableAllFirmware`."; }; }; @@ -65,7 +52,6 @@ in { ++ optionals (versionOlder config.boot.kernelPackages.kernel.version "4.13") [ rtl8723bs-firmware ]; - hardware.wirelessRegulatoryDatabase = true; }) (mkIf cfg.enableAllFirmware { assertions = [{ diff --git a/third_party/nixpkgs/nixos/modules/hardware/corectrl.nix b/third_party/nixpkgs/nixos/modules/hardware/corectrl.nix index 965cbe0267..8ef61a158d 100644 --- a/third_party/nixpkgs/nixos/modules/hardware/corectrl.nix +++ b/third_party/nixpkgs/nixos/modules/hardware/corectrl.nix @@ -8,13 +8,13 @@ in { options.programs.corectrl = { enable = mkEnableOption (lib.mdDoc '' - A tool to overclock amd graphics cards and processors. + CoreCtrl, a tool to overclock amd graphics cards and processors. Add your user to the corectrl group to run corectrl without needing to enter your password ''); gpuOverclock = { enable = mkEnableOption (lib.mdDoc '' - true + GPU overclocking ''); ppfeaturemask = mkOption { type = types.str; diff --git a/third_party/nixpkgs/nixos/modules/hardware/cpu/x86-msr.nix b/third_party/nixpkgs/nixos/modules/hardware/cpu/x86-msr.nix new file mode 100644 index 0000000000..554bec1b7d --- /dev/null +++ b/third_party/nixpkgs/nixos/modules/hardware/cpu/x86-msr.nix @@ -0,0 +1,91 @@ +{ lib +, config +, options +, ... +}: +let + inherit (builtins) hasAttr; + inherit (lib) mkIf mdDoc; + cfg = config.hardware.cpu.x86.msr; + opt = options.hardware.cpu.x86.msr; + defaultGroup = "msr"; + isDefaultGroup = cfg.group == defaultGroup; + set = "to set for devices of the `msr` kernel subsystem."; + + # Generates `foo=bar` parameters to pass to the kernel. + # If `module = baz` is passed, generates `baz.foo=bar`. + # Adds double quotes on demand to handle `foo="bar baz"`. + kernelParam = { module ? 
null }: name: value: + assert lib.asserts.assertMsg (!lib.strings.hasInfix "=" name) "kernel parameter cannot have '=' in name"; + let + key = (if module == null then "" else module + ".") + name; + valueString = lib.generators.mkValueStringDefault {} value; + quotedValueString = if lib.strings.hasInfix " " valueString + then lib.strings.escape ["\""] valueString + else valueString; + in "${key}=${quotedValueString}"; + msrKernelParam = kernelParam { module = "msr"; }; +in +{ + options.hardware.cpu.x86.msr = with lib.options; with lib.types; { + enable = mkEnableOption (mdDoc "the `msr` (Model-Specific Registers) kernel module and configure `udev` rules for its devices (usually `/dev/cpu/*/msr`)"); + owner = mkOption { + type = str; + default = "root"; + example = "nobody"; + description = mdDoc "Owner ${set}"; + }; + group = mkOption { + type = str; + default = defaultGroup; + example = "nobody"; + description = mdDoc "Group ${set}"; + }; + mode = mkOption { + type = str; + default = "0640"; + example = "0660"; + description = mdDoc "Mode ${set}"; + }; + settings = mkOption { + type = submodule { + freeformType = attrsOf (oneOf [ bool int str ]); + options.allow-writes = mkOption { + type = nullOr (enum ["on" "off"]); + default = null; + description = "Whether to allow writes to MSRs (`\"on\"`) or not (`\"off\"`)."; + }; + }; + default = {}; + description = "Parameters for the `msr` kernel module."; + }; + }; + + config = mkIf cfg.enable { + assertions = [ + { + assertion = hasAttr cfg.owner config.users.users; + message = "Owner '${cfg.owner}' set in `${opt.owner}` is not configured via `${options.users.users}.\"${cfg.owner}\"`."; + } + { + assertion = isDefaultGroup || (hasAttr cfg.group config.users.groups); + message = "Group '${cfg.group}' set in `${opt.group}` is not configured via `${options.users.groups}.\"${cfg.group}\"`."; + } + ]; + + boot = { + kernelModules = [ "msr" ]; + kernelParams = lib.attrsets.mapAttrsToList msrKernelParam (lib.attrsets.filterAttrs (_: value: value != null) cfg.settings); + }; + + users.groups.${cfg.group} = mkIf isDefaultGroup { }; + + services.udev.extraRules = '' + SUBSYSTEM=="msr", OWNER="${cfg.owner}", GROUP="${cfg.group}", MODE="${cfg.mode}" + ''; + }; + + meta = with lib; { + maintainers = with maintainers; [ lorenzleutgeb ]; + }; +} diff --git a/third_party/nixpkgs/nixos/modules/hardware/i2c.nix b/third_party/nixpkgs/nixos/modules/hardware/i2c.nix index 9a5a2e4481..bd4c4ebe21 100644 --- a/third_party/nixpkgs/nixos/modules/hardware/i2c.nix +++ b/third_party/nixpkgs/nixos/modules/hardware/i2c.nix @@ -11,7 +11,7 @@ in enable = mkEnableOption (lib.mdDoc '' i2c devices support. By default access is granted to users in the "i2c" group (will be created if non-existent) and any user with a seat, meaning - logged on the computer locally. + logged on the computer locally ''); group = mkOption { diff --git a/third_party/nixpkgs/nixos/modules/hardware/keyboard/uhk.nix b/third_party/nixpkgs/nixos/modules/hardware/keyboard/uhk.nix index 17baff83d8..ff984fa5da 100644 --- a/third_party/nixpkgs/nixos/modules/hardware/keyboard/uhk.nix +++ b/third_party/nixpkgs/nixos/modules/hardware/keyboard/uhk.nix @@ -11,7 +11,7 @@ in non-root access to the firmware of UHK keyboards. You need it when you want to flash a new firmware on the keyboard. Access to the keyboard is granted to users in the "input" group. - You may want to install the uhk-agent package. 
+ You may want to install the uhk-agent package ''); }; diff --git a/third_party/nixpkgs/nixos/modules/hardware/keyboard/zsa.nix b/third_party/nixpkgs/nixos/modules/hardware/keyboard/zsa.nix index a04b67b5c8..191fb12cca 100644 --- a/third_party/nixpkgs/nixos/modules/hardware/keyboard/zsa.nix +++ b/third_party/nixpkgs/nixos/modules/hardware/keyboard/zsa.nix @@ -11,7 +11,7 @@ in udev rules for keyboards from ZSA like the ErgoDox EZ, Planck EZ and Moonlander Mark I. You need it when you want to flash a new configuration on the keyboard or use their live training in the browser. - You may want to install the wally-cli package. + You may want to install the wally-cli package ''); }; diff --git a/third_party/nixpkgs/nixos/modules/hardware/openrazer.nix b/third_party/nixpkgs/nixos/modules/hardware/openrazer.nix index aaa4000e75..abbafaee89 100644 --- a/third_party/nixpkgs/nixos/modules/hardware/openrazer.nix +++ b/third_party/nixpkgs/nixos/modules/hardware/openrazer.nix @@ -50,7 +50,7 @@ in options = { hardware.openrazer = { enable = mkEnableOption (lib.mdDoc '' - OpenRazer drivers and userspace daemon. + OpenRazer drivers and userspace daemon ''); verboseLogging = mkOption { diff --git a/third_party/nixpkgs/nixos/modules/hardware/tuxedo-keyboard.nix b/third_party/nixpkgs/nixos/modules/hardware/tuxedo-keyboard.nix index 3ae876bd1f..fd8b48a5e9 100644 --- a/third_party/nixpkgs/nixos/modules/hardware/tuxedo-keyboard.nix +++ b/third_party/nixpkgs/nixos/modules/hardware/tuxedo-keyboard.nix @@ -9,7 +9,7 @@ in { options.hardware.tuxedo-keyboard = { enable = mkEnableOption (lib.mdDoc '' - Enables the tuxedo-keyboard driver. + the tuxedo-keyboard driver. To configure the driver, pass the options to the {option}`boot.kernelParams` configuration. There are several parameters you can change. It's best to check at the source code description which options are supported. diff --git a/third_party/nixpkgs/nixos/modules/hardware/video/nvidia.nix b/third_party/nixpkgs/nixos/modules/hardware/video/nvidia.nix index a40713ac25..c36775dd24 100644 --- a/third_party/nixpkgs/nixos/modules/hardware/video/nvidia.nix +++ b/third_party/nixpkgs/nixos/modules/hardware/video/nvidia.nix @@ -24,7 +24,7 @@ in { options = { hardware.nvidia = { datacenter.enable = lib.mkEnableOption (lib.mdDoc '' - Data Center drivers for NVIDIA cards on a NVLink topology. + Data Center drivers for NVIDIA cards on a NVLink topology ''); datacenter.settings = lib.mkOption { type = settingsFormat.type; @@ -79,18 +79,18 @@ in { powerManagement.enable = lib.mkEnableOption (lib.mdDoc '' experimental power management through systemd. For more information, see - the NVIDIA docs, on Chapter 21. Configuring Power Management Support. + the NVIDIA docs, on Chapter 21. Configuring Power Management Support ''); powerManagement.finegrained = lib.mkEnableOption (lib.mdDoc '' experimental power management of PRIME offload. For more information, see - the NVIDIA docs, on Chapter 22. PCI-Express Runtime D3 (RTD3) Power Management. + the NVIDIA docs, on Chapter 22. PCI-Express Runtime D3 (RTD3) Power Management ''); dynamicBoost.enable = lib.mkEnableOption (lib.mdDoc '' dynamic Boost balances power between the CPU and the GPU for improved performance on supported laptops using the nvidia-powerd daemon. For more - information, see the NVIDIA docs, on Chapter 23. Dynamic Boost on Linux. + information, see the NVIDIA docs, on Chapter 23. 
Dynamic Boost on Linux ''); modesetting.enable = lib.mkEnableOption (lib.mdDoc '' @@ -99,7 +99,7 @@ in { Enabling this fixes screen tearing when using Optimus via PRIME (see {option}`hardware.nvidia.prime.sync.enable`. This is not enabled by default because it is not officially supported by NVIDIA and would not - work with SLI. + work with SLI ''); prime.nvidiaBusId = lib.mkOption { @@ -153,11 +153,11 @@ in { Note that this configuration will only be successful when a display manager for which the {option}`services.xserver.displayManager.setupCommands` - option is supported is used. + option is supported is used ''); prime.allowExternalGpu = lib.mkEnableOption (lib.mdDoc '' - configuring X to allow external NVIDIA GPUs when using Prime [Reverse] sync optimus. + configuring X to allow external NVIDIA GPUs when using Prime [Reverse] sync optimus ''); prime.offload.enable = lib.mkEnableOption (lib.mdDoc '' @@ -166,7 +166,7 @@ in { If this is enabled, then the bus IDs of the NVIDIA and Intel/AMD GPUs have to be specified ({option}`hardware.nvidia.prime.nvidiaBusId` and {option}`hardware.nvidia.prime.intelBusId` or - {option}`hardware.nvidia.prime.amdgpuBusId`). + {option}`hardware.nvidia.prime.amdgpuBusId`) ''); prime.offload.enableOffloadCmd = lib.mkEnableOption (lib.mdDoc '' @@ -174,7 +174,7 @@ in { for offloading programs to an nvidia device. To work, should have also enabled {option}`hardware.nvidia.prime.offload.enable` or {option}`hardware.nvidia.prime.reverseSync.enable`. - Example usage `nvidia-offload sauerbraten_client`. + Example usage `nvidia-offload sauerbraten_client` ''); prime.reverseSync.enable = lib.mkEnableOption (lib.mdDoc '' @@ -202,25 +202,25 @@ in { Note that this configuration will only be successful when a display manager for which the {option}`services.xserver.displayManager.setupCommands` - option is supported is used. + option is supported is used ''); nvidiaSettings = (lib.mkEnableOption (lib.mdDoc '' - nvidia-settings, NVIDIA's GUI configuration tool. + nvidia-settings, NVIDIA's GUI configuration tool '')) // {default = true;}; nvidiaPersistenced = lib.mkEnableOption (lib.mdDoc '' nvidia-persistenced a update for NVIDIA GPU headless mode, i.e. - It ensures all GPUs stay awake even during headless mode. + It ensures all GPUs stay awake even during headless mode ''); forceFullCompositionPipeline = lib.mkEnableOption (lib.mdDoc '' forcefully the full composition pipeline. This sometimes fixes screen tearing issues. This has been reported to reduce the performance of some OpenGL applications and may produce issues in WebGL. - It also drastically increases the time the driver needs to clock down after load. + It also drastically increases the time the driver needs to clock down after load ''); package = lib.mkOption { @@ -269,9 +269,9 @@ in { services.udev.extraRules = '' # Create /dev/nvidia-uvm when the nvidia-uvm module is loaded. 
- KERNEL=="nvidia", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidiactl c $$(grep nvidia-frontend /proc/devices | cut -d \ -f 1) 255'" - KERNEL=="nvidia", RUN+="${pkgs.runtimeShell} -c 'for i in $$(cat /proc/driver/nvidia/gpus/*/information | grep Minor | cut -d \ -f 4); do mknod -m 666 /dev/nvidia$${i} c $$(grep nvidia-frontend /proc/devices | cut -d \ -f 1) $${i}; done'" - KERNEL=="nvidia_modeset", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidia-modeset c $$(grep nvidia-frontend /proc/devices | cut -d \ -f 1) 254'" + KERNEL=="nvidia", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidiactl c 195 255'" + KERNEL=="nvidia", RUN+="${pkgs.runtimeShell} -c 'for i in $$(cat /proc/driver/nvidia/gpus/*/information | grep Minor | cut -d \ -f 4); do mknod -m 666 /dev/nvidia$${i} c 195 $${i}; done'" + KERNEL=="nvidia_modeset", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidia-modeset c 195 254'" KERNEL=="nvidia_uvm", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidia-uvm c $$(grep nvidia-uvm /proc/devices | cut -d \ -f 1) 0'" KERNEL=="nvidia_uvm", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidia-uvm-tools c $$(grep nvidia-uvm /proc/devices | cut -d \ -f 1) 1'" ''; diff --git a/third_party/nixpkgs/nixos/modules/hardware/video/webcam/facetimehd.nix b/third_party/nixpkgs/nixos/modules/hardware/video/webcam/facetimehd.nix index 480c636aa0..a0ec9c98a5 100644 --- a/third_party/nixpkgs/nixos/modules/hardware/video/webcam/facetimehd.nix +++ b/third_party/nixpkgs/nixos/modules/hardware/video/webcam/facetimehd.nix @@ -12,7 +12,7 @@ in { - options.hardware.facetimehd.enable = mkEnableOption (lib.mdDoc "facetimehd kernel module"); + options.hardware.facetimehd.enable = mkEnableOption (lib.mdDoc "the facetimehd kernel module"); options.hardware.facetimehd.withCalibration = mkOption { default = false; diff --git a/third_party/nixpkgs/nixos/modules/image/repart.nix b/third_party/nixpkgs/nixos/modules/image/repart.nix index e567485c9d..41e6110885 100644 --- a/third_party/nixpkgs/nixos/modules/image/repart.nix +++ b/third_party/nixpkgs/nixos/modules/image/repart.nix @@ -34,12 +34,13 @@ let }; }); default = { }; - example = lib.literalExpression '' { - "/EFI/BOOT/BOOTX64.EFI".source = - "''${pkgs.systemd}/lib/systemd/boot/efi/systemd-bootx64.efi"; + example = lib.literalExpression '' + { + "/EFI/BOOT/BOOTX64.EFI".source = + "''${pkgs.systemd}/lib/systemd/boot/efi/systemd-bootx64.efi"; - "/loader/entries/nixos.conf".source = systemdBootEntry; - } + "/loader/entries/nixos.conf".source = systemdBootEntry; + } ''; description = lib.mdDoc "The contents to end up in the filesystem image."; }; @@ -90,34 +91,33 @@ in package = lib.mkPackageOption pkgs "systemd-repart" { default = "systemd"; - example = lib.literalExpression '' - pkgs.systemdMinimal.override { withCryptsetup = true; } - ''; + example = "pkgs.systemdMinimal.override { withCryptsetup = true; }"; }; partitions = lib.mkOption { type = with lib.types; attrsOf (submodule partitionOptions); default = { }; - example = lib.literalExpression '' { - "10-esp" = { - contents = { - "/EFI/BOOT/BOOTX64.EFI".source = - "''${pkgs.systemd}/lib/systemd/boot/efi/systemd-bootx64.efi"; - } - repartConfig = { - Type = "esp"; - Format = "fat"; + example = lib.literalExpression '' + { + "10-esp" = { + contents = { + "/EFI/BOOT/BOOTX64.EFI".source = + "''${pkgs.systemd}/lib/systemd/boot/efi/systemd-bootx64.efi"; + } + repartConfig = { + Type = "esp"; + Format = "fat"; + }; + }; + "20-root" = { + storePaths = [ config.system.build.toplevel ]; + 
repartConfig = { + Type = "root"; + Format = "ext4"; + Minimize = "guess"; + }; }; }; - "20-root" = { - storePaths = [ config.system.build.toplevel ]; - repartConfig = { - Type = "root"; - Format = "ext4"; - Minimize = "guess"; - }; - }; - }; ''; description = lib.mdDoc '' Specify partitions as a set of the names of the partitions with their @@ -208,10 +208,7 @@ in | tee repart-output.json ''; - meta = { - maintainers = with lib.maintainers; [ nikstur ]; - doc = ./repart.md; - }; + meta.maintainers = with lib.maintainers; [ nikstur ]; }; } diff --git a/third_party/nixpkgs/nixos/modules/installer/cd-dvd/channel.nix b/third_party/nixpkgs/nixos/modules/installer/cd-dvd/channel.nix index 8426ba8fac..bc70dc985f 100644 --- a/third_party/nixpkgs/nixos/modules/installer/cd-dvd/channel.nix +++ b/third_party/nixpkgs/nixos/modules/installer/cd-dvd/channel.nix @@ -3,8 +3,6 @@ { config, lib, pkgs, ... }: -with lib; - let # This is copied into the installer image, so it's important that it is filtered # to avoid including a large .git directory. @@ -27,38 +25,40 @@ let if [ ! -e $out/nixos/nixpkgs ]; then ln -s . $out/nixos/nixpkgs fi - ${optionalString (config.system.nixos.revision != null) '' + ${lib.optionalString (config.system.nixos.revision != null) '' echo -n ${config.system.nixos.revision} > $out/nixos/.git-revision ''} echo -n ${config.system.nixos.versionSuffix} > $out/nixos/.version-suffix echo ${config.system.nixos.versionSuffix} | sed -e s/pre// > $out/nixos/svn-revision ''; - in { - # Pin the nixpkgs flake in the installer to our cleaned up nixpkgs source. - # FIXME: this might be surprising and is really only needed for offline installations, - # see discussion in https://github.com/NixOS/nixpkgs/pull/204178#issuecomment-1336289021 - nix.registry.nixpkgs.to = { - type = "path"; - path = "${channelSources}/nixos"; - }; + options.system.installer.channel.enable = (lib.mkEnableOption "bundling NixOS/Nixpkgs channel in the installer") // { default = true; }; + config = lib.mkIf config.system.installer.channel.enable { + # Pin the nixpkgs flake in the installer to our cleaned up nixpkgs source. + # FIXME: this might be surprising and is really only needed for offline installations, + # see discussion in https://github.com/NixOS/nixpkgs/pull/204178#issuecomment-1336289021 + nix.registry.nixpkgs.to = { + type = "path"; + path = "${channelSources}/nixos"; + }; - # Provide the NixOS/Nixpkgs sources in /etc/nixos. This is required - # for nixos-install. - boot.postBootCommands = mkAfter - '' - if ! [ -e /var/lib/nixos/did-channel-init ]; then - echo "unpacking the NixOS/Nixpkgs sources..." - mkdir -p /nix/var/nix/profiles/per-user/root - ${config.nix.package.out}/bin/nix-env -p /nix/var/nix/profiles/per-user/root/channels \ - -i ${channelSources} --quiet --option build-use-substitutes false \ - ${optionalString config.boot.initrd.systemd.enable "--option sandbox false"} # There's an issue with pivot_root - mkdir -m 0700 -p /root/.nix-defexpr - ln -s /nix/var/nix/profiles/per-user/root/channels /root/.nix-defexpr/channels - mkdir -m 0755 -p /var/lib/nixos - touch /var/lib/nixos/did-channel-init - fi - ''; + # Provide the NixOS/Nixpkgs sources in /etc/nixos. This is required + # for nixos-install. + boot.postBootCommands = lib.mkAfter + '' + if ! [ -e /var/lib/nixos/did-channel-init ]; then + echo "unpacking the NixOS/Nixpkgs sources..." 
+ mkdir -p /nix/var/nix/profiles/per-user/root + ${config.nix.package.out}/bin/nix-env -p /nix/var/nix/profiles/per-user/root/channels \ + -i ${channelSources} --quiet --option build-use-substitutes false \ + ${lib.optionalString config.boot.initrd.systemd.enable "--option sandbox false"} # There's an issue with pivot_root + mkdir -m 0700 -p /root/.nix-defexpr + ln -s /nix/var/nix/profiles/per-user/root/channels /root/.nix-defexpr/channels + mkdir -m 0755 -p /var/lib/nixos + touch /var/lib/nixos/did-channel-init + fi + ''; + }; } diff --git a/third_party/nixpkgs/nixos/modules/installer/tools/nix-fallback-paths.nix b/third_party/nixpkgs/nixos/modules/installer/tools/nix-fallback-paths.nix index 10c37a46fd..e4241e9654 100644 --- a/third_party/nixpkgs/nixos/modules/installer/tools/nix-fallback-paths.nix +++ b/third_party/nixpkgs/nixos/modules/installer/tools/nix-fallback-paths.nix @@ -1,7 +1,7 @@ { - x86_64-linux = "/nix/store/3wqasl97rjiza3vd7fxjnvli2w9l30mk-nix-2.17.0"; - i686-linux = "/nix/store/z360xswxfx55pmm1fng3hw748rbs0kkj-nix-2.17.0"; - aarch64-linux = "/nix/store/9670sxa916xmv8n1kqs7cdvmnsrhrdjv-nix-2.17.0"; - x86_64-darwin = "/nix/store/2rdbky9j8hc3mbgl6pnda4hkjllyfwnn-nix-2.17.0"; - aarch64-darwin = "/nix/store/jl9qma14fb4zk9lq1k0syw2k9qm2gqjw-nix-2.17.0"; + x86_64-linux = "/nix/store/azvn85cras6xv4z5j85fiy406f24r1q0-nix-2.18.1"; + i686-linux = "/nix/store/9bnwy7f9h0kzdzmcnjjsjg0aak5waj40-nix-2.18.1"; + aarch64-linux = "/nix/store/hh65xwqm9s040s3cgn9vzcmrxj0sf5ij-nix-2.18.1"; + x86_64-darwin = "/nix/store/6zi5fqzn9n17wrk8r41rhdw4j7jqqsi3-nix-2.18.1"; + aarch64-darwin = "/nix/store/0pbq6wzr2f1jgpn5212knyxpwmkjgjah-nix-2.18.1"; } diff --git a/third_party/nixpkgs/nixos/modules/installer/tools/nixos-generate-config.pl b/third_party/nixpkgs/nixos/modules/installer/tools/nixos-generate-config.pl index 7d0c5898e2..2f9edba4f0 100644 --- a/third_party/nixpkgs/nixos/modules/installer/tools/nixos-generate-config.pl +++ b/third_party/nixpkgs/nixos/modules/installer/tools/nixos-generate-config.pl @@ -102,22 +102,6 @@ sub cpuManufacturer { return $cpuinfo =~ /^vendor_id\s*:.* $id$/m; } - -# Determine CPU governor to use -if (-e "/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors") { - my $governors = read_file("/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors"); - # ondemand governor is not available on sandy bridge or later Intel CPUs - my @desired_governors = ("ondemand", "powersave"); - my $e; - - foreach $e (@desired_governors) { - if (index($governors, $e) != -1) { - last if (push @attrs, "powerManagement.cpuFreqGovernor = lib.mkDefault \"$e\";"); - } - } -} - - # Virtualization support? push @kernelModules, "kvm-intel" if hasCPUFeature "vmx"; push @kernelModules, "kvm-amd" if hasCPUFeature "svm"; @@ -146,7 +130,7 @@ sub pciCheck { debug "\n"; if (defined $module) { - # See the bottom of http://pciids.sourceforge.net/pci.ids for + # See the bottom of https://pciids.sourceforge.net/pci.ids for # device classes. if (# Mass-storage controller. Definitely important. $class =~ /^0x01/ || @@ -273,6 +257,7 @@ foreach my $path (glob "/sys/class/{block,mmc_host}/*") { # Add bcache module, if needed. my @bcacheDevices = glob("/dev/bcache*"); +@bcacheDevices = grep(!qr#dev/bcachefs.*#, @bcacheDevices); if (scalar @bcacheDevices > 0) { push @initrdAvailableKernelModules, "bcache"; } @@ -483,6 +468,19 @@ EOF # boot.tmp.useTmpfs option in configuration.nix (managed declaratively). 
next if ($mountPoint eq "/tmp" && $fsType eq "tmpfs"); + # This should work for single and multi-device systems. + # still needs subvolume support + if ($fsType eq "bcachefs") { + my ($status, @info) = runCommand("bcachefs fs usage $rootDir$mountPoint"); + my $UUID = $info[0]; + + if ($status == 0 && $UUID =~ /^Filesystem:[ \t\n]*([0-9a-z-]+)/) { + $stableDevPath = "UUID=$1"; + } else { + print STDERR "warning: can't find bcachefs mount UUID falling back to device-path"; + } + } + # Emit the filesystem. $fileSystems .= </dev/null + find $package/share/man -type f | xargs ${pkgs.python3.pythonOnBuildForHost.interpreter} ${patchedGenerator}/create_manpage_completions.py --directory $out >/dev/null fi ''; in diff --git a/third_party/nixpkgs/nixos/modules/programs/kdeconnect.nix b/third_party/nixpkgs/nixos/modules/programs/kdeconnect.nix index 4978c428ce..4ba156f2db 100644 --- a/third_party/nixpkgs/nixos/modules/programs/kdeconnect.nix +++ b/third_party/nixpkgs/nixos/modules/programs/kdeconnect.nix @@ -9,7 +9,7 @@ with lib; 1714 to 1764 as they are needed for it to function properly. You can use the {option}`package` to use `gnomeExtensions.gsconnect` as an alternative - implementation if you use Gnome. + implementation if you use Gnome ''); package = mkOption { default = pkgs.plasma5Packages.kdeconnect-kde; diff --git a/third_party/nixpkgs/nixos/modules/programs/npm.nix b/third_party/nixpkgs/nixos/modules/programs/npm.nix index 48dc48e668..c41fea3261 100644 --- a/third_party/nixpkgs/nixos/modules/programs/npm.nix +++ b/third_party/nixpkgs/nixos/modules/programs/npm.nix @@ -34,7 +34,7 @@ in prefix = ''${HOME}/.npm https-proxy=proxy.example.com init-license=MIT - init-author-url=http://npmjs.org + init-author-url=https://www.npmjs.com/ color=true ''; }; diff --git a/third_party/nixpkgs/nixos/modules/programs/wayland/cardboard.nix b/third_party/nixpkgs/nixos/modules/programs/wayland/cardboard.nix new file mode 100644 index 0000000000..262c698c74 --- /dev/null +++ b/third_party/nixpkgs/nixos/modules/programs/wayland/cardboard.nix @@ -0,0 +1,24 @@ +{ config, lib, pkgs, ... }: + +let + cfg = config.programs.cardboard; +in +{ + meta.maintainers = with lib.maintainers; [ AndersonTorres ]; + + options.programs.cardboard = { + enable = lib.mkEnableOption (lib.mdDoc "cardboard"); + + package = lib.mkPackageOptionMD pkgs "cardboard" { }; + }; + + config = lib.mkIf cfg.enable (lib.mkMerge [ + { + environment.systemPackages = [ cfg.package ]; + + # To make a cardboard session available for certain DMs like SDDM + services.xserver.displayManager.sessionPackages = [ cfg.package ]; + } + (import ./wayland-session.nix { inherit lib pkgs; }) + ]); +} diff --git a/third_party/nixpkgs/nixos/modules/programs/wayland/sway.nix b/third_party/nixpkgs/nixos/modules/programs/wayland/sway.nix index de739faabe..698d9c2b46 100644 --- a/third_party/nixpkgs/nixos/modules/programs/wayland/sway.nix +++ b/third_party/nixpkgs/nixos/modules/programs/wayland/sway.nix @@ -42,11 +42,6 @@ in { and "man 5 sway" for more information''); - enableRealtime = mkEnableOption (lib.mdDoc '' - add CAP_SYS_NICE capability on `sway` binary for realtime scheduling - privileges. 
This may improve latency and reduce stuttering, specially in - high load scenarios'') // { default = true; }; - package = mkOption { type = with types; nullOr package; default = defaultSwayPackage; @@ -154,14 +149,6 @@ in { "sway/config".source = mkOptionDefault "${cfg.package}/etc/sway/config"; }; }; - security.wrappers = mkIf (cfg.enableRealtime && cfg.package != null) { - sway = { - owner = "root"; - group = "root"; - source = "${cfg.package}/bin/sway"; - capabilities = "cap_sys_nice+ep"; - }; - }; # To make a Sway session available if a display manager like SDDM is enabled: services.xserver.displayManager.sessionPackages = optionals (cfg.package != null) [ cfg.package ]; } (import ./wayland-session.nix { inherit lib pkgs; }) diff --git a/third_party/nixpkgs/nixos/modules/programs/wayland/wayfire.nix b/third_party/nixpkgs/nixos/modules/programs/wayland/wayfire.nix index d0b280e394..9ea2010cf5 100644 --- a/third_party/nixpkgs/nixos/modules/programs/wayland/wayfire.nix +++ b/third_party/nixpkgs/nixos/modules/programs/wayland/wayfire.nix @@ -6,7 +6,7 @@ in meta.maintainers = with lib.maintainers; [ rewine ]; options.programs.wayfire = { - enable = lib.mkEnableOption (lib.mdDoc "Wayfire, a wayland compositor based on wlroots."); + enable = lib.mkEnableOption (lib.mdDoc "Wayfire, a wayland compositor based on wlroots"); package = lib.mkPackageOptionMD pkgs "wayfire" { }; diff --git a/third_party/nixpkgs/nixos/modules/programs/zsh/oh-my-zsh.md b/third_party/nixpkgs/nixos/modules/programs/zsh/oh-my-zsh.md index 73d425244c..6a310006ed 100644 --- a/third_party/nixpkgs/nixos/modules/programs/zsh/oh-my-zsh.md +++ b/third_party/nixpkgs/nixos/modules/programs/zsh/oh-my-zsh.md @@ -78,7 +78,7 @@ If third-party customizations (e.g. new themes) are supposed to be added to - Completion scripts are supposed to be stored at `$out/share/zsh/site-functions`. This directory is part of the - [`fpath`](http://zsh.sourceforge.net/Doc/Release/Functions.html) + [`fpath`](https://zsh.sourceforge.io/Doc/Release/Functions.html) and the package should be compatible with pure `ZSH` setups. The module will automatically link the contents of `site-functions` to completions directory in the proper diff --git a/third_party/nixpkgs/nixos/modules/rename.nix b/third_party/nixpkgs/nixos/modules/rename.nix index 408c515044..3fab863adb 100644 --- a/third_party/nixpkgs/nixos/modules/rename.nix +++ b/third_party/nixpkgs/nixos/modules/rename.nix @@ -54,7 +54,6 @@ in (mkRemovedOptionModule [ "services" "chronos" ] "The corresponding package was removed from nixpkgs.") (mkRemovedOptionModule [ "services" "couchpotato" ] "The corresponding package was removed from nixpkgs.") (mkRemovedOptionModule [ "services" "dd-agent" ] "dd-agent was removed from nixpkgs in favor of the newer datadog-agent.") - (mkRemovedOptionModule [ "services" "ddclient" ] "ddclient has been removed on the request of the upstream maintainer because it is unmaintained and has bugs. 
Please switch to a different software like `inadyn` or `knsupdate`.") # Added 2023-07-04 (mkRemovedOptionModule [ "services" "dnscrypt-proxy" ] "Use services.dnscrypt-proxy2 instead") (mkRemovedOptionModule [ "services" "exhibitor" ] "The corresponding package was removed from nixpkgs.") (mkRemovedOptionModule [ "services" "firefox" "syncserver" ] "The corresponding package was removed from nixpkgs.") @@ -112,6 +111,7 @@ in (mkRemovedOptionModule [ "services" "riak" ] "The corresponding package was removed from nixpkgs.") (mkRemovedOptionModule [ "services" "cryptpad" ] "The corresponding package was removed from nixpkgs.") (mkRemovedOptionModule [ "services" "rtsp-simple-server" ] "Package has been completely rebranded by upstream as mediamtx, and thus the service and the package were renamed in NixOS as well.") + (mkRemovedOptionModule [ "services" "prayer" ] "The corresponding package was removed from nixpkgs.") (mkRemovedOptionModule [ "i18n" "inputMethod" "fcitx" ] "The fcitx module has been removed. Please use fcitx5 instead") (mkRemovedOptionModule [ "services" "dhcpd4" ] '' diff --git a/third_party/nixpkgs/nixos/modules/security/acme/default.nix b/third_party/nixpkgs/nixos/modules/security/acme/default.nix index 92bed172f4..7cc302969f 100644 --- a/third_party/nixpkgs/nixos/modules/security/acme/default.nix +++ b/third_party/nixpkgs/nixos/modules/security/acme/default.nix @@ -184,6 +184,7 @@ let certToConfig = cert: data: let acmeServer = data.server; useDns = data.dnsProvider != null; + useDnsOrS3 = useDns || data.s3Bucket != null; destPath = "/var/lib/acme/${cert}"; selfsignedDeps = optionals (cfg.preliminarySelfsigned) [ "acme-selfsigned-${cert}.service" ]; @@ -219,7 +220,8 @@ let [ "--dns" data.dnsProvider ] ++ optionals (!data.dnsPropagationCheck) [ "--dns.disable-cp" ] ++ optionals (data.dnsResolver != null) [ "--dns.resolvers" data.dnsResolver ] - ) else if data.listenHTTP != null then [ "--http" "--http.port" data.listenHTTP ] + ) else if data.s3Bucket != null then [ "--http" "--http.s3-bucket" data.s3Bucket ] + else if data.listenHTTP != null then [ "--http" "--http.port" data.listenHTTP ] else [ "--http" "--http.webroot" data.webroot ]; commonOpts = [ @@ -343,6 +345,10 @@ let serviceConfig = commonServiceConfig // { Group = data.group; + # Let's Encrypt Failed Validation Limit allows 5 retries per hour, per account, hostname and hour. + # This avoids eating them all up if something is misconfigured upon the first try. + RestartSec = 15 * 60; + # Keep in mind that these directories will be deleted if the user runs # systemctl clean --what=state # acme/.lego/${cert} is listed for this reason. @@ -362,13 +368,12 @@ let "/var/lib/acme/.lego/${cert}/${certDir}:/tmp/certificates" ]; - # Only try loading the environmentFile if the dns challenge is enabled - EnvironmentFile = mkIf useDns data.environmentFile; + EnvironmentFile = mkIf useDnsOrS3 data.environmentFile; - Environment = mkIf useDns + Environment = mkIf useDnsOrS3 (mapAttrsToList (k: v: ''"${k}=%d/${k}"'') data.credentialFiles); - LoadCredential = mkIf useDns + LoadCredential = mkIf useDnsOrS3 (mapAttrsToList (k: v: "${k}:${v}") data.credentialFiles); # Run as root (Prefixed with +) @@ -592,7 +597,7 @@ let description = lib.mdDoc '' Key type to use for private keys. For an up to date list of supported values check the --key-type option - at . + at . 
''; }; @@ -755,6 +760,15 @@ let ''; }; + s3Bucket = mkOption { + type = types.nullOr types.str; + default = null; + example = "acme"; + description = lib.mdDoc '' + S3 bucket name to use for HTTP-01 based challenges. Challenges will be written to the S3 bucket. + ''; + }; + inheritDefaults = mkOption { default = true; example = true; @@ -928,35 +942,20 @@ in { and remove the wildcard from the path. ''; } - { - assertion = data.dnsProvider == null || data.webroot == null; + (let exclusiveAttrs = { + inherit (data) dnsProvider webroot listenHTTP s3Bucket; + }; in { + assertion = lib.length (lib.filter (x: x != null) (builtins.attrValues exclusiveAttrs)) == 1; message = '' - Options `security.acme.certs.${cert}.dnsProvider` and - `security.acme.certs.${cert}.webroot` are mutually exclusive. + Exactly one of the options + `security.acme.certs.${cert}.dnsProvider`, + `security.acme.certs.${cert}.webroot`, + `security.acme.certs.${cert}.listenHTTP` and + `security.acme.certs.${cert}.s3Bucket` + is required. + Current values: ${(lib.generators.toPretty {} exclusiveAttrs)}. ''; - } - { - assertion = data.webroot == null || data.listenHTTP == null; - message = '' - Options `security.acme.certs.${cert}.webroot` and - `security.acme.certs.${cert}.listenHTTP` are mutually exclusive. - ''; - } - { - assertion = data.listenHTTP == null || data.dnsProvider == null; - message = '' - Options `security.acme.certs.${cert}.listenHTTP` and - `security.acme.certs.${cert}.dnsProvider` are mutually exclusive. - ''; - } - { - assertion = data.dnsProvider != null || data.webroot != null || data.listenHTTP != null; - message = '' - One of `security.acme.certs.${cert}.dnsProvider`, - `security.acme.certs.${cert}.webroot`, or - `security.acme.certs.${cert}.listenHTTP` must be provided. - ''; - } + }) { assertion = all (hasSuffix "_FILE") (attrNames data.credentialFiles); message = '' diff --git a/third_party/nixpkgs/nixos/modules/security/apparmor/profiles.nix b/third_party/nixpkgs/nixos/modules/security/apparmor/profiles.nix index 8eb630b5a4..0bf90a0086 100644 --- a/third_party/nixpkgs/nixos/modules/security/apparmor/profiles.nix +++ b/third_party/nixpkgs/nixos/modules/security/apparmor/profiles.nix @@ -2,10 +2,4 @@ let apparmor = config.security.apparmor; in { config.security.apparmor.packages = [ pkgs.apparmor-profiles ]; -config.security.apparmor.policies."bin.ping".profile = lib.mkIf apparmor.policies."bin.ping".enable '' - include "${pkgs.iputils.apparmor}/bin.ping" - include "${pkgs.inetutils.apparmor}/bin.ping" - # Note that including those two profiles in the same profile - # would not work if the second one were to re-include . 
-''; } diff --git a/third_party/nixpkgs/nixos/modules/security/duosec.nix b/third_party/nixpkgs/nixos/modules/security/duosec.nix index 02b11766b3..2a855a77e3 100644 --- a/third_party/nixpkgs/nixos/modules/security/duosec.nix +++ b/third_party/nixpkgs/nixos/modules/security/duosec.nix @@ -193,8 +193,11 @@ in source = "${pkgs.duo-unix.out}/bin/login_duo"; }; - system.activationScripts = { - login_duo = mkIf cfg.ssh.enable '' + systemd.services.login-duo = lib.mkIf cfg.ssh.enable { + wantedBy = [ "sysinit.target" ]; + before = [ "sysinit.target" ]; + unitConfig.DefaultDependencies = false; + script = '' if test -f "${cfg.secretKeyFile}"; then mkdir -m 0755 -p /etc/duo @@ -209,7 +212,13 @@ in mv -fT "$conf" /etc/duo/login_duo.conf fi ''; - pam_duo = mkIf cfg.pam.enable '' + }; + + systemd.services.pam-duo = lib.mkIf cfg.ssh.enable { + wantedBy = [ "sysinit.target" ]; + before = [ "sysinit.target" ]; + unitConfig.DefaultDependencies = false; + script = '' if test -f "${cfg.secretKeyFile}"; then mkdir -m 0755 -p /etc/duo diff --git a/third_party/nixpkgs/nixos/modules/security/google_oslogin.nix b/third_party/nixpkgs/nixos/modules/security/google_oslogin.nix index f75b4df185..95975943ff 100644 --- a/third_party/nixpkgs/nixos/modules/security/google_oslogin.nix +++ b/third_party/nixpkgs/nixos/modules/security/google_oslogin.nix @@ -42,6 +42,10 @@ in security.sudo.extraConfig = '' #includedir /run/google-sudoers.d ''; + security.sudo-rs.extraConfig = '' + #includedir /run/google-sudoers.d + ''; + systemd.tmpfiles.rules = [ "d /run/google-sudoers.d 750 root root -" "d /var/google-users.d 750 root root -" diff --git a/third_party/nixpkgs/nixos/modules/security/pam.nix b/third_party/nixpkgs/nixos/modules/security/pam.nix index 709bb8b94a..b7e1ea5265 100644 --- a/third_party/nixpkgs/nixos/modules/security/pam.nix +++ b/third_party/nixpkgs/nixos/modules/security/pam.nix @@ -1531,6 +1531,10 @@ in (map (module: "mr ${module},")) concatLines ]); - }; + security.sudo.extraConfig = optionalString config.security.pam.enableSSHAgentAuth '' + # Keep SSH_AUTH_SOCK so that pam_ssh_agent_auth.so can do its magic. + Defaults env_keep+=SSH_AUTH_SOCK + ''; + }; } diff --git a/third_party/nixpkgs/nixos/modules/security/pam_mount.nix b/third_party/nixpkgs/nixos/modules/security/pam_mount.nix index ad78f38b08..26f906f2a7 100644 --- a/third_party/nixpkgs/nixos/modules/security/pam_mount.nix +++ b/third_party/nixpkgs/nixos/modules/security/pam_mount.nix @@ -33,7 +33,7 @@ in default = []; description = lib.mdDoc '' List of volume definitions for pam_mount. - For more information, visit . + For more information, visit . ''; }; @@ -78,7 +78,7 @@ in description = lib.mdDoc '' Sets the Debug-Level. 0 disables debugging, 1 enables pam_mount tracing, and 2 additionally enables tracing in mount.crypt. The default is 0. - For more information, visit . + For more information, visit . ''; }; @@ -88,7 +88,7 @@ in description = lib.mdDoc '' Amount of microseconds to wait until killing remaining processes after final logout. - For more information, visit . + For more information, visit . ''; }; diff --git a/third_party/nixpkgs/nixos/modules/security/polkit.nix b/third_party/nixpkgs/nixos/modules/security/polkit.nix index de427ccb29..327f49c0b6 100644 --- a/third_party/nixpkgs/nixos/modules/security/polkit.nix +++ b/third_party/nixpkgs/nixos/modules/security/polkit.nix @@ -35,7 +35,7 @@ in description = lib.mdDoc '' Any polkit rules to be added to config (in JavaScript ;-). 
See: - http://www.freedesktop.org/software/polkit/docs/latest/polkit.8.html#polkit-rules + ''; }; @@ -117,4 +117,3 @@ in }; } - diff --git a/third_party/nixpkgs/nixos/modules/security/sudo.nix b/third_party/nixpkgs/nixos/modules/security/sudo.nix index d225442773..3dd5d2e525 100644 --- a/third_party/nixpkgs/nixos/modules/security/sudo.nix +++ b/third_party/nixpkgs/nixos/modules/security/sudo.nix @@ -6,7 +6,7 @@ let cfg = config.security.sudo; - inherit (pkgs) sudo; + inherit (config.security.pam) enableSSHAgentAuth; toUserString = user: if (isInt user) then "#${toString user}" else "${user}"; toGroupString = group: if (isInt group) then "%#${toString group}" else "%${group}"; @@ -30,9 +30,18 @@ in ###### interface - options = { + options.security.sudo = { - security.sudo.enable = mkOption { + defaultOptions = mkOption { + type = with types; listOf str; + default = [ "SETENV" ]; + description = mdDoc '' + Options used for the default rules, granting `root` and the + `wheel` group permission to run any command as any user. + ''; + }; + + enable = mkOption { type = types.bool; default = true; description = @@ -42,29 +51,21 @@ in ''; }; - security.sudo.package = mkOption { - type = types.package; - default = pkgs.sudo; - defaultText = literalExpression "pkgs.sudo"; - description = lib.mdDoc '' - Which package to use for `sudo`. - ''; - }; + package = mkPackageOption pkgs "sudo" { }; - security.sudo.wheelNeedsPassword = mkOption { + wheelNeedsPassword = mkOption { type = types.bool; default = true; - description = - lib.mdDoc '' - Whether users of the `wheel` group must - provide a password to run commands as super user via {command}`sudo`. - ''; + description = mdDoc '' + Whether users of the `wheel` group must + provide a password to run commands as super user via {command}`sudo`. + ''; }; - security.sudo.execWheelOnly = mkOption { + execWheelOnly = mkOption { type = types.bool; default = false; - description = lib.mdDoc '' + description = mdDoc '' Only allow members of the `wheel` group to execute sudo by setting the executable's permissions accordingly. This prevents users that are not members of `wheel` from @@ -72,19 +73,18 @@ in ''; }; - security.sudo.configFile = mkOption { + configFile = mkOption { type = types.lines; # Note: if syntax errors are detected in this file, the NixOS # configuration will fail to build. - description = - lib.mdDoc '' - This string contains the contents of the - {file}`sudoers` file. - ''; + description = mdDoc '' + This string contains the contents of the + {file}`sudoers` file. + ''; }; - security.sudo.extraRules = mkOption { - description = lib.mdDoc '' + extraRules = mkOption { + description = mdDoc '' Define specific rules to be in the {file}`sudoers` file. More specific rules should come after more general ones in order to yield the expected behavior. You can use mkBefore/mkAfter to ensure @@ -114,7 +114,7 @@ in options = { users = mkOption { type = with types; listOf (either str int); - description = lib.mdDoc '' + description = mdDoc '' The usernames / UIDs this rule should apply for. ''; default = []; @@ -122,7 +122,7 @@ in groups = mkOption { type = with types; listOf (either str int); - description = lib.mdDoc '' + description = mdDoc '' The groups / GIDs this rule should apply for. ''; default = []; @@ -131,7 +131,7 @@ in host = mkOption { type = types.str; default = "ALL"; - description = lib.mdDoc '' + description = mdDoc '' For what host this rule should apply. 
''; }; @@ -139,7 +139,7 @@ in runAs = mkOption { type = with types; str; default = "ALL:ALL"; - description = lib.mdDoc '' + description = mdDoc '' Under which user/group the specified command is allowed to run. A user can be specified using just the username: `"foo"`. @@ -149,7 +149,7 @@ in }; commands = mkOption { - description = lib.mdDoc '' + description = mdDoc '' The commands for which the rule should apply. ''; type = with types; listOf (either str (submodule { @@ -157,7 +157,7 @@ in options = { command = mkOption { type = with types; str; - description = lib.mdDoc '' + description = mdDoc '' A command being either just a path to a binary to allow any arguments, the full command with arguments pre-set or with `""` used as the argument, not allowing arguments to the command at all. @@ -166,7 +166,7 @@ in options = mkOption { type = with types; listOf (enum [ "NOPASSWD" "PASSWD" "NOEXEC" "EXEC" "SETENV" "NOSETENV" "LOG_INPUT" "NOLOG_INPUT" "LOG_OUTPUT" "NOLOG_OUTPUT" ]); - description = lib.mdDoc '' + description = mdDoc '' Options for running the command. Refer to the [sudo manual](https://www.sudo.ws/man/1.7.10/sudoers.man.html). ''; default = []; @@ -179,10 +179,10 @@ in }); }; - security.sudo.extraConfig = mkOption { + extraConfig = mkOption { type = types.lines; default = ""; - description = lib.mdDoc '' + description = mdDoc '' Extra configuration text appended to {file}`sudoers`. ''; }; @@ -192,44 +192,55 @@ in ###### implementation config = mkIf cfg.enable { - assertions = [ - { assertion = cfg.package.pname != "sudo-rs"; - message = "The NixOS `sudo` module does not work with `sudo-rs` yet."; } - ]; + assertions = [ { + assertion = cfg.package.pname != "sudo-rs"; + message = '' + NixOS' `sudo` module does not support `sudo-rs`; see `security.sudo-rs` instead. + ''; + } ]; - # We `mkOrder 600` so that the default rule shows up first, but there is - # still enough room for a user to `mkBefore` it. - security.sudo.extraRules = mkOrder 600 [ - { groups = [ "wheel" ]; - commands = [ { command = "ALL"; options = (if cfg.wheelNeedsPassword then [ "SETENV" ] else [ "NOPASSWD" "SETENV" ]); } ]; - } - ]; + security.sudo.extraRules = + let + defaultRule = { users ? [], groups ? [], opts ? [] }: [ { + inherit users groups; + commands = [ { + command = "ALL"; + options = opts ++ cfg.defaultOptions; + } ]; + } ]; + in mkMerge [ + # This is ordered before users' `mkBefore` rules, + # so as not to introduce unexpected changes. + (mkOrder 400 (defaultRule { users = [ "root" ]; })) - security.sudo.configFile = + # This is ordered to show before (most) other rules, but + # late-enough for a user to `mkBefore` it. + (mkOrder 600 (defaultRule { + groups = [ "wheel" ]; + opts = (optional (!cfg.wheelNeedsPassword) "NOPASSWD"); + })) + ]; + + security.sudo.configFile = concatStringsSep "\n" (filter (s: s != "") [ '' # Don't edit this file. Set the NixOS options ‘security.sudo.configFile’ # or ‘security.sudo.extraRules’ instead. - - # Keep SSH_AUTH_SOCK so that pam_ssh_agent_auth.so can do its magic. - Defaults env_keep+=SSH_AUTH_SOCK - - # "root" is allowed to do anything. 
- root ALL=(ALL:ALL) SETENV: ALL - - # extraRules - ${concatStringsSep "\n" ( - lists.flatten ( - map ( - rule: optionals (length rule.commands != 0) [ - (map (user: "${toUserString user} ${rule.host}=(${rule.runAs}) ${toCommandsString rule.commands}") rule.users) - (map (group: "${toGroupString group} ${rule.host}=(${rule.runAs}) ${toCommandsString rule.commands}") rule.groups) - ] - ) cfg.extraRules - ) - )} - + '' + (pipe cfg.extraRules [ + (filter (rule: length rule.commands != 0)) + (map (rule: [ + (map (user: "${toUserString user} ${rule.host}=(${rule.runAs}) ${toCommandsString rule.commands}") rule.users) + (map (group: "${toGroupString group} ${rule.host}=(${rule.runAs}) ${toCommandsString rule.commands}") rule.groups) + ])) + flatten + (concatStringsSep "\n") + ]) + "\n" + (optionalString (cfg.extraConfig != "") '' + # extraConfig ${cfg.extraConfig} - ''; + '') + ]); security.wrappers = let owner = "root"; @@ -247,7 +258,7 @@ in }; }; - environment.systemPackages = [ sudo ]; + environment.systemPackages = [ cfg.package ]; security.pam.services.sudo = { sshAgentAuth = true; usshAuth = true; }; diff --git a/third_party/nixpkgs/nixos/modules/security/wrappers/default.nix b/third_party/nixpkgs/nixos/modules/security/wrappers/default.nix index a8bb0650b1..250f9775be 100644 --- a/third_party/nixpkgs/nixos/modules/security/wrappers/default.nix +++ b/third_party/nixpkgs/nixos/modules/security/wrappers/default.nix @@ -275,33 +275,38 @@ in mrpx ${wrap.source}, '') wrappers; - ###### wrappers activation script - system.activationScripts.wrappers = - lib.stringAfter [ "specialfs" "users" ] - '' - chmod 755 "${parentWrapperDir}" + systemd.services.suid-sgid-wrappers = { + description = "Create SUID/SGID Wrappers"; + wantedBy = [ "sysinit.target" ]; + before = [ "sysinit.target" ]; + unitConfig.DefaultDependencies = false; + unitConfig.RequiresMountsFor = [ "/nix/store" "/run/wrappers" ]; + serviceConfig.Type = "oneshot"; + script = '' + chmod 755 "${parentWrapperDir}" - # We want to place the tmpdirs for the wrappers to the parent dir. - wrapperDir=$(mktemp --directory --tmpdir="${parentWrapperDir}" wrappers.XXXXXXXXXX) - chmod a+rx "$wrapperDir" + # We want to place the tmpdirs for the wrappers to the parent dir. 
+ wrapperDir=$(mktemp --directory --tmpdir="${parentWrapperDir}" wrappers.XXXXXXXXXX) + chmod a+rx "$wrapperDir" - ${lib.concatStringsSep "\n" mkWrappedPrograms} + ${lib.concatStringsSep "\n" mkWrappedPrograms} - if [ -L ${wrapperDir} ]; then - # Atomically replace the symlink - # See https://axialcorps.com/2013/07/03/atomically-replacing-files-and-directories/ - old=$(readlink -f ${wrapperDir}) - if [ -e "${wrapperDir}-tmp" ]; then - rm --force --recursive "${wrapperDir}-tmp" - fi - ln --symbolic --force --no-dereference "$wrapperDir" "${wrapperDir}-tmp" - mv --no-target-directory "${wrapperDir}-tmp" "${wrapperDir}" - rm --force --recursive "$old" - else - # For initial setup - ln --symbolic "$wrapperDir" "${wrapperDir}" + if [ -L ${wrapperDir} ]; then + # Atomically replace the symlink + # See https://axialcorps.com/2013/07/03/atomically-replacing-files-and-directories/ + old=$(readlink -f ${wrapperDir}) + if [ -e "${wrapperDir}-tmp" ]; then + rm --force --recursive "${wrapperDir}-tmp" fi - ''; + ln --symbolic --force --no-dereference "$wrapperDir" "${wrapperDir}-tmp" + mv --no-target-directory "${wrapperDir}-tmp" "${wrapperDir}" + rm --force --recursive "$old" + else + # For initial setup + ln --symbolic "$wrapperDir" "${wrapperDir}" + fi + ''; + }; ###### wrappers consistency checks system.checks = lib.singleton (pkgs.runCommandLocal diff --git a/third_party/nixpkgs/nixos/modules/services/audio/jack.nix b/third_party/nixpkgs/nixos/modules/services/audio/jack.nix index 105e99cb2f..b51f2a78c9 100644 --- a/third_party/nixpkgs/nixos/modules/services/audio/jack.nix +++ b/third_party/nixpkgs/nixos/modules/services/audio/jack.nix @@ -225,7 +225,7 @@ in { description = "JACK Audio system service user"; isSystemUser = true; }; - # http://jackaudio.org/faq/linux_rt_config.html + # https://jackaudio.org/faq/linux_rt_config.html security.pam.loginLimits = [ { domain = "@jackaudio"; type = "-"; item = "rtprio"; value = "99"; } { domain = "@jackaudio"; type = "-"; item = "memlock"; value = "unlimited"; } diff --git a/third_party/nixpkgs/nixos/modules/services/audio/navidrome.nix b/third_party/nixpkgs/nixos/modules/services/audio/navidrome.nix index e18e61eb6d..77a0e74af9 100644 --- a/third_party/nixpkgs/nixos/modules/services/audio/navidrome.nix +++ b/third_party/nixpkgs/nixos/modules/services/audio/navidrome.nix @@ -28,10 +28,17 @@ in { ''; }; + openFirewall = mkOption { + type = types.bool; + default = false; + description = lib.mdDoc "Whether to open the TCP port in the firewall"; + }; }; }; config = mkIf cfg.enable { + networking.firewall.allowedTCPPorts = mkIf cfg.openFirewall [cfg.settings.Port]; + systemd.services.navidrome = { description = "Navidrome Media Server"; after = [ "network.target" ]; diff --git a/third_party/nixpkgs/nixos/modules/services/audio/wyoming/faster-whisper.nix b/third_party/nixpkgs/nixos/modules/services/audio/wyoming/faster-whisper.nix index 1fb67ecfe5..2d56acdc1b 100644 --- a/third_party/nixpkgs/nixos/modules/services/audio/wyoming/faster-whisper.nix +++ b/third_party/nixpkgs/nixos/modules/services/audio/wyoming/faster-whisper.nix @@ -37,6 +37,9 @@ in enable = mkEnableOption (mdDoc "Wyoming faster-whisper server"); model = mkOption { + # Intersection between available and referenced models here: + # https://github.com/rhasspy/models/releases/tag/v1.0 + # https://github.com/rhasspy/rhasspy3/blob/wyoming-v1/programs/asr/faster-whisper/server/wyoming_faster_whisper/download.py#L17-L27 type = enum [ "tiny" "tiny-int8" @@ -44,7 +47,6 @@ in "base-int8" "small" "small-int8" 
- "medium" "medium-int8" ]; default = "tiny-int8"; @@ -136,6 +138,7 @@ in --data-dir $STATE_DIRECTORY \ --download-dir $STATE_DIRECTORY \ --uri ${options.uri} \ + --device ${options.device} \ --model ${options.model} \ --language ${options.language} \ --beam-size ${options.beamSize} ${options.extraArgs} @@ -143,6 +146,8 @@ in CapabilityBoundingSet = ""; DeviceAllow = if builtins.elem options.device [ "cuda" "auto" ] then [ # https://docs.nvidia.com/dgx/pdf/dgx-os-5-user-guide.pdf + # CUDA not working? Check DeviceAllow and PrivateDevices first! + "/dev/nvidia0" "/dev/nvidia1" "/dev/nvidia2" "/dev/nvidia3" @@ -157,7 +162,6 @@ in DevicePolicy = "closed"; LockPersonality = true; MemoryDenyWriteExecute = true; - PrivateDevices = true; PrivateUsers = true; ProtectHome = true; ProtectHostname = true; diff --git a/third_party/nixpkgs/nixos/modules/services/audio/wyoming/openwakeword.nix b/third_party/nixpkgs/nixos/modules/services/audio/wyoming/openwakeword.nix index e1993407da..987818246b 100644 --- a/third_party/nixpkgs/nixos/modules/services/audio/wyoming/openwakeword.nix +++ b/third_party/nixpkgs/nixos/modules/services/audio/wyoming/openwakeword.nix @@ -8,6 +8,7 @@ let cfg = config.services.wyoming.openwakeword; inherit (lib) + concatStringsSep concatMapStringsSep escapeShellArgs mkOption @@ -15,6 +16,7 @@ let mkEnableOption mkIf mkPackageOptionMD + mkRemovedOptionModule types ; @@ -22,18 +24,13 @@ let toString ; - models = [ - # wyoming_openwakeword/models/*.tflite - "alexa" - "hey_jarvis" - "hey_mycroft" - "hey_rhasspy" - "ok_nabu" - ]; - in { + imports = [ + (mkRemovedOptionModule [ "services" "wyoming" "openwakeword" "models" ] "Configuring models has been removed, they are now dynamically discovered and loaded at runtime") + ]; + meta.buildDocsInSandbox = false; options.services.wyoming.openwakeword = with types; { @@ -50,19 +47,27 @@ in ''; }; - models = mkOption { - type = listOf (enum models); - default = models; - description = mdDoc '' - List of wake word models that should be made available. + customModelsDirectories = mkOption { + type = listOf types.path; + default = []; + description = lib.mdDoc '' + Paths to directories with custom wake word models (*.tflite model files). ''; }; preloadModels = mkOption { - type = listOf (enum models); + type = listOf str; default = [ "ok_nabu" ]; + example = [ + # wyoming_openwakeword/models/*.tflite + "alexa" + "hey_jarvis" + "hey_mycroft" + "hey_rhasspy" + "ok_nabu" + ]; description = mdDoc '' List of wake word models to preload after startup. 
''; @@ -114,14 +119,15 @@ in DynamicUser = true; User = "wyoming-openwakeword"; # https://github.com/home-assistant/addons/blob/master/openwakeword/rootfs/etc/s6-overlay/s6-rc.d/openwakeword/run - ExecStart = '' - ${cfg.package}/bin/wyoming-openwakeword \ - --uri ${cfg.uri} \ - ${concatMapStringsSep " " (model: "--model ${model}") cfg.models} \ - ${concatMapStringsSep " " (model: "--preload-model ${model}") cfg.preloadModels} \ - --threshold ${cfg.threshold} \ - --trigger-level ${cfg.triggerLevel} ${cfg.extraArgs} - ''; + ExecStart = concatStringsSep " " [ + "${cfg.package}/bin/wyoming-openwakeword" + "--uri ${cfg.uri}" + (concatMapStringsSep " " (model: "--preload-model ${model}") cfg.preloadModels) + (concatMapStringsSep " " (dir: "--custom-model-dir ${toString dir}") cfg.customModelsDirectories) + "--threshold ${cfg.threshold}" + "--trigger-level ${cfg.triggerLevel}" + "${cfg.extraArgs}" + ]; CapabilityBoundingSet = ""; DeviceAllow = ""; DevicePolicy = "closed"; @@ -136,7 +142,7 @@ in ProtectKernelTunables = true; ProtectControlGroups = true; ProtectProc = "invisible"; - ProcSubset = "pid"; + ProcSubset = "all"; # reads /proc/cpuinfo RestrictAddressFamilies = [ "AF_INET" "AF_INET6" diff --git a/third_party/nixpkgs/nixos/modules/services/backup/bacula.nix b/third_party/nixpkgs/nixos/modules/services/backup/bacula.nix index 0acbf1b3ea..5a75a46e52 100644 --- a/third_party/nixpkgs/nixos/modules/services/backup/bacula.nix +++ b/third_party/nixpkgs/nixos/modules/services/backup/bacula.nix @@ -15,16 +15,16 @@ let Client { Name = "${fd_cfg.name}"; FDPort = ${toString fd_cfg.port}; - WorkingDirectory = "${libDir}"; - Pid Directory = "/run"; + WorkingDirectory = ${libDir}; + Pid Directory = /run; ${fd_cfg.extraClientConfig} } ${concatStringsSep "\n" (mapAttrsToList (name: value: '' Director { Name = "${name}"; - Password = "${value.password}"; - Monitor = "${value.monitor}"; + Password = ${value.password}; + Monitor = ${value.monitor}; } '') fd_cfg.director)} @@ -41,8 +41,8 @@ let Storage { Name = "${sd_cfg.name}"; SDPort = ${toString sd_cfg.port}; - WorkingDirectory = "${libDir}"; - Pid Directory = "/run"; + WorkingDirectory = ${libDir}; + Pid Directory = /run; ${sd_cfg.extraStorageConfig} } @@ -50,8 +50,8 @@ let Autochanger { Name = "${name}"; Device = ${concatStringsSep ", " (map (a: "\"${a}\"") value.devices)}; - Changer Device = "${value.changerDevice}"; - Changer Command = "${value.changerCommand}"; + Changer Device = ${value.changerDevice}; + Changer Command = ${value.changerCommand}; ${value.extraAutochangerConfig} } '') sd_cfg.autochanger)} @@ -59,8 +59,8 @@ let ${concatStringsSep "\n" (mapAttrsToList (name: value: '' Device { Name = "${name}"; - Archive Device = "${value.archiveDevice}"; - Media Type = "${value.mediaType}"; + Archive Device = ${value.archiveDevice}; + Media Type = ${value.mediaType}; ${value.extraDeviceConfig} } '') sd_cfg.device)} @@ -68,8 +68,8 @@ let ${concatStringsSep "\n" (mapAttrsToList (name: value: '' Director { Name = "${name}"; - Password = "${value.password}"; - Monitor = "${value.monitor}"; + Password = ${value.password}; + Monitor = ${value.monitor}; } '') sd_cfg.director)} @@ -85,18 +85,18 @@ let '' Director { Name = "${dir_cfg.name}"; - Password = "${dir_cfg.password}"; + Password = ${dir_cfg.password}; DirPort = ${toString dir_cfg.port}; - Working Directory = "${libDir}"; - Pid Directory = "/run/"; - QueryFile = "${pkgs.bacula}/etc/query.sql"; + Working Directory = ${libDir}; + Pid Directory = /run/; + QueryFile = ${pkgs.bacula}/etc/query.sql; 
${dir_cfg.extraDirectorConfig} } Catalog { - Name = "PostgreSQL"; - dbname = "bacula"; - user = "bacula"; + Name = PostgreSQL; + dbname = bacula; + user = bacula; } Messages { @@ -533,7 +533,7 @@ in { }; }; - services.postgresql.enable = dir_cfg.enable == true; + services.postgresql.enable = lib.mkIf dir_cfg.enable true; systemd.services.bacula-dir = mkIf dir_cfg.enable { after = [ "network.target" "postgresql.service" ]; diff --git a/third_party/nixpkgs/nixos/modules/services/backup/borgmatic.nix b/third_party/nixpkgs/nixos/modules/services/backup/borgmatic.nix index d3ba7628e8..b27dd28171 100644 --- a/third_party/nixpkgs/nixos/modules/services/backup/borgmatic.nix +++ b/third_party/nixpkgs/nixos/modules/services/backup/borgmatic.nix @@ -81,7 +81,7 @@ in config = mkIf cfg.enable { warnings = [] - ++ optional (cfg.settings != null && cfg.settings.location != null) + ++ optional (cfg.settings != null && cfg.settings ? location) "`services.borgmatic.settings.location` is deprecated, please move your options out of sections to the global scope" ++ optional (catAttrs "location" (attrValues cfg.configurations) != []) "`services.borgmatic.configurations..location` is deprecated, please move your options out of sections to the global scope" diff --git a/third_party/nixpkgs/nixos/modules/services/backup/postgresql-wal-receiver.nix b/third_party/nixpkgs/nixos/modules/services/backup/postgresql-wal-receiver.nix index 01fd57f5c5..773dc0ba44 100644 --- a/third_party/nixpkgs/nixos/modules/services/backup/postgresql-wal-receiver.nix +++ b/third_party/nixpkgs/nixos/modules/services/backup/postgresql-wal-receiver.nix @@ -7,7 +7,7 @@ let options = { postgresqlPackage = mkOption { type = types.package; - example = literalExpression "pkgs.postgresql_11"; + example = literalExpression "pkgs.postgresql_15"; description = lib.mdDoc '' PostgreSQL package to use. ''; @@ -124,7 +124,7 @@ in { example = literalExpression '' { main = { - postgresqlPackage = pkgs.postgresql_11; + postgresqlPackage = pkgs.postgresql_15; directory = /mnt/pg_wal/main/; slot = "main_wal_receiver"; connection = "postgresql://user@somehost"; diff --git a/third_party/nixpkgs/nixos/modules/services/backup/restic.nix b/third_party/nixpkgs/nixos/modules/services/backup/restic.nix index 78220e99c3..87595f3979 100644 --- a/third_party/nixpkgs/nixos/modules/services/backup/restic.nix +++ b/third_party/nixpkgs/nixos/modules/services/backup/restic.nix @@ -23,25 +23,13 @@ in environmentFile = mkOption { type = with types; nullOr str; - # added on 2021-08-28, s3CredentialsFile should - # be removed in the future (+ remember the warning) - default = config.s3CredentialsFile; + default = null; description = lib.mdDoc '' file containing the credentials to access the repository, in the format of an EnvironmentFile as described by systemd.exec(5) ''; }; - s3CredentialsFile = mkOption { - type = with types; nullOr str; - default = null; - description = lib.mdDoc '' - file containing the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY - for an S3-hosted repository, in the format of an EnvironmentFile - as described by systemd.exec(5) - ''; - }; - rcloneOptions = mkOption { type = with types; nullOr (attrsOf (oneOf [ str bool ])); default = null; @@ -113,12 +101,15 @@ in }; paths = mkOption { + # This is nullable for legacy reasons only. We should consider making it a pure listOf + # after some time has passed since this comment was added. 
type = types.nullOr (types.listOf types.str); - default = null; + default = [ ]; description = lib.mdDoc '' - Which paths to backup. If null or an empty array, no - backup command will be run. This can be used to create a - prune-only job. + Which paths to backup, in addition to ones specified via + `dynamicFilesFrom`. If null or an empty array and + `dynamicFilesFrom` is also null, no backup command will be run. + This can be used to create a prune-only job. ''; example = [ "/var/lib/postgresql" @@ -142,13 +133,15 @@ in }; timerConfig = mkOption { - type = types.attrsOf unitOption; + type = types.nullOr (types.attrsOf unitOption); default = { OnCalendar = "daily"; Persistent = true; }; description = lib.mdDoc '' - When to run the backup. See {manpage}`systemd.timer(5)` for details. + When to run the backup. See {manpage}`systemd.timer(5)` for + details. If null no timer is created and the backup will only + run when explicitly started. ''; example = { OnCalendar = "00:05"; @@ -231,7 +224,7 @@ in description = lib.mdDoc '' A script that produces a list of files to back up. The results of this command are given to the '--files-from' - option. + option. The result is merged with paths specified via `paths`. ''; example = "find /home/matt/git -type d -name .git"; }; @@ -297,7 +290,6 @@ in }; config = { - warnings = mapAttrsToList (n: v: "services.restic.backups.${n}.s3CredentialsFile is deprecated, please use services.restic.backups.${n}.environmentFile instead.") (filterAttrs (n: v: v.s3CredentialsFile != null) config.services.restic.backups); assertions = mapAttrsToList (n: v: { assertion = (v.repository == null) != (v.repositoryFile == null); message = "services.restic.backups.${n}: exactly one of repository or repositoryFile should be set"; @@ -310,10 +302,7 @@ in resticCmd = "${backup.package}/bin/restic${extraOptions}"; excludeFlags = optional (backup.exclude != []) "--exclude-file=${pkgs.writeText "exclude-patterns" (concatStringsSep "\n" backup.exclude)}"; filesFromTmpFile = "/run/restic-backups-${name}/includes"; - backupPaths = - if (backup.dynamicFilesFrom == null) - then optionalString (backup.paths != null) (concatStringsSep " " backup.paths) - else "--files-from ${filesFromTmpFile}"; + doBackup = (backup.dynamicFilesFrom != null) || (backup.paths != null && backup.paths != []); pruneCmd = optionals (builtins.length backup.pruneOpts > 0) [ (resticCmd + " forget --prune " + (concatStringsSep " " backup.pruneOpts)) (resticCmd + " check " + (concatStringsSep " " backup.checkOpts)) @@ -348,7 +337,7 @@ in after = [ "network-online.target" ]; serviceConfig = { Type = "oneshot"; - ExecStart = (optionals (backupPaths != "") [ "${resticCmd} backup ${concatStringsSep " " (backup.extraBackupArgs ++ excludeFlags)} ${backupPaths}" ]) + ExecStart = (optionals doBackup [ "${resticCmd} backup ${concatStringsSep " " (backup.extraBackupArgs ++ excludeFlags)} --files-from=${filesFromTmpFile}" ]) ++ pruneCmd; User = backup.user; RuntimeDirectory = "restic-backups-${name}"; @@ -358,7 +347,7 @@ in } // optionalAttrs (backup.environmentFile != null) { EnvironmentFile = backup.environmentFile; }; - } // optionalAttrs (backup.initialize || backup.dynamicFilesFrom != null || backup.backupPrepareCommand != null) { + } // optionalAttrs (backup.initialize || doBackup || backup.backupPrepareCommand != null) { preStart = '' ${optionalString (backup.backupPrepareCommand != null) '' ${pkgs.writeScript "backupPrepareCommand" backup.backupPrepareCommand} @@ -366,16 +355,19 @@ in ${optionalString 
(backup.initialize) '' ${resticCmd} snapshots || ${resticCmd} init ''} + ${optionalString (backup.paths != null && backup.paths != []) '' + cat ${pkgs.writeText "staticPaths" (concatStringsSep "\n" backup.paths)} >> ${filesFromTmpFile} + ''} ${optionalString (backup.dynamicFilesFrom != null) '' - ${pkgs.writeScript "dynamicFilesFromScript" backup.dynamicFilesFrom} > ${filesFromTmpFile} + ${pkgs.writeScript "dynamicFilesFromScript" backup.dynamicFilesFrom} >> ${filesFromTmpFile} ''} ''; - } // optionalAttrs (backup.dynamicFilesFrom != null || backup.backupCleanupCommand != null) { + } // optionalAttrs (doBackup || backup.backupCleanupCommand != null) { postStop = '' ${optionalString (backup.backupCleanupCommand != null) '' ${pkgs.writeScript "backupCleanupCommand" backup.backupCleanupCommand} ''} - ${optionalString (backup.dynamicFilesFrom != null) '' + ${optionalString doBackup '' rm ${filesFromTmpFile} ''} ''; @@ -388,7 +380,7 @@ in wantedBy = [ "timers.target" ]; timerConfig = backup.timerConfig; }) - config.services.restic.backups; + (filterAttrs (_: backup: backup.timerConfig != null) config.services.restic.backups); # generate wrapper scripts, as described in the createWrapper option environment.systemPackages = lib.mapAttrsToList (name: backup: let diff --git a/third_party/nixpkgs/nixos/modules/services/backup/syncoid.nix b/third_party/nixpkgs/nixos/modules/services/backup/syncoid.nix index 0f375455e7..1a1df38617 100644 --- a/third_party/nixpkgs/nixos/modules/services/backup/syncoid.nix +++ b/third_party/nixpkgs/nixos/modules/services/backup/syncoid.nix @@ -369,7 +369,7 @@ in PrivateDevices = true; PrivateMounts = true; PrivateNetwork = mkDefault false; - PrivateUsers = true; + PrivateUsers = false; # Enabling this breaks on zfs-2.2.0 ProtectClock = true; ProtectControlGroups = true; ProtectHome = true; diff --git a/third_party/nixpkgs/nixos/modules/services/backup/znapzend.nix b/third_party/nixpkgs/nixos/modules/services/backup/znapzend.nix index 76f147c18a..2ebe8ad2f6 100644 --- a/third_party/nixpkgs/nixos/modules/services/backup/znapzend.nix +++ b/third_party/nixpkgs/nixos/modules/services/backup/znapzend.nix @@ -359,14 +359,14 @@ in }; features.oracleMode = mkEnableOption (lib.mdDoc '' - Destroy snapshots one by one instead of using one long argument list. + destroying snapshots one by one instead of using one long argument list. If source and destination are out of sync for a long time, you may have so many snapshots to destroy that the argument gets is too long and the - command fails. + command fails ''); features.recvu = mkEnableOption (lib.mdDoc '' recvu feature which uses `-u` on the receiving end to keep the destination - filesystem unmounted. + filesystem unmounted ''); features.compressed = mkEnableOption (lib.mdDoc '' compressed feature which adds the options `-Lce` to @@ -377,7 +377,7 @@ in support and -e is for embedded data support. see {manpage}`znapzend(1)` and {manpage}`zfs(8)` - for more info. + for more info ''); features.sendRaw = mkEnableOption (lib.mdDoc '' sendRaw feature which adds the options `-w` to the @@ -386,25 +386,25 @@ in backup that can't be read without the encryption key/passphrase, useful when the remote isn't fully trusted or not physically secure. This option must be used consistently, raw incrementals cannot be based on - non-raw snapshots and vice versa. 
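To make the reworked restic behaviour above concrete — static `paths` and `dynamicFilesFrom` are now merged into one `--files-from` file, and `timerConfig = null` suppresses the timer — here is a minimal sketch with illustrative names, repositories, and paths:

```
{
  services.restic.backups = {
    # paths and dynamicFilesFrom are concatenated into the generated includes file.
    home = {
      repository = "/srv/restic-repo";
      passwordFile = "/etc/nixos/secrets/restic-password";
      paths = [ "/home" ];
      dynamicFilesFrom = "find /home -maxdepth 2 -type d -name .git";
    };
    # With timerConfig = null no timer unit is generated; trigger the backup
    # manually via the restic-backups-manual.service unit.
    manual = {
      repository = "/srv/restic-repo";
      passwordFile = "/etc/nixos/secrets/restic-password";
      paths = [ "/var/lib/postgresql" ];
      timerConfig = null;
    };
  };
}
```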
+ non-raw snapshots and vice versa ''); features.skipIntermediates = mkEnableOption (lib.mdDoc '' - Enable the skipIntermediates feature to send a single increment + the skipIntermediates feature to send a single increment between latest common snapshot and the newly made one. It may skip several source snaps if the destination was offline for some time, and it should skip snapshots not managed by znapzend. Normally for online destinations, the new snapshot is sent as soon as it is created on the - source, so there are no automatic increments to skip. + source, so there are no automatic increments to skip ''); features.lowmemRecurse = mkEnableOption (lib.mdDoc '' use lowmemRecurse on systems where you have too many datasets, so a recursive listing of attributes to find backup plans exhausts the memory available to {command}`znapzend`: instead, go the slower way to first list all impacted dataset names, and then query their - configs one by one. + configs one by one ''); features.zfsGetType = mkEnableOption (lib.mdDoc '' - use zfsGetType if your {command}`zfs get` supports a + using zfsGetType if your {command}`zfs get` supports a `-t` argument for filtering by dataset type at all AND lists properties for snapshots by default when recursing, so that there is too much data to process while searching for backup plans. @@ -412,7 +412,7 @@ in `--recursive` search for backup plans can literally differ by hundreds of times (depending on the amount of snapshots in that dataset tree... and a decent backup plan will ensure you have a lot - of those), so you would benefit from requesting this feature. + of those), so you would benefit from requesting this feature ''); }; }; diff --git a/third_party/nixpkgs/nixos/modules/services/blockchain/ethereum/erigon.nix b/third_party/nixpkgs/nixos/modules/services/blockchain/ethereum/erigon.nix index 8ebe0fcaff..945a373d12 100644 --- a/third_party/nixpkgs/nixos/modules/services/blockchain/ethereum/erigon.nix +++ b/third_party/nixpkgs/nixos/modules/services/blockchain/ethereum/erigon.nix @@ -13,6 +13,8 @@ in { services.erigon = { enable = mkEnableOption (lib.mdDoc "Ethereum implementation on the efficiency frontier"); + package = mkPackageOptionMD pkgs "erigon" { }; + extraArgs = mkOption { type = types.listOf types.str; description = lib.mdDoc "Additional arguments passed to Erigon"; @@ -92,7 +94,7 @@ in { serviceConfig = { LoadCredential = "ERIGON_JWT:${cfg.secretJwtPath}"; - ExecStart = "${pkgs.erigon}/bin/erigon --config ${configFile} --authrpc.jwtsecret=%d/ERIGON_JWT ${lib.escapeShellArgs cfg.extraArgs}"; + ExecStart = "${cfg.package}/bin/erigon --config ${configFile} --authrpc.jwtsecret=%d/ERIGON_JWT ${lib.escapeShellArgs cfg.extraArgs}"; DynamicUser = true; Restart = "on-failure"; StateDirectory = "erigon"; diff --git a/third_party/nixpkgs/nixos/modules/services/cluster/hadoop/default.nix b/third_party/nixpkgs/nixos/modules/services/cluster/hadoop/default.nix index 72bf25c211..ff6b4d5588 100644 --- a/third_party/nixpkgs/nixos/modules/services/cluster/hadoop/default.nix +++ b/third_party/nixpkgs/nixos/modules/services/cluster/hadoop/default.nix @@ -67,16 +67,16 @@ with lib; mapredSiteDefault = mkOption { default = { "mapreduce.framework.name" = "yarn"; - "yarn.app.mapreduce.am.env" = "HADOOP_MAPRED_HOME=${cfg.package}/lib/${cfg.package.untarDir}"; - "mapreduce.map.env" = "HADOOP_MAPRED_HOME=${cfg.package}/lib/${cfg.package.untarDir}"; - "mapreduce.reduce.env" = "HADOOP_MAPRED_HOME=${cfg.package}/lib/${cfg.package.untarDir}"; + "yarn.app.mapreduce.am.env" = 
"HADOOP_MAPRED_HOME=${cfg.package}"; + "mapreduce.map.env" = "HADOOP_MAPRED_HOME=${cfg.package}"; + "mapreduce.reduce.env" = "HADOOP_MAPRED_HOME=${cfg.package}"; }; defaultText = literalExpression '' { "mapreduce.framework.name" = "yarn"; - "yarn.app.mapreduce.am.env" = "HADOOP_MAPRED_HOME=''${config.${opt.package}}/lib/''${config.${opt.package}.untarDir}"; - "mapreduce.map.env" = "HADOOP_MAPRED_HOME=''${config.${opt.package}}/lib/''${config.${opt.package}.untarDir}"; - "mapreduce.reduce.env" = "HADOOP_MAPRED_HOME=''${config.${opt.package}}/lib/''${config.${opt.package}.untarDir}"; + "yarn.app.mapreduce.am.env" = "HADOOP_MAPRED_HOME=''${config.${opt.package}}"; + "mapreduce.map.env" = "HADOOP_MAPRED_HOME=''${config.${opt.package}}"; + "mapreduce.reduce.env" = "HADOOP_MAPRED_HOME=''${config.${opt.package}}"; } ''; type = types.attrsOf types.anything; @@ -154,13 +154,13 @@ with lib; }; log4jProperties = mkOption { - default = "${cfg.package}/lib/${cfg.package.untarDir}/etc/hadoop/log4j.properties"; + default = "${cfg.package}/etc/hadoop/log4j.properties"; defaultText = literalExpression '' - "''${config.${opt.package}}/lib/''${config.${opt.package}.untarDir}/etc/hadoop/log4j.properties" + "''${config.${opt.package}}/etc/hadoop/log4j.properties" ''; type = types.path; example = literalExpression '' - "''${pkgs.hadoop}/lib/''${pkgs.hadoop.untarDir}/etc/hadoop/log4j.properties"; + "''${pkgs.hadoop}/etc/hadoop/log4j.properties"; ''; description = lib.mdDoc "log4j.properties file added to HADOOP_CONF_DIR"; }; diff --git a/third_party/nixpkgs/nixos/modules/services/cluster/hadoop/yarn.nix b/third_party/nixpkgs/nixos/modules/services/cluster/hadoop/yarn.nix index 26077f35fd..a49aafbd1d 100644 --- a/third_party/nixpkgs/nixos/modules/services/cluster/hadoop/yarn.nix +++ b/third_party/nixpkgs/nixos/modules/services/cluster/hadoop/yarn.nix @@ -160,7 +160,7 @@ in umount /run/wrappers/yarn-nodemanager/cgroup/cpu || true rm -rf /run/wrappers/yarn-nodemanager/ || true mkdir -p /run/wrappers/yarn-nodemanager/{bin,etc/hadoop,cgroup/cpu} - cp ${cfg.package}/lib/${cfg.package.untarDir}/bin/container-executor /run/wrappers/yarn-nodemanager/bin/ + cp ${cfg.package}/bin/container-executor /run/wrappers/yarn-nodemanager/bin/ chgrp hadoop /run/wrappers/yarn-nodemanager/bin/container-executor chmod 6050 /run/wrappers/yarn-nodemanager/bin/container-executor cp ${hadoopConf}/container-executor.cfg /run/wrappers/yarn-nodemanager/etc/hadoop/ diff --git a/third_party/nixpkgs/nixos/modules/services/computing/boinc/client.nix b/third_party/nixpkgs/nixos/modules/services/computing/boinc/client.nix index 51475171bf..ff16795c82 100644 --- a/third_party/nixpkgs/nixos/modules/services/computing/boinc/client.nix +++ b/third_party/nixpkgs/nixos/modules/services/computing/boinc/client.nix @@ -54,7 +54,7 @@ in only the hosts listed in {var}`dataDir`/remote_hosts.cfg will be allowed to connect. 
- See also: + See also: ''; }; diff --git a/third_party/nixpkgs/nixos/modules/services/computing/slurm/slurm.nix b/third_party/nixpkgs/nixos/modules/services/computing/slurm/slurm.nix index 344c43a429..1cbe7b893f 100644 --- a/third_party/nixpkgs/nixos/modules/services/computing/slurm/slurm.nix +++ b/third_party/nixpkgs/nixos/modules/services/computing/slurm/slurm.nix @@ -6,7 +6,7 @@ let cfg = config.services.slurm; opt = options.services.slurm; - # configuration file can be generated by http://slurm.schedmd.com/configurator.html + # configuration file can be generated by https://slurm.schedmd.com/configurator.html defaultUser = "slurm"; diff --git a/third_party/nixpkgs/nixos/modules/services/continuous-integration/woodpecker/server.nix b/third_party/nixpkgs/nixos/modules/services/continuous-integration/woodpecker/server.nix index cae5ed7cf1..38b42f7288 100644 --- a/third_party/nixpkgs/nixos/modules/services/continuous-integration/woodpecker/server.nix +++ b/third_party/nixpkgs/nixos/modules/services/continuous-integration/woodpecker/server.nix @@ -31,9 +31,9 @@ in description = lib.mdDoc "woodpecker-server config environment variables, for other options read the [documentation](https://woodpecker-ci.org/docs/administration/server-config)"; }; environmentFile = lib.mkOption { - type = lib.types.nullOr lib.types.path; - default = null; - example = "/root/woodpecker-server.env"; + type = with lib.types; coercedTo path (f: [ f ]) (listOf path); + default = [ ]; + example = [ "/root/woodpecker-server.env" ]; description = lib.mdDoc '' File to load environment variables from. This is helpful for specifying secrets. @@ -61,7 +61,7 @@ in StateDirectoryMode = "0700"; UMask = "0007"; ConfigurationDirectory = "woodpecker-server"; - EnvironmentFile = lib.optional (cfg.environmentFile != null) cfg.environmentFile; + EnvironmentFile = cfg.environmentFile; ExecStart = "${cfg.package}/bin/woodpecker-server"; Restart = "on-failure"; RestartSec = 15; diff --git a/third_party/nixpkgs/nixos/modules/services/databases/cassandra.nix b/third_party/nixpkgs/nixos/modules/services/databases/cassandra.nix index e26acb88d8..cd816ffaf0 100644 --- a/third_party/nixpkgs/nixos/modules/services/databases/cassandra.nix +++ b/third_party/nixpkgs/nixos/modules/services/databases/cassandra.nix @@ -122,7 +122,7 @@ in options.services.cassandra = { enable = mkEnableOption (lib.mdDoc '' - Apache Cassandra – Scalable and highly available database. 
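Because `environmentFile` for woodpecker-server now accepts a list (a single path is still coerced), secrets can be split across several EnvironmentFile fragments. A minimal sketch with illustrative secret paths:

```
{
  services.woodpecker-server = {
    enable = true;
    environmentFile = [
      "/run/secrets/woodpecker-core.env"
      "/run/secrets/woodpecker-forge.env"
    ];
  };
}
```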
+ Apache Cassandra – Scalable and highly available database ''); clusterName = mkOption { diff --git a/third_party/nixpkgs/nixos/modules/services/databases/couchdb.nix b/third_party/nixpkgs/nixos/modules/services/databases/couchdb.nix index 0a81a8dcee..bfecfbb366 100644 --- a/third_party/nixpkgs/nixos/modules/services/databases/couchdb.nix +++ b/third_party/nixpkgs/nixos/modules/services/databases/couchdb.nix @@ -79,7 +79,7 @@ in { ''; }; - # couchdb options: http://docs.couchdb.org/en/latest/config/index.html + # couchdb options: https://docs.couchdb.org/en/latest/config/index.html databaseDir = mkOption { type = types.path; diff --git a/third_party/nixpkgs/nixos/modules/services/databases/ferretdb.nix b/third_party/nixpkgs/nixos/modules/services/databases/ferretdb.nix index 5b2cc59d8c..ab55e22bf2 100644 --- a/third_party/nixpkgs/nixos/modules/services/databases/ferretdb.nix +++ b/third_party/nixpkgs/nixos/modules/services/databases/ferretdb.nix @@ -11,7 +11,7 @@ in options = { services.ferretdb = { - enable = mkEnableOption "FerretDB, an Open Source MongoDB alternative."; + enable = mkEnableOption "FerretDB, an Open Source MongoDB alternative"; package = mkOption { type = types.package; @@ -30,7 +30,7 @@ in }; description = '' Additional configuration for FerretDB, see - + for supported values. ''; }; diff --git a/third_party/nixpkgs/nixos/modules/services/databases/firebird.nix b/third_party/nixpkgs/nixos/modules/services/databases/firebird.nix index 26ed46f0e6..3927c81d95 100644 --- a/third_party/nixpkgs/nixos/modules/services/databases/firebird.nix +++ b/third_party/nixpkgs/nixos/modules/services/databases/firebird.nix @@ -17,7 +17,7 @@ # There are at least two ways to run firebird. superserver has been chosen # however there are no strong reasons to prefer this or the other one AFAIK # Eg superserver is said to be most efficiently using resources according to -# http://www.firebirdsql.org/manual/qsg25-classic-or-super.html +# https://www.firebirdsql.org/manual/qsg25-classic-or-super.html with lib; diff --git a/third_party/nixpkgs/nixos/modules/services/databases/pgmanage.nix b/third_party/nixpkgs/nixos/modules/services/databases/pgmanage.nix index 12c8253ab4..a0933a5ffc 100644 --- a/third_party/nixpkgs/nixos/modules/services/databases/pgmanage.nix +++ b/third_party/nixpkgs/nixos/modules/services/databases/pgmanage.nix @@ -66,7 +66,7 @@ in { pgmanage requires at least one PostgreSQL server be defined. Detailed information about PostgreSQL connection strings is available at: - + Note that you should not specify your user name or password. That information will be entered on the login screen. If you specify a diff --git a/third_party/nixpkgs/nixos/modules/services/databases/postgresql.md b/third_party/nixpkgs/nixos/modules/services/databases/postgresql.md index 4d66ee38be..d65d9616e2 100644 --- a/third_party/nixpkgs/nixos/modules/services/databases/postgresql.md +++ b/third_party/nixpkgs/nixos/modules/services/databases/postgresql.md @@ -5,7 +5,7 @@ *Source:* {file}`modules/services/databases/postgresql.nix` -*Upstream documentation:* +*Upstream documentation:* @@ -17,9 +17,9 @@ PostgreSQL is an advanced, free relational database. To enable PostgreSQL, add the following to your {file}`configuration.nix`: ``` services.postgresql.enable = true; -services.postgresql.package = pkgs.postgresql_11; +services.postgresql.package = pkgs.postgresql_15; ``` -Note that you are required to specify the desired version of PostgreSQL (e.g. `pkgs.postgresql_11`). 
Since upgrading your PostgreSQL version requires a database dump and reload (see below), NixOS cannot provide a default value for [](#opt-services.postgresql.package) such as the most recent release of PostgreSQL. +Note that you are required to specify the desired version of PostgreSQL (e.g. `pkgs.postgresql_15`). Since upgrading your PostgreSQL version requires a database dump and reload (see below), NixOS cannot provide a default value for [](#opt-services.postgresql.package) such as the most recent release of PostgreSQL.
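To make the version pin concrete, a minimal {file}`configuration.nix` fragment in the same style; `ensureDatabases` is assumed from the module's other options and the database name is illustrative:

```
services.postgresql = {
  enable = true;
  package = pkgs.postgresql_15;
  # Illustrative: have the module create an application database at startup.
  ensureDatabases = [ "myapp" ];
};
```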