Project import generated by Copybara.

GitOrigin-RevId: e8057b67ebf307f01bdcc8fba94d94f75039d1f6
This commit is contained in:
Default email 2024-06-05 17:53:02 +02:00
parent fa594b3c10
commit fa5436e0a7
10754 changed files with 230630 additions and 157577 deletions

@@ -109,8 +109,15 @@ fb0e5be84331188a69b3edd31679ca6576edb75a
 # postgresql: move packages.nix to ext/default.nix
 719034f6f6749d624faa28dff259309fc0e3e730
+# php ecosystem: reformat with nixfmt-rfc-style
+75ae7621330ff8db944ce4dff4374e182d5d151f
+c759efa5e7f825913f9a69ef20f025f50f56dc4d
 # pkgs/os-specific/bsd: Reformat with nixfmt-rfc-style 2024-03-01
 3fe3b055adfc020e6a923c466b6bcd978a13069a
 # k3s: format with nixfmt-rfc-style
 6cfcd3c75428ede517bc6b15a353d704837a2830
+# python3Packages: format with nixfmt
+59b1aef59071cae6e87859dc65de973d2cc595c0

@@ -67,8 +67,8 @@
 /nixos/lib/make-disk-image.nix @raitobezarius
 # Nix, the package manager
-pkgs/tools/package-management/nix/ @raitobezarius @ma27
-nixos/modules/installer/tools/nix-fallback-paths.nix @raitobezarius @ma27
+pkgs/tools/package-management/nix/ @raitobezarius
+nixos/modules/installer/tools/nix-fallback-paths.nix @raitobezarius
 # Nixpkgs documentation
 /maintainers/scripts/db-to-md.sh @jtojnar @ryantm
@@ -306,8 +306,8 @@ nixos/modules/services/networking/networkmanager.nix @Janik-Haag
 /pkgs/applications/networking/cluster/terraform-providers @zowoq
 # Forgejo
-nixos/modules/services/misc/forgejo.nix @bendlas @emilylange
-pkgs/applications/version-management/forgejo @bendlas @emilylange
+nixos/modules/services/misc/forgejo.nix @adamcstephens @bendlas @emilylange
+pkgs/by-name/fo/forgejo/package.nix @adamcstephens @bendlas @emilylange
 # Dotnet
 /pkgs/build-support/dotnet @IvarWithoutBones

@@ -24,7 +24,7 @@ For new packages please briefly describe the package or provide a link to its ho
 - made sure NixOS tests are [linked](https://nixos.org/manual/nixpkgs/unstable/#ssec-nixos-tests-linking) to the relevant packages
 - [ ] Tested compilation of all packages that depend on this change using `nix-shell -p nixpkgs-review --run "nixpkgs-review rev HEAD"`. Note: all changes have to be committed, also see [nixpkgs-review usage](https://github.com/Mic92/nixpkgs-review#usage)
 - [ ] Tested basic functionality of all binary files (usually in `./result/bin/`)
-- [24.05 Release Notes](https://github.com/NixOS/nixpkgs/blob/master/nixos/doc/manual/release-notes/rl-2405.section.md) (or backporting [23.05](https://github.com/NixOS/nixpkgs/blob/master/nixos/doc/manual/release-notes/rl-2305.section.md) and [23.11](https://github.com/NixOS/nixpkgs/blob/master/nixos/doc/manual/release-notes/rl-2311.section.md) Release notes)
+- [24.11 Release Notes](https://github.com/NixOS/nixpkgs/blob/master/nixos/doc/manual/release-notes/rl-2411.section.md) (or backporting [23.11](https://github.com/NixOS/nixpkgs/blob/master/nixos/doc/manual/release-notes/rl-2311.section.md) and [24.05](https://github.com/NixOS/nixpkgs/blob/master/nixos/doc/manual/release-notes/rl-2405.section.md) Release notes)
 - [ ] (Package updates) Added a release notes entry if the change is major or breaking
 - [ ] (Module updates) Added a release notes entry if the change is significant
 - [ ] (Module addition) Added a release notes entry if adding a new NixOS module

@@ -16,6 +16,17 @@
       - nixos/modules/services/x11/desktop-managers/cinnamon.nix
       - nixos/tests/cinnamon.nix
 
+"6.topic: dotnet":
+  - any:
+      - changed-files:
+          - any-glob-to-any-file:
+              - doc/languages-frameworks/dotnet.section.md
+              - maintainers/scripts/update-dotnet-lockfiles.nix
+              - pkgs/build-support/dotnet/**/*
+              - pkgs/development/compilers/dotnet/**/*
+              - pkgs/test/dotnet/**/*
+              - pkgs/top-level/dotnet-packages.nix
+
 "6.topic: emacs":
   - any:
       - changed-files:

@@ -20,7 +20,7 @@ jobs:
     steps:
       - uses: actions/checkout@44c2b7a8a4ea60a981eaca3cf939b5f4305c123b # v4.1.5
       - uses: cachix/install-nix-action@8887e596b4ee1134dae06b98d573bd674693f47c # v26
-      - uses: cachix/cachix-action@18cf96c7c98e048e10a83abd92116114cd8504be # v14
+      - uses: cachix/cachix-action@ad2ddac53f961de1989924296a1f236fcfbaa4fc # v15
         with:
           # This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
           name: nixpkgs-ci

@@ -14,16 +14,16 @@ on:
     # While `edited` is also triggered when the PR title/body is changed,
     # this PR action is fairly quick, and PR's don't get edited that often,
     # so it shouldn't be a problem
+    # There is a feature request for adding a `base_changed` event:
+    # https://github.com/orgs/community/discussions/35058
     types: [opened, synchronize, reopened, edited]
 
 permissions: {}
 
-# Create a check-by-name concurrency group based on the pull request number. If
-# an event triggers a run on the same PR while a previous run is still in
-# progress, the previous run will be canceled and the new one will start.
-concurrency:
-  group: check-by-name-${{ github.event.pull_request.number }}
-  cancel-in-progress: true
+# We don't use a concurrency group here, because the action is triggered quite often (due to the PR edit
+# trigger), and contributors would get notified on any canceled run.
+# There is a feature request for suppressing notifications on concurrency-canceled runs:
+# https://github.com/orgs/community/discussions/13015
 
 jobs:
   check:

@@ -22,7 +22,7 @@ jobs:
         with:
           # explicitly enable sandbox
          extra_nix_config: sandbox = true
-      - uses: cachix/cachix-action@18cf96c7c98e048e10a83abd92116114cd8504be # v14
+      - uses: cachix/cachix-action@ad2ddac53f961de1989924296a1f236fcfbaa4fc # v15
         with:
           # This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
           name: nixpkgs-ci

@@ -24,7 +24,7 @@ jobs:
         with:
           # explicitly enable sandbox
           extra_nix_config: sandbox = true
-      - uses: cachix/cachix-action@18cf96c7c98e048e10a83abd92116114cd8504be # v14
+      - uses: cachix/cachix-action@ad2ddac53f961de1989924296a1f236fcfbaa4fc # v15
         with:
           # This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
           name: nixpkgs-ci

@@ -39,6 +39,10 @@ jobs:
           into: staging-next-23.11
         - from: staging-next-23.11
           into: staging-23.11
+        - from: release-24.05
+          into: staging-next-24.05
+        - from: staging-next-24.05
+          into: staging-24.05
     name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
     steps:
       - uses: actions/checkout@44c2b7a8a4ea60a981eaca3cf939b5f4305c123b # v4.1.5

@@ -10,6 +10,7 @@ Robert Hensing <robert@roberthensing.nl> <roberth@users.noreply.github.com>
 Sandro Jäckel <sandro.jaeckel@gmail.com>
 Sandro Jäckel <sandro.jaeckel@gmail.com> <sandro.jaeckel@sap.com>
 superherointj <5861043+superherointj@users.noreply.github.com>
+Tomodachi94 <tomodachi94@protonmail.com> Tomo <68489118+Tomodachi94@users.noreply.github.com>
 Vladimír Čunát <v@cunat.cz> <vcunat@gmail.com>
 Vladimír Čunát <v@cunat.cz> <vladimir.cunat@nic.cz>
 Yifei Sun <ysun@hey.com> StepBroBD <Hi@StepBroBD.com>

@@ -1 +1 @@
-24.05
+24.11

@@ -330,7 +330,14 @@ Container system, boot system and library changes are some examples of the pull
 ## How to merge pull requests
 [pr-merge]: #how-to-merge-pull-requests
 
-The *Nixpkgs committers* are people who have been given
+To streamline automated updates, you can ask the nixpkgs-merge-bot to merge by commenting `@NixOS/nixpkgs-merge-bot merge`. The bot verifies the following conditions and refuses to merge otherwise:
+
+- the commenter who issued the command must be among the package's maintainers;
+- the package must reside in `pkgs/by-name`.
+
+In addition, nixpkgs-merge-bot requires all ofBorg checks (except the Darwin-related ones) to complete successfully before merging the pull request. If the checks are still underway, the bot waits for ofBorg to finish and then attempts the merge again.
+
+For other pull requests, the *Nixpkgs committers* are people who have been given
 permission to merge.
 
 It is possible for community members who have enough knowledge and experience on a special topic to contribute by merging pull requests.
 
@@ -359,7 +366,7 @@ See [Nix Channel Status](https://status.nixos.org/) for the current channels and
 Here's a brief overview of the main Git branches and what channels they're used for:
 
 - `master`: The main branch, used for the unstable channels such as `nixpkgs-unstable`, `nixos-unstable` and `nixos-unstable-small`.
-- `release-YY.MM` (e.g. `release-23.11`): The NixOS release branches, used for the stable channels such as `nixos-23.11`, `nixos-23.11-small` and `nixpkgs-23.11-darwin`.
+- `release-YY.MM` (e.g. `release-24.05`): The NixOS release branches, used for the stable channels such as `nixos-24.05`, `nixos-24.05-small` and `nixpkgs-24.05-darwin`.
 
 When a channel is updated, a corresponding Git branch is also updated to point to the corresponding commit.
 So e.g. the [`nixpkgs-unstable` branch](https://github.com/nixos/nixpkgs/tree/nixpkgs-unstable) corresponds to the Git commit from the [`nixpkgs-unstable` channel](https://channels.nixos.org/nixpkgs-unstable).

@@ -52,9 +52,9 @@ Nixpkgs and NixOS are built and tested by our continuous integration
 system, [Hydra](https://hydra.nixos.org/).
 
 * [Continuous package builds for unstable/master](https://hydra.nixos.org/jobset/nixos/trunk-combined)
-* [Continuous package builds for the NixOS 23.11 release](https://hydra.nixos.org/jobset/nixos/release-23.11)
+* [Continuous package builds for the NixOS 24.05 release](https://hydra.nixos.org/jobset/nixos/release-24.05)
 * [Tests for unstable/master](https://hydra.nixos.org/job/nixos/trunk-combined/tested#tabs-constituents)
-* [Tests for the NixOS 23.11 release](https://hydra.nixos.org/job/nixos/release-23.11/tested#tabs-constituents)
+* [Tests for the NixOS 24.05 release](https://hydra.nixos.org/job/nixos/release-24.05/tested#tabs-constituents)
 
 Artifacts successfully built with Hydra are published to cache at
 https://cache.nixos.org/. When successful build and test criteria are

@@ -85,14 +85,14 @@ let
 in
 make-disk-image {
   inherit pkgs lib;
-  config = evalConfig {
+  inherit (evalConfig {
     modules = [
       {
         fileSystems."/" = { device = "/dev/vda"; fsType = "ext4"; autoFormat = true; };
         boot.grub.device = "/dev/vda";
       }
     ];
-  };
+  }) config;
   format = "qcow2";
   onlyNixStore = false;
   partitionTableType = "legacy+gpt";
@@ -104,5 +104,3 @@ in
   memSize = 2048; # Qemu VM memory size in megabytes. Defaults to 1024M.
 }
 ```

@@ -40,6 +40,82 @@ If the `moduleNames` argument is omitted, `hasPkgConfigModules` will use `meta.p
 :::
 
+## `lycheeLinkCheck` {#tester-lycheeLinkCheck}
+
+Check a packaged static site's links with the [`lychee` package](https://search.nixos.org/packages?show=lychee&type=packages&query=lychee).
+
+You may use Nix to reproducibly build static websites, such as for software documentation.
+Some packages will install documentation in their `out` or `doc` outputs, or maybe you have a dedicated package where you've made your static site reproducible by running a generator, such as [Hugo](https://gohugo.io/) or [mdBook](https://rust-lang.github.io/mdBook/), in a derivation.
+
+If you have a static site that can be built with Nix, you can use `lycheeLinkCheck` to check that the hyperlinks in your site are correct, and do so as part of your Nix workflow and CI.
+
+:::{.example #ex-lycheelinkcheck}
+
+# Check hyperlinks in the `nix` documentation
+
+```nix
+testers.lycheeLinkCheck {
+  site = nix.doc + "/share/doc/nix/manual";
+}
+```
+
+:::
+
+### Return value {#tester-lycheeLinkCheck-return}
+
+This tester produces a package that does not produce useful outputs, but only succeeds if the hyperlinks in your site are correct. The build log will list the broken links.
+
+It has two modes:
+
+- Build the returned derivation; its build process will check that internal hyperlinks are correct. This runs in the sandbox, so it will not check external hyperlinks, but it is quick and reliable.
+- Invoke the `.online` attribute with [`nix run`](https://nixos.org/manual/nix/stable/command-ref/new-cli/nix3-run) ([experimental](https://nixos.org/manual/nix/stable/contributing/experimental-features#xp-feature-nix-command)). This runs outside the sandbox, and checks that both internal and external hyperlinks are correct.
+
+  Example:
+
+  ```shell
+  nix run nixpkgs#lychee.tests.ok.online
+  ```
+
+### Inputs {#tester-lycheeLinkCheck-inputs}
+
+`site` (path or derivation) {#tester-lycheeLinkCheck-param-site}
+
+: The path to the files to check.
+
+`remap` (attribute set, optional) {#tester-lycheeLinkCheck-param-remap}
+
+: An attribute set where the attribute names are regular expressions.
+  The values should be strings, derivations, or path values.
+
+  In the returned check's default configuration, external URLs are only checked when you run the `.online` attribute.
+  By adding remappings, you can check offline that URLs to external resources are correct, by providing a stand-in from the file system.
+
+  Before checking the existence of a URL, the regular expressions are matched and replaced by their corresponding values.
+
+  Example:
+
+  ```nix
+  {
+    "https://nix\\.dev/manual/nix/[a-z0-9.-]*" = "${nix.doc}/share/doc/nix/manual";
+    "https://nixos\\.org/manual/nix/(un)?stable" = "${emptyDirectory}/placeholder-to-disallow-old-nix-docs-urls";
+  }
+  ```
+
+  Store paths in the attribute values are automatically prefixed with `file://`, because lychee requires this for paths in the file system.
+  If this is a problem, or if you need to control the order in which replacements are performed, use `extraConfig.remap` instead.
+
+`extraConfig` (attribute set) {#tester-lycheeLinkCheck-param-extraConfig}
+
+: Extra configuration to pass to `lychee` in its [configuration file](https://github.com/lycheeverse/lychee/blob/master/lychee.example.toml).
+  It is automatically [translated](https://nixos.org/manual/nixos/stable/index.html#sec-settings-nix-representable) to TOML.
+
+  Example: `{ "include_verbatim" = true; }`
+
+`lychee` (derivation, optional) {#tester-lycheeLinkCheck-param-lychee}
+
+: The `lychee` package to use.
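Taken together, the inputs above can be combined into a single check. The following sketch is hypothetical: the `docs` attribute name and the remapped URL are made up for illustration, and only the parameters documented above are assumed:

```nix
testers.lycheeLinkCheck {
  # hypothetical derivation containing the rendered site
  site = docs;
  remap = {
    # check links into the Nix manual against a local copy instead of the network
    "https://nix\\.dev/manual/nix/[a-z0-9.-]*" = "${nix.doc}/share/doc/nix/manual";
  };
  extraConfig = {
    # also check links that appear inside code blocks
    include_verbatim = true;
  };
}
```

Building the result checks internal links in the sandbox; running its `.online` attribute additionally checks external links.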
 ## `testVersion` {#tester-testVersion}
 
 Checks that the output from running a command contains the specified version string in it as a whole word.
 
@@ -129,7 +205,7 @@ runCommand "example" {
 :::
 
-## `testEqualContents` {#tester-equalContents}
+## `testEqualContents` {#tester-testEqualContents}
 
 Check that two paths have the same contents.

@@ -105,7 +105,15 @@ in pkgs.stdenv.mkDerivation {
     ln -s ${optionsDoc.optionsJSON}/share/doc/nixos/options.json ./config-options.json
   '';
 
-  buildPhase = ''
+  buildPhase = let
+    pythonInterpreterTable = pkgs.callPackage ./doc-support/python-interpreter-table.nix {};
+    pythonSection = with lib.strings; replaceStrings
+      [ "@python-interpreter-table@" ]
+      [ pythonInterpreterTable ]
+      (readFile ./languages-frameworks/python.section.md);
+  in ''
+    cp ${builtins.toFile "python.section.md" pythonSection} ./languages-frameworks/python.section.md
+
     cat \
       ./functions/library.md.in \
       ${lib-docs}/index.md \

@@ -0,0 +1,63 @@
+# For debugging, run in this directory:
+# nix eval --impure --raw --expr 'import ./python-interpreter-table.nix {}'
+{ pkgs ? (import ../.. { config = { }; overlays = []; }) }:
+let
+  lib = pkgs.lib;
+  inherit (lib.attrsets) attrNames filterAttrs;
+  inherit (lib.lists) elem filter map naturalSort reverseList;
+  inherit (lib.strings) concatStringsSep;
+
+  isPythonInterpreter = name:
+    /* NB: Package names that don't follow the regular expression:
+       - `python-cosmopolitan` is not part of `pkgs.pythonInterpreters`.
+       - `_prebuilt` interpreters are used for bootstrapping internally.
+       - `python3Minimal` contains python packages, left behind conservatively.
+       - `rustpython` lacks `pythonVersion` and `implementation`.
+    */
+    (lib.strings.match "(pypy|python)([[:digit:]]*)" name) != null;
+
+  interpreterName = pname:
+    let
+      cuteName = {
+        cpython = "CPython";
+        pypy = "PyPy";
+      };
+      interpreter = pkgs.${pname};
+    in
+    "${cuteName.${interpreter.implementation}} ${interpreter.pythonVersion}";
+
+  interpreters = reverseList (naturalSort (
+    filter isPythonInterpreter (attrNames pkgs.pythonInterpreters)
+  ));
+
+  aliases = pname:
+    attrNames (
+      filterAttrs (name: value:
+        isPythonInterpreter name
+        && name != pname
+        && interpreterName name == interpreterName pname
+      ) pkgs
+    );
+
+  result = map (pname: {
+    inherit pname;
+    aliases = aliases pname;
+    interpreter = interpreterName pname;
+  }) interpreters;
+
+  toMarkdown = data:
+    let
+      line = package: ''
+        | ${package.pname} | ${join ", " package.aliases or [ ]} | ${package.interpreter} |
+      '';
+    in
+    join "" (map line data);
+
+  join = lib.strings.concatStringsSep;
+in
+''
+  | Package | Aliases | Interpreter |
+  |---------|---------|------------|
+  ${toMarkdown result}
+''

@@ -117,7 +117,6 @@ For more detail about managing the `deps.nix` file, see [Generating and updating
 * `useDotnetFromEnv` will change the binary wrapper so that it uses the .NET from the environment. The runtime specified by `dotnet-runtime` is given as a fallback in case no .NET is installed in the user's environment. This is most useful for .NET global tools and LSP servers, which often extend the .NET CLI and their runtime should match the users' .NET runtime.
 * `dotnet-sdk` is useful in cases where you need to change what dotnet SDK is being used. You can also set this to the result of `dotnetSdkPackages.combinePackages`, if the project uses multiple SDKs to build.
 * `dotnet-runtime` is useful in cases where you need to change what dotnet runtime is being used. This can be either a regular dotnet runtime, or an aspnetcore.
-* `dotnet-test-sdk` is useful in cases where unit tests expect a different dotnet SDK. By default, this is set to the `dotnet-sdk` attribute.
 * `testProjectFile` is useful in cases where the regular project file does not contain the unit tests. It gets restored and built, but not installed. You may need to regenerate your nuget lockfile after setting this. Note that if set, only tests from this project are executed.
 * `disabledTests` is used to disable running specific unit tests. This gets passed as: `dotnet test --filter "FullyQualifiedName!={}"`, to ensure compatibility with all unit test frameworks.
 * `dotnetRestoreFlags` can be used to pass flags to `dotnet restore`.
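As a sketch of how these attributes fit together, here is a hypothetical `buildDotnetModule` package (the name, source hash, and project file paths are made up; only the attributes documented above are assumed):

```nix
{ buildDotnetModule, fetchFromGitHub, dotnetCorePackages }:

buildDotnetModule rec {
  pname = "my-tool"; # hypothetical package
  version = "1.0.0";

  src = fetchFromGitHub {
    owner = "example";
    repo = "my-tool";
    rev = "v${version}";
    hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="; # placeholder
  };

  projectFile = "src/MyTool/MyTool.csproj";
  testProjectFile = "tests/MyTool.Tests/MyTool.Tests.csproj";
  nugetDeps = ./deps.nix; # generated lockfile, see above

  # pin the SDK used for building and the runtime referenced by the wrapper
  dotnet-sdk = dotnetCorePackages.sdk_8_0;
  dotnet-runtime = dotnetCorePackages.runtime_8_0;

  # skip a test that needs network access
  disabledTests = [ "MyTool.Tests.NetworkTest" ];
}
```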

@@ -7,7 +7,7 @@ The following example shows a Nim program that depends only on Nim libraries:
 ```nix
 { lib, buildNimPackage, fetchFromGitHub }:
 
-buildNimPackage { } (finalAttrs: {
+buildNimPackage (finalAttrs: {
   pname = "ttop";
   version = "1.2.7";
 
@@ -80,7 +80,6 @@ For example, to propagate a dependency on SDL2 for lockfiles that select the Nim
   /* … */
   sdl2 =
     lockAttrs:
-    finalAttrs:
     { buildInputs ? [ ], ... }:
     {
       buildInputs = buildInputs ++ [ SDL2 ];
@@ -89,9 +88,8 @@ For example, to propagate a dependency on SDL2 for lockfiles that select the Nim
 }
 ```
 
-The annotations in the `nim-overrides.nix` set are functions that take three arguments and return a new attrset to be overlaid on the package being built.
+The annotations in the `nim-overrides.nix` set are functions that take two arguments and return a new attrset to be overlaid on the package being built.
 
 - lockAttrs: the attrset for this library from within a lockfile. This can be used to implement library version constraints, such as marking libraries as broken or insecure.
-- finalAttrs: the final attrset passed by `buildNimPackage` to `stdenv.mkDerivation`.
 - prevAttrs: the attrset produced by initial arguments to `buildNimPackage` and any preceding lockfile overlays.
 
 ### Overriding a Nim library override {#nim-lock-overrides-overrides}

@@ -4,16 +4,7 @@
 
 ### Interpreters {#interpreters}
 
-| Package   | Aliases         | Interpreter  |
-|-----------|-----------------|--------------|
-| python27  | python2, python | CPython 2.7  |
-| python39  |                 | CPython 3.9  |
-| python310 |                 | CPython 3.10 |
-| python311 | python3         | CPython 3.11 |
-| python312 |                 | CPython 3.12 |
-| python313 |                 | CPython 3.13 |
-| pypy27    | pypy2, pypy     | PyPy2.7      |
-| pypy39    | pypy3           | PyPy 3.9     |
+@python-interpreter-table@
 
 The Nix expressions for the interpreters can be found in
 `pkgs/development/interpreters/python`.

@@ -431,6 +431,10 @@ div.appendix .informaltable td {
   padding: 0.5rem;
 }
 
+div.book .variablelist .term,
+div.appendix .variablelist .term {
+  font-weight: 500;
+}
+
 /*
 This relies on highlight.js applying certain classes on the prompts.
 For more details, see https://highlightjs.readthedocs.io/en/latest/css-classes-reference.html#stylable-scopes

@@ -1 +1 @@
-24.05
+24.11

@@ -90,7 +90,16 @@ rec {
   mkOption ?
     k: v: if v == null
           then []
-          else [ (mkOptionName k) (lib.generators.mkValueStringDefault {} v) ]
+          else if optionValueSeparator == null then
+            [ (mkOptionName k) (lib.generators.mkValueStringDefault {} v) ]
+          else
+            [ "${mkOptionName k}${optionValueSeparator}${lib.generators.mkValueStringDefault {} v}" ],
+
+  # how to separate an option from its value;
+  # by default, there is no separator, so option `-c` and value `5`
+  # would become ["-c" "5"].
+  # This is useful if the command requires equals, for example, `-c=5`.
+  optionValueSeparator ? null
 }:
 options:
   let
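Based on the defaults and the comment above, the effect of `optionValueSeparator` on `lib.cli.toGNUCommandLine` (the main consumer of these defaults) would look roughly like this sketch; the outputs follow directly from the documented behavior:

```nix
lib.cli.toGNUCommandLine { } { c = 5; }
# => [ "-c" "5" ]

lib.cli.toGNUCommandLine { optionValueSeparator = "="; } { c = 5; }
# => [ "-c=5" ]
```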

@ -1,16 +1,17 @@
@@ -1,16 +1,17 @@
-/* Collection of functions useful for debugging
-   broken nix expressions.
+/**
+  Collection of functions useful for debugging
+  broken nix expressions.
 
   * `trace`-like functions take two values, print
     the first to stderr and return the second.
   * `traceVal`-like functions take one argument
     which is both printed and returned.
   * `traceSeq`-like functions fully evaluate their
     traced value before printing (not just to weak
     head normal form like trace does by default).
   * Functions that end in `-Fn` take an additional
     function as their first argument, which is applied
     to the traced value before it is printed.
 */
 { lib }:
 
 let
@ -32,79 +33,190 @@ rec {
# -- TRACING -- # -- TRACING --
/* Conditionally trace the supplied message, based on a predicate. /**
Conditionally trace the supplied message, based on a predicate.
Type: traceIf :: bool -> string -> a -> a
Example: # Inputs
traceIf true "hello" 3
trace: hello `pred`
=> 3
: Predicate to check
`msg`
: Message that should be traced
`x`
: Value to return
# Type
```
traceIf :: bool -> string -> a -> a
```
# Examples
:::{.example}
## `lib.debug.traceIf` usage example
```nix
traceIf true "hello" 3
trace: hello
=> 3
```
:::
*/ */
traceIf = traceIf =
# Predicate to check
pred: pred:
# Message that should be traced
msg: msg:
# Value to return
x: if pred then trace msg x else x; x: if pred then trace msg x else x;
/* Trace the supplied value after applying a function to it, and /**
return the original value. Trace the supplied value after applying a function to it, and
return the original value.
Type: traceValFn :: (a -> b) -> a -> a
Example: # Inputs
traceValFn (v: "mystring ${v}") "foo"
trace: mystring foo `f`
=> "foo"
: Function to apply
`x`
: Value to trace and return
# Type
```
traceValFn :: (a -> b) -> a -> a
```
# Examples
:::{.example}
## `lib.debug.traceValFn` usage example
```nix
traceValFn (v: "mystring ${v}") "foo"
trace: mystring foo
=> "foo"
```
:::
*/ */
traceValFn = traceValFn =
# Function to apply
f: f:
# Value to trace and return
x: trace (f x) x; x: trace (f x) x;
/* Trace the supplied value and return it. /**
Trace the supplied value and return it.
Type: traceVal :: a -> a # Inputs
Example: `x`
traceVal 42
# trace: 42 : Value to trace and return
=> 42
# Type
```
traceVal :: a -> a
```
# Examples
:::{.example}
## `lib.debug.traceVal` usage example
```nix
traceVal 42
# trace: 42
=> 42
```
:::
*/ */
traceVal = traceValFn id; traceVal = traceValFn id;
/* `builtins.trace`, but the value is `builtins.deepSeq`ed first. /**
`builtins.trace`, but the value is `builtins.deepSeq`ed first.
Type: traceSeq :: a -> b -> b
Example: # Inputs
trace { a.b.c = 3; } null
trace: { a = <CODE>; } `x`
=> null
traceSeq { a.b.c = 3; } null : The value to trace
trace: { a = { b = { c = 3; }; }; }
=> null `y`
: The value to return
# Type
```
traceSeq :: a -> b -> b
```
# Examples
:::{.example}
## `lib.debug.traceSeq` usage example
```nix
trace { a.b.c = 3; } null
trace: { a = <CODE>; }
=> null
traceSeq { a.b.c = 3; } null
trace: { a = { b = { c = 3; }; }; }
=> null
```
:::
*/ */
traceSeq = traceSeq =
# The value to trace
x: x:
# The value to return
y: trace (builtins.deepSeq x x) y; y: trace (builtins.deepSeq x x) y;
  /**
    Like `traceSeq`, but only evaluate down to depth n.
    This is very useful because lots of `traceSeq` usages
    lead to an infinite recursion.

    # Inputs

    `depth`

    : 1\. Function argument

    `x`

    : 2\. Function argument

    `y`

    : 3\. Function argument

    # Type

    ```
    traceSeqN :: Int -> a -> b -> b
    ```

    # Examples
    :::{.example}
    ## `lib.debug.traceSeqN` usage example

    ```nix
    traceSeqN 2 { a.b.c = 3; } null
    trace: { a = { b = {}; }; }
    => null
    ```

    :::
  */
  traceSeqN = depth: x: y:
    let snip = v: if isList v then noQuotes "[]" v
                  else if isAttrs v then noQuotes "{}" v
@@ -118,41 +230,115 @@ rec {
    in trace (generators.toPretty { allowPrettyValues = true; }
         (modify depth snip x)) y;
  /**
    A combination of `traceVal` and `traceSeq` that applies a
    provided function to the value to be traced after `deepSeq`ing
    it.

    # Inputs

    `f`

    : Function to apply

    `v`

    : Value to trace
  */
  traceValSeqFn =
    f:
    v: traceValFn f (builtins.deepSeq v v);

  /**
    A combination of `traceVal` and `traceSeq`.

    # Inputs

    `v`

    : Value to trace
  */
  traceValSeq = traceValSeqFn id;
  /**
    A combination of `traceVal` and `traceSeqN` that applies a
    provided function to the value to be traced.

    # Inputs

    `f`

    : Function to apply

    `depth`

    : 2\. Function argument

    `v`

    : Value to trace
  */
  traceValSeqNFn =
    f:
    depth:
    v: traceSeqN depth (f v) v;

  /**
    A combination of `traceVal` and `traceSeqN`.

    # Inputs

    `depth`

    : 1\. Function argument

    `v`

    : Value to trace
  */
  traceValSeqN = traceValSeqNFn id;
  /**
    Trace the input and output of a function `f` named `name`,
    both down to `depth`.

    This is useful for adding around a function call,
    to see the before/after of values as they are transformed.

    # Inputs

    `depth`

    : 1\. Function argument

    `name`

    : 2\. Function argument

    `f`

    : 3\. Function argument

    `v`

    : 4\. Function argument

    # Examples
    :::{.example}
    ## `lib.debug.traceFnSeqN` usage example

    ```nix
    traceFnSeqN 2 "id" (x: x) { a.b.c = 3; }
    trace: { fn = "id"; from = { a.b = {}; }; to = { a.b = {}; }; }
    => { a.b.c = 3; }
    ```

    :::
  */
  traceFnSeqN = depth: name: f: v:
    let res = f v;
@@ -168,66 +354,82 @@ rec {
  # -- TESTING --

  /**
    Evaluates a set of tests.

    A test is an attribute set `{expr, expected}`,
    denoting an expression and its expected result.

    The result is a `list` of __failed tests__, each represented as
    `{name, expected, result}`,

    - expected
      - What was passed as `expected`
    - result
      - The actual `result` of the test

    Used for regression testing of the functions in lib; see
    tests.nix for more examples.

    Important: Only attributes that start with `test` are executed.

    - If you want to run only a subset of the tests add the attribute `tests = ["testName"];`

    # Inputs

    `tests`

    : Tests to run

    # Type

    ```
    runTests :: {
      tests = [ String ];
      ${testName} :: {
        expr :: a;
        expected :: a;
      };
    }
    ->
    [
      {
        name :: String;
        expected :: a;
        result :: a;
      }
    ]
    ```

    # Examples
    :::{.example}
    ## `lib.debug.runTests` usage example

    ```nix
    runTests {
      testAndOk = {
        expr = lib.and true false;
        expected = false;
      };
      testAndFail = {
        expr = lib.and true false;
        expected = true;
      };
    }
    ->
    [
      {
        name = "testAndFail";
        expected = true;
        result = false;
      }
    ]
    ```

    :::
  */
  runTests =
    tests: concatLists (attrValues (mapAttrs (name: test:
      let testsToRun = if tests ? tests then tests.tests else [];
      in if (substring 0 4 name == "test" || elem name testsToRun)
@@ -237,10 +439,26 @@ rec {
      then [ { inherit name; expected = test.expected; result = test.expr; } ]
      else [] ) tests));

  /**
    Create a test assuming that list elements are `true`.

    # Inputs

    `expr`

    : 1\. Function argument

    # Examples
    :::{.example}
    ## `lib.debug.testAllTrue` usage example

    ```nix
    { testX = allTrue [ true ]; }
    ```

    :::
  */
  testAllTrue = expr: { inherit expr; expected = map (x: true) expr; };
}

View file

@@ -1,4 +1,4 @@
/**
  <!-- This anchor is here for backwards compatibility -->
  []{#sec-fileset}
@@ -6,7 +6,7 @@
  A file set is a (mathematical) set of local files that can be added to the Nix store for use in Nix derivations.
  File sets are easy and safe to use, providing obvious and composable semantics with good error messages to prevent mistakes.

  # Overview {#sec-fileset-overview}

  Basics:
  - [Implicit coercion from paths to file sets](#sec-fileset-path-coercion)
@@ -58,7 +58,7 @@
  see [this issue](https://github.com/NixOS/nixpkgs/issues/266356) to request it.

  # Implicit coercion from paths to file sets {#sec-fileset-path-coercion}

  All functions accepting file sets as arguments can also accept [paths](https://nixos.org/manual/nix/stable/language/values.html#type-path) as arguments.
  Such path arguments are implicitly coerced to file sets containing all files under that path:
@@ -78,7 +78,7 @@
  This is in contrast to using [paths in string interpolation](https://nixos.org/manual/nix/stable/language/values.html#type-path), which does add the entire referenced path to the store.
  :::

  ## Example {#sec-fileset-path-coercion-example}

  Assume we are in a local directory with a file hierarchy like this:
  ```
@@ -157,17 +157,34 @@ let
in {
  /**
    Create a file set from a path that may or may not exist:

    - If the path does exist, the path is [coerced to a file set](#sec-fileset-path-coercion).
    - If the path does not exist, a file set containing no files is returned.

    # Inputs

    `path`

    : 1\. Function argument

    # Type

    ```
    maybeMissing :: Path -> FileSet
    ```

    # Examples
    :::{.example}
    ## `lib.fileset.maybeMissing` usage example

    ```nix
    # All files in the current directory, but excluding main.o if it exists
    difference ./. (maybeMissing ./main.o)
    ```

    :::
  */
  maybeMissing =
    path:
@@ -183,7 +200,7 @@ in {
    else
      _singleton path;
  /**
    Incrementally evaluate and trace a file set in a pretty way.
    This function is only intended for debugging purposes.
    The exact tracing format is unspecified and may change.
@@ -194,27 +211,44 @@ in {
    This variant is useful for tracing file sets in the Nix repl.

    # Inputs

    `fileset`

    : The file set to trace.

      This argument can also be a path,
      which gets [implicitly coerced to a file set](#sec-fileset-path-coercion).

    `val`

    : The value to return.

    # Type

    ```
    trace :: FileSet -> Any -> Any
    ```

    # Examples
    :::{.example}
    ## `lib.fileset.trace` usage example

    ```nix
    trace (unions [ ./Makefile ./src ./tests/run.sh ]) null
    =>
    trace: /home/user/src/myProject
    trace: - Makefile (regular)
    trace: - src (all files in directory)
    trace: - tests
    trace:   - run.sh (regular)
    null
    ```

    :::
  */
  trace = fileset:
    let
      # "fileset" would be a better name, but that would clash with the argument name,
      # and we cannot change that because of https://github.com/nix-community/nixdoc/issues/76
@@ -224,7 +258,7 @@ in {
      (_printFileset actualFileset)
      (x: x);
  /**
    Incrementally evaluate and trace a file set in a pretty way.
    This function is only intended for debugging purposes.
    The exact tracing format is unspecified and may change.
@@ -234,34 +268,47 @@ in {
    This variant is useful for tracing file sets passed as arguments to other functions.

    # Inputs

    `fileset`

    : The file set to trace and return.

      This argument can also be a path,
      which gets [implicitly coerced to a file set](#sec-fileset-path-coercion).

    # Type

    ```
    traceVal :: FileSet -> FileSet
    ```

    # Examples
    :::{.example}
    ## `lib.fileset.traceVal` usage example

    ```nix
    toSource {
      root = ./.;
      fileset = traceVal (unions [
        ./Makefile
        ./src
        ./tests/run.sh
      ]);
    }
    =>
    trace: /home/user/src/myProject
    trace: - Makefile (regular)
    trace: - src (all files in directory)
    trace: - tests
    trace:   - run.sh (regular)
    "/nix/store/...-source"
    ```

    :::
  */
  traceVal = fileset:
    let
      # "fileset" would be a better name, but that would clash with the argument name,
      # and we cannot change that because of https://github.com/nix-community/nixdoc/issues/76
@@ -273,7 +320,7 @@ in {
      # but that would then duplicate work for consumers of the fileset, because then they have to coerce it again
      actualFileset;
  /**
    Add the local files contained in `fileset` to the store as a single [store path](https://nixos.org/manual/nix/stable/glossary#gloss-store-path) rooted at `root`.

    The result is the store path as a string-like value, making it usable e.g. as the `src` of a derivation, or in string interpolation:
@@ -286,63 +333,13 @@ in {
    The name of the store path is always `source`.

    # Inputs

    Takes an attribute set with the following attributes

    `root` (Path; _required_)

    : The local directory [path](https://nixos.org/manual/nix/stable/language/values.html#type-path) that will correspond to the root of the resulting store path.
      Paths in [strings](https://nixos.org/manual/nix/stable/language/values.html#type-string), including Nix store paths, cannot be passed as `root`.
      `root` has to be a directory.
@@ -350,10 +347,10 @@ in {
      Changing `root` only affects the directory structure of the resulting store path, it does not change which files are added to the store.
      The only way to change which files get added to the store is by changing the `fileset` attribute.
      :::

    `fileset` (FileSet; _required_)

    : The file set whose files to import into the store.
      File sets can be created using other functions in this library.
      This argument can also be a path,
      which gets [implicitly coerced to a file set](#sec-fileset-path-coercion).
@@ -362,7 +359,72 @@ in {
      If a directory does not recursively contain any file, it is omitted from the store path contents.
      :::

    # Type

    ```
    toSource :: {
      root :: Path,
      fileset :: FileSet,
    } -> SourceLike
    ```

    # Examples
    :::{.example}
    ## `lib.fileset.toSource` usage example

    ```nix
    # Import the current directory into the store
    # but only include files under ./src
    toSource {
      root = ./.;
      fileset = ./src;
    }
    => "/nix/store/...-source"

    # Import the current directory into the store
    # but only include ./Makefile and all files under ./src
    toSource {
      root = ./.;
      fileset = union
        ./Makefile
        ./src;
    }
    => "/nix/store/...-source"

    # Trying to include a file outside the root will fail
    toSource {
      root = ./.;
      fileset = unions [
        ./Makefile
        ./src
        ../LICENSE
      ];
    }
    => <error>

    # The root needs to point to a directory that contains all the files
    toSource {
      root = ../.;
      fileset = unions [
        ./Makefile
        ./src
        ../LICENSE
      ];
    }
    => "/nix/store/...-source"

    # The root has to be a local filesystem path
    toSource {
      root = "/nix/store/...-source";
      fileset = ./.;
    }
    => <error>
    ```

    :::
  */
  toSource = {
    root,
    fileset,
  }:
    let
@@ -418,7 +480,7 @@ in {
    };
  /**
    The list of file paths contained in the given file set.

    :::{.note}
@@ -432,24 +494,37 @@ in {
    The resulting list of files can be turned back into a file set using [`lib.fileset.unions`](#function-library-lib.fileset.unions).

    # Inputs

    `fileset`

    : The file set whose file paths to return. This argument can also be a path, which gets [implicitly coerced to a file set](#sec-fileset-path-coercion).

    # Type

    ```
    toList :: FileSet -> [ Path ]
    ```

    # Examples
    :::{.example}
    ## `lib.fileset.toList` usage example

    ```nix
    toList ./.
    [ ./README.md ./Makefile ./src/main.c ./src/main.h ]

    toList (difference ./. ./src)
    [ ./README.md ./Makefile ]
    ```

    :::
  */
  toList = fileset:
    _toList (_coerce "lib.fileset.toList: Argument" fileset);
  /**
    The file set containing all files that are in either of two given file sets.
    This is the same as [`unions`](#function-library-lib.fileset.unions),
    but takes just two file sets instead of a list.
@@ -458,26 +533,41 @@ in {
    The given file sets are evaluated as lazily as possible,
    with the first argument being evaluated first if needed.

    # Inputs

    `fileset1`

    : The first file set. This argument can also be a path, which gets [implicitly coerced to a file set](#sec-fileset-path-coercion).

    `fileset2`

    : The second file set. This argument can also be a path, which gets [implicitly coerced to a file set](#sec-fileset-path-coercion).

    # Type

    ```
    union :: FileSet -> FileSet -> FileSet
    ```

    # Examples
    :::{.example}
    ## `lib.fileset.union` usage example

    ```nix
    # Create a file set containing the file `Makefile`
    # and all files recursively in the `src` directory
    union ./Makefile ./src

    # Create a file set containing the file `Makefile`
    # and the LICENSE file from the parent directory
    union ./Makefile ../LICENSE
    ```

    :::
  */
  union =
    fileset1:
    fileset2:
    _unionMany
      (_coerceMany "lib.fileset.union" [
@@ -491,7 +581,7 @@ in {
        }
      ]);
  /**
    The file set containing all files that are in any of the given file sets.
    This is the same as [`union`](#function-library-lib.fileset.unions),
    but takes a list of file sets instead of just two.
@@ -500,32 +590,46 @@ in {
    The given file sets are evaluated as lazily as possible,
    with earlier elements being evaluated first if needed.

    # Inputs

    `filesets`

    : A list of file sets. The elements can also be paths, which get [implicitly coerced to file sets](#sec-fileset-path-coercion).

    # Type

    ```
    unions :: [ FileSet ] -> FileSet
    ```

    # Examples
    :::{.example}
    ## `lib.fileset.unions` usage example

    ```nix
    # Create a file set containing selected files
    unions [
      # Include the single file `Makefile` in the current directory
      # This errors if the file doesn't exist
      ./Makefile

      # Recursively include all files in the `src/code` directory
      # If this directory is empty this has no effect
      ./src/code

      # Include the files `run.sh` and `unit.c` from the `tests` directory
      ./tests/run.sh
      ./tests/unit.c

      # Include the `LICENSE` file from the parent directory
      ../LICENSE
    ]
    ```

    :::
  */
  unions =
    filesets:
    if ! isList filesets then
      throw ''
@@ -541,28 +645,43 @@ in {
        _unionMany
      ];
  /**
    The file set containing all files that are in both of two given file sets.
    See also [Intersection (set theory)](https://en.wikipedia.org/wiki/Intersection_(set_theory)).

    The given file sets are evaluated as lazily as possible,
    with the first argument being evaluated first if needed.

    # Inputs

    `fileset1`

    : The first file set. This argument can also be a path, which gets [implicitly coerced to a file set](#sec-fileset-path-coercion).

    `fileset2`

    : The second file set. This argument can also be a path, which gets [implicitly coerced to a file set](#sec-fileset-path-coercion).

    # Type

    ```
    intersection :: FileSet -> FileSet -> FileSet
    ```

    # Examples
    :::{.example}
    ## `lib.fileset.intersection` usage example

    ```nix
    # Limit the selected files to the ones in ./., so only ./src and ./Makefile
    intersection ./. (unions [ ../LICENSE ./src ./Makefile ])
    ```

    :::
  */
  intersection =
    fileset1:
    fileset2:
    let
      filesets = _coerceMany "lib.fileset.intersection" [
@@ -580,41 +699,52 @@ in {
      (elemAt filesets 0)
      (elemAt filesets 1);
  /**
    The file set containing all files from the first file set that are not in the second file set.
    See also [Difference (set theory)](https://en.wikipedia.org/wiki/Complement_(set_theory)#Relative_complement).

    The given file sets are evaluated as lazily as possible,
    with the first argument being evaluated first if needed.

    # Inputs

    `positive`

    : The positive file set. The result can only contain files that are also in this file set. This argument can also be a path, which gets [implicitly coerced to a file set](#sec-fileset-path-coercion).

    `negative`

    : The negative file set. The result will never contain files that are also in this file set. This argument can also be a path, which gets [implicitly coerced to a file set](#sec-fileset-path-coercion).

    # Type

    ```
    difference :: FileSet -> FileSet -> FileSet
    ```

    # Examples
    :::{.example}
    ## `lib.fileset.difference` usage example

    ```nix
    # Create a file set containing all files from the current directory,
    # except ones under ./tests
    difference ./. ./tests

    let
      # A set of Nix-related files
      nixFiles = unions [ ./default.nix ./nix ./tests/default.nix ];
    in
    # Create a file set containing all files under ./tests, except ones in `nixFiles`,
    # meaning only without ./tests/default.nix
    difference ./tests nixFiles
    ```

    :::
  */
  difference =
    positive:
    negative:
    let
      filesets = _coerceMany "lib.fileset.difference" [
@@ -632,36 +762,15 @@ in {
      (elemAt filesets 0)
      (elemAt filesets 1);
  /**
    Filter a file set to only contain files matching some predicate.

    # Inputs

    `predicate`

    : The predicate function to call on all files contained in given file set.
      A file is included in the resulting file set if this function returns true for it.

      This function is called with an attribute set containing these attributes:
@@ -678,9 +787,47 @@ in {
      `hasExt "gitignore"` is true.

      Other attributes may be added in the future.

    `path`

    : The path whose files to filter

    # Type

    ```
    fileFilter ::
      ({
        name :: String,
        type :: String,
        hasExt :: String -> Bool,
        ...
      } -> Bool)
      -> Path
      -> FileSet
    ```

    # Examples
    :::{.example}
    ## `lib.fileset.fileFilter` usage example

    ```nix
    # Include all regular `default.nix` files in the current directory
    fileFilter (file: file.name == "default.nix") ./.

    # Include all non-Nix files from the current directory
    fileFilter (file: ! file.hasExt "nix") ./.

    # Include all files that start with a "." in the current directory
    fileFilter (file: hasPrefix "." file.name) ./.

    # Include all regular files (not symlinks or others) in the current directory
    fileFilter (file: file.type == "regular") ./.
    ```

    :::
  */
  fileFilter =
    predicate:
    path:
    if ! isFunction predicate then
      throw ''
@@ -699,23 +846,37 @@ in {
    else
      _fileFilter predicate path;
/* /**
Create a file set with the same files as a `lib.sources`-based value. Create a file set with the same files as a `lib.sources`-based value.
This does not import any of the files into the store. This does not import any of the files into the store.
This can be used to gradually migrate from `lib.sources`-based filtering to `lib.fileset`. This can be used to gradually migrate from `lib.sources`-based filtering to `lib.fileset`.
A file set can be turned back into a source using [`toSource`](#function-library-lib.fileset.toSource). A file set can be turned back into a source using [`toSource`](#function-library-lib.fileset.toSource).
:::{.note} :::{.note}
File sets cannot represent empty directories. File sets cannot represent empty directories.
Turning the result of this function back into a source using `toSource` will therefore not preserve empty directories. Turning the result of this function back into a source using `toSource` will therefore not preserve empty directories.
::: :::
Type:
# Inputs
`source`
: 1\. Function argument
# Type
```
fromSource :: SourceLike -> FileSet fromSource :: SourceLike -> FileSet
```
Example: # Examples
:::{.example}
## `lib.fileset.fromSource` usage example
```nix
# There's no cleanSource-like function for file sets yet, # There's no cleanSource-like function for file sets yet,
# but we can just convert cleanSource to a file set and use it that way # but we can just convert cleanSource to a file set and use it that way
toSource { toSource {
@ -740,6 +901,9 @@ in {
./Makefile ./Makefile
./src ./src
]); ]);
```
:::
*/
fromSource = source:
let
@@ -768,27 +932,41 @@ in {
# If there's no filter, no need to run the expensive conversion, all subpaths will be included
_singleton path;
/* /**
Create a file set containing all [Git-tracked files](https://git-scm.com/book/en/v2/Git-Basics-Recording-Changes-to-the-Repository) in a repository.
This function behaves like [`gitTrackedWith { }`](#function-library-lib.fileset.gitTrackedWith) - using the defaults.
Type:
gitTracked :: Path -> FileSet
Example: # Inputs
# Include all files tracked by the Git repository in the current directory
gitTracked ./.
# Include only files tracked by the Git repository in the parent directory `path`
# that are also in the current directory
intersection ./. (gitTracked ../.) : The [path](https://nixos.org/manual/nix/stable/language/values#type-path) to the working directory of a local Git repository.
This directory must contain a `.git` file or subdirectory.
# Type
```
gitTracked :: Path -> FileSet
```
# Examples
:::{.example}
## `lib.fileset.gitTracked` usage example
```nix
# Include all files tracked by the Git repository in the current directory
gitTracked ./.
# Include only files tracked by the Git repository in the parent directory
# that are also in the current directory
intersection ./. (gitTracked ../.)
```
:::
*/
gitTracked =
/*
The [path](https://nixos.org/manual/nix/stable/language/values#type-path) to the working directory of a local Git repository.
This directory must contain a `.git` file or subdirectory.
*/
path:
_fromFetchGit
"gitTracked"
@@ -796,7 +974,7 @@ in {
path
{};
/* /**
Create a file set containing all [Git-tracked files](https://git-scm.com/book/en/v2/Git-Basics-Recording-Changes-to-the-Repository) in a repository.
The first argument allows configuration with an attribute set,
while the second argument is the path to the Git working tree.
@@ -820,27 +998,40 @@ in {
This may change in the future.
:::
Type:
gitTrackedWith :: { recurseSubmodules :: Bool ? false } -> Path -> FileSet
Example: # Inputs
# Include all files tracked by the Git repository in the current directory
# and any submodules under it `options` (attribute set)
gitTracked { recurseSubmodules = true; } ./. : `recurseSubmodules` (optional, default: `false`)
: Whether to recurse into [Git submodules](https://git-scm.com/book/en/v2/Git-Tools-Submodules) to also include their tracked files.
If `true`, this is equivalent to passing the [--recurse-submodules](https://git-scm.com/docs/git-ls-files#Documentation/git-ls-files.txt---recurse-submodules) flag to `git ls-files`.
`path`
: The [path](https://nixos.org/manual/nix/stable/language/values#type-path) to the working directory of a local Git repository.
This directory must contain a `.git` file or subdirectory.
# Type
```
gitTrackedWith :: { recurseSubmodules :: Bool ? false } -> Path -> FileSet
```
# Examples
:::{.example}
## `lib.fileset.gitTrackedWith` usage example
```nix
# Include all files tracked by the Git repository in the current directory
# and any submodules under it
gitTracked { recurseSubmodules = true; } ./.
```
:::
*/
gitTrackedWith =
{
/*
(optional, default: `false`) Whether to recurse into [Git submodules](https://git-scm.com/book/en/v2/Git-Tools-Submodules) to also include their tracked files.
If `true`, this is equivalent to passing the [--recurse-submodules](https://git-scm.com/docs/git-ls-files#Documentation/git-ls-files.txt---recurse-submodules) flag to `git ls-files`.
*/
recurseSubmodules ? false,
}:
/*
The [path](https://nixos.org/manual/nix/stable/language/values#type-path) to the working directory of a local Git repository.
This directory must contain a `.git` file or subdirectory.
*/
path:
if ! isBool recurseSubmodules then
throw "lib.fileset.gitTrackedWith: Expected the attribute `recurseSubmodules` of the first argument to be a boolean, but it's a ${typeOf recurseSubmodules} instead."
View file
@@ -1,6 +1,6 @@
{ lib, ... }:
rec {
/* /**
`fix f` computes the fixed point of the given function `f`. In other words, the return value is `x` in `x = f x`.
`f` must be a lazy function.
@@ -63,27 +63,52 @@ rec {
See [`extends`](#function-library-lib.fixedPoints.extends) for an example use case.
There `self` is also often called `final`.
Type: fix :: (a -> a) -> a
Example: # Inputs
fix (self: { foo = "foo"; bar = "bar"; foobar = self.foo + self.bar; })
=> { bar = "bar"; foo = "foo"; foobar = "foobar"; }
fix (self: [ 1 2 (elemAt self 0 + elemAt self 1) ]) `f`
=> [ 1 2 3 ]
: 1\. Function argument
# Type
```
fix :: (a -> a) -> a
```
# Examples
:::{.example}
## `lib.fixedPoints.fix` usage example
```nix
fix (self: { foo = "foo"; bar = "bar"; foobar = self.foo + self.bar; })
=> { bar = "bar"; foo = "foo"; foobar = "foobar"; }
fix (self: [ 1 2 (elemAt self 0 + elemAt self 1) ])
=> [ 1 2 3 ]
```
:::
*/
fix = f: let x = f x; in x;
/* /**
A variant of `fix` that records the original recursive attribute set in the
result, in an attribute named `__unfix__`.
This is useful in combination with the `extends` function to
implement deep overriding.
# Inputs
`f`
: 1\. Function argument
*/
fix' = f: let x = f x // { __unfix__ = f; }; in x;
/* /**
Return the fixpoint that `f` converges to when called iteratively, starting
with the input `x`.
@@ -92,7 +117,22 @@ rec {
0
```
Type: (a -> a) -> a -> a
# Inputs
`f`
: 1\. Function argument
`x`
: 2\. Function argument
# Type
```
(a -> a) -> a -> a
```
*/
converge = f: x:
let
@@ -102,7 +142,7 @@ rec {
then x
else converge f x';
/* /**
Extend a function using an overlay.
Overlays allow modifying and extending fixed-point functions, specifically ones returning attribute sets.
@@ -217,32 +257,50 @@ rec {
```
:::
Type:
extends :: (Attrs -> Attrs -> Attrs) # The overlay to apply to the fixed-point function
-> (Attrs -> Attrs) # A fixed-point function
-> (Attrs -> Attrs) # The resulting fixed-point function
Example: # Inputs
f = final: { a = 1; b = final.a + 2; }
fix f `overlay`
=> { a = 1; b = 3; }
fix (extends (final: prev: { a = prev.a + 10; }) f) : The overlay to apply to the fixed-point function
=> { a = 11; b = 13; }
fix (extends (final: prev: { b = final.a + 5; }) f) `f`
=> { a = 1; b = 6; }
fix (extends (final: prev: { c = final.a + final.b; }) f) : The fixed-point function
=> { a = 1; b = 3; c = 4; }
# Type
```
extends :: (Attrs -> Attrs -> Attrs) # The overlay to apply to the fixed-point function
-> (Attrs -> Attrs) # A fixed-point function
-> (Attrs -> Attrs) # The resulting fixed-point function
```
# Examples
:::{.example}
## `lib.fixedPoints.extends` usage example
```nix
f = final: { a = 1; b = final.a + 2; }
fix f
=> { a = 1; b = 3; }
fix (extends (final: prev: { a = prev.a + 10; }) f)
=> { a = 11; b = 13; }
fix (extends (final: prev: { b = final.a + 5; }) f)
=> { a = 1; b = 6; }
fix (extends (final: prev: { c = final.a + final.b; }) f)
=> { a = 1; b = 3; c = 4; }
```
:::
*/
extends =
# The overlay to apply to the fixed-point function
overlay:
# The fixed-point function
f:
# Wrap with parenthesis to prevent nixdoc from rendering the `final` argument in the documentation
# The result should be thought of as a function, the argument of that function is not an argument to `extends` itself
(
final:
@@ -252,10 +310,29 @@ rec {
prev // overlay final prev
);
/* /**
Compose two extending functions of the type expected by 'extends'
into one where changes made in the first are available in the
'super' of the second
# Inputs
`f`
: 1\. Function argument
`g`
: 2\. Function argument
`final`
: 3\. Function argument
`prev`
: 4\. Function argument
*/
composeExtensions =
f: g: final: prev:
@@ -263,7 +340,7 @@ rec {
prev' = prev // fApplied;
in fApplied // g final prev';
/* /**
Compose several extending functions of the type expected by 'extends' into
one where changes made in preceding functions are made available to
subsequent ones.
@@ -276,7 +353,7 @@ rec {
composeManyExtensions =
lib.foldr (x: y: composeExtensions x y) (final: prev: {});
/* /**
Create an overridable, recursive attribute set. For example:
```
@@ -298,9 +375,20 @@ rec {
*/
makeExtensible = makeExtensibleWithCustomName "extend";
/* /**
Same as `makeExtensible` but the name of the extending attribute is
customized.
# Inputs
`extenderName`
: 1\. Function argument
`rattrs`
: 2\. Function argument
*/
makeExtensibleWithCustomName = extenderName: rattrs:
fix' (self: (rattrs self) // {
View file
@@ -902,6 +902,17 @@ in mkLicense lset) ({
free = false;
};
ncbiPd = {
spdxId = "NCBI-PD";
fullName = "NCBI Public Domain Notice";
# Due to United States copyright law, anything with this "license" does not have a copyright in the
# jurisdiction of the United States. However, other jurisdictions may assign the United States
# government copyright to the work, and the license explicitly states that in such a case, no license
# is granted. This is nonfree and nonredistributable in most jurisdictions other than the United States.
free = false;
redistributable = false;
};
ncsa = {
spdxId = "NCSA";
fullName = "University of Illinois/NCSA Open Source License";
View file
@@ -26,8 +26,12 @@ rec {
dontDistribute = drv: addMetaAttrs { hydraPlatforms = []; } drv;
/* Change the symbolic name of a package for presentation purposes /*
(i.e., so that nix-env users can tell them apart). Change the [symbolic name of a derivation](https://nixos.org/manual/nix/stable/language/derivations.html#attr-name).
:::{.warning}
Dependent derivations will be rebuilt when the symbolic name is changed.
:::
*/
setName = name: drv: drv // {inherit name;};
View file
@@ -93,6 +93,7 @@ let
else if final.isAndroid then "bionic"
else if final.isLinux /* default */ then "glibc"
else if final.isFreeBSD then "fblibc"
else if final.isOpenBSD then "oblibc"
else if final.isNetBSD then "nblibc"
else if final.isAvr then "avrlibc"
else if final.isGhcjs then null
View file
@@ -342,6 +342,11 @@ rec {
useLLVM = true;
};
x86_64-openbsd = {
config = "x86_64-unknown-openbsd";
useLLVM = true;
};
#
# WASM
#
View file
@@ -469,6 +469,7 @@ rec {
elem (elemAt l 2) [ "wasi" "redox" "mmixware" "ghcjs" "mingw32" ] ||
hasPrefix "freebsd" (elemAt l 2) ||
hasPrefix "netbsd" (elemAt l 2) ||
hasPrefix "openbsd" (elemAt l 2) ||
hasPrefix "genode" (elemAt l 2)
then {
cpu = elemAt l 0;
View file
@@ -1639,6 +1639,27 @@ runTests {
];
};
testToGNUCommandLineSeparator = {
expr = cli.toGNUCommandLine { optionValueSeparator = "="; } {
data = builtins.toJSON { id = 0; };
X = "PUT";
retry = 3;
retry-delay = null;
url = [ "https://example.com/foo" "https://example.com/bar" ];
silent = false;
verbose = true;
};
expected = [
"-X=PUT"
"--data={\"id\":0}"
"--retry=3"
"--url=https://example.com/foo"
"--url=https://example.com/bar"
"--verbose"
];
};
testToGNUCommandLineShell = {
expr = cli.toGNUCommandLineShell {} {
data = builtins.toJSON { id = 0; };
View file
@@ -403,7 +403,7 @@ in {
On each release the first letter is bumped and a new animal is chosen
starting with that new letter.
*/
codeName = "Uakari"; codeName = "Vicuña";
/**
Returns the current nixpkgs version suffix as string.
View file
@@ -272,6 +272,12 @@
githubId = 381298;
name = "9R";
};
_9yokuro = {
email = "xzstd099@protonmail.com";
github = "9yokuro";
githubId = 119095935;
name = "9yokuro";
};
A1ca7raz = {
email = "aya@wtm.moe";
github = "A1ca7raz";
@@ -571,6 +577,15 @@
fingerprint = "51E4 F5AB 1B82 BE45 B422 9CC2 43A5 E25A A5A2 7849";
}];
};
aduh95 = {
email = "duhamelantoine1995@gmail.com";
github = "aduh95";
githubId = 14309773;
name = "Antoine du Hamel";
keys = [{
fingerprint = "C0D6 2484 39F1 D560 4AAF FB40 21D9 00FF DB23 3756";
}];
};
aerialx = {
email = "aaron+nixos@aaronlindsay.com";
github = "AerialX";
@@ -1080,6 +1095,12 @@
fingerprint = "1F73 8879 5E5A 3DFC E2B3 FA32 87D1 AADC D25B 8DEE";
}];
};
aman9das = {
email = "amandas62640@gmail.com";
github = "Aman9das";
githubId = 39594914;
name = "Aman Das";
};
amanjeev = {
email = "aj@amanjeev.com";
github = "amanjeev";
@@ -1168,12 +1189,30 @@
githubId = 858965;
name = "Andrew Morsillo";
};
amyipdev = {
email = "amy@amyip.net";
github = "amyipdev";
githubId = 46307646;
name = "Amy Parker";
keys = [{
fingerprint = "7786 034B D521 49F5 1B0A 2A14 B112 2F04 E962 DDC5";
}];
};
amz-x = {
email = "mail@amz-x.com";
github = "amz-x";
githubId = 18249234;
name = "Christopher Crouse";
};
anas = {
email = "anas.elgarhy.dev@gmail.com";
github = "0x61nas";
githubId = 44965145;
name = "Anas Elgarhy";
keys = [{
fingerprint = "E10B D192 9231 08C7 3C35 7EC3 83E0 3DC6 F383 4086";
}];
};
AnatolyPopov = {
email = "aipopov@live.ru";
github = "AnatolyPopov";
@@ -1787,6 +1826,12 @@
githubId = 816777;
name = "Ashley Gillman";
};
ashgoldofficial = {
email = "ashley.goldwater@gmail.com";
github = "ASHGOLDOFFICIAL";
githubId = 104313094;
name = "Andrey Shaat";
};
ashkitten = {
email = "ashlea@protonmail.com";
github = "ashkitten";
@@ -1876,6 +1921,12 @@
fingerprint = "BF47 81E1 F304 1ADF 18CE C401 DE16 C7D1 536D A72F";
}];
};
astronaut0212 = {
email = "goatastronaut0212@proton.me";
github = "goatastronaut0212";
githubId = 119769817;
name = "goatastronaut0212";
};
astsmtl = {
email = "astsmtl@yandex.ru";
github = "astsmtl";
@@ -2029,6 +2080,12 @@
githubId = 1217745;
name = "Aldwin Vlasblom";
};
aveltras = {
email = "romain.viallard@outlook.fr";
github = "aveltras";
githubId = 790607;
name = "Romain Viallard";
};
averagebit = {
email = "averagebit@pm.me";
github = "averagebit";
@@ -2138,15 +2195,6 @@
fingerprint = "6309 E212 29D4 DA30 AF24 BDED 754B 5C09 63C4 2C50";
}];
};
babariviere = {
email = "me@babariviere.com";
github = "babariviere";
githubId = 12128029;
name = "Bastien Rivière";
keys = [{
fingerprint = "74AA 9AB4 E6FF 872B 3C5A CB3E 3903 5CC0 B75D 1142";
}];
};
babbaj = {
name = "babbaj";
email = "babbaj45@gmail.com";
@@ -2986,6 +3034,14 @@
githubId = 184563;
name = "Bruno Paz";
};
brsvh = {
email = "bsc@brsvh.org";
github = "brsvh";
githubId = 63050399;
keys = [ { fingerprint = "7B74 0DB9 F2AC 6D3B 226B C530 78D7 4502 D92E 0218"; } ];
matrix = "@brsvh:mozilla.org";
name = "Burgess Chang";
};
bryanasdev000 = {
email = "bryanasdev000@gmail.com";
matrix = "@bryanasdev000:matrix.org";
@@ -3199,6 +3255,16 @@
githubId = 3212452;
name = "Cameron Nemo";
};
cameronraysmith = {
email = "cameronraysmith@gmail.com";
matrix = "@cameronraysmith:matrix.org";
github = "cameronraysmith";
githubId = 420942;
name = "Cameron Smith";
keys = [{
fingerprint = "3F14 C258 856E 88AE E0F9 661E FF04 3B36 8811 DD1C";
}];
};
camillemndn = {
email = "camillemondon@free.fr";
github = "camillemndn";
@@ -3525,6 +3591,16 @@
githubId = 28303440;
name = "Max Hausch";
};
cherrykitten = {
email = "contact@cherrykitten.dev";
github = "cherrykitten";
githubId = 20300586;
matrix = "@sammy:cherrykitten.dev";
name = "CherryKitten";
keys = [{
fingerprint = "264C FA1A 194C 585D F822 F673 C01A 7CBB A617 BD5F";
}];
};
chessai = {
email = "chessai1996@gmail.com";
github = "chessai";
@@ -3582,6 +3658,12 @@
githubId = 1118859;
name = "Scott Worley";
};
chmouel = {
email = "chmouel@chmouel.com";
github = "chmouel";
githubId = 98980;
name = "Chmouel Boudjnah";
};
choochootrain = {
email = "hurshal@imap.cc";
github = "choochootrain";
@@ -3683,6 +3765,15 @@
github = "ciferkey";
githubId = 101422;
};
cig0 = {
name = "Martín Cigorraga";
email = "cig0.github@gmail.com";
github = "cig0";
githubId = 394089;
keys = [{
fingerprint = "1828 B459 DB9A 7EE2 03F4 7E6E AFBE ACC5 5D93 84A0";
}];
};
cigrainger = {
name = "Christopher Grainger";
email = "chris@amplified.ai";
@@ -3728,6 +3819,12 @@
githubId = 136485;
name = "Chad Jablonski";
};
cjshearer = {
email = "cjshearer@live.com";
github = "cjshearer";
githubId = 7173077;
name = "Cody Shearer";
};
ck3d = {
email = "ck3d@gmx.de";
github = "ck3d";
@@ -3762,6 +3859,15 @@
githubId = 199180;
name = "Claes Wallin";
};
clebs = {
email = "borja.clemente@gmail.com";
github = "clebs";
githubId = 1059661;
name = "Borja Clemente";
keys = [{
fingerprint = "C4E1 58BD FD33 3C77 B6C7 178E 2539 757E F64C 60DD";
}];
};
cleeyv = {
email = "cleeyv@riseup.net";
github = "cleeyv";
@@ -3846,6 +3952,14 @@
githubId = 180339;
name = "Andrew Cobb";
};
coca = {
github = "Coca162";
githubId = 62479942;
name = "Coca";
keys = [{
fingerprint = "99CB 86FF 62BB 7DA4 8903 B16D 0328 2DF8 8179 AB19";
}];
};
coconnor = {
email = "coreyoconnor@gmail.com";
github = "coreyoconnor";
@@ -4203,6 +4317,11 @@
githubId = 111202;
name = "Henry Bubert";
};
cryptoluks = {
github = "cryptoluks";
githubId = 9020527;
name = "cryptoluks";
};
CrystalGamma = {
email = "nixos@crystalgamma.de";
github = "CrystalGamma";
@@ -4521,9 +4640,14 @@
github = "DataHearth";
githubId = 28595242;
name = "DataHearth";
keys = [{ keys = [
fingerprint = "A129 2547 0298 BFEE 7EE0 92B3 946E 2D0C 410C 7B3D"; {
}]; fingerprint = "A129 2547 0298 BFEE 7EE0 92B3 946E 2D0C 410C 7B3D";
}
{
fingerprint = "FFC4 92C1 5320 B05D 0F8D 7D58 ABF6 737C 6339 6D35";
}
];
};
davegallant = {
name = "Dave Gallant";
@@ -6481,6 +6605,15 @@
githubId = 225893;
name = "James Cook";
};
fangpen = {
email = "hello@fangpenlin.com";
github = "fangpenlin";
githubId = 201615;
name = "Fang-Pen Lin";
keys = [{
fingerprint = "7130 3454 A7CD 0F0A 941A F9A3 2A26 9964 AD29 2131";
}];
};
farcaller = {
name = "Vladimir Pouzanov";
email = "farcaller@gmail.com";
@@ -6804,6 +6937,12 @@
fingerprint = "B722 6464 838F 8BDB 2BEA C8C8 5B0E FDDF BA81 6105";
}];
};
Forden = {
email = "forden@zuku.tech";
github = "Forden";
githubId = 24463229;
name = "Forden";
};
forkk = {
email = "forkk@forkk.net";
github = "Forkk";
@@ -7561,6 +7700,12 @@
fingerprint = "0BAF 2D87 CB43 746F 6237 2D78 DE60 31AB A0BB 269A";
}];
};
Golo300 = {
email = "lanzingertm@gmail.com";
github = "Golo300";
githubId = 58785758;
name = "Tim Lanzinger";
};
Gonzih = {
email = "gonzih@gmail.com";
github = "Gonzih";
@@ -7669,6 +7814,14 @@
fingerprint = "7FC7 98AB 390E 1646 ED4D 8F1F 797F 6238 68CD 00C2";
}];
};
greaka = {
email = "git@greaka.de";
github = "greaka";
githubId = 2805834;
name = "Greaka";
keys =
[{ fingerprint = "6275 FB5C C9AC 9D85 FF9E 44C5 EE92 A5CD C367 118C"; }];
};
greg = {
email = "greg.hellings@gmail.com";
github = "greg-hellings";
@@ -8240,6 +8393,12 @@
githubId = 896431;
name = "Chris Hodapp";
};
hogcycle = {
email = "nate@gysli.ng";
github = "hogcycle";
githubId = 57007241;
name = "hogcycle";
};
holgerpeters = {
name = "Holger Peters";
email = "holger.peters@posteo.de";
@@ -8528,6 +8687,12 @@
githubId = 1550265;
name = "Dominic Steinitz";
};
ifd3f = {
github = "ifd3f";
githubId = 7308591;
email = "astrid@astrid.tech";
name = "ifd3f";
};
iFreilicht = {
github = "iFreilicht";
githubId = 9742635;
@@ -9283,6 +9448,12 @@
githubId = 2377;
name = "Jonathan del Strother";
};
jdev082 = {
email = "jdev0894@gmail.com";
github = "jdev082";
githubId = 92550746;
name = "jdev082";
};
jdreaver = {
email = "johndreaver@gmail.com";
github = "jdreaver";
@@ -9639,6 +9810,12 @@
githubId = 54179289;
name = "Jason Miller";
};
jn-sena = {
email = "jn-sena@proton.me";
github = "jn-sena";
githubId = 45771313;
name = "Sena";
};
jnsgruk = {
email = "jon@sgrs.uk";
github = "jnsgruk";
@@ -9768,6 +9945,12 @@
githubId = 32305209;
name = "John Children";
};
johnjohnstone = {
email = "jjohnstone@riseup.net";
github = "johnjohnstone";
githubId = 3208498;
name = "John Johnstone";
};
johnmh = {
email = "johnmh@openblox.org";
github = "JohnMH";
@@ -9816,6 +9999,12 @@
githubId = 25030997;
name = "Yuki Okushi";
};
johnylpm = {
email = "joaoluisparreira@gmail.com";
github = "Johny-LPM";
githubId = 168684553;
name = "João Marques";
};
jojosch = {
name = "Johannes Schleifenbaum";
email = "johannes@js-webcoding.de";
@@ -9889,6 +10078,12 @@
githubId = 8580434;
name = "Jonny Bolton";
};
jonochang = {
name = "Jono Chang";
email = "j.g.chang@gmail.com";
github = "jonochang";
githubId = 13179;
};
jonringer = {
email = "jonringer117@gmail.com";
matrix = "@jonringer:matrix.org";
@@ -10052,6 +10247,11 @@
githubId = 107689;
name = "Josh Holland";
};
jshort = {
github = "jshort";
githubId = 1186444;
name = "James Short";
};
jsierles = {
email = "joshua@hey.com";
matrix = "@jsierles:matrix.org";
@@ -10079,7 +10279,7 @@
githubId = 27734541;
};
jtbx = {
email = "jtbx@duck.com"; email = "jeremy@baxters.nz";
name = "Jeremy Baxter";
github = "jtbx";
githubId = 92071952;
@@ -10283,6 +10483,12 @@
github = "k3a";
githubId = 966992;
};
k3yss = {
email = "rsi.dev17@gmail.com";
name = "Rishi Kumar";
github = "k3yss";
githubId = 96657880;
};
k900 = {
name = "Ilya K.";
email = "me@0upti.me";
@@ -10440,6 +10646,12 @@
githubId = 26346867;
name = "K.B.Dharun Krishna";
};
kbudde = {
email = "kris@budd.ee";
github = "kbudde";
githubId = 1072181;
name = "Kris Budde";
};
kcalvinalvin = {
email = "calvin@kcalvinalvin.info";
github = "kcalvinalvin";
@@ -11075,6 +11287,12 @@
githubId = 15742918;
name = "Sergey Kuznetsov";
};
kvendingoldo = {
email = "kvendingoldo@gmail.com";
github = "kvendingoldo";
githubId = 11614750;
name = "Alexander Sharov";
};
kwohlfahrt = {
email = "kai.wohlfahrt@gmail.com";
github = "kwohlfahrt";
@@ -11367,7 +11585,7 @@
name = "Daniel Kuehn";
};
lelgenio = {
email = "lelgenio@disroot.org"; email = "lelgenio@lelgenio.com";
github = "lelgenio";
githubId = 31388299;
name = "Leonardo Eugênio";
@@ -11823,6 +12041,14 @@
githubId = 10626;
name = "Andreas Wagner";
};
lpostula = {
email = "lois@postu.la";
github = "loispostula";
githubId = 1423612;
name = "Loïs Postula";
keys =
[{ fingerprint = "0B4A E7C7 D3B7 53F5 3B3D 774C 3819 3C6A 09C3 9ED1"; }];
};
lrewega = {
email = "lrewega@c32.ca";
github = "lrewega";
@@ -12318,6 +12544,11 @@
githubId = 18661391;
name = "Malte Janz";
};
malteneuss = {
github = "malteneuss";
githubId = 5301202;
name = "Malte Neuss";
};
malte-v = {
email = "nixpkgs@mal.tc";
github = "malte-v";
@@ -12594,6 +12825,12 @@
githubId = 952712;
name = "Matt Christ";
};
matteopacini = {
email = "m@matteopacini.me";
github = "matteo-pacini";
githubId = 3139724;
name = "Matteo Pacini";
};
matthewbauer = {
email = "mjbauer95@gmail.com";
github = "matthewbauer";
@@ -12748,6 +12985,12 @@
fingerprint = "1DE4 424D BF77 1192 5DC4 CF5E 9AED 8814 81D8 444E";
}];
};
maxstrid = {
email = "mxwhenderson@gmail.com";
github = "maxstrid";
githubId = 115441224;
name = "Maxwell Henderson";
};
maxux = {
email = "root@maxux.net";
github = "maxux";
@@ -12921,6 +13164,12 @@
githubId = 14259816;
name = "Abin Simon";
};
me-and = {
name = "Adam Dinwoodie";
email = "nix.thunder.wayne@post.dinwoodie.org";
github = "me-and";
githubId = 1397507;
};
meatcar = {
email = "nixpkgs@denys.me";
github = "meatcar";
@@ -14417,6 +14666,12 @@
githubId = 399535;
name = "Niklas Hambüchen";
};
nhnn = {
matrix = "@nhnn:nhnn.dev";
github = "thenhnn";
githubId = 162156666;
name = "nhnn";
};
nhooyr = {
email = "anmol@aubble.com";
github = "nhooyr";
@@ -14665,6 +14920,12 @@
githubId = 6930756;
name = "Nicolas Mattia";
};
nmishin = {
email = "sanduku.default@gmail.com";
github = "Nmishin";
githubId = 4242897;
name = "Nikolai Mishin";
};
noaccos = {
name = "Francesco Noacco";
email = "francesco.noacco2000@gmail.com";
@@ -14934,6 +15195,13 @@
github = "nyawox";
githubId = 93813719;
};
nydragon = {
name = "nydragon";
github = "nydragon";
email = "nix@ccnlc.eu";
githubId = 56591727;
keys = [ { fingerprint = "25FF 8464 F062 7EC0 0129 6A43 14AA 30A8 65EA 1209"; } ];
};
nzbr = { nzbr = {
email = "nixos@nzbr.de"; email = "nixos@nzbr.de";
github = "nzbr"; github = "nzbr";
@ -14950,6 +15218,12 @@
githubId = 30825096; githubId = 30825096;
name = "Ning Zhang"; name = "Ning Zhang";
}; };
o0th = {
email = "o0th@pm.me";
name = "Sabato Luca Guadagno";
github = "o0th";
githubId = 22490354;
};
oaksoaj = { oaksoaj = {
email = "oaksoaj@riseup.net"; email = "oaksoaj@riseup.net";
name = "Oaksoaj"; name = "Oaksoaj";
@ -15824,6 +16098,12 @@
githubId = 43863; githubId = 43863;
name = "Philip Taron"; name = "Philip Taron";
}; };
philtaken = {
email = "philipp.herzog@protonmail.com";
github = "philtaken";
githubId = 13309623;
name = "Philipp Herzog";
};
phip1611 = { phip1611 = {
email = "phip1611@gmail.com"; email = "phip1611@gmail.com";
github = "phip1611"; github = "phip1611";
@ -16542,6 +16822,12 @@
github = "PhilippWoelfel"; github = "PhilippWoelfel";
githubId = 19400064; githubId = 19400064;
}; };
pyle = {
name = "Adam Pyle";
email = "adam@pyle.dev";
github = "pyle";
githubId = 7279609;
};
pyrolagus = { pyrolagus = {
email = "pyrolagus@gmail.com"; email = "pyrolagus@gmail.com";
github = "PyroLagus"; github = "PyroLagus";
@ -16950,6 +17236,15 @@
githubId = 52847440; githubId = 52847440;
name = "Ryan Burns"; name = "Ryan Burns";
}; };
rconybea = {
email = "n1xpkgs@hushmail.com";
github = "rconybea";
githubId = 8570969;
name = "Roland Conybeare";
keys = [{
fingerprint = "bw5Cr/4ul1C2UvxopphbZbFI1i5PCSnOmPID7mJ/Ogo";
}];
};
rdnetto = { rdnetto = {
email = "rdnetto@gmail.com"; email = "rdnetto@gmail.com";
github = "rdnetto"; github = "rdnetto";
@ -17508,6 +17803,12 @@
github = "rosehobgoblin"; github = "rosehobgoblin";
githubId = 84164410; githubId = 84164410;
}; };
roshaen = {
name = "Roshan Kumar";
email = "roshaen09@gmail.com";
github = "roshaen";
githubId = 58213083;
};
rossabaker = { rossabaker = {
name = "Ross A. Baker"; name = "Ross A. Baker";
email = "ross@rossabaker.com"; email = "ross@rossabaker.com";
@ -17517,8 +17818,12 @@
RossComputerGuy = { RossComputerGuy = {
name = "Tristan Ross"; name = "Tristan Ross";
email = "tristan.ross@midstall.com"; email = "tristan.ross@midstall.com";
matrix = "@rosscomputerguy:matrix.org";
github = "RossComputerGuy"; github = "RossComputerGuy";
githubId = 19699320; githubId = 19699320;
keys = [{
fingerprint = "FD5D F7A8 85BB 378A 0157 5356 B09C 4220 3566 9AF8";
}];
}; };
rostan-t = { rostan-t = {
name = "Rostan Tabet"; name = "Rostan Tabet";
@ -17901,6 +18206,12 @@
githubId = 6022042; githubId = 6022042;
name = "Sam Parkinson"; name = "Sam Parkinson";
}; };
samemrecebi = {
name = "Emre Çebi";
email = "emre@cebi.io";
github = "samemrecebi";
githubId = 64419750;
};
samhug = { samhug = {
email = "s@m-h.ug"; email = "s@m-h.ug";
github = "samhug"; github = "samhug";
@ -18283,6 +18594,11 @@
github = "sei40kr"; github = "sei40kr";
githubId = 11665236; githubId = 11665236;
}; };
seineeloquenz = {
name = "Alexander Linder";
github = "SeineEloquenz";
githubId = 34923333;
};
seirl = { seirl = {
name = "Antoine Pietri"; name = "Antoine Pietri";
email = "antoine.pietri1@gmail.com"; email = "antoine.pietri1@gmail.com";
@ -19585,12 +19901,6 @@
githubId = 36031171; githubId = 36031171;
name = "Supa"; name = "Supa";
}; };
superbo = {
email = "supernbo@gmail.com";
github = "SuperBo";
githubId = 2666479;
name = "Y Nguyen";
};
superherointj = { superherointj = {
email = "sergiomarcelo@yandex.com"; email = "sergiomarcelo@yandex.com";
github = "superherointj"; github = "superherointj";
@ -19720,6 +20030,12 @@
githubId = 12841859; githubId = 12841859;
name = "Syboxez Blank"; name = "Syboxez Blank";
}; };
syedahkam = {
email = "smahkam57@gmail.com";
github = "SyedAhkam";
githubId = 52673095;
name = "Syed Ahkam";
};
symphorien = { symphorien = {
email = "symphorien_nixpkgs@xlumurb.eu"; email = "symphorien_nixpkgs@xlumurb.eu";
matrix = "@symphorien:xlumurb.eu"; matrix = "@symphorien:xlumurb.eu";
@ -20200,6 +20516,12 @@
github = "thefossguy"; github = "thefossguy";
githubId = 44400303; githubId = 44400303;
}; };
thehans255 = {
name = "Hans Jorgensen";
email = "foss-contact@thehans255.com";
github = "thehans255";
githubId = 15896573;
};
thekostins = { thekostins = {
name = "Konstantin"; name = "Konstantin";
email = "anisimovkosta19@gmail.com"; email = "anisimovkosta19@gmail.com";
@ -20551,6 +20873,12 @@
fingerprint = "7944 74B7 D236 DAB9 C9EF E7F9 5CCE 6F14 66D4 7C9E"; fingerprint = "7944 74B7 D236 DAB9 C9EF E7F9 5CCE 6F14 66D4 7C9E";
}]; }];
}; };
toasteruwu = {
email = "Aki@ToasterUwU.com";
github = "ToasterUwU";
githubId = 43654377;
name = "Aki";
};
tobiasBora = { tobiasBora = {
email = "tobias.bora.list@gmail.com"; email = "tobias.bora.list@gmail.com";
github = "tobiasBora"; github = "tobiasBora";
@ -20563,6 +20891,12 @@
githubId = 858790; githubId = 858790;
name = "Tobias Mayer"; name = "Tobias Mayer";
}; };
tobz619 = {
email = "toloke@yahoo.co.uk";
github = "tobz619";
githubId = 93312805;
name = "Tobi Oloke";
};
tochiaha = { tochiaha = {
email = "tochiahan@proton.me"; email = "tochiahan@proton.me";
github = "Tochiaha"; github = "Tochiaha";
@ -20614,11 +20948,14 @@
name = "Tomkoid"; name = "Tomkoid";
}; };
tomodachi94 = { tomodachi94 = {
email = "tomodachi94+nixpkgs@protonmail.com"; email = "tomodachi94@protonmail.com";
matrix = "@tomodachi94:matrix.org"; matrix = "@tomodachi94:matrix.org";
github = "tomodachi94"; github = "tomodachi94";
githubId = 68489118; githubId = 68489118;
name = "Tomodachi94"; name = "Tomodachi94";
keys = [{
fingerprint = "B208 D6E5 B8ED F47D 5687 627B 2E27 5F21 C4D5 54A3";
}];
}; };
tomsiewert = { tomsiewert = {
email = "tom@siewert.io"; email = "tom@siewert.io";
@ -21530,6 +21867,12 @@
name = "Kostas Karachalios"; name = "Kostas Karachalios";
githubId = 81346; githubId = 81346;
}; };
vringar = {
email = "git@zabka.it";
github = "vringar";
name = "Stefan Zabka";
githubId = 13276717;
};
vrthra = { vrthra = {
email = "rahul@gopinath.org"; email = "rahul@gopinath.org";
github = "vrthra"; github = "vrthra";
@ -21664,6 +22007,12 @@
github = "wegank"; github = "wegank";
githubId = 9713184; githubId = 9713184;
}; };
weitzj = {
name = "Jan Weitz";
email = "nixpkgs@janweitz.de";
github = "weitzj";
githubId = 829277;
};
welteki = { welteki = {
email = "welteki@pm.me"; email = "welteki@pm.me";
github = "welteki"; github = "welteki";
@ -22048,6 +22397,12 @@
githubId = 474343; githubId = 474343;
name = "Xavier Zwirtz"; name = "Xavier Zwirtz";
}; };
XBagon = {
name = "XBagon";
email = "xbagon@outlook.de";
github = "XBagon";
githubId = 1523292;
};
xbreak = { xbreak = {
email = "xbreak@alphaware.se"; email = "xbreak@alphaware.se";
github = "xbreak"; github = "xbreak";
@ -22683,6 +23038,12 @@
githubId = 3248; githubId = 3248;
name = "zimbatm"; name = "zimbatm";
}; };
zimeg = {
email = "zim@o526.net";
github = "zimeg";
githubId = 18134219;
name = "zimeg";
};
Zimmi48 = { Zimmi48 = {
email = "theo.zimmermann@telecom-paris.fr"; email = "theo.zimmermann@telecom-paris.fr";
github = "Zimmi48"; github = "Zimmi48";
View file
@@ -80,6 +80,11 @@ OK_MISSING_BY_PACKAGE = {
    "plasma-desktop": {
        "scim",  # upstream is dead, not packaged in Nixpkgs
    },
    "poppler-qt6": {
        "gobject-introspection-1.0",  # we don't actually want to build the GTK variant
        "gdk-pixbuf-2.0",
        "gtk+-3.0",
    },
    "powerdevil": {
        "DDCUtil",  # cursed, intentionally disabled
    },
@@ -87,6 +92,9 @@
        "Qt6Qml",  # tests only
        "Qt6Quick",
    },
    "skladnik": {
        "POVRay",  # too expensive to rerender all the assets
    },
    "syntax-highlighting": {
        "XercesC",  # only used for extra validation at build time
    }
View file
@@ -77,6 +77,7 @@ lualogging,,,,,,
luaossl,,,,,5.1,
luaposix,,,,34.1.1-1,,vyp lblasc
luarepl,,,,,,
luarocks,,,,,,mrcjkb teto
luarocks-build-rust-mlua,,,,,,mrcjkb
luarocks-build-treesitter-parser,,,,,,mrcjkb
luasec,,,,,,flosse
@@ -112,6 +113,7 @@ nvim-nio,,,,,,mrcjkb
pathlib.nvim,,,,,,
penlight,,,,,,alerque
plenary.nvim,https://raw.githubusercontent.com/nvim-lua/plenary.nvim/master/plenary.nvim-scm-1.rockspec,,,,5.1,
psl,,,,0.3,,
rapidjson,,,,,,
rest.nvim,,,,,5.1,teto
rocks.nvim,,,,,,mrcjkb
View file
@@ -1,18 +1,22 @@
{ stdenv, lib, makeWrapper, perl, perlPackages }:

stdenv.mkDerivation {
  pname = "nixpkgs-lint";
  version = "1";

  nativeBuildInputs = [ makeWrapper ];
  buildInputs = [ perl perlPackages.XMLSimple ];

  dontUnpack = true;
  dontBuild = true;

  installPhase =
    ''
      mkdir -p $out/bin
      cp ${./nixpkgs-lint.pl} $out/bin/nixpkgs-lint
      # make the built version hermetic
      substituteInPlace $out/bin/nixpkgs-lint \
        --replace-fail "#! /usr/bin/env nix-shell" "#! ${lib.getExe perl}"
      wrapProgram $out/bin/nixpkgs-lint --set PERL5LIB $PERL5LIB
    '';
View file
@@ -108,7 +108,7 @@ class Repo:
    @property
    def name(self):
        return self.uri.strip("/").split("/")[-1]

    @property
    def branch(self):
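The `strip("/")` change above guards against URIs with a trailing slash, which would otherwise make the last `split` component empty. A standalone check of that behavior:

```python
# Minimal sketch of the name-property fix: without strip("/"),
# a URI ending in "/" yields an empty repository name.
def repo_name(uri: str) -> str:
    return uri.strip("/").split("/")[-1]

print(repo_name("https://github.com/NixOS/nixpkgs/"))  # nixpkgs
print(repo_name("https://github.com/NixOS/nixpkgs"))   # nixpkgs
```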
View file
@@ -236,7 +236,6 @@ with lib.maintainers; {
    members = [
      cole-h
      grahamc
      hoverbear
    ];
    scope = "Group registration for packages maintained by Determinate Systems.";
    shortName = "Determinate Systems employees";
@@ -345,6 +344,16 @@
    shortName = "freedesktop.org packaging";
  };

  fslabs = {
    # Verify additions to this team with at least one already existing member of the team.
    members = [
      greaka
      lpostula
    ];
    scope = "Group registration for packages maintained by Foresight Spatial Labs.";
    shortName = "Foresight Spatial Labs employees";
  };

  gcc = {
    members = [
      synthetica
@@ -419,7 +428,6 @@
      bandresen
      hlolli
      glittershark
      babariviere
      ericdallo
      thiagokokada
    ];
@@ -680,6 +688,7 @@
      dandellion
      sumnerevans
      nickcao
      teutat3s
    ];
    scope = "Maintain the ecosystem around Matrix, a decentralized messenger.";
    shortName = "Matrix";
View file
@@ -48,7 +48,7 @@ Reviewing process:
  - Description, default and example should be provided.
- Ensure that option changes are backward compatible.
  - `mkRenamedOptionModuleWith` provides a way to make renamed options backward compatible.
  - Use `lib.versionAtLeast config.system.stateVersion "24.05"` on backward incompatible changes which may corrupt, change or update the state stored on existing setups.
- Ensure that removed options are declared with `mkRemovedOptionModule`.
- Ensure that changes that are not backward compatible are mentioned in release notes.
- Ensure that documentation affected by the change is updated.
View file
@@ -1,17 +1,17 @@
# Bootspec {#sec-bootspec}

Bootspec is a feature introduced in [RFC-0125](https://github.com/NixOS/rfcs/pull/125) in order to standardize bootloader support and advanced boot workflows such as SecureBoot and potentially more.
The reference implementation can be found [here](https://github.com/NixOS/nixpkgs/pull/172237).

The creation of bootspec documents is enabled by default.

## Schema {#sec-bootspec-schema}

The bootspec schema is versioned and validated against [a CUE schema file](https://cuelang.org/) which should be considered as the source of truth for your applications.

You will find the current version [here](../../../modules/system/activation/bootspec.cue).

## Extensions mechanism {#sec-bootspec-extensions}

Bootspec cannot account for all usecases.
@@ -29,8 +29,9 @@ An example for SecureBoot is to get the Nix store path to `/etc/os-release` in o
To reduce incompatibility and prevent names from clashing between applications, it is **highly recommended** to use a unique namespace for your extensions.

## External bootloaders {#sec-bootspec-external-bootloaders}

It is possible to enable your own bootloader through [`boot.loader.external.installHook`](options.html#opt-boot.loader.external.installHook) which can wrap an existing bootloader.

Currently, there is no good story to compose existing bootloaders to enrich their features, e.g. SecureBoot, etc.
It will be necessary to reimplement or reuse existing parts.
View file
@@ -173,7 +173,7 @@ lib.mkOption {

## Extensible Option Types {#sec-option-declarations-eot}

Extensible option types is a feature that allows extending certain type
declarations through multiple module files. This feature only works with a
restricted set of types, namely `enum` and `submodules` and any composed
forms of them.
View file
@@ -146,6 +146,27 @@ have a predefined type and string generator already declared under

:   Outputs the given attribute set as an Elixir map, instead of the
    default Elixir keyword list

`pkgs.formats.php { finalVariable }` []{#pkgs-formats-php}

:   A function taking an attribute set with values

    `finalVariable`

    :   The variable that will store the generated expression (usually `config`). If set to `null`, the generated expression will contain `return`.

    It returns a set with PHP-Config-specific attributes `type`, `lib`, and
    `generate` as specified [below](#pkgs-formats-result).

    The `lib` attribute contains functions to be used in settings, for
    generating special PHP values:

    `mkRaw phpCode`

    :   Outputs the given string as raw PHP code

    `mkMixedArray list set`

    :   Creates a PHP array that contains both indexed and associative values. For example, `lib.mkMixedArray [ "hello" "world" ] { "nix" = "is-great"; }` returns `['hello', 'world', 'nix' => 'is-great']`

[]{#pkgs-formats-result}

These functions all return an attribute set with these values:
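A minimal usage sketch of the PHP format described above; the module path and settings are purely illustrative, not taken from any real NixOS module:

```nix
# Hypothetical usage of pkgs.formats.php; every setting name here is made up.
{ pkgs, ... }:
let
  phpFormat = pkgs.formats.php { finalVariable = "config"; };
in
{
  environment.etc."example/config.php".source = phpFormat.generate "config.php" {
    debug = false;
    # mkMixedArray combines indexed and associative entries in one PHP array.
    hosts = phpFormat.lib.mkMixedArray [ "a.example" "b.example" ] {
      primary = "a.example";
    };
    # mkRaw emits the string verbatim as PHP code.
    timezone = phpFormat.lib.mkRaw "date_default_timezone_get()";
  };
}
```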
View file
@@ -6,7 +6,7 @@ expressions and associated binaries. The NixOS channels are updated
automatically from NixOS's Git repository after certain tests have
passed and all packages have been built. These channels are:

- *Stable channels*, such as [`nixos-24.05`](https://channels.nixos.org/nixos-24.05).
  These only get conservative bug fixes and package upgrades. For
  instance, a channel update may cause the Linux kernel on your system
  to be upgraded from 4.19.34 to 4.19.38 (a minor bug fix), but not
@@ -19,7 +19,7 @@ passed and all packages have been built. These channels are:
  radical changes between channel updates. It's not recommended for
  production systems.

- *Small channels*, such as [`nixos-24.05-small`](https://channels.nixos.org/nixos-24.05-small)
  or [`nixos-unstable-small`](https://channels.nixos.org/nixos-unstable-small).
  These are identical to the stable and unstable channels described above,
  except that they contain fewer binary packages. This means they get updated
@@ -38,8 +38,8 @@ supported stable release.

When you first install NixOS, you're automatically subscribed to the
NixOS channel that corresponds to your installation source. For
instance, if you installed from a 24.05 ISO, you will be subscribed to
the `nixos-24.05` channel. To see which NixOS channel you're subscribed
to, run the following as root:

```ShellSession
@@ -54,16 +54,16 @@ To switch to a different NixOS channel, do
```

(Be sure to include the `nixos` parameter at the end.) For instance, to
use the NixOS 24.05 stable channel:

```ShellSession
# nix-channel --add https://channels.nixos.org/nixos-24.05 nixos
```

If you have a server, you may want to use the "small" channel instead:

```ShellSession
# nix-channel --add https://channels.nixos.org/nixos-24.05-small nixos
```

And if you want to live on the bleeding edge:
@@ -117,6 +117,6 @@ modules. You can also specify a channel explicitly, e.g.

```nix
{
  system.autoUpgrade.channel = "https://channels.nixos.org/nixos-24.05";
}
```
View file
@@ -3,6 +3,7 @@

This section lists the release notes for each stable version of NixOS and current unstable revision.

```{=include=} sections
rl-2411.section.md
rl-2405.section.md
rl-2311.section.md
rl-2305.section.md
View file
@@ -146,7 +146,7 @@ In addition to numerous new and upgraded packages, this release has the followin

- [touchegg](https://github.com/JoseExposito/touchegg), a multi-touch gesture recognizer. Available as [services.touchegg](#opt-services.touchegg.enable).

- [pantheon-tweaks](https://github.com/pantheon-tweaks/pantheon-tweaks), an unofficial system settings panel for Pantheon. Available as `programs.pantheon-tweaks`.

- [joycond](https://github.com/DanielOgorchock/joycond), a service that uses `hid-nintendo` to provide nintendo joycond pairing and better nintendo switch pro controller support.
View file
@@ -366,7 +366,7 @@ In addition to numerous new and upgraded packages, this release includes the fol
  __Note:__ secrets from these files will be leaked into the store unless you use a
  [**file**-provider or env-var](https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#file-provider) for secrets!

- `services.grafana.provision.notifiers` is not affected by this change because
  this feature is deprecated by Grafana and will probably be removed in Grafana 10.
  It's recommended to use `services.grafana.provision.alerting.contactPoints` instead.
File diff suppressed because it is too large

View file
@@ -0,0 +1,61 @@
# Release 24.11 (“Vicuña”, 2024.11/??) {#sec-release-24.11}
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
## Highlights {#sec-release-24.11-highlights}
- Create the first release note entry in this section!
## New Services {#sec-release-24.11-new-services}
- [Open-WebUI](https://github.com/open-webui/open-webui), a user-friendly WebUI
for LLMs. Available as [services.open-webui](#opt-services.open-webui.enable)
service.
## Backward Incompatibilities {#sec-release-24.11-incompatibilities}
- `nginx` package no longer includes `gd` and `geoip` dependencies. For enabling it, override `nginx` package with the optionals `withImageFilter` and `withGeoIP`.
- `openssh` and `openssh_hpn` are now compiled without Kerberos 5 / GSSAPI support in an effort to reduce the attack surface of the components for the majority of users. Users needing this support can
use the new `opensshWithKerberos` and `openssh_hpnWithKerberos` flavors (e.g. `programs.ssh.package = pkgs.openssh_gssapi`).
- `nvimpager` was updated to version 0.13.0, which changes the order of user and
nvimpager settings: user commands in `-c` and `--cmd` now override the
respective default settings because they are executed later.
- `services.forgejo.mailerPasswordFile` has been deprecated by the drop-in replacement `services.forgejo.secrets.mailer.PASSWD`,
which is part of the new free-form `services.forgejo.secrets` option.
`services.forgejo.secrets` is a small wrapper over systemd's `LoadCredential=`. It has the same structure (sections/keys) as
`services.forgejo.settings` but takes file paths that will be read before service startup instead of some plaintext value.
- The Invoiceplane module now only accepts the structured `settings` option.
`extraConfig` is now removed.
- Legacy package `stalwart-mail_0_6` was dropped, please note the
[manual upgrade process](https://github.com/stalwartlabs/mail-server/blob/main/UPGRADING.md)
before changing the package to `pkgs.stalwart-mail` in
[`services.stalwart-mail.package`](#opt-services.stalwart-mail.package).
- The `stalwart-mail` module now uses RocksDB as the default storage backend
for `stateVersion` ≥ 24.11. (It was previously using SQLite for structured
data and the filesystem for blobs).
- `zx` was updated to v8, which introduces several breaking changes.
See the [v8 changelog](https://github.com/google/zx/releases/tag/8.0.0) for more information.
- The `portunus` package and service do not support weak password hashes anymore.
If you installed Portunus on NixOS 23.11 or earlier, upgrade to NixOS 24.05 first to get support for strong password hashing.
Then, follow the instructions on the [upstream release notes](https://github.com/majewsky/portunus/releases/tag/v2.0.0) to upgrade all existing user accounts to strong password hashes.
If you need to upgrade to 24.11 without having completed the migration, consider the security implications of weak password hashes on your user accounts, and add the following to your configuration:
```nix
services.portunus.package = pkgs.portunus.override { libxcrypt = pkgs.libxcrypt-legacy; };
services.portunus.ldap.package = pkgs.openldap.override { libxcrypt = pkgs.libxcrypt-legacy; };
```
## Other Notable Changes {#sec-release-24.11-notable-changes}
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
- To facilitate dependency injection, the `imgui` package now builds a static archive using vcpkg's CMake rules.
The derivation now installs "impl" headers selectively instead of by a wildcard.
Use `imgui.src` if you just want to access the unpacked sources.
View file
@@ -182,6 +182,30 @@ in rec {
    in if errors == [] then true
    else trace (concatStringsSep "\n" errors) false;

  checkUnitConfigWithLegacyKey = legacyKey: group: checks: attrs:
    let
      dump = lib.generators.toPretty { }
        (lib.generators.withRecursion { depthLimit = 2; throwOnDepthLimit = false; } attrs);
      attrs' =
        if legacyKey == null
        then attrs
        else if ! attrs?${legacyKey}
        then attrs
        else if removeAttrs attrs [ legacyKey ] == {}
        then attrs.${legacyKey}
        else throw ''
          The declaration

          ${dump}

          must not mix unit options with the legacy key '${legacyKey}'.
          This can be fixed by moving all settings from within ${legacyKey}
          one level up.
        '';
    in
      checkUnitConfig group checks attrs';

  toOption = x:
    if x == true then "true"
    else if x == false then "false"
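The helper above unwraps a legacy wrapper attribute when it is the only key, passes flat attribute sets through unchanged, and rejects mixtures of the two styles. A standalone sketch of that branch logic (the key name and values are illustrative):

```nix
# Illustrative reduction of checkUnitConfigWithLegacyKey's unwrapping step,
# using a hypothetical legacy key "wireguardPeerConfig".
let
  unwrap = legacyKey: attrs:
    if legacyKey == null then attrs
    else if ! attrs ? ${legacyKey} then attrs
    else if builtins.removeAttrs attrs [ legacyKey ] == { } then attrs.${legacyKey}
    else throw "must not mix unit options with the legacy key '${legacyKey}'";
in
{
  # Wrapped form is unwrapped: yields { PublicKey = "abc"; }
  unwrapped = unwrap "wireguardPeerConfig" { wireguardPeerConfig = { PublicKey = "abc"; }; };
  # Flat form passes through unchanged.
  passedThrough = unwrap "wireguardPeerConfig" { PublicKey = "abc"; };
}
```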
View file
@@ -63,13 +63,13 @@ in {
        ${attrsToSection def.l2tpConfig}
      '' + flip concatMapStrings def.l2tpSessions (x: ''
        [L2TPSession]
        ${attrsToSection x}
      '') + optionalString (def.wireguardConfig != { }) ''
        [WireGuard]
        ${attrsToSection def.wireguardConfig}
      '' + flip concatMapStrings def.wireguardPeers (x: ''
        [WireGuardPeer]
        ${attrsToSection x}
      '') + optionalString (def.bondConfig != { }) ''
        [Bond]
        ${attrsToSection def.bondConfig}
@@ -122,13 +122,13 @@ in {
        ${concatStringsSep "\n" (map (s: "Xfrm=${s}") def.xfrm)}
      '' + "\n" + flip concatMapStrings def.addresses (x: ''
        [Address]
        ${attrsToSection x}
      '') + flip concatMapStrings def.routingPolicyRules (x: ''
        [RoutingPolicyRule]
        ${attrsToSection x}
      '') + flip concatMapStrings def.routes (x: ''
        [Route]
        ${attrsToSection x}
      '') + optionalString (def.dhcpV4Config != { }) ''
        [DHCPv4]
        ${attrsToSection def.dhcpV4Config}
@@ -149,22 +149,22 @@ in {
        ${attrsToSection def.ipv6SendRAConfig}
      '' + flip concatMapStrings def.ipv6Prefixes (x: ''
        [IPv6Prefix]
        ${attrsToSection x}
      '') + flip concatMapStrings def.ipv6RoutePrefixes (x: ''
        [IPv6RoutePrefix]
        ${attrsToSection x}
      '') + flip concatMapStrings def.dhcpServerStaticLeases (x: ''
        [DHCPServerStaticLease]
        ${attrsToSection x}
      '') + optionalString (def.bridgeConfig != { }) ''
        [Bridge]
        ${attrsToSection def.bridgeConfig}
      '' + flip concatMapStrings def.bridgeFDBs (x: ''
        [BridgeFDB]
        ${attrsToSection x}
      '') + flip concatMapStrings def.bridgeMDBs (x: ''
        [BridgeMDB]
        ${attrsToSection x}
      '') + optionalString (def.lldpConfig != { }) ''
        [LLDP]
        ${attrsToSection def.lldpConfig}
@@ -251,7 +251,7 @@ in {
        ${attrsToSection def.quickFairQueueingConfigClass}
      '' + flip concatMapStrings def.bridgeVLANs (x: ''
        [BridgeVLAN]
        ${attrsToSection x}
      '') + def.extraConfig;
}


@@ -24,6 +24,7 @@ python3Packages.buildPythonApplication {
    coreutils
    netpbm
    python3Packages.colorama
    python3Packages.junit-xml
    python3Packages.ptpython
    qemu_pkg
    socat
@@ -46,7 +47,7 @@ python3Packages.buildPythonApplication {
    echo -e "\x1b[32m## run mypy\x1b[0m"
    mypy test_driver extract-docstrings.py
    echo -e "\x1b[32m## run ruff\x1b[0m"
    ruff check .
    echo -e "\x1b[32m## run black\x1b[0m"
    black --check --diff .
  '';


@@ -19,8 +19,8 @@ test_driver = ["py.typed"]

[tool.ruff]
line-length = 88

lint.select = ["E", "F", "I", "U", "N"]
lint.ignore = ["E501"]

# xxx: we can import https://pypi.org/project/types-colorama/ here
[[tool.mypy.overrides]]
@@ -31,6 +31,10 @@ ignore_missing_imports = true
module = "ptpython.*"
ignore_missing_imports = true

[[tool.mypy.overrides]]
module = "junit_xml.*"
ignore_missing_imports = true

[tool.black]
line-length = 88
target-version = ['py39']


@@ -6,7 +6,12 @@ from pathlib import Path

import ptpython.repl

from test_driver.driver import Driver
from test_driver.logger import (
    CompositeLogger,
    JunitXMLLogger,
    TerminalLogger,
    XMLLogger,
)


class EnvDefault(argparse.Action):
@@ -92,6 +97,11 @@ def main() -> None:
        default=Path.cwd(),
        type=writeable_dir,
    )
    arg_parser.add_argument(
        "--junit-xml",
        help="Enable JunitXML report generation to the given path",
        type=Path,
    )
    arg_parser.add_argument(
        "testscript",
        action=EnvDefault,
@@ -102,14 +112,24 @@ def main() -> None:

    args = arg_parser.parse_args()

    output_directory = args.output_directory.resolve()
    logger = CompositeLogger([TerminalLogger()])

    if "LOGFILE" in os.environ.keys():
        logger.add_logger(XMLLogger(os.environ["LOGFILE"]))

    if args.junit_xml:
        logger.add_logger(JunitXMLLogger(output_directory / args.junit_xml))

    if not args.keep_vm_state:
        logger.info("Machine state will be reset. To keep it, pass --keep-vm-state")

    with Driver(
        args.start_scripts,
        args.vlans,
        args.testscript.read_text(),
        output_directory,
        logger,
        args.keep_vm_state,
        args.global_timeout,
    ) as driver:
@@ -125,7 +145,7 @@ def main() -> None:
        tic = time.time()
        driver.run_tests()
        toc = time.time()
        logger.info(f"test script finished in {(toc-tic):.2f}s")


def generate_driver_symbols() -> None:
@@ -134,7 +154,7 @@ def generate_driver_symbols() -> None:
    in user's test scripts. That list is then used by pyflakes to lint those
    scripts.
    """
    d = Driver([], [], "", Path(), CompositeLogger([]))
    test_symbols = d.test_symbols()
    with open("driver-symbols", "w") as fp:
        fp.write(",".join(test_symbols.keys()))
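A side note on the `output_directory / args.junit_xml` join in the wiring above: pathlib's `/` operator appends a relative right-hand side to the left path, but an absolute right-hand side replaces the left entirely, so an absolute `--junit-xml` path would be used as-is. A small illustration of that standard pathlib behavior (not code from this driver):

```python
from pathlib import Path

# Relative report name is appended onto the output directory:
out = Path("/tmp/out")
print(out / Path("report.xml"))  # /tmp/out/report.xml

# pathlib discards the left operand when the right side is absolute,
# so an absolute path passed to --junit-xml wins over output_directory:
print(out / Path("/var/log/report.xml"))  # /var/log/report.xml
```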


@@ -9,7 +9,7 @@ from typing import Any, Callable, ContextManager, Dict, Iterator, List, Optional

from colorama import Fore, Style

from test_driver.logger import AbstractLogger
from test_driver.machine import Machine, NixStartScript, retry
from test_driver.polling_condition import PollingCondition
from test_driver.vlan import VLan
@@ -49,6 +49,7 @@ class Driver:
    polling_conditions: List[PollingCondition]
    global_timeout: int
    race_timer: threading.Timer
    logger: AbstractLogger

    def __init__(
        self,
@@ -56,6 +57,7 @@ class Driver:
        vlans: List[int],
        tests: str,
        out_dir: Path,
        logger: AbstractLogger,
        keep_vm_state: bool = False,
        global_timeout: int = 24 * 60 * 60 * 7,
    ):
@@ -63,12 +65,13 @@ class Driver:
        self.out_dir = out_dir
        self.global_timeout = global_timeout
        self.race_timer = threading.Timer(global_timeout, self.terminate_test)
        self.logger = logger

        tmp_dir = get_tmp_dir()

        with self.logger.nested("start all VLans"):
            vlans = list(set(vlans))
            self.vlans = [VLan(nr, tmp_dir, self.logger) for nr in vlans]

        def cmd(scripts: List[str]) -> Iterator[NixStartScript]:
            for s in scripts:
@@ -84,6 +87,7 @@ class Driver:
                tmp_dir=tmp_dir,
                callbacks=[self.check_polling_conditions],
                out_dir=self.out_dir,
                logger=self.logger,
            )
            for cmd in cmd(start_scripts)
        ]
@@ -92,19 +96,19 @@ class Driver:
        return self

    def __exit__(self, *_: Any) -> None:
        with self.logger.nested("cleanup"):
            self.race_timer.cancel()
            for machine in self.machines:
                machine.release()

    def subtest(self, name: str) -> Iterator[None]:
        """Group logs under a given test name"""
        with self.logger.subtest(name):
            try:
                yield
                return True
            except Exception as e:
                self.logger.error(f'Test "{name}" failed with error: "{e}"')
                raise e

    def test_symbols(self) -> Dict[str, Any]:
@@ -118,7 +122,7 @@ class Driver:
            machines=self.machines,
            vlans=self.vlans,
            driver=self,
            log=self.logger,
            os=os,
            create_machine=self.create_machine,
            subtest=subtest,
@@ -150,13 +154,13 @@ class Driver:

    def test_script(self) -> None:
        """Run the test script"""
        with self.logger.nested("run the VM test script"):
            symbols = self.test_symbols()  # call eagerly
            exec(self.tests, symbols, None)

    def run_tests(self) -> None:
        """Run the test script (for non-interactive test runs)"""
        self.logger.info(
            f"Test will time out and terminate in {self.global_timeout} seconds"
        )
        self.race_timer.start()
@@ -168,13 +172,13 @@ class Driver:

    def start_all(self) -> None:
        """Start all machines"""
        with self.logger.nested("start all VMs"):
            for machine in self.machines:
                machine.start()

    def join_all(self) -> None:
        """Wait for all machines to shut down"""
        with self.logger.nested("wait for all VMs to finish"):
            for machine in self.machines:
                machine.wait_for_shutdown()
            self.race_timer.cancel()
@@ -182,7 +186,7 @@ class Driver:
    def terminate_test(self) -> None:
        # This will be usually running in another thread than
        # the thread actually executing the test script.
        with self.logger.nested("timeout reached; test terminating..."):
            for machine in self.machines:
                machine.release()
            # As we cannot `sys.exit` from another thread
@@ -227,7 +231,7 @@ class Driver:
                f"Unsupported arguments passed to create_machine: {args}"
            )

            self.logger.warning(
                Fore.YELLOW
                + Style.BRIGHT
                + "WARNING: Using create_machine with a single dictionary argument is deprecated and will be removed in NixOS 24.11"
@@ -246,13 +250,14 @@ class Driver:
            start_command=cmd,
            name=name,
            keep_vm_state=keep_vm_state,
            logger=self.logger,
        )

    def serial_stdout_on(self) -> None:
        self.logger.print_serial_logs(True)

    def serial_stdout_off(self) -> None:
        self.logger.print_serial_logs(False)

    def check_polling_conditions(self) -> None:
        for condition in self.polling_conditions:
@@ -271,6 +276,7 @@ class Driver:
            def __init__(self, fun: Callable):
                self.condition = PollingCondition(
                    fun,
                    driver.logger,
                    seconds_interval,
                    description,
                )
@@ -285,15 +291,17 @@ class Driver:
            def wait(self, timeout: int = 900) -> None:
                def condition(last: bool) -> bool:
                    if last:
                        driver.logger.info(
                            f"Last chance for {self.condition.description}"
                        )
                    ret = self.condition.check(force=True)
                    if not ret and not last:
                        driver.logger.info(
                            f"({self.condition.description} failure not fatal yet)"
                        )
                    return ret

                with driver.logger.nested(f"waiting for {self.condition.description}"):
                    retry(condition, timeout=timeout)

        if fun_ is None:


@@ -1,33 +1,238 @@
import atexit
import codecs
import os
import sys
import time
import unicodedata
from abc import ABC, abstractmethod
from contextlib import ExitStack, contextmanager
from pathlib import Path
from queue import Empty, Queue
from typing import Any, Dict, Iterator, List
from xml.sax.saxutils import XMLGenerator
from xml.sax.xmlreader import AttributesImpl

from colorama import Fore, Style
from junit_xml import TestCase, TestSuite


class AbstractLogger(ABC):
    @abstractmethod
    def log(self, message: str, attributes: Dict[str, str] = {}) -> None:
        pass

    @abstractmethod
    @contextmanager
    def subtest(self, name: str, attributes: Dict[str, str] = {}) -> Iterator[None]:
        pass

    @abstractmethod
    @contextmanager
    def nested(self, message: str, attributes: Dict[str, str] = {}) -> Iterator[None]:
        pass

    @abstractmethod
    def info(self, *args, **kwargs) -> None:  # type: ignore
        pass

    @abstractmethod
    def warning(self, *args, **kwargs) -> None:  # type: ignore
        pass

    @abstractmethod
    def error(self, *args, **kwargs) -> None:  # type: ignore
        pass

    @abstractmethod
    def log_serial(self, message: str, machine: str) -> None:
        pass

    @abstractmethod
    def print_serial_logs(self, enable: bool) -> None:
        pass


class JunitXMLLogger(AbstractLogger):
    class TestCaseState:
        def __init__(self) -> None:
            self.stdout = ""
            self.stderr = ""
            self.failure = False

    def __init__(self, outfile: Path) -> None:
        self.tests: dict[str, JunitXMLLogger.TestCaseState] = {
            "main": self.TestCaseState()
        }
        self.currentSubtest = "main"
        self.outfile: Path = outfile
        self._print_serial_logs = True
        atexit.register(self.close)

    def log(self, message: str, attributes: Dict[str, str] = {}) -> None:
        self.tests[self.currentSubtest].stdout += message + os.linesep

    @contextmanager
    def subtest(self, name: str, attributes: Dict[str, str] = {}) -> Iterator[None]:
        old_test = self.currentSubtest
        self.tests.setdefault(name, self.TestCaseState())
        self.currentSubtest = name

        yield

        self.currentSubtest = old_test

    @contextmanager
    def nested(self, message: str, attributes: Dict[str, str] = {}) -> Iterator[None]:
        self.log(message)
        yield

    def info(self, *args, **kwargs) -> None:  # type: ignore
        self.tests[self.currentSubtest].stdout += args[0] + os.linesep

    def warning(self, *args, **kwargs) -> None:  # type: ignore
        self.tests[self.currentSubtest].stdout += args[0] + os.linesep

    def error(self, *args, **kwargs) -> None:  # type: ignore
        self.tests[self.currentSubtest].stderr += args[0] + os.linesep
        self.tests[self.currentSubtest].failure = True

    def log_serial(self, message: str, machine: str) -> None:
        if not self._print_serial_logs:
            return

        self.log(f"{machine} # {message}")

    def print_serial_logs(self, enable: bool) -> None:
        self._print_serial_logs = enable

    def close(self) -> None:
        with open(self.outfile, "w") as f:
            test_cases = []
            for name, test_case_state in self.tests.items():
                tc = TestCase(
                    name,
                    stdout=test_case_state.stdout,
                    stderr=test_case_state.stderr,
                )
                if test_case_state.failure:
                    tc.add_failure_info("test case failed")

                test_cases.append(tc)
            ts = TestSuite("NixOS integration test", test_cases)
            f.write(TestSuite.to_xml_string([ts]))


class CompositeLogger(AbstractLogger):
    def __init__(self, logger_list: List[AbstractLogger]) -> None:
        self.logger_list = logger_list

    def add_logger(self, logger: AbstractLogger) -> None:
        self.logger_list.append(logger)

    def log(self, message: str, attributes: Dict[str, str] = {}) -> None:
        for logger in self.logger_list:
            logger.log(message, attributes)

    @contextmanager
    def subtest(self, name: str, attributes: Dict[str, str] = {}) -> Iterator[None]:
        with ExitStack() as stack:
            for logger in self.logger_list:
                stack.enter_context(logger.subtest(name, attributes))

            yield

    @contextmanager
    def nested(self, message: str, attributes: Dict[str, str] = {}) -> Iterator[None]:
        with ExitStack() as stack:
            for logger in self.logger_list:
                stack.enter_context(logger.nested(message, attributes))

            yield

    def info(self, *args, **kwargs) -> None:  # type: ignore
        for logger in self.logger_list:
            logger.info(*args, **kwargs)

    def warning(self, *args, **kwargs) -> None:  # type: ignore
        for logger in self.logger_list:
            logger.warning(*args, **kwargs)

    def error(self, *args, **kwargs) -> None:  # type: ignore
        for logger in self.logger_list:
            logger.error(*args, **kwargs)
        sys.exit(1)

    def print_serial_logs(self, enable: bool) -> None:
        for logger in self.logger_list:
            logger.print_serial_logs(enable)

    def log_serial(self, message: str, machine: str) -> None:
        for logger in self.logger_list:
            logger.log_serial(message, machine)


class TerminalLogger(AbstractLogger):
    def __init__(self) -> None:
        self._print_serial_logs = True

    def maybe_prefix(self, message: str, attributes: Dict[str, str]) -> str:
        if "machine" in attributes:
            return f"{attributes['machine']}: {message}"
        return message

    @staticmethod
    def _eprint(*args: object, **kwargs: Any) -> None:
        print(*args, file=sys.stderr, **kwargs)

    def log(self, message: str, attributes: Dict[str, str] = {}) -> None:
        self._eprint(self.maybe_prefix(message, attributes))

    @contextmanager
    def subtest(self, name: str, attributes: Dict[str, str] = {}) -> Iterator[None]:
        with self.nested("subtest: " + name, attributes):
            yield

    @contextmanager
    def nested(self, message: str, attributes: Dict[str, str] = {}) -> Iterator[None]:
        self._eprint(
            self.maybe_prefix(
                Style.BRIGHT + Fore.GREEN + message + Style.RESET_ALL, attributes
            )
        )

        tic = time.time()
        yield
        toc = time.time()
        self.log(f"(finished: {message}, in {toc - tic:.2f} seconds)")

    def info(self, *args, **kwargs) -> None:  # type: ignore
        self.log(*args, **kwargs)

    def warning(self, *args, **kwargs) -> None:  # type: ignore
        self.log(*args, **kwargs)

    def error(self, *args, **kwargs) -> None:  # type: ignore
        self.log(*args, **kwargs)

    def print_serial_logs(self, enable: bool) -> None:
        self._print_serial_logs = enable

    def log_serial(self, message: str, machine: str) -> None:
        if not self._print_serial_logs:
            return

        self._eprint(Style.DIM + f"{machine} # {message}" + Style.RESET_ALL)


class XMLLogger(AbstractLogger):
    def __init__(self, outfile: str) -> None:
        self.logfile_handle = codecs.open(outfile, "wb")
        self.xml = XMLGenerator(self.logfile_handle, encoding="utf-8")
        self.queue: Queue[dict[str, str]] = Queue()

        self._print_serial_logs = True

        self.xml.startDocument()
        self.xml.startElement("logfile", attrs=AttributesImpl({}))

    def close(self) -> None:
        self.xml.endElement("logfile")
        self.xml.endDocument()
@@ -54,17 +259,19 @@ class Logger:

    def error(self, *args, **kwargs) -> None:  # type: ignore
        self.log(*args, **kwargs)

    def log(self, message: str, attributes: Dict[str, str] = {}) -> None:
        self.drain_log_queue()
        self.log_line(message, attributes)

    def print_serial_logs(self, enable: bool) -> None:
        self._print_serial_logs = enable

    def log_serial(self, message: str, machine: str) -> None:
        if not self._print_serial_logs:
            return

        self.enqueue({"msg": message, "machine": machine, "type": "serial"})

    def enqueue(self, item: Dict[str, str]) -> None:
        self.queue.put(item)
@@ -80,13 +287,12 @@ class Logger:
            pass

    @contextmanager
    def subtest(self, name: str, attributes: Dict[str, str] = {}) -> Iterator[None]:
        with self.nested("subtest: " + name, attributes):
            yield

    @contextmanager
    def nested(self, message: str, attributes: Dict[str, str] = {}) -> Iterator[None]:
        self.xml.startElement("nest", attrs=AttributesImpl({}))
        self.xml.startElement("head", attrs=AttributesImpl(attributes))
        self.xml.characters(message)
@@ -100,6 +306,3 @@ class Logger:
        self.log(f"(finished: {message}, in {toc - tic:.2f} seconds)")
        self.xml.endElement("nest")

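The logger split above follows a standard composite pattern: one `AbstractLogger` interface, several sinks, and a `CompositeLogger` that fans every call out to its children, using `ExitStack` so that each child's `nested()` context is entered and unwound together. A minimal self-contained sketch of that pattern (the `MemoryLogger` sink here is hypothetical, for illustration only — it is not one of the classes in this commit):

```python
from abc import ABC, abstractmethod
from contextlib import ExitStack, contextmanager
from typing import Iterator, List


class SketchLogger(ABC):
    """Minimal stand-in for the AbstractLogger interface."""

    @abstractmethod
    def log(self, message: str) -> None: ...

    @abstractmethod
    @contextmanager
    def nested(self, message: str) -> Iterator[None]: ...


class MemoryLogger(SketchLogger):
    """Hypothetical sink that records messages in a list."""

    def __init__(self) -> None:
        self.lines: List[str] = []

    def log(self, message: str) -> None:
        self.lines.append(message)

    @contextmanager
    def nested(self, message: str) -> Iterator[None]:
        self.log(f"begin: {message}")
        yield
        self.log(f"end: {message}")


class Composite(SketchLogger):
    """Fans every call out to all children, like CompositeLogger."""

    def __init__(self, children: List[SketchLogger]) -> None:
        self.children = children

    def log(self, message: str) -> None:
        for child in self.children:
            child.log(message)

    @contextmanager
    def nested(self, message: str) -> Iterator[None]:
        # ExitStack enters every child's nested() context and
        # unwinds all of them when this with-block ends.
        with ExitStack() as stack:
            for child in self.children:
                stack.enter_context(child.nested(message))
            yield


a, b = MemoryLogger(), MemoryLogger()
log = Composite([a, b])
with log.nested("subtest"):
    log.log("hello")
print(a.lines)  # ['begin: subtest', 'hello', 'end: subtest']
```

Because every sink implements the same interface, the driver code never needs to know whether it is writing to the terminal, an XML file, or a JUnit report.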

@@ -17,7 +17,7 @@ from pathlib import Path
from queue import Queue
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple

from test_driver.logger import AbstractLogger

from .qmp import QMPSession
@@ -270,6 +270,7 @@ class Machine:
        out_dir: Path,
        tmp_dir: Path,
        start_command: StartCommand,
        logger: AbstractLogger,
        name: str = "machine",
        keep_vm_state: bool = False,
        callbacks: Optional[List[Callable]] = None,
@@ -280,6 +281,7 @@ class Machine:
        self.name = name
        self.start_command = start_command
        self.callbacks = callbacks if callbacks is not None else []
        self.logger = logger

        # set up directories
        self.shared_dir = self.tmp_dir / "shared-xchg"
@@ -307,15 +309,15 @@ class Machine:
        return self.booted and self.connected

    def log(self, msg: str) -> None:
        self.logger.log(msg, {"machine": self.name})

    def log_serial(self, msg: str) -> None:
        self.logger.log_serial(msg, self.name)

    def nested(self, msg: str, attrs: Dict[str, str] = {}) -> _GeneratorContextManager:
        my_attrs = {"machine": self.name}
        my_attrs.update(attrs)
        return self.logger.nested(msg, my_attrs)

    def wait_for_monitor_prompt(self) -> str:
        assert self.monitor is not None
@@ -1113,8 +1115,8 @@ class Machine:

    def cleanup_statedir(self) -> None:
        shutil.rmtree(self.state_dir)
        self.logger.log(f"deleting VM state directory {self.state_dir}")
        self.logger.log("if you want to keep the VM state, pass --keep-vm-state")

    def shutdown(self) -> None:
        """
@@ -1221,7 +1223,7 @@ class Machine:
    def release(self) -> None:
        if self.pid is None:
            return
        self.logger.info(f"kill machine (pid {self.pid})")
        assert self.process
        assert self.shell
        assert self.monitor


@@ -2,7 +2,7 @@ import time
from math import isfinite
from typing import Callable, Optional

from test_driver.logger import AbstractLogger


class PollingConditionError(Exception):
@@ -13,6 +13,7 @@ class PollingCondition:
    condition: Callable[[], bool]
    seconds_interval: float
    description: Optional[str]
    logger: AbstractLogger

    last_called: float
    entry_count: int
@@ -20,11 +21,13 @@ class PollingCondition:
    def __init__(
        self,
        condition: Callable[[], Optional[bool]],
        logger: AbstractLogger,
        seconds_interval: float = 2.0,
        description: Optional[str] = None,
    ):
        self.condition = condition  # type: ignore
        self.seconds_interval = seconds_interval
        self.logger = logger

        if description is None:
            if condition.__doc__:
@@ -41,7 +44,7 @@ class PollingCondition:
        if (self.entered or not self.overdue) and not force:
            return True

        with self, self.logger.nested(self.nested_message):
            time_since_last = time.monotonic() - self.last_called
            last_message = (
                f"Time since last: {time_since_last:.2f}s"
@@ -49,13 +52,13 @@ class PollingCondition:
                else "(not called yet)"
            )

            self.logger.info(last_message)
            try:
                res = self.condition()  # type: ignore
            except Exception:
                res = False
            res = res is None or res
            self.logger.info(self.status_message(res))
            return res

    def maybe_raise(self) -> None:


@@ -4,7 +4,7 @@ import pty
import subprocess
from pathlib import Path

from test_driver.logger import AbstractLogger


class VLan:
@@ -19,17 +19,20 @@ class VLan:
    pid: int
    fd: io.TextIOBase

    logger: AbstractLogger

    def __repr__(self) -> str:
        return f"<Vlan Nr. {self.nr}>"

    def __init__(self, nr: int, tmp_dir: Path, logger: AbstractLogger):
        self.nr = nr
        self.socket_dir = tmp_dir / f"vde{self.nr}.ctl"
        self.logger = logger

        # TODO: don't side-effect environment here
        os.environ[f"QEMU_VDE_SOCKET_{self.nr}"] = str(self.socket_dir)

        self.logger.info("start vlan")
        pty_master, pty_slave = pty.openpty()

        # The --hub is required for the scenario determined by
@@ -52,11 +55,11 @@ class VLan:
        assert self.process.stdout is not None
        self.process.stdout.readline()
        if not (self.socket_dir / "ctl").exists():
            self.logger.error("cannot start vde_switch")

        self.logger.info(f"running vlan (pid {self.pid}; ctl {self.socket_dir})")

    def __del__(self) -> None:
        self.logger.info(f"kill vlan (pid {self.pid})")
        self.fd.close()
        self.process.terminate()


@@ -4,7 +4,7 @@
from test_driver.driver import Driver
from test_driver.vlan import VLan
from test_driver.machine import Machine
from test_driver.logger import AbstractLogger
from typing import Callable, Iterator, ContextManager, Optional, List, Dict, Any, Union
from typing_extensions import Protocol
from pathlib import Path
@@ -44,7 +44,7 @@ test_script: Callable[[], None]
machines: List[Machine]
vlans: List[VLan]
driver: Driver
log: AbstractLogger
create_machine: CreateMachineProtocol
run_tests: Callable[[], None]
join_all: Callable[[], None]


@@ -1,7 +1,36 @@
# Amazon images

AMIs are regularly uploaded from Hydra. This automation lives in
https://github.com/NixOS/amis

## How to upload an AMI for testing

If you want to upload an AMI built from changes in a local nixpkgs checkout:

```bash
nix-build nixos/release.nix -A amazonImage
export AWS_REGION=us-west-2
export AWS_PROFILE=my-profile
nix run nixpkgs#upload-ami -- --image-info ./result/nix-support/image-info.json
```

## How to build your own NixOS config into an AMI

I suggest looking at https://github.com/nix-community/nixos-generators for a user-friendly interface.

```bash
nixos-generate -c ./my-config.nix -f amazon
export AWS_REGION=us-west-2
export AWS_PROFILE=my-profile
nix run github:NixOS/amis#upload-ami -- --image-info ./result/nix-support/image-info.json
```

## Roadmap

* @arianvp is planning to drop zfs support unless someone else picks it up
* @arianvp is planning to rewrite the image builder to use the repart-based image builder.
* @arianvp is planning to perhaps rewrite `upload-ami` to use coldsnap
* @arianvp is planning to move `upload-ami` tooling into nixpkgs once it has stabilized, and only keep the GitHub Action in a separate repo


@@ -71,9 +71,8 @@ in {
    '';

  zfsBuilder = import ../../../lib/make-multi-disk-zfs-image.nix {
    inherit lib config configFile pkgs;
    inherit (cfg) contents format name;

    includeChannel = true;
@@ -120,10 +119,9 @@ in {
  };

  extBuilder = import ../../../lib/make-disk-image.nix {
    inherit lib config configFile pkgs;
    inherit (cfg) contents format name;

    fsType = "ext4";
    partitionTableType = if config.ec2.efi then "efi" else "legacy+gpt";


@@ -1,368 +0,0 @@
#!/usr/bin/env nix-shell
#!nix-shell -p awscli -p jq -p qemu -i bash
# shellcheck shell=bash
#
# Future Deprecation?
# This entire thing should probably be replaced with a generic terraform config
# Uploads and registers NixOS images built from the
# <nixos/release.nix> amazonImage attribute. Images are uploaded and
# registered via a home region, and then copied to other regions.
# The home region requires an s3 bucket, and an IAM role named "vmimport"
# (by default) with access to the S3 bucket. The name can be
# configured with the "service_role_name" variable. Configuration of the
# vmimport role is documented in
# https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html
# set -x
set -euo pipefail
var () { true; }
# configuration
var ${state_dir:=$HOME/amis/ec2-images}
var ${home_region:=eu-west-1}
var ${bucket:=nixos-amis}
var ${service_role_name:=vmimport}
# Output of the command:
# $ nix-shell -I nixpkgs=. -p awscli --run 'aws ec2 describe-regions --region us-east-1 --all-regions --query "Regions[].{Name:RegionName}" --output text | sort | sed -e s/^/\ \ /'
var ${regions:=
af-south-1
ap-east-1
ap-northeast-1
ap-northeast-2
ap-northeast-3
ap-south-1
ap-south-2
ap-southeast-1
ap-southeast-2
ap-southeast-3
ap-southeast-4
ca-central-1
eu-central-1
eu-central-2
eu-north-1
eu-south-1
eu-south-2
eu-west-1
eu-west-2
eu-west-3
il-central-1
me-central-1
me-south-1
sa-east-1
us-east-1
us-east-2
us-west-1
us-west-2
}
regions=($regions)
log() {
echo "$@" >&2
}
if [ "$#" -ne 1 ]; then
log "Usage: ./upload-amazon-image.sh IMAGE_OUTPUT"
exit 1
fi
# result of the amazon-image from nixos/release.nix
store_path=$1
if [ ! -e "$store_path" ]; then
log "Store path: $store_path does not exist, fetching..."
nix-store --realise "$store_path"
fi
if [ ! -d "$store_path" ]; then
log "store_path: $store_path is not a directory. aborting"
exit 1
fi
read_image_info() {
if [ ! -e "$store_path/nix-support/image-info.json" ]; then
log "Image missing metadata"
exit 1
fi
jq -r "$1" "$store_path/nix-support/image-info.json"
}
# We handle a single image per invocation, store all attributes in
# globals for convenience.
zfs_disks=$(read_image_info .disks)
is_zfs_image=
if jq -e .boot <<< "$zfs_disks"; then
is_zfs_image=1
zfs_boot=".disks.boot"
fi
image_label="$(read_image_info .label)${is_zfs_image:+-ZFS}"
image_system=$(read_image_info .system)
image_files=( $(read_image_info ".disks.root.file") )
image_logical_bytes=$(read_image_info "${zfs_boot:-.disks.root}.logical_bytes")
if [[ -n "$is_zfs_image" ]]; then
image_files+=( $(read_image_info .disks.boot.file) )
fi
# Derived attributes
image_logical_gigabytes=$(((image_logical_bytes-1)/1024/1024/1024+1)) # Round to the next GB
case "$image_system" in
aarch64-linux)
amazon_arch=arm64
;;
x86_64-linux)
amazon_arch=x86_64
;;
*)
log "Unknown system: $image_system"
exit 1
esac
image_name="NixOS-${image_label}-${image_system}"
image_description="NixOS ${image_label} ${image_system}"
log "Image Details:"
log " Name: $image_name"
log " Description: $image_description"
log " Size (gigabytes): $image_logical_gigabytes"
log " System: $image_system"
log " Amazon Arch: $amazon_arch"
read_state() {
local state_key=$1
local type=$2
cat "$state_dir/$state_key.$type" 2>/dev/null || true
}
write_state() {
local state_key=$1
local type=$2
local val=$3
mkdir -p "$state_dir"
echo "$val" > "$state_dir/$state_key.$type"
}
wait_for_import() {
local region=$1
local task_id=$2
local state snapshot_id
log "Waiting for import task $task_id to be completed"
while true; do
read -r state message snapshot_id < <(
aws ec2 describe-import-snapshot-tasks --region "$region" --import-task-ids "$task_id" | \
jq -r '.ImportSnapshotTasks[].SnapshotTaskDetail | "\(.Status) \(.StatusMessage) \(.SnapshotId)"'
)
log " ... state=$state message=$message snapshot_id=$snapshot_id"
case "$state" in
active)
sleep 10
;;
completed)
echo "$snapshot_id"
return
;;
*)
log "Unexpected snapshot import state: '${state}'"
log "Full response: "
aws ec2 describe-import-snapshot-tasks --region "$region" --import-task-ids "$task_id" >&2
exit 1
;;
esac
done
}
wait_for_image() {
local region=$1
local ami_id=$2
local state
log "Waiting for image $ami_id to be available"
while true; do
read -r state < <(
aws ec2 describe-images --image-ids "$ami_id" --region "$region" | \
jq -r ".Images[].State"
)
log " ... state=$state"
case "$state" in
pending)
sleep 10
;;
available)
return
;;
*)
log "Unexpected AMI state: '${state}'"
exit 1
;;
esac
done
}
make_image_public() {
local region=$1
local ami_id=$2
wait_for_image "$region" "$ami_id"
log "Making image $ami_id public"
aws ec2 modify-image-attribute \
--image-id "$ami_id" --region "$region" --launch-permission 'Add={Group=all}' >&2
}
upload_image() {
local region=$1
for image_file in "${image_files[@]}"; do
local aws_path=${image_file#/}
if [[ -n "$is_zfs_image" ]]; then
local suffix=${image_file%.*}
suffix=${suffix##*.}
fi
local state_key="$region.$image_label${suffix:+.${suffix}}.$image_system"
local task_id
task_id=$(read_state "$state_key" task_id)
local snapshot_id
snapshot_id=$(read_state "$state_key" snapshot_id)
local ami_id
ami_id=$(read_state "$state_key" ami_id)
if [ -z "$task_id" ]; then
log "Checking for image on S3"
if ! aws s3 ls --region "$region" "s3://${bucket}/${aws_path}" >&2; then
log "Image missing from aws, uploading"
aws s3 cp --region "$region" "$image_file" "s3://${bucket}/${aws_path}" >&2
fi
log "Importing image from S3 path s3://$bucket/$aws_path"
task_id=$(aws ec2 import-snapshot --role-name "$service_role_name" --disk-container "{
\"Description\": \"nixos-image-${image_label}-${image_system}\",
\"Format\": \"vhd\",
\"UserBucket\": {
\"S3Bucket\": \"$bucket\",
\"S3Key\": \"$aws_path\"
}
}" --region "$region" | jq -r '.ImportTaskId')
write_state "$state_key" task_id "$task_id"
fi
if [ -z "$snapshot_id" ]; then
snapshot_id=$(wait_for_import "$region" "$task_id")
write_state "$state_key" snapshot_id "$snapshot_id"
fi
done
if [ -z "$ami_id" ]; then
log "Registering snapshot $snapshot_id as AMI"
local block_device_mappings=(
"DeviceName=/dev/xvda,Ebs={SnapshotId=$snapshot_id,VolumeSize=$image_logical_gigabytes,DeleteOnTermination=true,VolumeType=gp3}"
)
if [[ -n "$is_zfs_image" ]]; then
local root_snapshot_id=$(read_state "$region.$image_label.root.$image_system" snapshot_id)
local root_image_logical_bytes=$(read_image_info ".disks.root.logical_bytes")
local root_image_logical_gigabytes=$(((root_image_logical_bytes-1)/1024/1024/1024+1)) # Round to the next GB
block_device_mappings+=(
"DeviceName=/dev/xvdb,Ebs={SnapshotId=$root_snapshot_id,VolumeSize=$root_image_logical_gigabytes,DeleteOnTermination=true,VolumeType=gp3}"
)
fi
local extra_flags=(
--root-device-name /dev/xvda
--sriov-net-support simple
--ena-support
--virtualization-type hvm
)
block_device_mappings+=("DeviceName=/dev/sdb,VirtualName=ephemeral0")
block_device_mappings+=("DeviceName=/dev/sdc,VirtualName=ephemeral1")
block_device_mappings+=("DeviceName=/dev/sdd,VirtualName=ephemeral2")
block_device_mappings+=("DeviceName=/dev/sde,VirtualName=ephemeral3")
ami_id=$(
aws ec2 register-image \
--name "$image_name" \
--description "$image_description" \
--region "$region" \
--architecture $amazon_arch \
--block-device-mappings "${block_device_mappings[@]}" \
--boot-mode $(read_image_info .boot_mode) \
"${extra_flags[@]}" \
| jq -r '.ImageId'
)
write_state "$state_key" ami_id "$ami_id"
fi
[[ -v PRIVATE ]] || make_image_public "$region" "$ami_id"
echo "$ami_id"
}
copy_to_region() {
local region=$1
local from_region=$2
local from_ami_id=$3
state_key="$region.$image_label.$image_system"
ami_id=$(read_state "$state_key" ami_id)
if [ -z "$ami_id" ]; then
log "Copying $from_ami_id to $region"
ami_id=$(
aws ec2 copy-image \
--region "$region" \
--source-region "$from_region" \
--source-image-id "$from_ami_id" \
--name "$image_name" \
--description "$image_description" \
| jq -r '.ImageId'
)
write_state "$state_key" ami_id "$ami_id"
fi
[[ -v PRIVATE ]] || make_image_public "$region" "$ami_id"
echo "$ami_id"
}
upload_all() {
home_image_id=$(upload_image "$home_region")
jq -n \
--arg key "$home_region.$image_system" \
--arg value "$home_image_id" \
'$ARGS.named'
for region in "${regions[@]}"; do
if [ "$region" = "$home_region" ]; then
continue
fi
copied_image_id=$(copy_to_region "$region" "$home_region" "$home_image_id")
jq -n \
--arg key "$region.$image_system" \
--arg value "$copied_image_id" \
'$ARGS.named'
done
}
upload_all | jq --slurp from_entries
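The script above stays re-runnable because every slow AWS step (upload, import, register, copy) caches its result through `read_state`/`write_state` keyed files. A stripped-down sketch of that idempotency pattern (the state key and the task ID are illustrative placeholders, not real AWS values):

```shell
# Minimal version of the script's read_state/write_state caching:
# a step only runs when no state file exists for its key, so a
# re-invocation after a crash skips everything already finished.
state_dir=$(mktemp -d)

read_state()  { cat "$state_dir/$1.$2" 2>/dev/null || true; }
write_state() { mkdir -p "$state_dir"; echo "$3" > "$state_dir/$1.$2"; }

task_id=$(read_state eu-west-1.24.05 task_id)
if [ -z "$task_id" ]; then
  task_id="import-snap-0123"   # stand-in for the real import-snapshot call
  write_state eu-west-1.24.05 task_id "$task_id"
fi
echo "$task_id"
```

A second run of the same snippet would find the cached file and never re-enter the `if` body.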


@@ -20,7 +20,7 @@
    };
  in ''
    if [ ! -e /etc/nixos/configuration.nix ]; then
-     install -m 644 -D ${config} /etc/nixos/configuration.nix
+     install -m 0644 -D ${config} /etc/nixos/configuration.nix
    fi
  '';


@@ -20,8 +20,7 @@
    };
  in ''
    if [ ! -e /etc/nixos/configuration.nix ]; then
-     mkdir -p /etc/nixos
-     cp ${config} /etc/nixos/configuration.nix
+     install -m 0644 -D ${config} /etc/nixos/configuration.nix
    fi
  '';


@@ -46,18 +46,20 @@ with lib;
      graphviz = super.graphviz-nox;
      gst_all_1 = super.gst_all_1 // {
        gst-plugins-bad = super.gst_all_1.gst-plugins-bad.override { guiSupport = false; };
-       gst-plugins-base = super.gst_all_1.gst-plugins-base.override { enableWayland = false; enableX11 = false; };
+       gst-plugins-base = super.gst_all_1.gst-plugins-base.override { enableGl = false; enableWayland = false; enableX11 = false; };
        gst-plugins-good = super.gst_all_1.gst-plugins-good.override { enableWayland = false; enableX11 = false; gtkSupport = false; qt5Support = false; qt6Support = false; };
+       gst-plugins-rs = super.gst_all_1.gst-plugins-rs.override { withGtkPlugins = false; };
      };
      imagemagick = super.imagemagick.override { libX11Support = false; libXtSupport = false; };
      imagemagickBig = super.imagemagickBig.override { libX11Support = false; libXtSupport = false; };
      intel-vaapi-driver = super.intel-vaapi-driver.override { enableGui = false; };
      libdevil = super.libdevil-nox;
      libextractor = super.libextractor.override { gtkSupport = false; };
+     libplacebo = super.libplacebo.override { vulkanSupport = false; };
      libva = super.libva-minimal;
      limesuite = super.limesuite.override { withGui = false; };
      mc = super.mc.override { x11Support = false; };
-     mpv-unwrapped = super.mpv-unwrapped.override { sdl2Support = false; x11Support = false; waylandSupport = false; };
+     mpv-unwrapped = super.mpv-unwrapped.override { drmSupport = false; screenSaverSupport = false; sdl2Support = false; vulkanSupport = false; waylandSupport = false; x11Support = false; };
      msmtp = super.msmtp.override { withKeyring = false; };
      mupdf = super.mupdf.override { enableGL = false; enableX11 = false; };
      neofetch = super.neofetch.override { x11Support = false; };
@@ -70,6 +72,7 @@ with lib;
      networkmanager-vpnc = super.networkmanager-vpnc.override { withGnome = false; };
      pango = super.pango.override { x11Support = false; };
      pinentry-curses = super.pinentry-curses.override { withLibsecret = false; };
+     pinentry-tty = super.pinentry-tty.override { withLibsecret = false; };
      pipewire = super.pipewire.override { vulkanSupport = false; x11Support = false; };
      pythonPackagesExtensions = super.pythonPackagesExtensions ++ [
        (python-final: python-prev: {


@@ -12,7 +12,7 @@ with lib;
    system.nssModules = mkOption {
      type = types.listOf types.path;
      internal = true;
-     default = [];
+     default = [ ];
      description = ''
        Search path for NSS (Name Service Switch) modules. This allows
        several DNS resolution methods to be specified via
@@ -35,7 +35,7 @@ with lib;
        This option only takes effect if nscd is enabled.
      '';
-     default = [];
+     default = [ ];
    };
    group = mkOption {
@@ -47,7 +47,7 @@ with lib;
        This option only takes effect if nscd is enabled.
      '';
-     default = [];
+     default = [ ];
    };
    shadow = mkOption {
@@ -59,7 +59,19 @@ with lib;
        This option only takes effect if nscd is enabled.
      '';
-     default = [];
+     default = [ ];
+   };
+
+   sudoers = mkOption {
+     type = types.listOf types.str;
+     description = ''
+       List of sudoers entries to configure in {file}`/etc/nsswitch.conf`.
+       Note that "files" is always prepended.
+       This option only takes effect if nscd is enabled.
+     '';
+     default = [ ];
    };
    hosts = mkOption {
@@ -71,7 +83,7 @@ with lib;
        This option only takes effect if nscd is enabled.
      '';
-     default = [];
+     default = [ ];
    };
    services = mkOption {
@@ -83,7 +95,7 @@ with lib;
        This option only takes effect if nscd is enabled.
      '';
-     default = [];
+     default = [ ];
    };
  };
};
@@ -112,6 +124,7 @@ with lib;
      passwd: ${concatStringsSep " " config.system.nssDatabases.passwd}
      group: ${concatStringsSep " " config.system.nssDatabases.group}
      shadow: ${concatStringsSep " " config.system.nssDatabases.shadow}
+     sudoers: ${concatStringsSep " " config.system.nssDatabases.sudoers}
      hosts: ${concatStringsSep " " config.system.nssDatabases.hosts}
      networks: files
@@ -126,6 +139,7 @@ with lib;
      passwd = mkBefore [ "files" ];
      group = mkBefore [ "files" ];
      shadow = mkBefore [ "files" ];
+     sudoers = mkBefore [ "files" ];
      hosts = mkMerge [
        (mkOrder 998 [ "files" ])
        (mkOrder 1499 [ "dns" ])


@@ -96,7 +96,7 @@ in
          Sets which portal backend should be used to provide the implementation
          for the requested interface. For details check {manpage}`portals.conf(5)`.
-         Configs will be linked to `/etx/xdg/xdg-desktop-portal/` with the name `$desktop-portals.conf`
+         Configs will be linked to `/etc/xdg/xdg-desktop-portal/` with the name `$desktop-portals.conf`
          for `xdg.portal.config.$desktop` and `portals.conf` for `xdg.portal.config.common`
          as an exception.
        '';


@@ -13,7 +13,7 @@ let
      } // optionalAttrs (p.description != null) {
        D = p.description;
      } // optionalAttrs (p.ppdOptions != {}) {
-       o = mapAttrsToList (name: value: "'${name}'='${value}'") p.ppdOptions;
+       o = mapAttrsToList (name: value: "${name}=${value}") p.ppdOptions;
      });
    in ''
      ${pkgs.cups}/bin/lpadmin ${args} -E


@@ -3,12 +3,10 @@
   lib,
   pkgs,
   ...
-}: let
+}:
+let
   nvidiaEnabled = (lib.elem "nvidia" config.services.xserver.videoDrivers);
-  nvidia_x11 =
-    if nvidiaEnabled || cfg.datacenter.enable
-    then cfg.package
-    else null;
+  nvidia_x11 = if nvidiaEnabled || cfg.datacenter.enable then cfg.package else null;
   cfg = config.hardware.nvidia;
@@ -19,8 +17,9 @@
   primeEnabled = syncCfg.enable || reverseSyncCfg.enable || offloadCfg.enable;
   busIDType = lib.types.strMatching "([[:print:]]+[\:\@][0-9]{1,3}\:[0-9]{1,2}\:[0-9])?";
   ibtSupport = cfg.open || (nvidia_x11.ibtSupport or false);
-  settingsFormat = pkgs.formats.keyValue {};
-in {
+  settingsFormat = pkgs.formats.keyValue { };
+in
+{
   options = {
     hardware.nvidia = {
       datacenter.enable = lib.mkEnableOption ''
@@ -29,50 +28,50 @@ in {
       datacenter.settings = lib.mkOption {
         type = settingsFormat.type;
         default = {
-          LOG_LEVEL=4;
-          LOG_FILE_NAME="/var/log/fabricmanager.log";
-          LOG_APPEND_TO_LOG=1;
-          LOG_FILE_MAX_SIZE=1024;
-          LOG_USE_SYSLOG=0;
-          DAEMONIZE=1;
-          BIND_INTERFACE_IP="127.0.0.1";
-          STARTING_TCP_PORT=16000;
-          FABRIC_MODE=0;
-          FABRIC_MODE_RESTART=0;
-          STATE_FILE_NAME="/var/tmp/fabricmanager.state";
-          FM_CMD_BIND_INTERFACE="127.0.0.1";
-          FM_CMD_PORT_NUMBER=6666;
-          FM_STAY_RESIDENT_ON_FAILURES=0;
-          ACCESS_LINK_FAILURE_MODE=0;
-          TRUNK_LINK_FAILURE_MODE=0;
-          NVSWITCH_FAILURE_MODE=0;
-          ABORT_CUDA_JOBS_ON_FM_EXIT=1;
-          TOPOLOGY_FILE_PATH="${nvidia_x11.fabricmanager}/share/nvidia-fabricmanager/nvidia/nvswitch";
-          DATABASE_PATH="${nvidia_x11.fabricmanager}/share/nvidia-fabricmanager/nvidia/nvswitch";
+          LOG_LEVEL = 4;
+          LOG_FILE_NAME = "/var/log/fabricmanager.log";
+          LOG_APPEND_TO_LOG = 1;
+          LOG_FILE_MAX_SIZE = 1024;
+          LOG_USE_SYSLOG = 0;
+          DAEMONIZE = 1;
+          BIND_INTERFACE_IP = "127.0.0.1";
+          STARTING_TCP_PORT = 16000;
+          FABRIC_MODE = 0;
+          FABRIC_MODE_RESTART = 0;
+          STATE_FILE_NAME = "/var/tmp/fabricmanager.state";
+          FM_CMD_BIND_INTERFACE = "127.0.0.1";
+          FM_CMD_PORT_NUMBER = 6666;
+          FM_STAY_RESIDENT_ON_FAILURES = 0;
+          ACCESS_LINK_FAILURE_MODE = 0;
+          TRUNK_LINK_FAILURE_MODE = 0;
+          NVSWITCH_FAILURE_MODE = 0;
+          ABORT_CUDA_JOBS_ON_FM_EXIT = 1;
+          TOPOLOGY_FILE_PATH = "${nvidia_x11.fabricmanager}/share/nvidia-fabricmanager/nvidia/nvswitch";
+          DATABASE_PATH = "${nvidia_x11.fabricmanager}/share/nvidia-fabricmanager/nvidia/nvswitch";
         };
         defaultText = lib.literalExpression ''
           {
             LOG_LEVEL=4;
             LOG_FILE_NAME="/var/log/fabricmanager.log";
             LOG_APPEND_TO_LOG=1;
             LOG_FILE_MAX_SIZE=1024;
             LOG_USE_SYSLOG=0;
             DAEMONIZE=1;
             BIND_INTERFACE_IP="127.0.0.1";
             STARTING_TCP_PORT=16000;
             FABRIC_MODE=0;
             FABRIC_MODE_RESTART=0;
             STATE_FILE_NAME="/var/tmp/fabricmanager.state";
             FM_CMD_BIND_INTERFACE="127.0.0.1";
             FM_CMD_PORT_NUMBER=6666;
             FM_STAY_RESIDENT_ON_FAILURES=0;
             ACCESS_LINK_FAILURE_MODE=0;
             TRUNK_LINK_FAILURE_MODE=0;
             NVSWITCH_FAILURE_MODE=0;
             ABORT_CUDA_JOBS_ON_FM_EXIT=1;
             TOPOLOGY_FILE_PATH="''${nvidia_x11.fabricmanager}/share/nvidia-fabricmanager/nvidia/nvswitch";
             DATABASE_PATH="''${nvidia_x11.fabricmanager}/share/nvidia-fabricmanager/nvidia/nvswitch";
           }
         '';
         description = ''
           Additional configuration options for fabricmanager.
@@ -211,7 +210,9 @@ in {
       (lib.mkEnableOption ''
         nvidia-settings, NVIDIA's GUI configuration tool
       '')
-      // {default = true;};
+      // {
+        default = true;
+      };
       nvidiaPersistenced = lib.mkEnableOption ''
         nvidia-persistenced a update for NVIDIA GPU headless mode, i.e.
@@ -226,7 +227,8 @@ in {
       '';
       package = lib.mkOption {
-        default = config.boot.kernelPackages.nvidiaPackages."${if cfg.datacenter.enable then "dc" else "stable"}";
+        default =
+          config.boot.kernelPackages.nvidiaPackages."${if cfg.datacenter.enable then "dc" else "stable"}";
         defaultText = lib.literalExpression ''
           config.boot.kernelPackages.nvidiaPackages."\$\{if cfg.datacenter.enable then "dc" else "stable"}"
         '';
@@ -242,403 +244,404 @@ in {
     };
   };
-  config = let
-    igpuDriver =
-      if pCfg.intelBusId != ""
-      then "modesetting"
-      else "amdgpu";
-    igpuBusId =
-      if pCfg.intelBusId != ""
-      then pCfg.intelBusId
-      else pCfg.amdgpuBusId;
-  in
-    lib.mkIf (nvidia_x11 != null) (lib.mkMerge [
-      # Common
-      ({
-        assertions = [
-          {
-            assertion = !(nvidiaEnabled && cfg.datacenter.enable);
-            message = "You cannot configure both X11 and Data Center drivers at the same time.";
-          }
-        ];
-        boot = {
-          blacklistedKernelModules = ["nouveau" "nvidiafb"];
-          # Don't add `nvidia-uvm` to `kernelModules`, because we want
-          # `nvidia-uvm` be loaded only after `udev` rules for `nvidia` kernel
-          # module are applied.
-          #
-          # Instead, we use `softdep` to lazily load `nvidia-uvm` kernel module
-          # after `nvidia` kernel module is loaded and `udev` rules are applied.
-          extraModprobeConfig = ''
-            softdep nvidia post: nvidia-uvm
-          '';
-        };
-        systemd.tmpfiles.rules =
-          lib.optional config.virtualisation.docker.enableNvidia
-          "L+ /run/nvidia-docker/bin - - - - ${nvidia_x11.bin}/origBin";
-        services.udev.extraRules =
-          ''
-            # Create /dev/nvidia-uvm when the nvidia-uvm module is loaded.
-            KERNEL=="nvidia", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidiactl c 195 255'"
-            KERNEL=="nvidia", RUN+="${pkgs.runtimeShell} -c 'for i in $$(cat /proc/driver/nvidia/gpus/*/information | grep Minor | cut -d \ -f 4); do mknod -m 666 /dev/nvidia$${i} c 195 $${i}; done'"
-            KERNEL=="nvidia_modeset", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidia-modeset c 195 254'"
-            KERNEL=="nvidia_uvm", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidia-uvm c $$(grep nvidia-uvm /proc/devices | cut -d \ -f 1) 0'"
-            KERNEL=="nvidia_uvm", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidia-uvm-tools c $$(grep nvidia-uvm /proc/devices | cut -d \ -f 1) 1'"
-          '';
-        hardware.opengl = {
-          extraPackages = [
-            nvidia_x11.out
-          ];
-          extraPackages32 = [
-            nvidia_x11.lib32
-          ];
-        };
-        environment.systemPackages = [
-          nvidia_x11.bin
-        ];
-      })
-      # X11
-      (lib.mkIf nvidiaEnabled {
-        assertions = [
-          {
-            assertion = primeEnabled -> pCfg.intelBusId == "" || pCfg.amdgpuBusId == "";
-            message = "You cannot configure both an Intel iGPU and an AMD APU. Pick the one corresponding to your processor.";
-          }
-          {
-            assertion = offloadCfg.enableOffloadCmd -> offloadCfg.enable || reverseSyncCfg.enable;
-            message = "Offload command requires offloading or reverse prime sync to be enabled.";
-          }
-          {
-            assertion = primeEnabled -> pCfg.nvidiaBusId != "" && (pCfg.intelBusId != "" || pCfg.amdgpuBusId != "");
-            message = "When NVIDIA PRIME is enabled, the GPU bus IDs must be configured.";
-          }
-          {
-            assertion = offloadCfg.enable -> lib.versionAtLeast nvidia_x11.version "435.21";
-            message = "NVIDIA PRIME render offload is currently only supported on versions >= 435.21.";
-          }
-          {
-            assertion = (reverseSyncCfg.enable && pCfg.amdgpuBusId != "") -> lib.versionAtLeast nvidia_x11.version "470.0";
-            message = "NVIDIA PRIME render offload for AMD APUs is currently only supported on versions >= 470 beta.";
-          }
-          {
-            assertion = !(syncCfg.enable && offloadCfg.enable);
-            message = "PRIME Sync and Offload cannot be both enabled";
-          }
-          {
-            assertion = !(syncCfg.enable && reverseSyncCfg.enable);
-            message = "PRIME Sync and PRIME Reverse Sync cannot be both enabled";
-          }
-          {
-            assertion = !(syncCfg.enable && cfg.powerManagement.finegrained);
-            message = "Sync precludes powering down the NVIDIA GPU.";
-          }
-          {
-            assertion = cfg.powerManagement.finegrained -> offloadCfg.enable;
-            message = "Fine-grained power management requires offload to be enabled.";
-          }
-          {
-            assertion = cfg.powerManagement.enable -> lib.versionAtLeast nvidia_x11.version "430.09";
-            message = "Required files for driver based power management only exist on versions >= 430.09.";
-          }
-          {
-            assertion = cfg.open -> (cfg.package ? open && cfg.package ? firmware);
-            message = "This version of NVIDIA driver does not provide a corresponding opensource kernel driver";
-          }
-          {
-            assertion = cfg.dynamicBoost.enable -> lib.versionAtLeast nvidia_x11.version "510.39.01";
-            message = "NVIDIA's Dynamic Boost feature only exists on versions >= 510.39.01";
-          }];
-        # If Optimus/PRIME is enabled, we:
-        # - Specify the configured NVIDIA GPU bus ID in the Device section for the
-        #   "nvidia" driver.
-        # - Add the AllowEmptyInitialConfiguration option to the Screen section for the
-        #   "nvidia" driver, in order to allow the X server to start without any outputs.
-        # - Add a separate Device section for the Intel GPU, using the "modesetting"
-        #   driver and with the configured BusID.
-        # - OR add a separate Device section for the AMD APU, using the "amdgpu"
-        #   driver and with the configures BusID.
-        # - Reference that Device section from the ServerLayout section as an inactive
-        #   device.
-        # - Configure the display manager to run specific `xrandr` commands which will
-        #   configure/enable displays connected to the Intel iGPU / AMD APU.
-        # reverse sync implies offloading
-        hardware.nvidia.prime.offload.enable = lib.mkDefault reverseSyncCfg.enable;
-        services.xserver.drivers =
-          lib.optional primeEnabled {
-            name = igpuDriver;
-            display = offloadCfg.enable;
-            modules = lib.optional (igpuDriver == "amdgpu") pkgs.xorg.xf86videoamdgpu;
-            deviceSection =
-              ''
-                BusID "${igpuBusId}"
-              ''
-              + lib.optionalString (syncCfg.enable && igpuDriver != "amdgpu") ''
-                Option "AccelMethod" "none"
-              '';
-          }
-          ++ lib.singleton {
-            name = "nvidia";
-            modules = [nvidia_x11.bin];
-            display = !offloadCfg.enable;
-            deviceSection =
-              ''
-                Option "SidebandSocketPath" "/run/nvidia-xdriver/"
-              '' +
-              lib.optionalString primeEnabled
-              ''
-                BusID "${pCfg.nvidiaBusId}"
-              ''
-              + lib.optionalString pCfg.allowExternalGpu ''
-                Option "AllowExternalGpus"
-              '';
-            screenSection =
-              ''
-                Option "RandRRotation" "on"
-              ''
-              + lib.optionalString syncCfg.enable ''
-                Option "AllowEmptyInitialConfiguration"
-              ''
-              + lib.optionalString cfg.forceFullCompositionPipeline ''
-                Option "metamodes" "nvidia-auto-select +0+0 {ForceFullCompositionPipeline=On}"
-                Option "AllowIndirectGLXProtocol" "off"
-                Option "TripleBuffer" "on"
-              '';
-          };
-        services.xserver.serverLayoutSection =
-          lib.optionalString syncCfg.enable ''
-            Inactive "Device-${igpuDriver}[0]"
-          ''
-          + lib.optionalString reverseSyncCfg.enable ''
-            Inactive "Device-nvidia[0]"
-          ''
-          + lib.optionalString offloadCfg.enable ''
-            Option "AllowNVIDIAGPUScreens"
-          '';
-        services.xserver.displayManager.setupCommands = let
-          gpuProviderName =
-            if igpuDriver == "amdgpu"
-            then
-              # find the name of the provider if amdgpu
-              "`${lib.getExe pkgs.xorg.xrandr} --listproviders | ${lib.getExe pkgs.gnugrep} -i AMD | ${lib.getExe pkgs.gnused} -n 's/^.*name://p'`"
-            else igpuDriver;
-          providerCmdParams =
-            if syncCfg.enable
-            then "\"${gpuProviderName}\" NVIDIA-0"
-            else "NVIDIA-G0 \"${gpuProviderName}\"";
-        in
-          lib.optionalString (syncCfg.enable || reverseSyncCfg.enable) ''
-            # Added by nvidia configuration module for Optimus/PRIME.
-            ${lib.getExe pkgs.xorg.xrandr} --setprovideroutputsource ${providerCmdParams}
-            ${lib.getExe pkgs.xorg.xrandr} --auto
-          '';
-        environment.etc = {
-          "nvidia/nvidia-application-profiles-rc" = lib.mkIf nvidia_x11.useProfiles {source = "${nvidia_x11.bin}/share/nvidia/nvidia-application-profiles-rc";};
-          # 'nvidia_x11' installs it's files to /run/opengl-driver/...
-          "egl/egl_external_platform.d".source = "/run/opengl-driver/share/egl/egl_external_platform.d/";
-        };
-        hardware.opengl = {
-          extraPackages = [
-            pkgs.nvidia-vaapi-driver
-          ];
-          extraPackages32 = [
-            pkgs.pkgsi686Linux.nvidia-vaapi-driver
-          ];
-        };
-        environment.systemPackages =
-          lib.optional cfg.nvidiaSettings nvidia_x11.settings
-          ++ lib.optional cfg.nvidiaPersistenced nvidia_x11.persistenced
-          ++ lib.optional offloadCfg.enableOffloadCmd
-          (pkgs.writeShellScriptBin "nvidia-offload" ''
-            export __NV_PRIME_RENDER_OFFLOAD=1
-            export __NV_PRIME_RENDER_OFFLOAD_PROVIDER=NVIDIA-G0
-            export __GLX_VENDOR_LIBRARY_NAME=nvidia
-            export __VK_LAYER_NV_optimus=NVIDIA_only
-            exec "$@"
-          '');
-        systemd.packages = lib.optional cfg.powerManagement.enable nvidia_x11.out;
+  config =
+    let
+      igpuDriver = if pCfg.intelBusId != "" then "modesetting" else "amdgpu";
+      igpuBusId = if pCfg.intelBusId != "" then pCfg.intelBusId else pCfg.amdgpuBusId;
+    in
+    lib.mkIf (nvidia_x11 != null) (
+      lib.mkMerge [
+        # Common
+        ({
+          assertions = [
+            {
+              assertion = !(nvidiaEnabled && cfg.datacenter.enable);
+              message = "You cannot configure both X11 and Data Center drivers at the same time.";
+            }
+          ];
+          boot = {
+            blacklistedKernelModules = [
+              "nouveau"
+              "nvidiafb"
+            ];
+            # Don't add `nvidia-uvm` to `kernelModules`, because we want
+            # `nvidia-uvm` be loaded only after `udev` rules for `nvidia` kernel
+            # module are applied.
+            #
+            # Instead, we use `softdep` to lazily load `nvidia-uvm` kernel module
+            # after `nvidia` kernel module is loaded and `udev` rules are applied.
+            extraModprobeConfig = ''
+              softdep nvidia post: nvidia-uvm
+            '';
+          };
+          systemd.tmpfiles.rules = lib.optional config.virtualisation.docker.enableNvidia "L+ /run/nvidia-docker/bin - - - - ${nvidia_x11.bin}/origBin";
+          services.udev.extraRules = ''
+            # Create /dev/nvidia-uvm when the nvidia-uvm module is loaded.
+            KERNEL=="nvidia", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidiactl c 195 255'"
+            KERNEL=="nvidia", RUN+="${pkgs.runtimeShell} -c 'for i in $$(cat /proc/driver/nvidia/gpus/*/information | grep Minor | cut -d \ -f 4); do mknod -m 666 /dev/nvidia$${i} c 195 $${i}; done'"
+            KERNEL=="nvidia_modeset", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidia-modeset c 195 254'"
+            KERNEL=="nvidia_uvm", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidia-uvm c $$(grep nvidia-uvm /proc/devices | cut -d \ -f 1) 0'"
+            KERNEL=="nvidia_uvm", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidia-uvm-tools c $$(grep nvidia-uvm /proc/devices | cut -d \ -f 1) 1'"
+          '';
+          hardware.opengl = {
+            extraPackages = [ nvidia_x11.out ] ++ (lib.optional (builtins.hasAttr "libXNVCtrl" nvidia_x11.settings) nvidia_x11.settings.libXNVCtrl);
+            extraPackages32 = [ nvidia_x11.lib32 ];
+          };
+          environment.systemPackages = [ nvidia_x11.bin ];
+        })
+        # X11
+        (lib.mkIf nvidiaEnabled {
+          assertions = [
+            {
+              assertion = primeEnabled -> pCfg.intelBusId == "" || pCfg.amdgpuBusId == "";
+              message = "You cannot configure both an Intel iGPU and an AMD APU. Pick the one corresponding to your processor.";
+            }
+            {
+              assertion = offloadCfg.enableOffloadCmd -> offloadCfg.enable || reverseSyncCfg.enable;
+              message = "Offload command requires offloading or reverse prime sync to be enabled.";
+            }
+            {
+              assertion =
+                primeEnabled -> pCfg.nvidiaBusId != "" && (pCfg.intelBusId != "" || pCfg.amdgpuBusId != "");
+              message = "When NVIDIA PRIME is enabled, the GPU bus IDs must be configured.";
+            }
+            {
+              assertion = offloadCfg.enable -> lib.versionAtLeast nvidia_x11.version "435.21";
+              message = "NVIDIA PRIME render offload is currently only supported on versions >= 435.21.";
+            }
+            {
+              assertion =
+                (reverseSyncCfg.enable && pCfg.amdgpuBusId != "") -> lib.versionAtLeast nvidia_x11.version "470.0";
+              message = "NVIDIA PRIME render offload for AMD APUs is currently only supported on versions >= 470 beta.";
+            }
+            {
+              assertion = !(syncCfg.enable && offloadCfg.enable);
+              message = "PRIME Sync and Offload cannot be both enabled";
+            }
+            {
+              assertion = !(syncCfg.enable && reverseSyncCfg.enable);
+              message = "PRIME Sync and PRIME Reverse Sync cannot be both enabled";
+            }
+            {
+              assertion = !(syncCfg.enable && cfg.powerManagement.finegrained);
+              message = "Sync precludes powering down the NVIDIA GPU.";
+            }
+            {
+              assertion = cfg.powerManagement.finegrained -> offloadCfg.enable;
+              message = "Fine-grained power management requires offload to be enabled.";
+            }
+            {
+              assertion = cfg.powerManagement.enable -> lib.versionAtLeast nvidia_x11.version "430.09";
+              message = "Required files for driver based power management only exist on versions >= 430.09.";
+            }
+            {
+              assertion = cfg.open -> (cfg.package ? open && cfg.package ? firmware);
+              message = "This version of NVIDIA driver does not provide a corresponding opensource kernel driver";
+            }
+            {
+              assertion = cfg.dynamicBoost.enable -> lib.versionAtLeast nvidia_x11.version "510.39.01";
+              message = "NVIDIA's Dynamic Boost feature only exists on versions >= 510.39.01";
+            }
+          ];
+          # If Optimus/PRIME is enabled, we:
+          # - Specify the configured NVIDIA GPU bus ID in the Device section for the
+          #   "nvidia" driver.
+          # - Add the AllowEmptyInitialConfiguration option to the Screen section for the
+          #   "nvidia" driver, in order to allow the X server to start without any outputs.
+          # - Add a separate Device section for the Intel GPU, using the "modesetting"
+          #   driver and with the configured BusID.
+          # - OR add a separate Device section for the AMD APU, using the "amdgpu"
# driver and with the configures BusID.
# - Reference that Device section from the ServerLayout section as an inactive
# device.
# - Configure the display manager to run specific `xrandr` commands which will
# configure/enable displays connected to the Intel iGPU / AMD APU.
systemd.services = let # reverse sync implies offloading
nvidiaService = state: { hardware.nvidia.prime.offload.enable = lib.mkDefault reverseSyncCfg.enable;
description = "NVIDIA system ${state} actions";
path = [pkgs.kbd]; services.xserver.drivers =
serviceConfig = { lib.optional primeEnabled {
Type = "oneshot"; name = igpuDriver;
ExecStart = "${nvidia_x11.out}/bin/nvidia-sleep.sh '${state}'"; display = offloadCfg.enable;
modules = lib.optional (igpuDriver == "amdgpu") pkgs.xorg.xf86videoamdgpu;
deviceSection =
''
BusID "${igpuBusId}"
''
+ lib.optionalString (syncCfg.enable && igpuDriver != "amdgpu") ''
Option "AccelMethod" "none"
'';
}
++ lib.singleton {
name = "nvidia";
modules = [ nvidia_x11.bin ];
display = !offloadCfg.enable;
deviceSection =
''
Option "SidebandSocketPath" "/run/nvidia-xdriver/"
''
+ lib.optionalString primeEnabled ''
BusID "${pCfg.nvidiaBusId}"
''
+ lib.optionalString pCfg.allowExternalGpu ''
Option "AllowExternalGpus"
'';
screenSection =
''
Option "RandRRotation" "on"
''
+ lib.optionalString syncCfg.enable ''
Option "AllowEmptyInitialConfiguration"
''
+ lib.optionalString cfg.forceFullCompositionPipeline ''
Option "metamodes" "nvidia-auto-select +0+0 {ForceFullCompositionPipeline=On}"
Option "AllowIndirectGLXProtocol" "off"
Option "TripleBuffer" "on"
'';
}; };
before = ["systemd-${state}.service"];
requiredBy = ["systemd-${state}.service"]; services.xserver.serverLayoutSection =
lib.optionalString syncCfg.enable ''
Inactive "Device-${igpuDriver}[0]"
''
+ lib.optionalString reverseSyncCfg.enable ''
Inactive "Device-nvidia[0]"
''
+ lib.optionalString offloadCfg.enable ''
Option "AllowNVIDIAGPUScreens"
'';
services.xserver.displayManager.setupCommands =
let
gpuProviderName =
if igpuDriver == "amdgpu" then
# find the name of the provider if amdgpu
"`${lib.getExe pkgs.xorg.xrandr} --listproviders | ${lib.getExe pkgs.gnugrep} -i AMD | ${lib.getExe pkgs.gnused} -n 's/^.*name://p'`"
else
igpuDriver;
providerCmdParams =
if syncCfg.enable then "\"${gpuProviderName}\" NVIDIA-0" else "NVIDIA-G0 \"${gpuProviderName}\"";
in
lib.optionalString (syncCfg.enable || reverseSyncCfg.enable) ''
# Added by nvidia configuration module for Optimus/PRIME.
${lib.getExe pkgs.xorg.xrandr} --setprovideroutputsource ${providerCmdParams}
${lib.getExe pkgs.xorg.xrandr} --auto
'';
environment.etc = {
"nvidia/nvidia-application-profiles-rc" = lib.mkIf nvidia_x11.useProfiles {
source = "${nvidia_x11.bin}/share/nvidia/nvidia-application-profiles-rc";
};
# 'nvidia_x11' installs it's files to /run/opengl-driver/...
"egl/egl_external_platform.d".source = "/run/opengl-driver/share/egl/egl_external_platform.d/";
}; };
in
lib.mkMerge [ hardware.opengl = {
(lib.mkIf cfg.powerManagement.enable { extraPackages = [ pkgs.nvidia-vaapi-driver ];
nvidia-suspend = nvidiaService "suspend"; extraPackages32 = [ pkgs.pkgsi686Linux.nvidia-vaapi-driver ];
nvidia-hibernate = nvidiaService "hibernate"; };
nvidia-resume = environment.systemPackages =
(nvidiaService "resume") lib.optional cfg.nvidiaSettings nvidia_x11.settings
// { ++ lib.optional cfg.nvidiaPersistenced nvidia_x11.persistenced
before = []; ++ lib.optional offloadCfg.enableOffloadCmd (
after = ["systemd-suspend.service" "systemd-hibernate.service"]; pkgs.writeShellScriptBin "nvidia-offload" ''
requiredBy = ["systemd-suspend.service" "systemd-hibernate.service"]; export __NV_PRIME_RENDER_OFFLOAD=1
}; export __NV_PRIME_RENDER_OFFLOAD_PROVIDER=NVIDIA-G0
}) export __GLX_VENDOR_LIBRARY_NAME=nvidia
(lib.mkIf cfg.nvidiaPersistenced { export __VK_LAYER_NV_optimus=NVIDIA_only
"nvidia-persistenced" = { exec "$@"
description = "NVIDIA Persistence Daemon"; ''
wantedBy = ["multi-user.target"]; );
systemd.packages = lib.optional cfg.powerManagement.enable nvidia_x11.out;
systemd.services =
let
nvidiaService = state: {
description = "NVIDIA system ${state} actions";
path = [ pkgs.kbd ];
serviceConfig = { serviceConfig = {
Type = "forking"; Type = "oneshot";
Restart = "always"; ExecStart = "${nvidia_x11.out}/bin/nvidia-sleep.sh '${state}'";
PIDFile = "/var/run/nvidia-persistenced/nvidia-persistenced.pid";
ExecStart = "${lib.getExe nvidia_x11.persistenced} --verbose";
ExecStopPost = "${pkgs.coreutils}/bin/rm -rf /var/run/nvidia-persistenced";
}; };
before = [ "systemd-${state}.service" ];
requiredBy = [ "systemd-${state}.service" ];
}; };
}) in
(lib.mkIf cfg.dynamicBoost.enable { lib.mkMerge [
"nvidia-powerd" = { (lib.mkIf cfg.powerManagement.enable {
description = "nvidia-powerd service"; nvidia-suspend = nvidiaService "suspend";
path = [ nvidia-hibernate = nvidiaService "hibernate";
pkgs.util-linux # nvidia-powerd wants lscpu nvidia-resume = (nvidiaService "resume") // {
]; before = [ ];
wantedBy = ["multi-user.target"]; after = [
serviceConfig = { "systemd-suspend.service"
Type = "dbus"; "systemd-hibernate.service"
BusName = "nvidia.powerd.server"; ];
ExecStart = "${nvidia_x11.bin}/bin/nvidia-powerd"; requiredBy = [
"systemd-suspend.service"
"systemd-hibernate.service"
];
}; };
}; })
}) (lib.mkIf cfg.nvidiaPersistenced {
]; "nvidia-persistenced" = {
services.acpid.enable = true; description = "NVIDIA Persistence Daemon";
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "forking";
Restart = "always";
PIDFile = "/var/run/nvidia-persistenced/nvidia-persistenced.pid";
ExecStart = "${lib.getExe nvidia_x11.persistenced} --verbose";
ExecStopPost = "${pkgs.coreutils}/bin/rm -rf /var/run/nvidia-persistenced";
};
};
})
(lib.mkIf cfg.dynamicBoost.enable {
"nvidia-powerd" = {
description = "nvidia-powerd service";
path = [
pkgs.util-linux # nvidia-powerd wants lscpu
];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "dbus";
BusName = "nvidia.powerd.server";
ExecStart = "${nvidia_x11.bin}/bin/nvidia-powerd";
};
};
})
];
services.acpid.enable = true;
services.dbus.packages = lib.optional cfg.dynamicBoost.enable nvidia_x11.bin; services.dbus.packages = lib.optional cfg.dynamicBoost.enable nvidia_x11.bin;
hardware.firmware = lib.optional cfg.open nvidia_x11.firmware; hardware.firmware =
let
isOpen = cfg.open;
isNewUnfree = lib.versionAtLeast nvidia_x11.version "555";
in
lib.optional (isOpen || isNewUnfree) nvidia_x11.firmware;
systemd.tmpfiles.rules = [ systemd.tmpfiles.rules =
# Remove the following log message: [
# (WW) NVIDIA: Failed to bind sideband socket to # Remove the following log message:
# (WW) NVIDIA: '/var/run/nvidia-xdriver-b4f69129' Permission denied # (WW) NVIDIA: Failed to bind sideband socket to
# # (WW) NVIDIA: '/var/run/nvidia-xdriver-b4f69129' Permission denied
# https://bbs.archlinux.org/viewtopic.php?pid=1909115#p1909115 #
"d /run/nvidia-xdriver 0770 root users" # https://bbs.archlinux.org/viewtopic.php?pid=1909115#p1909115
] ++ lib.optional (nvidia_x11.persistenced != null && config.virtualisation.docker.enableNvidia) "d /run/nvidia-xdriver 0770 root users"
"L+ /run/nvidia-docker/extras/bin/nvidia-persistenced - - - - ${nvidia_x11.persistenced}/origBin/nvidia-persistenced"; ]
++ lib.optional (nvidia_x11.persistenced != null && config.virtualisation.docker.enableNvidia)
"L+ /run/nvidia-docker/extras/bin/nvidia-persistenced - - - - ${nvidia_x11.persistenced}/origBin/nvidia-persistenced";
boot = { boot = {
extraModulePackages = extraModulePackages = if cfg.open then [ nvidia_x11.open ] else [ nvidia_x11.bin ];
if cfg.open # nvidia-uvm is required by CUDA applications.
then [nvidia_x11.open] kernelModules = lib.optionals config.services.xserver.enable [
else [nvidia_x11.bin]; "nvidia"
# nvidia-uvm is required by CUDA applications. "nvidia_modeset"
kernelModules = "nvidia_drm"
lib.optionals config.services.xserver.enable ["nvidia" "nvidia_modeset" "nvidia_drm"]; ];
# If requested enable modesetting via kernel parameter. # If requested enable modesetting via kernel parameter.
kernelParams = kernelParams =
lib.optional (offloadCfg.enable || cfg.modesetting.enable) "nvidia-drm.modeset=1" lib.optional (offloadCfg.enable || cfg.modesetting.enable) "nvidia-drm.modeset=1"
++ lib.optional cfg.powerManagement.enable "nvidia.NVreg_PreserveVideoMemoryAllocations=1" ++ lib.optional cfg.powerManagement.enable "nvidia.NVreg_PreserveVideoMemoryAllocations=1"
++ lib.optional cfg.open "nvidia.NVreg_OpenRmEnableUnsupportedGpus=1" ++ lib.optional cfg.open "nvidia.NVreg_OpenRmEnableUnsupportedGpus=1"
++ lib.optional (config.boot.kernelPackages.kernel.kernelAtLeast "6.2" && !ibtSupport) "ibt=off"; ++ lib.optional (config.boot.kernelPackages.kernel.kernelAtLeast "6.2" && !ibtSupport) "ibt=off";
# enable finegrained power management # enable finegrained power management
extraModprobeConfig = lib.optionalString cfg.powerManagement.finegrained '' extraModprobeConfig = lib.optionalString cfg.powerManagement.finegrained ''
options nvidia "NVreg_DynamicPowerManagement=0x02" options nvidia "NVreg_DynamicPowerManagement=0x02"
''; '';
}; };
services.udev.extraRules = services.udev.extraRules = lib.optionalString cfg.powerManagement.finegrained (
lib.optionalString cfg.powerManagement.finegrained ( lib.optionalString (lib.versionOlder config.boot.kernelPackages.kernel.version "5.5") ''
lib.optionalString (lib.versionOlder config.boot.kernelPackages.kernel.version "5.5") '' # Remove NVIDIA USB xHCI Host Controller devices, if present
# Remove NVIDIA USB xHCI Host Controller devices, if present ACTION=="add", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x0c0330", ATTR{remove}="1"
ACTION=="add", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x0c0330", ATTR{remove}="1"
# Remove NVIDIA USB Type-C UCSI devices, if present # Remove NVIDIA USB Type-C UCSI devices, if present
ACTION=="add", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x0c8000", ATTR{remove}="1" ACTION=="add", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x0c8000", ATTR{remove}="1"
# Remove NVIDIA Audio devices, if present # Remove NVIDIA Audio devices, if present
ACTION=="add", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x040300", ATTR{remove}="1" ACTION=="add", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x040300", ATTR{remove}="1"
'' ''
+ '' + ''
# Enable runtime PM for NVIDIA VGA/3D controller devices on driver bind # Enable runtime PM for NVIDIA VGA/3D controller devices on driver bind
ACTION=="bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030000", TEST=="power/control", ATTR{power/control}="auto" ACTION=="bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030000", TEST=="power/control", ATTR{power/control}="auto"
ACTION=="bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030200", TEST=="power/control", ATTR{power/control}="auto" ACTION=="bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030200", TEST=="power/control", ATTR{power/control}="auto"
# Disable runtime PM for NVIDIA VGA/3D controller devices on driver unbind # Disable runtime PM for NVIDIA VGA/3D controller devices on driver unbind
ACTION=="unbind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030000", TEST=="power/control", ATTR{power/control}="on" ACTION=="unbind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030000", TEST=="power/control", ATTR{power/control}="on"
ACTION=="unbind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030200", TEST=="power/control", ATTR{power/control}="on" ACTION=="unbind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030200", TEST=="power/control", ATTR{power/control}="on"
'' ''
); );
}) })
# Data Center # Data Center
(lib.mkIf (cfg.datacenter.enable) { (lib.mkIf (cfg.datacenter.enable) {
boot.extraModulePackages = [ boot.extraModulePackages = [ nvidia_x11.bin ];
nvidia_x11.bin
];
systemd = { systemd = {
tmpfiles.rules = tmpfiles.rules =
lib.optional (nvidia_x11.persistenced != null && config.virtualisation.docker.enableNvidia) lib.optional (nvidia_x11.persistenced != null && config.virtualisation.docker.enableNvidia)
"L+ /run/nvidia-docker/extras/bin/nvidia-persistenced - - - - ${nvidia_x11.persistenced}/origBin/nvidia-persistenced"; "L+ /run/nvidia-docker/extras/bin/nvidia-persistenced - - - - ${nvidia_x11.persistenced}/origBin/nvidia-persistenced";
services = lib.mkMerge [ services = lib.mkMerge [
({ ({
nvidia-fabricmanager = { nvidia-fabricmanager = {
enable = true; enable = true;
description = "Start NVIDIA NVLink Management"; description = "Start NVIDIA NVLink Management";
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
unitConfig.After = [ "network-online.target" ]; unitConfig.After = [ "network-online.target" ];
unitConfig.Requires = [ "network-online.target" ]; unitConfig.Requires = [ "network-online.target" ];
serviceConfig = { serviceConfig = {
Type = "forking"; Type = "forking";
TimeoutStartSec = 240; TimeoutStartSec = 240;
ExecStart = let ExecStart =
nv-fab-conf = settingsFormat.generate "fabricmanager.conf" cfg.datacenter.settings; let
in nv-fab-conf = settingsFormat.generate "fabricmanager.conf" cfg.datacenter.settings;
in
"${lib.getExe nvidia_x11.fabricmanager} -c ${nv-fab-conf}"; "${lib.getExe nvidia_x11.fabricmanager} -c ${nv-fab-conf}";
LimitCORE="infinity"; LimitCORE = "infinity";
};
}; };
}; })
}) (lib.mkIf cfg.nvidiaPersistenced {
(lib.mkIf cfg.nvidiaPersistenced { "nvidia-persistenced" = {
"nvidia-persistenced" = { description = "NVIDIA Persistence Daemon";
description = "NVIDIA Persistence Daemon"; wantedBy = [ "multi-user.target" ];
wantedBy = ["multi-user.target"]; serviceConfig = {
serviceConfig = { Type = "forking";
Type = "forking"; Restart = "always";
Restart = "always"; PIDFile = "/var/run/nvidia-persistenced/nvidia-persistenced.pid";
PIDFile = "/var/run/nvidia-persistenced/nvidia-persistenced.pid"; ExecStart = "${lib.getExe nvidia_x11.persistenced} --verbose";
ExecStart = "${lib.getExe nvidia_x11.persistenced} --verbose"; ExecStopPost = "${pkgs.coreutils}/bin/rm -rf /var/run/nvidia-persistenced";
ExecStopPost = "${pkgs.coreutils}/bin/rm -rf /var/run/nvidia-persistenced"; };
}; };
}; })
}) ];
]; };
};
environment.systemPackages = environment.systemPackages =
lib.optional cfg.datacenter.enable nvidia_x11.fabricmanager lib.optional cfg.datacenter.enable nvidia_x11.fabricmanager
++ lib.optional cfg.nvidiaPersistenced nvidia_x11.persistenced; ++ lib.optional cfg.nvidiaPersistenced nvidia_x11.persistenced;
}) })
]); ]
);
} }
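With the offload command enabled, the module above generates the `nvidia-offload` wrapper via `pkgs.writeShellScriptBin`; a minimal sketch of the corresponding user configuration (option paths as referenced by this module through `offloadCfg`):

```nix
{
  # Enables PRIME render offload plus the `nvidia-offload` wrapper script,
  # which exports the __NV_PRIME_RENDER_OFFLOAD* variables and runs the
  # given program on the NVIDIA GPU.
  hardware.nvidia.prime.offload = {
    enable = true;
    enableOffloadCmd = true;
  };
}
```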

View file

@@ -6,7 +6,7 @@ let
 in
 {
   options.hardware.xone = {
-    enable = mkEnableOption "the xone driver for Xbox One and Xbobx Series X|S accessories";
+    enable = mkEnableOption "the xone driver for Xbox One and Xbox Series X|S accessories";
   };

   config = mkIf cfg.enable {

View file

@@ -10,7 +10,6 @@
 , mypy
 , systemd
 , fakeroot
-, util-linux

 # filesystem tools
 , dosfstools
@@ -105,7 +104,6 @@ in
   nativeBuildInputs = [
     systemd
     fakeroot
-    util-linux
   ] ++ lib.optionals (compression.enable) [
     compressionPkg
   ] ++ fileSystemTools;
@@ -148,7 +146,7 @@ in
     runHook preBuild

     echo "Building image with systemd-repart..."
-    unshare --map-root-user fakeroot systemd-repart \
+    fakeroot systemd-repart \
       ''${systemdRepartFlags[@]} \
       ${imageFileBasename}.raw \
       | tee repart-output.json

View file

@@ -23,7 +23,7 @@
   environment.systemPackages = with pkgs; [
     # Graphical text editor
-    kate
+    plasma5Packages.kate
   ];

   system.activationScripts.installerDesktop = let
@@ -40,7 +40,7 @@
     ln -sfT ${manualDesktopFile} ${desktopDir + "nixos-manual.desktop"}
     ln -sfT ${pkgs.gparted}/share/applications/gparted.desktop ${desktopDir + "gparted.desktop"}
-    ln -sfT ${pkgs.konsole}/share/applications/org.kde.konsole.desktop ${desktopDir + "org.kde.konsole.desktop"}
+    ln -sfT ${pkgs.plasma5Packages.konsole}/share/applications/org.kde.konsole.desktop ${desktopDir + "org.kde.konsole.desktop"}
     ln -sfT ${pkgs.calamares-nixos}/share/applications/io.calamares.calamares.desktop ${desktopDir + "io.calamares.calamares.desktop"}
   '';

View file

@@ -23,7 +23,7 @@
   environment.systemPackages = with pkgs; [
     # Graphical text editor
-    kate
+    plasma5Packages.kate
   ];

   system.activationScripts.installerDesktop = let
@@ -40,7 +40,7 @@
     ln -sfT ${manualDesktopFile} ${desktopDir + "nixos-manual.desktop"}
     ln -sfT ${pkgs.gparted}/share/applications/gparted.desktop ${desktopDir + "gparted.desktop"}
-    ln -sfT ${pkgs.konsole}/share/applications/org.kde.konsole.desktop ${desktopDir + "org.kde.konsole.desktop"}
+    ln -sfT ${pkgs.plasma5Packages.konsole}/share/applications/org.kde.konsole.desktop ${desktopDir + "org.kde.konsole.desktop"}
     ln -sfT ${pkgs.calamares-nixos}/share/applications/io.calamares.calamares.desktop ${desktopDir + "io.calamares.calamares.desktop"}
   '';
 }

View file

@@ -39,7 +39,8 @@ with lib;

   # !!! Hack - attributes expected by other modules.
   environment.systemPackages = [ pkgs.grub2_efi ]
-    ++ (lib.optionals (pkgs.stdenv.hostPlatform.system != "aarch64-linux") [pkgs.grub2 pkgs.syslinux]);
+    ++ (lib.optionals (lib.meta.availableOn pkgs.stdenv.hostPlatform pkgs.syslinux)
+      [pkgs.grub2 pkgs.syslinux]);

   fileSystems."/" = mkImageMediaOverride
     { fsType = "tmpfs";

View file

@@ -121,7 +121,7 @@ in
     image = {
       id = lib.mkOption {
-        type = types.nullOr (types.strMatching "^[a-z0-9._-]+$");
+        type = types.nullOr types.str;
         default = null;
         description = ''
           Image identifier.
@@ -135,7 +135,7 @@
       };

       version = lib.mkOption {
-        type = types.nullOr (types.strMatching "^[a-z0-9._-]+$");
+        type = types.nullOr types.str;
         default = null;
         description = ''
           Image version.

View file

@@ -158,6 +158,7 @@
     ./programs/bash/ls-colors.nix
     ./programs/bash/undistract-me.nix
     ./programs/bcc.nix
+    ./programs/benchexec.nix
     ./programs/browserpass.nix
     ./programs/calls.nix
     ./programs/captive-browser.nix
@@ -167,6 +168,7 @@
     ./programs/chromium.nix
     ./programs/clash-verge.nix
     ./programs/cnping.nix
+    ./programs/cpu-energy-meter.nix
     ./programs/command-not-found/command-not-found.nix
     ./programs/coolercontrol.nix
     ./programs/criu.nix
@@ -216,6 +218,7 @@
     ./programs/kbdlight.nix
     ./programs/kclock.nix
     ./programs/kdeconnect.nix
+    ./programs/ladybird.nix
     ./programs/lazygit.nix
     ./programs/kubeswitch.nix
     ./programs/less.nix
@@ -247,9 +250,9 @@
     ./programs/oblogout.nix
     ./programs/oddjobd.nix
     ./programs/openvpn3.nix
-    ./programs/pantheon-tweaks.nix
     ./programs/partition-manager.nix
     ./programs/plotinus.nix
+    ./programs/pqos-wrapper.nix
     ./programs/projecteur.nix
     ./programs/proxychains.nix
     ./programs/qdmr.nix
@@ -279,6 +282,7 @@
     ./programs/systemtap.nix
     ./programs/thefuck.nix
     ./programs/thunar.nix
+    ./programs/thunderbird.nix
     ./programs/tmux.nix
     ./programs/traceroute.nix
     ./programs/trippy.nix
@@ -290,6 +294,7 @@
     ./programs/virt-manager.nix
     ./programs/wavemon.nix
     ./programs/wayland/cardboard.nix
+    ./programs/wayland/hyprlock.nix
     ./programs/wayland/hyprland.nix
     ./programs/wayland/labwc.nix
     ./programs/wayland/river.nix
@@ -415,6 +420,7 @@
     ./services/cluster/kubernetes/scheduler.nix
     ./services/cluster/pacemaker/default.nix
     ./services/cluster/patroni/default.nix
+    ./services/cluster/rke2/default.nix
     ./services/cluster/spark/default.nix
     ./services/computing/boinc/client.nix
     ./services/computing/foldingathome/client.nix
@@ -765,6 +771,7 @@
     ./services/misc/octoprint.nix
     ./services/misc/ollama.nix
     ./services/misc/ombi.nix
+    ./services/misc/open-webui.nix
     ./services/misc/osrm.nix
     ./services/misc/owncast.nix
     ./services/misc/packagekit.nix
@@ -1106,6 +1113,7 @@
     ./services/networking/ocserv.nix
     ./services/networking/ofono.nix
     ./services/networking/oidentd.nix
+    ./services/networking/oink.nix
     ./services/networking/onedrive.nix
     ./services/networking/openconnect.nix
     ./services/networking/openvpn.nix
@@ -1322,9 +1330,11 @@
     ./services/video/unifi-video.nix
     ./services/video/v4l2-relayd.nix
     ./services/wayland/cage.nix
+    ./services/wayland/hypridle.nix
     ./services/web-apps/akkoma.nix
     ./services/web-apps/alps.nix
     ./services/web-apps/anuko-time-tracker.nix
+    ./services/web-apps/artalk.nix
     ./services/web-apps/atlassian/confluence.nix
     ./services/web-apps/atlassian/crowd.nix
     ./services/web-apps/atlassian/jira.nix
@@ -1338,6 +1348,7 @@
     ./services/web-apps/chatgpt-retrieval-plugin.nix
     ./services/web-apps/cloudlog.nix
     ./services/web-apps/code-server.nix
+    ./services/web-apps/commafeed.nix
     ./services/web-apps/convos.nix
     ./services/web-apps/crabfit.nix
     ./services/web-apps/davis.nix
@@ -1348,7 +1359,9 @@
     ./services/web-apps/dolibarr.nix
     ./services/web-apps/engelsystem.nix
     ./services/web-apps/ethercalc.nix
+    ./services/web-apps/filesender.nix
     ./services/web-apps/firefly-iii.nix
+    ./services/web-apps/flarum.nix
     ./services/web-apps/fluidd.nix
     ./services/web-apps/freshrss.nix
     ./services/web-apps/galene.nix
@@ -1392,6 +1405,7 @@
     ./services/web-apps/netbox.nix
     ./services/web-apps/nextcloud.nix
     ./services/web-apps/nextcloud-notify_push.nix
+    ./services/web-apps/nextjs-ollama-llm-ui.nix
     ./services/web-apps/nexus.nix
     ./services/web-apps/nifi.nix
     ./services/web-apps/node-red.nix
@@ -1420,6 +1434,7 @@
     ./services/web-apps/selfoss.nix
     ./services/web-apps/shiori.nix
     ./services/web-apps/silverbullet.nix
+    ./services/web-apps/simplesamlphp.nix
     ./services/web-apps/slskd.nix
     ./services/web-apps/snipe-it.nix
     ./services/web-apps/sogo.nix
@@ -1437,6 +1452,7 @@
     ./services/web-apps/zitadel.nix
     ./services/web-servers/agate.nix
     ./services/web-servers/apache-httpd/default.nix
+    ./services/web-servers/bluemap.nix
     ./services/web-servers/caddy/default.nix
     ./services/web-servers/darkhttpd.nix
     ./services/web-servers/fcgiwrap.nix

View file

@@ -120,8 +120,8 @@ in
       wantedBy = [ (if type == "services" then "multi-user.target" else if type == "timers" then "timers.target" else null) ];
     };
   };
-  mkService = lib.mkSystemd "services";
-  mkTimer = lib.mkSystemd "timers";
+  mkService = mkSystemd "services";
+  mkTimer = mkSystemd "timers";
 in
 {
   packages = [ atop (lib.mkIf cfg.netatop.enable cfg.netatop.package) ];

View file

@@ -0,0 +1,98 @@
{ lib
, pkgs
, config
, options
, ...
}:
let
cfg = config.programs.benchexec;
opt = options.programs.benchexec;
filterUsers = x:
if builtins.isString x then config.users.users ? ${x} else
if builtins.isInt x then x else
throw "filterUsers expects string (username) or int (UID)";
uid = x:
if builtins.isString x then config.users.users.${x}.uid else
if builtins.isInt x then x else
throw "uid expects string (username) or int (UID)";
in
{
options.programs.benchexec = {
enable = lib.mkEnableOption "BenchExec";
package = lib.options.mkPackageOption pkgs "benchexec" { };
users = lib.options.mkOption {
type = with lib.types; listOf (either str int);
description = ''
Users that intend to use BenchExec.
Provide usernames of users that are configured via {option}`${options.users.users}` as string,
and UIDs of "mutable users" as integers.
Control group delegation will be configured via systemd.
For more information, see <https://github.com/sosy-lab/benchexec/blob/3.18/doc/INSTALL.md#setting-up-cgroups>.
'';
default = [ ];
example = lib.literalExpression ''
[
"alice" # username of a user configured via ${options.users.users}
1007 # UID of a mutable user
]
'';
};
};
config = lib.mkIf cfg.enable {
assertions = (map
(user: {
assertion = config.users.users ? ${user};
message = ''
The user '${user}' intends to use BenchExec (via `${opt.users}`), but is not configured via `${options.users.users}`.
'';
})
(builtins.filter builtins.isString cfg.users)
) ++ (map
(id: {
assertion = config.users.mutableUsers;
message = ''
The user with UID '${id}' intends to use BenchExec (via `${opt.users}`), but mutable users are disabled via `${options.users.mutableUsers}`.
'';
})
(builtins.filter builtins.isInt cfg.users)
) ++ [
{
assertion = config.systemd.enableUnifiedCgroupHierarchy == true;
message = ''
The BenchExec module `${opt.enable}` only supports control groups 2 (`${options.systemd.enableUnifiedCgroupHierarchy} = true`).
'';
}
];
environment.systemPackages = [ cfg.package ];
# See <https://github.com/sosy-lab/benchexec/blob/3.18/doc/INSTALL.md#setting-up-cgroups>.
systemd.services = builtins.listToAttrs (map
(user: {
name = "user@${builtins.toString (uid user)}";
value = {
serviceConfig.Delegate = "yes";
overrideStrategy = "asDropin";
};
})
(builtins.filter filterUsers cfg.users));
# See <https://github.com/sosy-lab/benchexec/blob/3.18/doc/INSTALL.md#requirements>.
virtualisation.lxc.lxcfs.enable = lib.mkDefault true;
# See <https://github.com/sosy-lab/benchexec/blob/3.18/doc/INSTALL.md#requirements>.
programs = {
cpu-energy-meter.enable = lib.mkDefault true;
pqos-wrapper.enable = lib.mkDefault true;
};
# See <https://github.com/sosy-lab/benchexec/blob/3.18/doc/INSTALL.md#kernel-requirements>.
security.unprivilegedUsernsClone = true;
};
meta.maintainers = with lib.maintainers; [ lorenzleutgeb ];
}
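The new `programs.benchexec` module above can then be enabled from a system configuration; a minimal sketch, reusing the user-list shape from the option's own example (`"alice"` and `1007` are illustrative):

```nix
{
  programs.benchexec = {
    enable = true;
    users = [
      "alice" # must exist in config.users.users, or the module's assertion fires
      1007 # UID of a "mutable" user; requires users.mutableUsers = true
    ];
  };

  # The module asserts cgroups v2; this is the switch it checks.
  systemd.enableUnifiedCgroupHierarchy = true;
}
```

Per the module, this delegates cgroup control to the listed users' `user@<uid>.service` instances via systemd drop-ins.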

View file

@@ -48,9 +48,11 @@ in
     # Nvidia support
     (lib.mkIf cfg.nvidiaSupport {
-      systemd.services.coolercontrold.path = with config.boot.kernelPackages; [
-        nvidia_x11 # nvidia-smi
-        nvidia_x11.settings # nvidia-settings
+      systemd.services.coolercontrold.path = let
+        nvidiaPkg = config.hardware.nvidia.package;
+      in [
+        nvidiaPkg # nvidia-smi
+        nvidiaPkg.settings # nvidia-settings
       ];
     })
   ]);

View file

@@ -0,0 +1,27 @@
{ config
, lib
, pkgs
, ...
}: {
  options.programs.cpu-energy-meter = {
    enable = lib.mkEnableOption "CPU Energy Meter";
    package = lib.mkPackageOption pkgs "cpu-energy-meter" { };
  };

  config =
    let
      cfg = config.programs.cpu-energy-meter;
    in
    lib.mkIf cfg.enable {
      hardware.cpu.x86.msr.enable = true;

      security.wrappers.${cfg.package.meta.mainProgram} = {
        owner = "nobody";
        group = config.hardware.cpu.x86.msr.group;
        source = lib.getExe cfg.package;
        capabilities = "cap_sys_rawio=ep";
      };
    };

  meta.maintainers = with lib.maintainers; [ lorenzleutgeb ];
}
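Because the module above registers a `security.wrappers` entry keyed on the package's `meta.mainProgram`, enabling it exposes a capability-granting wrapper instead of the raw store binary. A sketch, assuming the standard NixOS wrapper directory and that `mainProgram` is `cpu-energy-meter`:

```nix
# Enabling the module is a one-liner:
{ programs.cpu-energy-meter.enable = true; }
# The tool then runs with CAP_SYS_RAWIO via the generated wrapper,
# e.g. /run/wrappers/bin/cpu-energy-meter, without requiring root.
```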


@@ -287,7 +287,7 @@ in
       (_: value: { Value = value; Status = cfg.preferencesStatus; })
       cfg.preferences);
     ExtensionSettings = builtins.listToAttrs (builtins.map
-      (lang: builtins.nameValuePair
+      (lang: lib.attrsets.nameValuePair
         "langpack-${lang}@firefox.mozilla.org"
         {
           installation_mode = "normal_installed";


@@ -8,22 +8,6 @@ let
   agentSettingsFormat = pkgs.formats.keyValue {
     mkKeyValue = lib.generators.mkKeyValueDefault { } " ";
   };
-
-  xserverCfg = config.services.xserver;
-
-  defaultPinentryFlavor =
-    if xserverCfg.desktopManager.lxqt.enable
-    || xserverCfg.desktopManager.plasma5.enable
-    || xserverCfg.desktopManager.plasma6.enable
-    || xserverCfg.desktopManager.deepin.enable then
-      "qt"
-    else if xserverCfg.desktopManager.xfce.enable then
-      "gtk2"
-    else if xserverCfg.enable || config.programs.sway.enable then
-      "gnome3"
-    else
-      "curses";
 in
 {
   imports = [


@@ -21,7 +21,6 @@
 lib.mkIf cfg.enable {
   environment.systemPackages = [
     cfg.package
-    pkgs.sshfs
   ];

   networking.firewall = rec {
     allowedTCPPortRanges = [ { from = 1714; to = 1764; } ];


@ -0,0 +1,14 @@
{ config, pkgs, lib, ... }:
let
  cfg = config.programs.ladybird;
in {
  options = {
    programs.ladybird.enable = lib.mkEnableOption "the Ladybird web browser";
  };

  config = lib.mkIf cfg.enable {
    environment.systemPackages = [ pkgs.ladybird ];
    fonts.fontDir.enable = true;
  };
}


@@ -35,6 +35,8 @@ in
     # therefore also enables this module
     enable = lib.mkEnableOption "less, a file pager";

+    package = lib.mkPackageOption pkgs "less" { };
+
     configFile = lib.mkOption {
       type = lib.types.nullOr lib.types.path;
       default = null;
@@ -110,7 +112,7 @@ in

   config = lib.mkIf cfg.enable {
-    environment.systemPackages = [ pkgs.less ];
+    environment.systemPackages = [ cfg.package ];

     environment.variables = {
       LESSKEYIN_SYSTEM = builtins.toString lessKey;


@ -1,17 +0,0 @@
{ config, lib, pkgs, ... }:

{
  meta = {
    maintainers = lib.teams.pantheon.members;
  };

  ###### interface
  options = {
    programs.pantheon-tweaks.enable = lib.mkEnableOption "Pantheon Tweaks, an unofficial system settings panel for Pantheon";
  };

  ###### implementation
  config = lib.mkIf config.programs.pantheon-tweaks.enable {
    services.xserver.desktopManager.pantheon.extraSwitchboardPlugs = [ pkgs.pantheon-tweaks ];
  };
}


@ -0,0 +1,27 @@
{ config
, lib
, pkgs
, ...
}:
let
  cfg = config.programs.pqos-wrapper;
in
{
  options.programs.pqos-wrapper = {
    enable = lib.mkEnableOption "PQoS Wrapper for BenchExec";
    package = lib.mkPackageOption pkgs "pqos-wrapper" { };
  };

  config = lib.mkIf cfg.enable {
    hardware.cpu.x86.msr.enable = true;

    security.wrappers.${cfg.package.meta.mainProgram} = {
      owner = "nobody";
      group = config.hardware.cpu.x86.msr.group;
      source = lib.getExe cfg.package;
      capabilities = "cap_sys_rawio=eip";
    };
  };

  meta.maintainers = with lib.maintainers; [ lorenzleutgeb ];
}


@@ -12,7 +12,8 @@ in
     package = lib.mkPackageOptionMD pkgs "screen" { };

     screenrc = lib.mkOption {
-      type = with lib.types; nullOr lines;
+      type = lib.types.lines;
+      default = "";
       example = ''
         defscrollback 10000
         startup_message off
@@ -22,20 +23,22 @@ in
     };
   };

-  config = {
-    # TODO: Added in 24.05, remove before 24.11
-    assertions = [
-      {
-        assertion = cfg.screenrc != null -> cfg.enable;
-        message = "`programs.screen.screenrc` has been configured, but `programs.screen.enable` is not true";
-      }
-    ];
-  } // lib.mkIf cfg.enable {
-    environment.etc.screenrc = {
-      enable = cfg.screenrc != null;
-      text = cfg.screenrc;
-    };
-    environment.systemPackages = [ cfg.package ];
-    security.pam.services.screen = {};
-  };
+  config = lib.mkMerge [
+    {
+      # TODO: Added in 24.05, remove before 24.11
+      assertions = [
+        {
+          assertion = cfg.screenrc != "" -> cfg.enable;
+          message = "`programs.screen.screenrc` has been configured, but `programs.screen.enable` is not true";
+        }
+      ];
+    }
+    (lib.mkIf cfg.enable {
+      environment.etc.screenrc = {
+        text = cfg.screenrc;
+      };
+      environment.systemPackages = [ cfg.package ];
+      security.pam.services.screen = {};
+    })
+  ];
 }


@@ -4,6 +4,8 @@ let
   cfg = config.programs.steam;
   gamescopeCfg = config.programs.gamescope;

+  extraCompatPaths = lib.makeSearchPathOutput "steamcompattool" "" cfg.extraCompatPackages;
+
   steam-gamescope = let
     exports = builtins.attrValues (builtins.mapAttrs (n: v: "export ${n}=${v}") cfg.gamescopeSession.env);
   in
@@ -42,7 +44,7 @@ in {
     '';
     apply = steam: steam.override (prev: {
       extraEnv = (lib.optionalAttrs (cfg.extraCompatPackages != [ ]) {
-        STEAM_EXTRA_COMPAT_TOOLS_PATHS = lib.makeSearchPathOutput "steamcompattool" "" cfg.extraCompatPackages;
+        STEAM_EXTRA_COMPAT_TOOLS_PATHS = extraCompatPaths;
       }) // (lib.optionalAttrs cfg.extest.enable {
         LD_PRELOAD = "${pkgs.pkgsi686Linux.extest}/lib/libextest.so";
       }) // (prev.extraEnv or {});
@@ -53,6 +55,7 @@ in {
           then [ package ] ++ extraPackages
           else [ package32 ] ++ extraPackages32;
       in prevLibs ++ additionalLibs;
+      extraPkgs = p: (cfg.extraPackages ++ lib.optionals (prev ? extraPkgs) (prev.extraPkgs p));
     } // lib.optionalAttrs (cfg.gamescopeSession.enable && gamescopeCfg.capSysNice)
     {
       buildFHSEnv = pkgs.buildFHSEnv.override {
@@ -69,6 +72,19 @@ in {
       '';
     };

+    extraPackages = lib.mkOption {
+      type = lib.types.listOf lib.types.package;
+      default = [ ];
+      example = lib.literalExpression ''
+        with pkgs; [
+          gamescope
+        ]
+      '';
+      description = ''
+        Additional packages to add to the Steam environment.
+      '';
+    };
+
     extraCompatPackages = lib.mkOption {
       type = lib.types.listOf lib.types.package;
       default = [ ];
@@ -86,6 +102,19 @@ in {
       '';
     };

+    fontPackages = lib.mkOption {
+      type = lib.types.listOf lib.types.package;
+      # `fonts.packages` is a list of paths now, filter out which are not packages
+      default = builtins.filter lib.types.package.check config.fonts.packages;
+      defaultText = lib.literalExpression "builtins.filter lib.types.package.check config.fonts.packages";
+      example = lib.literalExpression "with pkgs; [ source-han-sans ]";
+      description = ''
+        Font packages to use in Steam.
+
+        Defaults to system fonts, but could be overridden to use other fonts useful for users who would like to customize CJK fonts used in Steam. According to the [upstream issue](https://github.com/ValveSoftware/steam-for-linux/issues/10422#issuecomment-1944396010), Steam only follows the per-user fontconfig configuration.
+      '';
+    };
+
     remotePlay.openFirewall = lib.mkOption {
       type = lib.types.bool;
       default = false;
@@ -139,6 +168,11 @@ in {
       Load the extest library into Steam, to translate X11 input events to
       uinput events (e.g. for using Steam Input on Wayland)
     '';
+
+    protontricks = {
+      enable = lib.mkEnableOption "protontricks, a simple wrapper for running Winetricks commands for Proton-enabled games";
+      package = lib.mkPackageOption pkgs "protontricks" { };
+    };
   };

   config = lib.mkIf cfg.enable {
@@ -158,6 +192,8 @@ in {
       };
     };

+    programs.steam.extraPackages = cfg.fontPackages;
+
     programs.gamescope.enable = lib.mkDefault cfg.gamescopeSession.enable;
     services.displayManager.sessionPackages = lib.mkIf cfg.gamescopeSession.enable [ gamescopeSessionFile ];
@@ -169,7 +205,8 @@ in {
     environment.systemPackages = [
       cfg.package
       cfg.package.run
-    ] ++ lib.optional cfg.gamescopeSession.enable steam-gamescope;
+    ] ++ lib.optional cfg.gamescopeSession.enable steam-gamescope
+    ++ lib.optional cfg.protontricks.enable (cfg.protontricks.package.override { inherit extraCompatPaths; });

     networking.firewall = lib.mkMerge [
       (lib.mkIf (cfg.remotePlay.openFirewall || cfg.localNetworkGameTransfers.openFirewall) {
@@ -192,5 +229,5 @@ in {
     ];
   };

-  meta.maintainers = lib.teams.steam;
+  meta.maintainers = lib.teams.steam.members;
 }
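A hedged example of a host configuration using the options this Steam hunk introduces (`extraPackages`, `protontricks`, and the factored-out `extraCompatPaths` fed by `extraCompatPackages`); the compat-tool package name is illustrative:

```nix
{
  programs.steam = {
    enable = true;
    extraCompatPackages = [ pkgs.proton-ge-bin ];  # illustrative package name
    extraPackages = [ pkgs.gamescope ];            # ends up inside the Steam FHS env via extraPkgs
    protontricks.enable = true;                    # wrapped with the same compat paths
  };
}
```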


@ -0,0 +1,89 @@
{
  pkgs,
  config,
  lib,
  ...
}:
let
  cfg = config.programs.thunderbird;

  policyFormat = pkgs.formats.json { };

  policyDoc = "https://github.com/thunderbird/policy-templates";
in
{
  options.programs.thunderbird = {
    enable = lib.mkEnableOption "Thunderbird mail client";

    package = lib.mkPackageOption pkgs "thunderbird" { };

    policies = lib.mkOption {
      type = policyFormat.type;
      default = { };
      description = ''
        Group policies to install.

        See [Thunderbird's documentation](${policyDoc})
        for a list of available options.

        This can be used to install extensions declaratively! Check out the
        documentation of the `ExtensionSettings` policy for details.
      '';
    };

    preferences = lib.mkOption {
      type =
        with lib.types;
        attrsOf (oneOf [
          bool
          int
          str
        ]);
      default = { };
      description = ''
        Preferences to set from `about:config`.

        Some of these might be able to be configured more ergonomically
        using policies.
      '';
    };

    preferencesStatus = lib.mkOption {
      type = lib.types.enum [
        "default"
        "locked"
        "user"
        "clear"
      ];
      default = "locked";
      description = ''
        The status of `thunderbird.preferences`.

        `status` can assume the following values:
        - `"default"`: Preferences appear as default.
        - `"locked"`: Preferences appear as default and can't be changed.
        - `"user"`: Preferences appear as changed.
        - `"clear"`: Value has no effect. Resets to factory defaults on each startup.
      '';
    };
  };

  config = lib.mkIf cfg.enable {
    environment.systemPackages = [ cfg.package ];

    environment.etc =
      let
        policiesJSON = policyFormat.generate "thunderbird-policies.json" { inherit (cfg) policies; };
      in
      lib.mkIf (cfg.policies != { }) { "thunderbird/policies/policies.json".source = policiesJSON; };

    programs.thunderbird.policies = {
      DisableAppUpdate = true;
      Preferences = builtins.mapAttrs (_: value: {
        Value = value;
        Status = cfg.preferencesStatus;
      }) cfg.preferences;
    };
  };

  meta.maintainers = with lib.maintainers; [ nydragon ];
}
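As the `policies` description notes, extensions can be installed declaratively through the `ExtensionSettings` policy. A sketch under assumed values; the extension ID and URL are placeholders, not real add-ons:

```nix
{
  programs.thunderbird = {
    enable = true;
    preferencesStatus = "user";  # let users change the values afterwards
    preferences."privacy.donottrackheader.enabled" = true;
    # Placeholder extension ID and URL, for illustration only:
    policies.ExtensionSettings."myextension@example.com" = {
      installation_mode = "force_installed";
      install_url = "https://example.com/myextension.xpi";
    };
  };
}
```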


@@ -2,15 +2,27 @@
 let
   cfg = config.programs.virt-manager;
-in {
+in
+{
   options.programs.virt-manager = {
     enable = lib.mkEnableOption "virt-manager, an UI for managing virtual machines in libvirt";
-    package = lib.mkPackageOption pkgs "virt-manager" {};
+    package = lib.mkPackageOption pkgs "virt-manager" { };
   };

   config = lib.mkIf cfg.enable {
     environment.systemPackages = [ cfg.package ];
-    programs.dconf.enable = true;
+    programs.dconf = {
+      enable = true;
+      profiles.user.databases = [
+        {
+          settings = {
+            "org/virt-manager/virt-manager/connections" = {
+              autoconnect = [ "qemu:///system" ];
+              uris = [ "qemu:///system" ];
+            };
+          };
+        }
+      ];
+    };
   };
 }


@@ -1,46 +1,41 @@
-{ config
-, lib
-, pkgs
-, ...
-}:
+{ config, lib, pkgs, ... }:
 let
   cfg = config.programs.hyprland;
-  finalPortalPackage = cfg.portalPackage.override {
-    hyprland = cfg.finalPackage;
-  };
+
+  wayland-lib = import ./lib.nix { inherit lib; };
 in
 {
   options.programs.hyprland = {
-    enable = lib.mkEnableOption null // {
-      description = ''
-        Whether to enable Hyprland, the dynamic tiling Wayland compositor that doesn't sacrifice on its looks.
-
-        You can manually launch Hyprland by executing {command}`Hyprland` on a TTY.
-
-        A configuration file will be generated in {file}`~/.config/hypr/hyprland.conf`.
-        See <https://wiki.hyprland.org> for more information.
-      '';
-    };
+    enable = lib.mkEnableOption ''
+      Hyprland, the dynamic tiling Wayland compositor that doesn't sacrifice on its looks.
+
+      You can manually launch Hyprland by executing {command}`Hyprland` on a TTY.
+
+      A configuration file will be generated in {file}`~/.config/hypr/hyprland.conf`.
+      See <https://wiki.hyprland.org> for more information'';

-    package = lib.mkPackageOption pkgs "hyprland" { };
-
-    finalPackage = lib.mkOption {
-      type = lib.types.package;
-      readOnly = true;
-      default = cfg.package.override {
-        enableXWayland = cfg.xwayland.enable;
-      };
-      defaultText = lib.literalExpression
-        "`programs.hyprland.package` with applied configuration";
-      description = ''
-        The Hyprland package after applying configuration.
-      '';
-    };
+    package = lib.mkPackageOption pkgs "hyprland" {
+      extraDescription = ''
+        If the package is not overridable with `enableXWayland`, then the module option
+        {option}`xwayland` will have no effect.
+      '';
+    } // {
+      apply = p: wayland-lib.genFinalPackage p {
+        enableXWayland = cfg.xwayland.enable;
+      };
+    };

-    portalPackage = lib.mkPackageOption pkgs "xdg-desktop-portal-hyprland" { };
+    portalPackage = lib.mkPackageOption pkgs "xdg-desktop-portal-hyprland" {
+      extraDescription = ''
+        If the package is not overridable with `hyprland`, then the Hyprland package
+        used by the portal may differ from the one set in the module option {option}`package`.
+      '';
+    } // {
+      apply = p: wayland-lib.genFinalPackage p {
+        hyprland = cfg.package;
+      };
+    };

-    xwayland.enable = lib.mkEnableOption ("XWayland") // { default = true; };
+    xwayland.enable = lib.mkEnableOption "XWayland" // { default = true; };

     systemd.setPath.enable = lib.mkEnableOption null // {
       default = true;
@@ -53,33 +48,31 @@ in
     };
   };

-  config = lib.mkIf cfg.enable {
-    environment.systemPackages = [ cfg.finalPackage ];
-
-    fonts.enableDefaultPackages = lib.mkDefault true;
-    hardware.opengl.enable = lib.mkDefault true;
-
-    programs = {
-      dconf.enable = lib.mkDefault true;
-      xwayland.enable = lib.mkDefault cfg.xwayland.enable;
-    };
-
-    security.polkit.enable = true;
-
-    services.displayManager.sessionPackages = [ cfg.finalPackage ];
-
-    xdg.portal = {
-      enable = lib.mkDefault true;
-      extraPortals = [ finalPortalPackage ];
-      configPackages = lib.mkDefault [ cfg.finalPackage ];
-    };
-
-    systemd = lib.mkIf cfg.systemd.setPath.enable {
-      user.extraConfig = ''
-        DefaultEnvironment="PATH=$PATH:/run/current-system/sw/bin:/etc/profiles/per-user/%u/bin:/run/wrappers/bin"
-      '';
-    };
-  };
+  config = lib.mkIf cfg.enable (lib.mkMerge [
+    {
+      environment.systemPackages = [ cfg.package ];
+
+      # To make a Hyprland session available if a display manager like SDDM is enabled:
+      services.displayManager.sessionPackages = [ cfg.package ];
+
+      xdg.portal = {
+        extraPortals = [ cfg.portalPackage ];
+        configPackages = lib.mkDefault [ cfg.package ];
+      };
+
+      systemd = lib.mkIf cfg.systemd.setPath.enable {
+        user.extraConfig = ''
+          DefaultEnvironment="PATH=$PATH:/run/current-system/sw/bin:/etc/profiles/per-user/%u/bin:/run/wrappers/bin"
+        '';
+      };
+    }
+
+    (import ./wayland-session.nix {
+      inherit lib pkgs;
+      enableXWayland = cfg.xwayland.enable;
+      enableWlrPortal = false; # Hyprland has its own portal, wlr is not needed
+    })
+  ]);

   imports = [
     (lib.mkRemovedOptionModule
@@ -95,4 +88,6 @@ in
       "Nvidia patches are no longer needed"
     )
   ];
+
+  meta.maintainers = with lib.maintainers; [ fufexan ];
 }


@ -0,0 +1,25 @@
{ lib, pkgs, config, ... }:
let
  cfg = config.programs.hyprlock;
in
{
  options.programs.hyprlock = {
    enable = lib.mkEnableOption "hyprlock, Hyprland's GPU-accelerated screen locking utility";
    package = lib.mkPackageOption pkgs "hyprlock" { };
  };

  config = lib.mkIf cfg.enable {
    environment.systemPackages = [
      cfg.package
    ];

    # Hyprlock needs the hypridle systemd service to be running to detect idle time
    services.hypridle.enable = true;

    # Hyprlock needs PAM access to authenticate; otherwise it falls back to su
    security.pam.services.hyprlock = {};
  };

  meta.maintainers = with lib.maintainers; [ johnrtitor ];
}


@ -0,0 +1,12 @@
{ lib }:

{
  genFinalPackage = pkg: args:
    let
      expectedArgs = lib.naturalSort (lib.attrNames args);
      existingArgs = with lib;
        naturalSort (intersectLists expectedArgs (attrNames (functionArgs pkg.override)));
    in
    if existingArgs != expectedArgs then pkg else pkg.override args;
}
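`genFinalPackage` applies the override only when every requested argument name is accepted by `pkg.override`; otherwise it returns the package untouched, which is what lets the Wayland module options degrade gracefully on non-overridable packages. A usage sketch, mirroring how the Hyprland module in this diff calls it:

```nix
let
  wayland-lib = import ./lib.nix { inherit lib; };
in
# Evaluates to pkgs.hyprland.override { enableXWayland = false; } when
# hyprland's override function accepts `enableXWayland`; otherwise it
# evaluates to pkgs.hyprland unchanged.
wayland-lib.genFinalPackage pkgs.hyprland { enableXWayland = false; }
```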


@@ -1,37 +1,40 @@
-{
-  config,
-  pkgs,
-  lib,
-  ...
-}:
+{ config, lib, pkgs, ... }:
 let
   cfg = config.programs.river;
-in {
+
+  wayland-lib = import ./lib.nix { inherit lib; };
+in
+{
   options.programs.river = {
     enable = lib.mkEnableOption "river, a dynamic tiling Wayland compositor";

     package = lib.mkPackageOption pkgs "river" {
       nullable = true;
       extraDescription = ''
+        If the package is not overridable with `xwaylandSupport`, then the module option
+        {option}`xwayland` will have no effect.
+
         Set to `null` to not add any River package to your path.
         This should be done if you want to use the Home Manager River module to install River.
       '';
+    } // {
+      apply = p: if p == null then null else
+        wayland-lib.genFinalPackage p {
+          xwaylandSupport = cfg.xwayland.enable;
+        };
     };

+    xwayland.enable = lib.mkEnableOption "XWayland" // { default = true; };
+
     extraPackages = lib.mkOption {
       type = with lib.types; listOf package;
-      default = with pkgs; [
-        swaylock
-        foot
-        dmenu
-      ];
+      default = with pkgs; [ swaylock foot dmenu ];
       defaultText = lib.literalExpression ''
         with pkgs; [ swaylock foot dmenu ];
       '';
       example = lib.literalExpression ''
-        with pkgs; [
-          termite rofi light
-        ]
+        with pkgs; [ termite rofi light ]
       '';
       description = ''
         Extra packages to be installed system wide. See
@@ -41,19 +44,22 @@ in {
     };
   };

-  config =
-    lib.mkIf cfg.enable (lib.mkMerge [
-      {
-        environment.systemPackages = lib.optional (cfg.package != null) cfg.package ++ cfg.extraPackages;
+  config = lib.mkIf cfg.enable (lib.mkMerge [
+    {
+      environment.systemPackages = lib.optional (cfg.package != null) cfg.package ++ cfg.extraPackages;

-        # To make a river session available if a display manager like SDDM is enabled:
-        services.displayManager.sessionPackages = lib.optionals (cfg.package != null) [ cfg.package ];
+      # To make a river session available if a display manager like SDDM is enabled:
+      services.displayManager.sessionPackages = lib.optional (cfg.package != null) cfg.package;

-        # https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1050913
-        xdg.portal.config.river.default = lib.mkDefault [ "wlr" "gtk" ];
-      }
-      (import ./wayland-session.nix { inherit lib pkgs; })
-    ]);
+      # https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1050913
+      xdg.portal.config.river.default = lib.mkDefault [ "wlr" "gtk" ];
+    }
+
+    (import ./wayland-session.nix {
+      inherit lib pkgs;
+      enableXWayland = cfg.xwayland.enable;
+    })
+  ]);

   meta.maintainers = with lib.maintainers; [ GaetanLepage ];
 }


@@ -1,52 +1,11 @@
-{ config, pkgs, lib, ... }:
+{ config, lib, pkgs, ... }:

 let
   cfg = config.programs.sway;

-  wrapperOptions = lib.types.submodule {
-    options =
-      let
-        mkWrapperFeature = default: description: lib.mkOption {
-          type = lib.types.bool;
-          inherit default;
-          example = !default;
-          description = "Whether to make use of the ${description}";
-        };
-      in {
-        base = mkWrapperFeature true ''
-          base wrapper to execute extra session commands and prepend a
-          dbus-run-session to the sway command.
-        '';
-        gtk = mkWrapperFeature false ''
-          wrapGAppsHook wrapper to execute sway with required environment
-          variables for GTK applications.
-        '';
-      };
-  };
-
-  genFinalPackage = pkg:
-    let
-      expectedArgs = lib.naturalSort [
-        "extraSessionCommands"
-        "extraOptions"
-        "withBaseWrapper"
-        "withGtkWrapper"
-        "isNixOS"
-      ];
-      existedArgs = with lib;
-        naturalSort
-        (intersectLists expectedArgs (attrNames (functionArgs pkg.override)));
-    in if existedArgs != expectedArgs then
-      pkg
-    else
-      pkg.override {
-        extraSessionCommands = cfg.extraSessionCommands;
-        extraOptions = cfg.extraOptions;
-        withBaseWrapper = cfg.wrapperFeatures.base;
-        withGtkWrapper = cfg.wrapperFeatures.gtk;
-        isNixOS = true;
-      };
-in {
+  wayland-lib = import ./lib.nix { inherit lib; };
+in
+{
   options.programs.sway = {
     enable = lib.mkEnableOption ''
       Sway, the i3-compatible tiling Wayland compositor. You can manually launch
@@ -55,28 +14,36 @@ in {
       <https://github.com/swaywm/sway/wiki> and
       "man 5 sway" for more information'';

-    package = lib.mkOption {
-      type = with lib.types; nullOr package;
-      default = pkgs.sway;
-      apply = p: if p == null then null else genFinalPackage p;
-      defaultText = lib.literalExpression "pkgs.sway";
-      description = ''
-        Sway package to use. If the package does not contain the override arguments
-        `extraSessionCommands`, `extraOptions`, `withBaseWrapper`, `withGtkWrapper`,
-        `isNixOS`, then the module options {option}`wrapperFeatures`,
-        {option}`wrapperFeatures` and {option}`wrapperFeatures` will have no effect.
-        Set to `null` to not add any Sway package to your path. This should be done if
-        you want to use the Home Manager Sway module to install Sway.
-      '';
+    package = lib.mkPackageOption pkgs "sway" {
+      nullable = true;
+      extraDescription = ''
+        If the package is not overridable with `extraSessionCommands`, `extraOptions`,
+        `withBaseWrapper`, `withGtkWrapper`, `enableXWayland` and `isNixOS`,
+        then the module options {option}`wrapperFeatures`, {option}`extraSessionCommands`,
+        {option}`extraOptions` and {option}`xwayland` will have no effect.
+
+        Set to `null` to not add any Sway package to your path.
+        This should be done if you want to use the Home Manager Sway module to install Sway.
+      '';
+    } // {
+      apply = p: if p == null then null else
+        wayland-lib.genFinalPackage p {
+          extraSessionCommands = cfg.extraSessionCommands;
+          extraOptions = cfg.extraOptions;
+          withBaseWrapper = cfg.wrapperFeatures.base;
+          withGtkWrapper = cfg.wrapperFeatures.gtk;
+          enableXWayland = cfg.xwayland.enable;
+          isNixOS = true;
+        };
     };

-    wrapperFeatures = lib.mkOption {
-      type = wrapperOptions;
-      default = { };
-      example = { gtk = true; };
-      description = ''
-        Attribute set of features to enable in the wrapper.
-      '';
+    wrapperFeatures = {
+      base = lib.mkEnableOption ''
+        the base wrapper to execute extra session commands and prepend a
+        dbus-run-session to the sway command'' // { default = true; };
+      gtk = lib.mkEnableOption ''
+        the wrapGAppsHook wrapper to execute sway with required environment
+        variables for GTK applications'';
     };

     extraSessionCommands = lib.mkOption {
@@ -114,19 +81,16 @@ in {
       '';
     };

+    xwayland.enable = lib.mkEnableOption "XWayland" // { default = true; };
+
     extraPackages = lib.mkOption {
       type = with lib.types; listOf package;
-      default = with pkgs; [
-        swaylock swayidle foot dmenu wmenu
-      ];
+      default = with pkgs; [ swaylock swayidle foot dmenu wmenu ];
       defaultText = lib.literalExpression ''
         with pkgs; [ swaylock swayidle foot dmenu wmenu ];
       '';
       example = lib.literalExpression ''
-        with pkgs; [
-          i3status i3status-rust
-          termite rofi light
-        ]
+        with pkgs; [ i3status i3status-rust termite rofi light ]
       '';
       description = ''
         Extra packages to be installed system wide. See
@@ -135,46 +99,50 @@ in {
       for a list of useful software.
       '';
     };
   };

-  config = lib.mkIf cfg.enable
-    (lib.mkMerge [
-      {
-        assertions = [
-          {
-            assertion = cfg.extraSessionCommands != "" -> cfg.wrapperFeatures.base;
-            message = ''
-              The extraSessionCommands for Sway will not be run if
-              wrapperFeatures.base is disabled.
-            '';
-          }
-        ];
+  config = lib.mkIf cfg.enable (lib.mkMerge [
+    {
+      assertions = [
+        {
+          assertion = cfg.extraSessionCommands != "" -> cfg.wrapperFeatures.base;
+          message = ''
+            The extraSessionCommands for Sway will not be run if wrapperFeatures.base is disabled.
+          '';
+        }
+      ];

-        environment = {
-          systemPackages = lib.optional (cfg.package != null) cfg.package ++ cfg.extraPackages;
-          # Needed for the default wallpaper:
-          pathsToLink = lib.optionals (cfg.package != null) [ "/share/backgrounds/sway" ];
-          etc = {
-            "sway/config.d/nixos.conf".source = pkgs.writeText "nixos.conf" ''
-              # Import the most important environment variables into the D-Bus and systemd
-              # user environments (e.g. required for screen sharing and Pinentry prompts):
-              exec dbus-update-activation-environment --systemd DISPLAY WAYLAND_DISPLAY SWAYSOCK XDG_CURRENT_DESKTOP
-            '';
-          } // lib.optionalAttrs (cfg.package != null) {
-            "sway/config".source = lib.mkOptionDefault "${cfg.package}/etc/sway/config";
-          };
-        };
+      environment = {
+        systemPackages = lib.optional (cfg.package != null) cfg.package ++ cfg.extraPackages;

-        programs.gnupg.agent.pinentryPackage = lib.mkDefault pkgs.pinentry-gnome3;
+        # Needed for the default wallpaper:
+        pathsToLink = lib.optional (cfg.package != null) "/share/backgrounds/sway";

-        # https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1050913
-        xdg.portal.config.sway.default = lib.mkDefault [ "wlr" "gtk" ];
+        etc = {
+          "sway/config.d/nixos.conf".source = pkgs.writeText "nixos.conf" ''
+            # Import the most important environment variables into the D-Bus and systemd
+            # user environments (e.g. required for screen sharing and Pinentry prompts):
+            exec dbus-update-activation-environment --systemd DISPLAY WAYLAND_DISPLAY SWAYSOCK XDG_CURRENT_DESKTOP
+          '';
+        } // lib.optionalAttrs (cfg.package != null) {
+          "sway/config".source = lib.mkOptionDefault "${cfg.package}/etc/sway/config";
+        };
+      };

-        # To make a Sway session available if a display manager like SDDM is enabled:
-        services.displayManager.sessionPackages = lib.optionals (cfg.package != null) [ cfg.package ];
-      }
-      (import ./wayland-session.nix { inherit lib pkgs; })
-    ]);
+      programs.gnupg.agent.pinentryPackage = lib.mkDefault pkgs.pinentry-gnome3;
+
+      # To make a Sway session available if a display manager like SDDM is enabled:
+      services.displayManager.sessionPackages = lib.optional (cfg.package != null) cfg.package;
+
+      # https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1050913
+      xdg.portal.config.sway.default = lib.mkDefault [ "wlr" "gtk" ];
+    }
+
+    (import ./wayland-session.nix {
+      inherit lib pkgs;
+      enableXWayland = cfg.xwayland.enable;
+    })
+  ]);

   meta.maintainers = with lib.maintainers; [ primeos colemickens ];
 }


@@ -1,23 +1,27 @@
-{ lib, pkgs, ... }:
+{
+  lib,
+  pkgs,
+  enableXWayland ? true,
+  enableWlrPortal ? true,
+}:
 {
   security = {
     polkit.enable = true;
     pam.services.swaylock = {};
   };

   hardware.opengl.enable = lib.mkDefault true;
   fonts.enableDefaultPackages = lib.mkDefault true;

   programs = {
     dconf.enable = lib.mkDefault true;
-    xwayland.enable = lib.mkDefault true;
+    xwayland.enable = lib.mkDefault enableXWayland;
   };

-  xdg.portal = {
-    enable = lib.mkDefault true;
-
-    extraPortals = [
-      # For screen sharing
-      pkgs.xdg-desktop-portal-wlr
-    ];
-  };
+  xdg.portal.wlr.enable = enableWlrPortal;
+
+  # Window manager only sessions (unlike DEs) don't handle XDG
+  # autostart files, so force them to run the service
+  services.xserver.desktopManager.runXdgAutostartIfNone = lib.mkDefault true;
 }


@@ -5,7 +5,7 @@ let
   settingsFormat = pkgs.formats.toml { };

-  names = [ "yazi" "theme" "keymap" ];
+  files = [ "yazi" "theme" "keymap" ];
 in
 {
   options.programs.yazi = {
@@ -15,7 +15,7 @@ in
     settings = lib.mkOption {
       type = with lib.types; submodule {
-        options = lib.listToAttrs (map
+        options = (lib.listToAttrs (map
           (name: lib.nameValuePair name (lib.mkOption {
             inherit (settingsFormat) type;
             default = { };
@@ -25,26 +25,65 @@ in
             See https://yazi-rs.github.io/docs/configuration/${name}/ for documentation.
           '';
         }))
-        names);
+        files));
       };
       default = { };
       description = ''
         Configuration included in `$YAZI_CONFIG_HOME`.
       '';
     };
+
+    initLua = lib.mkOption {
+      type = with lib.types; nullOr path;
+      default = null;
+      description = ''
+        The init.lua for Yazi itself.
+      '';
+      example = lib.literalExpression "./init.lua";
+    };
+
+    plugins = lib.mkOption {
+      type = with lib.types; attrsOf (oneOf [ path package ]);
+      default = { };
+      description = ''
+        Lua plugins.
+        See https://yazi-rs.github.io/docs/plugins/overview/ for documentation.
+      '';
+      example = lib.literalExpression ''
+        {
+          foo = ./foo;
+          bar = pkgs.bar;
+        }
+      '';
+    };
+
+    flavors = lib.mkOption {
+      type = with lib.types; attrsOf (oneOf [ path package ]);
+      default = { };
+      description = ''
+        Pre-made themes.
+        See https://yazi-rs.github.io/docs/flavors/overview/ for documentation.
+      '';
+      example = lib.literalExpression ''
+        {
+          foo = ./foo;
+          bar = pkgs.bar;
+        }
+      '';
+    };
   };

   config = lib.mkIf cfg.enable {
-    environment = {
-      systemPackages = [ cfg.package ];
-      variables.YAZI_CONFIG_HOME = "/etc/yazi/";
-      etc = lib.attrsets.mergeAttrsList (map
-        (name: lib.optionalAttrs (cfg.settings.${name} != { }) {
-          "yazi/${name}.toml".source = settingsFormat.generate "${name}.toml" cfg.settings.${name};
-        })
-        names);
-    };
+    environment.systemPackages = [
+      (cfg.package.override {
+        inherit (cfg) settings initLua plugins flavors;
+      })
+    ];
   };

   meta = {
     maintainers = with lib.maintainers; [ linsui ];
   };
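With the override-based wiring above, the configuration is baked into the yazi package itself instead of being linked under `/etc/yazi`. A sketch using assumed local paths for the init file, a plugin, and a flavor:

```nix
{
  programs.yazi = {
    enable = true;
    settings.yazi.manager.show_hidden = true;          # ends up in yazi.toml
    initLua = ./init.lua;                              # assumed local file
    plugins.smart-enter = ./plugins/smart-enter.yazi;  # illustrative path
    flavors.my-flavor = ./flavors/my-flavor.yazi;      # illustrative path
  };
}
```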


@@ -40,12 +40,16 @@ in
   (mkRemovedOptionModule [ "networking" "vpnc" ] "Use environment.etc.\"vpnc/service.conf\" instead.")
   (mkRemovedOptionModule [ "networking" "wicd" ] "The corresponding package was removed from nixpkgs.")
   (mkRemovedOptionModule [ "programs" "gnome-documents" ] "The corresponding package was removed from nixpkgs.")
+  (mkRemovedOptionModule [ "programs" "pantheon-tweaks" ] ''
+    pantheon-tweaks is no longer a switchboard plugin but an independent app,
+    adding the package to environment.systemPackages is sufficient.
+  '')
   (mkRemovedOptionModule [ "programs" "tilp2" ] "The corresponding package was removed from nixpkgs.")
   (mkRemovedOptionModule [ "programs" "way-cooler" ] ("way-cooler is abandoned by its author: " +
     "https://way-cooler.org/blog/2020/01/09/way-cooler-post-mortem.html"))
   (mkRemovedOptionModule [ "security" "hideProcessInformation" ] ''
     The hidepid module was removed, since the underlying machinery
     is broken when using cgroups-v2.
   '')
   (mkRemovedOptionModule [ "services" "baget" "enable" ] "The baget module was removed due to the upstream package being unmaintained.")
   (mkRemovedOptionModule [ "services" "beegfs" ] "The BeeGFS module has been removed")
Some files were not shown because too many files have changed in this diff.