Project import generated by Copybara.

GitOrigin-RevId: c777cdf5c564015d5f63b09cc93bef4178b19b01
This commit is contained in:
Default email 2022-04-27 11:35:20 +02:00
parent 8d1ae0fce1
commit 22017988c6
3512 changed files with 126578 additions and 134860 deletions

View file

@ -55,6 +55,11 @@ trim_trailing_whitespace = unset
[*.lock]
indent_size = unset
# trailing whitespace is an actual syntax element of classic Markdown/
# CommonMark to enforce a line break
[*.md]
trim_trailing_whitespace = unset
[eggs.nix]
trim_trailing_whitespace = unset

View file

@ -41,7 +41,8 @@
/pkgs/build-support/cc-wrapper @Ericson2314
/pkgs/build-support/bintools-wrapper @Ericson2314
/pkgs/build-support/setup-hooks @Ericson2314
/pkgs/build-support/setup-hooks/auto-patchelf.sh @aszlig
/pkgs/build-support/setup-hooks/auto-patchelf.sh @layus
/pkgs/build-support/setup-hooks/auto-patchelf.py @layus
# Nixpkgs build-support
/pkgs/build-support/writers @lassulus @Profpatsch
@ -284,3 +285,7 @@
/nixos/modules/services/misc/matrix-conduit.nix @piegamesde
/nixos/tests/matrix-appservice-irc.nix @piegamesde
/nixos/tests/matrix-conduit.nix @piegamesde
# Dotnet
/pkgs/build-support/dotnet @IvarWithoutBones
/pkgs/development/compilers/dotnet @IvarWithoutBones

View file

@ -25,14 +25,15 @@ jobs:
git commit -m "${{ steps.setup.outputs.title }}" providers.json
popd
- name: create PR
uses: peter-evans/create-pull-request@v3
uses: peter-evans/create-pull-request@v4
with:
body: |
Automatic update of terraform providers.
Automatic update by [update-terraform-providers](https://github.com/NixOS/nixpkgs/blob/master/.github/workflows/update-terraform-providers.yml) action.
Created by [update-terraform-providers](https://github.com/NixOS/nixpkgs/blob/master/.github/workflows/update-terraform-providers.yml) action.
Check that all providers build with `@ofborg build terraform-full`
Check that all providers build with:
```
@ofborg build terraform-full
```
branch: terraform-providers-update
delete-branch: false
labels: "2.status: work-in-progress"

View file

@ -6,11 +6,11 @@ When using Nix, you will frequently need to download source code and other files
Because fixed output derivations are _identified_ by their hash, a common mistake is to update a fetcher's URL or a version parameter, without updating the hash. **This will cause the old contents to be used.** So remember to always invalidate the hash argument.
For those who develop and maintain fetchers, a similar problem arises with changes to the implementation of a fetcher. These may cause a fixed output derivation to fail, but won't normally be caught by tests because the supposed output is already in the store or cache. For the purpose of testing, you can use a trick that is embodied by the [`invalidateFetcherByDrvHash`](#sec-pkgs-invalidateFetcherByDrvHash) function. It uses the derivation `name` to create a unique output path per fetcher implementation, defeating the caching precisely where it would be harmful.
For those who develop and maintain fetchers, a similar problem arises with changes to the implementation of a fetcher. These may cause a fixed output derivation to fail, but won't normally be caught by tests because the supposed output is already in the store or cache. For the purpose of testing, you can use a trick that is embodied by the [`invalidateFetcherByDrvHash`](#tester-invalidateFetcherByDrvHash) function. It uses the derivation `name` to create a unique output path per fetcher implementation, defeating the caching precisely where it would be harmful.
## `fetchurl` and `fetchzip` {#fetchurl}
Two basic fetchers are `fetchurl` and `fetchzip`. Both of these have two required arguments, a URL and a hash. The hash is typically `sha256`, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use `sha256`. This hash will be used by Nix to identify your source. A typical usage of fetchurl is provided below.
Two basic fetchers are `fetchurl` and `fetchzip`. Both of these have two required arguments, a URL and a hash. The hash is typically `sha256`, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use `sha256`. This hash will be used by Nix to identify your source. A typical usage of `fetchurl` is provided below.
```nix
{ stdenv, fetchurl }:
@ -24,9 +24,9 @@ stdenv.mkDerivation {
}
```
The main difference between `fetchurl` and `fetchzip` is in how they store the contents. `fetchurl` will store the unaltered contents of the URL within the Nix store. `fetchzip` on the other hand will decompress the archive for you, making files and directories directly accessible in the future. `fetchzip` can only be used with archives. Despite the name, `fetchzip` is not limited to .zip files and can also be used with any tarball.
The main difference between `fetchurl` and `fetchzip` is in how they store the contents. `fetchurl` will store the unaltered contents of the URL within the Nix store. `fetchzip`, on the other hand, will decompress the archive for you, making files and directories directly accessible in the future. `fetchzip` can only be used with archives. Despite the name, `fetchzip` is not limited to .zip files and can also be used with any tarball.
`fetchpatch` works very similarly to `fetchurl` with the same arguments expected. It expects patch files as a source and performs normalization on them before computing the checksum. For example it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time.
`fetchpatch` works very similarly to `fetchurl` with the same arguments expected. It expects patch files as a source and performs normalization on them before computing the checksum. For example, it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time.
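As an illustration, here is a hedged sketch of pulling an upstream fix into a package; the URLs are hypothetical and the hashes are placeholders:
```nix
{ lib, stdenv, fetchurl, fetchpatch }:

stdenv.mkDerivation rec {
  pname = "example";
  version = "1.0";

  src = fetchurl {
    url = "https://example.org/example-${version}.tar.gz";
    sha256 = lib.fakeSha256; # placeholder, replace with the real hash
  };

  patches = [
    # fetchpatch normalizes the patch before hashing, so the checksum stays
    # stable even if the hosting service adds volatile metadata.
    (fetchpatch {
      url = "https://example.org/example/commit/abc123.patch";
      sha256 = lib.fakeSha256; # placeholder, replace with the real hash
    })
  ];
}
```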
Most other fetchers return a directory rather than a single file.
@ -38,9 +38,9 @@ Used with Subversion. Expects `url` to a Subversion directory, `rev`, and `sha25
Used with Git. Expects `url` to a Git repo, `rev`, and `sha256`. `rev` in this case can be the full Git commit id (SHA1 hash) or a tag name like `refs/tags/v1.0`.
Additionally the following optional arguments can be given: `fetchSubmodules = true` makes `fetchgit` also fetch the submodules of a repository. If `deepClone` is set to true, the entire repository is cloned as opposing to just creating a shallow clone. `deepClone = true` also implies `leaveDotGit = true` which means that the `.git` directory of the clone won't be removed after checkout.
Additionally, the following optional arguments can be given: `fetchSubmodules = true` makes `fetchgit` also fetch the submodules of a repository. If `deepClone` is set to true, the entire repository is cloned, as opposed to just creating a shallow clone. `deepClone = true` also implies `leaveDotGit = true` which means that the `.git` directory of the clone won't be removed after checkout.
If only parts of the repository are needed, `sparseCheckout` can be used. This will prevent git from fetching unnecessary blobs from server, see [git sparse-checkout](https://git-scm.com/docs/git-sparse-checkout) and [git clone --filter](https://git-scm.com/docs/git-clone#Documentation/git-clone.txt---filterltfilter-specgt) for more infomation:
If only parts of the repository are needed, `sparseCheckout` can be used. This will prevent git from fetching unnecessary blobs from the server; see [git sparse-checkout](https://git-scm.com/docs/git-sparse-checkout) and [git clone --filter](https://git-scm.com/docs/git-clone#Documentation/git-clone.txt---filterltfilter-specgt) for more information:
```nix
{ stdenv, fetchgit }:
@ -78,17 +78,17 @@ A number of fetcher functions wrap part of `fetchurl` and `fetchzip`. They are m
## `fetchFromGitHub` {#fetchfromgithub}
`fetchFromGitHub` expects four arguments. `owner` is a string corresponding to the GitHub user or organization that controls this repository. `repo` corresponds to the name of the software repository. These are located at the top of every GitHub HTML page as `owner`/`repo`. `rev` corresponds to the Git commit hash or tag (e.g `v1.0`) that will be downloaded from Git. Finally, `sha256` corresponds to the hash of the extracted directory. Again, other hash algorithms are also available but `sha256` is currently preferred.
`fetchFromGitHub` expects four arguments. `owner` is a string corresponding to the GitHub user or organization that controls this repository. `repo` corresponds to the name of the software repository. These are located at the top of every GitHub HTML page as `owner`/`repo`. `rev` corresponds to the Git commit hash or tag (e.g., `v1.0`) that will be downloaded from Git. Finally, `sha256` corresponds to the hash of the extracted directory. Again, other hash algorithms are also available, but `sha256` is currently preferred.
`fetchFromGitHub` uses `fetchzip` to download the source archive generated by GitHub for the specified revision. If `leaveDotGit`, `deepClone` or `fetchSubmodules` are set to `true`, `fetchFromGitHub` will use `fetchgit` instead. Refer to its section for documentation of these options.
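A minimal sketch of a typical call, analogous to the `fetchurl` example above; the owner, repository and hash are illustrative placeholders:
```nix
{ lib, stdenv, fetchFromGitHub }:

stdenv.mkDerivation rec {
  pname = "example";
  version = "1.0";

  src = fetchFromGitHub {
    owner = "some-owner";    # GitHub user or organization
    repo = "example";        # repository name
    rev = "v${version}";     # commit hash or tag to download
    sha256 = lib.fakeSha256; # placeholder, replace with the hash of the extracted directory
  };
}
```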
## `fetchFromGitLab` {#fetchfromgitlab}
This is used with GitLab repositories. The arguments expected are very similar to fetchFromGitHub above.
This is used with GitLab repositories. The arguments expected are very similar to `fetchFromGitHub` above.
## `fetchFromGitiles` {#fetchfromgitiles}
This is used with Gitiles repositories. The arguments expected are similar to fetchgit.
This is used with Gitiles repositories. The arguments expected are similar to `fetchgit`.
## `fetchFromBitbucket` {#fetchfrombitbucket}
@ -96,11 +96,11 @@ This is used with BitBucket repositories. The arguments expected are very simila
## `fetchFromSavannah` {#fetchfromsavannah}
This is used with Savannah repositories. The arguments expected are very similar to fetchFromGitHub above.
This is used with Savannah repositories. The arguments expected are very similar to `fetchFromGitHub` above.
## `fetchFromRepoOrCz` {#fetchfromrepoorcz}
This is used with repo.or.cz repositories. The arguments expected are very similar to fetchFromGitHub above.
This is used with repo.or.cz repositories. The arguments expected are very similar to `fetchFromGitHub` above.
## `fetchFromSourcehut` {#fetchfromsourcehut}
@ -111,4 +111,4 @@ or "hg"), `domain` and `fetchSubmodules`.
If `fetchSubmodules` is `true`, `fetchFromSourcehut` uses `fetchgit`
or `fetchhg` with `fetchSubmodules` or `fetchSubrepos` set to `true`,
respectively. Otherwise the fetcher uses `fetchzip`.
respectively. Otherwise, the fetcher uses `fetchzip`.

View file

@ -58,7 +58,7 @@ After the new layer has been created, its closure (to which `contents`, `config`
At the end of the process, only one new single layer will be produced and added to the resulting image.
The resulting repository will only list the single image `image/tag`. In the case of [the `buildImage` example](#ex-dockerTools-buildImage) it would be `redis/latest`.
The resulting repository will only list the single image `image/tag`. In the case of [the `buildImage` example](#ex-dockerTools-buildImage), it would be `redis/latest`.
It is possible to inspect the arguments with which an image was built using its `buildArgs` attribute.
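For example, a minimal sketch (assuming `<nixpkgs>` is importable) that reads one of those arguments back:
```nix
with import <nixpkgs> { };

let
  image = dockerTools.buildImage {
    name = "hello";
    config.Cmd = [ "${hello}/bin/hello" ];
  };
in
# buildArgs echoes back the attribute set the image was built from,
# so this evaluates to "hello".
image.buildArgs.name
```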
@ -87,7 +87,7 @@ pkgs.dockerTools.buildImage {
}
```
and now the Docker CLI will display a reasonable date and sort the images as expected:
Now the Docker CLI will display a reasonable date and sort the images as expected:
```ShellSession
$ docker images
@ -95,7 +95,7 @@ REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest de2bf4786de6 About a minute ago 25.2MB
```
however, the produced images will not be binary reproducible.
However, the produced images will not be binary reproducible.
## buildLayeredImage {#ssec-pkgs-dockerTools-buildLayeredImage}
@ -119,13 +119,13 @@ Create a Docker image with many of the store paths being on their own layer to i
`contents` _optional_
: Top level paths in the container. Either a single derivation, or a list of derivations.
: Top-level paths in the container. Either a single derivation, or a list of derivations.
*Default:* `[]`
`config` _optional_
: Run-time configuration of the container. A full list of the options are available at in the [ Docker Image Specification v1.2.0 ](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
: Run-time configuration of the container. A full list of the options is available in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
*Default:* `{}`
@ -195,9 +195,9 @@ pkgs.dockerTools.buildLayeredImage {
Increasing the `maxLayers` increases the number of layers which have a chance to be shared between different images.
Modern Docker installations support up to 128 layers, however older versions support as few as 42.
Modern Docker installations support up to 128 layers, but older versions support as few as 42.
If the produced image will not be extended by other Docker builds, it is safe to set `maxLayers` to `128`. However it will be impossible to extend the image further.
If the produced image will not be extended by other Docker builds, it is safe to set `maxLayers` to `128`. However, it will be impossible to extend the image further.
The first (`maxLayers-2`) most "popular" paths will have their own individual layers, then layer \#`maxLayers-1` will contain all the remaining "unpopular" paths, and finally layer \#`maxLayers` will contain the Image configuration.
@ -213,7 +213,7 @@ The image produced by running the output script can be piped directly into `dock
$(nix-build) | docker load
```
Alternatively, the image be piped via `gzip` into `skopeo`, e.g. to copy it into a registry:
Alternatively, the image can be piped via `gzip` into `skopeo`, e.g., to copy it into a registry:
```ShellSession
$(nix-build) | gzip --fast | skopeo copy docker-archive:/dev/stdin docker://some_docker_registry/myimage:tag

View file

@ -1,10 +1,10 @@
# pkgs.ociTools {#sec-pkgs-ociTools}
`pkgs.ociTools` is a set of functions for creating containers according to the [OCI container specification v1.0.0](https://github.com/opencontainers/runtime-spec). Beyond that it makes no assumptions about the container runner you choose to use to run the created container.
`pkgs.ociTools` is a set of functions for creating containers according to the [OCI container specification v1.0.0](https://github.com/opencontainers/runtime-spec). Beyond that, it makes no assumptions about the container runner you choose to use to run the created container.
## buildContainer {#ssec-pkgs-ociTools-buildContainer}
This function creates a simple OCI container that runs a single command inside of it. An OCI container consists of a `config.json` and a rootfs directory.The nix store of the container will contain all referenced dependencies of the given command.
This function creates a simple OCI container that runs a single command inside of it. An OCI container consists of a `config.json` and a rootfs directory. The nix store of the container will contain all referenced dependencies of the given command.
The parameters of `buildContainer` with an example value are described below:
@ -30,7 +30,7 @@ buildContainer {
}
```
- `args` specifies a set of arguments to run inside the container. This is the only required argument for `buildContainer`. All referenced packages inside the derivation will be made available inside the container
- `args` specifies a set of arguments to run inside the container. This is the only required argument for `buildContainer`. All referenced packages inside the derivation will be made available inside the container.
- `mounts` specifies additional mount points chosen by the user. By default, only a minimal set of necessary filesystems are mounted into the container (e.g., procfs, cgroupfs).

View file

@ -33,7 +33,7 @@ in snapTools.makeSnap {
## Build a Graphical Snap {#ssec-pkgs-snapTools-build-a-snap-firefox}
Graphical programs require many more integrations with the host. This example uses Firefox as an example, because it is one of the most complicated programs we could package.
Graphical programs require many more integrations with the host. This example uses Firefox because it is one of the most complicated programs we could package.
``` {#ex-snapTools-buildSnap-firefox .nix}
let

View file

@ -4,13 +4,13 @@ The [Citrix Workspace App](https://www.citrix.com/products/workspace-app/) is a
## Basic usage {#sec-citrix-base}
The tarball archive needs to be downloaded manually as the license agreements of the vendor for [Citrix Workspace](https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html) needs to be accepted first. Then run `nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz`. With the archive available in the store the package can be built and installed with Nix.
The tarball archive needs to be downloaded manually, as the license agreements of the vendor for [Citrix Workspace](https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html) need to be accepted first. Then run `nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz`. With the archive available in the store, the package can be built and installed with Nix.
## Citrix Selfservice {#sec-citrix-selfservice}
## Citrix Self-service {#sec-citrix-selfservice}
The [selfservice](https://support.citrix.com/article/CTX200337) is an application managing Citrix desktops and applications. Please note that this feature only works with at least citrix_workspace_20_06_0 and later versions.
The [self-service](https://support.citrix.com/article/CTX200337) is an application managing Citrix desktops and applications. Please note that this feature only works with `citrix_workspace_20_06_0` and later versions.
In order to set this up, you first have to [download the `.cr` file from the Netscaler Gateway](https://its.uiowa.edu/support/article/102186). After that you can configure the `selfservice` like this:
In order to set this up, you first have to [download the `.cr` file from the Netscaler Gateway](https://its.uiowa.edu/support/article/102186). After that, you can configure the `selfservice` like this:
```ShellSession
$ storebrowse -C ~/Downloads/receiverconfig.cr
@ -19,7 +19,7 @@ $ selfservice
## Custom certificates {#sec-citrix-custom-certs}
The `Citrix Workspace App` in `nixpkgs` trusts several certificates [from the Mozilla database](https://curl.haxx.se/docs/caextract.html) by default. However several companies using Citrix might require their own corporate certificate. On distros with imperative packaging these certs can be stored easily in [`$ICAROOT`](https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/), however this directory is a store path in `nixpkgs`. In order to work around this issue the package provides a simple mechanism to add custom certificates without rebuilding the entire package using `symlinkJoin`:
The `Citrix Workspace App` in `nixpkgs` trusts several certificates [from the Mozilla database](https://curl.haxx.se/docs/caextract.html) by default. However, several companies using Citrix might require their own corporate certificate. On distros with imperative packaging, these certs can be stored easily in [`$ICAROOT`](https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/); however, this directory is a store path in `nixpkgs`. In order to work around this issue, the package provides a simple mechanism to add custom certificates without rebuilding the entire package using `symlinkJoin`:
```nix
with import <nixpkgs> { config.allowUnfree = true; };

View file

@ -8,9 +8,9 @@ Nixpkgs provides a number of packages that will install Eclipse in its various f
$ nix-env -f '<nixpkgs>' -qaP -A eclipses --description
```
Once an Eclipse variant is installed it can be run using the `eclipse` command, as expected. From within Eclipse it is then possible to install plugins in the usual manner by either manually specifying an Eclipse update site or by installing the Marketplace Client plugin and using it to discover and install other plugins. This installation method provides an Eclipse installation that closely resemble a manually installed Eclipse.
Once an Eclipse variant is installed, it can be run using the `eclipse` command, as expected. From within Eclipse, it is then possible to install plugins in the usual manner by either manually specifying an Eclipse update site or by installing the Marketplace Client plugin and using it to discover and install other plugins. This installation method provides an Eclipse installation that closely resembles a manually installed Eclipse.
If you prefer to install plugins in a more declarative manner then Nixpkgs also offer a number of Eclipse plugins that can be installed in an _Eclipse environment_. This type of environment is created using the function `eclipseWithPlugins` found inside the `nixpkgs.eclipses` attribute set. This function takes as argument `{ eclipse, plugins ? [], jvmArgs ? [] }` where `eclipse` is a one of the Eclipse packages described above, `plugins` is a list of plugin derivations, and `jvmArgs` is a list of arguments given to the JVM running the Eclipse. For example, say you wish to install the latest Eclipse Platform with the popular Eclipse Color Theme plugin and also allow Eclipse to use more RAM. You could then add
If you prefer to install plugins in a more declarative manner, then Nixpkgs also offers a number of Eclipse plugins that can be installed in an _Eclipse environment_. This type of environment is created using the function `eclipseWithPlugins` found inside the `nixpkgs.eclipses` attribute set. This function takes as argument `{ eclipse, plugins ? [], jvmArgs ? [] }` where `eclipse` is one of the Eclipse packages described above, `plugins` is a list of plugin derivations, and `jvmArgs` is a list of arguments given to the JVM running the Eclipse. For example, say you wish to install the latest Eclipse Platform with the popular Eclipse Color Theme plugin and also allow Eclipse to use more RAM. You could then add:
```nix
packageOverrides = pkgs: {
@ -22,15 +22,15 @@ packageOverrides = pkgs: {
}
```
to your Nixpkgs configuration (`~/.config/nixpkgs/config.nix`) and install it by running `nix-env -f '<nixpkgs>' -iA myEclipse` and afterward run Eclipse as usual. It is possible to find out which plugins are available for installation using `eclipseWithPlugins` by running
to your Nixpkgs configuration (`~/.config/nixpkgs/config.nix`) and install it by running `nix-env -f '<nixpkgs>' -iA myEclipse` and afterward run Eclipse as usual. It is possible to find out which plugins are available for installation using `eclipseWithPlugins` by running:
```ShellSession
$ nix-env -f '<nixpkgs>' -qaP -A eclipses.plugins --description
```
If there is a need to install plugins that are not available in Nixpkgs then it may be possible to define these plugins outside Nixpkgs using the `buildEclipseUpdateSite` and `buildEclipsePlugin` functions found in the `nixpkgs.eclipses.plugins` attribute set. Use the `buildEclipseUpdateSite` function to install a plugin distributed as an Eclipse update site. This function takes `{ name, src }` as argument where `src` indicates the Eclipse update site archive. All Eclipse features and plugins within the downloaded update site will be installed. When an update site archive is not available then the `buildEclipsePlugin` function can be used to install a plugin that consists of a pair of feature and plugin JARs. This function takes an argument `{ name, srcFeature, srcPlugin }` where `srcFeature` and `srcPlugin` are the feature and plugin JARs, respectively.
If there is a need to install plugins that are not available in Nixpkgs, it may be possible to define these plugins outside Nixpkgs using the `buildEclipseUpdateSite` and `buildEclipsePlugin` functions found in the `nixpkgs.eclipses.plugins` attribute set. Use the `buildEclipseUpdateSite` function to install a plugin distributed as an Eclipse update site. This function takes `{ name, src }` as argument, where `src` indicates the Eclipse update site archive. All Eclipse features and plugins within the downloaded update site will be installed. When an update site archive is not available, the `buildEclipsePlugin` function can be used to install a plugin that consists of a pair of feature and plugin JARs. This function takes an argument `{ name, srcFeature, srcPlugin }` where `srcFeature` and `srcPlugin` are the feature and plugin JARs, respectively.
Expanding the previous example with two plugins using the above functions we have
Expanding the previous example with two plugins using the above functions, we have:
```nix
packageOverrides = pkgs: {

View file

@ -1,6 +1,6 @@
# Elm {#sec-elm}
To start a development environment do
To start a development environment, run:
```ShellSession
nix-shell -p elmPackages.elm elmPackages.elm-format

View file

@ -20,7 +20,7 @@ The Emacs package comes with some extra helpers to make it easier to configure.
}
```
You can install it like any other packages via `nix-env -iA myEmacs`. However, this will only install those packages. It will not `configure` them for us. To do this, we need to provide a configuration file. Luckily, it is possible to do this from within Nix! By modifying the above example, we can make Emacs load a custom config file. The key is to create a package that provide a `default.el` file in `/share/emacs/site-start/`. Emacs knows to load this file automatically when it starts.
You can install it like any other package via `nix-env -iA myEmacs`. However, this will only install those packages. It will not `configure` them for us. To do this, we need to provide a configuration file. Luckily, it is possible to do this from within Nix! By modifying the above example, we can make Emacs load a custom config file. The key is to create a package that provides a `default.el` file in `/share/emacs/site-start/`. Emacs knows to load this file automatically when it starts.
```nix
{
@ -101,9 +101,9 @@ You can install it like any other packages via `nix-env -iA myEmacs`. However, t
}
```
This provides a fairly full Emacs start file. It will load in addition to the user's presonal config. You can always disable it by passing `-q` to the Emacs command.
This provides a fairly full Emacs start file. It will load in addition to the user's personal config. You can always disable it by passing `-q` to the Emacs command.
Sometimes `emacs.pkgs.withPackages` is not enough, as this package set has some priorities imposed on packages (with the lowest priority assigned to Melpa Unstable, and the highest for packages manually defined in `pkgs/top-level/emacs-packages.nix`). But you can't control this priorities when some package is installed as a dependency. You can override it on per-package-basis, providing all the required dependencies manually - but it's tedious and there is always a possibility that an unwanted dependency will sneak in through some other package. To completely override such a package you can use `overrideScope'`.
Sometimes `emacs.pkgs.withPackages` is not enough, as this package set has some priorities imposed on packages (with the lowest priority assigned to Melpa Unstable, and the highest for packages manually defined in `pkgs/top-level/emacs-packages.nix`). But you can't control these priorities when some package is installed as a dependency. You can override it on a per-package basis, providing all the required dependencies manually, but it's tedious and there is always a possibility that an unwanted dependency will sneak in through some other package. To completely override such a package, you can use `overrideScope'`.
```nix
overrides = self: super: rec {

View file

@ -1,10 +1,10 @@
# /etc files {#etc}
Certain calls in glibc require access to runtime files found in /etc such as `/etc/protocols` or `/etc/services` -- [getprotobyname](https://linux.die.net/man/3/getprotobyname) is one such function.
Certain calls in glibc require access to runtime files found in `/etc` such as `/etc/protocols` or `/etc/services` -- [getprotobyname](https://linux.die.net/man/3/getprotobyname) is one such function.
On non-NixOS distributions these files are typically provided by packages (i.e. [netbase](https://packages.debian.org/sid/netbase)) if not already pre-installed in your distribution. This can cause non-reproducibility for code if they rely on these files being present.
On non-NixOS distributions, these files are typically provided by packages (e.g., [netbase](https://packages.debian.org/sid/netbase)) if not already pre-installed in your distribution. This can cause non-reproducibility for code that relies on these files being present.
If [iana-etc](https://hydra.nixos.org/job/nixos/trunk-combined/nixpkgs.iana-etc.x86_64-linux) is part of your _buildInputs_ then it will set the environment varaibles `NIX_ETC_PROTOCOLS` and `NIX_ETC_SERVICES` to the corresponding files in the package through a _setup-hook_.
If [iana-etc](https://hydra.nixos.org/job/nixos/trunk-combined/nixpkgs.iana-etc.x86_64-linux) is part of your `buildInputs`, then it will set the environment variables `NIX_ETC_PROTOCOLS` and `NIX_ETC_SERVICES` to the corresponding files in the package through a setup hook.
```bash
@ -15,4 +15,4 @@ NIX_ETC_SERVICES=/nix/store/aj866hr8fad8flnggwdhrldm0g799ccz-iana-etc-20210225/e
NIX_ETC_PROTOCOLS=/nix/store/aj866hr8fad8flnggwdhrldm0g799ccz-iana-etc-20210225/etc/protocols
```
Nixpkg's version of [glibc](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/glibc/default.nix) has been patched to check for the existence of these environment variables. If the environment variable are *not set*, then it will attempt to find the files at the default location within _/etc_.
Nixpkgs' version of [glibc](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/glibc/default.nix) has been patched to check for the existence of these environment variables. If the environment variables are *not* set, then it will attempt to find the files at the default location within `/etc`.
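As a minimal sketch (the package name and source are illustrative), adding `iana-etc` to `buildInputs` is all that is needed for the setup hook to run:
```nix
{ stdenv, iana-etc }:

stdenv.mkDerivation {
  pname = "example";
  version = "0.1";
  src = ./.;

  # The iana-etc setup hook exports NIX_ETC_PROTOCOLS and NIX_ETC_SERVICES
  # in the build environment, so glibc calls like getprotobyname keep working.
  buildInputs = [ iana-etc ];
}
```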

View file

@ -2,7 +2,7 @@
## Build wrapped Firefox with extensions and policies {#build-wrapped-firefox-with-extensions-and-policies}
The `wrapFirefox` function allows to pass policies, preferences and extension that are available to Firefox. With the help of `fetchFirefoxAddon` this allows build a Firefox version that already comes with addons pre-installed:
The `wrapFirefox` function allows you to pass policies, preferences and extensions that are available to Firefox. With the help of `fetchFirefoxAddon`, this allows building a Firefox version that already comes with add-ons pre-installed:
```nix
{
@ -40,13 +40,12 @@ The `wrapFirefox` function allows to pass policies, preferences and extension th
}
```
If `nixExtensions != null` then all manually installed addons will be uninstalled from your browser profile.
To view available enterprise policies visit [enterprise policies](https://github.com/mozilla/policy-templates#enterprisepoliciesenabled)
or type into the Firefox url bar: `about:policies#documentation`.
Nix installed addons do not have a valid signature, which is why signature verification is disabled. This does not compromise security because downloaded addons are checksumed and manual addons can't be installed. Also make sure that the `name` field of fetchFirefoxAddon is unique. If you remove an addon from the nixExtensions array, rebuild and start Firefox the removed addon will be completly removed with all of its settings.
If `nixExtensions != null`, then all manually installed add-ons will be uninstalled from your browser profile.
To view available enterprise policies, visit [enterprise policies](https://github.com/mozilla/policy-templates#enterprisepoliciesenabled)
or type into the Firefox URL bar: `about:policies#documentation`.
Nix-installed add-ons do not have a valid signature, which is why signature verification is disabled. This does not compromise security because downloaded add-ons are checksummed and manual add-ons can't be installed. Also, make sure that the `name` field of `fetchFirefoxAddon` is unique. If you remove an add-on from the `nixExtensions` array, rebuild and start Firefox: the removed add-on will be completely removed with all of its settings.
## Troubleshooting {#sec-firefox-troubleshooting}
If addons are marked as broken or the signature is invalid, make sure you have Firefox ESR installed. Normal Firefox does not provide the ability anymore to disable signature verification for addons thus nix addons get disabled by the normal Firefox binary.
If addons do not appear installed although they have been defined in your nix configuration file reset the local addon state of your Firefox profile by clicking `help -> restart with addons disabled -> restart -> refresh firefox`. This can happen if you switch from manual addon mode to nix addon mode and then back to manual mode and then again to nix addon mode.
If add-ons are marked as broken or the signature is invalid, make sure you have Firefox ESR installed. Normal Firefox no longer provides the ability to disable signature verification for add-ons, so Nix add-ons get disabled by the normal Firefox binary.
If add-ons do not appear installed despite being defined in your Nix configuration file, reset the local add-on state of your Firefox profile by clicking `Help -> More Troubleshooting Information -> Refresh Firefox`. This can happen if you switch from manual add-on mode to Nix add-on mode and then back to manual mode and then again to Nix add-on mode.

View file

@ -36,7 +36,7 @@ using `buildFishPlugin` and running unit tests with the `fishtape` test runner.
## Fish wrapper {#sec-fish-wrapper}
The `wrapFish` package is a wrapper around Fish which can be used to create
Fish shells initialised with some plugins as well as completions, configuration
Fish shells initialized with some plugins as well as completions, configuration
snippets and functions sourced from the given paths. This provides a convenient
way to test Fish plugins and scripts without having to alter the environment.
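A minimal sketch of such a wrapper; the plugin choice is illustrative, and `pluginPkgs` is assumed to be the wrapper's argument name for plugins, so verify it against the current fish documentation:
```nix
with import <nixpkgs> { };

# Build a fish shell preloaded with two plugins from the fishPlugins set.
wrapFish {
  pluginPkgs = with fishPlugins; [ pure foreign-env ];
}
```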

View file

@ -24,10 +24,10 @@ packages on macOS:
checking for fuse.h... no
configure: error: No fuse.h found.
This happens on autoconf based projects that uses `AC_CHECK_HEADERS` or
This happens on autoconf based projects that use `AC_CHECK_HEADERS` or
`AC_CHECK_LIBS` to detect libfuse, and will occur even when the `fuse` package
is included in `buildInputs`. It happens because libfuse headers throw an error
on macOS if the `FUSE_USE_VERSION` macro is undefined. Many proejcts do define
on macOS if the `FUSE_USE_VERSION` macro is undefined. Many projects do define
`FUSE_USE_VERSION`, but only inside C source files. This results in the above
error at configure time because the configure script would attempt to compile
sample FUSE programs without defining `FUSE_USE_VERSION`.
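One possible workaround, shown here only as an assumption rather than an upstream-recommended fix, is to define the macro yourself when building the affected package:
```nix
with import <nixpkgs> { };

# `sshfs` stands in for whatever FUSE-based package fails to configure; the
# FUSE API version (29 here) is an assumption and depends on the project.
sshfs.overrideAttrs (oldAttrs: {
  NIX_CFLAGS_COMPILE =
    toString (oldAttrs.NIX_CFLAGS_COMPILE or "") + " -DFUSE_USE_VERSION=29";
})
```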

View file

@ -6,7 +6,7 @@ This package is an ibus-based completion method to speed up typing.
IBus needs to be configured accordingly to activate `typing-booster`. The configuration depends on the desktop manager in use. For detailed instructions, please refer to the [upstream docs](https://mike-fabian.github.io/ibus-typing-booster/documentation.html).
On NixOS you need to explicitly enable `ibus` with given engines before customizing your desktop to use `typing-booster`. This can be achieved using the `ibus` module:
On NixOS, you need to explicitly enable `ibus` with given engines before customizing your desktop to use `typing-booster`. This can be achieved using the `ibus` module:
```nix
{ pkgs, ... }: {
@ -19,7 +19,7 @@ On NixOS you need to explicitly enable `ibus` with given engines before customiz
## Using custom hunspell dictionaries {#sec-ibus-typing-booster-customize-hunspell}
The IBus engine is based on `hunspell` to support completion in many languages. By default the dictionaries `de-de`, `en-us`, `fr-moderne` `es-es`, `it-it`, `sv-se` and `sv-fi` are in use. To add another dictionary, the package can be overridden like this:
The IBus engine is based on `hunspell` to support completion in many languages. By default, the dictionaries `de-de`, `en-us`, `fr-moderne`, `es-es`, `it-it`, `sv-se` and `sv-fi` are in use. To add another dictionary, the package can be overridden like this:
```nix
ibus-engines.typing-booster.override { langs = [ "de-at" "en-gb" ]; }
@ -31,7 +31,7 @@ _Note: each language passed to `langs` must be an attribute name in `pkgs.hunspe
The `ibus-engines.typing-booster` package contains a program named `emoji-picker`. To display all emojis correctly, a special font such as `noto-fonts-emoji` is needed:
On NixOS it can be installed using the following expression:
On NixOS, it can be installed using the following expression:
```nix
{ pkgs, ... }: { fonts.fonts = with pkgs; [ noto-fonts-emoji ]; }

View file

@ -4,7 +4,7 @@ The Nix expressions to build the Linux kernel are in [`pkgs/os-specific/linux/ke
The function that builds the kernel has an argument `kernelPatches` which should be a list of `{name, patch, extraConfig}` attribute sets, where `name` is the name of the patch (which is included in the kernel's `meta.description` attribute), `patch` is the patch itself (possibly compressed), and `extraConfig` (optional) is a string specifying extra options to be concatenated to the kernel configuration file (`.config`).
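For example, a hypothetical entry in that list might look like this; the patch name, file and config option are illustrative only, and the exact override point may vary:
```nix
with import <nixpkgs> { };

linux.override {
  kernelPatches = [
    {
      name = "example-fix";
      patch = ./example-fix.patch;
      # Appended to the generated .config; options are written without the CONFIG_ prefix.
      extraConfig = ''
        EXAMPLE_OPTION y
      '';
    }
  ];
}
```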
The kernel derivation exports an attribute `features` specifying whether optional functionality is or isnt enabled. This is used in NixOS to implement kernel-specific behaviour. For instance, if the kernel has the `iwlwifi` feature (i.e. has built-in support for Intel wireless chipsets), then NixOS doesnt have to build the external `iwlwifi` package:
The kernel derivation exports an attribute `features` specifying whether optional functionality is or isnt enabled. This is used in NixOS to implement kernel-specific behaviour. For instance, if the kernel has the `iwlwifi` feature (i.e., has built-in support for Intel wireless chipsets), then NixOS doesnt have to build the external `iwlwifi` package:
```nix
modulesTree = [kernel]
@ -14,19 +14,19 @@ modulesTree = [kernel]
How to add a new (major) version of the Linux kernel to Nixpkgs:
1. Copy the old Nix expression (e.g. `linux-2.6.21.nix`) to the new one (e.g. `linux-2.6.22.nix`) and update it.
1. Copy the old Nix expression (e.g., `linux-2.6.21.nix`) to the new one (e.g., `linux-2.6.22.nix`) and update it.
2. Add the new kernel to the `kernels` attribute set in `linux-kernels.nix` (e.g., create an attribute `kernel_2_6_22`).
3. Now were going to update the kernel configuration. First unpack the kernel. Then for each supported platform (`i686`, `x86_64`, `uml`) do the following:
1. Make an copy from the old config (e.g. `config-2.6.21-i686-smp`) to the new one (e.g. `config-2.6.22-i686-smp`).
1. Make a copy from the old config (e.g., `config-2.6.21-i686-smp`) to the new one (e.g., `config-2.6.22-i686-smp`).
2. Copy the config file for this platform (e.g. `config-2.6.22-i686-smp`) to `.config` in the kernel source tree.
2. Copy the config file for this platform (e.g., `config-2.6.22-i686-smp`) to `.config` in the kernel source tree.
3. Run `make oldconfig ARCH={i386,x86_64,um}` and answer all questions. (For the uml configuration, also add `SHELL=bash`.) Make sure to keep the configuration consistent between platforms (i.e. dont enable some feature on `i686` and disable it on `x86_64`).
3. Run `make oldconfig ARCH={i386,x86_64,um}` and answer all questions. (For the uml configuration, also add `SHELL=bash`.) Make sure to keep the configuration consistent between platforms (i.e., dont enable some feature on `i686` and disable it on `x86_64`).
4. If needed you can also run `make menuconfig`:
4. If needed, you can also run `make menuconfig`:
```ShellSession
$ nix-env -f "<nixpkgs>" -iA ncurses
@ -34,7 +34,7 @@ How to add a new (major) version of the Linux kernel to Nixpkgs:
$ make menuconfig ARCH=arch
```
5. Copy `.config` over the new config file (e.g. `config-2.6.22-i686-smp`).
5. Copy `.config` over the new config file (e.g., `config-2.6.22-i686-smp`).
4. Test building the kernel: `nix-build -A linuxKernel.kernels.kernel_2_6_22`. If it compiles, ship it! For extra credit, try booting NixOS with it.

View file

@ -1,5 +1,5 @@
# Locales {#locales}
To allow simultaneous use of packages linked against different versions of `glibc` with different locale archive formats Nixpkgs patches `glibc` to rely on `LOCALE_ARCHIVE` environment variable.
To allow simultaneous use of packages linked against different versions of `glibc` with different locale archive formats, Nixpkgs patches `glibc` to rely on the `LOCALE_ARCHIVE` environment variable.
On non-NixOS distributions this variable is obviously not set. This can cause regressions in language support or even crashes in some Nixpkgs-provided programs. The simplest way to mitigate this problem is exporting the `LOCALE_ARCHIVE` variable pointing to `${glibcLocales}/lib/locale/locale-archive`. The drawback (and the reason this is not the default) is the relatively large (a hundred MiB) size of the full set of locales. It is possible to build a custom set of locales by overriding parameters `allLocales` and `locales` of the package.
On non-NixOS distributions, this variable is obviously not set. This can cause regressions in language support or even crashes in some Nixpkgs-provided programs. The simplest way to mitigate this problem is exporting the `LOCALE_ARCHIVE` variable pointing to `${glibcLocales}/lib/locale/locale-archive`. The drawback (and the reason this is not the default) is the relatively large (a hundred MiB) size of the full set of locales. It is possible to build a custom set of locales by overriding parameters `allLocales` and `locales` of the package.
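For example, a minimal sketch of a shell environment that sets the variable (assuming `<nixpkgs>` is importable):
```nix
with import <nixpkgs> { };

mkShell {
  packages = [ hello ];
  # Point glibc at the full Nixpkgs locale archive; see the size caveat above.
  LOCALE_ARCHIVE = "${glibcLocales}/lib/locale/locale-archive";
}
```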

View file

@ -4,8 +4,8 @@
## ETags on static files served from the Nix store {#sec-nginx-etag}
HTTP has a couple different mechanisms for caching to prevent clients from having to download the same content repeatedly if a resource has not changed since the last time it was requested. When nginx is used as a server for static files, it implements the caching mechanism based on the [`Last-Modified`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified) response header automatically; unfortunately, it works by using filesystem timestamps to determine the value of the `Last-Modified` header. This doesn't give the desired behavior when the file is in the Nix store, because all file timestamps are set to 0 (for reasons related to build reproducibility).
HTTP has a couple of different mechanisms for caching to prevent clients from having to download the same content repeatedly if a resource has not changed since the last time it was requested. When nginx is used as a server for static files, it implements the caching mechanism based on the [`Last-Modified`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified) response header automatically; unfortunately, it works by using filesystem timestamps to determine the value of the `Last-Modified` header. This doesn't give the desired behavior when the file is in the Nix store because all file timestamps are set to 0 (for reasons related to build reproducibility).
Fortunately, HTTP supports an alternative (and more effective) caching mechanism: the [`ETag`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag) response header. The value of the `ETag` header specifies some identifier for the particular content that the server is sending (e.g. a hash). When a client makes a second request for the same resource, it sends that value back in an `If-None-Match` header. If the ETag value is unchanged, then the server does not need to resend the content.
Fortunately, HTTP supports an alternative (and more effective) caching mechanism: the [`ETag`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag) response header. The value of the `ETag` header specifies some identifier for the particular content that the server is sending (e.g., a hash). When a client makes a second request for the same resource, it sends that value back in an `If-None-Match` header. If the ETag value is unchanged, then the server does not need to resend the content.
As of NixOS 19.09, the nginx package in Nixpkgs is patched such that when nginx serves a file out of `/nix/store`, the hash in the store path is used as the `ETag` header in the HTTP response, thus providing proper caching functionality. This happens automatically; you do not need to modify any configuration to get this behavior.

View file

@ -12,4 +12,4 @@ The NixOS desktop or other non-headless configurations are the primary target fo
If you are using a non-NixOS GNU/Linux/X11 desktop with free software video drivers, consider launching OpenGL-dependent programs from Nixpkgs with Nixpkgs versions of `libglvnd` and `mesa.drivers` in `LD_LIBRARY_PATH`. For Mesa drivers, the Linux kernel version doesn't have to match nixpkgs.
For proprietary video drivers you might have luck with also adding the corresponding video driver package.
For proprietary video drivers, you might have luck with also adding the corresponding video driver package.
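A minimal sketch of such a launch environment (assuming free drivers and an importable `<nixpkgs>`; the exact library set may need adjusting):
```nix
with import <nixpkgs> { };

mkShell {
  packages = [ glxinfo ];
  # Prepend the Nixpkgs OpenGL dispatch library and Mesa drivers to the loader path.
  LD_LIBRARY_PATH = lib.makeLibraryPath [ libglvnd mesa.drivers ];
}
```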

View file

@ -4,7 +4,7 @@ Some packages provide the shell integration to be more useful. But unlike other
- `fzf` : `fzf-share`
E.g. `fzf` can then used in the `.bashrc` like this:
E.g. `fzf` can then be used in the `.bashrc` like this:
```bash
source "$(fzf-share)/completion.bash"

View file

@ -2,20 +2,20 @@
## Steam in Nix {#sec-steam-nix}
Steam is distributed as a `.deb` file, for now only as an i686 package (the amd64 package only has documentation). When unpacked, it has a script called `steam` that in Ubuntu (their target distro) would go to `/usr/bin`. When run for the first time, this script copies some files to the user's home, which include another script that is the ultimate responsible for launching the steam binary, which is also in \$HOME.
Steam is distributed as a `.deb` file, for now only as an i686 package (the amd64 package only has documentation). When unpacked, it has a script called `steam` that in Ubuntu (their target distro) would go to `/usr/bin`. When run for the first time, this script copies some files to the user's home, which include another script that is ultimately responsible for launching the steam binary, which is also in `$HOME`.
Nix problems and constraints:
- We don't have `/bin/bash` and many scripts point there. Similarly for `/usr/bin/python`.
- We don't have `/bin/bash` and many scripts point there. Same thing for `/usr/bin/python`.
- We don't have the dynamic loader in `/lib`.
- The `steam.sh` script in \$HOME can not be patched, as it is checked and rewritten by steam.
- The `steam.sh` script in `$HOME` cannot be patched, as it is checked and rewritten by steam.
- The steam binary cannot be patched either, as it is also checked.
The current approach to deploy Steam in NixOS is composing an FHS-compatible chroot environment, as documented [here](http://sandervanderburg.blogspot.nl/2013/09/composing-fhs-compatible-chroot.html). This allows us to have binaries in the expected paths without disrupting the system, and to avoid patching them to work in a non-FHS environment.
## How to play {#sec-steam-play}
Use `programs.steam.enable = true;` if you want to add steam to systemPackages and also enable a few workarrounds aswell as Steam controller support or other Steam supported controllers such as the DualShock 4 or Nintendo Switch Pr.
Use `programs.steam.enable = true;` if you want to add steam to `systemPackages` and also enable a few workarounds as well as Steam controller support or other Steam supported controllers such as the DualShock 4 or Nintendo Switch Pro Controller.
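In a NixOS configuration this is a single module option, for example:
```nix
{ pkgs, ... }: {
  programs.steam.enable = true;
}
```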
## Troubleshooting {#sec-steam-troub}
@ -32,7 +32,7 @@ Use `programs.steam.enable = true;` if you want to add steam to systemPackages a
- **Using the FOSS Radeon or nouveau (nvidia) drivers**
- The `newStdcpp` parameter was removed since NixOS 17.09 and should not be needed anymore.
- Steam ships statically linked with a version of libcrypto that conflics with the one dynamically loaded by radeonsi_dri.so. If you get the error
- Steam ships statically linked with a version of `libcrypto` that conflicts with the one dynamically loaded by radeonsi_dri.so. If you get the error:
```
steam.sh: line 713: 7842 Segmentation fault (core dumped)
@ -42,13 +42,13 @@ Use `programs.steam.enable = true;` if you want to add steam to systemPackages a
- **Java**
1. There is no java in steam chrootenv by default. If you get a message like
1. There is no java in steam chrootenv by default. If you get a message like:
```
/home/foo/.local/share/Steam/SteamApps/common/towns/towns.sh: line 1: java: command not found
```
you need to add
you need to add:
```nix
steam.override { withJava = true; };
@ -56,7 +56,7 @@ Use `programs.steam.enable = true;` if you want to add steam to systemPackages a
## steam-run {#sec-steam-run}
The FHS-compatible chroot used for Steam can also be used to run other Linux games that expect a FHS environment. To use it, install the `steam-run` package and run the game with
The FHS-compatible chroot used for Steam can also be used to run other Linux games that expect an FHS environment. To use it, install the `steam-run` package and run the game with:
```
steam-run ./foo

View file

@ -4,7 +4,7 @@ Urxvt, also known as rxvt-unicode, is a highly customizable terminal emulator.
## Configuring urxvt {#sec-urxvt-conf}
In `nixpkgs`, urxvt is provided by the package `rxvt-unicode`. It can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, use an overlay or directly install an expression that overrides its configuration, such as
In `nixpkgs`, urxvt is provided by the package `rxvt-unicode`. It can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, use an overlay or directly install an expression that overrides its configuration, such as:
```nix
rxvt-unicode.override {
@ -58,14 +58,14 @@ rxvt-unicode.override {
## Packaging urxvt plugins {#sec-urxvt-pkg}
Urxvt plugins resides in `pkgs/applications/misc/rxvt-unicode-plugins`. To add a new plugin create an expression in a subdirectory and add the package to the set in `pkgs/applications/misc/rxvt-unicode-plugins/default.nix`.
Urxvt plugins reside in `pkgs/applications/misc/rxvt-unicode-plugins`. To add a new plugin, create an expression in a subdirectory and add the package to the set in `pkgs/applications/misc/rxvt-unicode-plugins/default.nix`.
A plugin can be any kind of derivation; the only requirement is that it should always install Perl scripts in `$out/lib/urxvt/perl`. Look for existing plugins for examples.
If the plugin is itself a perl package that needs to be imported from other plugins or scripts, add the following passthrough:
If the plugin is itself a Perl package that needs to be imported from other plugins or scripts, add the following passthrough:
```nix
passthru.perlPackages = [ "self" ];
```
This will make the urxvt wrapper pick up the dependency and set up the perl path accordingly.
This will make the urxvt wrapper pick up the dependency and set up the Perl path accordingly.

View file

@ -1,6 +1,6 @@
# Weechat {#sec-weechat}
# WeeChat {#sec-weechat}
Weechat can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, install an expression that overrides its configuration such as
WeeChat can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, install an expression that overrides its configuration, such as:
```nix
weechat.override {configure = {availablePlugins, ...}: {
@ -13,7 +13,7 @@ If the `configure` function returns an attrset without the `plugins` attribute,
The plugins currently available are `python`, `perl`, `ruby`, `guile`, `tcl` and `lua`.
The python and perl plugins allows the addition of extra libraries. For instance, the `inotify.py` script in `weechat-scripts` requires D-Bus or libnotify, and the `fish.py` script requires `pycrypto`. To use these scripts, use the plugin's `withPackages` attribute:
The Python and Perl plugins allow the addition of extra libraries. For instance, the `inotify.py` script in `weechat-scripts` requires D-Bus or libnotify, and the `fish.py` script requires `pycrypto`. To use these scripts, use the plugin's `withPackages` attribute:
```nix
weechat.override { configure = {availablePlugins, ...}: {
@ -49,7 +49,7 @@ weechat.override {
Further values can be added to the list of commands when running `weechat --run-command "your-commands"`.
Additionally it's possible to specify scripts to be loaded when starting `weechat`. These will be loaded before the commands from `init`:
Additionally, it's possible to specify scripts to be loaded when starting `weechat`. These will be loaded before the commands from `init`:
```nix
weechat.override {
@ -64,7 +64,7 @@ weechat.override {
}
```
In `nixpkgs` there's a subpackage which contains derivations for WeeChat scripts. Such derivations expect a `passthru.scripts` attribute which contains a list of all scripts inside the store path. Furthermore all scripts have to live in `$out/share`. An exemplary derivation looks like this:
In `nixpkgs` there's a subpackage which contains derivations for WeeChat scripts. Such derivations expect a `passthru.scripts` attribute, which contains a list of all scripts inside the store path. Furthermore, all scripts have to live in `$out/share`. An exemplary derivation looks like this:
```nix
{ stdenv, fetchurl }:

View file

@ -7,5 +7,4 @@
</para>
<xi:include href="special/fhs-environments.section.xml" />
<xi:include href="special/mkshell.section.xml" />
<xi:include href="special/invalidateFetcherByDrvHash.section.xml" />
</chapter>

View file

@ -1,31 +0,0 @@
## `invalidateFetcherByDrvHash` {#sec-pkgs-invalidateFetcherByDrvHash}
Use the derivation hash to invalidate the output via name, for testing.
Type: `(a@{ name, ... } -> Derivation) -> a -> Derivation`
Normally, fixed output derivations can and should be cached by their output
hash only, but for testing we want to re-fetch everytime the fetcher changes.
Changes to the fetcher become apparent in the drvPath, which is a hash of
how to fetch, rather than a fixed store path.
By inserting this hash into the name, we can make sure to re-run the fetcher
every time the fetcher changes.
This relies on the assumption that Nix isn't clever enough to reuse its
database of local store contents to optimize fetching.
You might notice that the "salted" name derives from the normal invocation,
not the final derivation. `invalidateFetcherByDrvHash` has to invoke the fetcher
function twice: once to get a derivation hash, and again to produce the final
fixed output derivation.
Example:
tests.fetchgit = invalidateFetcherByDrvHash fetchgit {
name = "nix-source";
url = "https://github.com/NixOS/nix";
rev = "9d9dbe6ed05854e03811c361a3380e09183f4f4a";
sha256 = "sha256-7DszvbCNTjpzGRmpIVAWXk20P0/XTrWZ79KSOGLrUWY=";
};

View file

@ -0,0 +1,82 @@
# Testers {#chap-testers}
This chapter describes several testing builders which are available in the `testers` namespace.
## `testVersion` {#tester-testVersion}
Checks that the command output contains the specified version.
Although simplistic, this test assures that the main program
can run. While there's no substitute for a real test case,
it does catch dynamic linking errors and such. It also provides
some protection against accidentally building the wrong version,
for example when using an 'old' hash in a fixed-output derivation.
Examples:
```nix
passthru.tests.version = testVersion { package = hello; };
passthru.tests.version = testVersion {
package = seaweedfs;
command = "weed version";
};
passthru.tests.version = testVersion {
package = key;
command = "KeY --help";
# Wrong '2.5' version in the code. Drop on next version.
version = "2.5";
};
```
## `testEqualDerivation` {#tester-testEqualDerivation}
Checks that two packages produce the exact same build instructions.
This can be used to make sure that a certain difference of configuration,
such as the presence of an overlay, does not cause a cache miss.
When the derivations are equal, the return value is an empty file.
Otherwise, the build log explains the difference via `nix-diff`.
Example:
```nix
testEqualDerivation
"The hello package must stay the same when enabling checks."
hello
(hello.overrideAttrs(o: { doCheck = true; }))
```
## `invalidateFetcherByDrvHash` {#tester-invalidateFetcherByDrvHash}
Use the derivation hash to invalidate the output via name, for testing.
Type: `(a@{ name, ... } -> Derivation) -> a -> Derivation`
Normally, fixed output derivations can and should be cached by their output
hash only, but for testing we want to re-fetch every time the fetcher changes.
Changes to the fetcher become apparent in the drvPath, which is a hash of
how to fetch, rather than a fixed store path.
By inserting this hash into the name, we can make sure to re-run the fetcher
every time the fetcher changes.
This relies on the assumption that Nix isn't clever enough to reuse its
database of local store contents to optimize fetching.
You might notice that the "salted" name derives from the normal invocation,
not the final derivation. `invalidateFetcherByDrvHash` has to invoke the fetcher
function twice: once to get a derivation hash, and again to produce the final
fixed output derivation.
Example:
```nix
tests.fetchgit = invalidateFetcherByDrvHash fetchgit {
name = "nix-source";
url = "https://github.com/NixOS/nix";
rev = "9d9dbe6ed05854e03811c361a3380e09183f4f4a";
sha256 = "sha256-7DszvbCNTjpzGRmpIVAWXk20P0/XTrWZ79KSOGLrUWY=";
};
```

View file

@ -35,10 +35,10 @@ This works just like `runCommand`. The only difference is that it also provides
## `runCommandLocal` {#trivial-builder-runCommandLocal}
Variant of `runCommand` that forces the derivation to be built locally, it is not substituted. This is intended for very cheap commands (<1s execution time). It saves on the network roundrip and can speed up a build.
Variant of `runCommand` that forces the derivation to be built locally, it is not substituted. This is intended for very cheap commands (<1s execution time). It saves on the network round-trip and can speed up a build.
::: {.note}
This sets [`allowSubstitutes` to `false`](https://nixos.org/nix/manual/#adv-attr-allowSubstitutes), so only use `runCommandLocal` if you are certain the user will always have a builder for the `system` of the derivation. This should be true for most trivial use cases (e.g. just copying some files to a different location or adding symlinks), because there the `system` is usually the same as `builtins.currentSystem`.
This sets [`allowSubstitutes` to `false`](https://nixos.org/nix/manual/#adv-attr-allowSubstitutes), so only use `runCommandLocal` if you are certain the user will always have a builder for the `system` of the derivation. This should be true for most trivial use cases (e.g., just copying some files to a different location or adding symlinks) because there the `system` is usually the same as `builtins.currentSystem`.
:::
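As a quick, purely illustrative sketch (the derivation name and the reference to `hello` are arbitrary examples):

```nix
runCommandLocal "hello-symlink" { } ''
  # Cheap, local-only work: create a directory and a symlink.
  mkdir -p $out/bin
  ln -s ${hello}/bin/hello $out/bin/hi
''
```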
## `writeTextFile`, `writeText`, `writeTextDir`, `writeScript`, `writeScriptBin` {#trivial-builder-writeText}
@ -219,5 +219,5 @@ produces an output path `/nix/store/<hash>-runtime-references` containing
/nix/store/<hash>-hello-2.10
```
but none of `hello`'s dependencies, because those are not referenced directly
but none of `hello`'s dependencies because those are not referenced directly
by `hi`'s output.

View file

@ -96,7 +96,7 @@ We use jbidwatcher as an example for a discontinued project here.
1. Have Nixpkgs checked out locally and up to date.
1. Create a new branch for your change, e.g. `git checkout -b jbidwatcher`
1. Remove the actual package including its directory, e.g. `rm -rf pkgs/applications/misc/jbidwatcher`
1. Remove the actual package including its directory, e.g. `git rm -rf pkgs/applications/misc/jbidwatcher`
1. Remove the package from the list of all packages (`pkgs/top-level/all-packages.nix`).
1. Add an alias for the package name in `pkgs/top-level/aliases.nix` (There is also `pkgs/applications/editors/vim/plugins/aliases.nix`. Package sets typically do not have aliases, so we can't add them there.)
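For illustration, such an alias entry might look roughly like this (the message and date are hypothetical):

```nix
# pkgs/top-level/aliases.nix
jbidwatcher = throw "jbidwatcher was discontinued"; # Added 2022-04-27
```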

10
third_party/nixpkgs/doc/hooks/index.xml vendored Normal file
View file

@ -0,0 +1,10 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="chap-hooks">
<title>Hooks reference</title>
<para>
Nixpkgs has several hook packages that augment the stdenv phases.
</para>
<xi:include href="./postgresql-test-hook.section.xml" />
</chapter>

View file

@ -0,0 +1,59 @@
# `postgresqlTestHook` {#sec-postgresqlTestHook}
This hook starts a PostgreSQL server during the `checkPhase`. Example:
```nix
{ stdenv, postgresql, postgresqlTestHook }:
stdenv.mkDerivation {
# ...
checkInputs = [
postgresql
postgresqlTestHook
];
}
```
If you use a custom `checkPhase`, remember to add the `runHook` calls:
```nix
checkPhase ''
runHook preCheck
# ... your tests
runHook postCheck
''
```
## Variables {#sec-postgresqlTestHook-variables}
The hook logic will read a number of variables and set them to a default value if unset or empty.
Exported variables:
- `PGDATA`: location of server files.
- `PGHOST`: location of UNIX domain socket directory; the default `host` in a connection string.
- `PGUSER`: user to create / log in with, default: `test_user`.
- `PGDATABASE`: database name, default: `test_db`.
Bash-only variables:
- `postgresqlTestUserOptions`: SQL options to use when creating the `$PGUSER` role, default: `LOGIN`.
- `postgresqlTestSetupSQL`: SQL commands to run as database administrator after startup, default: statements that create `$PGUSER` and `$PGDATABASE`.
- `postgresqlTestSetupCommands`: bash commands to run after database start, defaults to running `$postgresqlTestSetupSQL` as database administrator.
- `postgresqlEnableTCP`: set to `1` to enable TCP listening. Flaky; not recommended.
- `postgresqlStartCommands`: defaults to `pg_ctl start`.
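As a rough sketch, a package might override some of these directly in its derivation attributes (the values below are purely illustrative):

```nix
stdenv.mkDerivation {
  # ...
  checkInputs = [ postgresql postgresqlTestHook ];
  # Hypothetical overrides; when left unset, the hook falls back to test_user / test_db.
  PGUSER = "myapp";
  PGDATABASE = "myapp_test";
}
```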
## TCP and the Nix sandbox {#sec-postgresqlTestHook-tcp}
`postgresqlEnableTCP` relies on network sandboxing, which is not available on macOS and some custom Nix installations, resulting in flaky tests.
For this reason, it is disabled by default.
The preferred solution is to make the test suite use a UNIX domain socket connection. This is the default behavior when no `host` connection parameter is provided.
Some test suites hardcode a value for `host` though, so a patch may be required. If you can upstream the patch, you can make `host` default to the `PGHOST` environment variable when set. Otherwise, you can patch it locally to omit the `host` connection string parameter altogether.
::: {.note}
The error `libpq: failed (could not receive data from server: Connection refused` is generally an indication that the test suite is trying to connect through TCP.
:::

View file

@ -0,0 +1,49 @@
# CHICKEN {#sec-chicken}
[CHICKEN](https://call-cc.org/) is a
[R⁵RS](https://schemers.org/Documents/Standards/R5RS/HTML/)-compliant Scheme
compiler. It includes an interactive mode and a custom package format, "eggs".
## Using Eggs
Eggs described in nixpkgs are available inside the
`chickenPackages.chickenEggs` attrset. Including an egg as a build input is
done in the typical Nix fashion. For example, to include support for [SRFI
189](https://srfi.schemers.org/srfi-189/srfi-189.html) in a derivation, one
might write:
```nix
buildInputs = [
chicken
chickenPackages.chickenEggs.srfi-189
];
```
Both `chicken` and its eggs have a setup hook which configures the environment
variables `CHICKEN_INCLUDE_PATH` and `CHICKEN_REPOSITORY_PATH`.
## Updating Eggs
nixpkgs only knows about a subset of all published eggs. It uses
[egg2nix](https://github.com/the-kenny/egg2nix) to generate a
package set from a list of eggs to include.
The package set is regenerated by running the following shell commands:
```
$ nix-shell -p chickenPackages.egg2nix
$ cd pkgs/development/compilers/chicken/5/
$ egg2nix eggs.scm > eggs.nix
```
## Adding Eggs
When we run `egg2nix`, we obtain one collection of eggs with
mutually-compatible versions. This means that when we add new eggs, we may
need to update existing eggs. To keep those separate, follow the procedure for
updating eggs before including more eggs.
To include more eggs, edit `pkgs/development/compilers/chicken/5/eggs.scm`.
The first section of this file lists eggs which are required by `egg2nix`
itself; all other eggs go into the second section. After editing, follow the
procedure for updating eggs.

View file

@ -42,7 +42,21 @@ Unlike other libraries mentioned in this section, GdkPixbuf only supports a sing
### Icons {#ssec-gnome-icons}
When an application uses icons, an icon theme should be available in `XDG_DATA_DIRS` during runtime. The package for the default, icon-less [hicolor-icon-theme](https://www.freedesktop.org/wiki/Software/icon-theme/) (should be propagated by every icon theme) contains [a setup hook](#ssec-gnome-hooks-hicolor-icon-theme) that will pick up icon themes from `buildInputs` and pass it to our wrapper. Unfortunately, relying on that would mean every user has to download the theme included in the package expression no matter their preference. For that reason, we leave the installation of icon theme on the user. If you use one of the desktop environments, you probably already have an icon theme installed.
When an application uses icons, an icon theme should be available in `XDG_DATA_DIRS` during runtime. The package for the default, icon-less [hicolor-icon-theme](https://www.freedesktop.org/wiki/Software/icon-theme/) (should be propagated by every icon theme) contains [a setup hook](#ssec-gnome-hooks-hicolor-icon-theme) that will pick up icon themes from `buildInputs` and add their datadirs to the `XDG_ICON_DIRS` environment variable (this is Nixpkgs-specific, not actually an XDG standard variable). Unfortunately, relying on that would mean every user has to download the theme included in the package expression no matter their preference. For that reason, we leave the installation of icon theme on the user. If you use one of the desktop environments, you probably already have an icon theme installed.
In the rare case you need to use icons from dependencies (e.g. when an app forces an icon theme), you can use the following to pick them up:
```nix
buildInputs = [
pantheon.elementary-icon-theme
];
preFixup = ''
gappsWrapperArgs+=(
# The icon theme is hardcoded.
--prefix XDG_DATA_DIRS : "$XDG_ICON_DIRS"
)
'';
```
To avoid costly file system access when locating icons, GTK, [as well as Qt](https://woboq.com/blog/qicon-reads-gtk-icon-cache-in-qt57.html), can rely on `icon-theme.cache` files from the themes' top-level directories. These files are generated using `gtk-update-icon-cache`, which is expected to be run whenever an icon is added or removed to an icon theme (typically an application icon into `hicolor` theme) and some programs do indeed run this after icon installation. However, since packages are installed into their own prefix by Nix, this would lead to conflicts. For that reason, `gtk3` provides a [setup hook](#ssec-gnome-hooks-gtk-drop-icon-theme-cache) that will clean the file from installation. Since most applications only ship their own icon that will be loaded on start-up, it should not affect them too much. On the other hand, icon themes are much larger and more widely used so we need to cache them. Because we recommend installing icon themes globally, we will generate the cache files from all packages in a profile using a NixOS module. You can enable the cache generation using `gtk.iconCache.enable` option if your desktop environment does not already do that.
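On NixOS, that amounts to something like the following configuration snippet (a minimal sketch):

```nix
{
  # Generate icon-theme.cache for icon themes installed in the system profile.
  gtk.iconCache.enable = true;
}
```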
@ -98,7 +112,7 @@ For convenience, it also adds `dconf.lib` for a GIO module implementing a GSetti
- []{#ssec-gnome-hooks-dconf} `dconf.lib` is a dependency of `wrapGAppsHook`, which then also adds it to the `GIO_EXTRA_MODULES` variable.
- []{#ssec-gnome-hooks-hicolor-icon-theme} `hicolor-icon-theme`s setup hook will add icon themes to `XDG_ICON_DIRS` which is prepended to `XDG_DATA_DIRS` by `wrapGAppsHook`.
- []{#ssec-gnome-hooks-hicolor-icon-theme} `hicolor-icon-theme`s setup hook will add icon themes to `XDG_ICON_DIRS`.
- []{#ssec-gnome-hooks-gobject-introspection} `gobject-introspection` setup hook populates `GI_TYPELIB_PATH` variable with `lib/girepository-1.0` directories of dependencies, which is then added to wrapper by `wrapGAppsHook`. It also adds `share` directories of dependencies to `XDG_DATA_DIRS`, which is intended to promote GIR files but it also [pollutes the closures](https://github.com/NixOS/nixpkgs/issues/32790) of packages using `wrapGAppsHook`.

View file

@ -9,6 +9,7 @@
<xi:include href="android.section.xml" />
<xi:include href="beam.section.xml" />
<xi:include href="bower.section.xml" />
<xi:include href="chicken.section.xml" />
<xi:include href="coq.section.xml" />
<xi:include href="crystal.section.xml" />
<xi:include href="cuda.section.xml" />

View file

@ -25,8 +25,10 @@
<title>Builders</title>
<xi:include href="builders/fetchers.chapter.xml" />
<xi:include href="builders/trivial-builders.chapter.xml" />
<xi:include href="builders/testers.chapter.xml" />
<xi:include href="builders/special.xml" />
<xi:include href="builders/images.xml" />
<xi:include href="hooks/index.xml" />
<xi:include href="languages-frameworks/index.xml" />
<xi:include href="builders/packages/index.xml" />
</part>

View file

@ -175,6 +175,40 @@ The NixOS tests are available as `nixosTests` in parameters of derivations. For
NixOS tests run in a VM, so they are slower than regular package tests. For more information see [NixOS module tests](https://nixos.org/manual/nixos/stable/#sec-nixos-tests).
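For instance, a package could expose one of these tests (the chosen `nginx` test is only an illustrative example):

```nix
passthru.tests = {
  nixos-service = nixosTests.nginx;
};
```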
Alternatively, you can specify other derivations as tests. You can make use of
the optional parameter to inject the correct package without
relying on non-local definitions, even in the presence of `overrideAttrs`.
Here that's `finalAttrs.finalPackage`, but you could choose a different name if
`finalAttrs` already exists in your scope.
`(mypkg.overrideAttrs f).passthru.tests` will be as expected, as long as the
definition of `tests` does not rely on the original `mypkg` or overrides it in
all places.
```nix
# my-package/default.nix
{ stdenv, callPackage }:
stdenv.mkDerivation (finalAttrs: {
# ...
passthru.tests.example = callPackage ./example.nix { my-package = finalAttrs.finalPackage; };
})
```
```nix
# my-package/example.nix
{ runCommand, lib, my-package, ... }:
runCommand "my-package-test" {
nativeBuildInputs = [ my-package ];
src = lib.sources.sourcesByRegex ./. [ ".*.in" ".*.expected" ];
} ''
my-package --help
my-package <example.in >example.actual
diff -U3 --color=auto example.expected example.actual
mkdir $out
''
```
### `timeout` {#var-meta-timeout}
A timeout (in seconds) for building the derivation. If the derivation takes longer than this time to build, it can fail due to breaking the timeout. However, not all computers have the same computing power, so some builders may decide to apply a multiplicative factor to this value. When filling this value in, try to keep it approximately consistent with other values already present in `nixpkgs`.

View file

@ -317,6 +317,60 @@ The script will be usually run from the root of the Nixpkgs repository but you s
For information about how to run the updates, execute `nix-shell maintainers/scripts/update.nix`.
### Recursive attributes in `mkDerivation`
If you pass a function to `mkDerivation`, it will receive as its argument the final arguments, including the overrides when reinvoked via `overrideAttrs`. For example:
```nix
mkDerivation (finalAttrs: {
pname = "hello";
withFeature = true;
configureFlags =
lib.optionals finalAttrs.withFeature ["--with-feature"];
})
```
Note that this does not use the `rec` keyword to reuse `withFeature` in `configureFlags`.
The `rec` keyword works at the syntax level and is unaware of overriding.
Instead, the definition references `finalAttrs`, allowing users to change `withFeature`
consistently with `overrideAttrs`.
`finalAttrs` also contains the attribute `finalPackage`, which includes the output paths, etc.
Let's look at a more elaborate example to understand the differences between
various bindings:
```nix
# `pkg` is the _original_ definition (for illustration purposes)
let pkg =
mkDerivation (finalAttrs: {
# ...
# An example attribute
packages = [];
# `passthru.tests` is a commonly defined attribute.
passthru.tests.simple = f finalAttrs.finalPackage;
# An example of an attribute containing a function
passthru.appendPackages = packages':
finalAttrs.finalPackage.overrideAttrs (newSelf: super: {
packages = super.packages ++ packages';
});
# For illustration purposes; referenced as
# `(pkg.overrideAttrs(x)).finalAttrs` etc in the text below.
passthru.finalAttrs = finalAttrs;
passthru.original = pkg;
});
in pkg
```
Unlike the `pkg` binding in the above example, the `finalAttrs` parameter always references the final attributes. For instance `(pkg.overrideAttrs(x)).finalAttrs.finalPackage` is identical to `pkg.overrideAttrs(x)`, whereas `(pkg.overrideAttrs(x)).original` is the same as the original `pkg`.
See also the section about [`passthru.tests`](#var-meta-tests).
## Phases {#sec-stdenv-phases}
`stdenv.mkDerivation` sets the Nix [derivation](https://nixos.org/manual/nix/stable/expressions/derivations.html#derivations)'s builder to a script that loads the stdenv `setup.sh` bash library and calls `genericBuild`. Most packaging functions rely on this default builder.

View file

@ -39,14 +39,18 @@ The function `overrideAttrs` allows overriding the attribute set passed to a `st
Example usage:
```nix
helloWithDebug = pkgs.hello.overrideAttrs (oldAttrs: rec {
helloWithDebug = pkgs.hello.overrideAttrs (finalAttrs: previousAttrs: {
separateDebugInfo = true;
});
```
In the above example, the `separateDebugInfo` attribute is overridden to be true, thus building debug info for `helloWithDebug`, while all other attributes will be retained from the original `hello` package.
The argument `oldAttrs` is conventionally used to refer to the attr set originally passed to `stdenv.mkDerivation`.
The argument `previousAttrs` is conventionally used to refer to the attr set originally passed to `stdenv.mkDerivation`.
The argument `finalAttrs` refers to the final attributes passed to `mkDerivation`, plus the `finalPackage` attribute which is equal to the result of `mkDerivation` or subsequent `overrideAttrs` calls.
If only a one-argument function is written, the argument has the meaning of `previousAttrs`.
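For example, the override above can equivalently be written with a single argument:

```nix
helloWithDebug = pkgs.hello.overrideAttrs (previousAttrs: {
  separateDebugInfo = true;
});
```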
::: {.note}
Note that `separateDebugInfo` is processed only by the `stdenv.mkDerivation` function, not the generated, raw Nix derivation. Thus, using `overrideDerivation` will not work in this case, as it overrides only the attributes of the final derivation. It is for this reason that `overrideAttrs` should be preferred in (almost) all cases to `overrideDerivation`, i.e. to allow using `stdenv.mkDerivation` to process input arguments, as well as the fact that it is easier to use (you can use the same attribute names you see in your Nix code, instead of the ones generated (e.g. `buildInputs` vs `nativeBuildInputs`), and it involves less typing).

View file

@ -11,6 +11,9 @@ let
callLibs = file: import file { lib = self; };
in {
# interacting with flakes
flakes = callLibs ./flakes.nix;
# often used, or depending on very little
trivial = callLibs ./trivial.nix;
fixedPoints = callLibs ./fixed-points.nix;
@ -59,6 +62,7 @@ let
# linux kernel configuration
kernel = callLibs ./kernel.nix;
inherit (self.flakes) callLocklessFlake;
inherit (builtins) add addErrorContext attrNames concatLists
deepSeq elem elemAt filter genericClosure genList getAttr
hasAttr head isAttrs isBool isInt isList isString length
@ -94,7 +98,8 @@ let
concatImapStringsSep makeSearchPath makeSearchPathOutput
makeLibraryPath makeBinPath optionalString
hasInfix hasPrefix hasSuffix stringToCharacters stringAsChars escape
escapeShellArg escapeShellArgs escapeRegex escapeXML replaceChars lowerChars
escapeShellArg escapeShellArgs isValidPosixName toShellVar toShellVars
escapeRegex escapeXML replaceChars lowerChars
upperChars toLower toUpper addContextFrom splitString
removePrefix removeSuffix versionOlder versionAtLeast
getName getVersion
@ -108,7 +113,7 @@ let
makeScope makeScopeWithSplicing;
inherit (self.meta) addMetaAttrs dontDistribute setName updateName
appendToName mapDerivationAttrset setPrio lowPrio lowPrioSet hiPrio
hiPrioSet getLicenseFromSpdxId;
hiPrioSet getLicenseFromSpdxId getExe;
inherit (self.sources) pathType pathIsDirectory cleanSourceFilter
cleanSource sourceByRegex sourceFilesBySuffices
commitIdFromGitRepo cleanSourceWith pathHasContext

22
third_party/nixpkgs/lib/flakes.nix vendored Normal file
View file

@ -0,0 +1,22 @@
{ lib }:
rec {
/* imports a flake.nix without acknowledging its lock file, useful for
referencing subflakes from a parent flake. The second argument allows
specifying the inputs of this flake.
Example:
callLocklessFlake {
path = ./directoryContainingFlake;
inputs = { inherit nixpkgs; };
}
*/
callLocklessFlake = { path, inputs ? { } }:
let
self = { outPath = path; } //
((import (path + "/flake.nix")).outputs (inputs // { self = self; }));
in
self;
}

View file

@ -126,4 +126,18 @@ rec {
lib.warn "getLicenseFromSpdxId: No license matches the given SPDX ID: ${licstr}"
{ shortName = licstr; }
);
/* Get the path to the main program of a derivation with either
meta.mainProgram or pname or name
Type: getExe :: derivation -> string
Example:
getExe pkgs.hello
=> "/nix/store/g124820p9hlv4lj8qplzxw1c44dxaw1k-hello-2.12/bin/hello"
getExe pkgs.mustache-go
=> "/nix/store/am9ml4f4ywvivxnkiaqwr0hyxka1xjsf-mustache-go-1.3.0/bin/mustache"
*/
getExe = x:
"${lib.getBin x}/bin/${x.meta.mainProgram or (lib.getName x)}";
}

View file

@ -113,6 +113,10 @@ rec {
args ? {}
, # This would be remove in the future, Prefer _module.check option instead.
check ? true
# Internal variable to avoid `_key` collisions regardless
# of `extendModules`. Used in `submoduleWith`.
# Test case: lib/tests/modules, "168767"
, extensionOffset ? 0
}:
let
withWarnings = x:
@ -156,7 +160,10 @@ rec {
type = types.lazyAttrsOf types.raw;
# Only render documentation once at the root of the option tree,
# not for all individual submodules.
internal = prefix != [];
# Allow merging option decls to make this internal regardless.
${if prefix == []
then null # unset => visible
else "internal"} = true;
# TODO: Change the type of this option to a submodule with a
# freeformType, so that individual arguments can be documented
# separately
@ -338,15 +345,17 @@ rec {
modules ? [],
specialArgs ? {},
prefix ? [],
extensionOffset ? length modules,
}:
evalModules (evalModulesArgs // {
modules = regularModules ++ modules;
specialArgs = evalModulesArgs.specialArgs or {} // specialArgs;
prefix = extendArgs.prefix or evalModulesArgs.prefix;
inherit extensionOffset;
});
type = lib.types.submoduleWith {
inherit modules specialArgs;
inherit modules specialArgs extensionOffset;
};
result = withWarnings {

View file

@ -17,6 +17,7 @@ rec {
head
isInt
isList
isAttrs
isString
match
parseDrvName
@ -253,10 +254,7 @@ rec {
=> false
*/
hasInfix = infix: content:
let
drop = x: substring 1 (stringLength x) x;
in hasPrefix infix content
|| content != "" && hasInfix infix (drop content);
builtins.match ".*${escapeRegex infix}.*" "${content}" != null;
/* Convert a string to a list of characters (i.e. singleton strings).
This allows you to, e.g., map a function over each character. However,
@ -327,6 +325,65 @@ rec {
*/
escapeShellArgs = concatMapStringsSep " " escapeShellArg;
/* Test whether the given name is a valid POSIX shell variable name.
Type: string -> bool
Example:
isValidPosixName "foo_bar000"
=> true
isValidPosixName "0-bad.jpg"
=> false
*/
isValidPosixName = name: match "[a-zA-Z_][a-zA-Z0-9_]*" name != null;
/* Translate a Nix value into a shell variable declaration, with proper escaping.
Supported value types are strings (mapped to regular variables), lists of strings
(mapped to Bash-style arrays) and attribute sets of strings (mapped to Bash-style
associative arrays). Note that "strings" include string-coercible values like paths.
Strings are translated into POSIX sh-compatible code; lists and attribute sets
assume a shell that understands Bash syntax (e.g. Bash or ZSH).
Type: string -> (string | listOf string | attrsOf string) -> string
Example:
''
${toShellVar "foo" "some string"}
[[ "$foo" == "some string" ]]
''
*/
toShellVar = name: value:
lib.throwIfNot (isValidPosixName name) "toShellVar: ${name} is not a valid shell variable name" (
if isAttrs value then
"declare -A ${name}=(${
concatStringsSep " " (lib.mapAttrsToList (n: v:
"[${escapeShellArg n}]=${escapeShellArg v}"
) value)
})"
else if isList value then
"declare -a ${name}=(${escapeShellArgs value})"
else
"${name}=${escapeShellArg value}"
);
/* Translate an attribute set into corresponding shell variable declarations
using `toShellVar`.
Type: attrsOf (string | listOf string | attrsOf string) -> string
Example:
let
foo = "value";
bar = foo;
in ''
${toShellVars { inherit foo bar; }}
[[ "$foo" == "$bar" ]]
''
*/
toShellVars = vars: concatStringsSep "\n" (lib.mapAttrsToList toShellVar vars);
/* Turn a string into a Nix expression representing that string
Type: string -> string

View file

@ -74,6 +74,8 @@ in {
mips = filterDoubles predicates.isMips;
mmix = filterDoubles predicates.isMmix;
riscv = filterDoubles predicates.isRiscV;
riscv32 = filterDoubles predicates.isRiscV32;
riscv64 = filterDoubles predicates.isRiscV64;
vc4 = filterDoubles predicates.isVc4;
or1k = filterDoubles predicates.isOr1k;
m68k = filterDoubles predicates.isM68k;

View file

@ -13,6 +13,7 @@ rec {
isx86_64 = { cpu = { family = "x86"; bits = 64; }; };
isPowerPC = { cpu = cpuTypes.powerpc; };
isPower = { cpu = { family = "power"; }; };
isPower64 = { cpu = { family = "power"; bits = 64; }; };
isx86 = { cpu = { family = "x86"; }; };
isAarch32 = { cpu = { family = "arm"; bits = 32; }; };
isAarch64 = { cpu = { family = "arm"; bits = 64; }; };
@ -23,6 +24,8 @@ rec {
isMips64n64 = { cpu = { family = "mips"; bits = 64; }; abi = { abi = "64"; }; };
isMmix = { cpu = { family = "mmix"; }; };
isRiscV = { cpu = { family = "riscv"; }; };
isRiscV32 = { cpu = { family = "riscv"; bits = 32; }; };
isRiscV64 = { cpu = { family = "riscv"; bits = 64; }; };
isSparc = { cpu = { family = "sparc"; }; };
isWasm = { cpu = { family = "wasm"; }; };
isMsp430 = { cpu = { family = "msp430"; }; };

View file

@ -3,7 +3,7 @@
# targetPlatform, etc) containing at least the minimal set of attrs
# required (see types.parsedPlatform in lib/systems/parse.nix). This
# file takes an already-valid platform and further elaborates it with
# optional fields such as linux-kernel, gcc, etc.
# optional fields; currently these are: linux-kernel, gcc, and rustc.
{ lib }:
rec {
@ -500,7 +500,7 @@ rec {
# based on:
# https://www.mail-archive.com/qemu-discuss@nongnu.org/msg05179.html
# https://gmplib.org/~tege/qemu.html#mips64-debian
mips64el-qemu-linux-gnuabi64 = (import ./examples).mips64el-linux-gnuabi64 // {
mips64el-qemu-linux-gnuabi64 = {
linux-kernel = {
name = "mips64el";
baseConfig = "64r2el_defconfig";
@ -568,5 +568,5 @@ rec {
else if platform.parsed.cpu == lib.systems.parse.cpuTypes.powerpc64le then powernv
else pc;
else { };
}

View file

@ -0,0 +1,8 @@
{
outputs = { self, subflake, callLocklessFlake }: rec {
x = (callLocklessFlake {
path = subflake;
inputs = {};
}).subflakeOutput;
};
}

View file

@ -0,0 +1,5 @@
{
outputs = { self }: {
subflakeOutput = 1;
};
}

View file

@ -22,6 +22,15 @@ in
runTests {
# FLAKES
testCallLocklessFlake = {
expr = callLocklessFlake {
path = ./flakes/subflakeTest;
inputs = { subflake = ./flakes/subflakeTest/subflake; inherit callLocklessFlake; };
};
expected = { x = 1; outPath = ./flakes/subflakeTest; };
};
# TRIVIAL
@ -251,6 +260,56 @@ runTests {
expected = "&quot;test&quot; &apos;test&apos; &lt; &amp; &gt;";
};
testToShellVars = {
expr = ''
${toShellVars {
STRing01 = "just a 'string'";
_array_ = [ "with" "more strings" ];
assoc."with some" = ''
strings
possibly newlines
'';
}}
'';
expected = ''
STRing01='just a '\'''string'\''''
declare -a _array_=('with' 'more strings')
declare -A assoc=(['with some']='strings
possibly newlines
')
'';
};
testHasInfixFalse = {
expr = hasInfix "c" "abde";
expected = false;
};
testHasInfixTrue = {
expr = hasInfix "c" "abcde";
expected = true;
};
testHasInfixDerivation = {
expr = hasInfix "hello" (import ../.. { system = "x86_64-linux"; }).hello;
expected = true;
};
testHasInfixPath = {
expr = hasInfix "tests" ./.;
expected = true;
};
testHasInfixPathStoreDir = {
expr = hasInfix builtins.storeDir ./.;
expected = true;
};
testHasInfixToString = {
expr = hasInfix "a" { __toString = _: "a"; };
expected = true;
};
# LISTS
testFilter = {

View file

@ -293,7 +293,7 @@ checkConfigOutput '^"a c"$' config.result ./functionTo/merging-attrs.nix
# moduleType
checkConfigOutput '^"a b"$' config.resultFoo ./declare-variants.nix ./define-variant.nix
checkConfigOutput '^"a y z"$' config.resultFooBar ./declare-variants.nix ./define-variant.nix
checkConfigOutput '^"a b y z"$' config.resultFooBar ./declare-variants.nix ./define-variant.nix
checkConfigOutput '^"a b c"$' config.resultFooFoo ./declare-variants.nix ./define-variant.nix
## emptyValue's
@ -327,6 +327,10 @@ checkConfigError 'The option .theOption.nested. in .other.nix. is already declar
# Test that types.optionType leaves types untouched as long as they don't need to be merged
checkConfigOutput 'ok' config.freeformItems.foo.bar ./adhoc-freeformType-survives-type-merge.nix
# Anonymous submodules don't get nixed by import resolution/deduplication
# because of an `extendModules` bug, issue 168767.
checkConfigOutput '^1$' config.sub.specialisation.value ./extendModules-168767-imports.nix
cat <<EOF
====== module tests ======
$pass Pass

View file

@ -0,0 +1,41 @@
{ lib
, extendModules
, ...
}:
with lib;
{
imports = [
{
options.sub = mkOption {
default = { };
type = types.submodule (
{ config
, extendModules
, ...
}:
{
options.value = mkOption {
type = types.int;
};
options.specialisation = mkOption {
default = { };
inherit
(extendModules {
modules = [{
specialisation = mkOverride 0 { };
}];
})
type;
};
}
);
};
}
{ config.sub.value = 1; }
];
}

View file

@ -19,6 +19,9 @@ with lib.systems.doubles; lib.runTests {
testi686 = mseteq i686 [ "i686-linux" "i686-freebsd" "i686-genode" "i686-netbsd" "i686-openbsd" "i686-cygwin" "i686-windows" "i686-none" "i686-darwin" ];
testmips = mseteq mips [ "mips64el-linux" "mipsel-linux" "mipsel-netbsd" ];
testmmix = mseteq mmix [ "mmix-mmixware" ];
testriscv = mseteq riscv [ "riscv32-linux" "riscv64-linux" "riscv32-netbsd" "riscv64-netbsd" "riscv32-none" "riscv64-none" ];
testriscv32 = mseteq riscv32 [ "riscv32-linux" "riscv32-netbsd" "riscv32-none" ];
testriscv64 = mseteq riscv64 [ "riscv64-linux" "riscv64-netbsd" "riscv64-none" ];
testx86_64 = mseteq x86_64 [ "x86_64-linux" "x86_64-darwin" "x86_64-freebsd" "x86_64-genode" "x86_64-redox" "x86_64-openbsd" "x86_64-netbsd" "x86_64-cygwin" "x86_64-solaris" "x86_64-windows" "x86_64-none" ];
testcygwin = mseteq cygwin [ "i686-cygwin" "x86_64-cygwin" ];

View file

@ -568,6 +568,11 @@ rec {
{ modules
, specialArgs ? {}
, shorthandOnlyDefinesConfig ? false
# Internal variable to avoid `_key` collisions regardless
# of `extendModules`. Wired through by `evalModules`.
# Test case: lib/tests/modules, "168767"
, extensionOffset ? 0
}@attrs:
let
inherit (lib.modules) evalModules;
@ -579,11 +584,11 @@ rec {
allModules = defs: imap1 (n: { value, file }:
if isFunction value
then setFunctionArgs
(args: lib.modules.unifyModuleSyntax file "${toString file}-${toString n}" (value args))
(args: lib.modules.unifyModuleSyntax file "${toString file}-${toString (n + extensionOffset)}" (value args))
(functionArgs value)
else if isAttrs value
then
lib.modules.unifyModuleSyntax file "${toString file}-${toString n}" (shorthandToModule value)
lib.modules.unifyModuleSyntax file "${toString file}-${toString (n + extensionOffset)}" (shorthandToModule value)
else value
) defs;
@ -620,6 +625,7 @@ rec {
(base.extendModules {
modules = [ { _module.args.name = last loc; } ] ++ allModules defs;
prefix = loc;
extensionOffset = extensionOffset + length defs;
}).config;
emptyValue = { value = {}; };
getSubOptions = prefix: (base.extendModules

View file

@ -380,6 +380,12 @@
githubId = 209175;
name = "Alesya Huzik";
};
aidalgol = {
email = "aidalgol+nixpkgs@fastmail.net";
github = "aidalgol";
githubId = 2313201;
name = "Aidan Gauland";
};
aij = {
email = "aij+git@mrph.org";
github = "aij";
@ -519,6 +525,12 @@
githubId = 38869148;
name = "Alex Eyre";
};
algram = {
email = "aliasgram@gmail.com";
github = "Algram";
githubId = 5053729;
name = "Alias Gram";
};
alibabzo = {
email = "alistair.bill@gmail.com";
github = "alibabzo";
@ -669,6 +681,18 @@
githubId = 858965;
name = "Andrew Morsillo";
};
an-empty-string = {
name = "Tris Emmy Wilson";
email = "tris@tris.fyi";
github = "an-empty-string";
githubId = 681716;
};
AnatolyPopov = {
email = "aipopov@live.ru";
github = "AnatolyPopov";
githubId = 2312534;
name = "Anatolii Popov";
};
andehen = {
email = "git@andehen.net";
github = "andehen";
@ -817,6 +841,13 @@
githubId = 5327697;
name = "Anatolii Prylutskyi";
};
anselmschueler = {
email = "mail@anselmschueler.com";
github = "schuelermine";
githubId = 48802534;
name = "Anselm Schüler";
matrix = "@schuelermine:matrix.org";
};
antoinerg = {
email = "roygobeil.antoine@gmail.com";
github = "antoinerg";
@ -1037,8 +1068,8 @@
name = "Kirill Boltaev";
};
ashley = {
email = "personavinny@protonmail.com";
github = "paranoidcat";
email = "ashley@kira64.xyz";
github = "kira64xyz";
githubId = 84152630;
name = "Ashley Chiara";
};
@ -1378,6 +1409,12 @@
githubId = 164148;
name = "Ben Darwin";
};
bdd = {
email = "bdd@mindcast.org";
github = "bdd";
githubId = 11135;
name = "Berk D. Demir";
};
bdesham = {
email = "benjamin@esham.io";
github = "bdesham";
@ -1794,6 +1831,12 @@
githubId = 7214361;
name = "Roman Gerasimenko";
};
builditluc = {
email = "builditluc@icloud.com";
github = "builditluc";
githubId = 37375448;
name = "Buildit";
};
bburdette = {
email = "bburdette@protonmail.com";
github = "bburdette";
@ -2043,6 +2086,12 @@
githubId = 8228888;
name = "Charlie Hanley";
};
charlesbaynham = {
email = "charlesbaynham@gmail.com";
github = "charlesbaynham";
githubId = 4397637;
name = "Charles Baynham";
};
CharlesHD = {
email = "charleshdespointes@gmail.com";
github = "CharlesHD";
@ -2083,12 +2132,6 @@
githubId = 18648043;
name = "Daniel Cartwright";
};
chiiruno = {
email = "okinan@protonmail.com";
github = "chiiruno";
githubId = 30435868;
name = "Okina Matara";
};
Chili-Man = {
email = "dr.elhombrechile@gmail.com";
name = "Diego Rodriguez";
@ -2260,6 +2303,7 @@
fingerprint = "539F 0655 4D35 38A5 429A E253 13E7 9449 C052 5215";
}];
name = "ckie";
matrix = "@ckie:ckie.dev";
};
clkamp = {
email = "c@lkamp.de";
@ -3837,6 +3881,13 @@
githubId = 222467;
name = "Dmitry Ivanov";
};
ethindp = {
name = "Ethin Probst";
email = "harlydavidsen@gmail.com";
matrix = "@ethindp:the-gdn.net";
github = "ethindp";
githubId = 8030501;
};
Etjean = {
email = "et.jean@outlook.fr";
github = "Etjean";
@ -4073,11 +4124,18 @@
matrix = "@felschr:matrix.org";
github = "felschr";
githubId = 3314323;
name = "Felix Tenley";
keys = [{
longkeyid = "ed25519/0x910ACB9F6BD26F58";
fingerprint = "6AB3 7A28 5420 9A41 82D9 0068 910A CB9F 6BD2 6F58";
}];
name = "Felix Schröter";
keys = [
{
# historical
longkeyid = "ed25519/0x910ACB9F6BD26F58";
fingerprint = "6AB3 7A28 5420 9A41 82D9 0068 910A CB9F 6BD2 6F58";
}
{
longkeyid = "ed25519/0x671E39E6744C807D";
fingerprint = "7E08 6842 0934 AA1D 6821 1F2A 671E 39E6 744C 807D";
}
];
};
ffinkdevs = {
email = "fink@h0st.space";
@ -4696,6 +4754,12 @@
githubId = 201997;
name = "Eric Seidel";
};
grindhold = {
name = "grindhold";
email = "grindhold+nix@skarphed.org";
github = "grindhold";
githubId = 2592640;
};
gspia = {
email = "iahogsp@gmail.com";
github = "gspia";
@ -5585,6 +5649,12 @@
githubId = 488556;
name = "Javier Aguirre";
};
jayesh-bhoot = {
name = "Jayesh Bhoot";
email = "jayesh@bhoot.sh";
github = "jayesh-bhoot";
githubId = 1915507;
};
jb55 = {
email = "jb55@jb55.com";
github = "jb55";
@ -6815,12 +6885,6 @@
githubId = 99639;
name = "Pawel Kruszewski";
};
ktosiek = {
email = "tomasz.kontusz@gmail.com";
github = "ktosiek";
githubId = 278013;
name = "Tomasz Kontusz";
};
kubukoz = {
email = "kubukoz@gmail.com";
github = "kubukoz";
@ -7154,6 +7218,13 @@
githubId = 1769386;
name = "Liam Diprose";
};
libjared = {
email = "jared@perrycode.com";
github = "libjared";
githubId = 3746656;
matrix = "@libjared:matrix.org";
name = "Jared Perry";
};
liff = {
email = "liff@iki.fi";
github = "liff";
@ -7184,6 +7255,13 @@
githubId = 714;
name = "Lily Ballard";
};
lilyinstarlight = {
email = "lily@lily.flowers";
matrix = "@lily:lily.flowers";
github = "lilyinstarlight";
githubId = 298109;
name = "Lily Foster";
};
limeytexan = {
email = "limeytexan@gmail.com";
github = "limeytexan";
@ -8844,6 +8922,12 @@
githubId = 3747396;
name = "Nathan Isom";
};
neilmayhew = {
email = "nix@neil.mayhew.name";
github = "neilmayhew";
githubId = 166791;
name = "Neil Mayhew";
};
nelsonjeppesen = {
email = "nix@jeppesen.io";
github = "NelsonJeppesen";
@ -11042,22 +11126,6 @@
githubId = 132835;
name = "Samuel Dionne-Riel";
};
samuelgrf = {
email = "s@muel.gr";
github = "samuelgrf";
githubId = 67663538;
name = "Samuel Gräfenstein";
keys = [
{
longkeyid = "rsa4096/0xDE75F92E318123F0";
fingerprint = "6F2E 2A90 423C 8111 BFF2 895E DE75 F92E 3181 23F0";
}
{
longkeyid = "rsa4096/0xEF76A063F15C63C8";
fingerprint = "FF24 5832 8FAF 4660 18C6 186E EF76 A063 F15C 63C8";
}
];
};
samuelrivas = {
email = "samuelrivas@gmail.com";
github = "samuelrivas";
@ -11300,6 +11368,12 @@
githubId = 307899;
name = "Gurkan Gur";
};
serge = {
email = "sb@canva.com";
github = "serge-belov";
githubId = 38824235;
name = "Serge Belov";
};
sersorrel = {
email = "ash@sorrel.sh";
github = "sersorrel";
@ -11989,6 +12063,12 @@
githubId = 1694705;
name = "Sam Stites";
};
strager = {
email = "strager.nds@gmail.com";
github = "strager";
githubId = 48666;
name = "Matthew \"strager\" Glazar";
};
stumoss = {
email = "samoss@gmail.com";
github = "stumoss";
@ -12481,6 +12561,12 @@
githubId = 844343;
name = "Thiago K. Okada";
};
thibaultlemaire = {
email = "thibault.lemaire@protonmail.com";
github = "ThibaultLemaire";
githubId = 21345269;
name = "Thibault Lemaire";
};
thibautmarty = {
email = "github@thibautmarty.fr";
matrix = "@thibaut:thibautmarty.fr";
@ -12727,6 +12813,13 @@
githubId = 90456;
name = "Rebecca (Bex) Kelly";
};
tpw_rules = {
name = "Thomas Watson";
email = "twatson52@icloud.com";
matrix = "@tpw_rules:matrix.org";
github = "tpwrules";
githubId = 208010;
};
travisbhartwell = {
email = "nafai@travishartwell.net";
github = "travisbhartwell";
@ -13201,6 +13294,12 @@
githubId = 1771332;
name = "László Vaskó";
};
vlinkz = {
email = "vmfuentes64@gmail.com";
github = "vlinkz";
githubId = 20145996;
name = "Victor Fuentes";
};
vlstill = {
email = "xstill@fi.muni.cz";
github = "vlstill";
@ -13493,7 +13592,7 @@
name = "Andrei Pampu";
};
wolfangaukang = {
email = "liquid.query960@4wrd.cc";
email = "clone.gleeful135+nixpkgs@anonaddy.me";
github = "wolfangaukang";
githubId = 8378365;
name = "P. R. d. O.";
@ -13546,6 +13645,13 @@
github = "wunderbrick";
githubId = 52174714;
};
wyndon = {
email = "72203260+wyndon@users.noreply.github.com";
matrix = "@wyndon:envs.net";
github = "wyndon";
githubId = 72203260;
name = "wyndon";
};
wyvie = {
email = "elijahrum@gmail.com";
github = "wyvie";
@ -14105,6 +14211,12 @@
github = "deifactor";
githubId = 30192992;
};
deinferno = {
name = "deinferno";
email = "14363193+deinferno@users.noreply.github.com";
github = "deinferno";
githubId = 14363193;
};
fzakaria = {
name = "Farid Zakaria";
email = "farid.m.zakaria@gmail.com";
@ -14228,6 +14340,12 @@
github = "zbioe";
githubId = 7332055;
};
zendo = {
name = "zendo";
email = "linzway@qq.com";
github = "zendo";
githubId = 348013;
};
zenithal = {
name = "zenithal";
email = "i@zenithal.me";
@ -14348,4 +14466,16 @@
github = "kuwii";
githubId = 10705175;
};
melias122 = {
name = "Martin Elias";
email = "martin+nixpkgs@elias.sx";
github = "melias122";
githubId = 1027766;
};
bryanhonof = {
name = "Bryan Honof";
email = "bryanhonof@gmail.com";
github = "bryanhonof";
githubId = 5932804;
};
}

View file

@ -52,7 +52,7 @@ luadbi-postgresql,,,,,,
luadbi-sqlite3,,,,,,
luaepnf,,,,,,
luaevent,,,,,,
luaexpat,,,,1.3.0-1,,arobyn flosse
luaexpat,,,,1.4.1-1,,arobyn flosse
luaffi,,,http://luarocks.org/dev,,,
luafilesystem,,,,1.7.0-2,,flosse
lualogging,,,,,,
@ -64,6 +64,7 @@ luasocket,,,,,,
luasql-sqlite3,,,,,,vyp
luassert,,,,,,
luasystem,,,,,,
luaunbound,,,,,
luautf8,,,,,,pstn
luazip,,,,,,
lua-yajl,,,,,,pstn


View file

@ -87,7 +87,8 @@ def make_request(url: str, token=None) -> urllib.request.Request:
return urllib.request.Request(url, headers=headers)
Redirects = Dict['Repo', 'Repo']
# a dictionary of plugins and their new repositories
Redirects = Dict['PluginDesc', 'Repo']
class Repo:
def __init__(
@ -96,8 +97,8 @@ class Repo:
self.uri = uri
'''Url to the repo'''
self._branch = branch
# {old_uri: new_uri}
self.redirect: Redirects = {}
# Redirect is the new Repo to use
self.redirect: Optional['Repo'] = None
self.token = "dummy_token"
@property
@ -207,7 +208,7 @@ class RepoGitHub(Repo):
)
new_repo = RepoGitHub(owner=new_owner, repo=new_name, branch=self.branch)
self.redirect[self] = new_repo
self.redirect = new_repo
def prefetch(self, commit: str) -> str:
@ -237,7 +238,7 @@ class RepoGitHub(Repo):
}}'''
@dataclass
@dataclass(frozen=True)
class PluginDesc:
repo: Repo
branch: str
@ -310,6 +311,16 @@ def load_plugins_from_csv(config: FetchConfig, input_file: Path,) -> List[Plugin
return plugins
def run_nix_expr(expr):
with CleanEnvironment():
cmd = ["nix", "eval", "--extra-experimental-features",
"nix-command", "--impure", "--json", "--expr", expr]
log.debug("Running command %s", cmd)
out = subprocess.check_output(cmd)
data = json.loads(out)
return data
class Editor:
"""The configuration of the update script."""
@ -332,9 +343,15 @@ class Editor:
self.deprecated = deprecated or root.joinpath("deprecated.json")
self.cache_file = cache_file or f"{name}-plugin-cache.json"
def get_current_plugins(self):
def get_current_plugins(self) -> List[Plugin]:
"""To fill the cache"""
return get_current_plugins(self)
data = run_nix_expr(self.get_plugins)
plugins = []
for name, attr in data.items():
print("get_current_plugins: name %s" % name)
p = Plugin(name, attr["rev"], attr["submodules"], attr["sha256"])
plugins.append(p)
return plugins
def load_plugin_spec(self, config: FetchConfig, plugin_file) -> List[PluginDesc]:
'''CSV spec'''
@ -448,24 +465,10 @@ class CleanEnvironment(object):
self.empty_config.close()
def get_current_plugins(editor: Editor) -> List[Plugin]:
with CleanEnvironment():
cmd = ["nix", "eval", "--extra-experimental-features", "nix-command", "--impure", "--json", "--expr", editor.get_plugins]
log.debug("Running command %s", cmd)
out = subprocess.check_output(cmd)
data = json.loads(out)
plugins = []
for name, attr in data.items():
print("get_current_plugins: name %s" % name)
p = Plugin(name, attr["rev"], attr["submodules"], attr["sha256"])
plugins.append(p)
return plugins
def prefetch_plugin(
p: PluginDesc,
cache: "Optional[Cache]" = None,
) -> Tuple[Plugin, Redirects]:
) -> Tuple[Plugin, Optional[Repo]]:
repo, branch, alias = p.repo, p.branch, p.alias
name = alias or p.repo.name
commit = None
@ -479,7 +482,7 @@ def prefetch_plugin(
return cached_plugin, repo.redirect
has_submodules = repo.has_submodules()
print(f"prefetch {name}")
log.debug(f"prefetch {name}")
sha256 = repo.prefetch(commit)
return (
@ -488,7 +491,7 @@ def prefetch_plugin(
)
def print_download_error(plugin: str, ex: Exception):
def print_download_error(plugin: PluginDesc, ex: Exception):
print(f"{plugin}: {ex}", file=sys.stderr)
ex_traceback = ex.__traceback__
tb_lines = [
@ -498,19 +501,21 @@ def print_download_error(plugin: str, ex: Exception):
print("\n".join(tb_lines))
def check_results(
results: List[Tuple[PluginDesc, Union[Exception, Plugin], Redirects]]
results: List[Tuple[PluginDesc, Union[Exception, Plugin], Optional[Repo]]]
) -> Tuple[List[Tuple[PluginDesc, Plugin]], Redirects]:
''' '''
failures: List[Tuple[str, Exception]] = []
failures: List[Tuple[PluginDesc, Exception]] = []
plugins = []
# {old: new} plugindesc
redirects: Dict[Repo, Repo] = {}
redirects: Redirects = {}
for (pdesc, result, redirect) in results:
if isinstance(result, Exception):
failures.append((pdesc.name, result))
failures.append((pdesc, result))
else:
plugins.append((pdesc, result))
redirects.update(redirect)
new_pdesc = pdesc
if redirect is not None:
redirects.update({pdesc: redirect})
new_pdesc = PluginDesc(redirect, pdesc.branch, pdesc.alias)
plugins.append((new_pdesc, result))
print(f"{len(results) - len(failures)} plugins were checked", end="")
if len(failures) == 0:
@ -591,13 +596,13 @@ class Cache:
def prefetch(
pluginDesc: PluginDesc, cache: Cache
) -> Tuple[PluginDesc, Union[Exception, Plugin], dict]:
) -> Tuple[PluginDesc, Union[Exception, Plugin], Optional[Repo]]:
try:
plugin, redirect = prefetch_plugin(pluginDesc, cache)
cache[plugin.commit] = plugin
return (pluginDesc, plugin, redirect)
except Exception as e:
return (pluginDesc, e, {})
return (pluginDesc, e, None)
@ -606,7 +611,7 @@ def rewrite_input(
input_file: Path,
deprecated: Path,
# old pluginDesc and the new
redirects: Dict[PluginDesc, PluginDesc] = {},
redirects: Redirects = {},
append: List[PluginDesc] = [],
):
plugins = load_plugins_from_csv(config, input_file,)
@ -618,9 +623,10 @@ def rewrite_input(
cur_date_iso = datetime.now().strftime("%Y-%m-%d")
with open(deprecated, "r") as f:
deprecations = json.load(f)
for old, new in redirects.items():
old_plugin, _ = prefetch_plugin(old)
new_plugin, _ = prefetch_plugin(new)
for pdesc, new_repo in redirects.items():
new_pdesc = PluginDesc(new_repo, pdesc.branch, pdesc.alias)
old_plugin, _ = prefetch_plugin(pdesc)
new_plugin, _ = prefetch_plugin(new_pdesc)
if old_plugin.normalized_name != new_plugin.normalized_name:
deprecations[old_plugin.normalized_name] = {
"new": new_plugin.normalized_name,

View file

@ -198,6 +198,18 @@ with lib.maintainers; {
enableFeatureFreezePing = true;
};
enlightenment = {
members = [
romildo
];
githubTeams = [
"enlightenment"
];
scope = "Maintain Enlightenment desktop environment and related packages.";
shortName = "Enlightenment";
enableFeatureFreezePing = true;
};
# Dummy group for the "everyone else" section
feature-freeze-everyone-else = {
members = [ ];
@ -343,6 +355,30 @@ with lib.maintainers; {
shortName = "Linux Kernel";
};
lumina = {
members = [
romildo
];
githubTeams = [
"lumina"
];
scope = "Maintain lumina desktop environment and related packages.";
shortName = "Lumina";
enableFeatureFreezePing = true;
};
lxqt = {
members = [
romildo
];
githubTeams = [
"lxqt"
];
scope = "Maintain LXQt desktop environment and related packages.";
shortName = "LXQt";
enableFeatureFreezePing = true;
};
marketing = {
members = [
garbas

View file

@ -40,7 +40,7 @@ section for details on container networking.)
To disable the container, just remove it from `configuration.nix` and
run `nixos-rebuild
switch`. Note that this will not delete the root directory of the
container in `/var/lib/containers`. Containers can be destroyed using
container in `/var/lib/nixos-containers`. Containers can be destroyed using
the imperative method: `nixos-container destroy foo`.
Declarative containers can be started and stopped using the

View file

@ -10,8 +10,8 @@ You create a container with identifier `foo` as follows:
# nixos-container create foo
```
This creates the container's root directory in `/var/lib/containers/foo`
and a small configuration file in `/etc/containers/foo.conf`. It also
This creates the container's root directory in `/var/lib/nixos-containers/foo`
and a small configuration file in `/etc/nixos-containers/foo.conf`. It also
builds the container's initial system configuration and stores it in
`/nix/var/nix/profiles/per-container/foo/system`. You can modify the
initial configuration of the container on the command line. For

View file

@ -14,7 +14,6 @@
<xi:include href="../from_md/development/building-parts.chapter.xml" />
<xi:include href="../from_md/development/what-happens-during-a-system-switch.chapter.xml" />
<xi:include href="../from_md/development/writing-documentation.chapter.xml" />
<xi:include href="../from_md/development/building-nixos.chapter.xml" />
<xi:include href="../from_md/development/nixos-tests.chapter.xml" />
<xi:include href="../from_md/development/testing-installer.chapter.xml" />
</part>

View file

@ -159,34 +159,42 @@ The following methods are available on machine objects:
`execute`
: Execute a shell command, returning a list `(status, stdout)`.
Commands are run with `set -euo pipefail` set:
- If several commands are separated by `;` and one fails, the
command as a whole will fail.
- For pipelines, the last non-zero exit status will be returned
(if there is one; otherwise zero will be returned).
- Dereferencing unset variables fails the command.
- It will wait for stdout to be closed.
If the command detaches, it must close stdout, as `execute` will wait
for this to consume all output reliably. This can be achieved by
redirecting stdout to stderr `>&2`, to `/dev/console`, `/dev/null` or
a file. Examples of detaching commands are `sleep 365d &`, where the
shell forks a new process that can write to stdout and `xclip -i`, where
the `xclip` command itself forks without closing stdout.
Takes an optional parameter `check_return` that defaults to `True`.
Setting this parameter to `False` will not check for the return code
and return -1 instead. This can be used for commands that shut down
the VM and would therefore break the pipe that would be used for
retrieving the return code.
A timeout for the command can be specified (in seconds) using the optional
`timeout` parameter, e.g., `execute(cmd, timeout=10)` or
`execute(cmd, timeout=None)`. The default is 900 seconds.
`succeed`
: Execute a shell command, raising an exception if the exit status is
not zero, otherwise returning the standard output. Commands are run
with `set -euo pipefail` set:
- If several commands are separated by `;` and one fails, the
command as a whole will fail.
- For pipelines, the last non-zero exit status will be returned
(if there is one, zero will be returned otherwise).
- Dereferencing unset variables fail the command.
- It will wait for stdout to be closed. See `execute` for the
implications.
not zero, otherwise returning the standard output. Similar to `execute`,
except that the timeout is `None` by default. See `execute` for details on
command execution.
`fail`
@ -196,10 +204,13 @@ The following methods are available on machine objects:
`wait_until_succeeds`
: Repeat a shell command with 1-second intervals until it succeeds.
Has a default timeout of 900 seconds which can be modified, e.g.
`wait_until_succeeds(cmd, timeout=10)`. See `execute` for details on
command execution.
`wait_until_fails`
: Repeat a shell command with 1-second intervals until it fails.
: Like `wait_until_succeeds`, but repeating the command until it fails.
`wait_for_unit`

View file

@ -48,8 +48,8 @@ containers.database = {
<literal>configuration.nix</literal> and run
<literal>nixos-rebuild switch</literal>. Note that this will not
delete the root directory of the container in
<literal>/var/lib/containers</literal>. Containers can be destroyed
using the imperative method:
<literal>/var/lib/nixos-containers</literal>. Containers can be
destroyed using the imperative method:
<literal>nixos-container destroy foo</literal>.
</para>
<para>

View file

@ -14,8 +14,9 @@
</programlisting>
<para>
This creates the containers root directory in
<literal>/var/lib/containers/foo</literal> and a small configuration
file in <literal>/etc/containers/foo.conf</literal>. It also builds
<literal>/var/lib/nixos-containers/foo</literal> and a small
configuration file in
<literal>/etc/nixos-containers/foo.conf</literal>. It also builds
the containers initial system configuration and stores it in
<literal>/nix/var/nix/profiles/per-container/foo/system</literal>.
You can modify the initial configuration of the container on the

View file

@ -274,35 +274,9 @@ start_all()
<listitem>
<para>
Execute a shell command, returning a list
<literal>(status, stdout)</literal>. If the command
detaches, it must close stdout, as
<literal>execute</literal> will wait for this to consume all
output reliably. This can be achieved by redirecting stdout
to stderr <literal>&gt;&amp;2</literal>, to
<literal>/dev/console</literal>,
<literal>/dev/null</literal> or a file. Examples of
detaching commands are <literal>sleep 365d &amp;</literal>,
where the shell forks a new process that can write to stdout
and <literal>xclip -i</literal>, where the
<literal>xclip</literal> command itself forks without
closing stdout. Takes an optional parameter
<literal>check_return</literal> that defaults to
<literal>True</literal>. Setting this parameter to
<literal>False</literal> will not check for the return code
and return -1 instead. This can be used for commands that
shut down the VM and would therefore break the pipe that
would be used for retrieving the return code.
<literal>(status, stdout)</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>succeed</literal>
</term>
<listitem>
<para>
Execute a shell command, raising an exception if the exit
status is not zero, otherwise returning the standard output.
Commands are run with <literal>set -euo pipefail</literal>
set:
</para>
@ -317,22 +291,63 @@ start_all()
<listitem>
<para>
For pipelines, the last non-zero exit status will be
returned (if there is one, zero will be returned
otherwise).
returned (if there is one; otherwise zero will be
returned).
</para>
</listitem>
<listitem>
<para>
Dereferencing unset variables fail the command.
Dereferencing unset variables fails the command.
</para>
</listitem>
<listitem>
<para>
It will wait for stdout to be closed. See
<literal>execute</literal> for the implications.
It will wait for stdout to be closed.
</para>
</listitem>
</itemizedlist>
<para>
If the command detaches, it must close stdout, as
<literal>execute</literal> will wait for this to consume all
output reliably. This can be achieved by redirecting stdout
to stderr <literal>&gt;&amp;2</literal>, to
<literal>/dev/console</literal>,
<literal>/dev/null</literal> or a file. Examples of
detaching commands are <literal>sleep 365d &amp;</literal>,
where the shell forks a new process that can write to stdout
and <literal>xclip -i</literal>, where the
<literal>xclip</literal> command itself forks without
closing stdout.
</para>
<para>
Takes an optional parameter <literal>check_return</literal>
that defaults to <literal>True</literal>. Setting this
parameter to <literal>False</literal> will not check for the
return code and return -1 instead. This can be used for
commands that shut down the VM and would therefore break the
pipe that would be used for retrieving the return code.
</para>
<para>
A timeout for the command can be specified (in seconds)
using the optional <literal>timeout</literal> parameter,
e.g., <literal>execute(cmd, timeout=10)</literal> or
<literal>execute(cmd, timeout=None)</literal>. The default
is 900 seconds.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>succeed</literal>
</term>
<listitem>
<para>
Execute a shell command, raising an exception if the exit
status is not zero, otherwise returning the standard output.
Similar to <literal>execute</literal>, except that the
timeout is <literal>None</literal> by default. See
<literal>execute</literal> for details on command execution.
</para>
</listitem>
</varlistentry>
<varlistentry>
@ -353,7 +368,10 @@ start_all()
<listitem>
<para>
Repeat a shell command with 1-second intervals until it
succeeds.
succeeds. Has a default timeout of 900 seconds which can be
modified, e.g.
<literal>wait_until_succeeds(cmd, timeout=10)</literal>. See
<literal>execute</literal> for details on command execution.
</para>
</listitem>
</varlistentry>
@ -363,8 +381,8 @@ start_all()
</term>
<listitem>
<para>
Repeat a shell command with 1-second intervals until it
fails.
Like <literal>wait_until_succeeds</literal>, but repeating
the command until it fails.
</para>
</listitem>
</varlistentry>

View file

@ -43,6 +43,40 @@ $ nix-build -A config.system.build.isoImage -I nixos-config=modules/installer/cd
</para>
<programlisting>
# mount -o loop -t iso9660 ./result/iso/cd.iso /mnt/iso
</programlisting>
</section>
<section xml:id="sec-building-image-drivers">
<title>Additional drivers or firmware</title>
<para>
If you need additional (non-distributable) drivers or firmware in
the installer, you might want to extend these configurations.
</para>
<para>
For example, to build the GNOME graphical installer ISO, but with
support for certain WiFi adapters present in some MacBooks, you
can create the following file at
<literal>modules/installer/cd-dvd/installation-cd-graphical-gnome-macbook.nix</literal>:
</para>
<programlisting language="bash">
{ config, ... }:
{
imports = [ ./installation-cd-graphical-gnome.nix ];
boot.initrd.kernelModules = [ &quot;wl&quot; ];
boot.kernelModules = [ &quot;kvm-intel&quot; &quot;wl&quot; ];
boot.extraModulePackages = [ config.boot.kernelPackages.broadcom_sta ];
}
</programlisting>
<para>
Then build it like in the example above:
</para>
<programlisting>
$ git clone https://github.com/NixOS/nixpkgs.git
$ cd nixpkgs/nixos
$ export NIXPKGS_ALLOW_UNFREE=1
$ nix-build -A config.system.build.isoImage -I nixos-config=modules/installer/cd-dvd/installation-cd-graphical-gnome-macbook.nix default.nix
</programlisting>
</section>
<section xml:id="sec-building-image-tech-notes">

View file

@ -43,6 +43,33 @@
Shell.
</para>
</listitem>
<listitem>
<para>
<literal>stdenv.mkDerivation</literal> now supports a
self-referencing <literal>finalAttrs:</literal> parameter
containing the final <literal>mkDerivation</literal> arguments
including overrides. <literal>drv.overrideAttrs</literal> now
supports two parameters
<literal>finalAttrs: previousAttrs:</literal>. This allows
packaging configuration to be overridden in a consistent
manner by providing an alternative to
<literal>rec {}</literal> syntax.
</para>
<para>
Additionally, <literal>passthru</literal> can now reference
<literal>finalAttrs.finalPackage</literal> containing the
final package, including attributes such as the output paths
and <literal>overrideAttrs</literal>.
</para>
<para>
New language integrations can be simplified by overriding a
<quote>prototype</quote> package containing the
language-specific logic. This removes the need for an extra
layer of overriding for the <quote>generic builder</quote>
arguments, thus removing a usability problem and source of
error.
</para>
</listitem>
<listitem>
<para>
PHP 8.1 is now available
@ -73,6 +100,29 @@
Systemd has been upgraded to the version 250.
</para>
</listitem>
<listitem>
<para>
Pulseaudio has been upgraded to version 15.0 and now
optionally
<link xlink:href="https://www.freedesktop.org/wiki/Software/PulseAudio/Notes/15.0/#supportforldacandaptxbluetoothcodecsplussbcxqsbcwithhigher-qualityparameters">supports
additional Bluetooth audio codecs</link> like aptX or LDAC,
with codec switching support being available in
<literal>pavucontrol</literal>. This feature is disabled by
default but can be enabled by using
<literal>hardware.pulseaudio.package = pkgs.pulseaudioFull;</literal>.
Existing 3rd party modules that provided similar
functionality, like <literal>pulseaudio-modules-bt</literal>
or <literal>pulseaudio-hsphfpd</literal>, are deprecated and
have been removed.
</para>
</listitem>
<listitem>
<para>
The new
<link xlink:href="https://nixos.org/manual/nixpkgs/stable/#sec-postgresqlTestHook"><literal>postgresqlTestHook</literal></link>
runs a PostgreSQL server for the duration of package checks.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://kops.sigs.k8s.io"><literal>kops</literal></link>
@ -101,6 +151,14 @@
default.
</para>
</listitem>
<listitem>
<para>
The GNOME and Plasma installation CDs now use
<literal>pkgs.calamares</literal> and
<literal>pkgs.calamares-nixos-extensions</literal> to allow
users to easily install and set up NixOS with a GUI.
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="sec-release-22.05-new-services">
@ -368,6 +426,14 @@
<link xlink:href="options.html#opt-services.headscale.enable">services.headscale</link>
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://github.com/lakinduakash/linux-wifi-hotspot">create_ap</link>,
a module for creating wifi hotspots using the program
linux-wifi-hotspot. Available as
<link xlink:href="options.html#opt-services.create_ap.enable">services.create_ap</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://0xerr0r.github.io/blocky/">blocky</link>,
@ -456,12 +522,85 @@
new versions will release.
</para>
</listitem>
<listitem>
<para>
The configuration and state directories used by
<literal>nixos-containers</literal> have been moved from
<literal>/etc/containers</literal> and
<literal>/var/lib/containers</literal> to
<literal>/etc/nixos-containers</literal> and
<literal>/var/lib/nixos-containers</literal>.
</para>
<para>
If you are changing <literal>system.stateVersion</literal> to
<literal>&quot;22.05&quot;</literal> manually on an existing
system, you are responsible for migrating these directories
yourself.
</para>
<para>
This is to improve compatibility with
<literal>libcontainer</literal> based software such as Podman
and Skopeo, which assume they have ownership over
<literal>/etc/containers</literal>.
</para>
</listitem>
<listitem>
<para>
For new installations
<literal>virtualisation.oci-containers.backend</literal> is
now set to <literal>podman</literal> by default. If you still
want to use Docker on systems where
<literal>system.stateVersion</literal> is set to
<literal>&quot;22.05&quot;</literal>, set
<literal>virtualisation.oci-containers.backend = &quot;docker&quot;;</literal>.
Old systems with older <literal>stateVersion</literal>s stay with
<quote>docker</quote>.
</para>
</listitem>
<listitem>
<para>
<literal>security.klogd</literal> was removed. Logging of
kernel messages is handled by systemd since Linux 3.5.
</para>
</listitem>
<listitem>
<para>
<literal>pkgs.ssmtp</literal> has been dropped due to the
program being unmaintained. <literal>pkgs.msmtp</literal> can
be used instead as a substitute <literal>sendmail</literal>
implementation. The corresponding options
<literal>services.ssmtp.*</literal> have been removed as well.
<literal>programs.msmtp.*</literal> can be used instead for an
equivalent setup. For example:
</para>
<programlisting language="bash">
{
# Original ssmtp configuration:
services.ssmtp = {
enable = true;
useTLS = true;
useSTARTTLS = true;
hostName = &quot;smtp.example:587&quot;;
authUser = &quot;someone&quot;;
authPassFile = &quot;/secrets/password.txt&quot;;
};
# Equivalent msmtp configuration:
programs.msmtp = {
enable = true;
accounts.default = {
tls = true;
tls_starttls = true;
auth = true;
host = &quot;smtp.example&quot;;
port = 587;
user = &quot;someone&quot;;
passwordeval = &quot;cat /secrets/password.txt&quot;;
};
};
}
</programlisting>
</listitem>
<listitem>
<para>
<literal>services.kubernetes.addons.dashboard</literal> was
@ -504,12 +643,28 @@
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
In the ncdns module, the default value of
<literal>services.ncdns.address</literal> has been changed to
the IPv6 loopback address (<literal>::1</literal>).
</para>
</listitem>
<listitem>
<para>
<literal>openssh</literal> has been updated to 8.9p1, changing
the FIDO security key middleware interface.
</para>
</listitem>
<listitem>
<para>
<literal>git</literal> no longer hardcodes the path to
openssh's ssh binary to reduce the amount of rebuilds. If you
are using git with ssh remotes and do not have an ssh binary in
your environment, consider adding <literal>openssh</literal> to
it or switching to <literal>gitFull</literal>.
</para>
</listitem>
<listitem>
<para>
<literal>services.k3s.enable</literal> no longer implies
@ -582,6 +737,13 @@
work.
</para>
</listitem>
<listitem>
<para>
<literal>hbase</literal> version 0.98.24 has been removed. The
package now defaults to version 2.4.11. Versions 1.7.1 and
3.0.0-alpha-2 are also available.
</para>
</listitem>
<listitem>
<para>
<literal>services.paperless-ng</literal> was renamed to
@ -715,6 +877,136 @@
to the new location if the <literal>stateVersion</literal> is
updated.
</para>
<para>
As of Synapse 1.58.0, the old groups/communities feature has
been disabled by default. It will be completely removed with
Synapse 1.61.0.
</para>
</listitem>
<listitem>
<para>
The Keycloak package (<literal>pkgs.keycloak</literal>) has
been switched from the Wildfly version, which will soon be
deprecated, to the Quarkus based version. The Keycloak service
(<literal>services.keycloak</literal>) has been updated to
accommodate the change and now differs from the previous
version in a few ways:
</para>
<itemizedlist>
<listitem>
<para>
<literal>services.keycloak.extraConfig</literal> has been
removed in favor of the new
<link xlink:href="https://github.com/NixOS/rfcs/blob/master/rfcs/0042-config-option.md">settings-style</link>
<link linkend="opt-services.keycloak.settings"><literal>services.keycloak.settings</literal></link>
option. The available options correspond directly to
parameters in <literal>conf/keycloak.conf</literal>. Some
of the most important parameters are documented as
suboptions, the rest can be found in the
<link xlink:href="https://www.keycloak.org/server/all-config">All
configuration section of the Keycloak Server Installation
and Configuration Guide</link>. While the new
configuration is much simpler and cleaner than the old
JBoss CLI one, this unfortunately means that there's no
straightforward way to convert an old configuration to the
new format and some settings may not even be available
anymore.
</para>
</listitem>
<listitem>
<para>
<literal>services.keycloak.frontendUrl</literal> was
removed and the frontend URL is now configured through the
<literal>hostname</literal> family of settings in
<link linkend="opt-services.keycloak.settings"><literal>services.keycloak.settings</literal></link>
instead. See the
<link xlink:href="https://www.keycloak.org/server/hostname">Hostname
section of the Keycloak Server Installation and
Configuration Guide</link> for more details. Additionally,
<literal>/auth</literal> was removed from the default
context path and needs to be added back in
<link linkend="opt-services.keycloak.settings.http-relative-path"><literal>services.keycloak.settings.http-relative-path</literal></link>
if you want to keep compatibility with your current
clients.
</para>
</listitem>
<listitem>
<para>
<literal>services.keycloak.bindAddress</literal>,
<literal>services.keycloak.forceBackendUrlToFrontendUrl</literal>,
<literal>services.keycloak.httpPort</literal> and
<literal>services.keycloak.httpsPort</literal> have been
removed in favor of their equivalent options in
<link linkend="opt-services.keycloak.settings"><literal>services.keycloak.settings</literal></link>.
<literal>httpPort</literal> and
<literal>httpsPort</literal> have additionally had their
types changed from <literal>str</literal> to
<literal>port</literal>.
</para>
<para>
The new names are as follows:
</para>
<itemizedlist spacing="compact">
<listitem>
<para>
<literal>bindAddress</literal>:
<link linkend="opt-services.keycloak.settings.http-host"><literal>services.keycloak.settings.http-host</literal></link>
</para>
</listitem>
<listitem>
<para>
<literal>forceBackendUrlToFrontendUrl</literal>:
<link linkend="opt-services.keycloak.settings.hostname-strict-backchannel"><literal>services.keycloak.settings.hostname-strict-backchannel</literal></link>
</para>
</listitem>
<listitem>
<para>
<literal>httpPort</literal>:
<link linkend="opt-services.keycloak.settings.http-port"><literal>services.keycloak.settings.http-port</literal></link>
</para>
</listitem>
<listitem>
<para>
<literal>httpsPort</literal>:
<link linkend="opt-services.keycloak.settings.https-port"><literal>services.keycloak.settings.https-port</literal></link>
</para>
</listitem>
</itemizedlist>
</listitem>
</itemizedlist>
<para>
For example, when using a reverse proxy the migration could
look like this:
</para>
<para>
Before:
</para>
<programlisting language="bash">
services.keycloak = {
enable = true;
httpPort = &quot;8080&quot;;
frontendUrl = &quot;https://keycloak.example.com/auth&quot;;
database.passwordFile = &quot;/run/keys/db_password&quot;;
extraConfig = {
&quot;subsystem=undertow&quot;.&quot;server=default-server&quot;.&quot;http-listener=default&quot;.proxy-address-forwarding = true;
};
};
</programlisting>
<para>
After:
</para>
<programlisting language="bash">
services.keycloak = {
enable = true;
settings = {
http-port = 8080;
hostname = &quot;keycloak.example.com&quot;;
http-relative-path = &quot;/auth&quot;;
proxy = &quot;edge&quot;;
};
database.passwordFile = &quot;/run/keys/db_password&quot;;
};
</programlisting>
</listitem>
<listitem>
<para>
@ -850,6 +1142,14 @@
<literal>admin</literal> and <literal>password</literal>.
</para>
</listitem>
<listitem>
<para>
The <literal>taskserver</literal> module no longer implicitly
opens ports in the firewall configuration. This is now
controlled through the option
<literal>services.taskserver.openFirewall</literal>.
</para>
</listitem>
<listitem>
<para>
The <literal>autorestic</literal> package has been upgraded
@ -1052,6 +1352,14 @@
<literal>systemd.nspawn.&lt;name&gt;.execConfig.PrivateUsers = false</literal>
</para>
</listitem>
<listitem>
<para>
<literal>systemd-shutdown</literal> is now properly linked on
shutdown to unmount all filesystems and device mapper devices
cleanly. This can be disabled using
<literal>systemd.shutdownRamfs.enable</literal>.
</para>
</listitem>
<listitem>
<para>
The Tor SOCKS proxy is now actually disabled if
@ -1062,6 +1370,15 @@
<literal>true</literal>.
</para>
</listitem>
<listitem>
<para>
<literal>services.github-runner</literal> has been hardened.
Notably address families and system calls have been
restricted, which may adversely affect some kinds of testing,
e.g. using <literal>AF_BLUETOOTH</literal> to test bluetooth
devices.
</para>
</listitem>
<listitem>
<para>
The terraform 0.12 compatibility has been removed and the
@ -1106,6 +1423,16 @@
<literal>otelcorecol</literal> and enjoy a 7x smaller binary.
</para>
</listitem>
<listitem>
<para>
<literal>services.zookeeper</literal> has a new option
<literal>jre</literal> for specifying the JRE to start
zookeeper with. It defaults to the JRE that
<literal>pkgs.zookeeper</literal> was wrapped with, instead of
<literal>pkgs.jre</literal>. This changes the JRE to
<literal>pkgs.jdk11_headless</literal> by default.
</para>
</listitem>
<listitem>
<para>
<literal>pkgs.pgadmin</literal> now refers to
@ -1386,6 +1713,17 @@
<literal>pkgs.cosmoc</literal>.
</para>
</listitem>
<listitem>
<para>
<literal>pkgs.graalvmXX-ce</literal> packages no longer
provide support for Python/Ruby/WASM, instead focusing only on
Java and Native Image Support. If you need to add support
back, please see the
<literal>pkgs.graalvmCEPackages.mkGraal</literal> function to
create your own customized version of GraalVM with support for
what you need.
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="sec-release-22.05-notable-changes">
@ -1499,6 +1837,14 @@
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
The auto-upgrade service now accepts a persistent (default:
true) parameter. By default auto-upgrade will now run
immediately if it would have been triggered at least once
during the time when the timer was inactive.
</para>
</listitem>
<listitem>
<para>
If you are using Wayland you can choose to use the Ozone
@ -1647,6 +1993,29 @@
configuration.
</para>
</listitem>
<listitem>
<para>
The option
<link linkend="opt-services.xserver.desktopManager.runXdgAutostartIfNone">services.xserver.desktopManager.runXdgAutostartIfNone</link>
was added in order to automatically run XDG autostart files
for sessions without a desktop manager. This replaces helpers
like the <literal>dex</literal> package.
</para>
</listitem>
<listitem>
<para>
When setting
<link linkend="opt-i18n.inputMethod.enabled">i18n.inputMethod.enabled</link>
to <literal>fcitx5</literal>, it no longer creates
corresponding systemd user services. It now relies on XDG
autostart files to start and work properly in your desktop
sessions. If you are using only a window manager without a
desktop manager, you need to enable
<literal>services.xserver.desktopManager.runXdgAutostartIfNone</literal>
or using the <literal>dex</literal> package to make
<literal>fcitx5</literal> work.
</para>
</listitem>
<listitem>
<para>
A new module was added for the Envoy reverse proxy, providing
@ -1678,10 +2047,51 @@
</listitem>
<listitem>
<para>
ORY Kratos was updated to version 0.8.3-alpha.1.pre.0, which
ORY Kratos was updated to version 0.9.0-alpha.3, which
introduces some breaking changes:
</para>
<itemizedlist spacing="compact">
<listitem>
<para>
All endpoints at the Admin API are now exposed at
<literal>/admin/</literal>. For example, endpoint
<literal>https://kratos:4434/identities</literal> is now
exposed at
<literal>https://kratos:4434/admin/identities</literal>
</para>
</listitem>
<listitem>
<para>
Configuration key
<literal>selfservice.whitelisted_return_urls</literal> has
been renamed to <literal>allowed_return_urls</literal>
</para>
</listitem>
<listitem>
<para>
The <literal>password_identifier</literal> form field of
the password login strategy has been renamed to
<literal>identifier</literal> to make compatibility with
passwordless flows possible.
</para>
</listitem>
<listitem>
<para>
Instead of having a global
<literal>default_schema_url</literal> which developers
used to update their schema, you now need to define the
<literal>default_schema_id</literal> which must reference
a schema ID in your config.
</para>
</listitem>
<listitem>
<para>
Calling <literal>/self-service/recovery</literal> without
flow ID or with an invalid flow ID while authenticated
will now respond with an error instead of redirecting to
the default page.
</para>
</listitem>
<listitem>
<para>
If you are relying on the SQLite images, update your
@ -1718,6 +2128,18 @@
Notes for v0.8.2-alpha-1</link>
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://github.com/ory/kratos/releases/tag/v0.9.0-alpha.1">Release
Notes for v0.9.0-alpha-1</link>
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://github.com/ory/kratos/releases/tag/v0.9.0-alpha.3">Release
Notes for v0.9.0-alpha-3</link>
</para>
</listitem>
</itemizedlist>
</listitem>
</itemizedlist>
@ -1845,6 +2267,12 @@
<link xlink:href="https://github.com/uber/pam-ussh">pam-ussh</link>.
</para>
</listitem>
<listitem>
<para>
The <literal>vscode-extensions.ionide.ionide-fsharp</literal>
package has been updated to 6.0.0 and now requires .NET 6.0.
</para>
</listitem>
<listitem>
<para>
The <literal>zrepl</literal> package has been updated from
@ -1870,6 +2298,24 @@
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
The <literal>polybar</literal> package has been updated from
3.5.7 to 3.6.2. See
<link xlink:href="https://github.com/polybar/polybar/releases/tag/3.6.0">the
changelog</link> for more details.
</para>
<itemizedlist spacing="compact">
<listitem>
<para>
Breaking changes include changes to escaping rules in
configuration values, changes in behavior when
encountering invalid tag names, and changes to
inter-process-messaging (IPC).
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Renamed option
@ -1961,6 +2407,24 @@
(<link xlink:href="https://github.com/NixOS/nixpkgs/pull/158992">#158992</link>).
</para>
</listitem>
<listitem>
<para>
The <literal>nss</literal> package was split into
<literal>nss_esr</literal> and <literal>nss_latest</literal>,
with <literal>nss</literal> being an alias for
<literal>nss_esr</literal>. This was done to ease maintenance
of <literal>nss</literal> and dependent high-profile packages
like <literal>firefox</literal>.
</para>
</listitem>
<listitem>
<para>
The Nextcloud module now supports creating a MySQL database
automatically with
<literal>services.nextcloud.database.createLocally</literal>
enabled.
</para>
</listitem>
<listitem>
<para>
The <literal>spark3</literal> package has been updated from
@ -1992,6 +2456,15 @@
generating host-global NNCP configuration.
</para>
</listitem>
<listitem>
<para>
The option <literal>services.snapserver.openFirewall</literal>
will no longer default to <literal>true</literal> starting
with NixOS 22.11. Enable it explicitly if you need to control
Snapserver remotely or connect streaming clients from other
hosts.
</para>
</listitem>
</itemizedlist>
</section>
</section>

View file

@ -30,6 +30,37 @@ To check the content of an ISO image, mount it like so:
# mount -o loop -t iso9660 ./result/iso/cd.iso /mnt/iso
```
## Additional drivers or firmware {#sec-building-image-drivers}
If you need additional (non-distributable) drivers or firmware in the
installer, you might want to extend these configurations.
For example, to build the GNOME graphical installer ISO, but with support for
certain WiFi adapters present in some MacBooks, you can create the following
file at `modules/installer/cd-dvd/installation-cd-graphical-gnome-macbook.nix`:
```nix
{ config, ... }:
{
imports = [ ./installation-cd-graphical-gnome.nix ];
boot.initrd.kernelModules = [ "wl" ];
boot.kernelModules = [ "kvm-intel" "wl" ];
boot.extraModulePackages = [ config.boot.kernelPackages.broadcom_sta ];
}
```
Then build it like in the example above:
```ShellSession
$ git clone https://github.com/NixOS/nixpkgs.git
$ cd nixpkgs/nixos
$ export NIXPKGS_ALLOW_UNFREE=1
$ nix-build -A config.system.build.isoImage -I nixos-config=modules/installer/cd-dvd/installation-cd-graphical-gnome-macbook.nix default.nix
```
## Technical Notes {#sec-building-image-tech-notes}
The config value enforcement is implemented via `mkImageMediaOverride = mkOverride 60;`

View file

@ -14,4 +14,5 @@
<xi:include href="../from_md/installation/installing.chapter.xml" />
<xi:include href="../from_md/installation/changing-config.chapter.xml" />
<xi:include href="../from_md/installation/upgrading.chapter.xml" />
<xi:include href="../from_md/installation/building-nixos.chapter.xml" />
</part>

View file

@ -17,6 +17,21 @@ In addition to numerous new and upgraded packages, this release has the followin
- GNOME has been upgraded to 42. Please take a look at their [Release Notes](https://release.gnome.org/42/) for details. Notably, it replaces gedit with GNOME Text Editor, GNOME Terminal with GNOME Console (formerly Kings Cross), and GNOME Screenshot with a tool built into the Shell.
- `stdenv.mkDerivation` now supports a self-referencing `finalAttrs:` parameter
containing the final `mkDerivation` arguments including overrides.
`drv.overrideAttrs` now supports two parameters `finalAttrs: previousAttrs:`.
This allows packaging configuration to be overridden in a consistent manner by
providing an alternative to `rec {}` syntax.
Additionally, `passthru` can now reference `finalAttrs.finalPackage` containing
the final package, including attributes such as the output paths and
`overrideAttrs`.
New language integrations can be simplified by overriding a "prototype"
package containing the language-specific logic. This removes the need for an
extra layer of overriding for the "generic builder" arguments, thus removing a
usability problem and source of error.
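A minimal sketch of the pattern (the package below is hypothetical):

```nix
stdenv.mkDerivation (finalAttrs: {
  pname = "example";
  version = "1.0";
  dontUnpack = true;
  # finalAttrs contains the final argument values, even after overrideAttrs,
  # which rec { } cannot see
  installPhase = "mkdir -p $out; echo ${finalAttrs.version} > $out/version";
  # passthru may refer to the final package itself
  passthru.finalName = finalAttrs.finalPackage.name;
})
```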
- PHP 8.1 is now available
- Mattermost has been updated to extended support release 6.3, as the previously packaged extended support release 5.37 is [reaching its end of life](https://docs.mattermost.com/upgrade/extended-support-release.html).
@ -27,12 +42,19 @@ In addition to numerous new and upgraded packages, this release has the followin
- Systemd has been upgraded to the version 250.
- Pulseaudio has been upgraded to version 15.0 and now optionally [supports additional Bluetooth audio codecs](https://www.freedesktop.org/wiki/Software/PulseAudio/Notes/15.0/#supportforldacandaptxbluetoothcodecsplussbcxqsbcwithhigher-qualityparameters) like aptX or LDAC, with codec switching support being available in `pavucontrol`. This feature is disabled by default but can be enabled by using `hardware.pulseaudio.package = pkgs.pulseaudioFull;`.
Existing 3rd party modules that provided similar functionality, like `pulseaudio-modules-bt` or `pulseaudio-hsphfpd`, are deprecated and have been removed.
- The new [`postgresqlTestHook`](https://nixos.org/manual/nixpkgs/stable/#sec-postgresqlTestHook) runs a PostgreSQL server for the duration of package checks.
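A usage sketch, assuming a hypothetical package whose tests need a running database; the hook starts a server before `checkPhase` and exports connection settings such as `PGHOST`:

```nix
stdenv.mkDerivation {
  pname = "example-with-db-tests";
  version = "1.0";
  src = ./.;
  checkInputs = [ postgresql postgresqlTestHook ];
  doCheck = true;
  # by the time checkPhase runs, the hook has started a local PostgreSQL server
  checkPhase = ''
    psql -d postgres -c "SELECT 1;"
  '';
}
```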
- [`kops`](https://kops.sigs.k8s.io) defaults to 1.22.4, which will enable [Instance Metadata Service Version 2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html) and require tokens on new clusters with Kubernetes 1.22. This will increase security by default, but may break some types of workloads. See the [release notes](https://kops.sigs.k8s.io/releases/1.22-notes/) for details.
- Module authors can use `mkRenamedOptionModuleWith` to automate the deprecation cycle without annoying out-of-tree module authors and their users.
- The default GHC version has been updated from 8.10.7 to 9.0.2. `pkgs.haskellPackages` and `pkgs.ghc` will now use this version by default.
- The GNOME and Plasma installation CDs now use `pkgs.calamares` and `pkgs.calamares-nixos-extensions` to allow users to easily install and set up NixOS with a GUI.
## New Services {#sec-release-22.05-new-services}
- [aesmd](https://github.com/intel/linux-sgx#install-the-intelr-sgx-psw), the Intel SGX Architectural Enclave Service Manager. Available as [services.aesmd](#opt-services.aesmd.enable).
@ -105,6 +127,8 @@ In addition to numerous new and upgraded packages, this release has the followin
- [headscale](https://github.com/juanfont/headscale), an Open Source implementation of the [Tailscale](https://tailscale.io) Control Server. Available as [services.headscale](options.html#opt-services.headscale.enable)
- [create_ap](https://github.com/lakinduakash/linux-wifi-hotspot), a module for creating wifi hotspots using the program linux-wifi-hotspot. Available as [services.create_ap](options.html#opt-services.create_ap.enable).
- [blocky](https://0xerr0r.github.io/blocky/), fast and lightweight DNS proxy as ad-blocker for local network with many features.
- [pacemaker](https://clusterlabs.org/pacemaker/) cluster resource manager
@ -147,9 +171,55 @@ In addition to numerous new and upgraded packages, this release has the followin
org-contrib, refer to the ones in `pkgs.emacsPackages.elpaPackages` and
`pkgs.emacsPackages.nongnuPackages` where the new versions will release.
- The configuration and state directories used by `nixos-containers` have been
moved from `/etc/containers` and `/var/lib/containers` to
`/etc/nixos-containers` and `/var/lib/nixos-containers`.
If you are changing `system.stateVersion` to `"22.05"` manually on an existing
system, you are responsible for migrating these directories yourself.
This is to improve compatibility with `libcontainer` based software such as Podman and Skopeo
which assume they have ownership over `/etc/containers`.
- For new installations `virtualisation.oci-containers.backend` is now set to `podman` by default.
If you still want to use Docker on systems where `system.stateVersion` is set to `"22.05"`, set `virtualisation.oci-containers.backend = "docker";`. Old systems with older `stateVersion`s stay with "docker".
- `security.klogd` was removed. Logging of kernel messages is handled
by systemd since Linux 3.5.
- `pkgs.ssmtp` has been dropped due to the program being unmaintained.
`pkgs.msmtp` can be used instead as a substitute `sendmail` implementation.
The corresponding options `services.ssmtp.*` have been removed as well.
`programs.msmtp.*` can be used instead for an equivalent setup. For example:
```nix
{
# Original ssmtp configuration:
services.ssmtp = {
enable = true;
useTLS = true;
useSTARTTLS = true;
hostName = "smtp.example:587";
authUser = "someone";
authPassFile = "/secrets/password.txt";
};
# Equivalent msmtp configuration:
programs.msmtp = {
enable = true;
accounts.default = {
tls = true;
tls_starttls = true;
auth = true;
host = "smtp.example";
port = 587;
user = "someone";
passwordeval = "cat /secrets/password.txt";
};
};
}
```
- `services.kubernetes.addons.dashboard` was removed due to it being an outdated version.
- `services.kubernetes.scheduler.{port,address}` now set `--secure-port` and `--bind-address` instead of `--port` and `--address`, since the former have been deprecated and are no longer functional in kubernetes>=1.23. Ensure that you are not relying on the insecure behaviour before upgrading.
@ -160,8 +230,12 @@ In addition to numerous new and upgraded packages, this release has the followin
(`services.pdns-recursor.dns.address`, `services.pdns-recursor.dns.allowFrom`);
- allow only local connections to the REST API server (`services.pdns-recursor.api.allowFrom`).
- In the ncdns module, the default value of `services.ncdns.address` has been changed to the IPv6 loopback address (`::1`).
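If you depended on the old behaviour, pin the listen address explicitly (the previous IPv4 loopback value is assumed here):

```nix
{
  services.ncdns.enable = true;
  # keep listening on IPv4 loopback instead of the new ::1 default
  services.ncdns.address = "127.0.0.1";
}
```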
- `openssh` has been updated to 8.9p1, changing the FIDO security key middleware interface.
- `git` no longer hardcodes the path to openssh's ssh binary to reduce the amount of rebuilds. If you are using git with ssh remotes and do not have an ssh binary in your environment, consider adding `openssh` to it or switching to `gitFull`.
- `services.k3s.enable` no longer implies `systemd.enableUnifiedCgroupHierarchy = false`, and will default to the 'systemd' cgroup driver when using `services.k3s.docker = true`.
This change may require a reboot to take effect, and k3s may not be able to run if the boot cgroup hierarchy does not match its configuration.
The previous behavior may be retained by explicitly setting `systemd.enableUnifiedCgroupHierarchy = false` in your configuration.
@ -190,6 +264,8 @@ In addition to numerous new and upgraded packages, this release has the followin
- `services.ipfs.extraFlags` is now escaped with `utils.escapeSystemdExecArgs`. If you rely on systemd interpolating `extraFlags` in the service `ExecStart`, this will no longer work.
- `hbase` version 0.98.24 has been removed. The package now defaults to version 2.4.11. Versions 1.7.1 and 3.0.0-alpha-2 are also available.
- `services.paperless-ng` was renamed to `services.paperless`. Accordingly, the `paperless-ng-manage` script (located in `dataDir`) was renamed to `paperless-manage`. `services.paperless` now uses `paperless-ngx`.
- The `matrix-synapse` service (`services.matrix-synapse`) has been converted to use the `settings` option defined in RFC42.
@ -286,6 +362,83 @@ In addition to numerous new and upgraded packages, this release has the followin
`media_store_path` was changed from `${dataDir}/media` to `${dataDir}/media_store` if `system.stateVersion` is at least `22.05`. Files will need to be manually moved to the new
location if the `stateVersion` is updated.
As of Synapse 1.58.0, the old groups/communities feature has been disabled by default. It will be completely removed with Synapse 1.61.0.
- The Keycloak package (`pkgs.keycloak`) has been switched from the
Wildfly version, which will soon be deprecated, to the Quarkus based
version. The Keycloak service (`services.keycloak`) has been updated
to accommodate the change and now differs from the previous version
in a few ways:
- `services.keycloak.extraConfig` has been removed in favor of the
new [settings-style](https://github.com/NixOS/rfcs/blob/master/rfcs/0042-config-option.md)
[`services.keycloak.settings`](#opt-services.keycloak.settings)
option. The available options correspond directly to parameters in
`conf/keycloak.conf`. Some of the most important parameters are
documented as suboptions, the rest can be found in the [All
configuration section of the Keycloak Server Installation and
Configuration
Guide](https://www.keycloak.org/server/all-config). While the new
configuration is much simpler and cleaner than the old JBoss CLI
one, this unfortunately means that there's no straightforward way
to convert an old configuration to the new format and some
settings may not even be available anymore.
- `services.keycloak.frontendUrl` was removed and the frontend URL
is now configured through the `hostname` family of settings in
[`services.keycloak.settings`](#opt-services.keycloak.settings)
instead. See the [Hostname section of the Keycloak Server
Installation and Configuration
Guide](https://www.keycloak.org/server/hostname) for more
details. Additionally, `/auth` was removed from the default
context path and needs to be added back in
[`services.keycloak.settings.http-relative-path`](#opt-services.keycloak.settings.http-relative-path)
if you want to keep compatibility with your current clients.
- `services.keycloak.bindAddress`,
`services.keycloak.forceBackendUrlToFrontendUrl`,
`services.keycloak.httpPort` and `services.keycloak.httpsPort`
have been removed in favor of their equivalent options in
[`services.keycloak.settings`](#opt-services.keycloak.settings). `httpPort`
and `httpsPort` have additionally had their types changed from
`str` to `port`.
The new names are as follows:
- `bindAddress`: [`services.keycloak.settings.http-host`](#opt-services.keycloak.settings.http-host)
- `forceBackendUrlToFrontendUrl`: [`services.keycloak.settings.hostname-strict-backchannel`](#opt-services.keycloak.settings.hostname-strict-backchannel)
- `httpPort`: [`services.keycloak.settings.http-port`](#opt-services.keycloak.settings.http-port)
- `httpsPort`: [`services.keycloak.settings.https-port`](#opt-services.keycloak.settings.https-port)
For example, when using a reverse proxy the migration could look
like this:
Before:
```nix
services.keycloak = {
enable = true;
httpPort = "8080";
frontendUrl = "https://keycloak.example.com/auth";
database.passwordFile = "/run/keys/db_password";
extraConfig = {
"subsystem=undertow"."server=default-server"."http-listener=default".proxy-address-forwarding = true;
};
};
```
After:
```nix
services.keycloak = {
enable = true;
settings = {
http-port = 8080;
hostname = "keycloak.example.com";
http-relative-path = "/auth";
proxy = "edge";
};
database.passwordFile = "/run/keys/db_password";
};
```
- The MoinMoin wiki engine (`services.moinmoin`) has been removed, because Python 2 is being retired from nixpkgs.
- Services in the `hadoop` module previously set `openFirewall` to true by default.
@ -327,6 +480,10 @@ In addition to numerous new and upgraded packages, this release has the followin
- `services.miniflux.adminCredentialFiles` is now required, instead of defaulting to `admin` and `password`.
- The `taskserver` module no longer implicitly opens ports in the firewall
configuration. This is now controlled through the option
`services.taskserver.openFirewall`.
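A minimal sketch of restoring the previous behaviour:

```nix
{
  services.taskserver.enable = true;
  # ports are no longer opened implicitly; request it explicitly
  services.taskserver.openFirewall = true;
}
```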
- The `autorestic` package has been upgraded from 1.3.0 to 1.5.0 which introduces breaking changes in config file, check [their migration guide](https://autorestic.vercel.app/migration/1.4_1.5) for more details.
- For `pkgs.python3.pkgs.ipython`, its direct dependency `pkgs.python3.pkgs.matplotlib-inline`
@ -378,8 +535,14 @@ In addition to numerous new and upgraded packages, this release has the followin
- `systemd-nspawn@.service` settings have been reverted to the default systemd behaviour. User namespaces are now activated by default. If you want to keep running nspawn containers without user namespaces you need to set `systemd.nspawn.<name>.execConfig.PrivateUsers = false`
- `systemd-shutdown` is now properly linked on shutdown to unmount all filesystems and device mapper devices cleanly. This can be disabled using `systemd.shutdownRamfs.enable`.
- The Tor SOCKS proxy is now actually disabled if `services.tor.client.enable` is set to `false` (the default). If you are using this functionality but didn't change the setting or set it to `false`, you now need to set it to `true`.
- `services.github-runner` has been hardened. Notably address families and
system calls have been restricted, which may adversely affect some kinds of
testing, e.g. using `AF_BLUETOOTH` to test bluetooth devices.
- The terraform 0.12 compatibility has been removed and the `terraform.withPlugins` and `terraform-providers.mkProvider` implementations simplified. Providers now need to be stored under
`$out/libexec/terraform-providers/<registry>/<owner>/<name>/<version>/<os>_<arch>/terraform-provider-<name>_v<version>` (which mkProvider does).
@ -400,6 +563,10 @@ In addition to numerous new and upgraded packages, this release has the followin
you should change the package you refer to. If you don't need them update your
commands from `otelcontribcol` to `otelcorecol` and enjoy a 7x smaller binary.
- `services.zookeeper` has a new option `jre` for specifying the JRE to start
zookeeper with. It defaults to the JRE that `pkgs.zookeeper` was wrapped with,
instead of `pkgs.jre`. This changes the JRE to `pkgs.jdk11_headless` by default.
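For example, to pin the JRE explicitly (here back to the previous default):

```nix
{
  services.zookeeper.enable = true;
  # restore the pre-22.05 behaviour of launching zookeeper with pkgs.jre
  services.zookeeper.jre = pkgs.jre;
}
```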
- `pkgs.pgadmin` now refers to `pkgs.pgadmin4`. `pgadmin3` has been removed.
- `pkgs.noto-fonts-cjk` is now deprecated in favor of `pkgs.noto-fonts-cjk-sans`
@ -477,6 +644,8 @@ In addition to numerous new and upgraded packages, this release has the followin
- `pkgs.cosmopolitan` no longer provides the `cosmoc` command. It has been moved to `pkgs.cosmoc`.
- `pkgs.graalvmXX-ce` packages no longer provide support for Python/Ruby/WASM, instead focusing only on Java and Native Image Support. If you need to add support back, please see the `pkgs.graalvmCEPackages.mkGraal` function to create your own customized version of GraalVM with support for what you need.
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
## Other Notable Changes {#sec-release-22.05-notable-changes}
@ -518,6 +687,10 @@ In addition to numerous new and upgraded packages, this release has the followin
- Support for older versions of hadoop have been added to the module
- Overriding and extending site XML files has been made easier
- The auto-upgrade service now accepts a persistent (default: true) parameter.
By default auto-upgrade will now run immediately if it would have been triggered at least
once during the time when the timer was inactive.
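A sketch of opting out of the catch-up behaviour, assuming the option lives under `system.autoUpgrade`:

```nix
{
  system.autoUpgrade.enable = true;
  # assumed option name; when false, activations missed while the timer
  # was inactive are not run on the next boot
  system.autoUpgrade.persistent = false;
}
```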
- If you are using Wayland you can choose to use the Ozone Wayland support
in Chrome and several Electron apps by setting the environment variable
`NIXOS_OZONE_WL=1` (for example via
@ -573,6 +746,17 @@ In addition to numerous new and upgraded packages, this release has the followin
- The `services.stubby` module was converted to a [settings-style](https://github.com/NixOS/rfcs/blob/master/rfcs/0042-config-option.md) configuration.
- The option
[services.xserver.desktopManager.runXdgAutostartIfNone](#opt-services.xserver.desktopManager.runXdgAutostartIfNone)
was added in order to automatically run XDG autostart files for sessions without a desktop manager.
This replaces helpers like the `dex` package.
- When setting [i18n.inputMethod.enabled](#opt-i18n.inputMethod.enabled) to `fcitx5`,
it no longer creates corresponding systemd user services.
It now relies on XDG autostart files to start and work properly in your desktop sessions.
If you are using only a window manager without a desktop manager, you need to enable
`services.xserver.desktopManager.runXdgAutostartIfNone` or use the `dex` package to make `fcitx5` work.
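A sketch of a window-manager-only setup combining the two options from the previous items:

```nix
{
  i18n.inputMethod.enabled = "fcitx5";
  # without a desktop manager, run XDG autostart entries so fcitx5 is started
  services.xserver.desktopManager.runXdgAutostartIfNone = true;
}
```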
- A new module was added for the Envoy reverse proxy, providing the options `services.envoy.enable` and `services.envoy.settings`.
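A minimal sketch; the attribute layout below follows Envoy's upstream bootstrap format and is only illustrative:

```nix
{
  services.envoy.enable = true;
  services.envoy.settings = {
    # admin interface only; real deployments add listeners and clusters
    admin.address.socket_address = {
      address = "127.0.0.1";
      port_value = 9901;
    };
    static_resources = {
      listeners = [ ];
      clusters = [ ];
    };
  };
}
```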
- The option `services.duplicati.dataDir` has been added to allow changing the location of duplicati's files.
@ -581,13 +765,21 @@ In addition to numerous new and upgraded packages, this release has the followin
- `nixos-generate-config` now puts the dhcp configuration in `hardware-configuration.nix` instead of `configuration.nix`.
- ORY Kratos was updated to version 0.8.3-alpha.1.pre.0, which introduces some breaking changes:
- ORY Kratos was updated to version 0.9.0-alpha.3, which introduces some breaking changes:
- All endpoints at the Admin API are now exposed at `/admin/`. For example, endpoint `https://kratos:4434/identities` is now exposed at `https://kratos:4434/admin/identities`
- Configuration key `selfservice.whitelisted_return_urls` has been renamed to `allowed_return_urls`
- The `password_identifier` form field of the password login strategy has been renamed to `identifier` to make compatibility with passwordless flows possible.
- Instead of having a global `default_schema_url` which developers used to update their schema, you now need to define the `default_schema_id` which must reference a schema ID in your config.
- Calling `/self-service/recovery` without flow ID or with an invalid flow ID while authenticated will now respond with an error instead of redirecting to the default page.
- If you are relying on the SQLite images, update your Docker Pull commands as follows:
- `docker pull oryd/kratos:{version}`
- Additionally, all passwords now have to be at least 8 characters long.
- For more details, see:
- [Release Notes for v0.8.1-alpha-1](https://github.com/ory/kratos/releases/tag/v0.8.1-alpha.1)
- [Release Notes for v0.8.2-alpha-1](https://github.com/ory/kratos/releases/tag/v0.8.2-alpha.1)
- [Release Notes for v0.9.0-alpha-1](https://github.com/ory/kratos/releases/tag/v0.9.0-alpha.1)
- [Release Notes for v0.9.0-alpha-3](https://github.com/ory/kratos/releases/tag/v0.9.0-alpha.3)
- `fetchFromSourcehut` now allows fetching repositories recursively
using `fetchgit` or `fetchhg` if the argument `fetchSubmodules`
@ -627,11 +819,16 @@ In addition to numerous new and upgraded packages, this release has the followin
- `security.pam.ussh` has been added, which allows authorizing PAM sessions based on SSH _certificates_ held within an SSH agent, using [pam-ussh](https://github.com/uber/pam-ussh).
- The `vscode-extensions.ionide.ionide-fsharp` package has been updated to 6.0.0 and now requires .NET 6.0.
- The `zrepl` package has been updated from 0.4.0 to 0.5:
- The RPC protocol version was bumped; all zrepl daemons in a setup must be updated and restarted before replication can resume.
- A bug involving encrypt-on-receive has been fixed. Read the [zrepl documentation](https://zrepl.github.io/configuration/sendrecvoptions.html#job-recv-options-placeholder) and check the output of `zfs get -r encryption,zrepl:placeholder PATH_TO_ROOTFS` on the receiver.
- The `polybar` package has been updated from 3.5.7 to 3.6.2. See [the changelog](https://github.com/polybar/polybar/releases/tag/3.6.0) for more details.
- Breaking changes include changes to escaping rules in configuration values, changes in behavior when encountering invalid tag names, and changes to inter-process-messaging (IPC).
- Renamed option `services.openssh.challengeResponseAuthentication` to `services.openssh.kbdInteractiveAuthentication`.
Reason is that the old name has been deprecated upstream.
Using the old option name will still work, but produce a warning.
@ -662,6 +859,11 @@ In addition to numerous new and upgraded packages, this release has the followin
- The `R` package now builds again on `aarch64-darwin` ([#158992](https://github.com/NixOS/nixpkgs/pull/158992)).
- The `nss` package was split into `nss_esr` and `nss_latest`, with `nss` being an alias for `nss_esr`. This was done to ease maintenance of `nss` and dependent high-profile packages like `firefox`.
- The Nextcloud module now supports creating a MySQL database automatically
with `services.nextcloud.database.createLocally` enabled.
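A sketch of enabling it (the host name is hypothetical):

```nix
{
  services.nextcloud = {
    enable = true;
    hostName = "cloud.example.org"; # hypothetical
    config.dbtype = "mysql";
    # provision a local MySQL/MariaDB database and user automatically
    database.createLocally = true;
  };
}
```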
- The `spark3` package has been updated from 3.1.2 to 3.2.1 ([#160075](https://github.com/NixOS/nixpkgs/pull/160075)):
- Testing has been enabled for `aarch64-linux` in addition to `x86_64-linux`.
@ -669,4 +871,8 @@ In addition to numerous new and upgraded packages, this release has the followin
- The `programs.nncp` options were added for generating host-global NNCP configuration.
- The option `services.snapserver.openFirewall` will no longer default to
`true` starting with NixOS 22.11. Enable it explicitly if you need to control
Snapserver remotely or connect streaming clients from other hosts.
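To keep remote access working after the default changes, set it explicitly:

```nix
{
  services.snapserver.enable = true;
  # will no longer default to true in 22.11; open the ports explicitly
  services.snapserver.openFirewall = true;
}
```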
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->

View file

@ -0,0 +1,322 @@
# Note: This is a private API, internal to NixOS. Its interface is subject
# to change without notice.
#
# The result of this builder is a single disk image, partitioned like this:
#
# * partition #1: a very small, 1MiB partition to leave room for Grub.
#
# * partition #2: boot, a partition formatted with FAT to be used for /boot.
# FAT is chosen to support EFI.
#
# * partition #3: nixos, a partition dedicated to a zpool.
#
# This single-disk approach does not satisfy ZFS's requirements for autoexpand,
# however automation can expand it anyway. For example, with
# `services.zfs.expandOnBoot`.
{ lib
, pkgs
, # The NixOS configuration to be installed onto the disk image.
config
, # size of the FAT partition, in megabytes.
bootSize ? 1024
, # The size of the root partition, in megabytes.
rootSize ? 2048
, # The name of the ZFS pool
rootPoolName ? "tank"
, # zpool properties
rootPoolProperties ? {
autoexpand = "on";
}
, # pool-wide filesystem properties
rootPoolFilesystemProperties ? {
acltype = "posixacl";
atime = "off";
compression = "on";
mountpoint = "legacy";
xattr = "sa";
}
, # datasets, with per-attribute options:
# mount: (optional) mount point in the VM
# properties: (optional) ZFS properties on the dataset, like filesystemProperties
# Notes:
# 1. datasets will be created from shorter to longer names as a simple topo-sort
# 2. you should define the root dataset's mount for `/`
datasets ? { }
, # The files and directories to be placed in the target file system.
# This is a list of attribute sets {source, target} where `source'
# is the file system object (regular file or directory) to be
# grafted in the file system at path `target'.
contents ? [ ]
, # The initial NixOS configuration file to be copied to
# /etc/nixos/configuration.nix. This configuration will be embedded
# inside a configuration which includes the described ZFS fileSystems.
configFile ? null
, # Shell code executed after the VM has finished.
postVM ? ""
, name ? "nixos-disk-image"
, # Disk image format, one of qcow2, qcow2-compressed, vdi, vpc, raw.
format ? "raw"
, # Include a copy of Nixpkgs in the disk image
includeChannel ? true
}:
let
formatOpt = if format == "qcow2-compressed" then "qcow2" else format;
compress = lib.optionalString (format == "qcow2-compressed") "-c";
filenameSuffix = "." + {
qcow2 = "qcow2";
vdi = "vdi";
vpc = "vhd";
raw = "img";
}.${formatOpt} or formatOpt;
rootFilename = "nixos.root${filenameSuffix}";
# FIXME: merge with channel.nix / make-channel.nix.
channelSources =
let
nixpkgs = lib.cleanSource pkgs.path;
in
pkgs.runCommand "nixos-${config.system.nixos.version}" { } ''
mkdir -p $out
cp -prd ${nixpkgs.outPath} $out/nixos
chmod -R u+w $out/nixos
if [ ! -e $out/nixos/nixpkgs ]; then
ln -s . $out/nixos/nixpkgs
fi
rm -rf $out/nixos/.git
echo -n ${config.system.nixos.versionSuffix} > $out/nixos/.version-suffix
'';
closureInfo = pkgs.closureInfo {
rootPaths = [ config.system.build.toplevel ]
++ (lib.optional includeChannel channelSources);
};
modulesTree = pkgs.aggregateModules
(with config.boot.kernelPackages; [ kernel zfs ]);
tools = lib.makeBinPath (
with pkgs; [
config.system.build.nixos-enter
config.system.build.nixos-install
dosfstools
e2fsprogs
gptfdisk
nix
parted
utillinux
zfs
]
);
hasDefinedMount = disk: ((disk.mount or null) != null);
stringifyProperties = prefix: properties: lib.concatStringsSep " \\\n" (
lib.mapAttrsToList
(
property: value: "${prefix} ${lib.escapeShellArg property}=${lib.escapeShellArg value}"
)
properties
);
featuresToProperties = features:
lib.listToAttrs
(builtins.map
(feature: {
name = "feature@${feature}";
value = "enabled";
})
features);
createDatasets =
let
datasetlist = lib.mapAttrsToList lib.nameValuePair datasets;
sorted = lib.sort (left: right: (lib.stringLength left.name) < (lib.stringLength right.name)) datasetlist;
cmd = { name, value }:
let
properties = stringifyProperties "-o" (value.properties or { });
in
"zfs create -p ${properties} ${name}";
in
lib.concatMapStringsSep "\n" cmd sorted;
mountDatasets =
let
datasetlist = lib.mapAttrsToList lib.nameValuePair datasets;
mounts = lib.filter ({ value, ... }: hasDefinedMount value) datasetlist;
sorted = lib.sort (left: right: (lib.stringLength left.value.mount) < (lib.stringLength right.value.mount)) mounts;
cmd = { name, value }:
''
mkdir -p /mnt${lib.escapeShellArg value.mount}
mount -t zfs ${name} /mnt${lib.escapeShellArg value.mount}
'';
in
lib.concatMapStringsSep "\n" cmd sorted;
unmountDatasets =
let
datasetlist = lib.mapAttrsToList lib.nameValuePair datasets;
mounts = lib.filter ({ value, ... }: hasDefinedMount value) datasetlist;
sorted = lib.sort (left: right: (lib.stringLength left.value.mount) > (lib.stringLength right.value.mount)) mounts;
cmd = { name, value }:
''
umount /mnt${lib.escapeShellArg value.mount}
'';
in
lib.concatMapStringsSep "\n" cmd sorted;
fileSystemsCfgFile =
let
mountable = lib.filterAttrs (_: value: hasDefinedMount value) datasets;
in
pkgs.runCommand "filesystem-config.nix"
{
buildInputs = with pkgs; [ jq nixpkgs-fmt ];
filesystems = builtins.toJSON {
fileSystems = lib.mapAttrs'
(
dataset: attrs:
{
name = attrs.mount;
value = {
fsType = "zfs";
device = "${dataset}";
};
}
)
mountable;
};
passAsFile = [ "filesystems" ];
} ''
(
echo "builtins.fromJSON '''"
jq . < "$filesystemsPath"
echo "'''"
) > $out
nixpkgs-fmt $out
'';
mergedConfig =
if configFile == null
then fileSystemsCfgFile
else
pkgs.runCommand "configuration.nix"
{
buildInputs = with pkgs; [ nixpkgs-fmt ];
}
''
(
echo '{ imports = ['
printf "(%s)\n" "$(cat ${fileSystemsCfgFile})";
printf "(%s)\n" "$(cat ${configFile})";
echo ']; }'
) > $out
nixpkgs-fmt $out
'';
image = (
pkgs.vmTools.override {
rootModules =
[ "zfs" "9p" "9pnet_virtio" "virtio_pci" "virtio_blk" ] ++
(pkgs.lib.optional pkgs.stdenv.hostPlatform.isx86 "rtc_cmos");
kernel = modulesTree;
}
).runInLinuxVM (
pkgs.runCommand name
{
memSize = 1024;
QEMU_OPTS = "-drive file=$rootDiskImage,if=virtio,cache=unsafe,werror=report";
preVM = ''
PATH=$PATH:${pkgs.qemu_kvm}/bin
mkdir $out
rootDiskImage=root.raw
qemu-img create -f raw $rootDiskImage ${toString (bootSize + rootSize)}M
'';
postVM = ''
${if formatOpt == "raw" then ''
mv $rootDiskImage $out/${rootFilename}
'' else ''
${pkgs.qemu}/bin/qemu-img convert -f raw -O ${formatOpt} ${compress} $rootDiskImage $out/${rootFilename}
''}
rootDiskImage=$out/${rootFilename}
set -x
${postVM}
'';
} ''
export PATH=${tools}:$PATH
set -x
cp -sv /dev/vda /dev/sda
cp -sv /dev/vda /dev/xvda
parted --script /dev/vda -- \
mklabel gpt \
mkpart no-fs 1MiB 2MiB \
set 1 bios_grub on \
align-check optimal 1 \
mkpart primary fat32 2MiB ${toString bootSize}MiB \
align-check optimal 2 \
mkpart primary fat32 ${toString bootSize}MiB -1MiB \
align-check optimal 3 \
print
sfdisk --dump /dev/vda
zpool create \
${stringifyProperties " -o" rootPoolProperties} \
${stringifyProperties " -O" rootPoolFilesystemProperties} \
${rootPoolName} /dev/vda3
parted --script /dev/vda -- print
${createDatasets}
${mountDatasets}
mkdir -p /mnt/boot
mkfs.vfat -n ESP /dev/vda2
mount /dev/vda2 /mnt/boot
mount
# Install a configuration.nix
mkdir -p /mnt/etc/nixos
# `cat` so it is mutable on the fs
cat ${mergedConfig} > /mnt/etc/nixos/configuration.nix
export NIX_STATE_DIR=$TMPDIR/state
nix-store --load-db < ${closureInfo}/registration
nixos-install \
--root /mnt \
--no-root-passwd \
--system ${config.system.build.toplevel} \
--substituters "" \
${lib.optionalString includeChannel ''--channel ${channelSources}''}
df -h
umount /mnt/boot
${unmountDatasets}
zpool export ${rootPoolName}
''
);
in
image

View file

@ -1,4 +1,4 @@
{ lib, systemdUtils }:
{ lib, systemdUtils, pkgs }:
with systemdUtils.lib;
with systemdUtils.unitOptions;
@ -34,4 +34,36 @@ rec {
automounts = with types; listOf (submodule [ stage2AutomountOptions unitConfig automountConfig ]);
initrdAutomounts = with types; attrsOf (submodule [ stage1AutomountOptions unitConfig automountConfig ]);
initrdContents = types.attrsOf (types.submodule ({ config, options, name, ... }: {
options = {
enable = mkEnableOption "copying of this file and symlinking it" // { default = true; };
target = mkOption {
type = types.path;
description = ''
Path of the symlink.
'';
default = name;
};
text = mkOption {
default = null;
type = types.nullOr types.lines;
description = "Text of the file.";
};
source = mkOption {
type = types.path;
description = "Path of the source file.";
};
};
config = {
source = mkIf (config.text != null) (
let name' = "initrd-" + baseNameOf name;
in mkDerivedConfig options.text (pkgs.writeText name')
);
};
}));
}

View file

@ -526,10 +526,17 @@ class Machine:
self.run_callbacks()
self.connect()
if timeout is not None:
command = "timeout {} sh -c {}".format(timeout, shlex.quote(command))
# Always run command with shell opts
command = f"set -euo pipefail; {command}"
timeout_str = ""
if timeout is not None:
timeout_str = f"timeout {timeout}"
out_command = (
f"{timeout_str} sh -c {shlex.quote(command)} | (base64 --wrap 0; echo)\n"
)
out_command = f"( set -euo pipefail; {command} ) | (base64 --wrap 0; echo)\n"
assert self.shell
self.shell.send(out_command.encode())

View file

@ -213,6 +213,6 @@ rec {
systemdUtils = {
lib = import ./systemd-lib.nix { inherit lib config pkgs; };
unitOptions = import ./systemd-unit-options.nix { inherit lib systemdUtils; };
types = import ./systemd-types.nix { inherit lib systemdUtils; };
types = import ./systemd-types.nix { inherit lib systemdUtils pkgs; };
};
}

View file

@ -73,7 +73,7 @@ in {
}
'';
zfsBuilder = import ../../../lib/make-zfs-image.nix {
zfsBuilder = import ../../../lib/make-multi-disk-zfs-image.nix {
inherit lib config configFile;
inherit (cfg) contents format name;
pkgs = import ../../../.. { inherit (pkgs) system; }; # ensure we use the regular qemu-kvm package

View file

@ -0,0 +1,101 @@
# nix-build '<nixpkgs/nixos>' -A config.system.build.openstackImage --arg configuration "{ imports = [ ./nixos/maintainers/scripts/openstack/openstack-image.nix ]; }"
{ config, lib, pkgs, ... }:
let
inherit (lib) mkOption types;
copyChannel = true;
cfg = config.openstackImage;
imageBootMode = if config.openstack.efi then "uefi" else "legacy-bios";
in
{
imports = [
../../../modules/virtualisation/openstack-config.nix
] ++ (lib.optional copyChannel ../../../modules/installer/cd-dvd/channel.nix);
options.openstackImage = {
name = mkOption {
type = types.str;
description = "The name of the generated derivation";
default = "nixos-openstack-image-${config.system.nixos.label}-${pkgs.stdenv.hostPlatform.system}";
};
sizeMB = mkOption {
type = types.int;
default = 8192;
description = "The size in MB of the image";
};
format = mkOption {
type = types.enum [ "raw" "qcow2" ];
default = "qcow2";
description = "The image format to output";
};
};
config = {
documentation.enable = copyChannel;
openstack = {
efi = true;
zfs = {
enable = true;
datasets = {
"tank/system/root".mount = "/";
"tank/system/var".mount = "/var";
"tank/local/nix".mount = "/nix";
"tank/user/home".mount = "/home";
};
};
};
system.build.openstackImage = import ../../../lib/make-single-disk-zfs-image.nix {
inherit lib config;
inherit (cfg) contents format name;
pkgs = import ../../../.. { inherit (pkgs) system; }; # ensure we use the regular qemu-kvm package
configFile = pkgs.writeText "configuration.nix"
''
{ modulesPath, ... }: {
imports = [ "''${modulesPath}/virtualisation/openstack-config.nix" ];
openstack.zfs.enable = true;
}
'';
includeChannel = copyChannel;
bootSize = 1000;
rootSize = cfg.sizeMB;
rootPoolProperties = {
ashift = 12;
autoexpand = "on";
};
datasets = config.openstack.zfs.datasets;
postVM = ''
extension=''${rootDiskImage##*.}
friendlyName=$out/${cfg.name}
rootDisk="$friendlyName.root.$extension"
mv "$rootDiskImage" "$rootDisk"
mkdir -p $out/nix-support
echo "file ${cfg.format} $rootDisk" >> $out/nix-support/hydra-build-products
${pkgs.jq}/bin/jq -n \
--arg system_label ${lib.escapeShellArg config.system.nixos.label} \
--arg system ${lib.escapeShellArg pkgs.stdenv.hostPlatform.system} \
--arg root_logical_bytes "$(${pkgs.qemu}/bin/qemu-img info --output json "$rootDisk" | ${pkgs.jq}/bin/jq '."virtual-size"')" \
--arg boot_mode "${imageBootMode}" \
--arg root "$rootDisk" \
'{}
| .label = $system_label
| .boot_mode = $boot_mode
| .system = $system
| .disks.root.logical_bytes = $root_logical_bytes
| .disks.root.file = $root
' > $out/nix-support/image-info.json
'';
};
};
}

View file

@ -1,17 +1,18 @@
# nix-build '<nixpkgs/nixos>' -A config.system.build.openstackImage --arg configuration "{ imports = [ ./nixos/maintainers/scripts/openstack/openstack-image.nix ]; }"
{ config, lib, pkgs, ... }:
with lib;
let
copyChannel = true;
in
{
imports =
[ ../../../modules/installer/cd-dvd/channel.nix
../../../modules/virtualisation/openstack-config.nix
];
imports = [
../../../modules/virtualisation/openstack-config.nix
] ++ (lib.optional copyChannel ../../../modules/installer/cd-dvd/channel.nix);
documentation.enable = copyChannel;
system.build.openstackImage = import ../../../lib/make-disk-image.nix {
inherit lib config;
inherit lib config copyChannel;
additionalSpace = "1024M";
pkgs = import ../../../.. { inherit (pkgs) system; }; # ensure we use the regular qemu-kvm package
format = "qcow2";

View file

@@ -149,8 +149,11 @@ in
'');
boot.initrd.systemd.contents = {
"/etc/kbd".source = "${consoleEnv config.boot.initrd.systemd.package.kbd}/share";
"/etc/vconsole.conf".source = vconsoleConf;
# Add everything if we want full console setup...
"/etc/kbd" = lib.mkIf cfg.earlySetup { source = "${consoleEnv config.boot.initrd.systemd.package.kbd}/share"; };
# ...but only the keymaps if we don't
"/etc/kbd/keymaps" = lib.mkIf (!cfg.earlySetup) { source = "${consoleEnv config.boot.initrd.systemd.package.kbd}/share/keymaps"; };
};
boot.initrd.systemd.storePaths = [
"${config.boot.initrd.systemd.package}/lib/systemd/systemd-vconsole-setup"
@@ -180,7 +183,7 @@ in
];
})
(mkIf cfg.earlySetup {
(mkIf (cfg.earlySetup && !config.boot.initrd.systemd.enable) {
boot.initrd.extraUtilsCommands = ''
mkdir -p $out/share/consolefonts
${if substring 0 1 cfg.font == "/" then ''
@@ -194,10 +197,6 @@ in
cp -L $font $out/share/consolefonts/font.psf
fi
'';
assertions = [{
assertion = !config.boot.initrd.systemd.enable;
message = "console.earlySetup is implied by systemd stage 1";
}];
})
]))
];
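
For context, a hedged sketch of the user-facing configuration this hunk affects; the option names come from the console module, and the font and keymap values are only examples:

```nix
# With systemd in the initrd, the change above provides the console files via
# boot.initrd.systemd.contents and skips the legacy extraUtilsCommands branch.
{
  boot.initrd.systemd.enable = true;
  console = {
    earlySetup = true;
    font = "Lat2-Terminus16";  # example font
    keyMap = "us";             # example keymap
  };
}
```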

View file

@@ -95,11 +95,14 @@ with lib;
config = {
assertions = [
{
# Prevent users from disabling nscd, with nssModules being set.
# If disabling nscd is really necessary, it's still possible to opt out
# by forcing config.system.nssModules to [].
assertion = config.system.nssModules.path != "" -> config.services.nscd.enable;
message = "Loading NSS modules from system.nssModules (${config.system.nssModules.path}), requires services.nscd.enable being set to true.";
message = ''
Loading NSS modules from system.nssModules (${config.system.nssModules.path})
requires services.nscd.enable to be set to true.
If disabling nscd is really necessary, it is possible to disable loading NSS modules
by setting `system.nssModules = lib.mkForce [];` in your configuration.nix.
'';
}
];
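
The escape hatch described in the new message, as a minimal sketch; only sensible when no extra NSS modules are actually needed:

```nix
{ lib, ... }: {
  # Opt out of nscd and drop all NSS modules, as the assertion message
  # above suggests.
  services.nscd.enable = false;
  system.nssModules = lib.mkForce [ ];
}
```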

View file

@@ -27,7 +27,8 @@ in {
};
hardware.enableRedistributableFirmware = mkOption {
default = false;
default = config.hardware.enableAllFirmware;
defaultText = lib.literalExpression "config.hardware.enableAllFirmware";
type = types.bool;
description = ''
Turn on this option if you want to enable all the firmware with a license allowing redistribution.
@@ -71,7 +72,7 @@ in {
})
(mkIf cfg.enableAllFirmware {
assertions = [{
assertion = !cfg.enableAllFirmware || (config.nixpkgs.config.allowUnfree or false);
assertion = !cfg.enableAllFirmware || config.nixpkgs.config.allowUnfree;
message = ''
the list of hardware.enableAllFirmware contains non-redistributable licensed firmware files.
This requires nixpkgs.config.allowUnfree to be true.
@@ -84,7 +85,10 @@ in {
b43Firmware_6_30_163_46
b43FirmwareCutter
xow_dongle-firmware
] ++ optional pkgs.stdenv.hostPlatform.isx86 facetimehd-firmware;
] ++ optionals pkgs.stdenv.hostPlatform.isx86 [
facetimehd-calibration
facetimehd-firmware
];
})
(mkIf cfg.wirelessRegulatoryDatabase {
hardware.firmware = [ pkgs.wireless-regdb ];
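
A short, hedged example of the options touched above, as they might appear in a configuration.nix:

```nix
{
  # The redistributable set covers most hardware and needs no unfree opt-in;
  # per the change above it is now also implied by enableAllFirmware.
  hardware.enableRedistributableFirmware = true;

  # Pulling in everything additionally requires allowUnfree, as the
  # assertion shown above enforces.
  # hardware.enableAllFirmware = true;
  # nixpkgs.config.allowUnfree = true;
}
```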

View file

@@ -8,7 +8,10 @@ let
version = "2.40-13.0";
src = pkgs.fetchurl {
url = "https://downloads.linux.hpe.com/SDR/downloads/MCP/Ubuntu/pool/non-free/${pname}-${version}_amd64.deb";
urls = [
"https://downloads.linux.hpe.com/SDR/downloads/MCP/Ubuntu/pool/non-free/${pname}-${version}_amd64.deb"
"http://apt.netangels.net/pool/main/h/hpssacli/${pname}-${version}_amd64.deb"
];
sha256 = "11w7fwk93lmfw0yya4jpjwdmgjimqxx6412sqa166g1pz4jil4sw";
};
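
The mirror-fallback pattern used above, sketched with placeholder URLs and a fake hash (both are assumptions, not real locations):

```nix
# fetchurl accepts a `urls` list and tries each entry in order until one
# yields content matching the fixed-output hash.
{ pkgs, lib }:
pkgs.fetchurl {
  urls = [
    "https://primary.example.org/foo-1.0.tar.gz"
    "https://mirror.example.net/foo-1.0.tar.gz"
  ];
  sha256 = lib.fakeSha256;  # replace with the real hash after the first build failure
}
```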

View file

@@ -51,9 +51,10 @@ in
(isYes "KALLSYMS_ALL")
];
boot.initrd.extraUdevRulesCommands = ''
boot.initrd.extraUdevRulesCommands = mkIf (!config.boot.initrd.systemd.enable) ''
cp -v ${package}/etc/udev/rules.d/*.rules $out/
'';
boot.initrd.services.udev.packages = [ package ];
environment.systemPackages =
[ package.vulkan ] ++

View file

@@ -24,6 +24,7 @@ let
primeEnabled = syncCfg.enable || offloadCfg.enable;
nvidiaPersistencedEnabled = cfg.nvidiaPersistenced;
nvidiaSettings = cfg.nvidiaSettings;
busIDType = types.strMatching "([[:print:]]+\:[0-9]{1,3}\:[0-9]{1,2}\:[0-9])?";
in
{
@@ -68,7 +69,7 @@ in
};
hardware.nvidia.prime.nvidiaBusId = mkOption {
type = types.str;
type = busIDType;
default = "";
example = "PCI:1:0:0";
description = ''
@@ -78,7 +79,7 @@ in
};
hardware.nvidia.prime.intelBusId = mkOption {
type = types.str;
type = busIDType;
default = "";
example = "PCI:0:2:0";
description = ''
@@ -88,7 +89,7 @@ in
};
hardware.nvidia.prime.amdgpuBusId = mkOption {
type = types.str;
type = busIDType;
default = "";
example = "PCI:4:0:0";
description = ''
@@ -363,11 +364,12 @@ in
services.udev.extraRules =
''
# Create /dev/nvidia-uvm when the nvidia-uvm module is loaded.
KERNEL=="nvidia", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidiactl c $$(grep nvidia-frontend /proc/devices | cut -d \ -f 1) 255'"
KERNEL=="nvidia_modeset", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidia-modeset c $$(grep nvidia-frontend /proc/devices | cut -d \ -f 1) 254'"
KERNEL=="card*", SUBSYSTEM=="drm", DRIVERS=="nvidia", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidia%n c $$(grep nvidia-frontend /proc/devices | cut -d \ -f 1) %n'"
KERNEL=="nvidia", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidiactl c 195 255'"
KERNEL=="nvidia_modeset", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidia-modeset c 195 254'"
KERNEL=="card*", SUBSYSTEM=="drm", DRIVERS=="nvidia", PROGRAM="${pkgs.gnugrep}/bin/grep 'Device Minor:' /proc/driver/nvidia/gpus/%b/information", \
RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidia%c{3} c 195 %c{3}'"
KERNEL=="nvidia_uvm", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidia-uvm c $$(grep nvidia-uvm /proc/devices | cut -d \ -f 1) 0'"
KERNEL=="nvidia_uvm", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidia-uvm-tools c $$(grep nvidia-uvm /proc/devices | cut -d \ -f 1) 0'"
KERNEL=="nvidia_uvm", RUN+="${pkgs.runtimeShell} -c 'mknod -m 666 /dev/nvidia-uvm-tools c $$(grep nvidia-uvm /proc/devices | cut -d \ -f 1) 1'"
'' + optionalString cfg.powerManagement.finegrained ''
# Remove NVIDIA USB xHCI Host Controller devices, if present
ACTION=="add", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x0c0330", ATTR{remove}="1"
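
For reference, a hedged sketch of a PRIME setup whose bus IDs must satisfy the stricter busIDType introduced above; the IDs shown are the module's own examples, not values for any particular machine:

```nix
{
  hardware.nvidia.prime = {
    offload.enable = true;
    intelBusId = "PCI:0:2:0";   # must match the busIDType pattern
    nvidiaBusId = "PCI:1:0:0";
  };
}
```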

View file

@@ -14,6 +14,18 @@ in
options.hardware.facetimehd.enable = mkEnableOption "facetimehd kernel module";
options.hardware.facetimehd.withCalibration = mkOption {
default = false;
example = true;
type = types.bool;
description = ''
Whether to include sensor calibration files for facetimehd.
This makes colors look much better but is experimental, see
<link xlink:href="https://github.com/patjak/facetimehd/wiki/Extracting-the-sensor-calibration-files"/>
for details.
'';
};
config = mkIf cfg.enable {
boot.kernelModules = [ "facetimehd" ];
@@ -22,7 +34,8 @@ in
boot.extraModulePackages = [ kernelPackages.facetimehd ];
hardware.firmware = [ pkgs.facetimehd-firmware ];
hardware.firmware = [ pkgs.facetimehd-firmware ]
++ optional cfg.withCalibration pkgs.facetimehd-calibration;
# unload module during suspend/hibernate as it crashes the whole system
powerManagement.powerDownCommands = ''
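
A minimal sketch of enabling the new calibration option from a configuration.nix:

```nix
{
  hardware.facetimehd.enable = true;
  # Experimental per the description above; improves color reproduction.
  hardware.facetimehd.withCalibration = true;
}
```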

View file

@@ -5,7 +5,9 @@ with lib;
let
im = config.i18n.inputMethod;
cfg = im.fcitx5;
fcitx5Package = pkgs.fcitx5-with-addons.override { inherit (cfg) addons; };
addons = cfg.addons ++ optional cfg.enableRimeData pkgs.rime-data;
fcitx5Package = pkgs.fcitx5-with-addons.override { inherit addons; };
whetherRimeDataDir = any (p: p.pname == "fcitx5-rime") cfg.addons;
in {
options = {
i18n.inputMethod.fcitx5 = {
@@ -17,22 +19,29 @@ in {
Enabled Fcitx5 addons.
'';
};
enableRimeData = mkEnableOption "default rime-data with fcitx5-rime";
};
};
config = mkIf (im.enabled == "fcitx5") {
i18n.inputMethod.package = fcitx5Package;
environment.variables = {
GTK_IM_MODULE = "fcitx";
QT_IM_MODULE = "fcitx";
XMODIFIERS = "@im=fcitx";
};
environment = mkMerge [{
variables = {
GTK_IM_MODULE = "fcitx";
QT_IM_MODULE = "fcitx";
XMODIFIERS = "@im=fcitx";
};
}
(mkIf whetherRimeDataDir {
pathsToLink = [
"/share/rime-data"
];
systemd.user.services.fcitx5-daemon = {
enable = true;
script = "${fcitx5Package}/bin/fcitx5";
wantedBy = [ "graphical-session.target" ];
};
variables = {
NIX_RIME_DATA_DIR = "/run/current-system/sw/share/rime-data";
};
})];
};
}
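
A hedged example of the new switch in use; the addon list is illustrative:

```nix
{ pkgs, ... }: {
  i18n.inputMethod = {
    enabled = "fcitx5";
    fcitx5.addons = with pkgs; [ fcitx5-rime ];
    # Appends pkgs.rime-data to the addon list, as implemented above.
    fcitx5.enableRimeData = true;
  };
}
```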

View file

@@ -39,7 +39,8 @@ in
echo "unpacking the NixOS/Nixpkgs sources..."
mkdir -p /nix/var/nix/profiles/per-user/root
${config.nix.package.out}/bin/nix-env -p /nix/var/nix/profiles/per-user/root/channels \
-i ${channelSources} --quiet --option build-use-substitutes false
-i ${channelSources} --quiet --option build-use-substitutes false \
${optionalString config.boot.initrd.systemd.enable "--option sandbox false"} # There's an issue with pivot_root
mkdir -m 0700 -p /root/.nix-defexpr
ln -s /nix/var/nix/profiles/per-user/root/channels /root/.nix-defexpr/channels
mkdir -m 0755 -p /var/lib/nixos

View file

@@ -35,22 +35,28 @@ with lib;
# Enable sound in graphical iso's.
hardware.pulseaudio.enable = true;
environment.systemPackages = [
# Spice guest additions
services.spice-vdagentd.enable = true;
# Enable plymouth
boot.plymouth.enable = true;
environment.defaultPackages = with pkgs; [
# Include gparted for partitioning disks.
pkgs.gparted
gparted
# Include some editors.
pkgs.vim
pkgs.bvi # binary editor
pkgs.joe
vim
nano
# Include some version control tools.
pkgs.git
git
rsync
# Firefox for reading the manual.
pkgs.firefox
firefox
pkgs.glxinfo
glxinfo
];
}
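
Because these tools now arrive via environment.defaultPackages rather than systemPackages, a derived configuration can drop them wholesale; a minimal sketch, assuming one actually wants a leaner image:

```nix
{ lib, ... }: {
  # defaultPackages is intended to be overridable, unlike systemPackages.
  environment.defaultPackages = lib.mkForce [ ];
}
```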

View file

@@ -0,0 +1,59 @@
# This module defines a NixOS installation CD that contains GNOME.
{ pkgs, ... }:
{
imports = [ ./installation-cd-graphical-calamares.nix ];
isoImage.edition = "gnome";
services.xserver.desktopManager.gnome = {
# Add Firefox and other tools useful for installation to the launcher
favoriteAppsOverride = ''
[org.gnome.shell]
favorite-apps=[ 'firefox.desktop', 'nixos-manual.desktop', 'org.gnome.Console.desktop', 'org.gnome.Nautilus.desktop', 'gparted.desktop', 'io.calamares.calamares.desktop' ]
'';
# Override GNOME defaults to disable GNOME tour and disable suspend
extraGSettingsOverrides = ''
[org.gnome.shell]
welcome-dialog-last-shown-version='9999999999'
[org.gnome.settings-daemon.plugins.power]
sleep-inactive-ac-type='nothing'
sleep-inactive-battery-type='nothing'
'';
extraGSettingsOverridePackages = [ pkgs.gnome.gnome-settings-daemon ];
enable = true;
};
# Theme calamares with GNOME theme
qt5 = {
enable = true;
platformTheme = "gnome";
};
# Fix scaling for calamares on wayland
environment.variables = {
QT_QPA_PLATFORM = "$([[ $XDG_SESSION_TYPE = \"wayland\" ]] && echo \"wayland\")";
};
services.xserver.displayManager = {
gdm = {
enable = true;
# autoSuspend makes the machine automatically suspend after inactivity.
# It's possible someone could try to ssh into the machine and run into
# issues because it's suspended.
# See:
# * https://github.com/NixOS/nixpkgs/pull/63790
# * https://gitlab.gnome.org/GNOME/gnome-control-center/issues/22
autoSuspend = false;
};
autoLogin = {
enable = true;
user = "nixos";
};
};
}

View file

@@ -0,0 +1,49 @@
# This module defines a NixOS installation CD that contains X11 and
# Plasma 5.
{ pkgs, ... }:
{
imports = [ ./installation-cd-graphical-calamares.nix ];
isoImage.edition = "plasma5";
services.xserver = {
desktopManager.plasma5 = {
enable = true;
};
# Automatically login as nixos.
displayManager = {
sddm.enable = true;
autoLogin = {
enable = true;
user = "nixos";
};
};
};
environment.systemPackages = with pkgs; [
# Graphical text editor
kate
];
system.activationScripts.installerDesktop = let
# Comes from documentation.nix when xserver and nixos.enable are true.
manualDesktopFile = "/run/current-system/sw/share/applications/nixos-manual.desktop";
homeDir = "/home/nixos/";
desktopDir = homeDir + "Desktop/";
in ''
mkdir -p ${desktopDir}
chown nixos ${homeDir} ${desktopDir}
ln -sfT ${manualDesktopFile} ${desktopDir + "nixos-manual.desktop"}
ln -sfT ${pkgs.gparted}/share/applications/gparted.desktop ${desktopDir + "gparted.desktop"}
ln -sfT ${pkgs.konsole}/share/applications/org.kde.konsole.desktop ${desktopDir + "org.kde.konsole.desktop"}
ln -sfT ${pkgs.calamares-nixos}/share/applications/io.calamares.calamares.desktop ${desktopDir + "io.calamares.calamares.desktop"}
'';
}

View file

@@ -0,0 +1,20 @@
# This module adds the calamares installer to the basic graphical NixOS
# installation CD.
{ pkgs, ... }:
let
calamares-nixos-autostart = pkgs.makeAutostartItem { name = "io.calamares.calamares"; package = pkgs.calamares-nixos; };
in
{
imports = [ ./installation-cd-graphical-base.nix ];
environment.systemPackages = with pkgs; [
# Calamares for graphical installation
libsForQt5.kpmcore
calamares-nixos
calamares-nixos-autostart
calamares-nixos-extensions
# Needed for calamares QML module packagechooserq
libsForQt5.full
];
}

View file

@@ -1,8 +1,6 @@
# This module defines a NixOS installation CD that contains GNOME.
{ lib, ... }:
with lib;
{ ... }:
{
imports = [ ./installation-cd-graphical-base.nix ];

View file

@@ -1,9 +1,7 @@
# This module defines a NixOS installation CD that contains X11 and
# Plasma 5.
{ config, lib, pkgs, ... }:
with lib;
{ pkgs, ... }:
{
imports = [ ./installation-cd-graphical-base.nix ];

View file

@@ -87,19 +87,19 @@ in
boot.initrd.availableKernelModules =
[ "mvsdio" "reiserfs" "ext3" "ums-cypress" "rtc_mv" "ext4" ];
boot.postBootCommands =
boot.postBootCommands = lib.mkIf (!boot.initrd.systemd.enable)
''
mkdir -p /mnt
cp ${dummyConfiguration} /etc/nixos/configuration.nix
'';
boot.initrd.extraUtilsCommands =
boot.initrd.extraUtilsCommands = lib.mkIf (!boot.initrd.systemd.enable)
''
copy_bin_and_libs ${pkgs.util-linux}/sbin/hwclock
'';
boot.initrd.postDeviceCommands =
boot.initrd.postDeviceCommands = lib.mkIf (!boot.initrd.systemd.enable)
''
hwclock -s
'';

View file

@@ -40,7 +40,7 @@
}
{
name = "bzImage";
path = "${config.system.build.kernel}/bzImage";
path = "${config.system.build.kernel}/${config.system.boot.loader.kernelFile}";
}
{
name = "kexec-boot";

View file

@@ -39,6 +39,12 @@
# Supported in newer board revisions
arm_boost=1
[cm4]
# Enable host mode on the 2711 built-in XHCI USB controller.
# This line should be removed if the legacy DWC2 controller is required
# (e.g. for USB device mode) or if USB support is not required.
otg_mode=1
[all]
# Boot in 64-bit mode.
arm_64bit=1
@@ -65,6 +71,9 @@
cp ${pkgs.ubootRaspberryPi4_64bit}/u-boot.bin firmware/u-boot-rpi4.bin
cp ${pkgs.raspberrypi-armstubs}/armstub8-gic.bin firmware/armstub8-gic.bin
cp ${pkgs.raspberrypifw}/share/raspberrypi/boot/bcm2711-rpi-4-b.dtb firmware/
cp ${pkgs.raspberrypifw}/share/raspberrypi/boot/bcm2711-rpi-400.dtb firmware/
cp ${pkgs.raspberrypifw}/share/raspberrypi/boot/bcm2711-rpi-cm4.dtb firmware/
cp ${pkgs.raspberrypifw}/share/raspberrypi/boot/bcm2711-rpi-cm4s.dtb firmware/
'';
populateRootCommands = ''
mkdir -p ./files/boot

View file

@@ -1,7 +1,7 @@
{
x86_64-linux = "/nix/store/0n2wfvi1i3fg97cjc54wslvk0804y0sn-nix-2.7.0";
i686-linux = "/nix/store/4p27c1k9z99pli6x8cxfph20yfyzn9nh-nix-2.7.0";
aarch64-linux = "/nix/store/r9yr8ijsb0gi9r7y92y3yzyld59yp0kj-nix-2.7.0";
x86_64-darwin = "/nix/store/hyfj5imsd0c4amlcjpf8l6w4q2draaj3-nix-2.7.0";
aarch64-darwin = "/nix/store/9l96qllhbb6xrsjaai76dn74ap7rq92n-nix-2.7.0";
x86_64-linux = "/nix/store/yx36yzxpw1hn4fz8iyf1rfyd56jg3yf4-nix-2.8.0";
i686-linux = "/nix/store/c0hg806zvwg800qbszzj8ff4a224kjgf-nix-2.8.0";
aarch64-linux = "/nix/store/wic2832ll53q392r2wks4xr2nrk7p8p5-nix-2.8.0";
x86_64-darwin = "/nix/store/5yqdvnkmkrhl36xh0qy31pymdphjimdd-nix-2.8.0";
aarch64-darwin = "/nix/store/izc9592szrnpv8n86hr88bhpyc9g6b4s-nix-2.8.0";
}

View file

@@ -206,6 +206,11 @@ in
# Or disable the firewall altogether.
# networking.firewall.enable = false;
# Copy the NixOS configuration file and link it from the resulting system
# (/run/current-system/configuration.nix). This is useful in case you
# accidentally delete configuration.nix.
# system.copySystemConfiguration = true;
# This value determines the NixOS release from which the default
# settings for stateful data, like file locations and database versions
# on your system were taken. It's perfectly fine and recommended to leave

View file

@@ -27,7 +27,7 @@ in
locate = mkOption {
type = package;
default = pkgs.findutils;
default = pkgs.findutils.locate;
defaultText = literalExpression "pkgs.findutils.locate";
example = literalExpression "pkgs.mlocate";
description = ''
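
A short sketch of the service these defaults feed into; option names follow the locate module as it stood at the time of this change:

```nix
{ pkgs, ... }: {
  services.locate = {
    enable = true;
    # The dedicated locate output of findutils is now the default (see above);
    # mlocate or plocate can be substituted, as the module's example suggests.
    locate = pkgs.findutils.locate;
    interval = "hourly";
  };
}
```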

View file

@@ -172,6 +172,7 @@
./programs/java.nix
./programs/k40-whisperer.nix
./programs/kclock.nix
./programs/k3b.nix
./programs/kdeconnect.nix
./programs/kbdlight.nix
./programs/less.nix
@@ -205,7 +206,6 @@
./programs/spacefm.nix
./programs/singularity.nix
./programs/ssh.nix
./programs/ssmtp.nix
./programs/sysdig.nix
./programs/systemtap.nix
./programs/starship.nix
@@ -410,6 +410,7 @@
./services/display-managers/greetd.nix
./services/editors/emacs.nix
./services/editors/infinoted.nix
./services/editors/haste.nix
./services/finance/odoo.nix
./services/games/asf.nix
./services/games/crossfire-server.nix
@@ -459,6 +460,7 @@
./services/hardware/udisks2.nix
./services/hardware/upower.nix
./services/hardware/usbmuxd.nix
./services/hardware/usbrelayd.nix
./services/hardware/thermald.nix
./services/hardware/undervolt.nix
./services/hardware/vdr.nix
@@ -661,6 +663,7 @@
./services/monitoring/longview.nix
./services/monitoring/mackerel-agent.nix
./services/monitoring/metricbeat.nix
./services/monitoring/mimir.nix
./services/monitoring/monit.nix
./services/monitoring/munin.nix
./services/monitoring/nagios.nix
@@ -734,12 +737,14 @@
./services/networking/blocky.nix
./services/networking/charybdis.nix
./services/networking/cjdns.nix
./services/networking/cloudflare-dyndns.nix
./services/networking/cntlm.nix
./services/networking/connman.nix
./services/networking/consul.nix
./services/networking/coredns.nix
./services/networking/corerad.nix
./services/networking/coturn.nix
./services/networking/create_ap.nix
./services/networking/croc.nix
./services/networking/dante.nix
./services/networking/ddclient.nix
@@ -1180,13 +1185,14 @@
./system/boot/stage-2.nix
./system/boot/systemd.nix
./system/boot/systemd/coredump.nix
./system/boot/systemd/initrd-secrets.nix
./system/boot/systemd/initrd.nix
./system/boot/systemd/journald.nix
./system/boot/systemd/logind.nix
./system/boot/systemd/nspawn.nix
./system/boot/systemd/shutdown.nix
./system/boot/systemd/tmpfiles.nix
./system/boot/systemd/user.nix
./system/boot/systemd/initrd.nix
./system/boot/systemd/initrd-mdraid.nix
./system/boot/timesyncd.nix
./system/boot/tmp.nix
./system/etc/etc-activation.nix
@@ -1240,6 +1246,7 @@
./virtualisation/amazon-options.nix
./virtualisation/hyperv-guest.nix
./virtualisation/kvmgt.nix
./virtualisation/openstack-options.nix
./virtualisation/openvswitch.nix
./virtualisation/parallels-guest.nix
./virtualisation/podman/default.nix
@@ -1249,6 +1256,7 @@
./virtualisation/virtualbox-guest.nix
./virtualisation/virtualbox-host.nix
./virtualisation/vmware-guest.nix
./virtualisation/vmware-host.nix
./virtualisation/waydroid.nix
./virtualisation/xen-dom0.nix
./virtualisation/xe-guest-utilities.nix

View file

@@ -40,6 +40,9 @@ in
# SD cards.
"sdhci_pci"
# NVMe drives
"nvme"
# Firewire support. Not tested.
"ohci1394" "sbp2"

Some files were not shown because too many files have changed in this diff.