Project import generated by Copybara.
GitOrigin-RevId: bd645e8668ec6612439a9ee7e71f7eac4099d4f6
Parent: bcb5de4f0c
Commit: 504525a148
11531 changed files with 288530 additions and 137378 deletions
third_party/nixpkgs/.editorconfig (vendored, 3 changes)

@@ -90,6 +90,9 @@ insert_final_newline = unset
indent_style = unset
trim_trailing_whitespace = unset

[pkgs/misc/documentation-highlighter/**]
insert_final_newline = unset

[pkgs/servers/dict/wordnet_structures.py]
trim_trailing_whitespace = unset
third_party/nixpkgs/.github/CODEOWNERS (vendored, 47 changes)

@@ -25,9 +25,14 @@
/lib/cli.nix @infinisil @Profpatsch
/lib/debug.nix @infinisil @Profpatsch
/lib/asserts.nix @infinisil @Profpatsch
/lib/path.* @infinisil @fricklerhandwerk
/lib/path.* @infinisil
/lib/fileset @infinisil
/doc/functions/fileset.section.md @infinisil
## Libraries / Module system
/lib/modules.nix @infinisil @roberth
/lib/types.nix @infinisil @roberth
/lib/options.nix @infinisil @roberth
/lib/tests/modules.sh @infinisil @roberth
/lib/tests/modules @infinisil @roberth

# Nixpkgs Internals
/default.nix @Ericson2314

@@ -68,13 +73,14 @@
# Contributor documentation
/CONTRIBUTING.md @infinisil
/.github/PULL_REQUEST_TEMPLATE.md @infinisil
/doc/contributing/ @fricklerhandwerk @infinisil
/doc/contributing/contributing-to-documentation.chapter.md @jtojnar @fricklerhandwerk @infinisil
/doc/contributing/ @infinisil
/doc/contributing/contributing-to-documentation.chapter.md @jtojnar @infinisil
/lib/README.md @infinisil
/doc/README.md @infinisil
/nixos/README.md @infinisil
/pkgs/README.md @infinisil
/maintainers/README.md @infinisil
/maintainers/* @piegamesde @Janik-Haag

# User-facing development documentation
/doc/development.md @infinisil

@@ -100,6 +106,9 @@
/nixos/lib/systemd-*.nix @NixOS/systemd
/pkgs/os-specific/linux/systemd @NixOS/systemd

# Systemd-boot
/nixos/modules/system/boot/loader/systemd-boot @JulienMalka

# Images and installer media
/nixos/modules/installer/cd-dvd/ @samueldr
/nixos/modules/installer/sd-card/ @samueldr

@@ -118,13 +127,13 @@
/pkgs/development/interpreters/python/hooks @FRidh @jonringer

# Haskell
/doc/languages-frameworks/haskell.section.md @cdepillabout @sternenseemann @maralorn
/maintainers/scripts/haskell @cdepillabout @sternenseemann @maralorn
/pkgs/development/compilers/ghc @cdepillabout @sternenseemann @maralorn
/pkgs/development/haskell-modules @cdepillabout @sternenseemann @maralorn
/pkgs/test/haskell @cdepillabout @sternenseemann @maralorn
/pkgs/top-level/release-haskell.nix @cdepillabout @sternenseemann @maralorn
/pkgs/top-level/haskell-packages.nix @cdepillabout @sternenseemann @maralorn
/doc/languages-frameworks/haskell.section.md @cdepillabout @sternenseemann @maralorn @ncfavier
/maintainers/scripts/haskell @cdepillabout @sternenseemann @maralorn @ncfavier
/pkgs/development/compilers/ghc @cdepillabout @sternenseemann @maralorn @ncfavier
/pkgs/development/haskell-modules @cdepillabout @sternenseemann @maralorn @ncfavier
/pkgs/test/haskell @cdepillabout @sternenseemann @maralorn @ncfavier
/pkgs/top-level/release-haskell.nix @cdepillabout @sternenseemann @maralorn @ncfavier
/pkgs/top-level/haskell-packages.nix @cdepillabout @sternenseemann @maralorn @ncfavier

# Perl
/pkgs/development/interpreters/perl @stigtsp @zakame @dasJ

@@ -259,13 +268,6 @@ pkgs/development/python-modules/buildcatrust/ @ajs124 @lukegb @mweinelt
/pkgs/development/php-packages @aanderse @drupol @etu @globin @ma27 @talyz
/pkgs/top-level/php-packages.nix @jtojnar @aanderse @drupol @etu @globin @ma27 @talyz

# Podman, CRI-O modules and related
/nixos/modules/virtualisation/containers.nix @adisbladis
/nixos/modules/virtualisation/cri-o.nix @adisbladis
/nixos/modules/virtualisation/podman @adisbladis
/nixos/tests/cri-o.nix @adisbladis
/nixos/tests/podman @adisbladis

# Docker tools
/pkgs/build-support/docker @roberth
/nixos/tests/docker-tools* @roberth

@@ -323,15 +325,14 @@ pkgs/applications/version-management/forgejo @bendlas @emilylange
/pkgs/development/ocaml-modules @ulrikstrid

# ZFS
pkgs/os-specific/linux/zfs @raitobezarius
nixos/lib/make-single-disk-zfs-image.nix @raitobezarius
nixos/lib/make-multi-disk-zfs-image.nix @raitobezarius
pkgs/os-specific/linux/zfs/2_1.nix @raitobezarius
pkgs/os-specific/linux/zfs/generic.nix @raitobezarius
nixos/modules/tasks/filesystems/zfs.nix @raitobezarius
nixos/tests/zfs.nix @raitobezarius

# Zig
/pkgs/development/compilers/zig @AndersonTorres @figsoda
/doc/hooks/zig.section.md @AndersonTorres @figsoda
/pkgs/development/compilers/zig @figsoda
/doc/hooks/zig.section.md @figsoda

# Linux Kernel
pkgs/os-specific/linux/kernel/manual-config.nix @amjoseph-nixpkgs
@@ -39,3 +39,10 @@ Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result.
[user@system:~]$ nix-shell -p nix-info --run "nix-info -m"
output here
```

---

Add a :+1: [reaction] to [issues you find important].

[reaction]: https://github.blog/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/
[issues you find important]: https://github.com/NixOS/nixpkgs/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc

@@ -37,3 +37,10 @@ Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result.
[user@system:~]$ nix-shell -p nix-info --run "nix-info -m"
output here
```

---

Add a :+1: [reaction] to [issues you find important].

[reaction]: https://github.blog/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/
[issues you find important]: https://github.com/NixOS/nixpkgs/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc

@@ -30,3 +30,9 @@ assignees: ''
[open documentation issues]: https://github.com/NixOS/nixpkgs/issues?q=is%3Aissue+is%3Aopen+label%3A%229.needs%3A+documentation%22
[open documentation pull requests]: https://github.com/NixOS/nixpkgs/pulls?q=is%3Aopen+is%3Apr+label%3A%228.has%3A+documentation%22%2C%226.topic%3A+documentation%22

---

Add a :+1: [reaction] to [issues you find important].

[reaction]: https://github.blog/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/
[issues you find important]: https://github.com/NixOS/nixpkgs/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc

@@ -26,3 +26,10 @@ There's a high chance that you'll have the new version right away while helping
-----

Note for maintainers: Please tag this issue in your PR.

---

Add a :+1: [reaction] to [issues you find important].

[reaction]: https://github.blog/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/
[issues you find important]: https://github.com/NixOS/nixpkgs/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc

@@ -17,3 +17,10 @@ assignees: ''
* source URL:
* license: mit, bsd, gpl2+ , ...
* platforms: unix, linux, darwin, ...

---

Add a :+1: [reaction] to [issues you find important].

[reaction]: https://github.blog/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/
[issues you find important]: https://github.com/NixOS/nixpkgs/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc

@@ -85,3 +85,10 @@ nix log $(nix path-info --derivation nixpkgs#<package>)

(please share the relevant fragment of the diffoscope output here, and any
additional analysis you may have done)

---

Add a :+1: [reaction] to [issues you find important].

[reaction]: https://github.blog/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/
[issues you find important]: https://github.com/NixOS/nixpkgs/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc

@@ -24,7 +24,7 @@ For new packages please briefly describe the package or provide a link to its ho
- made sure NixOS tests are [linked](https://nixos.org/manual/nixpkgs/unstable/#ssec-nixos-tests-linking) to the relevant packages
- [ ] Tested compilation of all packages that depend on this change using `nix-shell -p nixpkgs-review --run "nixpkgs-review rev HEAD"`. Note: all changes have to be committed, also see [nixpkgs-review usage](https://github.com/Mic92/nixpkgs-review#usage)
- [ ] Tested basic functionality of all binary files (usually in `./result/bin/`)
- [23.11 Release Notes](https://github.com/NixOS/nixpkgs/blob/master/nixos/doc/manual/release-notes/rl-2311.section.md) (or backporting [23.05 Release notes](https://github.com/NixOS/nixpkgs/blob/master/nixos/doc/manual/release-notes/rl-2305.section.md))
- [24.05 Release Notes](https://github.com/NixOS/nixpkgs/blob/master/nixos/doc/manual/release-notes/rl-2405.section.md) (or backporting [23.05](https://github.com/NixOS/nixpkgs/blob/master/nixos/doc/manual/release-notes/rl-2305.section.md) and [23.11](https://github.com/NixOS/nixpkgs/blob/master/nixos/doc/manual/release-notes/rl-2311.section.md) Release notes)
- [ ] (Package updates) Added a release notes entry if the change is major or breaking
- [ ] (Module updates) Added a release notes entry if the change is significant
- [ ] (Module addition) Added a release notes entry if adding a new NixOS module

@@ -40,3 +40,10 @@ Thanks a lot if you do!
List of open PRs: https://github.com/NixOS/nixpkgs/pulls
Reviewing guidelines: https://nixos.org/manual/nixpkgs/unstable/#chap-reviewing-contributions
-->

---

Add a :+1: [reaction] to [pull requests you find important].

[reaction]: https://github.blog/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/
[pull requests you find important]: https://github.com/NixOS/nixpkgs/pulls?q=is%3Aopen+sort%3Areactions-%2B1-desc
third_party/nixpkgs/.github/labeler.yml (vendored, 7 changes)

@@ -65,6 +65,13 @@
- pkgs/top-level/haskell-packages.nix
- pkgs/top-level/release-haskell.nix

"6.topic: jupyter":
- pkgs/development/python-modules/jupyter*/**/*
- pkgs/development/python-modules/mkdocs-jupyter/*
- nixos/modules/services/development/jupyter/**/*
- pkgs/applications/editors/jupyter-kernels/**/*
- pkgs/applications/editors/jupyter/**/*

"6.topic: kernel":
- pkgs/build-support/kernel/**/*
- pkgs/os-specific/linux/kernel/**/*
@@ -20,11 +20,11 @@ jobs:
if: github.repository_owner == 'NixOS' && github.event.pull_request.merged == true && (github.event_name != 'labeled' || startsWith('backport', github.event.label.name))
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Create backport PRs
uses: korthout/backport-action@v2.1.1
uses: korthout/backport-action@08bafb375e6e9a9a2b53a744b987e5d81a133191 # v2.1.1
with:
# Config README: https://github.com/korthout/backport-action#backport-action
copy_labels_pattern: 'severity:\ssecurity'

@@ -18,9 +18,9 @@ jobs:
runs-on: ubuntu-latest
# we don't limit this action to only NixOS repo since the checks are cheap and useful developer feedback
steps:
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v23
- uses: cachix/cachix-action@v12
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
- uses: cachix/install-nix-action@7ac1ec25491415c381d9b62f0657c7a028df52a7 # v24
- uses: cachix/cachix-action@6a2e08b5ebf7a9f285ff57b1870a4262b06e0bee # v13
with:
# This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
name: nixpkgs-ci
@@ -1,5 +1,7 @@
# Checks pkgs/by-name (see pkgs/by-name/README.md)
# using the nixpkgs-check-by-name tool (see pkgs/test/nixpkgs-check-by-name)
#
# When you make changes to this workflow, also update pkgs/test/nixpkgs-check-by-name/scripts/run-local.sh adequately
name: Check pkgs/by-name

# The pre-built tool is fetched from a channel,

@@ -8,21 +10,33 @@ on:
# Using pull_request_target instead of pull_request avoids having to approve first time contributors
pull_request_target

# The tool doesn't need any permissions, it only outputs success or not based on the checkout
permissions: {}
permissions:
# We need this permission to cancel the workflow run if there's a merge conflict
actions: write

jobs:
check:
# This is x86_64-linux, for which the tool is always prebuilt on the nixos-* channels,
# as specified in nixos/release-combined.nix
runs-on: ubuntu-latest
# This should take 1 minute at most, but let's be generous.
# The default of 6 hours is definitely too long
timeout-minutes: 10
steps:
# This step has to be in this file,
# because it's needed to determine which revision of the repository to fetch,
# and we can only use other files from the repository once it's fetched.
- name: Resolving the merge commit
env:
GH_TOKEN: ${{ github.token }}
run: |
# This checks for mergeability of a pull request as recommended in
# https://docs.github.com/en/rest/guides/using-the-rest-api-to-interact-with-your-git-database?apiVersion=2022-11-28#checking-mergeability-of-pull-requests

# Retry the API query this many times
retryCount=3
# Start with 5 seconds, but double every retry
retryInterval=5
while true; do
echo "Checking whether the pull request can be merged"
prInfo=$(gh api \

@@ -33,10 +47,19 @@ jobs:
mergedSha=$(jq -r .merge_commit_sha <<< "$prInfo")

if [[ "$mergeable" == "null" ]]; then
if (( retryCount == 0 )); then
echo "Not retrying anymore, probably GitHub is having internal issues"
exit 1
else
(( retryCount -= 1 )) || true

# null indicates that GitHub is still computing whether it's mergeable
# Wait a couple seconds before trying again
echo "GitHub is still computing whether this PR can be merged, waiting 5 seconds before trying again"
sleep 5
echo "GitHub is still computing whether this PR can be merged, waiting $retryInterval seconds before trying again ($retryCount retries left)"
sleep "$retryInterval"

(( retryInterval *= 2 )) || true
fi
else
break
fi
@@ -45,129 +68,37 @@ jobs:
if [[ "$mergeable" == "true" ]]; then
echo "The PR can be merged, checking the merge commit $mergedSha"
else
echo "The PR cannot be merged, it has a merge conflict"
echo "The PR cannot be merged, it has a merge conflict, cancelling the workflow.."
gh api \
--method POST \
-H "Accept: application/vnd.github+json" \
-H "X-GitHub-Api-Version: 2022-11-28" \
/repos/"$GITHUB_REPOSITORY"/actions/runs/"$GITHUB_RUN_ID"/cancel
sleep 60
# If it's still not canceled after a minute, something probably went wrong, just exit
exit 1
fi
echo "mergedSha=$mergedSha" >> "$GITHUB_ENV"
- uses: actions/checkout@v4
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
with:
# pull_request_target checks out the base branch by default
ref: ${{ env.mergedSha }}
# Fetches the merge commit and its parents
fetch-depth: 2
- name: Determining PR git hashes
- name: Checking out base branch
run: |
# For pull_request_target this is the same as $GITHUB_SHA
echo "baseSha=$(git rev-parse HEAD^1)" >> "$GITHUB_ENV"

echo "headSha=$(git rev-parse HEAD^2)" >> "$GITHUB_ENV"
- uses: cachix/install-nix-action@v23
- name: Determining channel to use for dependencies
run: |
echo "Determining which channel to use for PR base branch $GITHUB_BASE_REF"
if [[ "$GITHUB_BASE_REF" =~ ^(release|staging|staging-next)-([0-9][0-9]\.[0-9][0-9])$ ]]; then
# Use the release channel for all PRs to release-XX.YY, staging-XX.YY and staging-next-XX.YY
channel=nixos-${BASH_REMATCH[2]}
echo "PR is for a release branch, using release channel $channel"
else
# Use the nixos-unstable channel for all other PRs
channel=nixos-unstable
echo "PR is for a non-release branch, using unstable channel $channel"
fi
echo "channel=$channel" >> "$GITHUB_ENV"
- name: Fetching latest version of channel
run: |
echo "Fetching latest version of channel $channel"
# This is probably the easiest way to get Nix to output the path to a downloaded channel!
nixpkgs=$(nix-instantiate --find-file nixpkgs -I nixpkgs=channel:"$channel")
# This file only exists in channels
rev=$(<"$nixpkgs"/.git-revision)
echo "Channel $channel is at revision $rev"
echo "nixpkgs=$nixpkgs" >> "$GITHUB_ENV"
echo "rev=$rev" >> "$GITHUB_ENV"
- name: Fetching pre-built nixpkgs-check-by-name from the channel
run: |
echo "Fetching pre-built nixpkgs-check-by-name from channel $channel at revision $rev"
# Passing --max-jobs 0 makes sure that we won't build anything
nix-build "$nixpkgs" -A tests.nixpkgs-check-by-name --max-jobs 0
base=$(mktemp -d)
git worktree add "$base" "$(git rev-parse HEAD^1)"
echo "base=$base" >> "$GITHUB_ENV"
- uses: cachix/install-nix-action@7ac1ec25491415c381d9b62f0657c7a028df52a7 # v24
- name: Fetching the tool
run: pkgs/test/nixpkgs-check-by-name/scripts/fetch-tool.sh "$GITHUB_BASE_REF" result
- name: Running nixpkgs-check-by-name
run: |
echo "Checking whether the check succeeds on the base branch $GITHUB_BASE_REF"
git checkout -q "$baseSha"
if baseOutput=$(result/bin/nixpkgs-check-by-name . 2>&1); then
baseSuccess=1
if result/bin/nixpkgs-check-by-name --base "$base" .; then
exit 0
else
baseSuccess=
fi
printf "%s\n" "$baseOutput"

echo "Checking whether the check would succeed after merging this pull request"
git checkout -q "$mergedSha"
if mergedOutput=$(result/bin/nixpkgs-check-by-name . 2>&1); then
mergedSuccess=1
exitCode=0
else
mergedSuccess=
exitCode=1
fi
printf "%s\n" "$mergedOutput"

resultToEmoji() {
if [[ -n "$1" ]]; then
echo ":heavy_check_mark:"
else
echo ":x:"
fi
}

# Print a markdown summary in GitHub actions
{
echo "| Nixpkgs version | Check result |"
echo "| --- | --- |"
echo "| Latest base commit | $(resultToEmoji "$baseSuccess") |"
echo "| After merging this PR | $(resultToEmoji "$mergedSuccess") |"
echo ""

if [[ -n "$baseSuccess" ]]; then
if [[ -n "$mergedSuccess" ]]; then
echo "The check succeeds on both the base branch and after merging this PR"
else
echo "The check succeeds on the base branch, but would fail after merging this PR:"
echo "\`\`\`"
echo "$mergedOutput"
echo "\`\`\`"
echo ""
fi
else
if [[ -n "$mergedSuccess" ]]; then
echo "The check fails on the base branch, but this PR fixes it, nicely done!"
else
echo "The check fails on both the base branch and after merging this PR, unknown if only this PRs changes would satisfy the check, the base branch needs to be fixed first."
echo ""
echo "Failure on the base branch:"
echo "\`\`\`"
echo "$baseOutput"
echo "\`\`\`"
echo ""
echo "Failure after merging this PR:"
echo "\`\`\`"
echo "$mergedOutput"
echo "\`\`\`"
echo ""
fi
fi

echo "### Details"
echo "- nixpkgs-check-by-name tool:"
echo " - Channel: $channel"
echo " - Nixpkgs commit: [$rev](https://github.com/${GITHUB_REPOSITORY}/commit/$rev)"
echo " - Store path: \`$(realpath result)\`"
echo "- Tested Nixpkgs:"
echo " - Base branch: $GITHUB_BASE_REF"
echo " - Latest base branch commit: [$baseSha](https://github.com/${GITHUB_REPOSITORY}/commit/$baseSha)"
echo " - Latest PR commit: [$headSha](https://github.com/${GITHUB_REPOSITORY}/commit/$headSha)"
echo " - Merge commit: [$mergedSha](https://github.com/${GITHUB_REPOSITORY}/commit/$mergedSha)"
} >> "$GITHUB_STEP_SUMMARY"

exitCode=$?
echo "To run locally: ./maintainers/scripts/check-by-name.sh $GITHUB_BASE_REF https://github.com/$GITHUB_REPOSITORY.git"
exit "$exitCode"

fi
@@ -12,11 +12,11 @@ jobs:
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS'
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v23
- uses: cachix/install-nix-action@7ac1ec25491415c381d9b62f0657c7a028df52a7 # v24
with:
# explicitly enable sandbox
extra_nix_config: sandbox = true
@@ -24,11 +24,11 @@ jobs:
- name: print list of changed files
run: |
cat "$HOME/changed_files"
- uses: actions/checkout@v4
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v23
- uses: cachix/install-nix-action@7ac1ec25491415c381d9b62f0657c7a028df52a7 # v24
with:
# nixpkgs commit is pinned so that it doesn't break
# editorconfig-checker 2.4.0
@@ -18,7 +18,7 @@ jobs:
runs-on: ubuntu-latest
if: "github.repository_owner == 'NixOS' && !contains(github.event.pull_request.title, '[skip treewide]')"
steps:
- uses: actions/labeler@v4
- uses: actions/labeler@ac9175f8a1f3625fd0d4fb234536d26811351594 # v4.3.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
sync-labels: true
@@ -14,15 +14,15 @@ jobs:
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS'
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v23
- uses: cachix/install-nix-action@7ac1ec25491415c381d9b62f0657c7a028df52a7 # v24
with:
# explicitly enable sandbox
extra_nix_config: sandbox = true
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@6a2e08b5ebf7a9f285ff57b1870a4262b06e0bee # v13
with:
# This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
name: nixpkgs-ci
@@ -15,18 +15,18 @@ jobs:
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS'
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v23
- uses: cachix/install-nix-action@7ac1ec25491415c381d9b62f0657c7a028df52a7 # v24
with:
# explicitly enable sandbox
extra_nix_config: sandbox = true
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@6a2e08b5ebf7a9f285ff57b1870a4262b06e0bee # v13
with:
# This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
name: nixpkgs-ci
signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
- name: Building Nixpkgs manual
run: NIX_PATH=nixpkgs=$(pwd) nix-build --option restrict-eval true pkgs/top-level/release.nix -A manual
run: NIX_PATH=nixpkgs=$(pwd) nix-build --option restrict-eval true pkgs/top-level/release.nix -A manual -A manual.tests
@@ -13,6 +13,7 @@ on:
# * is a special character in YAML so you have to quote this string
# Merge every 24 hours
- cron: '0 0 * * *'
workflow_dispatch:

permissions:
contents: read

@@ -38,12 +39,16 @@ jobs:
into: staging-next-23.05
- from: staging-next-23.05
into: staging-23.05
- from: release-23.11
into: staging-next-23.11
- from: staging-next-23.11
into: staging-23.11
name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1

- name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
uses: devmasx/merge-branch@1.4.0
uses: devmasx/merge-branch@854d3ac71ed1e9deb668e0074781b81fdd6e771f # 1.4.0
with:
type: now
from_branch: ${{ matrix.pairs.from }}

@@ -51,7 +56,7 @@ jobs:
github_token: ${{ secrets.GITHUB_TOKEN }}

- name: Comment on failure
uses: peter-evans/create-or-update-comment@v3
uses: peter-evans/create-or-update-comment@23ff15729ef2fc348714a3bb66d2f655ca9066f2 # v3.1.0
if: ${{ failure() }}
with:
issue-number: 105153
@@ -13,6 +13,7 @@ on:
# * is a special character in YAML so you have to quote this string
# Merge every 6 hours
- cron: '0 */6 * * *'
workflow_dispatch:

permissions:
contents: read

@@ -38,10 +39,10 @@ jobs:
into: staging
name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1

- name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
uses: devmasx/merge-branch@1.4.0
uses: devmasx/merge-branch@854d3ac71ed1e9deb668e0074781b81fdd6e771f # 1.4.0
with:
type: now
from_branch: ${{ matrix.pairs.from }}

@@ -49,7 +50,7 @@ jobs:
github_token: ${{ secrets.GITHUB_TOKEN }}

- name: Comment on failure
uses: peter-evans/create-or-update-comment@v3
uses: peter-evans/create-or-update-comment@23ff15729ef2fc348714a3bb66d2f655ca9066f2 # v3.1.0
if: ${{ failure() }}
with:
issue-number: 105153
@@ -16,8 +16,8 @@ jobs:
if: github.repository_owner == 'NixOS' && github.ref == 'refs/heads/master' # ensure workflow_dispatch only runs on master
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v23
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
- uses: cachix/install-nix-action@7ac1ec25491415c381d9b62f0657c7a028df52a7 # v24
with:
nix_path: nixpkgs=channel:nixpkgs-unstable
- name: setup

@@ -46,7 +46,7 @@ jobs:
run: |
git clean -f
- name: create PR
uses: peter-evans/create-pull-request@v5
uses: peter-evans/create-pull-request@153407881ec5c347639a548ade7d8ad1d6740e38 # v5.0.2
with:
body: |
Automatic update by [update-terraform-providers](https://github.com/NixOS/nixpkgs/blob/master/.github/workflows/update-terraform-providers.yml) action.

@@ -60,7 +60,7 @@ jobs:

Check that all providers build with:
```
@ofborg build terraform.full
@ofborg build opentofu.full
```
If there is more than ten commits in the PR `ofborg` won't build it automatically and you will need to use the above command.
branch: terraform-providers-update
third_party/nixpkgs/.mailmap (vendored, 2 changes)

@@ -12,3 +12,5 @@ Sandro Jäckel <sandro.jaeckel@gmail.com> <sandro.jaeckel@sap.com>
superherointj <5861043+superherointj@users.noreply.github.com>
Vladimír Čunát <v@cunat.cz> <vcunat@gmail.com>
Vladimír Čunát <v@cunat.cz> <vladimir.cunat@nic.cz>
Yifei Sun <ysun@hey.com> StepBroBD <Hi@StepBroBD.com>
Yifei Sun <ysun@hey.com> <ysun+git@stepbrobd.com>
third_party/nixpkgs/.version (vendored, 2 changes)

@@ -1 +1 @@
23.11
24.05
third_party/nixpkgs/CONTRIBUTING.md (vendored, 15 changes)

@@ -26,7 +26,7 @@ This file contains general contributing information, but individual parts also h

This section describes in some detail how changes can be made and proposed with pull requests.

> **Note**
> [!Note]
> Be aware that contributing implies licensing those contributions under the terms of [COPYING](./COPYING), an MIT-like license.

0. Set up a local version of Nixpkgs to work with using GitHub and Git

@@ -273,7 +273,7 @@ Once a pull request has been merged into `master`, a backport pull request to th

### Automatically backporting changes

> **Note**
> [!Note]
> You have to be a [Nixpkgs maintainer](./maintainers) to automatically create a backport pull request.

Add the [`backport release-YY.MM` label](https://github.com/NixOS/nixpkgs/labels?q=backport) to the pull request on the `master` branch.

@@ -285,14 +285,15 @@ This can be done on both open or already merged pull requests.
To manually create a backport pull request, follow [the standard pull request process][pr-create], with these notable differences:

- Use `release-YY.MM` for the base branch, both for the local branch and the pull request.
> **Warning**

> [!Warning]
> Do not use the `nixos-YY.MM` branch, that is a branch pointing to the tested release channel commit

- Instead of manually making and committing the changes, use [`git cherry-pick -x`](https://git-scm.com/docs/git-cherry-pick) for each commit from the pull request you'd like to backport.
Either `git cherry-pick -x <commit>` when the reason for the backport is obvious (such as minor versions, fixes, etc.), otherwise use `git cherry-pick -xe <commit>` to add a reason for the backport to the commit message.
Here is [an example](https://github.com/nixos/nixpkgs/commit/5688c39af5a6c5f3d646343443683da880eaefb8) of this.

> **Warning**
> [!Warning]
> Ensure the commits exists on the master branch.
> In the case of squashed or rebased merges, the commit hash will change and the new commits can be found in the merge message at the bottom of the master pull request.
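A minimal sketch of this manual backport flow (the remote names, branch name, and `<commit>` below are placeholders for illustration, not values taken from this change):

```ShellSession
$ git fetch upstream                                  # 'upstream' is assumed to point at NixOS/nixpkgs
$ git checkout -b backport-my-fix upstream/release-23.11
$ git cherry-pick -x <commit>                         # or `git cherry-pick -xe <commit>` to explain why the backport is needed
$ git push --set-upstream origin backport-my-fix      # then open the pull request against release-23.11
```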

@@ -305,7 +306,7 @@ To manually create a backport pull request, follow [the standard pull request pr
## How to review pull requests
[pr-review]: #how-to-review-pull-requests

> **Warning**
> [!Warning]
> The following section is a draft, and the policy for reviewing is still being discussed in issues such as [#11166](https://github.com/NixOS/nixpkgs/issues/11166) and [#20836](https://github.com/NixOS/nixpkgs/issues/20836).

The Nixpkgs project receives a fairly high number of contributions via GitHub pull requests. Reviewing and approving these is an important task and a way to contribute to the project.

@@ -360,7 +361,7 @@ See [Nix Channel Status](https://status.nixos.org/) for the current channels and
Here's a brief overview of the main Git branches and what channels they're used for:

- `master`: The main branch, used for the unstable channels such as `nixpkgs-unstable`, `nixos-unstable` and `nixos-unstable-small`.
- `release-YY.MM` (e.g. `release-23.05`): The NixOS release branches, used for the stable channels such as `nixos-23.05`, `nixos-23.05-small` and `nixpkgs-23.05-darwin`.
- `release-YY.MM` (e.g. `release-23.11`): The NixOS release branches, used for the stable channels such as `nixos-23.11`, `nixos-23.11-small` and `nixpkgs-23.11-darwin`.

When a channel is updated, a corresponding Git branch is also updated to point to the corresponding commit.
So e.g. the [`nixpkgs-unstable` branch](https://github.com/nixos/nixpkgs/tree/nixpkgs-unstable) corresponds to the Git commit from the [`nixpkgs-unstable` channel](https://channels.nixos.org/nixpkgs-unstable).

@@ -384,7 +385,7 @@ By keeping the `staging-next` branch separate from `staging`, this batching does
In order for the `staging` and `staging-next` branches to be up-to-date with the latest commits on `master`, there are regular _automated_ merges from `master` into `staging-next` and `staging`.
This is implemented using GitHub workflows [here](.github/workflows/periodic-merge-6h.yml) and [here](.github/workflows/periodic-merge-24h.yml).

> **Note**
> [!Note]
> Changes must be sufficiently tested before being merged into any branch.
> Hydra builds should not be used as testing platform.
third_party/nixpkgs/README.md (vendored, 4 changes)

@@ -51,9 +51,9 @@ Nixpkgs and NixOS are built and tested by our continuous integration
system, [Hydra](https://hydra.nixos.org/).

* [Continuous package builds for unstable/master](https://hydra.nixos.org/jobset/nixos/trunk-combined)
* [Continuous package builds for the NixOS 23.05 release](https://hydra.nixos.org/jobset/nixos/release-23.05)
* [Continuous package builds for the NixOS 23.11 release](https://hydra.nixos.org/jobset/nixos/release-23.11)
* [Tests for unstable/master](https://hydra.nixos.org/job/nixos/trunk-combined/tested#tabs-constituents)
* [Tests for the NixOS 23.05 release](https://hydra.nixos.org/job/nixos/release-23.05/tested#tabs-constituents)
* [Tests for the NixOS 23.11 release](https://hydra.nixos.org/job/nixos/release-23.11/tested#tabs-constituents)

Artifacts successfully built with Hydra are published to cache at
https://cache.nixos.org/. When successful build and test criteria are
third_party/nixpkgs/doc/README.md (vendored, 20 changes)

@@ -1,14 +1,18 @@
# Contributing to the Nixpkgs manual
# Contributing to the Nixpkgs reference manual

This directory houses the sources files for the Nixpkgs manual.
This directory houses the sources files for the Nixpkgs reference manual.

Going forward, it should only contain [reference](https://nix.dev/contributing/documentation/diataxis#reference) documentation.
For tutorials, guides and explanations, contribute to <https://nix.dev/> instead.

For documentation only relevant for contributors, use Markdown files and code comments in the source code.

Rendered documentation:
- [Unstable (from master)](https://nixos.org/manual/nixpkgs/unstable/)
- [Stable (from latest release)](https://nixos.org/manual/nixpkgs/stable/)

You can find the [rendered documentation for Nixpkgs `unstable` on nixos.org](https://nixos.org/manual/nixpkgs/unstable/).
The rendering tool is [nixos-render-docs](../pkgs/tools/nix/nixos-render-docs/src/nixos_render_docs), sometimes abbreviated `nrd`.

[Docs for Nixpkgs stable](https://nixos.org/manual/nixpkgs/stable/) are also available.

If you're only getting started with Nix, go to [nixos.org/learn](https://nixos.org/learn).

## Contributing to this documentation

You can quickly check your edits with `nix-build`:
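For example, the manual can be built locally with the same release attribute that the CI workflow elsewhere in this change uses; this is a sketch and assumes it is run from the root of a Nixpkgs checkout:

```ShellSession
$ NIX_PATH=nixpkgs=$(pwd) nix-build --option restrict-eval true pkgs/top-level/release.nix -A manual
```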

@@ -48,7 +52,7 @@ It uses the widely compatible [header attributes](https://github.com/jgm/commonm
## Syntax {#sec-contributing-markup}
```

> **Note**
> [!Note]
> NixOS option documentation does not support headings in general.

#### Inline Anchors
@@ -1,48 +1,167 @@
# pkgs.appimageTools {#sec-pkgs-appimageTools}

`pkgs.appimageTools` is a set of functions for extracting and wrapping [AppImage](https://appimage.org/) files. They are meant to be used if traditional packaging from source is infeasible, or it would take too long. To quickly run an AppImage file, `pkgs.appimage-run` can be used as well.
`pkgs.appimageTools` is a set of functions for extracting and wrapping [AppImage](https://appimage.org/) files.
They are meant to be used if traditional packaging from source is infeasible, or if it would take too long.
To quickly run an AppImage file, `pkgs.appimage-run` can be used as well.

::: {.warning}
The `appimageTools` API is unstable and may be subject to backwards-incompatible changes in the future.
:::

## AppImage formats {#ssec-pkgs-appimageTools-formats}

There are different formats for AppImages, see [the specification](https://github.com/AppImage/AppImageSpec/blob/74ad9ca2f94bf864a4a0dac1f369dd4f00bd1c28/draft.md#image-format) for details.

- Type 1 images are ISO 9660 files that are also ELF executables.
- Type 2 images are ELF executables with an appended filesystem.

They can be told apart with `file -k`:

```ShellSession
$ file -k type1.AppImage
type1.AppImage: ELF 64-bit LSB executable, x86-64, version 1 (SYSV) ISO 9660 CD-ROM filesystem data 'AppImage' (Lepton 3.x), scale 0-0,
spot sensor temperature 0.000000, unit celsius, color scheme 0, calibration: offset 0.000000, slope 0.000000, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=d629f6099d2344ad82818172add1d38c5e11bc6d, stripped\012- data

$ file -k type2.AppImage
type2.AppImage: ELF 64-bit LSB executable, x86-64, version 1 (SYSV) (Lepton 3.x), scale 232-60668, spot sensor temperature -4.187500, color scheme 15, show scale bar, calibration: offset -0.000000, slope 0.000000 (Lepton 2.x), scale 4111-45000, spot sensor temperature 412442.250000, color scheme 3, minimum point enabled, calibration: offset -75402534979642766821519867692934234112.000000, slope 5815371847733706829839455140374904832.000000, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=79dcc4e55a61c293c5e19edbd8d65b202842579f, stripped\012- data
```

Note how the type 1 AppImage is described as an `ISO 9660 CD-ROM filesystem`, and the type 2 AppImage is not.

## Wrapping {#ssec-pkgs-appimageTools-wrapping}

Depending on the type of AppImage you're wrapping, you'll have to use `wrapType1` or `wrapType2`.
Use `wrapType2` to wrap any AppImage.
This will create a FHS environment with many packages [expected to exist](https://github.com/AppImage/pkg2appimage/blob/master/excludelist) for the AppImage to work.
`wrapType2` expects an argument with the `src` attribute, and either a `name` attribute or `pname` and `version` attributes.

It will eventually call into [`buildFHSEnv`](#sec-fhs-environments), and any extra attributes in the argument to `wrapType2` will be passed through to it.
This means that you can pass the `extraInstallCommands` attribute, for example, and it will have the same effect as described in [`buildFHSEnv`](#sec-fhs-environments).

::: {.note}
In the past, `appimageTools` provided both `wrapType1` and `wrapType2`, to be used depending on the type of AppImage that was being wrapped.
However, [those were unified early 2020](https://github.com/NixOS/nixpkgs/pull/81833), meaning that both `wrapType1` and `wrapType2` have the same behaviour now.
:::

:::{.example #ex-wrapping-appimage-from-github}

# Wrapping an AppImage from GitHub

```nix
appimageTools.wrapType2 { # or wrapType1
name = "patchwork";
{ appimageTools, fetchurl }:
let
pname = "nuclear";
version = "0.6.30";

src = fetchurl {
url = "https://github.com/ssbc/patchwork/releases/download/v3.11.4/Patchwork-3.11.4-linux-x86_64.AppImage";
hash = "sha256-OqTitCeZ6xmWbqYTXp8sDrmVgTNjPZNW0hzUPW++mq4=";
url = "https://github.com/nukeop/nuclear/releases/download/v${version}/${pname}-v${version}.AppImage";
hash = "sha256-he1uGC1M/nFcKpMM9JKY4oeexJcnzV0ZRxhTjtJz6xw=";
};
extraPkgs = pkgs: with pkgs; [ ];
in
appimageTools.wrapType2 {
inherit pname version src;
}
```

- `name` specifies the name of the resulting image.
- `src` specifies the AppImage file to extract.
- `extraPkgs` allows you to pass a function to include additional packages inside the FHS environment your AppImage is going to run in. There are a few ways to learn which dependencies an application needs:
  - Looking through the extracted AppImage files, reading its scripts and running `patchelf` and `ldd` on its executables. This can also be done in `appimage-run`, by setting `APPIMAGE_DEBUG_EXEC=bash`.
:::

The argument passed to `wrapType2` can also contain an `extraPkgs` attribute, which allows you to include additional packages inside the FHS environment your AppImage is going to run in.
`extraPkgs` must be a function that returns a list of packages.
There are a few ways to learn which dependencies an application needs:

- Looking through the extracted AppImage files, reading its scripts and running `patchelf` and `ldd` on its executables.
  This can also be done in `appimage-run`, by setting `APPIMAGE_DEBUG_EXEC=bash`.
- Running `strace -vfefile` on the wrapped executable, looking for libraries that can't be found.

:::{.example #ex-wrapping-appimage-with-extrapkgs}

# Wrapping an AppImage with extra packages

```nix
{ appimageTools, fetchurl }:
let
pname = "irccloud";
version = "0.16.0";

src = fetchurl {
url = "https://github.com/irccloud/irccloud-desktop/releases/download/v${version}/IRCCloud-${version}-linux-x86_64.AppImage";
sha256 = "sha256-/hMPvYdnVB1XjKgU2v47HnVvW4+uC3rhRjbucqin4iI=";
};
in appimageTools.wrapType2 {
inherit pname version src;
extraPkgs = pkgs: [ pkgs.at-spi2-core ];
}
```

:::
## Extracting {#ssec-pkgs-appimageTools-extracting}

Use `extract` if you need to extract the contents of an AppImage.
This is usually used in Nixpkgs to install extra files in addition to [wrapping](#ssec-pkgs-appimageTools-wrapping) the AppImage.
`extract` expects an argument with the `src` attribute, and either a `name` attribute or `pname` and `version` attributes.

::: {.note}
In the past, `appimageTools` provided both `extractType1` and `extractType2`, to be used depending on the type of AppImage that was being extracted.
However, [those were unified early 2020](https://github.com/NixOS/nixpkgs/pull/81572), meaning that both `extractType1` and `extractType2` have the same behaviour as `extract` now.
:::

:::{.example #ex-extracting-appimage}

# Extracting an AppImage to install extra files

This example was adapted from a real package in Nixpkgs to show how `extract` is usually used in combination with `wrapType2`.
Note how `appimageContents` is used in `extraInstallCommands` to install additional files that were extracted from the AppImage.

```nix
{ appimageTools, fetchurl }:
let
pname = "irccloud";
version = "0.16.0";

src = fetchurl {
url = "https://github.com/irccloud/irccloud-desktop/releases/download/v${version}/IRCCloud-${version}-linux-x86_64.AppImage";
sha256 = "sha256-/hMPvYdnVB1XjKgU2v47HnVvW4+uC3rhRjbucqin4iI=";
};

appimageContents = appimageTools.extract {
inherit pname version src;
};
in appimageTools.wrapType2 {
inherit pname version src;

extraPkgs = pkgs: [ pkgs.at-spi2-core ];

extraInstallCommands = ''
mv $out/bin/${pname}-${version} $out/bin/${pname}
install -m 444 -D ${appimageContents}/irccloud.desktop $out/share/applications/irccloud.desktop
install -m 444 -D ${appimageContents}/usr/share/icons/hicolor/512x512/apps/irccloud.png \
$out/share/icons/hicolor/512x512/apps/irccloud.png
substituteInPlace $out/share/applications/irccloud.desktop \
--replace 'Exec=AppRun' 'Exec=${pname}'
'';
}
```

:::

The argument passed to `extract` can also contain a `postExtract` attribute, which allows you to execute additional commands after the files are extracted from the AppImage.
`postExtract` must be a string with commands to run.

:::{.example #ex-extracting-appimage-with-postextract}

# Extracting an AppImage to install extra files, using `postExtract`

This is a rewrite of [](#ex-extracting-appimage) to use `postExtract`.

```nix
{ appimageTools, fetchurl }:
let
pname = "irccloud";
version = "0.16.0";

src = fetchurl {
url = "https://github.com/irccloud/irccloud-desktop/releases/download/v${version}/IRCCloud-${version}-linux-x86_64.AppImage";
sha256 = "sha256-/hMPvYdnVB1XjKgU2v47HnVvW4+uC3rhRjbucqin4iI=";
};

appimageContents = appimageTools.extract {
inherit pname version src;
postExtract = ''
substituteInPlace $out/irccloud.desktop --replace 'Exec=AppRun' 'Exec=${pname}'
'';
};
in appimageTools.wrapType2 {
inherit pname version src;

extraPkgs = pkgs: [ pkgs.at-spi2-core ];

extraInstallCommands = ''
mv $out/bin/${pname}-${version} $out/bin/${pname}
install -m 444 -D ${appimageContents}/irccloud.desktop $out/share/applications/irccloud.desktop
install -m 444 -D ${appimageContents}/usr/share/icons/hicolor/512x512/apps/irccloud.png \
$out/share/icons/hicolor/512x512/apps/irccloud.png
'';
}
```

:::
@@ -1,49 +1,58 @@
# pkgs.mkBinaryCache {#sec-pkgs-binary-cache}

`pkgs.mkBinaryCache` is a function for creating Nix flat-file binary caches. Such a cache exists as a directory on disk, and can be used as a Nix substituter by passing `--substituter file:///path/to/cache` to Nix commands.
`pkgs.mkBinaryCache` is a function for creating Nix flat-file binary caches.
Such a cache exists as a directory on disk, and can be used as a Nix substituter by passing `--substituter file:///path/to/cache` to Nix commands.

Nix packages are most commonly shared between machines using [HTTP, SSH, or S3](https://nixos.org/manual/nix/stable/package-management/sharing-packages.html), but a flat-file binary cache can still be useful in some situations. For example, you can copy it directly to another machine, or make it available on a network file system. It can also be a convenient way to make some Nix packages available inside a container via bind-mounting.
Nix packages are most commonly shared between machines using [HTTP, SSH, or S3](https://nixos.org/manual/nix/stable/package-management/sharing-packages.html), but a flat-file binary cache can still be useful in some situations.
For example, you can copy it directly to another machine, or make it available on a network file system.
It can also be a convenient way to make some Nix packages available inside a container via bind-mounting.

Note that this function is meant for advanced use-cases. The more idiomatic way to work with flat-file binary caches is via the [nix-copy-closure](https://nixos.org/manual/nix/stable/command-ref/nix-copy-closure.html) command. You may also want to consider [dockerTools](#sec-pkgs-dockerTools) for your containerization needs.
`mkBinaryCache` expects an argument with the `rootPaths` attribute.
`rootPaths` must be a list of derivations.
The transitive closure of these derivations' outputs will be copied into the cache.

## Example {#sec-pkgs-binary-cache-example}
::: {.note}
This function is meant for advanced use cases.
The more idiomatic way to work with flat-file binary caches is via the [nix-copy-closure](https://nixos.org/manual/nix/stable/command-ref/nix-copy-closure.html) command.
You may also want to consider [dockerTools](#sec-pkgs-dockerTools) for your containerization needs.
:::

[]{#sec-pkgs-binary-cache-example}
:::{.example #ex-mkbinarycache-copying-package-closure}

# Copying a package and its closure to another machine with `mkBinaryCache`

The following derivation will construct a flat-file binary cache containing the closure of `hello`.

```nix
{ mkBinaryCache, hello }:
mkBinaryCache {
rootPaths = [hello];
}
```

- `rootPaths` specifies a list of root derivations. The transitive closure of these derivations' outputs will be copied into the cache.

Here's an example of building and using the cache.

Build the cache on one machine, `host1`:
Build the cache on a machine.
Note that the command still builds the exact nix package above, but adds some boilerplate to build it directly from an expression.

```shellSession
nix-build -E 'with import <nixpkgs> {}; mkBinaryCache { rootPaths = [hello]; }'
$ nix-build -E 'let pkgs = import <nixpkgs> {}; in pkgs.callPackage ({ mkBinaryCache, hello }: mkBinaryCache { rootPaths = [hello]; }) {}'
/nix/store/azf7xay5xxdnia4h9fyjiv59wsjdxl0g-binary-cache
```

Copy the resulting directory to another machine, which we'll call `host2`:

```shellSession
/nix/store/cc0562q828rnjqjyfj23d5q162gb424g-binary-cache
$ scp result host2:/tmp/hello-cache
```

Copy the resulting directory to the other machine, `host2`:
At this point, the cache can be used as a substituter when building derivations on `host2`:

```shellSession
scp result host2:/tmp/hello-cache
```

Substitute the derivation using the flat-file binary cache on the other machine, `host2`:
```shellSession
nix-build -A hello '<nixpkgs>' \
$ nix-build -A hello '<nixpkgs>' \
--option require-sigs false \
--option trusted-substituters file:///tmp/hello-cache \
--option substituters file:///tmp/hello-cache
/nix/store/zhl06z4lrfrkw5rp0hnjjfrgsclzvxpm-hello-2.12.1
```

```shellSession
/nix/store/gl5a41azbpsadfkfmbilh9yk40dh5dl0-hello-2.12.1
```
:::
@@ -7,4 +7,5 @@ special/fhs-environments.section.md
special/makesetuphook.section.md
special/mkshell.section.md
special/vm-tools.section.md
special/checkpoint-build.section.md
```
third_party/nixpkgs/doc/build-helpers/special/checkpoint-build.section.md (vendored, new file, 36 changes)

@@ -0,0 +1,36 @@
# pkgs.checkpointBuildTools {#sec-checkpoint-build}

`pkgs.checkpointBuildTools` provides a way to build derivations incrementally. It consists of two functions to make checkpoint builds using Nix possible.

For hermeticity, Nix derivations do not allow any state to carry over between builds, making a transparent incremental build within a derivation impossible.

However, we can tell Nix explicitly what the previous build state was, by representing that previous state as a derivation output. This allows the passed build state to be used for an incremental build.

To change a normal derivation to a checkpoint based build, these steps must be taken:
- apply `prepareCheckpointBuild` on the desired derivation, e.g.:
```nix
checkpointArtifacts = (pkgs.checkpointBuildTools.prepareCheckpointBuild pkgs.virtualbox);
```
- change something you want in the sources of the package (e.g. using a source override):
```nix
changedVBox = pkgs.virtualbox.overrideAttrs (old: {
src = path/to/vbox/sources;
});
```
- use `mkCheckpointBuild changedVBox buildOutput`
- enjoy shorter build times

## Example {#sec-checkpoint-build-example}
```nix
{ pkgs ? import <nixpkgs> {} }: with pkgs;
let
helloCheckpoint = checkpointBuildTools.prepareCheckpointBuild pkgs.hello;
changedHello = pkgs.hello.overrideAttrs (_: {
doCheck = false;
patchPhase = ''
sed -i 's/Hello, world!/Hello, Nix!/g' src/hello.c
'';
});
in checkpointBuildTools.mkCheckpointBuild changedHello helloCheckpoint
```
@@ -1,4 +1,5 @@
# Testers {#chap-testers}

This chapter describes several testing builders which are available in the `testers` namespace.

## `hasPkgConfigModules` {#tester-hasPkgConfigModules}

@@ -6,19 +7,11 @@ This chapter describes several testing builders which are available in the `test
<!-- Old anchor name so links still work -->
[]{#tester-hasPkgConfigModule}
Checks whether a package exposes a given list of `pkg-config` modules.
If the `moduleNames` argument is omitted, `hasPkgConfigModules` will
use `meta.pkgConfigModules`.
If the `moduleNames` argument is omitted, `hasPkgConfigModules` will use `meta.pkgConfigModules`.

Example:
:::{.example #ex-haspkgconfigmodules-defaultvalues}

```nix
passthru.tests.pkg-config = testers.hasPkgConfigModules {
package = finalAttrs.finalPackage;
moduleNames = [ "libfoo" ];
};
```

If the package in question has `meta.pkgConfigModules` set, it is even simpler:
# Check that `pkg-config` modules are exposed using default values

```nix
passthru.tests.pkg-config = testers.hasPkgConfigModules {

@@ -28,40 +21,66 @@ passthru.tests.pkg-config = testers.hasPkgConfigModules {
meta.pkgConfigModules = [ "libfoo" ];
```

:::

:::{.example #ex-haspkgconfigmodules-explicitmodules}

# Check that `pkg-config` modules are exposed using explicit module names

```nix
passthru.tests.pkg-config = testers.hasPkgConfigModules {
package = finalAttrs.finalPackage;
moduleNames = [ "libfoo" ];
};
```

:::

## `testVersion` {#tester-testVersion}

Checks the command output contains the specified version
Checks that the output from running a command contains the specified version string in it as a whole word.

Although simplistic, this test assures that the main program
can run. While there's no substitute for a real test case,
it does catch dynamic linking errors and such. It also provides
some protection against accidentally building the wrong version,
for example when using an 'old' hash in a fixed-output derivation.
Although simplistic, this test assures that the main program can run.
While there's no substitute for a real test case, it does catch dynamic linking errors and such.
It also provides some protection against accidentally building the wrong version, for example when using an "old" hash in a fixed-output derivation.

Examples:
By default, the command to be run will be inferred from the given `package` attribute:
it will check `meta.mainProgram` first, and fall back to `pname` or `name`.
The default argument to the command is `--version`, and the version to be checked will be inferred from the given `package` attribute as well.

:::{.example #ex-testversion-hello}

# Check a program version using all the default values

This example will run the command `hello --version`, and then check that the version of the `hello` package is in the output of the command.

```nix
passthru.tests.version = testers.testVersion { package = hello; };
```

:::

:::{.example #ex-testversion-different-commandversion}

# Check the program version using a specified command and expected version string

This example will run the command `leetcode -V`, and then check that `leetcode 0.4.2` is in the output of the command as a whole word (separated by whitespaces).
This means that an output like "leetcode 0.4.21" would fail the tests, and an output like "You're running leetcode 0.4.2" would pass the tests.

A common usage of the `version` attribute is to specify `version = "v${version}"`.

```nix
version = "0.4.2";

passthru.tests.version = testers.testVersion {
package = seaweedfs;
command = "weed version";
};

passthru.tests.version = testers.testVersion {
package = key;
command = "KeY --help";
# Wrong '2.5' version in the code. Drop on next version.
version = "2.5";
};

passthru.tests.version = testers.testVersion {
package = ghr;
# The output needs to contain the 'version' string without any prefix or suffix.
version = "v${version}";
package = leetcode-cli;
command = "leetcode -V";
version = "leetcode ${version}";
};
```

:::

## `testBuildFailure` {#tester-testBuildFailure}

Make sure that a build does not succeed. This is useful for testing testers.

@@ -72,7 +91,18 @@ This returns a derivation with an override on the builder, with the following ef
- Move `$out` to `$out/result`, if it exists (assuming `out` is the default output)
- Save the build log to `$out/testBuildFailure.log` (same)
|
||||
|
||||
Example:
|
||||
While `testBuildFailure` is designed to keep changes to the original builder's environment to a minimum, some small changes are inevitable:
|
||||
|
||||
- The file `$TMPDIR/testBuildFailure.log` is present. It should not be deleted.
|
||||
- `stdout` and `stderr` are a pipe instead of a tty. This could be improved.
|
||||
- One or two extra processes are present in the sandbox during the original builder's execution.
|
||||
- The derivation and output hashes are different, but not unusual.
|
||||
- The derivation includes a dependency on `buildPackages.bash` and `expect-failure.sh`, which is built to include a transitive dependency on `buildPackages.coreutils` and possibly more.
|
||||
These are not added to `PATH` or any other environment variable, so they should be hard to observe.
|
||||
|
||||
:::{.example #ex-testBuildFailure-showingenvironmentchanges}
|
||||
|
||||
# Check that a build fails, and verify the changes made during build
|
||||
|
||||
```nix
|
||||
runCommand "example" {
|
||||
|
@ -89,24 +119,15 @@ runCommand "example" {
|
|||
'';
|
||||
```
|
||||
|
||||
While `testBuildFailure` is designed to keep changes to the original builder's
|
||||
environment to a minimum, some small changes are inevitable.
|
||||
|
||||
- The file `$TMPDIR/testBuildFailure.log` is present. It should not be deleted.
|
||||
- `stdout` and `stderr` are a pipe instead of a tty. This could be improved.
|
||||
- One or two extra processes are present in the sandbox during the original
|
||||
builder's execution.
|
||||
- The derivation and output hashes are different, but not unusual.
|
||||
- The derivation includes a dependency on `buildPackages.bash` and
|
||||
`expect-failure.sh`, which is built to include a transitive dependency on
|
||||
`buildPackages.coreutils` and possibly more. These are not added to `PATH`
|
||||
or any other environment variable, so they should be hard to observe.
|
||||
:::
|
||||
|
||||
## `testEqualContents` {#tester-equalContents}
|
||||
|
||||
Check that two paths have the same contents.
|
||||
|
||||
Example:
|
||||
:::{.example #ex-testEqualContents-toyexample}
|
||||
|
||||
# Check that two paths have the same contents
|
||||
|
||||
```nix
|
||||
testers.testEqualContents {
|
||||
|
@ -126,17 +147,20 @@ testers.testEqualContents {
|
|||
}
|
||||
```
|
||||
|
||||
:::
|
||||
|
||||
## `testEqualDerivation` {#tester-testEqualDerivation}
|
||||
|
||||
Checks that two packages produce the exact same build instructions.
|
||||
|
||||
This can be used to make sure that a certain difference of configuration,
|
||||
such as the presence of an overlay does not cause a cache miss.
|
||||
This can be used to make sure that a certain difference of configuration, such as the presence of an overlay, does not cause a cache miss.
|
||||
|
||||
When the derivations are equal, the return value is an empty file.
|
||||
Otherwise, the build log explains the difference via `nix-diff`.
|
||||
|
||||
Example:
|
||||
:::{.example #ex-testEqualDerivation-hello}
|
||||
|
||||
# Check that two packages produce the same derivation
|
||||
|
||||
```nix
|
||||
testers.testEqualDerivation
|
||||
|
@ -145,29 +169,28 @@ testers.testEqualDerivation
|
|||
(hello.overrideAttrs(o: { doCheck = true; }))
|
||||
```
|
||||
|
||||
:::
|
||||
|
||||
## `invalidateFetcherByDrvHash` {#tester-invalidateFetcherByDrvHash}
|
||||
|
||||
Use the derivation hash to invalidate the output via name, for testing.
|
||||
|
||||
Type: `(a@{ name, ... } -> Derivation) -> a -> Derivation`
|
||||
|
||||
Normally, fixed output derivations can and should be cached by their output
|
||||
hash only, but for testing we want to re-fetch every time the fetcher changes.
|
||||
Normally, fixed output derivations can and should be cached by their output hash only, but for testing we want to re-fetch every time the fetcher changes.
|
||||
|
||||
Changes to the fetcher become apparent in the drvPath, which is a hash of
|
||||
how to fetch, rather than a fixed store path.
|
||||
By inserting this hash into the name, we can make sure to re-run the fetcher
|
||||
every time the fetcher changes.
|
||||
Changes to the fetcher become apparent in the drvPath, which is a hash of how to fetch, rather than a fixed store path.
|
||||
By inserting this hash into the name, we can make sure to re-run the fetcher every time the fetcher changes.
|
||||
|
||||
This relies on the assumption that Nix isn't clever enough to reuse its
|
||||
database of local store contents to optimize fetching.
|
||||
This relies on the assumption that Nix isn't clever enough to reuse its database of local store contents to optimize fetching.
|
||||
|
||||
You might notice that the "salted" name derives from the normal invocation,
|
||||
not the final derivation. `invalidateFetcherByDrvHash` has to invoke the fetcher
|
||||
function twice: once to get a derivation hash, and again to produce the final
|
||||
fixed output derivation.
|
||||
You might notice that the "salted" name derives from the normal invocation, not the final derivation.
|
||||
`invalidateFetcherByDrvHash` has to invoke the fetcher function twice:
|
||||
once to get a derivation hash, and again to produce the final fixed output derivation.
|
||||
|
||||
Example:
|
||||
:::{.example #ex-invalidateFetcherByDrvHash-nix}
|
||||
|
||||
# Prevent nix from reusing the output of a fetcher
|
||||
|
||||
```nix
|
||||
tests.fetchgit = testers.invalidateFetcherByDrvHash fetchgit {
|
||||
|
@ -178,13 +201,17 @@ tests.fetchgit = testers.invalidateFetcherByDrvHash fetchgit {
|
|||
};
|
||||
```
|
||||
|
||||
:::
|
||||
|
||||
## `runNixOSTest` {#tester-runNixOSTest}
|
||||
|
||||
A helper function that behaves exactly like the NixOS `runTest`, except it also assigns this Nixpkgs package set as the `pkgs` of the test and makes the `nixpkgs.*` options read-only.
|
||||
|
||||
If your test is part of the Nixpkgs repository, or if you need a more general entrypoint, see ["Calling a test" in the NixOS manual](https://nixos.org/manual/nixos/stable/index.html#sec-calling-nixos-tests).
|
||||
|
||||
Example:
|
||||
:::{.example #ex-runNixOSTest-hello}
|
||||
|
||||
# Run a NixOS test using `runNixOSTest`
|
||||
|
||||
```nix
|
||||
pkgs.testers.runNixOSTest ({ lib, ... }: {
|
||||
|
@ -198,19 +225,17 @@ pkgs.testers.runNixOSTest ({ lib, ... }: {
|
|||
})
|
||||
```
|
||||
|
||||
:::
|
||||
|
||||
## `nixosTest` {#tester-nixosTest}
|
||||
|
||||
Run a NixOS VM network test using this evaluation of Nixpkgs.
|
||||
|
||||
NOTE: This function is primarily for external use. NixOS itself uses `make-test-python.nix` directly. Packages defined in Nixpkgs [reuse NixOS tests via `nixosTests`, plural](#ssec-nixos-tests-linking).
|
||||
|
||||
It is mostly equivalent to the function `import ./make-test-python.nix` from the
|
||||
[NixOS manual](https://nixos.org/nixos/manual/index.html#sec-nixos-tests),
|
||||
except that the current application of Nixpkgs (`pkgs`) will be used, instead of
|
||||
letting NixOS invoke Nixpkgs anew.
|
||||
It is mostly equivalent to the function `import ./make-test-python.nix` from the [NixOS manual](https://nixos.org/nixos/manual/index.html#sec-nixos-tests), except that the current application of Nixpkgs (`pkgs`) will be used, instead of letting NixOS invoke Nixpkgs anew.
|
||||
|
||||
If a test machine needs to set NixOS options under `nixpkgs`, it must set only the
|
||||
`nixpkgs.pkgs` option.
|
||||
If a test machine needs to set NixOS options under `nixpkgs`, it must set only the `nixpkgs.pkgs` option.
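For illustration, a node that needs to pin its package set might look like this (a minimal sketch; the package and test script are illustrative only):

```nix
pkgs.testers.nixosTest {
  name = "example";
  nodes.machine = { ... }: {
    # the only nixpkgs.* option a test machine should set in this context
    nixpkgs.pkgs = pkgs;
    environment.systemPackages = [ pkgs.hello ];
  };
  testScript = ''
    machine.wait_for_unit("multi-user.target")
    machine.succeed("hello")
  '';
}
```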
|
||||
|
||||
### Parameter {#tester-nixosTest-parameter}
|
||||
|
||||
|
|
23
third_party/nixpkgs/doc/default.nix
vendored
23
third_party/nixpkgs/doc/default.nix
vendored
|
@ -24,6 +24,7 @@ let
|
|||
{ name = "cli"; description = "command-line serialization functions"; }
|
||||
{ name = "gvariant"; description = "GVariant formatted string serialization functions"; }
|
||||
{ name = "customisation"; description = "Functions to customise (derivation-related) functions, derivatons, or attribute sets"; }
|
||||
{ name = "meta"; description = "functions for derivation metadata"; }
|
||||
];
|
||||
};
|
||||
|
||||
|
@ -148,4 +149,26 @@ in pkgs.stdenv.mkDerivation {
|
|||
echo "doc manual $dest ${common.indexPath}" >> $out/nix-support/hydra-build-products
|
||||
echo "doc manual $dest nixpkgs-manual.epub" >> $out/nix-support/hydra-build-products
|
||||
'';
|
||||
|
||||
passthru.tests.manpage-urls = with pkgs; testers.invalidateFetcherByDrvHash
|
||||
({ name ? "manual_check-manpage-urls"
|
||||
, script
|
||||
, urlsFile
|
||||
}: runCommand name {
|
||||
nativeBuildInputs = [
|
||||
cacert
|
||||
(python3.withPackages (p: with p; [
|
||||
aiohttp
|
||||
rich
|
||||
structlog
|
||||
]))
|
||||
];
|
||||
outputHash = "sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU="; # Empty output
|
||||
} ''
|
||||
python3 ${script} ${urlsFile}
|
||||
touch $out
|
||||
'') {
|
||||
script = ./tests/manpage-urls.py;
|
||||
urlsFile = ./manpage-urls.json;
|
||||
};
|
||||
}
|
||||
|
|
1
third_party/nixpkgs/doc/functions.md
vendored
1
third_party/nixpkgs/doc/functions.md
vendored
|
@ -8,5 +8,4 @@ functions/generators.section.md
|
|||
functions/debug.section.md
|
||||
functions/prefer-remote-fetch.section.md
|
||||
functions/nix-gitignore.section.md
|
||||
functions/fileset.section.md
|
||||
```
|
||||
|
|
|
@ -1,48 +0,0 @@
|
|||
<!-- TODO: Render this document in front of function documentation in case https://github.com/nix-community/nixdoc/issues/19 is ever supported -->
|
||||
|
||||
# File sets {#sec-fileset}
|
||||
|
||||
The [`lib.fileset`](#sec-functions-library-fileset) library allows you to work with _file sets_.
|
||||
A file set is a mathematical set of local files that can be added to the Nix store for use in Nix derivations.
|
||||
File sets are easy and safe to use, providing obvious and composable semantics with good error messages to prevent mistakes.
|
||||
|
||||
See the [function reference](#sec-functions-library-fileset) for function-specific documentation.
|
||||
|
||||
## Implicit coercion from paths to file sets {#sec-fileset-path-coercion}
|
||||
|
||||
All functions accepting file sets as arguments can also accept [paths](https://nixos.org/manual/nix/stable/language/values.html#type-path) as arguments.
|
||||
Such path arguments are implicitly coerced to file sets containing all files under that path:
|
||||
- A path to a file turns into a file set containing that single file.
|
||||
- A path to a directory turns into a file set containing all files _recursively_ in that directory.
|
||||
|
||||
If the path points to a non-existent location, an error is thrown.
|
||||
|
||||
::: {.note}
|
||||
Just like in Git, file sets cannot represent empty directories.
|
||||
Because of this, a path to a directory that contains no files (recursively) will turn into a file set containing no files.
|
||||
:::
|
||||
|
||||
:::{.note}
|
||||
File set coercion does _not_ add any of the files under the coerced paths to the store.
|
||||
Only the [`toSource`](#function-library-lib.fileset.toSource) function adds files to the Nix store, and only those files contained in the `fileset` argument.
|
||||
This is in contrast to using [paths in string interpolation](https://nixos.org/manual/nix/stable/language/values.html#type-path), which does add the entire referenced path to the store.
|
||||
:::
|
||||
|
||||
### Example {#sec-fileset-path-coercion-example}
|
||||
|
||||
Assume we are in a local directory with a file hierarchy like this:
|
||||
```
|
||||
├─ a/
|
||||
│ ├─ x (file)
|
||||
│ └─ b/
|
||||
│ └─ y (file)
|
||||
└─ c/
|
||||
└─ d/
|
||||
```
|
||||
|
||||
Here's a listing of which files get included when different path expressions get coerced to file sets:
|
||||
- `./.` as a file set contains both `a/x` and `a/b/y` (`c/` does not contain any files and is therefore omitted).
|
||||
- `./a` as a file set contains both `a/x` and `a/b/y`.
|
||||
- `./a/x` as a file set contains only `a/x`.
|
||||
- `./a/b` as a file set contains only `a/b/y`.
|
||||
- `./c` as a file set is empty, since neither `c` nor `c/d` contain any files.
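To actually add a subset of these files to the store, the coerced paths can be combined and passed to `toSource` (a minimal sketch; it assumes `lib.fileset.unions` for combining the two paths):

```nix
# Adds only a/x and the files under a/b to the store; c/ contributes nothing.
lib.fileset.toSource {
  root = ./.;
  fileset = lib.fileset.unions [ ./a/x ./a/b ];
}
```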
|
|
@ -68,16 +68,45 @@ All new projects should use the CUDA redistributables available in [`cudaPackage
|
|||
### Updating CUDA redistributables {#updating-cuda-redistributables}
|
||||
|
||||
1. Go to NVIDIA's index of CUDA redistributables: <https://developer.download.nvidia.com/compute/cuda/redist/>
|
||||
2. Copy the `redistrib_*.json` corresponding to the release to `pkgs/development/compilers/cudatoolkit/redist/manifests`.
|
||||
3. Generate the `redistrib_features_*.json` file by running:
|
||||
2. Make a note of the new version of CUDA available.
|
||||
3. Run
|
||||
|
||||
```bash
|
||||
nix run github:ConnorBaker/cuda-redist-find-features -- <path to manifest>
|
||||
nix run github:connorbaker/cuda-redist-find-features -- \
|
||||
download-manifests \
|
||||
--log-level DEBUG \
|
||||
--version <newest CUDA version> \
|
||||
https://developer.download.nvidia.com/compute/cuda/redist \
|
||||
./pkgs/development/cuda-modules/cuda/manifests
|
||||
```
|
||||
|
||||
That command will generate the `redistrib_features_*.json` file in the same directory as the manifest.
|
||||
This will download a copy of the manifest for the new version of CUDA.
|
||||
4. Run
|
||||
|
||||
4. Include the path to the new manifest in `pkgs/development/compilers/cudatoolkit/redist/extension.nix`.
|
||||
```bash
|
||||
nix run github:connorbaker/cuda-redist-find-features -- \
|
||||
process-manifests \
|
||||
--log-level DEBUG \
|
||||
--version <newest CUDA version> \
|
||||
https://developer.download.nvidia.com/compute/cuda/redist \
|
||||
./pkgs/development/cuda-modules/cuda/manifests
|
||||
```
|
||||
|
||||
This will generate a `redistrib_features_<newest CUDA version>.json` file in the same directory as the manifest.
|
||||
5. Update the `cudaVersionMap` attribute set in `pkgs/development/cuda-modules/cuda/extension.nix`.
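The exact attribute names are defined by `extension.nix` itself; as a rough, hypothetical sketch, the update adds another mapping from the CUDA release to its redistributable manifest version:

```nix
# Hypothetical entry; check the existing entries in extension.nix for the real shape.
cudaVersionMap = {
  # ...existing entries...
  "12.2" = "12.2.2";
};
```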
|
||||
|
||||
### Updating cuTensor {#updating-cutensor}
|
||||
|
||||
1. Repeat the steps present in [Updating CUDA redistributables](#updating-cuda-redistributables) with the following changes:
|
||||
- Use the index of cuTensor redistributables: <https://developer.download.nvidia.com/compute/cutensor/redist>
|
||||
- Use the newest version of cuTensor available instead of the newest version of CUDA.
|
||||
- Use `pkgs/development/cuda-modules/cutensor/manifests` instead of `pkgs/development/cuda-modules/cuda/manifests`.
|
||||
- Skip the step of updating `cudaVersionMap` in `pkgs/development/cuda-modules/cuda/extension.nix`.
|
||||
|
||||
### Updating supported compilers and GPUs {#updating-supported-compilers-and-gpus}
|
||||
|
||||
1. Update `nvcc-compatibilities.nix` in `pkgs/development/cuda-modules/` to include the newest release of NVCC, as well as any newly supported host compilers.
|
||||
2. Update `gpus.nix` in `pkgs/development/cuda-modules/` to include any new GPUs supported by the new release of CUDA.
|
||||
|
||||
### Updating the CUDA Toolkit runfile installer {#updating-the-cuda-toolkit}
|
||||
|
||||
|
@ -99,7 +128,7 @@ All new projects should use the CUDA redistributables available in [`cudaPackage
|
|||
nix store prefetch-file --hash-type sha256 <link>
|
||||
```
|
||||
|
||||
4. Update `pkgs/development/compilers/cudatoolkit/versions.toml` to include the release.
|
||||
4. Update `pkgs/development/cuda-modules/cudatoolkit/releases.nix` to include the release.
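The shape of a release entry is defined by `releases.nix` itself; a hypothetical sketch, using the link and hash gathered in the earlier steps, might look like:

```nix
# Hypothetical fields; mirror the existing entries in releases.nix.
{
  version = "<newest CUDA version>";
  url = "<link>";        # the runfile installer link from the earlier step
  sha256 = "sha256-..."; # hash printed by nix store prefetch-file
}
```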
|
||||
|
||||
### Updating the CUDA package set {#updating-the-cuda-package-set}
|
||||
|
||||
|
@ -107,7 +136,7 @@ All new projects should use the CUDA redistributables available in [`cudaPackage
|
|||
|
||||
- NOTE: Changing the default CUDA package set should occur in a separate PR, allowing time for additional testing.
|
||||
|
||||
2. Successfully build the closure of the new package set, updating `pkgs/development/compilers/cudatoolkit/redist/overrides.nix` as needed. Below are some common failures:
|
||||
2. Successfully build the closure of the new package set, updating `pkgs/development/cuda-modules/cuda/overrides.nix` as needed. Below are some common failures:
|
||||
|
||||
| Unable to ... | During ... | Reason | Solution | Note |
|
||||
| --- | --- | --- | --- | --- |
|
||||
|
|
|
@ -132,7 +132,6 @@ Arguments to pass to the Go linker tool via the `-ldflags` argument of `go build
|
|||
|
||||
```nix
|
||||
ldflags = [
|
||||
"-s" "-w"
|
||||
"-X main.Version=${version}"
|
||||
"-X main.Commit=${version}"
|
||||
];
|
||||
|
|
|
@ -24,6 +24,7 @@ idris.section.md
|
|||
ios.section.md
|
||||
java.section.md
|
||||
javascript.section.md
|
||||
julia.section.md
|
||||
lisp.section.md
|
||||
lua.section.md
|
||||
maven.section.md
|
||||
|
|
69
third_party/nixpkgs/doc/languages-frameworks/julia.section.md
vendored
Normal file
69
third_party/nixpkgs/doc/languages-frameworks/julia.section.md
vendored
Normal file
|
@ -0,0 +1,69 @@
|
|||
# Julia {#language-julia}
|
||||
|
||||
## Introduction {#julia-introduction}
|
||||
|
||||
Nixpkgs includes Julia as the `julia` derivation.
|
||||
You can get specific versions by looking at the other `julia*` top-level derivations available.
|
||||
For example, `julia_19` corresponds to Julia 1.9.
|
||||
We also provide the current stable version as `julia-stable`, and an LTS version as `julia-lts`.
|
||||
|
||||
Occasionally, a Julia version has been too difficult to build from source in Nixpkgs and has been fetched prebuilt instead.
|
||||
These Julia versions are differentiated with the `*-bin` suffix; for example, `julia-stable-bin`.
|
||||
|
||||
## julia.withPackages {#julia-withpackage}
|
||||
|
||||
The basic Julia derivations only provide the built-in packages that come with the distribution.
|
||||
|
||||
You can build Julia environments with additional packages using the `julia.withPackages` command.
|
||||
This function accepts a list of strings representing Julia package names.
|
||||
For example, you can build a Julia environment with the `Plots` package as follows.
|
||||
|
||||
```nix
|
||||
julia.withPackages ["Plots"]
|
||||
```
|
||||
|
||||
Arguments can be passed using `.override`.
|
||||
For example:
|
||||
|
||||
```nix
|
||||
(julia.withPackages.override {
|
||||
precompile = false; # Turn off precompilation
|
||||
}) ["Plots"]
|
||||
```
|
||||
|
||||
Here's a nice way to run a Julia environment with a shell one-liner:
|
||||
|
||||
```sh
|
||||
nix-shell -p 'julia.withPackages ["Plots"]' --run julia
|
||||
```
|
||||
|
||||
### Arguments {#julia-withpackage-arguments}
|
||||
|
||||
* `precompile`: Whether to run `Pkg.precompile()` on the generated environment.
|
||||
|
||||
This will make package imports faster, but may fail in some cases.
|
||||
For example, there is an upstream issue with `Gtk.jl` that prevents precompilation from working in the Nix build sandbox, because the precompiled code tries to access a display.
|
||||
Packages like this will work fine if you build with `precompile=false`, and then precompile as needed once your environment starts.
|
||||
|
||||
Default: `true`
|
||||
|
||||
* `extraLibs`: Extra library dependencies that will be placed on the `LD_LIBRARY_PATH` for Julia.
|
||||
|
||||
Should not be needed as we try to obtain library dependencies automatically using Julia's artifacts system.
|
||||
|
||||
* `makeWrapperArgs`: Extra arguments to pass to the `makeWrapper` call which we use to wrap the Julia binary.
|
||||
* `setDefaultDepot`: Whether to automatically prepend `$HOME/.julia` to the `JULIA_DEPOT_PATH`.
|
||||
|
||||
This is useful because Julia expects the first entry of the depot path to be writable, and the depot we build in Nixpkgs is not.
|
||||
If there's no writable depot, then Julia will show a warning and be unable to save command history logs etc.
|
||||
|
||||
Default: `true`
|
||||
|
||||
* `packageOverrides`: Allows you to override packages by name by passing an alternative source.
|
||||
|
||||
For example, you can use a custom version of the `LanguageServer` package by passing `packageOverrides = { "LanguageServer" = fetchFromGitHub {...}; }`. A fuller sketch is shown after this list.
|
||||
|
||||
* `augmentedRegistry`: Allows you to change the registry from which Julia packages are drawn.
|
||||
|
||||
This normally points at a special augmented version of the Julia [General packages registry](https://github.com/JuliaRegistries/General).
|
||||
If you want to use a bleeding-edge version to pick up the latest package updates, you can plug in a later revision than the one in Nixpkgs.
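As a hedged sketch of the `packageOverrides` argument mentioned above (the `fetchFromGitHub` arguments are elided, as in the bullet point):

```nix
(julia.withPackages.override {
  packageOverrides = {
    # hypothetical source override; supply the usual fetchFromGitHub arguments
    "LanguageServer" = fetchFromGitHub { /* ... */ };
  };
}) ["LanguageServer"]
```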
|
|
@ -1,74 +1,38 @@
|
|||
# Nim {#nim}
|
||||
|
||||
## Overview {#nim-overview}
|
||||
|
||||
The Nim compiler, a builder function, and some packaged libraries are available
|
||||
in Nixpkgs. Until now each compiler release has been effectively backwards
|
||||
compatible so only the latest version is available.
|
||||
|
||||
## Nim program packages in Nixpkgs {#nim-program-packages-in-nixpkgs}
|
||||
|
||||
Nim programs can be built using `nimPackages.buildNimPackage`. In the
|
||||
case of packages not containing exported library code the attribute
|
||||
`nimBinOnly` should be set to `true`.
|
||||
The Nim compiler and a builder function are available.
|
||||
Nim programs are built using `buildNimPackage` and a lockfile containing Nim dependencies.
|
||||
|
||||
The following example shows a Nim program that depends only on Nim libraries:
|
||||
|
||||
```nix
|
||||
{ lib, nimPackages, fetchFromGitHub }:
|
||||
{ lib, buildNimPackage, fetchFromGitHub }:
|
||||
|
||||
nimPackages.buildNimPackage (finalAttrs: {
|
||||
buildNimPackage { } (finalAttrs: {
|
||||
pname = "ttop";
|
||||
version = "1.0.1";
|
||||
nimBinOnly = true;
|
||||
version = "1.2.7";
|
||||
|
||||
src = fetchFromGitHub {
|
||||
owner = "inv2004";
|
||||
repo = "ttop";
|
||||
rev = "v${finalAttrs.version}";
|
||||
hash = "sha256-x4Uczksh6p3XX/IMrOFtBxIleVHdAPX9e8n32VAUTC4=";
|
||||
hash = "sha256-oPdaUqh6eN1X5kAYVvevOndkB/xnQng9QVLX9bu5P5E=";
|
||||
};
|
||||
|
||||
buildInputs = with nimPackages; [ asciigraph illwill parsetoml zippy ];
|
||||
lockFile = ./lock.json;
|
||||
|
||||
})
|
||||
```
|
||||
|
||||
## Nim library packages in Nixpkgs {#nim-library-packages-in-nixpkgs}
|
||||
|
||||
|
||||
Nim libraries can also be built using `nimPackages.buildNimPackage`, but
|
||||
often the product of a fetcher is sufficient to satisfy a dependency.
|
||||
The `fetchgit`, `fetchFromGitHub`, and `fetchNimble` functions yield an
|
||||
output that can be discovered during the `configurePhase` of `buildNimPackage`.
|
||||
|
||||
Nim library packages are listed in
|
||||
[pkgs/top-level/nim-packages.nix](https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/nim-packages.nix) and implemented at
|
||||
[pkgs/development/nim-packages](https://github.com/NixOS/nixpkgs/tree/master/pkgs/development/nim-packages).
|
||||
|
||||
The following example shows a Nim library that propagates a dependency on a
|
||||
non-Nim package:
|
||||
```nix
|
||||
{ lib, buildNimPackage, fetchNimble, SDL2 }:
|
||||
|
||||
buildNimPackage (finalAttrs: {
|
||||
pname = "sdl2";
|
||||
version = "2.0.4";
|
||||
src = fetchNimble {
|
||||
inherit (finalAttrs) pname version;
|
||||
hash = "sha256-Vtcj8goI4zZPQs2TbFoBFlcR5UqDtOldaXSH/+/xULk=";
|
||||
};
|
||||
propagatedBuildInputs = [ SDL2 ];
|
||||
nimFlags = [
|
||||
"-d:NimblePkgVersion=${finalAttrs.version}"
|
||||
];
|
||||
})
|
||||
```
|
||||
|
||||
## `buildNimPackage` parameters {#buildnimpackage-parameters}
|
||||
|
||||
All parameters from `stdenv.mkDerivation` function are still supported. The
|
||||
following are specific to `buildNimPackage`:
|
||||
The `buildNimPackage` function takes an attrset of parameters that are passed on to `stdenv.mkDerivation`.
|
||||
|
||||
* `nimBinOnly ? false`: If `true` then build only the programs listed in
|
||||
the Nimble file in the packages sources.
|
||||
The following parameters are specific to `buildNimPackage`:
|
||||
|
||||
* `lockFile`: JSON formatted lockfile.
|
||||
* `nimbleFile`: Specify the Nimble file location of the package being built
|
||||
rather than discover the file at build-time.
|
||||
* `nimRelease ? true`: Build the package in *release* mode.
|
||||
|
@ -77,6 +41,85 @@ following are specific to `buildNimPackage`:
|
|||
Use this to specify defines with arguments in the form of `-d:${name}=${value}`.
|
||||
* `nimDoc ? false`: Build and install HTML documentation.
|
||||
|
||||
* `buildInputs ? []`: The packages listed here will be searched for `*.nimble`
|
||||
files which are used to populate the Nim library path. Otherwise the standard
|
||||
behavior is in effect.
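A hedged sketch of how these parameters might appear in a package expression (the values are illustrative and are passed to `buildNimPackage` together with the usual `mkDerivation` attributes):

```nix
{
  lockFile = ./lock.json;        # dependencies, see the Lockfiles section below
  nimbleFile = ./example.nimble; # only needed when the file should not be discovered at build time
  nimRelease = true;             # default
  nimDoc = false;                # default
}
```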
|
||||
## Lockfiles {#nim-lockfiles}
|
||||
Nim lockfiles are created with the `nim_lk` utility.
|
||||
Run `nim_lk` with the source directory as an argument and it will print a lockfile to stdout.
|
||||
```sh
|
||||
$ cd nixpkgs
|
||||
$ nix build -f . ttop.src
|
||||
$ nix run -f . nim_lk ./result | jq --sort-keys > pkgs/by-name/tt/ttop/lock.json
|
||||
```
|
||||
|
||||
## Overriding Nim packages {#nim-overrides}
|
||||
|
||||
The `buildNimPackage` function generates flags and additional build dependencies from the `lockFile` parameter passed to `buildNimPackage`. Using [`overrideAttrs`](#sec-pkg-overrideAttrs) on the final package will apply after this has already been generated, so this can't be used to override the `lockFile` in a package built with `buildNimPackage`. To be able to override parameters before flags and build dependencies are generated from the `lockFile`, use `overrideNimAttrs` instead with the same syntax as `overrideAttrs`:
|
||||
|
||||
```nix
|
||||
pkgs.nitter.overrideNimAttrs {
|
||||
# using a different source which has different dependencies from the standard package
|
||||
src = pkgs.fetchFromGitHub { /* … */ };
|
||||
# new lock file generated from the source
|
||||
lockFile = ./custom-lock.json;
|
||||
}
|
||||
```
|
||||
|
||||
## Lockfile dependency overrides {#nim-lock-overrides}
|
||||
|
||||
The `buildNimPackage` function matches the libraries specified by `lockFile` to an attrset of override functions that are then applied to the package derivation.
|
||||
The default overrides are maintained as the top-level `nimOverrides` attrset at `pkgs/top-level/nim-overrides.nix`.
|
||||
|
||||
For example, to propagate a dependency on SDL2 for lockfiles that select the Nim `sdl2` library, an overlay is added to the set in the `nim-overrides.nix` file:
|
||||
```nix
|
||||
{ lib
|
||||
/* … */
|
||||
, SDL2
|
||||
/* … */
|
||||
}:
|
||||
|
||||
{
|
||||
/* … */
|
||||
sdl2 =
|
||||
lockAttrs:
|
||||
finalAttrs:
|
||||
{ buildInputs ? [ ], ... }:
|
||||
{
|
||||
buildInputs = buildInputs ++ [ SDL2 ];
|
||||
};
|
||||
/* … */
|
||||
}
|
||||
```
|
||||
|
||||
The annotations in the `nim-overrides.nix` set are functions that take three arguments and return a new attrset to be overlayed on the package being built.
|
||||
- lockAttrs: the attrset for this library from within a lockfile. This can be used to implement library version constraints, such as marking libraries as broken or insecure.
|
||||
- finalAttrs: the final attrset passed by `buildNimPackage` to `stdenv.mkDerivation`.
|
||||
- prevAttrs: the attrset produced by initial arguments to `buildNimPackage` and any preceding lockfile overlays.
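For instance, an override can use `lockAttrs` to constrain versions (a hedged sketch for a hypothetical `foo` library; it assumes the lock entry carries a `ref` field with the selected version):

```nix
{
  foo =
    lockAttrs:   # entry from the lockfile
    finalAttrs:  # final attrset passed to stdenv.mkDerivation
    prevAttrs:   # attrset produced by buildNimPackage and preceding overlays
    lib.optionalAttrs (lib.versionOlder (lockAttrs.ref or "0") "2.0.0") {
      meta = (prevAttrs.meta or { }) // { broken = true; };
    };
}
```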
|
||||
|
||||
### Overriding a Nim library override {#nim-lock-overrides-overrides}
|
||||
|
||||
The `nimOverrides` attrset makes it possible to modify overrides in a few different ways.
|
||||
|
||||
Override a package internal to its definition:
|
||||
```nix
|
||||
{ lib, buildNimPackage, nimOverrides, libressl }:
|
||||
|
||||
let
|
||||
buildNimPackage' = buildNimPackage.override {
|
||||
nimOverrides = nimOverrides.override { openssl = libressl; };
|
||||
};
|
||||
in buildNimPackage' (finalAttrs: {
|
||||
pname = "foo";
|
||||
# …
|
||||
})
|
||||
|
||||
```
|
||||
|
||||
Override a package externally:
|
||||
```nix
|
||||
{ pkgs }: {
|
||||
foo = pkgs.foo.override {
|
||||
buildNimPackage = pkgs.buildNimPackage.override {
|
||||
nimOverrides = pkgs.nimOverrides.override { openssl = pkgs.libressl; };
|
||||
};
|
||||
};
|
||||
}
|
||||
```
|
||||
|
|
|
@ -299,14 +299,13 @@ python3Packages.buildPythonApplication rec {
|
|||
hash = "sha256-Pe229rT0aHwA98s+nTHQMEFKZPo/yw6sot8MivFDvAw=";
|
||||
};
|
||||
|
||||
nativeBuildInputs = [
|
||||
python3Packages.setuptools
|
||||
python3Packages.wheel
|
||||
nativeBuildInputs = with python3Packages; [
|
||||
setuptools
|
||||
];
|
||||
|
||||
propagatedBuildInputs = [
|
||||
python3Packages.tornado
|
||||
python3Packages.python-daemon
|
||||
propagatedBuildInputs = with python3Packages; [
|
||||
tornado
|
||||
python-daemon
|
||||
];
|
||||
|
||||
meta = with lib; {
|
||||
|
@ -2061,7 +2060,7 @@ and create update commits, and supports the `fetchPypi`, `fetchurl` and
|
|||
hosted on GitHub, exporting a `GITHUB_API_TOKEN` is highly recommended.
|
||||
|
||||
Updating packages in bulk leads to lots of breakages, which is why a
|
||||
stabilization period on the `python-unstable` branch is required.
|
||||
stabilization period on the `python-updates` branch is required.
|
||||
|
||||
If a package is fragile and often breaks during these bulks updates, it
|
||||
may be reasonable to set `passthru.skipBulkUpdate = true` in the
|
||||
|
|
|
@ -963,7 +963,7 @@ repository:
|
|||
lib.updateManyAttrsByPath [{
|
||||
path = [ "packages" "stable" ];
|
||||
update = old: old.overrideScope(final: prev: {
|
||||
rustc = prev.rustc.overrideAttrs (_: {
|
||||
rustc-unwrapped = prev.rustc-unwrapped.overrideAttrs (_: {
|
||||
src = lib.cleanSource /git/scratch/rust;
|
||||
# do *not* put passthru.isReleaseTarball=true here
|
||||
});
|
||||
|
@ -1003,4 +1003,3 @@ nix-build $NIXPKGS -A package-broken-by-rust-changes
|
|||
The `git submodule update --init` and `cargo vendor` commands above
|
||||
require network access, so they can't be performed from within the
|
||||
`rustc` derivation, unfortunately.
|
||||
|
||||
|
|
|
@ -98,24 +98,30 @@ Release 23.11 ships with a new interface that will eventually replace `texlive.c
|
|||
|
||||
## Custom packages {#sec-language-texlive-custom-packages}
|
||||
|
||||
You may find that you need to use an external TeX package. A derivation for such package has to provide the contents of the "texmf" directory in its output and provide the appropriate `tlType` attribute (one of `"run"`, `"bin"`, `"doc"`, `"source"`). Dependencies on other TeX packages can be listed in the attribute `tlDeps`.
|
||||
You may find that you need to use an external TeX package. A derivation for such a package has to provide the contents of the "texmf" directory in its `"tex"` output, according to the [TeX Directory Structure](https://tug.ctan.org/tds/tds.html). Dependencies on other TeX packages can be listed in the attribute `tlDeps`.
|
||||
|
||||
Such derivation must then be listed in the attribute `pkgs` of an attribute set passed to `texlive.combine`, for instance by passing `extraPkgs = { pkgs = [ custom_package ]; };`. Within Nixpkgs, `pkgs` should be part of the derivation itself, allowing users to call `texlive.combine { inherit (texlive) scheme-small; inherit some_tex_package; }`.
|
||||
The functions `texlive.combine` and `texlive.withPackages` recognise the following outputs:
|
||||
|
||||
Here is a (very verbose) example where the attribute `pkgs` is attached to the derivation itself, which requires creating a fixed point. See also the packages `auctex`, `eukleides`, `mftrace` for more examples.
|
||||
- `"out"`: contents are linked in the TeX Live environment, and binaries in the `$out/bin` folder are wrapped;
|
||||
- `"tex"`: linked in `$TEXMFDIST`; files should follow the TDS (for instance `$tex/tex/latex/foiltex/foiltex.cls`);
|
||||
- `"texdoc"`, `"texsource"`: ignored by default, treated as `"tex"`;
|
||||
- `"tlpkg"`: linked in `$TEXMFROOT/tlpkg`;
|
||||
- `"man"`, `"info"`, ...: the other outputs are combined into separate outputs.
|
||||
|
||||
When using `pkgFilter`, `texlive.combine` will assign the above outputs the `tlType` values `"bin"`, `"run"`, `"doc"`, `"source"`, and `"tlpkg"`, respectively.
|
||||
|
||||
Here is a (very verbose) example. See also the packages `auctex`, `eukleides`, `mftrace` for more examples.
|
||||
|
||||
```nix
|
||||
with import <nixpkgs> {};
|
||||
|
||||
let
|
||||
foiltex = stdenvNoCC.mkDerivation (finalAttrs: {
|
||||
foiltex = stdenvNoCC.mkDerivation {
|
||||
pname = "latex-foiltex";
|
||||
version = "2.1.4b";
|
||||
passthru = {
|
||||
pkgs = [ finalAttrs.finalPackage ];
|
||||
tlDeps = with texlive; [ latex ];
|
||||
tlType = "run";
|
||||
};
|
||||
|
||||
outputs = [ "tex" "texdoc" ];
|
||||
passthru.tlDeps = with texlive; [ latex ];
|
||||
|
||||
srcs = [
|
||||
(fetchurl {
|
||||
|
@ -138,7 +144,13 @@ let
|
|||
runHook postUnpack
|
||||
'';
|
||||
|
||||
nativeBuildInputs = [ texlive.combined.scheme-small ];
|
||||
nativeBuildInputs = [
|
||||
(texliveSmall.withPackages (ps: with ps; [ cm-super hypdoc latexmk ]))
|
||||
# multiple-outputs.sh fails if $out is not defined
|
||||
(writeShellScript "force-tex-output.sh" ''
|
||||
out="''${tex-}"
|
||||
'')
|
||||
];
|
||||
|
||||
dontConfigure = true;
|
||||
|
||||
|
@ -148,15 +160,23 @@ let
|
|||
# Generate the style files
|
||||
latex foiltex.ins
|
||||
|
||||
# Generate the documentation
|
||||
export HOME=.
|
||||
latexmk -pdf foiltex.dtx
|
||||
|
||||
runHook postBuild
|
||||
'';
|
||||
|
||||
installPhase = ''
|
||||
runHook preInstall
|
||||
|
||||
path="$out/tex/latex/foiltex"
|
||||
path="$tex/tex/latex/foiltex"
|
||||
mkdir -p "$path"
|
||||
cp *.{cls,def,clo} "$path/"
|
||||
cp *.{cls,def,clo,sty} "$path/"
|
||||
|
||||
path="$texdoc/doc/tex/latex/foiltex"
|
||||
mkdir -p "$path"
|
||||
cp *.pdf "$path/"
|
||||
|
||||
runHook postInstall
|
||||
'';
|
||||
|
@ -167,12 +187,9 @@ let
|
|||
maintainers = with maintainers; [ veprbl ];
|
||||
platforms = platforms.all;
|
||||
};
|
||||
});
|
||||
|
||||
latex_with_foiltex = texlive.combine {
|
||||
inherit (texlive) scheme-small;
|
||||
inherit foiltex;
|
||||
};
|
||||
|
||||
latex_with_foiltex = texliveSmall.withPackages (_: [ foiltex ]);
|
||||
in
|
||||
runCommand "test.pdf" {
|
||||
nativeBuildInputs = [ latex_with_foiltex ];
|
||||
|
|
294
third_party/nixpkgs/doc/manpage-urls.json
vendored
294
third_party/nixpkgs/doc/manpage-urls.json
vendored
|
@ -1,32 +1,318 @@
|
|||
{
|
||||
"gnunet.conf(5)": "https://docs.gnunet.org/users/configuration.html",
|
||||
"gnunet.conf(5)": "https://docs.gnunet.org/latest/users/configuration.html",
|
||||
"mpd(1)": "https://mpd.readthedocs.io/en/latest/mpd.1.html",
|
||||
"mpd.conf(5)": "https://mpd.readthedocs.io/en/latest/mpd.conf.5.html",
|
||||
"nix.conf(5)": "https://nixos.org/manual/nix/stable/command-ref/conf-file.html",
|
||||
|
||||
"portals.conf(5)": "https://github.com/flatpak/xdg-desktop-portal/blob/1.18.1/doc/portals.conf.rst.in",
|
||||
|
||||
"bootctl(1)": "https://www.freedesktop.org/software/systemd/man/bootctl.html",
|
||||
"busctl(1)": "https://www.freedesktop.org/software/systemd/man/busctl.html",
|
||||
"coredumpctl(1)": "https://www.freedesktop.org/software/systemd/man/coredumpctl.html",
|
||||
"homectl(1)": "https://www.freedesktop.org/software/systemd/man/homectl.html",
|
||||
"hostnamectl(1)": "https://www.freedesktop.org/software/systemd/man/hostnamectl.html",
|
||||
"init(1)": "https://www.freedesktop.org/software/systemd/man/init.html",
|
||||
"journalctl(1)": "https://www.freedesktop.org/software/systemd/man/journalctl.html",
|
||||
"localectl(1)": "https://www.freedesktop.org/software/systemd/man/localectl.html",
|
||||
"loginctl(1)": "https://www.freedesktop.org/software/systemd/man/loginctl.html",
|
||||
"machinectl(1)": "https://www.freedesktop.org/software/systemd/man/machinectl.html",
|
||||
"mount.ddi(1)": "https://www.freedesktop.org/software/systemd/man/mount.ddi.html",
|
||||
"networkctl(1)": "https://www.freedesktop.org/software/systemd/man/networkctl.html",
|
||||
"oomctl(1)": "https://www.freedesktop.org/software/systemd/man/oomctl.html",
|
||||
"portablectl(1)": "https://www.freedesktop.org/software/systemd/man/portablectl.html",
|
||||
"resolvconf(1)": "https://www.freedesktop.org/software/systemd/man/resolvconf.html",
|
||||
"resolvectl(1)": "https://www.freedesktop.org/software/systemd/man/resolvectl.html",
|
||||
"systemctl(1)": "https://www.freedesktop.org/software/systemd/man/systemctl.html",
|
||||
"systemd-ac-power(1)": "https://www.freedesktop.org/software/systemd/man/systemd-ac-power.html",
|
||||
"systemd-analyze(1)": "https://www.freedesktop.org/software/systemd/man/systemd-analyze.html",
|
||||
"systemd-ask-password(1)": "https://www.freedesktop.org/software/systemd/man/systemd-ask-password.html",
|
||||
"systemd-cat(1)": "https://www.freedesktop.org/software/systemd/man/systemd-cat.html",
|
||||
"systemd-cgls(1)": "https://www.freedesktop.org/software/systemd/man/systemd-cgls.html",
|
||||
"systemd-cgtop(1)": "https://www.freedesktop.org/software/systemd/man/systemd-cgtop.html",
|
||||
"systemd-creds(1)": "https://www.freedesktop.org/software/systemd/man/systemd-creds.html",
|
||||
"systemd-cryptenroll(1)": "https://www.freedesktop.org/software/systemd/man/systemd-cryptenroll.html",
|
||||
"systemd-delta(1)": "https://www.freedesktop.org/software/systemd/man/systemd-delta.html",
|
||||
"systemd-detect-virt(1)": "https://www.freedesktop.org/software/systemd/man/systemd-detect-virt.html",
|
||||
"systemd-dissect(1)": "https://www.freedesktop.org/software/systemd/man/systemd-dissect.html",
|
||||
"systemd-escape(1)": "https://www.freedesktop.org/software/systemd/man/systemd-escape.html",
|
||||
"systemd-id128(1)": "https://www.freedesktop.org/software/systemd/man/systemd-id128.html",
|
||||
"systemd-inhibit(1)": "https://www.freedesktop.org/software/systemd/man/systemd-inhibit.html",
|
||||
"systemd-machine-id-setup(1)": "https://www.freedesktop.org/software/systemd/man/systemd-machine-id-setup.html",
|
||||
"systemd-measure(1)": "https://www.freedesktop.org/software/systemd/man/systemd-measure.html",
|
||||
"systemd-mount(1)": "https://www.freedesktop.org/software/systemd/man/systemd-mount.html",
|
||||
"systemd-notify(1)": "https://www.freedesktop.org/software/systemd/man/systemd-notify.html",
|
||||
"systemd-nspawn(1)": "https://www.freedesktop.org/software/systemd/man/systemd-nspawn.html",
|
||||
"systemd-path(1)": "https://www.freedesktop.org/software/systemd/man/systemd-path.html",
|
||||
"systemd-run(1)": "https://www.freedesktop.org/software/systemd/man/systemd-run.html",
|
||||
"systemd-socket-activate(1)": "https://www.freedesktop.org/software/systemd/man/systemd-socket-activate.html",
|
||||
"systemd-stdio-bridge(1)": "https://www.freedesktop.org/software/systemd/man/systemd-stdio-bridge.html",
|
||||
"systemd-tty-ask-password-agent(1)": "https://www.freedesktop.org/software/systemd/man/systemd-tty-ask-password-agent.html",
|
||||
"systemd-umount(1)": "https://www.freedesktop.org/software/systemd/man/systemd-umount.html",
|
||||
"systemd(1)": "https://www.freedesktop.org/software/systemd/man/systemd.html",
|
||||
"timedatectl(1)": "https://www.freedesktop.org/software/systemd/man/timedatectl.html",
|
||||
"userdbctl(1)": "https://www.freedesktop.org/software/systemd/man/userdbctl.html",
|
||||
"binfmt.d(5)": "https://www.freedesktop.org/software/systemd/man/binfmt.d.html",
|
||||
"coredump.conf(5)": "https://www.freedesktop.org/software/systemd/man/coredump.conf.html",
|
||||
"coredump.conf.d(5)": "https://www.freedesktop.org/software/systemd/man/coredump.conf.d.html",
|
||||
"crypttab(5)": "https://www.freedesktop.org/software/systemd/man/crypttab.html",
|
||||
"dnssec-trust-anchors.d(5)": "https://www.freedesktop.org/software/systemd/man/dnssec-trust-anchors.d.html",
|
||||
"environment.d(5)": "https://www.freedesktop.org/software/systemd/man/environment.d.html",
|
||||
"extension-release(5)": "https://www.freedesktop.org/software/systemd/man/extension-release.html",
|
||||
"homed.conf(5)": "https://www.freedesktop.org/software/systemd/man/homed.conf.html",
|
||||
"homed.conf.d(5)": "https://www.freedesktop.org/software/systemd/man/homed.conf.d.html",
|
||||
"hostname(5)": "https://www.freedesktop.org/software/systemd/man/hostname.html",
|
||||
"initrd-release(5)": "https://www.freedesktop.org/software/systemd/man/initrd-release.html",
|
||||
"integritytab(5)": "https://www.freedesktop.org/software/systemd/man/integritytab.html",
|
||||
"iocost.conf(5)": "https://www.freedesktop.org/software/systemd/man/iocost.conf.html",
|
||||
"journal-remote.conf(5)": "https://www.freedesktop.org/software/systemd/man/journal-remote.conf.html",
|
||||
"journal-remote.conf.d(5)": "https://www.freedesktop.org/software/systemd/man/journal-remote.conf.d.html",
|
||||
"journal-upload.conf(5)": "https://www.freedesktop.org/software/systemd/man/journal-upload.conf.html",
|
||||
"journal-upload.conf.d(5)": "https://www.freedesktop.org/software/systemd/man/journal-upload.conf.d.html",
|
||||
"journald.conf(5)": "https://www.freedesktop.org/software/systemd/man/journald.conf.html",
|
||||
"journald.conf.d(5)": "https://www.freedesktop.org/software/systemd/man/journald.conf.d.html",
|
||||
"journald@.conf(5)": "https://www.freedesktop.org/software/systemd/man/journald@.conf.html",
|
||||
"loader.conf(5)": "https://www.freedesktop.org/software/systemd/man/loader.conf.html",
|
||||
"locale.conf(5)": "https://www.freedesktop.org/software/systemd/man/locale.conf.html",
|
||||
"localtime(5)": "https://www.freedesktop.org/software/systemd/man/localtime.html",
|
||||
"logind.conf(5)": "https://www.freedesktop.org/software/systemd/man/logind.conf.html",
|
||||
"logind.conf.d(5)": "https://www.freedesktop.org/software/systemd/man/logind.conf.d.html",
|
||||
"machine-id(5)": "https://www.freedesktop.org/software/systemd/man/machine-id.html",
|
||||
"machine-info(5)": "https://www.freedesktop.org/software/systemd/man/machine-info.html",
|
||||
"modules-load.d(5)": "https://www.freedesktop.org/software/systemd/man/modules-load.d.html",
|
||||
"networkd.conf(5)": "https://www.freedesktop.org/software/systemd/man/networkd.conf.html",
|
||||
"networkd.conf.d(5)": "https://www.freedesktop.org/software/systemd/man/networkd.conf.d.html",
|
||||
"oomd.conf(5)": "https://www.freedesktop.org/software/systemd/man/oomd.conf.html",
|
||||
"oomd.conf.d(5)": "https://www.freedesktop.org/software/systemd/man/oomd.conf.d.html",
|
||||
"org.freedesktop.LogControl1(5)": "https://www.freedesktop.org/software/systemd/man/org.freedesktop.LogControl1.html",
|
||||
"org.freedesktop.home1(5)": "https://www.freedesktop.org/software/systemd/man/org.freedesktop.home1.html",
|
||||
"org.freedesktop.hostname1(5)": "https://www.freedesktop.org/software/systemd/man/org.freedesktop.hostname1.html",
|
||||
"org.freedesktop.import1(5)": "https://www.freedesktop.org/software/systemd/man/org.freedesktop.import1.html",
|
||||
"org.freedesktop.locale1(5)": "https://www.freedesktop.org/software/systemd/man/org.freedesktop.locale1.html",
|
||||
"org.freedesktop.login1(5)": "https://www.freedesktop.org/software/systemd/man/org.freedesktop.login1.html",
|
||||
"org.freedesktop.machine1(5)": "https://www.freedesktop.org/software/systemd/man/org.freedesktop.machine1.html",
|
||||
"org.freedesktop.network1(5)": "https://www.freedesktop.org/software/systemd/man/org.freedesktop.network1.html",
|
||||
"org.freedesktop.oom1(5)": "https://www.freedesktop.org/software/systemd/man/org.freedesktop.oom1.html",
|
||||
"org.freedesktop.portable1(5)": "https://www.freedesktop.org/software/systemd/man/org.freedesktop.portable1.html",
|
||||
"org.freedesktop.resolve1(5)": "https://www.freedesktop.org/software/systemd/man/org.freedesktop.resolve1.html",
|
||||
"org.freedesktop.systemd1(5)": "https://www.freedesktop.org/software/systemd/man/org.freedesktop.systemd1.html",
|
||||
"org.freedesktop.timedate1(5)": "https://www.freedesktop.org/software/systemd/man/org.freedesktop.timedate1.html",
|
||||
"os-release(5)": "https://www.freedesktop.org/software/systemd/man/os-release.html",
|
||||
"pstore.conf(5)": "https://www.freedesktop.org/software/systemd/man/pstore.conf.html",
|
||||
"pstore.conf.d(5)": "https://www.freedesktop.org/software/systemd/man/pstore.conf.d.html",
|
||||
"repart.d(5)": "https://www.freedesktop.org/software/systemd/man/repart.d.html",
|
||||
"resolved.conf(5)": "https://www.freedesktop.org/software/systemd/man/resolved.conf.html",
|
||||
"resolved.conf.d(5)": "https://www.freedesktop.org/software/systemd/man/resolved.conf.d.html",
|
||||
"sleep.conf.d(5)": "https://www.freedesktop.org/software/systemd/man/sleep.conf.d.html",
|
||||
"sysctl.d(5)": "https://www.freedesktop.org/software/systemd/man/sysctl.d.html",
|
||||
"system.conf.d(5)": "https://www.freedesktop.org/software/systemd/man/system.conf.d.html",
|
||||
"systemd-sleep.conf(5)": "https://www.freedesktop.org/software/systemd/man/systemd-sleep.conf.html",
|
||||
"systemd-system.conf(5)": "https://www.freedesktop.org/software/systemd/man/systemd-system.conf.html",
|
||||
"systemd-user-runtime-dir(5)": "https://www.freedesktop.org/software/systemd/man/systemd-user-runtime-dir.html",
|
||||
"systemd-user.conf(5)": "https://www.freedesktop.org/software/systemd/man/systemd-user.conf.html",
|
||||
"systemd.automount(5)": "https://www.freedesktop.org/software/systemd/man/systemd.automount.html",
|
||||
"systemd.device(5)": "https://www.freedesktop.org/software/systemd/man/systemd.device.html",
|
||||
"systemd.dnssd(5)": "https://www.freedesktop.org/software/systemd/man/systemd.dnssd.html",
|
||||
"systemd.exec(5)": "https://www.freedesktop.org/software/systemd/man/systemd.exec.html",
|
||||
"systemd.kill(5)": "https://www.freedesktop.org/software/systemd/man/systemd.kill.html",
|
||||
"systemd.link(5)": "https://www.freedesktop.org/software/systemd/man/systemd.link.html",
|
||||
"systemd.mount(5)": "https://www.freedesktop.org/software/systemd/man/systemd.mount.html",
|
||||
"systemd.negative(5)": "https://www.freedesktop.org/software/systemd/man/systemd.negative.html",
|
||||
"systemd.netdev(5)": "https://www.freedesktop.org/software/systemd/man/systemd.netdev.html",
|
||||
"systemd.network(5)": "https://www.freedesktop.org/software/systemd/man/systemd.network.html",
|
||||
"systemd.nspawn(5)": "https://www.freedesktop.org/software/systemd/man/systemd.nspawn.html",
|
||||
"systemd.path(5)": "https://www.freedesktop.org/software/systemd/man/systemd.path.html",
|
||||
"systemd.positive(5)": "https://www.freedesktop.org/software/systemd/man/systemd.positive.html",
|
||||
"systemd.preset(5)": "https://www.freedesktop.org/software/systemd/man/systemd.preset.html",
|
||||
"systemd.resource-control(5)": "https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html",
|
||||
"systemd.scope(5)": "https://www.freedesktop.org/software/systemd/man/systemd.scope.html",
|
||||
"systemd.service(5)": "https://www.freedesktop.org/software/systemd/man/systemd.service.html",
|
||||
"systemd.slice(5)": "https://www.freedesktop.org/software/systemd/man/systemd.slice.html",
|
||||
"systemd.socket(5)": "https://www.freedesktop.org/software/systemd/man/systemd.socket.html",
|
||||
"systemd.swap(5)": "https://www.freedesktop.org/software/systemd/man/systemd.swap.html",
|
||||
"systemd.target(5)": "https://www.freedesktop.org/software/systemd/man/systemd.target.html",
|
||||
"systemd.timer(5)": "https://www.freedesktop.org/software/systemd/man/systemd.timer.html",
|
||||
"systemd.unit(5)": "https://www.freedesktop.org/software/systemd/man/systemd.unit.html",
|
||||
"systemd-system.conf(5)": "https://www.freedesktop.org/software/systemd/man/systemd-system.conf.html",
|
||||
"systemd-user.conf(5)": "https://www.freedesktop.org/software/systemd/man/systemd-user.conf.html",
|
||||
"sysupdate.d(5)": "https://www.freedesktop.org/software/systemd/man/sysupdate.d.html",
|
||||
"sysusers.d(5)": "https://www.freedesktop.org/software/systemd/man/sysusers.d.html",
|
||||
"timesyncd.conf(5)": "https://www.freedesktop.org/software/systemd/man/timesyncd.conf.html",
|
||||
"timesyncd.conf.d(5)": "https://www.freedesktop.org/software/systemd/man/timesyncd.conf.d.html",
|
||||
"tmpfiles.d(5)": "https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html",
|
||||
"udev.conf(5)": "https://www.freedesktop.org/software/systemd/man/udev.conf.html",
|
||||
"user-runtime-dir@.service(5)": "https://www.freedesktop.org/software/systemd/man/user-runtime-dir@.service.html",
|
||||
"user.conf.d(5)": "https://www.freedesktop.org/software/systemd/man/user.conf.d.html",
|
||||
"user@.service(5)": "https://www.freedesktop.org/software/systemd/man/user@.service.html",
|
||||
"vconsole.conf(5)": "https://www.freedesktop.org/software/systemd/man/vconsole.conf.html",
|
||||
"veritytab(5)": "https://www.freedesktop.org/software/systemd/man/veritytab.html",
|
||||
"bootup(7)": "https://www.freedesktop.org/software/systemd/man/bootup.html",
|
||||
"daemon(7)": "https://www.freedesktop.org/software/systemd/man/daemon.html",
|
||||
"file-hierarchy(7)": "https://www.freedesktop.org/software/systemd/man/file-hierarchy.html",
|
||||
"hwdb(7)": "https://www.freedesktop.org/software/systemd/man/hwdb.html",
|
||||
"kernel-command-line(7)": "https://www.freedesktop.org/software/systemd/man/kernel-command-line.html",
|
||||
"linuxaa64.efi.stub(7)": "https://www.freedesktop.org/software/systemd/man/linuxaa64.efi.stub.html",
|
||||
"linuxia32.efi.stub(7)": "https://www.freedesktop.org/software/systemd/man/linuxia32.efi.stub.html",
|
||||
"linuxx64.efi.stub(7)": "https://www.freedesktop.org/software/systemd/man/linuxx64.efi.stub.html",
|
||||
"sd-boot(7)": "https://www.freedesktop.org/software/systemd/man/sd-boot.html",
|
||||
"sd-stub(7)": "https://www.freedesktop.org/software/systemd/man/sd-stub.html",
|
||||
"smbios-type-11(7)": "https://www.freedesktop.org/software/systemd/man/smbios-type-11.html",
|
||||
"systemd-boot(7)": "https://www.freedesktop.org/software/systemd/man/systemd-boot.html",
|
||||
"systemd-stub(7)": "https://www.freedesktop.org/software/systemd/man/systemd-stub.html",
|
||||
"systemd.directives(7)": "https://www.freedesktop.org/software/systemd/man/systemd.directives.html",
|
||||
"systemd.environment-generator(7)": "https://www.freedesktop.org/software/systemd/man/systemd.environment-generator.html",
|
||||
"systemd.generator(7)": "https://www.freedesktop.org/software/systemd/man/systemd.generator.html",
|
||||
"systemd.image-policy(7)": "https://www.freedesktop.org/software/systemd/man/systemd.image-policy.html",
|
||||
"systemd.index(7)": "https://www.freedesktop.org/software/systemd/man/systemd.index.html",
|
||||
"systemd.journal-fields(7)": "https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html",
|
||||
"systemd.net-naming-scheme(7)": "https://www.freedesktop.org/software/systemd/man/systemd.net-naming-scheme.html",
|
||||
"systemd.offline-updates(7)": "https://www.freedesktop.org/software/systemd/man/systemd.offline-updates.html",
|
||||
"systemd.special(7)": "https://www.freedesktop.org/software/systemd/man/systemd.special.html",
|
||||
"systemd.syntax(7)": "https://www.freedesktop.org/software/systemd/man/systemd.syntax.html",
|
||||
"systemd.system-credentials(7)": "https://www.freedesktop.org/software/systemd/man/systemd.system-credentials.html",
|
||||
"systemd.time(7)": "https://www.freedesktop.org/software/systemd/man/systemd.time.html",
|
||||
"udev(7)": "https://www.freedesktop.org/software/systemd/man/udev.html",
|
||||
"30-systemd-environment-d-generator(8)": "https://www.freedesktop.org/software/systemd/man/30-systemd-environment-d-generator.html",
|
||||
"halt(8)": "https://www.freedesktop.org/software/systemd/man/halt.html",
|
||||
"kernel-install(8)": "https://www.freedesktop.org/software/systemd/man/kernel-install.html",
|
||||
"libnss_myhostname.so.2(8)": "https://www.freedesktop.org/software/systemd/man/libnss_myhostname.so.2.html",
|
||||
"libnss_mymachines.so.2(8)": "https://www.freedesktop.org/software/systemd/man/libnss_mymachines.so.2.html",
|
||||
"libnss_resolve.so.2(8)": "https://www.freedesktop.org/software/systemd/man/libnss_resolve.so.2.html",
|
||||
"libnss_systemd.so.2(8)": "https://www.freedesktop.org/software/systemd/man/libnss_systemd.so.2.html",
|
||||
"nss-myhostname(8)": "https://www.freedesktop.org/software/systemd/man/nss-myhostname.html",
|
||||
"nss-mymachines(8)": "https://www.freedesktop.org/software/systemd/man/nss-mymachines.html",
|
||||
"nss-resolve(8)": "https://www.freedesktop.org/software/systemd/man/nss-resolve.html",
|
||||
"nss-systemd(8)": "https://www.freedesktop.org/software/systemd/man/nss-systemd.html",
|
||||
"pam_systemd(8)": "https://www.freedesktop.org/software/systemd/man/pam_systemd.html",
|
||||
"pam_systemd_home(8)": "https://www.freedesktop.org/software/systemd/man/pam_systemd_home.html",
|
||||
"poweroff(8)": "https://www.freedesktop.org/software/systemd/man/poweroff.html",
|
||||
"reboot(8)": "https://www.freedesktop.org/software/systemd/man/reboot.html",
|
||||
"shutdown(8)": "https://www.freedesktop.org/software/systemd/man/shutdown.html",
|
||||
"systemd-ask-password-console.path(8)": "https://www.freedesktop.org/software/systemd/man/systemd-ask-password-console.path.html",
|
||||
"systemd-ask-password-console.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-ask-password-console.service.html",
|
||||
"systemd-ask-password-wall.path(8)": "https://www.freedesktop.org/software/systemd/man/systemd-ask-password-wall.path.html",
|
||||
"systemd-ask-password-wall.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-ask-password-wall.service.html",
|
||||
"systemd-backlight(8)": "https://www.freedesktop.org/software/systemd/man/systemd-backlight.html",
|
||||
"systemd-backlight@.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-backlight@.service.html",
|
||||
"systemd-battery-check(8)": "https://www.freedesktop.org/software/systemd/man/systemd-battery-check.html",
|
||||
"systemd-binfmt(8)": "https://www.freedesktop.org/software/systemd/man/systemd-binfmt.html",
|
||||
"systemd-bless-boot-generator(8)": "https://www.freedesktop.org/software/systemd/man/systemd-bless-boot-generator.html",
|
||||
"systemd-bless-boot(8)": "https://www.freedesktop.org/software/systemd/man/systemd-bless-boot.html",
|
||||
"systemd-boot-check-no-failures(8)": "https://www.freedesktop.org/software/systemd/man/systemd-boot-check-no-failures.html",
|
||||
"systemd-boot-random-seed.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-boot-random-seed.service.html",
|
||||
"systemd-confext(8)": "https://www.freedesktop.org/software/systemd/man/systemd-confext.html",
|
||||
"systemd-confext.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-confext.service.html",
|
||||
"systemd-coredump(8)": "https://www.freedesktop.org/software/systemd/man/systemd-coredump.html",
|
||||
"systemd-coredump.socket(8)": "https://www.freedesktop.org/software/systemd/man/systemd-coredump.socket.html",
|
||||
"systemd-coredump@.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-coredump@.service.html",
|
||||
"systemd-cryptsetup-generator(8)": "https://www.freedesktop.org/software/systemd/man/systemd-cryptsetup-generator.html",
|
||||
"systemd-cryptsetup(8)": "https://www.freedesktop.org/software/systemd/man/systemd-cryptsetup.html",
|
||||
"systemd-cryptsetup@.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-cryptsetup@.service.html",
|
||||
"systemd-debug-generator(8)": "https://www.freedesktop.org/software/systemd/man/systemd-debug-generator.html",
|
||||
"systemd-environment-d-generator(8)": "https://www.freedesktop.org/software/systemd/man/systemd-environment-d-generator.html",
|
||||
"systemd-fsck-root.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-fsck-root.service.html",
|
||||
"systemd-fsck-usr.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-fsck-usr.service.html",
|
||||
"systemd-fsck(8)": "https://www.freedesktop.org/software/systemd/man/systemd-fsck.html",
|
||||
"systemd-fsck@.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-fsck@.service.html",
|
||||
"systemd-fstab-generator(8)": "https://www.freedesktop.org/software/systemd/man/systemd-fstab-generator.html",
|
||||
"systemd-networkd-wait-online.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-networkd-wait-online.service.html"
|
||||
"systemd-getty-generator(8)": "https://www.freedesktop.org/software/systemd/man/systemd-getty-generator.html",
|
||||
"systemd-gpt-auto-generator(8)": "https://www.freedesktop.org/software/systemd/man/systemd-gpt-auto-generator.html",
|
||||
"systemd-growfs-root.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-growfs-root.service.html",
|
||||
"systemd-growfs(8)": "https://www.freedesktop.org/software/systemd/man/systemd-growfs.html",
|
||||
"systemd-growfs@.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-growfs@.service.html",
|
||||
"systemd-halt.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-halt.service.html",
|
||||
"systemd-hibernate-resume-generator(8)": "https://www.freedesktop.org/software/systemd/man/systemd-hibernate-resume-generator.html",
|
||||
"systemd-hibernate-resume(8)": "https://www.freedesktop.org/software/systemd/man/systemd-hibernate-resume.html",
|
||||
"systemd-hibernate.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-hibernate.service.html",
|
||||
"systemd-homed(8)": "https://www.freedesktop.org/software/systemd/man/systemd-homed.html",
|
||||
"systemd-hostnamed(8)": "https://www.freedesktop.org/software/systemd/man/systemd-hostnamed.html",
|
||||
"systemd-hwdb(8)": "https://www.freedesktop.org/software/systemd/man/systemd-hwdb.html",
|
||||
"systemd-hybrid-sleep.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-hybrid-sleep.service.html",
|
||||
"systemd-importd(8)": "https://www.freedesktop.org/software/systemd/man/systemd-importd.html",
|
||||
"systemd-integritysetup-generator(8)": "https://www.freedesktop.org/software/systemd/man/systemd-integritysetup-generator.html",
|
||||
"systemd-integritysetup(8)": "https://www.freedesktop.org/software/systemd/man/systemd-integritysetup.html",
|
||||
"systemd-integritysetup@.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-integritysetup@.service.html",
|
||||
"systemd-journal-gatewayd(8)": "https://www.freedesktop.org/software/systemd/man/systemd-journal-gatewayd.html",
|
||||
"systemd-journal-gatewayd.socket(8)": "https://www.freedesktop.org/software/systemd/man/systemd-journal-gatewayd.socket.html",
|
||||
"systemd-journal-remote(8)": "https://www.freedesktop.org/software/systemd/man/systemd-journal-remote.html",
|
||||
"systemd-journal-remote.socket(8)": "https://www.freedesktop.org/software/systemd/man/systemd-journal-remote.socket.html",
|
||||
"systemd-journal-upload(8)": "https://www.freedesktop.org/software/systemd/man/systemd-journal-upload.html",
|
||||
"systemd-journald-audit.socket(8)": "https://www.freedesktop.org/software/systemd/man/systemd-journald-audit.socket.html",
|
||||
"systemd-journald-dev-log.socket(8)": "https://www.freedesktop.org/software/systemd/man/systemd-journald-dev-log.socket.html",
|
||||
"systemd-journald-varlink@.socket(8)": "https://www.freedesktop.org/software/systemd/man/systemd-journald-varlink@.socket.html",
|
||||
"systemd-journald(8)": "https://www.freedesktop.org/software/systemd/man/systemd-journald.html",
|
||||
"systemd-journald.socket(8)": "https://www.freedesktop.org/software/systemd/man/systemd-journald.socket.html",
|
||||
"systemd-journald@.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-journald@.service.html",
|
||||
"systemd-journald@.socket(8)": "https://www.freedesktop.org/software/systemd/man/systemd-journald@.socket.html",
|
||||
"systemd-kexec.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-kexec.service.html",
|
||||
"systemd-localed(8)": "https://www.freedesktop.org/software/systemd/man/systemd-localed.html",
|
||||
"systemd-logind(8)": "https://www.freedesktop.org/software/systemd/man/systemd-logind.html",
|
||||
"systemd-machine-id-commit.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-machine-id-commit.service.html",
|
||||
"systemd-machined(8)": "https://www.freedesktop.org/software/systemd/man/systemd-machined.html",
|
||||
"systemd-makefs(8)": "https://www.freedesktop.org/software/systemd/man/systemd-makefs.html",
|
||||
"systemd-makefs@.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-makefs@.service.html",
|
||||
"systemd-mkswap@.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-mkswap@.service.html",
|
||||
"systemd-modules-load(8)": "https://www.freedesktop.org/software/systemd/man/systemd-modules-load.html",
|
||||
"systemd-network-generator(8)": "https://www.freedesktop.org/software/systemd/man/systemd-network-generator.html",
|
||||
"systemd-networkd-wait-online(8)": "https://www.freedesktop.org/software/systemd/man/systemd-networkd-wait-online.html",
|
||||
"systemd-networkd-wait-online@.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-networkd-wait-online@.service.html",
|
||||
"systemd-networkd(8)": "https://www.freedesktop.org/software/systemd/man/systemd-networkd.html",
|
||||
"systemd-oomd(8)": "https://www.freedesktop.org/software/systemd/man/systemd-oomd.html",
|
||||
"systemd-pcrfs-root.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-pcrfs-root.service.html",
|
||||
"systemd-pcrfs@.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-pcrfs@.service.html",
|
||||
"systemd-pcrmachine.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-pcrmachine.service.html",
|
||||
"systemd-pcrphase-initrd.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-pcrphase-initrd.service.html",
|
||||
"systemd-pcrphase-sysinit.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-pcrphase-sysinit.service.html",
|
||||
"systemd-pcrphase(8)": "https://www.freedesktop.org/software/systemd/man/systemd-pcrphase.html",
|
||||
"systemd-portabled(8)": "https://www.freedesktop.org/software/systemd/man/systemd-portabled.html",
|
||||
"systemd-poweroff.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-poweroff.service.html",
|
||||
"systemd-pstore(8)": "https://www.freedesktop.org/software/systemd/man/systemd-pstore.html",
|
||||
"systemd-random-seed(8)": "https://www.freedesktop.org/software/systemd/man/systemd-random-seed.html",
|
||||
"systemd-reboot.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-reboot.service.html",
|
||||
"systemd-remount-fs(8)": "https://www.freedesktop.org/software/systemd/man/systemd-remount-fs.html",
|
||||
"systemd-repart(8)": "https://www.freedesktop.org/software/systemd/man/systemd-repart.html",
|
||||
"systemd-repart.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-repart.service.html",
|
||||
"systemd-resolved(8)": "https://www.freedesktop.org/software/systemd/man/systemd-resolved.html",
|
||||
"systemd-rfkill(8)": "https://www.freedesktop.org/software/systemd/man/systemd-rfkill.html",
|
||||
"systemd-rfkill.socket(8)": "https://www.freedesktop.org/software/systemd/man/systemd-rfkill.socket.html",
|
||||
"systemd-run-generator(8)": "https://www.freedesktop.org/software/systemd/man/systemd-run-generator.html",
|
||||
"systemd-shutdown(8)": "https://www.freedesktop.org/software/systemd/man/systemd-shutdown.html",
|
||||
"systemd-sleep(8)": "https://www.freedesktop.org/software/systemd/man/systemd-sleep.html",
|
||||
"systemd-socket-proxyd(8)": "https://www.freedesktop.org/software/systemd/man/systemd-socket-proxyd.html",
|
||||
"systemd-soft-reboot.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-soft-reboot.service.html",
|
||||
"systemd-suspend-then-hibernate.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-suspend-then-hibernate.service.html",
|
||||
"systemd-suspend.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-suspend.service.html",
|
||||
"systemd-sysctl(8)": "https://www.freedesktop.org/software/systemd/man/systemd-sysctl.html",
|
||||
"systemd-sysext(8)": "https://www.freedesktop.org/software/systemd/man/systemd-sysext.html",
|
||||
"systemd-sysext.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-sysext.service.html",
|
||||
"systemd-system-update-generator(8)": "https://www.freedesktop.org/software/systemd/man/systemd-system-update-generator.html",
|
||||
"systemd-sysupdate-reboot.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-sysupdate-reboot.service.html",
|
||||
"systemd-sysupdate-reboot.timer(8)": "https://www.freedesktop.org/software/systemd/man/systemd-sysupdate-reboot.timer.html",
|
||||
"systemd-sysupdate(8)": "https://www.freedesktop.org/software/systemd/man/systemd-sysupdate.html",
|
||||
"systemd-sysupdate.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-sysupdate.service.html",
|
||||
"systemd-sysupdate.timer(8)": "https://www.freedesktop.org/software/systemd/man/systemd-sysupdate.timer.html",
|
||||
"systemd-sysusers(8)": "https://www.freedesktop.org/software/systemd/man/systemd-sysusers.html",
|
||||
"systemd-sysusers.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-sysusers.service.html",
|
||||
"systemd-time-wait-sync(8)": "https://www.freedesktop.org/software/systemd/man/systemd-time-wait-sync.html",
|
||||
"systemd-timedated(8)": "https://www.freedesktop.org/software/systemd/man/systemd-timedated.html",
|
||||
"systemd-timesyncd(8)": "https://www.freedesktop.org/software/systemd/man/systemd-timesyncd.html",
|
||||
"systemd-tmpfiles-clean.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-tmpfiles-clean.service.html",
|
||||
"systemd-tmpfiles-clean.timer(8)": "https://www.freedesktop.org/software/systemd/man/systemd-tmpfiles-clean.timer.html",
|
||||
"systemd-tmpfiles-setup-dev-early.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-tmpfiles-setup-dev-early.service.html",
|
||||
"systemd-tmpfiles-setup-dev.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-tmpfiles-setup-dev.service.html",
|
||||
"systemd-tmpfiles-setup.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-tmpfiles-setup.service.html",
|
||||
"systemd-tmpfiles(8)": "https://www.freedesktop.org/software/systemd/man/systemd-tmpfiles.html",
|
||||
"systemd-udev-settle.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-udev-settle.service.html",
|
||||
"systemd-udevd-control.socket(8)": "https://www.freedesktop.org/software/systemd/man/systemd-udevd-control.socket.html",
|
||||
"systemd-udevd-kernel.socket(8)": "https://www.freedesktop.org/software/systemd/man/systemd-udevd-kernel.socket.html",
|
||||
"systemd-udevd(8)": "https://www.freedesktop.org/software/systemd/man/systemd-udevd.html",
|
||||
"systemd-update-done(8)": "https://www.freedesktop.org/software/systemd/man/systemd-update-done.html",
|
||||
"systemd-update-utmp-runlevel.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-update-utmp-runlevel.service.html",
|
||||
"systemd-update-utmp(8)": "https://www.freedesktop.org/software/systemd/man/systemd-update-utmp.html",
|
||||
"systemd-user-sessions(8)": "https://www.freedesktop.org/software/systemd/man/systemd-user-sessions.html",
|
||||
"systemd-userdbd(8)": "https://www.freedesktop.org/software/systemd/man/systemd-userdbd.html",
|
||||
"systemd-vconsole-setup(8)": "https://www.freedesktop.org/software/systemd/man/systemd-vconsole-setup.html",
|
||||
"systemd-veritysetup-generator(8)": "https://www.freedesktop.org/software/systemd/man/systemd-veritysetup-generator.html",
|
||||
"systemd-veritysetup(8)": "https://www.freedesktop.org/software/systemd/man/systemd-veritysetup.html",
|
||||
"systemd-veritysetup@.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-veritysetup@.service.html",
|
||||
"systemd-volatile-root(8)": "https://www.freedesktop.org/software/systemd/man/systemd-volatile-root.html",
|
||||
"systemd-xdg-autostart-generator(8)": "https://www.freedesktop.org/software/systemd/man/systemd-xdg-autostart-generator.html",
|
||||
"udevadm(8)": "https://www.freedesktop.org/software/systemd/man/udevadm.html"
|
||||
}
|
||||
|
|
2
third_party/nixpkgs/doc/manual.md.in
vendored
2
third_party/nixpkgs/doc/manual.md.in
vendored
|
@ -1,4 +1,4 @@
|
|||
# Nixpkgs Manual {#nixpkgs-manual}
|
||||
# Nixpkgs Reference Manual {#nixpkgs-manual}
|
||||
## Version @MANUAL_VERSION@
|
||||
|
||||
```{=include=} chapters
|
||||
|
|
|
@ -94,7 +94,11 @@ $ sudo launchctl kickstart -k system/org.nixos.nix-daemon
|
|||
system = linuxSystem;
|
||||
modules = [
|
||||
"${nixpkgs}/nixos/modules/profiles/macos-builder.nix"
|
||||
{ virtualisation.host.pkgs = pkgs; }
|
||||
{ virtualisation = {
|
||||
host.pkgs = pkgs;
|
||||
darwin-builder.workingDirectory = "/var/lib/darwin-builder";
|
||||
};
|
||||
};
|
||||
];
|
||||
};
|
||||
in {
|
||||
|
|
126
third_party/nixpkgs/doc/packages/linux.section.md
vendored
126
third_party/nixpkgs/doc/packages/linux.section.md
vendored
|
@ -2,9 +2,21 @@
|
|||
|
||||
The Nix expressions to build the Linux kernel are in [`pkgs/os-specific/linux/kernel`](https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/kernel).
|
||||
|
||||
The function that builds the kernel has an argument `kernelPatches` which should be a list of `{name, patch, extraConfig}` attribute sets, where `name` is the name of the patch (which is included in the kernel’s `meta.description` attribute), `patch` is the patch itself (possibly compressed), and `extraConfig` (optional) is a string specifying extra options to be concatenated to the kernel configuration file (`.config`).
|
||||
The function [`pkgs.buildLinux`](https://github.com/NixOS/nixpkgs/blob/d77bda728d5041c1294a68fb25c79e2d161f62b9/pkgs/os-specific/linux/kernel/generic.nix) builds a kernel with [common configuration values](https://github.com/NixOS/nixpkgs/blob/d77bda728d5041c1294a68fb25c79e2d161f62b9/pkgs/os-specific/linux/kernel/common-config.nix).
|
||||
This is the preferred option unless you have a very specific use case.
|
||||
Most kernels packaged in Nixpkgs are built that way, and it will also generate kernels suitable for NixOS.
|
||||
[`pkgs.linuxManualConfig`](https://github.com/NixOS/nixpkgs/blob/d77bda728d5041c1294a68fb25c79e2d161f62b9/pkgs/os-specific/linux/kernel/manual-config.nix) requires a complete configuration to be passed.
|
||||
It has fewer additional features than `pkgs.buildLinux`, which provides common configuration values and exposes the `features` attribute, as explained below.
|
||||
|
||||
The kernel derivation exports an attribute `features` specifying whether optional functionality is or isn’t enabled. This is used in NixOS to implement kernel-specific behaviour. For instance, if the kernel has the `iwlwifi` feature (i.e., has built-in support for Intel wireless chipsets), then NixOS doesn’t have to build the external `iwlwifi` package:
|
||||
Both functions have an argument `kernelPatches` which should be a list of `{name, patch, extraConfig}` attribute sets, where `name` is the name of the patch (which is included in the kernel’s `meta.description` attribute), `patch` is the patch itself (possibly compressed), and `extraConfig` (optional) is a string specifying extra options to be concatenated to the kernel configuration file (`.config`).
|
||||
|
||||
The kernel derivation created with `pkgs.buildLinux` exports an attribute `features` specifying whether optional functionality is or isn’t enabled. This is used in NixOS to implement kernel-specific behaviour.
|
||||
|
||||
:::{.example #ex-skip-package-from-kernel-feature}
|
||||
|
||||
# Skipping an external package because of a kernel feature
|
||||
|
||||
For instance, if the kernel has the `iwlwifi` feature (i.e., has built-in support for Intel wireless chipsets), then NixOS doesn’t have to build the external `iwlwifi` package:
|
||||
|
||||
```nix
|
||||
modulesTree = [kernel]
|
||||
|
@ -12,30 +24,104 @@ modulesTree = [kernel]
|
|||
++ ...;
|
||||
```
|
||||
|
||||
How to add a new (major) version of the Linux kernel to Nixpkgs:
|
||||
:::
|
||||
|
||||
1. Copy the old Nix expression (e.g., `linux-2.6.21.nix`) to the new one (e.g., `linux-2.6.22.nix`) and update it.
|
||||
If you are using a kernel packaged in Nixpkgs, you can customize it by overriding its arguments. For details on how each argument affects the generated kernel, refer to [the `pkgs.buildLinux` source code](https://github.com/NixOS/nixpkgs/blob/d77bda728d5041c1294a68fb25c79e2d161f62b9/pkgs/os-specific/linux/kernel/generic.nix).
|
||||
|
||||
2. Add the new kernel to the `kernels` attribute set in `linux-kernels.nix` (e.g., create an attribute `kernel_2_6_22`).
|
||||
:::{.example #ex-overriding-kernel-derivation}
|
||||
|
||||
3. Now we’re going to update the kernel configuration. First unpack the kernel. Then for each supported platform (`i686`, `x86_64`, `uml`) do the following:
|
||||
# Overriding the kernel derivation
|
||||
|
||||
1. Make a copy from the old config (e.g., `config-2.6.21-i686-smp`) to the new one (e.g., `config-2.6.22-i686-smp`).
|
||||
Assuming you are using the kernel from `pkgs.linux_latest`:
|
||||
|
||||
2. Copy the config file for this platform (e.g., `config-2.6.22-i686-smp`) to `.config` in the kernel source tree.
|
||||
|
||||
3. Run `make oldconfig ARCH={i386,x86_64,um}` and answer all questions. (For the uml configuration, also add `SHELL=bash`.) Make sure to keep the configuration consistent between platforms (i.e., don’t enable some feature on `i686` and disable it on `x86_64`).
|
||||
|
||||
4. If needed, you can also run `make menuconfig`:
|
||||
|
||||
```ShellSession
|
||||
$ nix-env -f "<nixpkgs>" -iA ncurses
|
||||
$ export NIX_CFLAGS_LINK=-lncurses
|
||||
$ make menuconfig ARCH=arch
|
||||
```nix
|
||||
pkgs.linux_latest.override {
|
||||
ignoreConfigErrors = true;
|
||||
autoModules = false;
|
||||
kernelPreferBuiltin = true;
|
||||
extraStructuredConfig = with lib.kernel; {
|
||||
DEBUG_KERNEL = yes;
|
||||
FRAME_POINTER = yes;
|
||||
KGDB = yes;
|
||||
KGDB_SERIAL_CONSOLE = yes;
|
||||
DEBUG_INFO = yes;
|
||||
};
|
||||
}
|
||||
```
|
||||
|
||||
5. Copy `.config` over the new config file (e.g., `config-2.6.22-i686-smp`).
|
||||
:::
|
||||
|
||||
4. Test building the kernel: `nix-build -A linuxKernel.kernels.kernel_2_6_22`. If it compiles, ship it! For extra credit, try booting NixOS with it.
|
||||
## Manual kernel configuration {#sec-manual-kernel-configuration}
|
||||
|
||||
5. It may be that the new kernel requires updating the external kernel modules and kernel-dependent packages listed in the `linuxPackagesFor` function in `linux-kernels.nix` (such as the NVIDIA drivers, AUFS, etc.). If the updated packages aren’t backwards compatible with older kernels, you may need to keep the older versions around.
|
||||
Sometimes it may not be desirable to use kernels built with `pkgs.buildLinux`, especially if most of the common configuration has to be altered or disabled to achieve a kernel as expected by the target use case.
|
||||
An example of this is building a kernel for use in a VM or micro VM. You can use `pkgs.linuxManualConfig` in these cases. It requires the `src`, `version`, and `configfile` attributes to be specified.
|
||||
|
||||
:::{.example #ex-using-linux-manual-config}
|
||||
|
||||
# Using `pkgs.linuxManualConfig` with a specific source, version, and config file
|
||||
|
||||
```nix
|
||||
{ pkgs, ... }: {
|
||||
version = "6.1.55";
|
||||
src = pkgs.fetchurl {
|
||||
url = "https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-${version}.tar.xz";
|
||||
hash = "sha256:1h0mzx52q9pvdv7rhnvb8g68i7bnlc9rf8gy9qn4alsxq4g28zm8";
|
||||
};
|
||||
configfile = ./path_to_config_file;
|
||||
linux = pkgs.linuxManualConfig {
|
||||
inherit version src configfile;
|
||||
allowImportFromDerivation = true;
|
||||
};
|
||||
}
|
||||
```
|
||||
|
||||
If necessary, the version string can be slightly modified to explicitly mark it as a custom version. If you do so, ensure the `modDirVersion` attribute matches the source's version, otherwise the build will fail.
|
||||
|
||||
```nix
|
||||
{ pkgs, ... }: {
|
||||
version = "6.1.55-custom";
|
||||
modDirVersion = "6.1.55";
|
||||
src = pkgs.fetchurl {
|
||||
url = "https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-${modDirVersion}.tar.xz";
|
||||
hash = "sha256:1h0mzx52q9pvdv7rhnvb8g68i7bnlc9rf8gy9qn4alsxq4g28zm8";
|
||||
};
|
||||
configfile = ./path_to_config_file;
|
||||
linux = pkgs.linuxManualConfig {
|
||||
inherit version modDirVersion src configfile;
|
||||
allowImportFromDerivation = true;
|
||||
};
|
||||
}
|
||||
```
|
||||
|
||||
:::
|
||||
|
||||
Additional attributes can be used with `linuxManualConfig` for further customisation. You're encouraged to read [the `pkgs.linuxManualConfig` source code](https://github.com/NixOS/nixpkgs/blob/d77bda728d5041c1294a68fb25c79e2d161f62b9/pkgs/os-specific/linux/kernel/manual-config.nix) to understand how to use them.
|
||||
|
||||
To edit the `.config` file for Linux X.Y from within Nix, proceed as follows:
|
||||
|
||||
```ShellSession
|
||||
$ nix-shell '<nixpkgs>' -A linuxKernel.kernels.linux_X_Y.configEnv
|
||||
$ unpackPhase
|
||||
$ cd linux-*
|
||||
$ make nconfig
|
||||
```
|
||||
|
||||
## Developing kernel modules {#sec-linux-kernel-developing-modules}
|
||||
|
||||
When developing kernel modules it's often convenient to run the edit-compile-run loop as quickly as possible.
|
||||
See the snippet below as an example.
|
||||
|
||||
:::{.example #ex-edit-compile-run-kernel-modules}
|
||||
|
||||
# Edit-compile-run loop when developing `mellanox` drivers
|
||||
|
||||
```ShellSession
|
||||
$ nix-build '<nixpkgs>' -A linuxPackages.kernel.dev
|
||||
$ nix-shell '<nixpkgs>' -A linuxPackages.kernel
|
||||
$ unpackPhase
|
||||
$ cd linux-*
|
||||
$ make -C $dev/lib/modules/*/build M=$(pwd)/drivers/net/ethernet/mellanox modules
|
||||
# insmod ./drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.ko
|
||||
```
|
||||
|
||||
:::
|
||||
|
|
14
third_party/nixpkgs/doc/preface.chapter.md
vendored
14
third_party/nixpkgs/doc/preface.chapter.md
vendored
|
@ -6,11 +6,15 @@ The Nix Packages collection (Nixpkgs) is a set of thousands of packages for the
|
|||
Packages are available for several platforms, and can be used with the Nix
|
||||
package manager on most GNU/Linux distributions as well as [NixOS](https://nixos.org/nixos).
|
||||
|
||||
This manual primarily describes how to write packages for the Nix Packages collection
|
||||
(Nixpkgs). Thus it’s mainly for packagers and developers who want to add packages to
|
||||
Nixpkgs. If you like to learn more about the Nix package manager and the Nix
|
||||
expression language, then you are kindly referred to the [Nix manual](https://nixos.org/nix/manual/).
|
||||
The NixOS distribution is documented in the [NixOS manual](https://nixos.org/nixos/manual/).
|
||||
This document is the user [_reference_](https://nix.dev/contributing/documentation/diataxis#reference) manual for Nixpkgs.
|
||||
It describes entire public interface of Nixpkgs in a concise and orderly manner, and all relevant behaviors, with examples and cross-references.
|
||||
|
||||
To discover other kinds of documentation:
|
||||
- [nix.dev](https://nix.dev/): Tutorials and guides for getting things done with Nix
|
||||
- [NixOS **Option Search**](https://search.nixos.org/options) and reference documentation
|
||||
- [Nixpkgs **Package Search**](https://search.nixos.org/packages)
|
||||
- [**NixOS** manual](https://nixos.org/manual/nixos/stable/): Reference documentation for the NixOS Linux distribution
|
||||
- [`CONTRIBUTING.md`](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md): Contributing to Nixpkgs, including this manual
|
||||
|
||||
## Overview of Nixpkgs {#overview-of-nixpkgs}
|
||||
|
||||
|
|
63
third_party/nixpkgs/doc/stdenv/stdenv.chapter.md
vendored
63
third_party/nixpkgs/doc/stdenv/stdenv.chapter.md
vendored
|
@ -119,13 +119,18 @@ phases="${prePhases[*]:-} unpackPhase patchPhase" genericBuild
|
|||
```
|
||||
|
||||
Then, run more phases up until the failure is reached.
|
||||
For example, if the failure is in the build phase, the following phases would be required:
|
||||
If the failure is in the build or check phase, the following phases would be required:
|
||||
|
||||
```bash
|
||||
phases="${preConfigurePhases[*]:-} configurePhase ${preBuildPhases[*]:-} buildPhase" genericBuild
|
||||
phases="${preConfigurePhases[*]:-} configurePhase ${preBuildPhases[*]:-} buildPhase checkPhase" genericBuild
|
||||
```
|
||||
|
||||
Re-run a single phase as many times as necessary to examine the failure like so:
|
||||
Use this command to run all install phases:
|
||||
```bash
|
||||
phases="${preInstallPhases[*]:-} installPhase ${preFixupPhases[*]:-} fixupPhase installCheckPhase" genericBuild
|
||||
```
|
||||
|
||||
Single phase can be re-run as many times as necessary to examine the failure like so:
|
||||
|
||||
```bash
|
||||
phases="buildPhase" genericBuild
|
||||
|
@ -256,14 +261,50 @@ For more complex cases, like libraries linked into an executable which is then e
|
|||
|
||||
As described in the Nix manual, almost any `*.drv` store path in a derivation’s attribute set will induce a dependency on that derivation. `mkDerivation`, however, takes a few attributes intended to include all the dependencies of a package. This is done both for structure and consistency, but also so that certain other setup can take place. For example, certain dependencies need their bin directories added to the `PATH`. That is built-in, but other setup is done via a pluggable mechanism that works in conjunction with these dependency attributes. See [](#ssec-setup-hooks) for details.
|
||||
|
||||
Dependencies can be broken down along three axes: their host and target platforms relative to the new derivation’s, and whether they are propagated. The platform distinctions are motivated by cross compilation; see [](#chap-cross) for exactly what each platform means. [^footnote-stdenv-ignored-build-platform] But even if one is not cross compiling, the platforms imply whether or not the dependency is needed at run-time or build-time, a concept that makes perfect sense outside of cross compilation. By default, the run-time/build-time distinction is just a hint for mental clarity, but with `strictDeps` set it is mostly enforced even in the native case.
|
||||
Dependencies can be broken down along these axes: their host and target platforms relative to the new derivation’s. The platform distinctions are motivated by cross compilation; see [](#chap-cross) for exactly what each platform means. [^footnote-stdenv-ignored-build-platform] But even if one is not cross compiling, the platforms imply whether a dependency is needed at run-time or build-time.
|
||||
|
||||
The extension of `PATH` with dependencies, alluded to above, proceeds according to the relative platforms alone. The process is carried out only for dependencies whose host platform matches the new derivation’s build platform i.e. dependencies which run on the platform where the new derivation will be built. [^footnote-stdenv-native-dependencies-in-path] For each dependency \<dep\> of those dependencies, `dep/bin`, if present, is added to the `PATH` environment variable.
|
||||
|
||||
A dependency is said to be **propagated** when some of its other-transitive (non-immediate) downstream dependencies also need it as an immediate dependency.
|
||||
[^footnote-stdenv-propagated-dependencies]
|
||||
### Dependency propagation {#ssec-stdenv-dependencies-propagated}
|
||||
|
||||
It is important to note that dependencies are not necessarily propagated as the same sort of dependency that they were before, but rather as the corresponding sort so that the platform rules still line up. To determine the exact rules for dependency propagation, we start by assigning to each dependency a couple of ternary numbers (`-1` for `build`, `0` for `host`, and `1` for `target`) representing its [dependency type](#possible-dependency-types), which captures how its host and target platforms are each "offset" from the depending derivation’s host and target platforms. The following table summarize the different combinations that can be obtained:
|
||||
Propagated dependencies are made available to all downstream dependencies.
|
||||
This is particularly useful for interpreted languages, where all transitive dependencies have to be present in the same environment.
|
||||
Therefore it is used for the Python infrastructure in Nixpkgs.
|
||||
|
||||
:::{.note}
|
||||
Propagated dependencies should be used with care, because they obscure the actual build inputs of dependent derivations and cause side effects through setup hooks.
|
||||
This can lead to conflicting dependencies that cannot easily be resolved.
|
||||
:::
|
||||
|
||||
:::{.example}
|
||||
# A propagated dependency
|
||||
|
||||
```nix
|
||||
with import <nixpkgs> {};
|
||||
let
|
||||
bar = stdenv.mkDerivation {
|
||||
name = "bar";
|
||||
dontUnpack = true;
|
||||
# `hello` is also made available to dependents, such as `foo`
|
||||
propagatedBuildInputs = [ hello ];
|
||||
postInstall = "mkdir $out";
|
||||
};
|
||||
foo = stdenv.mkDerivation {
|
||||
name = "foo";
|
||||
dontUnpack = true;
|
||||
# `bar` is a direct dependency, which implicitly includes the propagated `hello`
|
||||
buildInputs = [ bar ];
|
||||
# The `hello` binary is available!
|
||||
postInstall = "hello > $out";
|
||||
};
|
||||
in
|
||||
foo
|
||||
```
|
||||
:::
|
||||
|
||||
Dependency propagation takes cross compilation into account, meaning that dependencies that cross platform boundaries are properly adjusted.
|
||||
|
||||
To determine the exact rules for dependency propagation, we start by assigning to each dependency a couple of ternary numbers (`-1` for `build`, `0` for `host`, and `1` for `target`) representing its [dependency type](#possible-dependency-types), which captures how its host and target platforms are each "offset" from the depending derivation’s host and target platforms. The following table summarize the different combinations that can be obtained:
|
||||
|
||||
| `host → target` | attribute name | offset |
|
||||
| ------------------- | ------------------- | -------- |
|
||||
|
@ -586,7 +627,7 @@ See also the section about [`passthru.tests`](#var-meta-tests).
|
|||
|
||||
`stdenv.mkDerivation` sets the Nix [derivation](https://nixos.org/manual/nix/stable/expressions/derivations.html#derivations)'s builder to a script that loads the stdenv `setup.sh` bash library and calls `genericBuild`. Most packaging functions rely on this default builder.
|
||||
|
||||
This generic command invokes a number of *phases*. Package builds are split into phases to make it easier to override specific parts of the build (e.g., unpacking the sources or installing the binaries).
|
||||
This generic command either invokes a script at *buildCommandPath*, or a *buildCommand*, or a number of *phases*. Package builds are split into phases to make it easier to override specific parts of the build (e.g., unpacking the sources or installing the binaries).
|
||||
|
||||
Each phase can be overridden in its entirety either by setting the environment variable `namePhase` to a string containing some shell commands to be executed, or by redefining the shell function `namePhase`. The former is convenient to override a phase from the derivation, while the latter is convenient from a build script. However, typically one only wants to *add* some commands to a phase, e.g. by defining `postInstall` or `preFixup`, as skipping some of the default actions may have unexpected consequences. The default script for each phase is defined in the file `pkgs/stdenv/generic/setup.sh`.
|
||||
|
||||
|
@ -826,7 +867,7 @@ Note that shell arrays cannot be passed through environment variables, so you ca
|
|||
|
||||
##### `buildFlags` / `buildFlagsArray` {#var-stdenv-buildFlags}
|
||||
|
||||
A list of strings passed as additional flags to `make`. Like `makeFlags` and `makeFlagsArray`, but only used by the build phase.
|
||||
A list of strings passed as additional flags to `make`. Like `makeFlags` and `makeFlagsArray`, but only used by the build phase. Any build targets should be specified as part of the `buildFlags`.
|
||||
|
||||
##### `preBuild` {#var-stdenv-preBuild}
|
||||
|
||||
|
@ -867,7 +908,7 @@ If unset, use `check` if it exists, otherwise `test`; if neither is found, do no
|
|||
|
||||
##### `checkFlags` / `checkFlagsArray` {#var-stdenv-checkFlags}
|
||||
|
||||
A list of strings passed as additional flags to `make`. Like `makeFlags` and `makeFlagsArray`, but only used by the check phase.
|
||||
A list of strings passed as additional flags to `make`. Like `makeFlags` and `makeFlagsArray`, but only used by the check phase. Unlike with `buildFlags`, the `checkTarget` is automatically added to the `make` invocation in addition to any `checkFlags` specified.
|
||||
|
||||
##### `checkInputs` {#var-stdenv-checkInputs}
|
||||
|
||||
|
@ -909,7 +950,7 @@ installTargets = "install-bin install-doc";
|
|||
|
||||
##### `installFlags` / `installFlagsArray` {#var-stdenv-installFlags}
|
||||
|
||||
A list of strings passed as additional flags to `make`. Like `makeFlags` and `makeFlagsArray`, but only used by the install phase.
|
||||
A list of strings passed as additional flags to `make`. Like `makeFlags` and `makeFlagsArray`, but only used by the install phase. Unlike with `buildFlags`, the `installTargets` are automatically added to the `make` invocation in addition to any `installFlags` specified.
|
||||
|
||||
##### `preInstall` {#var-stdenv-preInstall}
|
||||
|
||||
|
|
109
third_party/nixpkgs/doc/tests/manpage-urls.py
vendored
Executable file
109
third_party/nixpkgs/doc/tests/manpage-urls.py
vendored
Executable file
|
@ -0,0 +1,109 @@
|
|||
#! /usr/bin/env nix-shell
|
||||
#! nix-shell -i "python3 -I" -p "python3.withPackages(p: with p; [ aiohttp rich structlog ])"
|
||||
|
||||
from argparse import ArgumentParser, Namespace
|
||||
from collections import defaultdict
|
||||
from collections.abc import Mapping, Sequence
|
||||
from enum import IntEnum
|
||||
from http import HTTPStatus
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
import asyncio, json, logging
|
||||
|
||||
import aiohttp, structlog
|
||||
from structlog.contextvars import bound_contextvars as log_context
|
||||
|
||||
|
||||
LogLevel = IntEnum('LogLevel', {
|
||||
lvl: getattr(logging, lvl)
|
||||
for lvl in ('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL')
|
||||
})
|
||||
LogLevel.__str__ = lambda self: self.name
|
||||
|
||||
|
||||
EXPECTED_STATUS=frozenset((
|
||||
HTTPStatus.OK, HTTPStatus.FOUND,
|
||||
HTTPStatus.NOT_FOUND,
|
||||
))
|
||||
|
||||
async def check(session: aiohttp.ClientSession, manpage: str, url: str) -> HTTPStatus:
|
||||
with log_context(manpage=manpage, url=url):
|
||||
logger.debug("Checking")
|
||||
async with session.head(url) as resp:
|
||||
st = HTTPStatus(resp.status)
|
||||
match st:
|
||||
case HTTPStatus.OK | HTTPStatus.FOUND:
|
||||
logger.debug("OK!")
|
||||
case HTTPStatus.NOT_FOUND:
|
||||
logger.error("Broken link!")
|
||||
case _ if st < 400:
|
||||
logger.info("Unexpected code", status=st)
|
||||
case _ if 400 <= st < 600:
|
||||
logger.warn("Unexpected error", status=st)
|
||||
|
||||
return st
|
||||
|
||||
async def main(urls_path: Path) -> Mapping[HTTPStatus, int]:
|
||||
logger.info(f"Parsing {urls_path}")
|
||||
with urls_path.open() as urls_file:
|
||||
urls = json.load(urls_file)
|
||||
|
||||
count: defaultdict[HTTPStatus, int] = defaultdict(lambda: 0)
|
||||
|
||||
logger.info(f"Checking URLs from {urls_path}")
|
||||
async with aiohttp.ClientSession() as session:
|
||||
for status in asyncio.as_completed([
|
||||
check(session, manpage, url)
|
||||
for manpage, url in urls.items()
|
||||
]):
|
||||
count[await status]+=1
|
||||
|
||||
ok = count[HTTPStatus.OK] + count[HTTPStatus.FOUND]
|
||||
broken = count[HTTPStatus.NOT_FOUND]
|
||||
unknown = sum(c for st, c in count.items() if st not in EXPECTED_STATUS)
|
||||
logger.info(f"Done: {broken} broken links, "
|
||||
f"{ok} correct links, and {unknown} unexpected status")
|
||||
|
||||
return count
|
||||
|
||||
|
||||
def parse_args(args: Optional[Sequence[str]] = None) -> Namespace:
|
||||
parser = ArgumentParser(
|
||||
prog = 'check-manpage-urls',
|
||||
description = 'Check the validity of the manpage URLs linked in the nixpkgs manual',
|
||||
)
|
||||
parser.add_argument(
|
||||
'-l', '--log-level',
|
||||
default = os.getenv('LOG_LEVEL', 'INFO'),
|
||||
type = lambda s: LogLevel[s],
|
||||
choices = list(LogLevel),
|
||||
)
|
||||
parser.add_argument(
|
||||
'file',
|
||||
type = Path,
|
||||
nargs = '?',
|
||||
)
|
||||
|
||||
return parser.parse_args(args)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
import os, sys
|
||||
|
||||
args = parse_args()
|
||||
|
||||
structlog.configure(
|
||||
wrapper_class=structlog.make_filtering_bound_logger(args.log_level),
|
||||
)
|
||||
logger = structlog.getLogger("check-manpage-urls.py")
|
||||
|
||||
urls_path = args.file
|
||||
if urls_path is None:
|
||||
REPO_ROOT = Path(__file__).parent.parent.parent.parent
|
||||
logger.info(f"Assuming we are in a nixpkgs repo rooted at {REPO_ROOT}")
|
||||
|
||||
urls_path = REPO_ROOT / 'doc' / 'manpage-urls.json'
|
||||
|
||||
count = asyncio.run(main(urls_path))
|
||||
|
||||
sys.exit(0 if count[HTTPStatus.NOT_FOUND] == 0 else 1)
|
43
third_party/nixpkgs/flake.nix
vendored
43
third_party/nixpkgs/flake.nix
vendored
|
@ -9,7 +9,8 @@
|
|||
nixpkgs = self;
|
||||
};
|
||||
|
||||
lib = import ./lib;
|
||||
libVersionInfoOverlay = import ./lib/flake-version-info.nix self;
|
||||
lib = (import ./lib).extend libVersionInfoOverlay;
|
||||
|
||||
forAllSystems = lib.genAttrs lib.systems.flakeExposed;
|
||||
in
|
||||
|
@ -20,22 +21,38 @@
|
|||
|
||||
nixosSystem = args:
|
||||
import ./nixos/lib/eval-config.nix (
|
||||
args // {
|
||||
modules = args.modules ++ [{
|
||||
system.nixos.versionSuffix =
|
||||
".${final.substring 0 8 (self.lastModifiedDate or self.lastModified or "19700101")}.${self.shortRev or "dirty"}";
|
||||
system.nixos.revision = final.mkIf (self ? rev) self.rev;
|
||||
}];
|
||||
} // lib.optionalAttrs (! args?system) {
|
||||
{
|
||||
lib = final;
|
||||
# Allow system to be set modularly in nixpkgs.system.
|
||||
# We set it to null, to remove the "legacy" entrypoint's
|
||||
# non-hermetic default.
|
||||
system = null;
|
||||
}
|
||||
} // args
|
||||
);
|
||||
});
|
||||
|
||||
checks.x86_64-linux.tarball = jobs.tarball;
|
||||
checks.x86_64-linux = {
|
||||
tarball = jobs.tarball;
|
||||
# Test that ensures that the nixosSystem function can accept a lib argument
|
||||
# Note: prefer not to extend or modify `lib`, especially if you want to share reusable modules
|
||||
# alternatives include: `import` a file, or put a custom library in an option or in `_module.args.<libname>`
|
||||
nixosSystemAcceptsLib = (self.lib.nixosSystem {
|
||||
lib = self.lib.extend (final: prev: {
|
||||
ifThisFunctionIsMissingTheTestFails = final.id;
|
||||
});
|
||||
modules = [
|
||||
./nixos/modules/profiles/minimal.nix
|
||||
({ lib, ... }: lib.ifThisFunctionIsMissingTheTestFails {
|
||||
# Define a minimal config without eval warnings
|
||||
nixpkgs.hostPlatform = "x86_64-linux";
|
||||
boot.loader.grub.enable = false;
|
||||
fileSystems."/".device = "nodev";
|
||||
# See https://search.nixos.org/options?show=system.stateVersion&query=stateversion
|
||||
system.stateVersion = lib.versions.majorMinor lib.version; # DON'T do this in real configs!
|
||||
})
|
||||
];
|
||||
}).config.system.build.toplevel;
|
||||
};
|
||||
|
||||
htmlDocs = {
|
||||
nixpkgsManual = jobs.manual;
|
||||
|
@ -53,7 +70,11 @@
|
|||
# attribute it displays `omitted` instead of evaluating all packages,
|
||||
# which keeps `nix flake show` on Nixpkgs reasonably fast, though less
|
||||
# information rich.
|
||||
legacyPackages = forAllSystems (system: import ./. { inherit system; });
|
||||
legacyPackages = forAllSystems (system:
|
||||
(import ./. { inherit system; }).extend (final: prev: {
|
||||
lib = prev.lib.extend libVersionInfoOverlay;
|
||||
})
|
||||
);
|
||||
|
||||
nixosModules = {
|
||||
notDetected = ./nixos/modules/installer/scan/not-detected.nix;
|
||||
|
|
65
third_party/nixpkgs/lib/README.md
vendored
65
third_party/nixpkgs/lib/README.md
vendored
|
@ -36,13 +36,76 @@ The [module system](https://nixos.org/manual/nixpkgs/#module-system) spans multi
|
|||
- [`options.nix`](options.nix): `lib.options` for anything relating to option definitions
|
||||
- [`types.nix`](types.nix): `lib.types` for module system types
|
||||
|
||||
## PR Guidelines
|
||||
|
||||
Follow these guidelines for proposing a change to the interface of `lib`.
|
||||
|
||||
### Provide a Motivation
|
||||
|
||||
Clearly describe why the change is necessary and its use cases.
|
||||
|
||||
Make sure that the change benefits the user more than the added mental effort of looking it up and keeping track of its definition.
|
||||
If the same can reasonably be done with the existing interface,
|
||||
consider just updating the documentation with more examples and links.
|
||||
This is also known as the [Fairbairn Threshold](https://wiki.haskell.org/Fairbairn_threshold).
|
||||
|
||||
Through this principle we avoid the human cost of duplicated functionality in an overly large library.
|
||||
|
||||
### Make one PR for each change
|
||||
|
||||
Don't have multiple changes in one PR, instead split it up into multiple ones.
|
||||
|
||||
This keeps the conversation focused and has a higher chance of getting merged.
|
||||
|
||||
### Name the interface appropriately
|
||||
|
||||
When introducing new names to the interface, such as new function, or new function attributes,
|
||||
make sure to name it appropriately.
|
||||
|
||||
Names should be self-explanatory and consistent with the rest of `lib`.
|
||||
If there's no obvious best name, include the alternatives you considered.
|
||||
|
||||
### Write documentation
|
||||
|
||||
Update the [reference documentation](#reference-documentation) to reflect the change.
|
||||
|
||||
Be generous with links to related functionality.
|
||||
|
||||
### Write tests
|
||||
|
||||
Add good test coverage for the change, including:
|
||||
|
||||
- Tests for edge cases, such as empty values or lists.
|
||||
- Tests for tricky inputs, such as a string with string context or a path that doesn't exist.
|
||||
- Test all code paths, such as `if-then-else` branches and returned attributes.
|
||||
- If the tests for the sub-library are written in bash,
|
||||
test messages of custom errors, such as `throw` or `abortMsg`,
|
||||
|
||||
At the time this is only not necessary for sub-libraries tested with [`tests/misc.nix`](./tests/misc.nix).
|
||||
|
||||
See [running tests](#running-tests) for more details on the test suites.
|
||||
|
||||
### Write tidy code
|
||||
|
||||
Name variables well, even if they're internal.
|
||||
The code should be as self-explanatory as possible.
|
||||
Be generous with code comments when appropriate.
|
||||
|
||||
As a baseline, follow the [Nixpkgs code conventions](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md#code-conventions).
|
||||
|
||||
### Write efficient code
|
||||
|
||||
Nix generally does not have free abstractions.
|
||||
Be aware that seemingly straightforward changes can cause more allocations and a decrease in performance.
|
||||
That said, don't optimise prematurely, especially in new code.
|
||||
|
||||
## Reference documentation
|
||||
|
||||
Reference documentation for library functions is written above each function as a multi-line comment.
|
||||
These comments are processed using [nixdoc](https://github.com/nix-community/nixdoc) and [rendered in the Nixpkgs manual](https://nixos.org/manual/nixpkgs/stable/#chap-functions).
|
||||
The nixdoc README describes the [comment format](https://github.com/nix-community/nixdoc#comment-format).
|
||||
|
||||
See the [chapter on contributing to the Nixpkgs manual](https://nixos.org/manual/nixpkgs/#chap-contributing) for how to build the manual.
|
||||
See [doc/README.md](../doc/README.md) for how to build the manual.
|
||||
|
||||
## Running tests
|
||||
|
||||
|
|
161
third_party/nixpkgs/lib/attrsets.nix
vendored
161
third_party/nixpkgs/lib/attrsets.nix
vendored
|
@ -1,5 +1,5 @@
|
|||
/* Operations on attribute sets. */
|
||||
{ lib }:
|
||||
# Operations on attribute sets.
|
||||
|
||||
let
|
||||
inherit (builtins) head tail length;
|
||||
|
@ -14,6 +14,14 @@ rec {
|
|||
|
||||
/* Return an attribute from nested attribute sets.
|
||||
|
||||
Nix has an [attribute selection operator `. or`](https://nixos.org/manual/nix/stable/language/operators#attribute-selection) which is sufficient for such queries, as long as the number of attributes is static. For example:
|
||||
|
||||
```nix
|
||||
(x.a.b or 6) == attrByPath ["a" "b"] 6 x
|
||||
# and
|
||||
(x.${f p}."example.com" or 6) == attrByPath [ (f p) "example.com" ] 6 x
|
||||
```
|
||||
|
||||
Example:
|
||||
x = { a = { b = 3; }; }
|
||||
# ["a" "b"] is equivalent to x.a.b
|
||||
|
@ -34,21 +42,44 @@ rec {
|
|||
default:
|
||||
# The nested attribute set to select values from
|
||||
set:
|
||||
let attr = head attrPath;
|
||||
let
|
||||
lenAttrPath = length attrPath;
|
||||
attrByPath' = n: s: (
|
||||
if n == lenAttrPath then s
|
||||
else (
|
||||
let
|
||||
attr = elemAt attrPath n;
|
||||
in
|
||||
if attrPath == [] then set
|
||||
else if set ? ${attr}
|
||||
then attrByPath (tail attrPath) default set.${attr}
|
||||
else default;
|
||||
if s ? ${attr} then attrByPath' (n + 1) s.${attr}
|
||||
else default
|
||||
)
|
||||
);
|
||||
in
|
||||
attrByPath' 0 set;
|
||||
|
||||
/* Return if an attribute from nested attribute set exists.
|
||||
|
||||
Nix has a [has attribute operator `?`](https://nixos.org/manual/nix/stable/language/operators#has-attribute), which is sufficient for such queries, as long as the number of attributes is static. For example:
|
||||
|
||||
```nix
|
||||
(x?a.b) == hasAttryByPath ["a" "b"] x
|
||||
# and
|
||||
(x?${f p}."example.com") == hasAttryByPath [ (f p) "example.com" ] x
|
||||
```
|
||||
|
||||
**Laws**:
|
||||
1. ```nix
|
||||
hasAttrByPath [] x == true
|
||||
```
|
||||
|
||||
Example:
|
||||
x = { a = { b = 3; }; }
|
||||
hasAttrByPath ["a" "b"] x
|
||||
=> true
|
||||
hasAttrByPath ["z" "z"] x
|
||||
=> false
|
||||
hasAttrByPath [] (throw "no need")
|
||||
=> true
|
||||
|
||||
Type:
|
||||
hasAttrByPath :: [String] -> AttrSet -> Bool
|
||||
|
@ -58,13 +89,84 @@ rec {
|
|||
attrPath:
|
||||
# The nested attribute set to check
|
||||
e:
|
||||
let attr = head attrPath;
|
||||
let
|
||||
lenAttrPath = length attrPath;
|
||||
hasAttrByPath' = n: s: (
|
||||
n == lenAttrPath || (
|
||||
let
|
||||
attr = elemAt attrPath n;
|
||||
in
|
||||
if attrPath == [] then true
|
||||
else if e ? ${attr}
|
||||
then hasAttrByPath (tail attrPath) e.${attr}
|
||||
else false;
|
||||
if s ? ${attr} then hasAttrByPath' (n + 1) s.${attr}
|
||||
else false
|
||||
)
|
||||
);
|
||||
in
|
||||
hasAttrByPath' 0 e;
|
||||
|
||||
/*
|
||||
Return the longest prefix of an attribute path that refers to an existing attribute in a nesting of attribute sets.
|
||||
|
||||
Can be used after [`mapAttrsRecursiveCond`](#function-library-lib.attrsets.mapAttrsRecursiveCond) to apply a condition,
|
||||
although this will evaluate the predicate function on sibling attributes as well.
|
||||
|
||||
Note that the empty attribute path is valid for all values, so this function only throws an exception if any of its inputs does.
|
||||
|
||||
**Laws**:
|
||||
1. ```nix
|
||||
attrsets.longestValidPathPrefix [] x == []
|
||||
```
|
||||
|
||||
2. ```nix
|
||||
hasAttrByPath (attrsets.longestValidPathPrefix p x) x == true
|
||||
```
|
||||
|
||||
Example:
|
||||
x = { a = { b = 3; }; }
|
||||
attrsets.longestValidPathPrefix ["a" "b" "c"] x
|
||||
=> ["a" "b"]
|
||||
attrsets.longestValidPathPrefix ["a"] x
|
||||
=> ["a"]
|
||||
attrsets.longestValidPathPrefix ["z" "z"] x
|
||||
=> []
|
||||
attrsets.longestValidPathPrefix ["z" "z"] (throw "no need")
|
||||
=> []
|
||||
|
||||
Type:
|
||||
attrsets.longestValidPathPrefix :: [String] -> Value -> [String]
|
||||
*/
|
||||
longestValidPathPrefix =
|
||||
# A list of strings representing the longest possible path that may be returned.
|
||||
attrPath:
|
||||
# The nested attribute set to check.
|
||||
v:
|
||||
let
|
||||
lenAttrPath = length attrPath;
|
||||
getPrefixForSetAtIndex =
|
||||
# The nested attribute set to check, if it is an attribute set, which
|
||||
# is not a given.
|
||||
remainingSet:
|
||||
# The index of the attribute we're about to check, as well as
|
||||
# the length of the prefix we've already checked.
|
||||
remainingPathIndex:
|
||||
|
||||
if remainingPathIndex == lenAttrPath then
|
||||
# All previously checked attributes exist, and no attr names left,
|
||||
# so we return the whole path.
|
||||
attrPath
|
||||
else
|
||||
let
|
||||
attr = elemAt attrPath remainingPathIndex;
|
||||
in
|
||||
if remainingSet ? ${attr} then
|
||||
getPrefixForSetAtIndex
|
||||
remainingSet.${attr} # advance from the set to the attribute value
|
||||
(remainingPathIndex + 1) # advance the path
|
||||
else
|
||||
# The attribute doesn't exist, so we return the prefix up to the
|
||||
# previously checked length.
|
||||
take remainingPathIndex attrPath;
|
||||
in
|
||||
getPrefixForSetAtIndex v 0;
|
||||
|
||||
/* Create a new attribute set with `value` set at the nested attribute location specified in `attrPath`.
|
||||
|
||||
|
@ -91,6 +193,14 @@ rec {
|
|||
/* Like `attrByPath`, but without a default value. If it doesn't find the
|
||||
path it will throw an error.
|
||||
|
||||
Nix has an [attribute selection operator](https://nixos.org/manual/nix/stable/language/operators#attribute-selection) which is sufficient for such queries, as long as the number of attributes is static. For example:
|
||||
|
||||
```nix
|
||||
x.a.b == getAttrByPath ["a" "b"] x
|
||||
# and
|
||||
x.${f p}."example.com" == getAttrByPath [ (f p) "example.com" ] x
|
||||
```
|
||||
|
||||
Example:
|
||||
x = { a = { b = 3; }; }
|
||||
getAttrFromPath ["a" "b"] x
|
||||
|
@ -883,7 +993,10 @@ rec {
|
|||
recursiveUpdateUntil (path: lhs: rhs: !(isAttrs lhs && isAttrs rhs)) lhs rhs;
|
||||
|
||||
|
||||
/* Returns true if the pattern is contained in the set. False otherwise.
|
||||
/*
|
||||
Recurse into every attribute set of the first argument and check that:
|
||||
- Each attribute path also exists in the second argument.
|
||||
- If the attribute's value is not a nested attribute set, it must have the same value in the right argument.
|
||||
|
||||
Example:
|
||||
matchAttrs { cpu = {}; } { cpu = { bits = 64; }; }
|
||||
|
@ -895,16 +1008,24 @@ rec {
|
|||
matchAttrs =
|
||||
# Attribute set structure to match
|
||||
pattern:
|
||||
# Attribute set to find patterns in
|
||||
# Attribute set to check
|
||||
attrs:
|
||||
assert isAttrs pattern;
|
||||
all id (attrValues (zipAttrsWithNames (attrNames pattern) (n: values:
|
||||
let pat = head values; val = elemAt values 1; in
|
||||
if length values == 1 then false
|
||||
else if isAttrs pat then isAttrs val && matchAttrs pat val
|
||||
else pat == val
|
||||
) [pattern attrs]));
|
||||
|
||||
all
|
||||
( # Compare equality between `pattern` & `attrs`.
|
||||
attr:
|
||||
# Missing attr, not equal.
|
||||
attrs ? ${attr} && (
|
||||
let
|
||||
lhs = pattern.${attr};
|
||||
rhs = attrs.${attr};
|
||||
in
|
||||
# If attrset check recursively
|
||||
if isAttrs lhs then isAttrs rhs && matchAttrs lhs rhs
|
||||
else lhs == rhs
|
||||
)
|
||||
)
|
||||
(attrNames pattern);
|
||||
|
||||
/* Override only the attributes that are already present in the old set
|
||||
useful for deep-overriding.
|
||||
|
|
94
third_party/nixpkgs/lib/customisation.nix
vendored
94
third_party/nixpkgs/lib/customisation.nix
vendored
|
@ -1,5 +1,17 @@
|
|||
{ lib }:
|
||||
|
||||
let
|
||||
inherit (builtins)
|
||||
intersectAttrs;
|
||||
inherit (lib)
|
||||
functionArgs isFunction mirrorFunctionArgs isAttrs setFunctionArgs
|
||||
optionalAttrs attrNames filter elemAt concatStringsSep sortOn take length
|
||||
filterAttrs optionalString flip pathIsDirectory head pipe isDerivation listToAttrs
|
||||
mapAttrs seq flatten deepSeq warnIf isInOldestRelease extends
|
||||
;
|
||||
inherit (lib.strings) levenshtein levenshteinAtMost;
|
||||
|
||||
in
|
||||
rec {
|
||||
|
||||
|
||||
|
@ -43,15 +55,15 @@ rec {
|
|||
overrideDerivation = drv: f:
|
||||
let
|
||||
newDrv = derivation (drv.drvAttrs // (f drv));
|
||||
in lib.flip (extendDerivation (builtins.seq drv.drvPath true)) newDrv (
|
||||
in flip (extendDerivation (seq drv.drvPath true)) newDrv (
|
||||
{ meta = drv.meta or {};
|
||||
passthru = if drv ? passthru then drv.passthru else {};
|
||||
}
|
||||
//
|
||||
(drv.passthru or {})
|
||||
//
|
||||
lib.optionalAttrs (drv ? __spliced) {
|
||||
__spliced = {} // (lib.mapAttrs (_: sDrv: overrideDerivation sDrv f) drv.__spliced);
|
||||
optionalAttrs (drv ? __spliced) {
|
||||
__spliced = {} // (mapAttrs (_: sDrv: overrideDerivation sDrv f) drv.__spliced);
|
||||
});
|
||||
|
||||
|
||||
|
@ -79,30 +91,30 @@ rec {
|
|||
makeOverridable = f:
|
||||
let
|
||||
# Creates a functor with the same arguments as f
|
||||
mirrorArgs = lib.mirrorFunctionArgs f;
|
||||
mirrorArgs = mirrorFunctionArgs f;
|
||||
in
|
||||
mirrorArgs (origArgs:
|
||||
let
|
||||
result = f origArgs;
|
||||
|
||||
# Changes the original arguments with (potentially a function that returns) a set of new attributes
|
||||
overrideWith = newArgs: origArgs // (if lib.isFunction newArgs then newArgs origArgs else newArgs);
|
||||
overrideWith = newArgs: origArgs // (if isFunction newArgs then newArgs origArgs else newArgs);
|
||||
|
||||
# Re-call the function but with different arguments
|
||||
overrideArgs = mirrorArgs (newArgs: makeOverridable f (overrideWith newArgs));
|
||||
# Change the result of the function call by applying g to it
|
||||
overrideResult = g: makeOverridable (mirrorArgs (args: g (f args))) origArgs;
|
||||
in
|
||||
if builtins.isAttrs result then
|
||||
if isAttrs result then
|
||||
result // {
|
||||
override = overrideArgs;
|
||||
overrideDerivation = fdrv: overrideResult (x: overrideDerivation x fdrv);
|
||||
${if result ? overrideAttrs then "overrideAttrs" else null} = fdrv:
|
||||
overrideResult (x: x.overrideAttrs fdrv);
|
||||
}
|
||||
else if lib.isFunction result then
|
||||
else if isFunction result then
|
||||
# Transform the result into a functor while propagating its arguments
|
||||
lib.setFunctionArgs result (lib.functionArgs result) // {
|
||||
setFunctionArgs result (functionArgs result) // {
|
||||
override = overrideArgs;
|
||||
}
|
||||
else result);
|
||||
|
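As a hedged sketch of what `makeOverridable` attaches to attribute-set results (the `mkGreeting` function below is invented for illustration):

```nix
# Calling the wrapped function yields the result plus an `override` attribute
# that re-calls it with updated arguments.
let
  lib = import <nixpkgs/lib>;
  mkGreeting = lib.makeOverridable
    ({ name, greeting ? "hello" }: { text = "${greeting}, ${name}"; });
  base = mkGreeting { name = "world"; };
in
{
  original   = base.text;                                      # "hello, world"
  overridden = (base.override { greeting = "goodbye"; }).text; # "goodbye, world"
}
```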
@@ -140,39 +152,39 @@ rec {
|
|||
*/
|
||||
callPackageWith = autoArgs: fn: args:
|
||||
let
|
||||
f = if lib.isFunction fn then fn else import fn;
|
||||
fargs = lib.functionArgs f;
|
||||
f = if isFunction fn then fn else import fn;
|
||||
fargs = functionArgs f;
|
||||
|
||||
# All arguments that will be passed to the function
|
||||
# This includes automatic ones and ones passed explicitly
|
||||
allArgs = builtins.intersectAttrs fargs autoArgs // args;
|
||||
allArgs = intersectAttrs fargs autoArgs // args;
|
||||
|
||||
# a list of argument names that the function requires, but
|
||||
# wouldn't be passed to it
|
||||
missingArgs = lib.attrNames
|
||||
missingArgs =
|
||||
# Filter out arguments that have a default value
|
||||
(lib.filterAttrs (name: value: ! value)
|
||||
(filterAttrs (name: value: ! value)
|
||||
# Filter out arguments that would be passed
|
||||
(removeAttrs fargs (lib.attrNames allArgs)));
|
||||
(removeAttrs fargs (attrNames allArgs)));
|
||||
|
||||
# Get a list of suggested argument names for a given missing one
|
||||
getSuggestions = arg: lib.pipe (autoArgs // args) [
|
||||
lib.attrNames
|
||||
getSuggestions = arg: pipe (autoArgs // args) [
|
||||
attrNames
|
||||
# Only use ones that are at most 2 edits away. While more would work,
|
||||
# levenshteinAtMost is only fast for 2 or less.
|
||||
(lib.filter (lib.strings.levenshteinAtMost 2 arg))
|
||||
(filter (levenshteinAtMost 2 arg))
|
||||
# Put strings with shorter distance first
|
||||
(lib.sort (x: y: lib.strings.levenshtein x arg < lib.strings.levenshtein y arg))
|
||||
(sortOn (levenshtein arg))
|
||||
# Only take the first couple results
|
||||
(lib.take 3)
|
||||
(take 3)
|
||||
# Quote all entries
|
||||
(map (x: "\"" + x + "\""))
|
||||
];
|
||||
|
||||
prettySuggestions = suggestions:
|
||||
if suggestions == [] then ""
|
||||
else if lib.length suggestions == 1 then ", did you mean ${lib.elemAt suggestions 0}?"
|
||||
else ", did you mean ${lib.concatStringsSep ", " (lib.init suggestions)} or ${lib.last suggestions}?";
|
||||
else if length suggestions == 1 then ", did you mean ${elemAt suggestions 0}?"
|
||||
else ", did you mean ${concatStringsSep ", " (lib.init suggestions)} or ${lib.last suggestions}?";
|
||||
|
||||
errorForArg = arg:
|
||||
let
|
||||
|
@@ -180,16 +192,18 @@ rec {
|
|||
# loc' can be removed once lib/minver.nix is >2.3.4, since that includes
|
||||
# https://github.com/NixOS/nix/pull/3468 which makes loc be non-null
|
||||
loc' = if loc != null then loc.file + ":" + toString loc.line
|
||||
else if ! lib.isFunction fn then
|
||||
toString fn + lib.optionalString (lib.sources.pathIsDirectory fn) "/default.nix"
|
||||
else if ! isFunction fn then
|
||||
toString fn + optionalString (pathIsDirectory fn) "/default.nix"
|
||||
else "<unknown location>";
|
||||
in "Function called without required argument \"${arg}\" at "
|
||||
+ "${loc'}${prettySuggestions (getSuggestions arg)}";
|
||||
|
||||
# Only show the error for the first missing argument
|
||||
error = errorForArg (lib.head missingArgs);
|
||||
error = errorForArg (head (attrNames missingArgs));
|
||||
|
||||
in if missingArgs == [] then makeOverridable f allArgs else abort error;
|
||||
in if missingArgs == {}
|
||||
then makeOverridable f allArgs
|
||||
else throw "lib.customisation.callPackageWith: ${error}";
|
||||
|
||||
|
||||
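A small sketch (with assumed names, not taken from the diff) of the missing-argument error that the hunk above now produces, including the edit-distance suggestions:

```nix
# Evaluating this throws roughly:
#   lib.customisation.callPackageWith: Function called without required
#   argument "helo" at <...>, did you mean "hello"?
let
  lib = import <nixpkgs/lib>;
  autoArgs = { hello = "some value"; stdenv = "another value"; };
in
lib.callPackageWith autoArgs ({ helo }: helo) { }
```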
/* Like callPackage, but for a function that returns an attribute
|
||||
|
@@ -201,17 +215,17 @@ rec {
|
|||
*/
|
||||
callPackagesWith = autoArgs: fn: args:
|
||||
let
|
||||
f = if lib.isFunction fn then fn else import fn;
|
||||
auto = builtins.intersectAttrs (lib.functionArgs f) autoArgs;
|
||||
f = if isFunction fn then fn else import fn;
|
||||
auto = intersectAttrs (functionArgs f) autoArgs;
|
||||
origArgs = auto // args;
|
||||
pkgs = f origArgs;
|
||||
mkAttrOverridable = name: _: makeOverridable (newArgs: (f newArgs).${name}) origArgs;
|
||||
in
|
||||
if lib.isDerivation pkgs then throw
|
||||
if isDerivation pkgs then throw
|
||||
("function `callPackages` was called on a *single* derivation "
|
||||
+ ''"${pkgs.name or "<unknown-name>"}";''
|
||||
+ " did you mean to use `callPackage` instead?")
|
||||
else lib.mapAttrs mkAttrOverridable pkgs;
|
||||
else mapAttrs mkAttrOverridable pkgs;
|
||||
|
||||
|
||||
/* Add attributes to each output of a derivation without changing
|
||||
|
@@ -224,7 +238,7 @@ rec {
|
|||
let
|
||||
outputs = drv.outputs or [ "out" ];
|
||||
|
||||
commonAttrs = drv // (builtins.listToAttrs outputsList) //
|
||||
commonAttrs = drv // (listToAttrs outputsList) //
|
||||
({ all = map (x: x.value) outputsList; }) // passthru;
|
||||
|
||||
outputToAttrListElement = outputName:
|
||||
|
@@ -238,7 +252,7 @@ rec {
|
|||
# TODO: give the derivation control over the outputs.
|
||||
# `overrideAttrs` may not be the only attribute that needs
|
||||
# updating when switching outputs.
|
||||
lib.optionalAttrs (passthru?overrideAttrs) {
|
||||
optionalAttrs (passthru?overrideAttrs) {
|
||||
# TODO: also add overrideAttrs when overrideAttrs is not custom, e.g. when not splicing.
|
||||
overrideAttrs = f: (passthru.overrideAttrs f).${outputName};
|
||||
};
|
||||
|
@@ -264,11 +278,11 @@ rec {
|
|||
|
||||
commonAttrs =
|
||||
{ inherit (drv) name system meta; inherit outputs; }
|
||||
// lib.optionalAttrs (drv._hydraAggregate or false) {
|
||||
// optionalAttrs (drv._hydraAggregate or false) {
|
||||
_hydraAggregate = true;
|
||||
constituents = map hydraJob (lib.flatten drv.constituents);
|
||||
constituents = map hydraJob (flatten drv.constituents);
|
||||
}
|
||||
// (lib.listToAttrs outputsList);
|
||||
// (listToAttrs outputsList);
|
||||
|
||||
makeOutput = outputName:
|
||||
let output = drv.${outputName}; in
|
||||
|
@@ -283,9 +297,9 @@ rec {
|
|||
|
||||
outputsList = map makeOutput outputs;
|
||||
|
||||
drv' = (lib.head outputsList).value;
|
||||
drv' = (head outputsList).value;
|
||||
in if drv == null then null else
|
||||
lib.deepSeq drv' drv';
|
||||
deepSeq drv' drv';
|
||||
|
||||
/* Make a set of packages with a common scope. All packages called
|
||||
with the provided `callPackage` will be evaluated with the same
|
||||
|
@@ -304,11 +318,11 @@ rec {
|
|||
let self = f self // {
|
||||
newScope = scope: newScope (self // scope);
|
||||
callPackage = self.newScope {};
|
||||
overrideScope = g: makeScope newScope (lib.fixedPoints.extends g f);
|
||||
overrideScope = g: makeScope newScope (extends g f);
|
||||
# Remove after 24.11 is released.
|
||||
overrideScope' = g: lib.warnIf (lib.isInOldestRelease 2311)
|
||||
overrideScope' = g: warnIf (isInOldestRelease 2311)
|
||||
"`overrideScope'` (from `lib.makeScope`) has been renamed to `overrideScope`."
|
||||
(makeScope newScope (lib.fixedPoints.extends g f));
|
||||
(makeScope newScope (extends g f));
|
||||
packages = f;
|
||||
};
|
||||
in self;
|
||||
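For reviewers of the `overrideScope'` deprecation above, a minimal illustrative scope (names invented for the example):

```nix
# overrideScope re-creates the scope with an extended fixed point;
# overrideScope' now only warns and forwards to it.
let
  lib = import <nixpkgs/lib>;
  scope = lib.makeScope lib.callPackageWith (self: {
    greeting = "hello";
    message = self.callPackage ({ greeting }: "${greeting}, world") { };
  });
in
(scope.overrideScope (final: prev: { greeting = "goodbye"; })).message
# => "goodbye, world"
```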
|
@@ -384,7 +398,7 @@ rec {
|
|||
overrideScope = g: (makeScopeWithSplicing'
|
||||
{ inherit splicePackages newScope; }
|
||||
{ inherit otherSplices keep extra;
|
||||
f = lib.fixedPoints.extends g f;
|
||||
f = extends g f;
|
||||
});
|
||||
packages = f;
|
||||
};
|
||||
|
|
5
third_party/nixpkgs/lib/default.nix
vendored
|
@@ -91,7 +91,7 @@ let
|
|||
inherit (self.lists) singleton forEach foldr fold foldl foldl' imap0 imap1
|
||||
concatMap flatten remove findSingle findFirst any all count
|
||||
optional optionals toList range replicate partition zipListsWith zipLists
|
||||
reverseList listDfs toposort sort naturalSort compareLists take
|
||||
reverseList listDfs toposort sort sortOn naturalSort compareLists take
|
||||
drop sublist last init crossLists unique allUnique intersectLists
|
||||
subtractLists mutuallyExclusive groupBy groupBy';
|
||||
inherit (self.strings) concatStrings concatMapStrings concatImapStrings
|
||||
|
@@ -120,7 +120,8 @@ let
|
|||
inherit (self.meta) addMetaAttrs dontDistribute setName updateName
|
||||
appendToName mapDerivationAttrset setPrio lowPrio lowPrioSet hiPrio
|
||||
hiPrioSet getLicenseFromSpdxId getExe getExe';
|
||||
inherit (self.filesystem) pathType pathIsDirectory pathIsRegularFile;
|
||||
inherit (self.filesystem) pathType pathIsDirectory pathIsRegularFile
|
||||
packagesFromDirectoryRecursive;
|
||||
inherit (self.sources) cleanSourceFilter
|
||||
cleanSource sourceByRegex sourceFilesBySuffices
|
||||
commitIdFromGitRepo cleanSourceWith pathHasContext
|
||||
|
|
14
third_party/nixpkgs/lib/fileset/README.md
vendored
|
@ -253,7 +253,15 @@ The `fileFilter` function takes a path, and not a file set, as its second argume
|
|||
it would change the `subpath`/`components` value depending on which files are included.
|
||||
- (+) If necessary, this restriction can be relaxed later, the opposite wouldn't be possible
|
||||
|
||||
## To update in the future
|
||||
### Strict path existence checking
|
||||
|
||||
Here's a list of places in the library that need to be updated in the future:
|
||||
- If/Once a function exists that can optionally include a path depending on whether it exists, the error message for the path not existing in `_coerce` should mention the new function
|
||||
Coercing paths that don't exist to file sets always gives an error.
|
||||
|
||||
- (-) Sometimes you want to remove a file that may not always exist using `difference ./. ./does-not-exist`,
|
||||
but this does not work because coercion of `./does-not-exist` fails,
|
||||
even though its existence would have no influence on the result.
|
||||
- (+) This is dangerous, because you wouldn't be protected against typos anymore.
|
||||
E.g. when trying to prevent `./secret` from being imported, a typo like `difference ./. ./sercet` would import it regardless.
|
||||
- (+) `difference ./. (maybeMissing ./does-not-exist)` can be used to do this more explicitly.
|
||||
- (+) `difference ./. (difference ./foo ./foo/bar)` should report an error when `./foo/bar` does not exist ("double negation"). Unfortunately, the current internal representation does not lend itself to a behavior where both `difference x ./does-not-exists` and double negation are handled and checked correctly.
|
||||
This could be fixed, but would require significant changes to the internal representation that are not worth the effort and the risk of introducing implicit behavior.
|
||||
|
|
583
third_party/nixpkgs/lib/fileset/default.nix
vendored
|
@@ -1,3 +1,98 @@
|
|||
/*
|
||||
<!-- This anchor is here for backwards compatibility -->
|
||||
[]{#sec-fileset}
|
||||
|
||||
The [`lib.fileset`](#sec-functions-library-fileset) library allows you to work with _file sets_.
|
||||
A file set is a (mathematical) set of local files that can be added to the Nix store for use in Nix derivations.
|
||||
File sets are easy and safe to use, providing obvious and composable semantics with good error messages to prevent mistakes.
|
||||
|
||||
## Overview {#sec-fileset-overview}
|
||||
|
||||
Basics:
|
||||
- [Implicit coercion from paths to file sets](#sec-fileset-path-coercion)
|
||||
|
||||
- [`lib.fileset.maybeMissing`](#function-library-lib.fileset.maybeMissing):
|
||||
|
||||
Create a file set from a path that may be missing.
|
||||
|
||||
- [`lib.fileset.trace`](#function-library-lib.fileset.trace)/[`lib.fileset.traceVal`](#function-library-lib.fileset.traceVal):
|
||||
|
||||
Pretty-print file sets for debugging.
|
||||
|
||||
- [`lib.fileset.toSource`](#function-library-lib.fileset.toSource):
|
||||
|
||||
Add files in file sets to the store to use as derivation sources.
|
||||
|
||||
Combinators:
|
||||
- [`lib.fileset.union`](#function-library-lib.fileset.union)/[`lib.fileset.unions`](#function-library-lib.fileset.unions):
|
||||
|
||||
Create a larger file set from all the files in multiple file sets.
|
||||
|
||||
- [`lib.fileset.intersection`](#function-library-lib.fileset.intersection):
|
||||
|
||||
Create a smaller file set from only the files in both file sets.
|
||||
|
||||
- [`lib.fileset.difference`](#function-library-lib.fileset.difference):
|
||||
|
||||
Create a smaller file set containing all files that are in one file set, but not another one.
|
||||
|
||||
Filtering:
|
||||
- [`lib.fileset.fileFilter`](#function-library-lib.fileset.fileFilter):
|
||||
|
||||
Create a file set from all files that satisfy a predicate in a directory.
|
||||
|
||||
Utilities:
|
||||
- [`lib.fileset.fromSource`](#function-library-lib.fileset.fromSource):
|
||||
|
||||
Create a file set from a `lib.sources`-based value.
|
||||
|
||||
- [`lib.fileset.gitTracked`](#function-library-lib.fileset.gitTracked)/[`lib.fileset.gitTrackedWith`](#function-library-lib.fileset.gitTrackedWith):
|
||||
|
||||
Create a file set from all tracked files in a local Git repository.
|
||||
|
||||
If you need more file set functions,
|
||||
see [this issue](https://github.com/NixOS/nixpkgs/issues/266356) to request them.
|
||||
|
||||
|
||||
## Implicit coercion from paths to file sets {#sec-fileset-path-coercion}
|
||||
|
||||
All functions accepting file sets as arguments can also accept [paths](https://nixos.org/manual/nix/stable/language/values.html#type-path) as arguments.
|
||||
Such path arguments are implicitly coerced to file sets containing all files under that path:
|
||||
- A path to a file turns into a file set containing that single file.
|
||||
- A path to a directory turns into a file set containing all files _recursively_ in that directory.
|
||||
|
||||
If the path points to a non-existent location, an error is thrown.
|
||||
|
||||
::: {.note}
|
||||
Just like in Git, file sets cannot represent empty directories.
|
||||
Because of this, a path to a directory that contains no files (recursively) will turn into a file set containing no files.
|
||||
:::
|
||||
|
||||
:::{.note}
|
||||
File set coercion does _not_ add any of the files under the coerced paths to the store.
|
||||
Only the [`toSource`](#function-library-lib.fileset.toSource) function adds files to the Nix store, and only those files contained in the `fileset` argument.
|
||||
This is in contrast to using [paths in string interpolation](https://nixos.org/manual/nix/stable/language/values.html#type-path), which does add the entire referenced path to the store.
|
||||
:::
|
||||
|
||||
### Example {#sec-fileset-path-coercion-example}
|
||||
|
||||
Assume we are in a local directory with a file hierarchy like this:
|
||||
```
|
||||
├─ a/
|
||||
│ ├─ x (file)
|
||||
│ └─ b/
|
||||
│ └─ y (file)
|
||||
└─ c/
|
||||
└─ d/
|
||||
```
|
||||
|
||||
Here's a listing of which files get included when different path expressions get coerced to file sets:
|
||||
- `./.` as a file set contains both `a/x` and `a/b/y` (`c/` does not contain any files and is therefore omitted).
|
||||
- `./a` as a file set contains both `a/x` and `a/b/y`.
|
||||
- `./a/x` as a file set contains only `a/x`.
|
||||
- `./a/b` as a file set contains only `a/b/y`.
|
||||
- `./c` as a file set is empty, since neither `c` nor `c/d` contain any files.
|
||||
*/
|
||||
{ lib }:
|
||||
let
|
||||
|
||||
|
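To complement the coercion listing in the new module docs above, a hedged end-to-end sketch (it assumes the `a/x`, `a/b/y`, `c/d` layout from the example actually exists next to the file being evaluated):

```nix
# Only a/x and a/b/y end up in the store path; ./c contains no files,
# so coercing it yields an empty file set and contributes nothing.
let
  fs = (import <nixpkgs/lib>).fileset;
in
fs.toSource {
  root = ./.;
  fileset = fs.unions [ ./a ./c ];
}
```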
@@ -12,8 +107,9 @@ let
|
|||
_printFileset
|
||||
_intersection
|
||||
_difference
|
||||
_mirrorStorePath
|
||||
_fromFetchGit
|
||||
_fetchGitSubmodulesMinver
|
||||
_emptyWithoutBase
|
||||
;
|
||||
|
||||
inherit (builtins)
|
||||
|
@ -52,11 +148,126 @@ let
|
|||
inherit (lib.trivial)
|
||||
isFunction
|
||||
pipe
|
||||
inPureEvalMode
|
||||
;
|
||||
|
||||
in {
|
||||
|
||||
/*
|
||||
Create a file set from a path that may or may not exist:
|
||||
- If the path does exist, the path is [coerced to a file set](#sec-fileset-path-coercion).
|
||||
- If the path does not exist, a file set containing no files is returned.
|
||||
|
||||
Type:
|
||||
maybeMissing :: Path -> FileSet
|
||||
|
||||
Example:
|
||||
# All files in the current directory, but excluding main.o if it exists
|
||||
difference ./. (maybeMissing ./main.o)
|
||||
*/
|
||||
maybeMissing =
|
||||
path:
|
||||
if ! isPath path then
|
||||
if isStringLike path then
|
||||
throw ''
|
||||
lib.fileset.maybeMissing: Argument ("${toString path}") is a string-like value, but it should be a path instead.''
|
||||
else
|
||||
throw ''
|
||||
lib.fileset.maybeMissing: Argument is of type ${typeOf path}, but it should be a path instead.''
|
||||
else if ! pathExists path then
|
||||
_emptyWithoutBase
|
||||
else
|
||||
_singleton path;
|
||||
|
||||
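Building on the docstring example above, a slightly larger hedged sketch of `maybeMissing` in a `toSource` call (the `./main.o` path is an assumption):

```nix
# Exclude a build artifact that may or may not be present without
# tripping the "path does not exist" coercion error.
let
  fs = (import <nixpkgs/lib>).fileset;
in
fs.toSource {
  root = ./.;
  fileset = fs.difference ./. (fs.maybeMissing ./main.o);
}
```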
/*
|
||||
Incrementally evaluate and trace a file set in a pretty way.
|
||||
This function is only intended for debugging purposes.
|
||||
The exact tracing format is unspecified and may change.
|
||||
|
||||
This function takes a final argument to return.
|
||||
In comparison, [`traceVal`](#function-library-lib.fileset.traceVal) returns
|
||||
the given file set argument.
|
||||
|
||||
This variant is useful for tracing file sets in the Nix repl.
|
||||
|
||||
Type:
|
||||
trace :: FileSet -> Any -> Any
|
||||
|
||||
Example:
|
||||
trace (unions [ ./Makefile ./src ./tests/run.sh ]) null
|
||||
=>
|
||||
trace: /home/user/src/myProject
|
||||
trace: - Makefile (regular)
|
||||
trace: - src (all files in directory)
|
||||
trace: - tests
|
||||
trace: - run.sh (regular)
|
||||
null
|
||||
*/
|
||||
trace =
|
||||
/*
|
||||
The file set to trace.
|
||||
|
||||
This argument can also be a path,
|
||||
which gets [implicitly coerced to a file set](#sec-fileset-path-coercion).
|
||||
*/
|
||||
fileset:
|
||||
let
|
||||
# "fileset" would be a better name, but that would clash with the argument name,
|
||||
# and we cannot change that because of https://github.com/nix-community/nixdoc/issues/76
|
||||
actualFileset = _coerce "lib.fileset.trace: Argument" fileset;
|
||||
in
|
||||
seq
|
||||
(_printFileset actualFileset)
|
||||
(x: x);
|
||||
|
||||
/*
|
||||
Incrementally evaluate and trace a file set in a pretty way.
|
||||
This function is only intended for debugging purposes.
|
||||
The exact tracing format is unspecified and may change.
|
||||
|
||||
This function returns the given file set.
|
||||
In comparison, [`trace`](#function-library-lib.fileset.trace) takes another argument to return.
|
||||
|
||||
This variant is useful for tracing file sets passed as arguments to other functions.
|
||||
|
||||
Type:
|
||||
traceVal :: FileSet -> FileSet
|
||||
|
||||
Example:
|
||||
toSource {
|
||||
root = ./.;
|
||||
fileset = traceVal (unions [
|
||||
./Makefile
|
||||
./src
|
||||
./tests/run.sh
|
||||
]);
|
||||
}
|
||||
=>
|
||||
trace: /home/user/src/myProject
|
||||
trace: - Makefile (regular)
|
||||
trace: - src (all files in directory)
|
||||
trace: - tests
|
||||
trace: - run.sh (regular)
|
||||
"/nix/store/...-source"
|
||||
*/
|
||||
traceVal =
|
||||
/*
|
||||
The file set to trace and return.
|
||||
|
||||
This argument can also be a path,
|
||||
which gets [implicitly coerced to a file set](#sec-fileset-path-coercion).
|
||||
*/
|
||||
fileset:
|
||||
let
|
||||
# "fileset" would be a better name, but that would clash with the argument name,
|
||||
# and we cannot change that because of https://github.com/nix-community/nixdoc/issues/76
|
||||
actualFileset = _coerce "lib.fileset.traceVal: Argument" fileset;
|
||||
in
|
||||
seq
|
||||
(_printFileset actualFileset)
|
||||
# We could also return the original fileset argument here,
|
||||
# but that would then duplicate work for consumers of the fileset, because then they have to coerce it again
|
||||
actualFileset;
|
||||
|
||||
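A quick hedged example of wiring these debugging helpers into an existing `toSource` call (`./tests` is an assumed subdirectory and must exist for the coercion not to throw):

```nix
# In `nix repl`, `trace` is handy because you choose the value to return:
#   nix-repl> fs = (import <nixpkgs/lib>).fileset
#   nix-repl> fs.trace (fs.difference ./. ./tests) null
# Outside the repl, traceVal is usually more convenient, since it
# pretty-prints the file set and then returns it unchanged:
let
  fs = (import <nixpkgs/lib>).fileset;
in
fs.toSource {
  root = ./.;
  fileset = fs.traceVal (fs.difference ./. ./tests);
}
```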
/*
|
||||
Add the local files contained in `fileset` to the store as a single [store path](https://nixos.org/manual/nix/stable/glossary#gloss-store-path) rooted at `root`.
|
||||
|
||||
|
@ -201,75 +412,6 @@ in {
|
|||
filter = sourceFilter;
|
||||
};
|
||||
|
||||
/*
|
||||
Create a file set with the same files as a `lib.sources`-based value.
|
||||
This does not import any of the files into the store.
|
||||
|
||||
This can be used to gradually migrate from `lib.sources`-based filtering to `lib.fileset`.
|
||||
|
||||
A file set can be turned back into a source using [`toSource`](#function-library-lib.fileset.toSource).
|
||||
|
||||
:::{.note}
|
||||
File sets cannot represent empty directories.
|
||||
Turning the result of this function back into a source using `toSource` will therefore not preserve empty directories.
|
||||
:::
|
||||
|
||||
Type:
|
||||
fromSource :: SourceLike -> FileSet
|
||||
|
||||
Example:
|
||||
# There's no cleanSource-like function for file sets yet,
|
||||
# but we can just convert cleanSource to a file set and use it that way
|
||||
toSource {
|
||||
root = ./.;
|
||||
fileset = fromSource (lib.sources.cleanSource ./.);
|
||||
}
|
||||
|
||||
# Keeping a previous sourceByRegex (which could be migrated to `lib.fileset.unions`),
|
||||
# but removing a subdirectory using file set functions
|
||||
difference
|
||||
(fromSource (lib.sources.sourceByRegex ./. [
|
||||
"^README\.md$"
|
||||
# This regex includes everything in ./doc
|
||||
"^doc(/.*)?$"
|
||||
])
|
||||
./doc/generated
|
||||
|
||||
# Use cleanSource, but limit it to only include ./Makefile and files under ./src
|
||||
intersection
|
||||
(fromSource (lib.sources.cleanSource ./.))
|
||||
(unions [
|
||||
./Makefile
|
||||
./src
|
||||
]);
|
||||
*/
|
||||
fromSource = source:
|
||||
let
|
||||
# This function uses `._isLibCleanSourceWith`, `.origSrc` and `.filter`,
|
||||
# which are technically internal to lib.sources,
|
||||
# but we'll allow this since both libraries are in the same code base
|
||||
# and this function is a bridge between them.
|
||||
isFiltered = source ? _isLibCleanSourceWith;
|
||||
path = if isFiltered then source.origSrc else source;
|
||||
in
|
||||
# We can only support sources created from paths
|
||||
if ! isPath path then
|
||||
if isStringLike path then
|
||||
throw ''
|
||||
lib.fileset.fromSource: The source origin of the argument is a string-like value ("${toString path}"), but it should be a path instead.
|
||||
Sources created from paths in strings cannot be turned into file sets, use `lib.sources` or derivations instead.''
|
||||
else
|
||||
throw ''
|
||||
lib.fileset.fromSource: The source origin of the argument is of type ${typeOf path}, but it should be a path instead.''
|
||||
else if ! pathExists path then
|
||||
throw ''
|
||||
lib.fileset.fromSource: The source origin (${toString path}) of the argument does not exist.''
|
||||
else if isFiltered then
|
||||
_fromSourceFilter path source.filter
|
||||
else
|
||||
# If there's no filter, no need to run the expensive conversion, all subpaths will be included
|
||||
_singleton path;
|
||||
|
||||
/*
|
||||
The file set containing all files that are in either of two given file sets.
|
||||
This is the same as [`unions`](#function-library-lib.fileset.unions),
|
||||
|
@ -362,66 +504,6 @@ in {
|
|||
_unionMany
|
||||
];
|
||||
|
||||
/*
|
||||
Filter a file set to only contain files matching some predicate.
|
||||
|
||||
Type:
|
||||
fileFilter ::
|
||||
({
|
||||
name :: String,
|
||||
type :: String,
|
||||
...
|
||||
} -> Bool)
|
||||
-> Path
|
||||
-> FileSet
|
||||
|
||||
Example:
|
||||
# Include all regular `default.nix` files in the current directory
|
||||
fileFilter (file: file.name == "default.nix") ./.
|
||||
|
||||
# Include all non-Nix files from the current directory
|
||||
fileFilter (file: ! hasSuffix ".nix" file.name) ./.
|
||||
|
||||
# Include all files that start with a "." in the current directory
|
||||
fileFilter (file: hasPrefix "." file.name) ./.
|
||||
|
||||
# Include all regular files (not symlinks or others) in the current directory
|
||||
fileFilter (file: file.type == "regular") ./.
|
||||
*/
|
||||
fileFilter =
|
||||
/*
|
||||
The predicate function to call on all files contained in given file set.
|
||||
A file is included in the resulting file set if this function returns true for it.
|
||||
|
||||
This function is called with an attribute set containing these attributes:
|
||||
|
||||
- `name` (String): The name of the file
|
||||
|
||||
- `type` (String, one of `"regular"`, `"symlink"` or `"unknown"`): The type of the file.
|
||||
This matches the result of calling [`builtins.readFileType`](https://nixos.org/manual/nix/stable/language/builtins.html#builtins-readFileType) on the file's path.
|
||||
|
||||
Other attributes may be added in the future.
|
||||
*/
|
||||
predicate:
|
||||
# The path whose files to filter
|
||||
path:
|
||||
if ! isFunction predicate then
|
||||
throw ''
|
||||
lib.fileset.fileFilter: First argument is of type ${typeOf predicate}, but it should be a function instead.''
|
||||
else if ! isPath path then
|
||||
if path._type or "" == "fileset" then
|
||||
throw ''
|
||||
lib.fileset.fileFilter: Second argument is a file set, but it should be a path instead.
|
||||
If you need to filter files in a file set, use `intersection fileset (fileFilter pred ./.)` instead.''
|
||||
else
|
||||
throw ''
|
||||
lib.fileset.fileFilter: Second argument is of type ${typeOf path}, but it should be a path instead.''
|
||||
else if ! pathExists path then
|
||||
throw ''
|
||||
lib.fileset.fileFilter: Second argument (${toString path}) is a path that does not exist.''
|
||||
else
|
||||
_fileFilter predicate path;
|
||||
|
||||
/*
|
||||
The file set containing all files that are in both of two given file sets.
|
||||
See also [Intersection (set theory)](https://en.wikipedia.org/wiki/Intersection_(set_theory)).
|
||||
|
@ -514,94 +596,140 @@ in {
|
|||
(elemAt filesets 1);
|
||||
|
||||
/*
|
||||
Incrementally evaluate and trace a file set in a pretty way.
|
||||
This function is only intended for debugging purposes.
|
||||
The exact tracing format is unspecified and may change.
|
||||
|
||||
This function takes a final argument to return.
|
||||
In comparison, [`traceVal`](#function-library-lib.fileset.traceVal) returns
|
||||
the given file set argument.
|
||||
|
||||
This variant is useful for tracing file sets in the Nix repl.
|
||||
Filter a file set to only contain files matching some predicate.
|
||||
|
||||
Type:
|
||||
trace :: FileSet -> Any -> Any
|
||||
fileFilter ::
|
||||
({
|
||||
name :: String,
|
||||
type :: String,
|
||||
hasExt :: String -> Bool,
|
||||
...
|
||||
} -> Bool)
|
||||
-> Path
|
||||
-> FileSet
|
||||
|
||||
Example:
|
||||
trace (unions [ ./Makefile ./src ./tests/run.sh ]) null
|
||||
=>
|
||||
trace: /home/user/src/myProject
|
||||
trace: - Makefile (regular)
|
||||
trace: - src (all files in directory)
|
||||
trace: - tests
|
||||
trace: - run.sh (regular)
|
||||
null
|
||||
*/
|
||||
trace =
|
||||
/*
|
||||
The file set to trace.
|
||||
# Include all regular `default.nix` files in the current directory
|
||||
fileFilter (file: file.name == "default.nix") ./.
|
||||
|
||||
This argument can also be a path,
|
||||
which gets [implicitly coerced to a file set](#sec-fileset-path-coercion).
|
||||
# Include all non-Nix files from the current directory
|
||||
fileFilter (file: ! file.hasExt "nix") ./.
|
||||
|
||||
# Include all files that start with a "." in the current directory
|
||||
fileFilter (file: hasPrefix "." file.name) ./.
|
||||
|
||||
# Include all regular files (not symlinks or others) in the current directory
|
||||
fileFilter (file: file.type == "regular") ./.
|
||||
*/
|
||||
fileset:
|
||||
let
|
||||
# "fileset" would be a better name, but that would clash with the argument name,
|
||||
# and we cannot change that because of https://github.com/nix-community/nixdoc/issues/76
|
||||
actualFileset = _coerce "lib.fileset.trace: Argument" fileset;
|
||||
in
|
||||
seq
|
||||
(_printFileset actualFileset)
|
||||
(x: x);
|
||||
fileFilter =
|
||||
/*
|
||||
The predicate function to call on all files contained in given file set.
|
||||
A file is included in the resulting file set if this function returns true for it.
|
||||
|
||||
This function is called with an attribute set containing these attributes:
|
||||
|
||||
- `name` (String): The name of the file
|
||||
|
||||
- `type` (String, one of `"regular"`, `"symlink"` or `"unknown"`): The type of the file.
|
||||
This matches the result of calling [`builtins.readFileType`](https://nixos.org/manual/nix/stable/language/builtins.html#builtins-readFileType) on the file's path.
|
||||
|
||||
- `hasExt` (String -> Bool): Whether the file has a certain file extension.
|
||||
`hasExt ext` is true only if `hasSuffix ".${ext}" name`.
|
||||
|
||||
This also means that e.g. for a file with name `.gitignore`,
|
||||
`hasExt "gitignore"` is true.
|
||||
|
||||
Other attributes may be added in the future.
|
||||
*/
|
||||
predicate:
|
||||
# The path whose files to filter
|
||||
path:
|
||||
if ! isFunction predicate then
|
||||
throw ''
|
||||
lib.fileset.fileFilter: First argument is of type ${typeOf predicate}, but it should be a function instead.''
|
||||
else if ! isPath path then
|
||||
if path._type or "" == "fileset" then
|
||||
throw ''
|
||||
lib.fileset.fileFilter: Second argument is a file set, but it should be a path instead.
|
||||
If you need to filter files in a file set, use `intersection fileset (fileFilter pred ./.)` instead.''
|
||||
else
|
||||
throw ''
|
||||
lib.fileset.fileFilter: Second argument is of type ${typeOf path}, but it should be a path instead.''
|
||||
else if ! pathExists path then
|
||||
throw ''
|
||||
lib.fileset.fileFilter: Second argument (${toString path}) is a path that does not exist.''
|
||||
else
|
||||
_fileFilter predicate path;
|
||||
|
||||
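The error message above recommends `intersection fileset (fileFilter pred ./.)` for filtering an existing file set; here is a hedged sketch of that pattern using the new `hasExt` attribute (`./src` and `./default.nix` are assumed paths):

```nix
# Keep only the Nix files out of a larger, previously-built file set.
let
  fs = (import <nixpkgs/lib>).fileset;
  sources = fs.unions [ ./src ./default.nix ];
in
fs.intersection sources (fs.fileFilter (file: file.hasExt "nix") ./.)
```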
/*
|
||||
Incrementally evaluate and trace a file set in a pretty way.
|
||||
This function is only intended for debugging purposes.
|
||||
The exact tracing format is unspecified and may change.
|
||||
Create a file set with the same files as a `lib.sources`-based value.
|
||||
This does not import any of the files into the store.
|
||||
|
||||
This function returns the given file set.
|
||||
In comparison, [`trace`](#function-library-lib.fileset.trace) takes another argument to return.
|
||||
This can be used to gradually migrate from `lib.sources`-based filtering to `lib.fileset`.
|
||||
|
||||
This variant is useful for tracing file sets passed as arguments to other functions.
|
||||
A file set can be turned back into a source using [`toSource`](#function-library-lib.fileset.toSource).
|
||||
|
||||
:::{.note}
|
||||
File sets cannot represent empty directories.
|
||||
Turning the result of this function back into a source using `toSource` will therefore not preserve empty directories.
|
||||
:::
|
||||
|
||||
Type:
|
||||
traceVal :: FileSet -> FileSet
|
||||
fromSource :: SourceLike -> FileSet
|
||||
|
||||
Example:
|
||||
# There's no cleanSource-like function for file sets yet,
|
||||
# but we can just convert cleanSource to a file set and use it that way
|
||||
toSource {
|
||||
root = ./.;
|
||||
fileset = traceVal (unions [
|
||||
fileset = fromSource (lib.sources.cleanSource ./.);
|
||||
}
|
||||
|
||||
# Keeping a previous sourceByRegex (which could be migrated to `lib.fileset.unions`),
|
||||
# but removing a subdirectory using file set functions
|
||||
difference
|
||||
(fromSource (lib.sources.sourceByRegex ./. [
|
||||
"^README\.md$"
|
||||
# This regex includes everything in ./doc
|
||||
"^doc(/.*)?$"
|
||||
])
|
||||
./doc/generated
|
||||
|
||||
# Use cleanSource, but limit it to only include ./Makefile and files under ./src
|
||||
intersection
|
||||
(fromSource (lib.sources.cleanSource ./.))
|
||||
(unions [
|
||||
./Makefile
|
||||
./src
|
||||
./tests/run.sh
|
||||
]);
|
||||
}
|
||||
=>
|
||||
trace: /home/user/src/myProject
|
||||
trace: - Makefile (regular)
|
||||
trace: - src (all files in directory)
|
||||
trace: - tests
|
||||
trace: - run.sh (regular)
|
||||
"/nix/store/...-source"
|
||||
*/
|
||||
traceVal =
|
||||
/*
|
||||
The file set to trace and return.
|
||||
|
||||
This argument can also be a path,
|
||||
which gets [implicitly coerced to a file set](#sec-fileset-path-coercion).
|
||||
*/
|
||||
fileset:
|
||||
fromSource = source:
|
||||
let
|
||||
# "fileset" would be a better name, but that would clash with the argument name,
|
||||
# and we cannot change that because of https://github.com/nix-community/nixdoc/issues/76
|
||||
actualFileset = _coerce "lib.fileset.traceVal: Argument" fileset;
|
||||
# This function uses `._isLibCleanSourceWith`, `.origSrc` and `.filter`,
|
||||
# which are technically internal to lib.sources,
|
||||
# but we'll allow this since both libraries are in the same code base
|
||||
# and this function is a bridge between them.
|
||||
isFiltered = source ? _isLibCleanSourceWith;
|
||||
path = if isFiltered then source.origSrc else source;
|
||||
in
|
||||
seq
|
||||
(_printFileset actualFileset)
|
||||
# We could also return the original fileset argument here,
|
||||
# but that would then duplicate work for consumers of the fileset, because then they have to coerce it again
|
||||
actualFileset;
|
||||
# We can only support sources created from paths
|
||||
if ! isPath path then
|
||||
if isStringLike path then
|
||||
throw ''
|
||||
lib.fileset.fromSource: The source origin of the argument is a string-like value ("${toString path}"), but it should be a path instead.
|
||||
Sources created from paths in strings cannot be turned into file sets, use `lib.sources` or derivations instead.''
|
||||
else
|
||||
throw ''
|
||||
lib.fileset.fromSource: The source origin of the argument is of type ${typeOf path}, but it should be a path instead.''
|
||||
else if ! pathExists path then
|
||||
throw ''
|
||||
lib.fileset.fromSource: The source origin (${toString path}) of the argument is a path that does not exist.''
|
||||
else if isFiltered then
|
||||
_fromSourceFilter path source.filter
|
||||
else
|
||||
# If there's no filter, no need to run the expensive conversion, all subpaths will be included
|
||||
_singleton path;
|
||||
|
||||
/*
|
||||
Create a file set containing all [Git-tracked files](https://git-scm.com/book/en/v2/Git-Basics-Recording-Changes-to-the-Repository) in a repository.
|
||||
|
@ -625,23 +753,22 @@ in {
|
|||
This directory must contain a `.git` file or subdirectory.
|
||||
*/
|
||||
path:
|
||||
# See the gitTrackedWith implementation for more explanatory comments
|
||||
let
|
||||
fetchResult = builtins.fetchGit path;
|
||||
in
|
||||
if inPureEvalMode then
|
||||
throw "lib.fileset.gitTracked: This function is currently not supported in pure evaluation mode, since it currently relies on `builtins.fetchGit`. See https://github.com/NixOS/nix/issues/9292."
|
||||
else if ! isPath path then
|
||||
throw "lib.fileset.gitTracked: Expected the argument to be a path, but it's a ${typeOf path} instead."
|
||||
else if ! pathExists (path + "/.git") then
|
||||
throw "lib.fileset.gitTracked: Expected the argument (${toString path}) to point to a local working tree of a Git repository, but it's not."
|
||||
else
|
||||
_mirrorStorePath path fetchResult.outPath;
|
||||
_fromFetchGit
|
||||
"gitTracked"
|
||||
"argument"
|
||||
path
|
||||
{};
|
||||
|
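A hedged usage sketch of the slimmed-down `gitTracked` (it must be called on a local working tree, i.e. a directory containing `.git`):

```nix
# Import only the files Git knows about, e.g. to keep build products
# and editor droppings out of the derivation source.
let
  fs = (import <nixpkgs/lib>).fileset;
in
fs.toSource {
  root = ./.;
  fileset = fs.gitTracked ./.;
}
```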
||||
/*
|
||||
Create a file set containing all [Git-tracked files](https://git-scm.com/book/en/v2/Git-Basics-Recording-Changes-to-the-Repository) in a repository.
|
||||
The first argument allows configuration with an attribute set,
|
||||
while the second argument is the path to the Git working tree.
|
||||
|
||||
`gitTrackedWith` does not perform any filtering when the path is a [Nix store path](https://nixos.org/manual/nix/stable/store/store-path.html#store-path) and not a repository.
|
||||
In this way, it accommodates the use case where the expression that makes the `gitTracked` call does not reside in an actual git repository anymore,
|
||||
and has presumably already been fetched in a way that excludes untracked files.
|
||||
Fetchers with such equivalent behavior include `builtins.fetchGit`, `builtins.fetchTree` (experimental), and `pkgs.fetchgit` when used without `leaveDotGit`.
|
||||
|
||||
If you don't need the configuration,
|
||||
you can use [`gitTracked`](#function-library-lib.fileset.gitTracked) instead.
|
||||
|
||||
|
@ -678,35 +805,19 @@ in {
|
|||
This directory must contain a `.git` file or subdirectory.
|
||||
*/
|
||||
path:
|
||||
let
|
||||
# This imports the files unnecessarily, which currently can't be avoided
|
||||
# because `builtins.fetchGit` is the only function exposing which files are tracked by Git.
|
||||
# With the [lazy trees PR](https://github.com/NixOS/nix/pull/6530),
|
||||
# the unnecessary import could be avoided.
|
||||
# However a simpler alternative still would be [a builtins.gitLsFiles](https://github.com/NixOS/nix/issues/2944).
|
||||
fetchResult = builtins.fetchGit {
|
||||
url = path;
|
||||
|
||||
# This is the only `fetchGit` parameter that makes sense in this context.
|
||||
# We can't just pass `submodules = recurseSubmodules` here because
|
||||
# this would fail for Nix versions that don't support `submodules`.
|
||||
${if recurseSubmodules then "submodules" else null} = true;
|
||||
};
|
||||
in
|
||||
if inPureEvalMode then
|
||||
throw "lib.fileset.gitTrackedWith: This function is currently not supported in pure evaluation mode, since it currently relies on `builtins.fetchGit`. See https://github.com/NixOS/nix/issues/9292."
|
||||
else if ! isBool recurseSubmodules then
|
||||
if ! isBool recurseSubmodules then
|
||||
throw "lib.fileset.gitTrackedWith: Expected the attribute `recurseSubmodules` of the first argument to be a boolean, but it's a ${typeOf recurseSubmodules} instead."
|
||||
else if recurseSubmodules && versionOlder nixVersion _fetchGitSubmodulesMinver then
|
||||
throw "lib.fileset.gitTrackedWith: Setting the attribute `recurseSubmodules` to `true` is only supported for Nix version ${_fetchGitSubmodulesMinver} and after, but Nix version ${nixVersion} is used."
|
||||
else if ! isPath path then
|
||||
throw "lib.fileset.gitTrackedWith: Expected the second argument to be a path, but it's a ${typeOf path} instead."
|
||||
# We can identify local working directories by checking for .git,
|
||||
# see https://git-scm.com/docs/gitrepository-layout#_description.
|
||||
# Note that `builtins.fetchGit` _does_ work for bare repositories (where there's no `.git`),
|
||||
# even though `git ls-files` wouldn't return any files in that case.
|
||||
else if ! pathExists (path + "/.git") then
|
||||
throw "lib.fileset.gitTrackedWith: Expected the second argument (${toString path}) to point to a local working tree of a Git repository, but it's not."
|
||||
else
|
||||
_mirrorStorePath path fetchResult.outPath;
|
||||
_fromFetchGit
|
||||
"gitTrackedWith"
|
||||
"second argument"
|
||||
path
|
||||
# This is the only `fetchGit` parameter that makes sense in this context.
|
||||
# We can't just pass `submodules = recurseSubmodules` here because
|
||||
# this would fail for Nix versions that don't support `submodules`.
|
||||
(lib.optionalAttrs recurseSubmodules {
|
||||
submodules = true;
|
||||
});
|
||||
}
|
||||
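And the configurable variant, as a sketch (it requires a Nix version that supports `submodules` in `fetchGit`, which the code above checks via `_fetchGitSubmodulesMinver`):

```nix
# Same as gitTracked, but also pulls in files tracked by submodules.
let
  fs = (import <nixpkgs/lib>).fileset;
in
fs.toSource {
  root = ./.;
  fileset = fs.gitTrackedWith { recurseSubmodules = true; } ./.;
}
```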
|
|
73
third_party/nixpkgs/lib/fileset/internal.nix
vendored
|
@@ -10,6 +10,7 @@ let
|
|||
split
|
||||
trace
|
||||
typeOf
|
||||
fetchGit
|
||||
;
|
||||
|
||||
inherit (lib.attrsets)
|
||||
|
@ -40,6 +41,8 @@ let
|
|||
inherit (lib.path)
|
||||
append
|
||||
splitRoot
|
||||
hasStorePathPrefix
|
||||
splitStorePath
|
||||
;
|
||||
|
||||
inherit (lib.path.subpath)
|
||||
|
@ -52,8 +55,12 @@ let
|
|||
concatStringsSep
|
||||
substring
|
||||
stringLength
|
||||
hasSuffix
|
||||
;
|
||||
|
||||
inherit (lib.trivial)
|
||||
inPureEvalMode
|
||||
;
|
||||
in
|
||||
# Rare case of justified usage of rec:
|
||||
# - This file is internal, so the return value doesn't matter, no need to make things overridable
|
||||
|
@ -181,7 +188,8 @@ rec {
|
|||
${context} is of type ${typeOf value}, but it should be a file set or a path instead.''
|
||||
else if ! pathExists value then
|
||||
throw ''
|
||||
${context} (${toString value}) is a path that does not exist.''
|
||||
${context} (${toString value}) is a path that does not exist.
|
||||
To create a file set from a path that may not exist, use `lib.fileset.maybeMissing`.''
|
||||
else
|
||||
_singleton value;
|
||||
|
||||
|
@ -381,7 +389,7 @@ rec {
|
|||
|
||||
# Turn a fileset into a source filter function suitable for `builtins.path`
|
||||
# Only directories recursively containing at least one file are recursed into
|
||||
# Type: Path -> fileset -> (String -> String -> Bool)
|
||||
# Type: fileset -> (String -> String -> Bool)
|
||||
_toSourceFilter = fileset:
|
||||
let
|
||||
# Simplify the tree, necessary to make sure all empty directories are null
|
||||
|
@ -796,9 +804,11 @@ rec {
|
|||
if
|
||||
predicate {
|
||||
inherit name type;
|
||||
hasExt = ext: hasSuffix ".${ext}" name;
|
||||
|
||||
# To ensure forwards compatibility with more arguments being added in the future,
|
||||
# adding an attribute which can't be deconstructed :)
|
||||
"lib.fileset.fileFilter: The predicate function passed as the first argument must be able to handle extra attributes for future compatibility. If you're using `{ name, file }:`, use `{ name, file, ... }:` instead." = null;
|
||||
"lib.fileset.fileFilter: The predicate function passed as the first argument must be able to handle extra attributes for future compatibility. If you're using `{ name, file, hasExt }:`, use `{ name, file, hasExt, ... }:` instead." = null;
|
||||
}
|
||||
then
|
||||
type
|
||||
|
@ -848,4 +858,61 @@ rec {
|
|||
in
|
||||
_create localPath
|
||||
(recurse storePath);
|
||||
|
||||
# Create a file set from the files included in the result of a fetchGit call
|
||||
# Type: String -> String -> Path -> Attrs -> FileSet
|
||||
_fromFetchGit = function: argument: path: extraFetchGitAttrs:
|
||||
let
|
||||
# The code path for when isStorePath is true
|
||||
tryStorePath =
|
||||
if pathExists (path + "/.git") then
|
||||
# If there is a `.git` directory in the path,
|
||||
# it means that the path was imported unfiltered into the Nix store.
|
||||
# This function should throw in such a case, because
|
||||
# - `fetchGit` doesn't generally work with `.git` directories in store paths
|
||||
# - Importing the entire path could include files that aren't tracked by Git
|
||||
throw ''
|
||||
lib.fileset.${function}: The ${argument} (${toString path}) is a store path within a working tree of a Git repository.
|
||||
This indicates that a source directory was imported into the store using a method such as `import "''${./.}"` or `path:.`.
|
||||
This function currently does not support such a use case, since it currently relies on `builtins.fetchGit`.
|
||||
You could make this work by using a fetcher such as `fetchGit` instead of copying the whole repository.
|
||||
If you can't avoid copying the repo to the store, see https://github.com/NixOS/nix/issues/9292.''
|
||||
else
|
||||
# Otherwise we're going to assume that the path was a Git directory originally,
|
||||
# but it was fetched using a method that already removed files not tracked by Git,
|
||||
# such as `builtins.fetchGit`, `pkgs.fetchgit` or others.
|
||||
# So we can just import the path in its entirety.
|
||||
_singleton path;
|
||||
|
||||
# The code path for when isStorePath is false
|
||||
tryFetchGit =
|
||||
let
|
||||
# This imports the files unnecessarily, which currently can't be avoided
|
||||
# because `builtins.fetchGit` is the only function exposing which files are tracked by Git.
|
||||
# With the [lazy trees PR](https://github.com/NixOS/nix/pull/6530),
|
||||
# the unnecessary import could be avoided.
|
||||
# However a simpler alternative still would be [a builtins.gitLsFiles](https://github.com/NixOS/nix/issues/2944).
|
||||
fetchResult = fetchGit ({
|
||||
url = path;
|
||||
} // extraFetchGitAttrs);
|
||||
in
|
||||
# We can identify local working directories by checking for .git,
|
||||
# see https://git-scm.com/docs/gitrepository-layout#_description.
|
||||
# Note that `builtins.fetchGit` _does_ work for bare repositories (where there's no `.git`),
|
||||
# even though `git ls-files` wouldn't return any files in that case.
|
||||
if ! pathExists (path + "/.git") then
|
||||
throw "lib.fileset.${function}: Expected the ${argument} (${toString path}) to point to a local working tree of a Git repository, but it's not."
|
||||
else
|
||||
_mirrorStorePath path fetchResult.outPath;
|
||||
|
||||
in
|
||||
if ! isPath path then
|
||||
throw "lib.fileset.${function}: Expected the ${argument} to be a path, but it's a ${typeOf path} instead."
|
||||
else if pathType path != "directory" then
|
||||
throw "lib.fileset.${function}: Expected the ${argument} (${toString path}) to be a directory, but it's a file instead."
|
||||
else if hasStorePathPrefix path then
|
||||
tryStorePath
|
||||
else
|
||||
tryFetchGit;
|
||||
|
||||
}
|
||||
|
|
183
third_party/nixpkgs/lib/fileset/tests.sh
vendored
|
@@ -43,29 +43,17 @@ crudeUnquoteJSON() {
|
|||
cut -d \" -f2
|
||||
}
|
||||
|
||||
prefixExpression() {
|
||||
echo 'let
|
||||
lib =
|
||||
(import <nixpkgs/lib>)
|
||||
'
|
||||
if [[ "${1:-}" == "--simulate-pure-eval" ]]; then
|
||||
echo '
|
||||
.extend (final: prev: {
|
||||
trivial = prev.trivial // {
|
||||
inPureEvalMode = true;
|
||||
};
|
||||
})'
|
||||
fi
|
||||
echo '
|
||||
;
|
||||
prefixExpression='
|
||||
let
|
||||
lib = import <nixpkgs/lib>;
|
||||
internal = import <nixpkgs/lib/fileset/internal.nix> {
|
||||
inherit lib;
|
||||
};
|
||||
in
|
||||
with lib;
|
||||
with internal;
|
||||
with lib.fileset;'
|
||||
}
|
||||
with lib.fileset;
|
||||
'
|
||||
|
||||
# Check that two nix expression successfully evaluate to the same value.
|
||||
# The expressions have `lib.fileset` in scope.
|
||||
|
@ -74,7 +62,7 @@ expectEqual() {
|
|||
local actualExpr=$1
|
||||
local expectedExpr=$2
|
||||
if actualResult=$(nix-instantiate --eval --strict --show-trace 2>"$tmp"/actualStderr \
|
||||
--expr "$(prefixExpression) ($actualExpr)"); then
|
||||
--expr "$prefixExpression ($actualExpr)"); then
|
||||
actualExitCode=$?
|
||||
else
|
||||
actualExitCode=$?
|
||||
|
@ -82,7 +70,7 @@ expectEqual() {
|
|||
actualStderr=$(< "$tmp"/actualStderr)
|
||||
|
||||
if expectedResult=$(nix-instantiate --eval --strict --show-trace 2>"$tmp"/expectedStderr \
|
||||
--expr "$(prefixExpression) ($expectedExpr)"); then
|
||||
--expr "$prefixExpression ($expectedExpr)"); then
|
||||
expectedExitCode=$?
|
||||
else
|
||||
expectedExitCode=$?
|
||||
|
@ -110,7 +98,7 @@ expectEqual() {
|
|||
expectStorePath() {
|
||||
local expr=$1
|
||||
if ! result=$(nix-instantiate --eval --strict --json --read-write-mode --show-trace 2>"$tmp"/stderr \
|
||||
--expr "$(prefixExpression) ($expr)"); then
|
||||
--expr "$prefixExpression ($expr)"); then
|
||||
cat "$tmp/stderr" >&2
|
||||
die "$expr failed to evaluate, but it was expected to succeed"
|
||||
fi
|
||||
|
@ -123,16 +111,10 @@ expectStorePath() {
|
|||
# The expression has `lib.fileset` in scope.
|
||||
# Usage: expectFailure NIX REGEX
|
||||
expectFailure() {
|
||||
if [[ "$1" == "--simulate-pure-eval" ]]; then
|
||||
maybePure="--simulate-pure-eval"
|
||||
shift
|
||||
else
|
||||
maybePure=""
|
||||
fi
|
||||
local expr=$1
|
||||
local expectedErrorRegex=$2
|
||||
if result=$(nix-instantiate --eval --strict --read-write-mode --show-trace 2>"$tmp/stderr" \
|
||||
--expr "$(prefixExpression $maybePure) $expr"); then
|
||||
--expr "$prefixExpression $expr"); then
|
||||
die "$expr evaluated successfully to $result, but it was expected to fail"
|
||||
fi
|
||||
stderr=$(<"$tmp/stderr")
|
||||
|
@ -149,12 +131,12 @@ expectTrace() {
|
|||
local expectedTrace=$2
|
||||
|
||||
nix-instantiate --eval --show-trace >/dev/null 2>"$tmp"/stderrTrace \
|
||||
--expr "$(prefixExpression) trace ($expr)" || true
|
||||
--expr "$prefixExpression trace ($expr)" || true
|
||||
|
||||
actualTrace=$(sed -n 's/^trace: //p' "$tmp/stderrTrace")
|
||||
|
||||
nix-instantiate --eval --show-trace >/dev/null 2>"$tmp"/stderrTraceVal \
|
||||
--expr "$(prefixExpression) traceVal ($expr)" || true
|
||||
--expr "$prefixExpression traceVal ($expr)" || true
|
||||
|
||||
actualTraceVal=$(sed -n 's/^trace: //p' "$tmp/stderrTraceVal")
|
||||
|
||||
|
@ -413,7 +395,8 @@ expectFailure 'toSource { root = ./.; fileset = cleanSourceWith { src = ./.; };
|
|||
\s*Note that this only works for sources created from paths.'
|
||||
|
||||
# Path coercion errors for non-existent paths
|
||||
expectFailure 'toSource { root = ./.; fileset = ./a; }' 'lib.fileset.toSource: `fileset` \('"$work"'/a\) is a path that does not exist.'
|
||||
expectFailure 'toSource { root = ./.; fileset = ./a; }' 'lib.fileset.toSource: `fileset` \('"$work"'/a\) is a path that does not exist.
|
||||
\s*To create a file set from a path that may not exist, use `lib.fileset.maybeMissing`.'
|
||||
|
||||
# File sets cannot be evaluated directly
|
||||
expectFailure 'union ./. ./.' 'lib.fileset: Directly evaluating a file set is not supported.
|
||||
|
@ -846,7 +829,7 @@ checkFileset 'fileFilter (file: abort "this is not needed") ./.'
|
|||
|
||||
# The predicate must be able to handle extra attributes
|
||||
touch a
|
||||
expectFailure 'toSource { root = ./.; fileset = fileFilter ({ name, type }: true) ./.; }' 'called with unexpected argument '\''"lib.fileset.fileFilter: The predicate function passed as the first argument must be able to handle extra attributes for future compatibility. If you'\''re using `\{ name, file \}:`, use `\{ name, file, ... \}:` instead."'\'
|
||||
expectFailure 'toSource { root = ./.; fileset = fileFilter ({ name, type, hasExt }: true) ./.; }' 'called with unexpected argument '\''"lib.fileset.fileFilter: The predicate function passed as the first argument must be able to handle extra attributes for future compatibility. If you'\''re using `\{ name, file, hasExt \}:`, use `\{ name, file, hasExt, ... \}:` instead."'\'
|
||||
rm -rf -- *
|
||||
|
||||
# .name is the name, and it works correctly, even recursively
|
||||
|
@ -894,6 +877,39 @@ expectEqual \
|
|||
'toSource { root = ./.; fileset = union ./d/a ./d/b; }'
|
||||
rm -rf -- *
|
||||
|
||||
# Check that .hasExt checks for the file extension
|
||||
# The empty extension is the same as a file ending with a .
|
||||
tree=(
|
||||
[a]=0
|
||||
[a.]=1
|
||||
[a.b]=0
|
||||
[a.b.]=1
|
||||
[a.b.c]=0
|
||||
)
|
||||
checkFileset 'fileFilter (file: file.hasExt "") ./.'
|
||||
|
||||
# It can check for the last extension
|
||||
tree=(
|
||||
[a]=0
|
||||
[.a]=1
|
||||
[.a.]=0
|
||||
[.b.a]=1
|
||||
[.b.a.]=0
|
||||
)
|
||||
checkFileset 'fileFilter (file: file.hasExt "a") ./.'
|
||||
|
||||
# It can check for any extension
|
||||
tree=(
|
||||
[a.b.c.d]=1
|
||||
)
|
||||
checkFileset 'fileFilter (file:
|
||||
all file.hasExt [
|
||||
"b.c.d"
|
||||
"c.d"
|
||||
"d"
|
||||
]
|
||||
) ./.'
|
||||
|
||||
# It's lazy
|
||||
tree=(
|
||||
[b]=1
|
||||
|
@ -1064,13 +1080,18 @@ rm -rf -- *
|
|||
## lib.fileset.fromSource
|
||||
|
||||
# Check error messages
|
||||
expectFailure 'fromSource null' 'lib.fileset.fromSource: The source origin of the argument is of type null, but it should be a path instead.'
|
||||
|
||||
# String-like values are not supported
|
||||
expectFailure 'fromSource (lib.cleanSource "")' 'lib.fileset.fromSource: The source origin of the argument is a string-like value \(""\), but it should be a path instead.
|
||||
\s*Sources created from paths in strings cannot be turned into file sets, use `lib.sources` or derivations instead.'
|
||||
|
||||
# Wrong type
|
||||
expectFailure 'fromSource null' 'lib.fileset.fromSource: The source origin of the argument is of type null, but it should be a path instead.'
|
||||
expectFailure 'fromSource (lib.cleanSource null)' 'lib.fileset.fromSource: The source origin of the argument is of type null, but it should be a path instead.'
|
||||
|
||||
# fromSource on non-existent paths gives an error
|
||||
expectFailure 'fromSource ./a' 'lib.fileset.fromSource: The source origin \('"$work"'/a\) of the argument is a path that does not exist.'
|
||||
|
||||
# fromSource on a path works and is the same as coercing that path
|
||||
mkdir a
|
||||
touch a/b c
|
||||
|
@ -1278,6 +1299,12 @@ rm -rf -- *
|
|||
expectFailure 'gitTracked null' 'lib.fileset.gitTracked: Expected the argument to be a path, but it'\''s a null instead.'
|
||||
expectFailure 'gitTrackedWith {} null' 'lib.fileset.gitTrackedWith: Expected the second argument to be a path, but it'\''s a null instead.'
|
||||
|
||||
# The path must be a directory
|
||||
touch a
|
||||
expectFailure 'gitTracked ./a' 'lib.fileset.gitTracked: Expected the argument \('"$work"'/a\) to be a directory, but it'\''s a file instead'
|
||||
expectFailure 'gitTrackedWith {} ./a' 'lib.fileset.gitTrackedWith: Expected the second argument \('"$work"'/a\) to be a directory, but it'\''s a file instead'
|
||||
rm -rf -- *
|
||||
|
||||
# The path has to contain a .git directory
|
||||
expectFailure 'gitTracked ./.' 'lib.fileset.gitTracked: Expected the argument \('"$work"'\) to point to a local working tree of a Git repository, but it'\''s not.'
|
||||
expectFailure 'gitTrackedWith {} ./.' 'lib.fileset.gitTrackedWith: Expected the second argument \('"$work"'\) to point to a local working tree of a Git repository, but it'\''s not.'
|
||||
|
@ -1286,7 +1313,7 @@ expectFailure 'gitTrackedWith {} ./.' 'lib.fileset.gitTrackedWith: Expected the
|
|||
expectFailure 'gitTrackedWith { recurseSubmodules = null; } ./.' 'lib.fileset.gitTrackedWith: Expected the attribute `recurseSubmodules` of the first argument to be a boolean, but it'\''s a null instead.'
|
||||
|
||||
# recurseSubmodules = true is not supported on all Nix versions
|
||||
if [[ "$(nix-instantiate --eval --expr "$(prefixExpression) (versionAtLeast builtins.nixVersion _fetchGitSubmodulesMinver)")" == true ]]; then
|
||||
if [[ "$(nix-instantiate --eval --expr "$prefixExpression (versionAtLeast builtins.nixVersion _fetchGitSubmodulesMinver)")" == true ]]; then
|
||||
fetchGitSupportsSubmodules=1
|
||||
else
|
||||
fetchGitSupportsSubmodules=
|
||||
|
@ -1356,10 +1383,60 @@ createGitRepo() {
|
|||
git -C "$1" commit -q --allow-empty -m "Empty commit"
|
||||
}
|
||||
|
||||
# Check the error message for pure eval mode
|
||||
# Check that gitTracked[With] works as expected when evaluated out-of-tree
|
||||
|
||||
## First we create a Git repository (and a subrepository) with `default.nix` files referring to their local paths
|
||||
## Simulating how it would be used in the wild
|
||||
createGitRepo .
|
||||
expectFailure --simulate-pure-eval 'toSource { root = ./.; fileset = gitTracked ./.; }' 'lib.fileset.gitTracked: This function is currently not supported in pure evaluation mode, since it currently relies on `builtins.fetchGit`. See https://github.com/NixOS/nix/issues/9292.'
|
||||
expectFailure --simulate-pure-eval 'toSource { root = ./.; fileset = gitTrackedWith {} ./.; }' 'lib.fileset.gitTrackedWith: This function is currently not supported in pure evaluation mode, since it currently relies on `builtins.fetchGit`. See https://github.com/NixOS/nix/issues/9292.'
|
||||
echo '{ fs }: fs.toSource { root = ./.; fileset = fs.gitTracked ./.; }' > default.nix
|
||||
git add .
|
||||
|
||||
## We can evaluate it locally just fine, `fetchGit` is used underneath to filter git-tracked files
|
||||
expectEqual '(import ./. { fs = lib.fileset; }).outPath' '(builtins.fetchGit ./.).outPath'
|
||||
|
||||
## We can also evaluate when importing from fetched store paths
|
||||
storePath=$(expectStorePath 'builtins.fetchGit ./.')
|
||||
expectEqual '(import '"$storePath"' { fs = lib.fileset; }).outPath' \""$storePath"\"
|
||||
|
||||
## But it fails if the path is imported with a fetcher that doesn't remove .git (like just using "${./.}")
|
||||
expectFailure 'import "${./.}" { fs = lib.fileset; }' 'lib.fileset.gitTracked: The argument \(.*\) is a store path within a working tree of a Git repository.
|
||||
\s*This indicates that a source directory was imported into the store using a method such as `import "\$\{./.\}"` or `path:.`.
|
||||
\s*This function currently does not support such a use case, since it currently relies on `builtins.fetchGit`.
|
||||
\s*You could make this work by using a fetcher such as `fetchGit` instead of copying the whole repository.
|
||||
\s*If you can'\''t avoid copying the repo to the store, see https://github.com/NixOS/nix/issues/9292.'
|
||||
|
||||
## Even with submodules
|
||||
if [[ -n "$fetchGitSupportsSubmodules" ]]; then
|
||||
## Both the main repo with the submodule
|
||||
echo '{ fs }: fs.toSource { root = ./.; fileset = fs.gitTrackedWith { recurseSubmodules = true; } ./.; }' > default.nix
|
||||
createGitRepo sub
|
||||
git submodule add ./sub sub >/dev/null
|
||||
## But also the submodule itself
|
||||
echo '{ fs }: fs.toSource { root = ./.; fileset = fs.gitTracked ./.; }' > sub/default.nix
|
||||
git -C sub add .
|
||||
|
||||
## We can evaluate it locally just fine, `fetchGit` is used underneath to filter git-tracked files
|
||||
expectEqual '(import ./. { fs = lib.fileset; }).outPath' '(builtins.fetchGit { url = ./.; submodules = true; }).outPath'
|
||||
expectEqual '(import ./sub { fs = lib.fileset; }).outPath' '(builtins.fetchGit ./sub).outPath'
|
||||
|
||||
## We can also evaluate when importing from fetched store paths
|
||||
storePathWithSub=$(expectStorePath 'builtins.fetchGit { url = ./.; submodules = true; }')
|
||||
expectEqual '(import '"$storePathWithSub"' { fs = lib.fileset; }).outPath' \""$storePathWithSub"\"
|
||||
storePathSub=$(expectStorePath 'builtins.fetchGit ./sub')
|
||||
expectEqual '(import '"$storePathSub"' { fs = lib.fileset; }).outPath' \""$storePathSub"\"
|
||||
|
||||
## But it fails if the path is imported with a fetcher that doesn't remove .git (like just using "${./.}")
|
||||
expectFailure 'import "${./.}" { fs = lib.fileset; }' 'lib.fileset.gitTrackedWith: The second argument \(.*\) is a store path within a working tree of a Git repository.
|
||||
\s*This indicates that a source directory was imported into the store using a method such as `import "\$\{./.\}"` or `path:.`.
|
||||
\s*This function currently does not support such a use case, since it currently relies on `builtins.fetchGit`.
|
||||
\s*You could make this work by using a fetcher such as `fetchGit` instead of copying the whole repository.
|
||||
\s*If you can'\''t avoid copying the repo to the store, see https://github.com/NixOS/nix/issues/9292.'
|
||||
expectFailure 'import "${./.}/sub" { fs = lib.fileset; }' 'lib.fileset.gitTracked: The argument \(.*/sub\) is a store path within a working tree of a Git repository.
|
||||
\s*This indicates that a source directory was imported into the store using a method such as `import "\$\{./.\}"` or `path:.`.
|
||||
\s*This function currently does not support such a use case, since it currently relies on `builtins.fetchGit`.
|
||||
\s*You could make this work by using a fetcher such as `fetchGit` instead of copying the whole repository.
|
||||
\s*If you can'\''t avoid copying the repo to the store, see https://github.com/NixOS/nix/issues/9292.'
|
||||
fi
|
||||
rm -rf -- *
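For orientation, the `default.nix` written above mirrors how `gitTracked` is normally consumed in a project; a minimal sketch of that pattern (the package name and builder are assumptions, not part of the test suite):

```nix
# default.nix at the root of a Git working tree (hypothetical project)
{ pkgs ? import <nixpkgs> { } }:
pkgs.stdenv.mkDerivation {
  name = "my-project";
  # Only Git-tracked files end up in the build's source
  src = pkgs.lib.fileset.toSource {
    root = ./.;
    fileset = pkgs.lib.fileset.gitTracked ./.;
  };
}
```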
|
||||
|
||||
# Go through all stages of Git files
|
||||
|
@ -1445,6 +1522,40 @@ checkGitTracked
|
|||
|
||||
rm -rf -- *
|
||||
|
||||
## lib.fileset.maybeMissing
|
||||
|
||||
# Argument must be a path
|
||||
expectFailure 'maybeMissing "someString"' 'lib.fileset.maybeMissing: Argument \("someString"\) is a string-like value, but it should be a path instead.'
|
||||
expectFailure 'maybeMissing null' 'lib.fileset.maybeMissing: Argument is of type null, but it should be a path instead.'
|
||||
|
||||
tree=(
|
||||
)
|
||||
checkFileset 'maybeMissing ./a'
|
||||
checkFileset 'maybeMissing ./b'
|
||||
checkFileset 'maybeMissing ./b/c'
|
||||
|
||||
# Works on single files
|
||||
tree=(
|
||||
[a]=1
|
||||
[b/c]=0
|
||||
[b/d]=0
|
||||
)
|
||||
checkFileset 'maybeMissing ./a'
|
||||
tree=(
|
||||
[a]=0
|
||||
[b/c]=1
|
||||
[b/d]=0
|
||||
)
|
||||
checkFileset 'maybeMissing ./b/c'
|
||||
|
||||
# Works on directories
|
||||
tree=(
|
||||
[a]=0
|
||||
[b/c]=1
|
||||
[b/d]=1
|
||||
)
|
||||
checkFileset 'maybeMissing ./b'
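The trees above cover missing paths, single files and directories; a hedged sketch of how `maybeMissing` is meant to be combined with other file sets (the file names here are hypothetical):

```nix
# Include an optional file only when it exists; otherwise it contributes the empty file set
lib.fileset.toSource {
  root = ./.;
  fileset = lib.fileset.unions [
    ./src                                    # always present
    (lib.fileset.maybeMissing ./extra.conf)  # may be absent
  ];
}
```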
|
||||
|
||||
# TODO: Once we have combinators and a property testing library, derive property tests from https://en.wikipedia.org/wiki/Algebra_of_sets
|
||||
|
||||
echo >&2 tests ok
|
||||
|
|
160
third_party/nixpkgs/lib/filesystem.nix
vendored
|
@ -1,5 +1,7 @@
|
|||
# Functions for querying information about the filesystem
|
||||
# without copying any files to the Nix store.
|
||||
/*
|
||||
Functions for querying information about the filesystem
|
||||
without copying any files to the Nix store.
|
||||
*/
|
||||
{ lib }:
|
||||
|
||||
# Tested in lib/tests/filesystem.sh
|
||||
|
@ -7,11 +9,22 @@ let
|
|||
inherit (builtins)
|
||||
readDir
|
||||
pathExists
|
||||
toString
|
||||
;
|
||||
|
||||
inherit (lib.attrsets)
|
||||
mapAttrs'
|
||||
filterAttrs
|
||||
;
|
||||
|
||||
inherit (lib.filesystem)
|
||||
pathType
|
||||
;
|
||||
|
||||
inherit (lib.strings)
|
||||
hasSuffix
|
||||
removeSuffix
|
||||
;
|
||||
in
|
||||
|
||||
{
|
||||
|
@ -152,4 +165,147 @@ in
|
|||
dir + "/${name}"
|
||||
) (builtins.readDir dir));
|
||||
|
||||
/*
|
||||
Transform a directory tree containing package files suitable for
|
||||
`callPackage` into a matching nested attribute set of derivations.
|
||||
|
||||
For a directory tree like this:
|
||||
|
||||
```
|
||||
my-packages
|
||||
├── a.nix
|
||||
├── b.nix
|
||||
├── c
|
||||
│ ├── my-extra-feature.patch
|
||||
│ ├── package.nix
|
||||
│ └── support-definitions.nix
|
||||
└── my-namespace
|
||||
├── d.nix
|
||||
├── e.nix
|
||||
└── f
|
||||
└── package.nix
|
||||
```
|
||||
|
||||
`packagesFromDirectoryRecursive` will produce an attribute set like this:
|
||||
|
||||
```nix
|
||||
# packagesFromDirectoryRecursive {
|
||||
# callPackage = pkgs.callPackage;
|
||||
# directory = ./my-packages;
|
||||
# }
|
||||
{
|
||||
a = pkgs.callPackage ./my-packages/a.nix { };
|
||||
b = pkgs.callPackage ./my-packages/b.nix { };
|
||||
c = pkgs.callPackage ./my-packages/c/package.nix { };
|
||||
my-namespace = {
|
||||
d = pkgs.callPackage ./my-packages/my-namespace/d.nix { };
|
||||
e = pkgs.callPackage ./my-packages/my-namespace/e.nix { };
|
||||
f = pkgs.callPackage ./my-packages/my-namespace/f/package.nix { };
|
||||
};
|
||||
}
|
||||
```
|
||||
|
||||
In particular:
|
||||
- If the input directory contains a `package.nix` file, then
|
||||
`callPackage <directory>/package.nix { }` is returned.
|
||||
- Otherwise, the input directory's contents are listed and transformed into
|
||||
an attribute set.
|
||||
- If a file name has the `.nix` extension, it is turned into an attribute
|
||||
where:
|
||||
- The attribute name is the file name without the `.nix` extension
|
||||
- The attribute value is `callPackage <file path> { }`
|
||||
- Other files are ignored.
|
||||
- Directories are turned into an attribute where:
|
||||
- The attribute name is the name of the directory
|
||||
- The attribute value is the result of calling
|
||||
`packagesFromDirectoryRecursive { ... }` on the directory.
|
||||
|
||||
As a result, directories with no `.nix` files (including empty
|
||||
directories) will be transformed into empty attribute sets.
|
||||
|
||||
Example:
|
||||
packagesFromDirectoryRecursive {
|
||||
inherit (pkgs) callPackage;
|
||||
directory = ./my-packages;
|
||||
}
|
||||
=> { ... }
|
||||
|
||||
lib.makeScope pkgs.newScope (
|
||||
self: packagesFromDirectoryRecursive {
|
||||
callPackage = self.callPackage;
|
||||
directory = ./my-packages;
|
||||
}
|
||||
)
|
||||
=> { ... }
|
||||
|
||||
Type:
|
||||
packagesFromDirectoryRecursive :: AttrSet -> AttrSet
|
||||
*/
|
||||
packagesFromDirectoryRecursive =
|
||||
# Options.
|
||||
{
|
||||
/*
|
||||
`pkgs.callPackage`
|
||||
|
||||
Type:
|
||||
Path -> AttrSet -> a
|
||||
*/
|
||||
callPackage,
|
||||
/*
|
||||
The directory to read package files from
|
||||
|
||||
Type:
|
||||
Path
|
||||
*/
|
||||
directory,
|
||||
...
|
||||
}:
|
||||
let
|
||||
# Determine if a directory entry from `readDir` indicates a package or
|
||||
# directory of packages.
|
||||
directoryEntryIsPackage = basename: type:
|
||||
type == "directory" || hasSuffix ".nix" basename;
|
||||
|
||||
# List directory entries that indicate packages in the given `path`.
|
||||
packageDirectoryEntries = path:
|
||||
filterAttrs directoryEntryIsPackage (readDir path);
|
||||
|
||||
# Transform a directory entry (a `basename` and `type` pair) into a
|
||||
# package.
|
||||
directoryEntryToAttrPair = subdirectory: basename: type:
|
||||
let
|
||||
path = subdirectory + "/${basename}";
|
||||
in
|
||||
if type == "regular"
|
||||
then
|
||||
{
|
||||
name = removeSuffix ".nix" basename;
|
||||
value = callPackage path { };
|
||||
}
|
||||
else
|
||||
if type == "directory"
|
||||
then
|
||||
{
|
||||
name = basename;
|
||||
value = packagesFromDirectory path;
|
||||
}
|
||||
else
|
||||
throw
|
||||
''
|
||||
lib.filesystem.packagesFromDirectoryRecursive: Unsupported file type ${type} at path ${toString subdirectory}
|
||||
'';
|
||||
|
||||
# Transform a directory into a package (if there's a `package.nix`) or
|
||||
# set of packages (otherwise).
|
||||
packagesFromDirectory = path:
|
||||
let
|
||||
defaultPackagePath = path + "/package.nix";
|
||||
in
|
||||
if pathExists defaultPackagePath
|
||||
then callPackage defaultPackagePath { }
|
||||
else mapAttrs'
|
||||
(directoryEntryToAttrPair path)
|
||||
(packageDirectoryEntries path);
|
||||
in
|
||||
packagesFromDirectory directory;
|
||||
}
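As the docstring notes, a directory containing no `.nix` files becomes an empty attribute set; a small sketch of that edge case (the directory name is made up, and `pkgs` is assumed to be in scope):

```nix
lib.filesystem.packagesFromDirectoryRecursive {
  inherit (pkgs) callPackage;
  directory = ./only-docs;  # contains e.g. a README, but no *.nix files
}
# => { }
```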
|
||||
|
|
20
third_party/nixpkgs/lib/flake-version-info.nix
vendored
Normal file
|
@ -0,0 +1,20 @@
|
|||
# This function produces a lib overlay to be used by the nixpkgs
|
||||
# & nixpkgs/lib flakes to provide meaningful values for
|
||||
# `lib.trivial.version` et al..
|
||||
#
|
||||
# Internal and subject to change, don't use this anywhere else!
|
||||
# Instead, consider using a public interface, such as this flake here
|
||||
# in this directory, `lib/`, or use the nixpkgs flake, which applies
|
||||
# this logic for you in its `lib` output attribute.
|
||||
|
||||
self: # from the flake
|
||||
|
||||
finalLib: prevLib: # lib overlay
|
||||
|
||||
{
|
||||
trivial = prevLib.trivial // {
|
||||
versionSuffix =
|
||||
".${finalLib.substring 0 8 (self.lastModifiedDate or "19700101")}.${self.shortRev or "dirty"}";
|
||||
revisionWithDefault = default: self.rev or default;
|
||||
};
|
||||
}
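A minimal sketch of what applying this overlay yields, mirroring the `flake.nix` hunk that follows (the `self` values are made up for illustration):

```nix
let
  lib0 = import ./.;
  self = { lastModifiedDate = "20240101120000"; shortRev = "abcdefg"; };
  lib  = lib0.extend (import ./flake-version-info.nix self);
in
  lib.trivial.versionSuffix
# => ".20240101.abcdefg"
```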
|
7
third_party/nixpkgs/lib/flake.nix
vendored
|
@ -1,5 +1,10 @@
|
|||
{
|
||||
description = "Library of low-level helper functions for nix expressions.";
|
||||
|
||||
outputs = { self }: { lib = import ./.; };
|
||||
outputs = { self }:
|
||||
let
|
||||
lib0 = import ./.;
|
||||
in {
|
||||
lib = lib0.extend (import ./flake-version-info.nix self);
|
||||
};
|
||||
}
|
||||
|
|
2
third_party/nixpkgs/lib/generators.nix
vendored
|
@ -525,6 +525,8 @@ ${expr "" v}
|
|||
"(${v.expr})"
|
||||
else if v == { } then
|
||||
"{}"
|
||||
else if libAttr.isDerivation v then
|
||||
''"${toString v}"''
|
||||
else
|
||||
"{${introSpace}${concatItems (
|
||||
lib.attrsets.mapAttrsToList (key: value: "[${builtins.toJSON key}] = ${toLua innerArgs value}") v
|
||||
|
|
19
third_party/nixpkgs/lib/gvariant.nix
vendored
|
@ -1,16 +1,17 @@
|
|||
/*
|
||||
A partial and basic implementation of GVariant formatted strings.
|
||||
See [GVariant Format Strings](https://docs.gtk.org/glib/gvariant-format-strings.html) for details.
|
||||
|
||||
:::{.warning}
|
||||
This API is not considered fully stable and it might therefore
|
||||
change in backwards incompatible ways without prior notice.
|
||||
:::
|
||||
*/
|
||||
|
||||
# This file is based on https://github.com/nix-community/home-manager
|
||||
# Copyright (c) 2017-2022 Home Manager contributors
|
||||
#
|
||||
|
||||
|
||||
{ lib }:
|
||||
|
||||
/* A partial and basic implementation of GVariant formatted strings.
|
||||
See https://docs.gtk.org/glib/gvariant-format-strings.html for detauls.
|
||||
|
||||
Note, this API is not considered fully stable and it might therefore
|
||||
change in backwards incompatible ways without prior notice.
|
||||
*/
|
||||
let
|
||||
inherit (lib)
|
||||
concatMapStringsSep concatStrings escape head replaceStrings;
|
||||
|
|
22
third_party/nixpkgs/lib/licenses.nix
vendored
|
@ -38,6 +38,13 @@ in mkLicense lset) ({
|
|||
redistributable = false;
|
||||
};
|
||||
|
||||
activision = {
|
||||
# https://doomwiki.org/wiki/Raven_source_code_licensing
|
||||
fullName = "Activision EULA";
|
||||
url = "https://www.doomworld.com/eternity/activision_eula.txt";
|
||||
free = false;
|
||||
};
|
||||
|
||||
afl20 = {
|
||||
spdxId = "AFL-2.0";
|
||||
fullName = "Academic Free License v2.0";
|
||||
|
@ -97,6 +104,7 @@ in mkLicense lset) ({
|
|||
};
|
||||
|
||||
arphicpl = {
|
||||
spdxId = "Arphic-1999";
|
||||
fullName = "Arphic Public License";
|
||||
url = "https://www.freedesktop.org/wiki/Arphic_Public_License/";
|
||||
};
|
||||
|
@ -229,6 +237,7 @@ in mkLicense lset) ({
|
|||
};
|
||||
|
||||
cal10 = {
|
||||
spdxId = "CAL-1.0";
|
||||
fullName = "Cryptographic Autonomy License version 1.0 (CAL-1.0)";
|
||||
url = "https://opensource.org/licenses/CAL-1.0";
|
||||
};
|
||||
|
@ -422,6 +431,7 @@ in mkLicense lset) ({
|
|||
};
|
||||
|
||||
elastic20 = {
|
||||
spdxId = "Elastic-2.0";
|
||||
fullName = "Elastic License 2.0";
|
||||
url = "https://github.com/elastic/elasticsearch/blob/main/licenses/ELASTIC-LICENSE-2.0.txt";
|
||||
free = false;
|
||||
|
@ -591,6 +601,7 @@ in mkLicense lset) ({
|
|||
|
||||
# Intel's license, seems free
|
||||
iasl = {
|
||||
spdxId = "Intel-ACPI";
|
||||
fullName = "iASL";
|
||||
url = "https://old.calculate-linux.org/packages/licenses/iASL";
|
||||
};
|
||||
|
@ -602,7 +613,7 @@ in mkLicense lset) ({
|
|||
|
||||
imagemagick = {
|
||||
fullName = "ImageMagick License";
|
||||
spdxId = "imagemagick";
|
||||
spdxId = "ImageMagick";
|
||||
};
|
||||
|
||||
imlib2 = {
|
||||
|
@ -796,6 +807,7 @@ in mkLicense lset) ({
|
|||
};
|
||||
|
||||
miros = {
|
||||
spdxId = "MirOS";
|
||||
fullName = "MirOS License";
|
||||
url = "https://opensource.org/licenses/MirOS";
|
||||
};
|
||||
|
@ -1061,6 +1073,12 @@ in mkLicense lset) ({
|
|||
url = "https://github.com/thestk/stk/blob/master/LICENSE";
|
||||
};
|
||||
|
||||
sudo = {
|
||||
shortName = "sudo";
|
||||
fullName = "Sudo License (ISC-style)";
|
||||
url = "https://www.sudo.ws/about/license/";
|
||||
};
|
||||
|
||||
sustainableUse = {
|
||||
shortName = "sustainable";
|
||||
fullName = "Sustainable Use License";
|
||||
|
@ -1125,6 +1143,7 @@ in mkLicense lset) ({
|
|||
};
|
||||
|
||||
upl = {
|
||||
spdxId = "UPL-1.0";
|
||||
fullName = "Universal Permissive License";
|
||||
url = "https://oss.oracle.com/licenses/upl/";
|
||||
};
|
||||
|
@ -1181,6 +1200,7 @@ in mkLicense lset) ({
|
|||
};
|
||||
|
||||
xfig = {
|
||||
spdxId = "Xfig";
|
||||
fullName = "xfig";
|
||||
url = "https://mcj.sourceforge.net/authors.html#xfig";
|
||||
};
|
||||
|
|
48
third_party/nixpkgs/lib/lists.nix
vendored
|
@ -1,10 +1,10 @@
|
|||
# General list operations.
|
||||
|
||||
/* General list operations. */
|
||||
{ lib }:
|
||||
let
|
||||
inherit (lib.strings) toInt;
|
||||
inherit (lib.trivial) compare min id;
|
||||
inherit (lib.attrsets) mapAttrs;
|
||||
inherit (lib.lists) sort;
|
||||
in
|
||||
rec {
|
||||
|
||||
|
@ -592,9 +592,15 @@ rec {
|
|||
the second argument. The returned list is sorted in an increasing
|
||||
order. The implementation does a quick-sort.
|
||||
|
||||
See also [`sortOn`](#function-library-lib.lists.sortOn), which applies the
|
||||
default comparison on a function-derived property, and may be more efficient.
|
||||
|
||||
Example:
|
||||
sort (a: b: a < b) [ 5 3 7 ]
|
||||
sort (p: q: p < q) [ 5 3 7 ]
|
||||
=> [ 3 5 7 ]
|
||||
|
||||
Type:
|
||||
sort :: (a -> a -> Bool) -> [a] -> [a]
|
||||
*/
|
||||
sort = builtins.sort or (
|
||||
strictLess: list:
|
||||
|
@ -613,6 +619,42 @@ rec {
|
|||
if len < 2 then list
|
||||
else (sort strictLess pivot.left) ++ [ first ] ++ (sort strictLess pivot.right));
|
||||
|
||||
/*
|
||||
Sort a list based on the default comparison of a derived property `b`.
|
||||
|
||||
The items are returned in `b`-increasing order.
|
||||
|
||||
**Performance**:
|
||||
|
||||
The passed function `f` is only evaluated once per item,
|
||||
unlike an unprepared [`sort`](#function-library-lib.lists.sort) using
|
||||
`f p < f q`.
|
||||
|
||||
**Laws**:
|
||||
```nix
|
||||
sortOn f == sort (p: q: f p < f q)
|
||||
```
|
||||
|
||||
Example:
|
||||
sortOn stringLength [ "aa" "b" "cccc" ]
|
||||
=> [ "b" "aa" "cccc" ]
|
||||
|
||||
Type:
|
||||
sortOn :: (a -> b) -> [a] -> [a], for comparable b
|
||||
*/
|
||||
sortOn = f: list:
|
||||
let
|
||||
# Heterogeneous list as pair may be ugly, but requires minimal allocations.
|
||||
pairs = map (x: [(f x) x]) list;
|
||||
in
|
||||
map
|
||||
(x: builtins.elemAt x 1)
|
||||
(sort
|
||||
# Compare the first element of the pairs
|
||||
# Do not factor out the `<`, to avoid calls in hot code; duplicate instead.
|
||||
(a: b: head a < head b)
|
||||
pairs);
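A short usage sketch of `sortOn` on attribute sets, where the key function runs once per element (the attribute names are hypothetical):

```nix
lib.lists.sortOn (p: p.priority) [
  { name = "b"; priority = 2; }
  { name = "a"; priority = 1; }
]
# => [ { name = "a"; priority = 1; } { name = "b"; priority = 2; } ]
```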
|
||||
|
||||
/* Compare two lists element-by-element.
|
||||
|
||||
Example:
|
||||
|
|
52
third_party/nixpkgs/lib/meta.nix
vendored
|
@ -3,6 +3,11 @@
|
|||
|
||||
{ lib }:
|
||||
|
||||
let
|
||||
inherit (lib) matchAttrs any all isDerivation getBin assertMsg;
|
||||
inherit (builtins) isString match typeOf;
|
||||
|
||||
in
|
||||
rec {
|
||||
|
||||
|
||||
|
@ -83,14 +88,21 @@ rec {
|
|||
We can inject these into a pattern for the whole of a structured platform,
|
||||
and then match that.
|
||||
*/
|
||||
platformMatch = platform: elem: let
|
||||
pattern =
|
||||
if builtins.isString elem
|
||||
then { system = elem; }
|
||||
else if elem?parsed
|
||||
then elem
|
||||
else { parsed = elem; };
|
||||
in lib.matchAttrs pattern platform;
|
||||
platformMatch = platform: elem: (
|
||||
# Check with simple string comparison if elem was a string.
|
||||
#
|
||||
# The majority of comparisons done with this function will be against meta.platforms
|
||||
# which contains a simple platform string.
|
||||
#
|
||||
# Avoiding an attrset allocation results in significant performance gains (~2-30) across the board in OfBorg
|
||||
# because this is a hot path for nixpkgs.
|
||||
if isString elem then platform ? system && elem == platform.system
|
||||
else matchAttrs (
|
||||
# Normalize platform attrset.
|
||||
if elem ? parsed then elem
|
||||
else { parsed = elem; }
|
||||
) platform
|
||||
);
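Both code paths of the rewritten `platformMatch` in one sketch, matching the tests added to `lib/tests/misc.nix` later in this change:

```nix
# Fast path: elem is a plain system string
lib.meta.platformMatch { system = "x86_64-linux"; } "x86_64-linux"
# => true

# Structured path: elem is a parsed platform, normalized into { parsed = …; }
lib.meta.platformMatch (lib.systems.elaborate "x86_64-linux")
  (lib.systems.elaborate "x86_64-linux").parsed
# => true
```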
|
||||
|
||||
/* Check if a package is available on a given platform.
|
||||
|
||||
|
@ -102,8 +114,8 @@ rec {
|
|||
2. None of `meta.badPlatforms` pattern matches the given platform.
|
||||
*/
|
||||
availableOn = platform: pkg:
|
||||
((!pkg?meta.platforms) || lib.any (platformMatch platform) pkg.meta.platforms) &&
|
||||
lib.all (elem: !platformMatch platform elem) (pkg.meta.badPlatforms or []);
|
||||
((!pkg?meta.platforms) || any (platformMatch platform) pkg.meta.platforms) &&
|
||||
all (elem: !platformMatch platform elem) (pkg.meta.badPlatforms or []);
|
||||
|
||||
/* Get the corresponding attribute in lib.licenses
|
||||
from the SPDX ID.
|
||||
|
@ -142,16 +154,12 @@ rec {
|
|||
getExe pkgs.mustache-go
|
||||
=> "/nix/store/am9ml4f4ywvivxnkiaqwr0hyxka1xjsf-mustache-go-1.3.0/bin/mustache"
|
||||
*/
|
||||
getExe = x:
|
||||
let
|
||||
y = x.meta.mainProgram or (
|
||||
getExe = x: getExe' x (x.meta.mainProgram or (
|
||||
# This could be turned into an error when 23.05 is at end of life
|
||||
lib.warn "getExe: Package ${lib.strings.escapeNixIdentifier x.meta.name or x.pname or x.name} does not have the meta.mainProgram attribute. We'll assume that the main program has the same name for now, but this behavior is deprecated, because it leads to surprising errors when the assumption does not hold. If the package has a main program, please set `meta.mainProgram` in its definition to make this warning go away. Otherwise, if the package does not have a main program, or if you don't control its definition, use getExe' to specify the name to the program, such as lib.getExe' foo \"bar\"."
|
||||
lib.getName
|
||||
x
|
||||
);
|
||||
in
|
||||
getExe' x y;
|
||||
));
|
||||
|
||||
/* Get the path of a program of a derivation.
|
||||
|
||||
|
@ -163,11 +171,11 @@ rec {
|
|||
=> "/nix/store/5rs48jamq7k6sal98ymj9l4k2bnwq515-imagemagick-7.1.1-15/bin/convert"
|
||||
*/
|
||||
getExe' = x: y:
|
||||
assert lib.assertMsg (lib.isDerivation x)
|
||||
"lib.meta.getExe': The first argument is of type ${builtins.typeOf x}, but it should be a derivation instead.";
|
||||
assert lib.assertMsg (lib.isString y)
|
||||
"lib.meta.getExe': The second argument is of type ${builtins.typeOf y}, but it should be a string instead.";
|
||||
assert lib.assertMsg (builtins.length (lib.splitString "/" y) == 1)
|
||||
assert assertMsg (isDerivation x)
|
||||
"lib.meta.getExe': The first argument is of type ${typeOf x}, but it should be a derivation instead.";
|
||||
assert assertMsg (isString y)
|
||||
"lib.meta.getExe': The second argument is of type ${typeOf y}, but it should be a string instead.";
|
||||
assert assertMsg (match ".*\/.*" y == null)
|
||||
"lib.meta.getExe': The second argument \"${y}\" is a nested path with a \"/\" character, but it should just be the name of the executable instead.";
|
||||
"${lib.getBin x}/bin/${y}";
|
||||
"${getBin x}/bin/${y}";
|
||||
}
|
||||
|
|
2
third_party/nixpkgs/lib/modules.nix
vendored
|
@ -275,6 +275,8 @@ let
|
|||
"The option `${optText}' does not exist. Definition values:${defText}";
|
||||
in
|
||||
if attrNames options == [ "_module" ]
|
||||
# No options were declared at all (`_module` is built in)
|
||||
# but we do have unmatched definitions, and no freeformType (earlier conditions)
|
||||
then
|
||||
let
|
||||
optionName = showOption prefix;
|
||||
|
|
2
third_party/nixpkgs/lib/options.nix
vendored
|
@ -1,4 +1,4 @@
|
|||
# Nixpkgs/NixOS option handling.
|
||||
/* Nixpkgs/NixOS option handling. */
|
||||
{ lib }:
|
||||
|
||||
let
|
||||
|
|
84
third_party/nixpkgs/lib/path/default.nix
vendored
|
@ -1,4 +1,5 @@
|
|||
# Functions for working with paths, see ./path.md
|
||||
/* Functions for working with path values. */
|
||||
# See ./README.md for internal docs
|
||||
{ lib }:
|
||||
let
|
||||
|
||||
|
@ -8,6 +9,7 @@ let
|
|||
split
|
||||
match
|
||||
typeOf
|
||||
storeDir
|
||||
;
|
||||
|
||||
inherit (lib.lists)
|
||||
|
@ -23,6 +25,8 @@ let
|
|||
drop
|
||||
;
|
||||
|
||||
listHasPrefix = lib.lists.hasPrefix;
|
||||
|
||||
inherit (lib.strings)
|
||||
concatStringsSep
|
||||
substring
|
||||
|
@ -119,6 +123,28 @@ let
|
|||
else recurse ([ (baseNameOf base) ] ++ components) (dirOf base);
|
||||
in recurse [];
|
||||
|
||||
# The components of the store directory, typically [ "nix" "store" ]
|
||||
storeDirComponents = splitRelPath ("./" + storeDir);
|
||||
# The number of store directory components, typically 2
|
||||
storeDirLength = length storeDirComponents;
|
||||
|
||||
# Type: [ String ] -> Bool
|
||||
#
|
||||
# Whether path components have a store path as a prefix, according to
|
||||
# https://nixos.org/manual/nix/stable/store/store-path.html#store-path.
|
||||
componentsHaveStorePathPrefix = components:
|
||||
# path starts with the store directory (typically /nix/store)
|
||||
listHasPrefix storeDirComponents components
|
||||
# is not the store directory itself, meaning there's at least one extra component
|
||||
&& storeDirComponents != components
|
||||
# and the first component after the store directory has the expected format.
|
||||
# NOTE: We could change the hash regex to be [0-9a-df-np-sv-z],
|
||||
# because these are the actual ASCII characters used by Nix's base32 implementation,
|
||||
# but this is not fully specified, so let's not tie this too much to the currently implemented concept of store paths.
|
||||
# Similar reasoning applies to the validity of the name part.
|
||||
# We care more about discerning store path-ness on realistic values. Making it airtight would be fragile and slow.
|
||||
&& match ".{32}-.+" (elemAt components storeDirLength) != null;
|
||||
|
||||
in /* No rec! Add dependencies on this file at the top. */ {
|
||||
|
||||
/*
|
||||
|
@ -320,6 +346,62 @@ in /* No rec! Add dependencies on this file at the top. */ {
|
|||
subpath = joinRelPath deconstructed.components;
|
||||
};
|
||||
|
||||
/*
|
||||
Whether a [path](https://nixos.org/manual/nix/stable/language/values.html#type-path)
|
||||
has a [store path](https://nixos.org/manual/nix/stable/store/store-path.html#store-path)
|
||||
as a prefix.
|
||||
|
||||
:::{.note}
|
||||
As with all functions of this `lib.path` library, it does not work on paths in strings,
|
||||
which is how you'd typically get store paths.
|
||||
|
||||
Instead, this function only handles path values themselves,
|
||||
which occur when Nix files in the store use relative path expressions.
|
||||
:::
|
||||
|
||||
Type:
|
||||
hasStorePathPrefix :: Path -> Bool
|
||||
|
||||
Example:
|
||||
# Subpaths of derivation outputs have a store path as a prefix
|
||||
hasStorePathPrefix /nix/store/nvl9ic0pj1fpyln3zaqrf4cclbqdfn1j-foo/bar/baz
|
||||
=> true
|
||||
|
||||
# The store directory itself is not a store path
|
||||
hasStorePathPrefix /nix/store
|
||||
=> false
|
||||
|
||||
# Derivation outputs are store paths themselves
|
||||
hasStorePathPrefix /nix/store/nvl9ic0pj1fpyln3zaqrf4cclbqdfn1j-foo
|
||||
=> true
|
||||
|
||||
# Paths outside the Nix store don't have a store path prefix
|
||||
hasStorePathPrefix /home/user
|
||||
=> false
|
||||
|
||||
# Not all paths under the Nix store are store paths
|
||||
hasStorePathPrefix /nix/store/.links/10gg8k3rmbw8p7gszarbk7qyd9jwxhcfq9i6s5i0qikx8alkk4hq
|
||||
=> false
|
||||
|
||||
# Store derivations are also store paths themselves
|
||||
hasStorePathPrefix /nix/store/nvl9ic0pj1fpyln3zaqrf4cclbqdfn1j-foo.drv
|
||||
=> true
|
||||
*/
|
||||
hasStorePathPrefix = path:
|
||||
let
|
||||
deconstructed = deconstructPath path;
|
||||
in
|
||||
assert assertMsg
|
||||
(isPath path)
|
||||
"lib.path.hasStorePathPrefix: Argument is of type ${typeOf path}, but a path was expected";
|
||||
assert assertMsg
|
||||
# This function likely breaks or needs adjustment if used with other filesystem roots, if they ever get implemented.
|
||||
# Let's try to error nicely in such a case, though it's unclear how an implementation would work even and whether this could be detected.
|
||||
# See also https://github.com/NixOS/nix/pull/6530#discussion_r1422843117
|
||||
(deconstructed.root == /. && toString deconstructed.root == "/")
|
||||
"lib.path.hasStorePathPrefix: Argument has a filesystem root (${toString deconstructed.root}) that's not /, which is currently not supported.";
|
||||
componentsHaveStorePathPrefix deconstructed.components;
|
||||
|
||||
/*
|
||||
Whether a value is a valid subpath string.
|
||||
|
||||
|
|
|
@ -6,16 +6,19 @@
|
|||
overlays = [];
|
||||
inherit system;
|
||||
},
|
||||
nixVersions ? import ../../tests/nix-for-tests.nix { inherit pkgs; },
|
||||
libpath ? ../..,
|
||||
# Random seed
|
||||
seed ? null,
|
||||
}:
|
||||
|
||||
pkgs.runCommand "lib-path-tests" {
|
||||
nativeBuildInputs = with pkgs; [
|
||||
nix
|
||||
nativeBuildInputs = [
|
||||
nixVersions.stable
|
||||
] ++ (with pkgs; [
|
||||
jq
|
||||
bc
|
||||
];
|
||||
]);
|
||||
} ''
|
||||
# Needed to make Nix evaluation work
|
||||
export TEST_ROOT=$(pwd)/test-tmp
|
||||
|
|
30
third_party/nixpkgs/lib/path/tests/unit.nix
vendored
|
@ -3,7 +3,10 @@
|
|||
{ libpath }:
|
||||
let
|
||||
lib = import libpath;
|
||||
inherit (lib.path) hasPrefix removePrefix append splitRoot subpath;
|
||||
inherit (lib.path) hasPrefix removePrefix append splitRoot hasStorePathPrefix subpath;
|
||||
|
||||
# This is not allowed generally, but we're in the tests here, so we'll allow ourselves.
|
||||
storeDirPath = /. + builtins.storeDir;
|
||||
|
||||
cases = lib.runTests {
|
||||
# Test examples from the lib.path.append documentation
|
||||
|
@ -91,6 +94,31 @@ let
|
|||
expected = false;
|
||||
};
|
||||
|
||||
testHasStorePathPrefixExample1 = {
|
||||
expr = hasStorePathPrefix (storeDirPath + "/nvl9ic0pj1fpyln3zaqrf4cclbqdfn1j-foo/bar/baz");
|
||||
expected = true;
|
||||
};
|
||||
testHasStorePathPrefixExample2 = {
|
||||
expr = hasStorePathPrefix storeDirPath;
|
||||
expected = false;
|
||||
};
|
||||
testHasStorePathPrefixExample3 = {
|
||||
expr = hasStorePathPrefix (storeDirPath + "/nvl9ic0pj1fpyln3zaqrf4cclbqdfn1j-foo");
|
||||
expected = true;
|
||||
};
|
||||
testHasStorePathPrefixExample4 = {
|
||||
expr = hasStorePathPrefix /home/user;
|
||||
expected = false;
|
||||
};
|
||||
testHasStorePathPrefixExample5 = {
|
||||
expr = hasStorePathPrefix (storeDirPath + "/.links/10gg8k3rmbw8p7gszarbk7qyd9jwxhcfq9i6s5i0qikx8alkk4hq");
|
||||
expected = false;
|
||||
};
|
||||
testHasStorePathPrefixExample6 = {
|
||||
expr = hasStorePathPrefix (storeDirPath + "/nvl9ic0pj1fpyln3zaqrf4cclbqdfn1j-foo.drv");
|
||||
expected = true;
|
||||
};
|
||||
|
||||
# Test examples from the lib.path.subpath.isValid documentation
|
||||
testSubpathIsValidExample1 = {
|
||||
expr = subpath.isValid null;
|
||||
|
|
2
third_party/nixpkgs/lib/sources.nix
vendored
|
@ -1,4 +1,4 @@
|
|||
# Functions for copying sources to the Nix store.
|
||||
/* Functions for copying sources to the Nix store. */
|
||||
{ lib }:
|
||||
|
||||
# Tested in lib/tests/sources.sh
|
||||
|
|
31
third_party/nixpkgs/lib/strings.nix
vendored
|
@ -715,10 +715,10 @@ rec {
|
|||
getName pkgs.youtube-dl
|
||||
=> "youtube-dl"
|
||||
*/
|
||||
getName = x:
|
||||
let
|
||||
getName = let
|
||||
parse = drv: (parseDrvName drv).name;
|
||||
in if isString x
|
||||
in x:
|
||||
if isString x
|
||||
then parse x
|
||||
else x.pname or (parse x.name);
|
||||
|
||||
|
@ -732,10 +732,10 @@ rec {
|
|||
getVersion pkgs.youtube-dl
|
||||
=> "2016.01.01"
|
||||
*/
|
||||
getVersion = x:
|
||||
let
|
||||
getVersion = let
|
||||
parse = drv: (parseDrvName drv).version;
|
||||
in if isString x
|
||||
in x:
|
||||
if isString x
|
||||
then parse x
|
||||
else x.version or (parse x.name);
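`getName`, `getVersion` and `cmakeOptionType` below are all rewritten in the same shape, presumably so the helper inside the `let` is built once rather than on every call; a self-contained sketch of the pattern:

```nix
let
  # `parse` is allocated once and shared by every call to getNameFast
  getNameFast = let parse = drv: (builtins.parseDrvName drv).name; in x: parse x;
in
  getNameFast "hello-2.12"
# => "hello"
```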
|
||||
|
||||
|
@ -771,12 +771,13 @@ rec {
|
|||
cmakeOptionType "string" "ENGINE" "sdl2"
|
||||
=> "-DENGINE:STRING=sdl2"
|
||||
*/
|
||||
cmakeOptionType = type: feature: value:
|
||||
assert (lib.elem (lib.toUpper type)
|
||||
[ "BOOL" "FILEPATH" "PATH" "STRING" "INTERNAL" ]);
|
||||
assert (lib.isString feature);
|
||||
assert (lib.isString value);
|
||||
"-D${feature}:${lib.toUpper type}=${value}";
|
||||
cmakeOptionType = let
|
||||
types = [ "BOOL" "FILEPATH" "PATH" "STRING" "INTERNAL" ];
|
||||
in type: feature: value:
|
||||
assert (elem (toUpper type) types);
|
||||
assert (isString feature);
|
||||
assert (isString value);
|
||||
"-D${feature}:${toUpper type}=${value}";
|
||||
|
||||
/* Create a -D<condition>={TRUE,FALSE} string that can be passed to typical
|
||||
CMake invocations.
|
||||
|
@ -977,9 +978,11 @@ rec {
|
|||
Many types of value are coercible to string this way, including int, float,
|
||||
null, bool, list of similarly coercible values.
|
||||
*/
|
||||
isConvertibleWithToString = x:
|
||||
isConvertibleWithToString = let
|
||||
types = [ "null" "int" "float" "bool" ];
|
||||
in x:
|
||||
isStringLike x ||
|
||||
elem (typeOf x) [ "null" "int" "float" "bool" ] ||
|
||||
elem (typeOf x) types ||
|
||||
(isList x && lib.all isConvertibleWithToString x);
|
||||
|
||||
/* Check whether a value can be coerced to a string.
|
||||
|
|
192
third_party/nixpkgs/lib/systems/default.nix
vendored
|
@ -45,7 +45,7 @@ rec {
|
|||
else args';
|
||||
|
||||
# TODO: deprecate args.rustc in favour of args.rust after 23.05 is EOL.
|
||||
rust = assert !(args ? rust && args ? rustc); args.rust or args.rustc or {};
|
||||
rust = args.rust or args.rustc or {};
|
||||
|
||||
final = {
|
||||
# Prefer to parse `config` as it is strictly more informative.
|
||||
|
@ -89,6 +89,13 @@ rec {
|
|||
# is why we use the more obscure "bfd" and not "binutils" for this
|
||||
# choice.
|
||||
else "bfd";
|
||||
# The standard lib directory name that non-nixpkgs binaries distributed
|
||||
# for this platform normally assume.
|
||||
libDir = if final.isLinux then
|
||||
if final.isx86_64 || final.isMips64 || final.isPower64
|
||||
then "lib64"
|
||||
else "lib"
|
||||
else null;
|
||||
extensions = lib.optionalAttrs final.hasSharedLibraries {
|
||||
sharedLibrary =
|
||||
if final.isDarwin then ".dylib"
|
||||
|
@ -169,96 +176,6 @@ rec {
|
|||
# TODO: remove after 23.05 is EOL, with an error pointing to the rust.* attrs.
|
||||
rustc = args.rustc or {};
|
||||
|
||||
rust = rust // {
|
||||
# Once args.rustc.platform.target-family is deprecated and
|
||||
# removed, there will no longer be any need to modify any
|
||||
# values from args.rust.platform, so we can drop all the
|
||||
# "args ? rust" etc. checks, and merge args.rust.platform in
|
||||
# /after/.
|
||||
platform = rust.platform or {} // {
|
||||
# https://doc.rust-lang.org/reference/conditional-compilation.html#target_arch
|
||||
arch =
|
||||
/**/ if rust ? platform then rust.platform.arch
|
||||
else if final.isAarch32 then "arm"
|
||||
else if final.isMips64 then "mips64" # never add "el" suffix
|
||||
else if final.isPower64 then "powerpc64" # never add "le" suffix
|
||||
else final.parsed.cpu.name;
|
||||
|
||||
# https://doc.rust-lang.org/reference/conditional-compilation.html#target_os
|
||||
os =
|
||||
/**/ if rust ? platform then rust.platform.os or "none"
|
||||
else if final.isDarwin then "macos"
|
||||
else final.parsed.kernel.name;
|
||||
|
||||
# https://doc.rust-lang.org/reference/conditional-compilation.html#target_family
|
||||
target-family =
|
||||
/**/ if args ? rust.platform.target-family then args.rust.platform.target-family
|
||||
else if args ? rustc.platform.target-family
|
||||
then
|
||||
(
|
||||
# Since https://github.com/rust-lang/rust/pull/84072
|
||||
# `target-family` is a list instead of single value.
|
||||
let
|
||||
f = args.rustc.platform.target-family;
|
||||
in
|
||||
if builtins.isList f then f else [ f ]
|
||||
)
|
||||
else lib.optional final.isUnix "unix"
|
||||
++ lib.optional final.isWindows "windows";
|
||||
|
||||
# https://doc.rust-lang.org/reference/conditional-compilation.html#target_vendor
|
||||
vendor = let
|
||||
inherit (final.parsed) vendor;
|
||||
in rust.platform.vendor or {
|
||||
"w64" = "pc";
|
||||
}.${vendor.name} or vendor.name;
|
||||
};
|
||||
|
||||
# The name of the rust target, even if it is custom. Adjustments are
|
||||
# because rust has slightly different naming conventions than we do.
|
||||
rustcTarget = let
|
||||
inherit (final.parsed) cpu kernel abi;
|
||||
cpu_ = rust.platform.arch or {
|
||||
"armv7a" = "armv7";
|
||||
"armv7l" = "armv7";
|
||||
"armv6l" = "arm";
|
||||
"armv5tel" = "armv5te";
|
||||
"riscv64" = "riscv64gc";
|
||||
}.${cpu.name} or cpu.name;
|
||||
vendor_ = final.rust.platform.vendor;
|
||||
in rust.config
|
||||
or "${cpu_}-${vendor_}-${kernel.name}${lib.optionalString (abi.name != "unknown") "-${abi.name}"}";
|
||||
|
||||
# The name of the rust target if it is standard, or the json file
|
||||
# containing the custom target spec.
|
||||
rustcTargetSpec =
|
||||
/**/ if rust ? platform
|
||||
then builtins.toFile (final.rust.rustcTarget + ".json") (builtins.toJSON rust.platform)
|
||||
else final.rust.rustcTarget;
|
||||
|
||||
# The name of the rust target if it is standard, or the
|
||||
# basename of the file containing the custom target spec,
|
||||
# without the .json extension.
|
||||
#
|
||||
# This is the name used by Cargo for target subdirectories.
|
||||
cargoShortTarget =
|
||||
lib.removeSuffix ".json" (baseNameOf "${final.rust.rustcTargetSpec}");
|
||||
|
||||
# When used as part of an environment variable name, triples are
|
||||
# uppercased and have all hyphens replaced by underscores:
|
||||
#
|
||||
# https://github.com/rust-lang/cargo/pull/9169
|
||||
# https://github.com/rust-lang/cargo/issues/8285#issuecomment-634202431
|
||||
cargoEnvVarTarget =
|
||||
lib.strings.replaceStrings ["-"] ["_"]
|
||||
(lib.strings.toUpper final.rust.cargoShortTarget);
|
||||
|
||||
# True if the target is no_std
|
||||
# https://github.com/rust-lang/rust/blob/2e44c17c12cec45b6a682b1e53a04ac5b5fcc9d2/src/bootstrap/config.rs#L415-L421
|
||||
isNoStdTarget =
|
||||
builtins.any (t: lib.hasInfix t final.rust.rustcTarget) ["-none" "nvptx" "switch" "-uefi"];
|
||||
};
|
||||
|
||||
linuxArch =
|
||||
if final.isAarch32 then "arm"
|
||||
else if final.isAarch64 then "arm64"
|
||||
|
@ -356,7 +273,98 @@ rec {
|
|||
|
||||
}) // mapAttrs (n: v: v final.parsed) inspect.predicates
|
||||
// mapAttrs (n: v: v final.gcc.arch or "default") architectures.predicates
|
||||
// args;
|
||||
// args // {
|
||||
rust = rust // {
|
||||
# Once args.rustc.platform.target-family is deprecated and
|
||||
# removed, there will no longer be any need to modify any
|
||||
# values from args.rust.platform, so we can drop all the
|
||||
# "args ? rust" etc. checks, and merge args.rust.platform in
|
||||
# /after/.
|
||||
platform = rust.platform or {} // {
|
||||
# https://doc.rust-lang.org/reference/conditional-compilation.html#target_arch
|
||||
arch =
|
||||
/**/ if rust ? platform then rust.platform.arch
|
||||
else if final.isAarch32 then "arm"
|
||||
else if final.isMips64 then "mips64" # never add "el" suffix
|
||||
else if final.isPower64 then "powerpc64" # never add "le" suffix
|
||||
else final.parsed.cpu.name;
|
||||
|
||||
# https://doc.rust-lang.org/reference/conditional-compilation.html#target_os
|
||||
os =
|
||||
/**/ if rust ? platform then rust.platform.os or "none"
|
||||
else if final.isDarwin then "macos"
|
||||
else final.parsed.kernel.name;
|
||||
|
||||
# https://doc.rust-lang.org/reference/conditional-compilation.html#target_family
|
||||
target-family =
|
||||
/**/ if args ? rust.platform.target-family then args.rust.platform.target-family
|
||||
else if args ? rustc.platform.target-family
|
||||
then
|
||||
(
|
||||
# Since https://github.com/rust-lang/rust/pull/84072
|
||||
# `target-family` is a list instead of single value.
|
||||
let
|
||||
f = args.rustc.platform.target-family;
|
||||
in
|
||||
if builtins.isList f then f else [ f ]
|
||||
)
|
||||
else lib.optional final.isUnix "unix"
|
||||
++ lib.optional final.isWindows "windows";
|
||||
|
||||
# https://doc.rust-lang.org/reference/conditional-compilation.html#target_vendor
|
||||
vendor = let
|
||||
inherit (final.parsed) vendor;
|
||||
in rust.platform.vendor or {
|
||||
"w64" = "pc";
|
||||
}.${vendor.name} or vendor.name;
|
||||
};
|
||||
|
||||
# The name of the rust target, even if it is custom. Adjustments are
|
||||
# because rust has slightly different naming conventions than we do.
|
||||
rustcTarget = let
|
||||
inherit (final.parsed) cpu kernel abi;
|
||||
cpu_ = rust.platform.arch or {
|
||||
"armv7a" = "armv7";
|
||||
"armv7l" = "armv7";
|
||||
"armv6l" = "arm";
|
||||
"armv5tel" = "armv5te";
|
||||
"riscv64" = "riscv64gc";
|
||||
}.${cpu.name} or cpu.name;
|
||||
vendor_ = final.rust.platform.vendor;
|
||||
# TODO: deprecate args.rustc in favour of args.rust after 23.05 is EOL.
|
||||
in args.rust.rustcTarget or args.rustc.config
|
||||
or "${cpu_}-${vendor_}-${kernel.name}${lib.optionalString (abi.name != "unknown") "-${abi.name}"}";
|
||||
|
||||
# The name of the rust target if it is standard, or the json file
|
||||
# containing the custom target spec.
|
||||
rustcTargetSpec = rust.rustcTargetSpec or (
|
||||
/**/ if rust ? platform
|
||||
then builtins.toFile (final.rust.rustcTarget + ".json") (builtins.toJSON rust.platform)
|
||||
else final.rust.rustcTarget);
|
||||
|
||||
# The name of the rust target if it is standard, or the
|
||||
# basename of the file containing the custom target spec,
|
||||
# without the .json extension.
|
||||
#
|
||||
# This is the name used by Cargo for target subdirectories.
|
||||
cargoShortTarget =
|
||||
lib.removeSuffix ".json" (baseNameOf "${final.rust.rustcTargetSpec}");
|
||||
|
||||
# When used as part of an environment variable name, triples are
|
||||
# uppercased and have all hyphens replaced by underscores:
|
||||
#
|
||||
# https://github.com/rust-lang/cargo/pull/9169
|
||||
# https://github.com/rust-lang/cargo/issues/8285#issuecomment-634202431
|
||||
cargoEnvVarTarget =
|
||||
lib.strings.replaceStrings ["-"] ["_"]
|
||||
(lib.strings.toUpper final.rust.cargoShortTarget);
|
||||
|
||||
# True if the target is no_std
|
||||
# https://github.com/rust-lang/rust/blob/2e44c17c12cec45b6a682b1e53a04ac5b5fcc9d2/src/bootstrap/config.rs#L415-L421
|
||||
isNoStdTarget =
|
||||
builtins.any (t: lib.hasInfix t final.rust.rustcTarget) ["-none" "nvptx" "switch" "-uefi"];
|
||||
};
|
||||
};
|
||||
in assert final.useAndroidPrebuilt -> final.isAndroid;
|
||||
assert lib.foldl
|
||||
(pass: { assertion, message }:
|
||||
|
|
152
third_party/nixpkgs/lib/tests/misc.nix
vendored
|
@ -650,6 +650,28 @@ runTests {
|
|||
expected = [2 30 40 42];
|
||||
};
|
||||
|
||||
testSortOn = {
|
||||
expr = sortOn stringLength [ "aa" "b" "cccc" ];
|
||||
expected = [ "b" "aa" "cccc" ];
|
||||
};
|
||||
|
||||
testSortOnEmpty = {
|
||||
expr = sortOn (throw "nope") [ ];
|
||||
expected = [ ];
|
||||
};
|
||||
|
||||
testSortOnIncomparable = {
|
||||
expr =
|
||||
map
|
||||
(x: x.f x.ok)
|
||||
(sortOn (x: x.ok) [
|
||||
{ ok = 1; f = x: x; }
|
||||
{ ok = 3; f = x: x + 3; }
|
||||
{ ok = 2; f = x: x; }
|
||||
]);
|
||||
expected = [ 1 2 6 ];
|
||||
};
|
||||
|
||||
testReplicate = {
|
||||
expr = replicate 3 "a";
|
||||
expected = ["a" "a" "a"];
|
||||
|
@ -675,6 +697,51 @@ runTests {
|
|||
expected = false;
|
||||
};
|
||||
|
||||
testHasAttrByPathNonStrict = {
|
||||
expr = hasAttrByPath [] (throw "do not use");
|
||||
expected = true;
|
||||
};
|
||||
|
||||
testLongestValidPathPrefix_empty_empty = {
|
||||
expr = attrsets.longestValidPathPrefix [ ] { };
|
||||
expected = [ ];
|
||||
};
|
||||
|
||||
testLongestValidPathPrefix_empty_nonStrict = {
|
||||
expr = attrsets.longestValidPathPrefix [ ] (throw "do not use");
|
||||
expected = [ ];
|
||||
};
|
||||
|
||||
testLongestValidPathPrefix_zero = {
|
||||
expr = attrsets.longestValidPathPrefix [ "a" (throw "do not use") ] { d = null; };
|
||||
expected = [ ];
|
||||
};
|
||||
|
||||
testLongestValidPathPrefix_zero_b = {
|
||||
expr = attrsets.longestValidPathPrefix [ "z" "z" ] "remarkably harmonious";
|
||||
expected = [ ];
|
||||
};
|
||||
|
||||
testLongestValidPathPrefix_one = {
|
||||
expr = attrsets.longestValidPathPrefix [ "a" "b" "c" ] { a = null; };
|
||||
expected = [ "a" ];
|
||||
};
|
||||
|
||||
testLongestValidPathPrefix_two = {
|
||||
expr = attrsets.longestValidPathPrefix [ "a" "b" "c" ] { a.b = null; };
|
||||
expected = [ "a" "b" ];
|
||||
};
|
||||
|
||||
testLongestValidPathPrefix_three = {
|
||||
expr = attrsets.longestValidPathPrefix [ "a" "b" "c" ] { a.b.c = null; };
|
||||
expected = [ "a" "b" "c" ];
|
||||
};
|
||||
|
||||
testLongestValidPathPrefix_three_extra = {
|
||||
expr = attrsets.longestValidPathPrefix [ "a" "b" "c" ] { a.b.c.d = throw "nope"; };
|
||||
expected = [ "a" "b" "c" ];
|
||||
};
|
||||
|
||||
testFindFirstIndexExample1 = {
|
||||
expr = lists.findFirstIndex (x: x > 3) (abort "index found, so a default must not be evaluated") [ 1 6 4 ];
|
||||
expected = 1;
|
||||
|
@ -831,6 +898,26 @@ runTests {
|
|||
};
|
||||
};
|
||||
|
||||
testMatchAttrsMatchingExact = {
|
||||
expr = matchAttrs { cpu = { bits = 64; }; } { cpu = { bits = 64; }; };
|
||||
expected = true;
|
||||
};
|
||||
|
||||
testMatchAttrsMismatch = {
|
||||
expr = matchAttrs { cpu = { bits = 128; }; } { cpu = { bits = 64; }; };
|
||||
expected = false;
|
||||
};
|
||||
|
||||
testMatchAttrsMatchingImplicit = {
|
||||
expr = matchAttrs { cpu = { }; } { cpu = { bits = 64; }; };
|
||||
expected = true;
|
||||
};
|
||||
|
||||
testMatchAttrsMissingAttrs = {
|
||||
expr = matchAttrs { cpu = {}; } { };
|
||||
expected = false;
|
||||
};
|
||||
|
||||
testOverrideExistingEmpty = {
|
||||
expr = overrideExisting {} { a = 1; };
|
||||
expected = {};
|
||||
|
@ -1872,6 +1959,18 @@ runTests {
|
|||
expr = (with types; int).description;
|
||||
expected = "signed integer";
|
||||
};
|
||||
testTypeDescriptionIntsPositive = {
|
||||
expr = (with types; ints.positive).description;
|
||||
expected = "positive integer, meaning >0";
|
||||
};
|
||||
testTypeDescriptionIntsPositiveOrEnumAuto = {
|
||||
expr = (with types; either ints.positive (enum ["auto"])).description;
|
||||
expected = ''positive integer, meaning >0, or value "auto" (singular enum)'';
|
||||
};
|
||||
testTypeDescriptionListOfPositive = {
|
||||
expr = (with types; listOf ints.positive).description;
|
||||
expected = "list of (positive integer, meaning >0)";
|
||||
};
|
||||
testTypeDescriptionListOfInt = {
|
||||
expr = (with types; listOf int).description;
|
||||
expected = "list of signed integer";
|
||||
|
@ -1948,4 +2047,57 @@ runTests {
|
|||
testGetExe'FailureSecondArg = testingThrow (
|
||||
getExe' { type = "derivation"; } "dir/executable"
|
||||
);
|
||||
|
||||
testPlatformMatch = {
|
||||
expr = meta.platformMatch { system = "x86_64-linux"; } "x86_64-linux";
|
||||
expected = true;
|
||||
};
|
||||
|
||||
testPlatformMatchAttrs = {
|
||||
expr = meta.platformMatch (systems.elaborate "x86_64-linux") (systems.elaborate "x86_64-linux").parsed;
|
||||
expected = true;
|
||||
};
|
||||
|
||||
testPlatformMatchNoMatch = {
|
||||
expr = meta.platformMatch { system = "x86_64-darwin"; } "x86_64-linux";
|
||||
expected = false;
|
||||
};
|
||||
|
||||
testPlatformMatchMissingSystem = {
|
||||
expr = meta.platformMatch { } "x86_64-linux";
|
||||
expected = false;
|
||||
};
|
||||
|
||||
testPackagesFromDirectoryRecursive = {
|
||||
expr = packagesFromDirectoryRecursive {
|
||||
callPackage = path: overrides: import path overrides;
|
||||
directory = ./packages-from-directory;
|
||||
};
|
||||
expected = {
|
||||
a = "a";
|
||||
b = "b";
|
||||
# Note: Other files/directories in `./test-data/c/` are ignored and can be
|
||||
# used by `package.nix`.
|
||||
c = "c";
|
||||
my-namespace = {
|
||||
d = "d";
|
||||
e = "e";
|
||||
f = "f";
|
||||
my-sub-namespace = {
|
||||
g = "g";
|
||||
h = "h";
|
||||
};
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
# Check that `packagesFromDirectoryRecursive` can process a directory with a
|
||||
# top-level `package.nix` file into a single package.
|
||||
testPackagesFromDirectoryRecursiveTopLevelPackageNix = {
|
||||
expr = packagesFromDirectoryRecursive {
|
||||
callPackage = path: overrides: import path overrides;
|
||||
directory = ./packages-from-directory/c;
|
||||
};
|
||||
expected = "c";
|
||||
};
|
||||
}
|
||||
|
|
26
third_party/nixpkgs/lib/tests/modules.sh
vendored
|
@ -24,14 +24,14 @@ evalConfig() {
|
|||
local attr=$1
|
||||
shift
|
||||
local script="import ./default.nix { modules = [ $* ];}"
|
||||
nix-instantiate --timeout 1 -E "$script" -A "$attr" --eval-only --show-trace --read-write-mode
|
||||
nix-instantiate --timeout 1 -E "$script" -A "$attr" --eval-only --show-trace --read-write-mode --json
|
||||
}
|
||||
|
||||
reportFailure() {
|
||||
local attr=$1
|
||||
shift
|
||||
local script="import ./default.nix { modules = [ $* ];}"
|
||||
echo 2>&1 "$ nix-instantiate -E '$script' -A '$attr' --eval-only"
|
||||
echo 2>&1 "$ nix-instantiate -E '$script' -A '$attr' --eval-only --json"
|
||||
evalConfig "$attr" "$@" || true
|
||||
((++fail))
|
||||
}
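The switch to `--json` is why several expected outputs further down change shape; the rendering can be sketched with `builtins.toJSON`:

```nix
builtins.toJSON [ ]     # => "[]"        (hence the new "\[\]" pattern below)
builtins.toJSON { }     # => "{}"
builtins.toJSON "x y"   # => "\"x y\""   (scalar values now come back quoted)
```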
|
||||
|
@ -94,6 +94,14 @@ checkConfigOutput '^true$' config.result ./module-argument-default.nix
|
|||
# gvariant
|
||||
checkConfigOutput '^true$' config.assertion ./gvariant.nix
|
||||
|
||||
# https://github.com/NixOS/nixpkgs/pull/131205
|
||||
# We currently throw this error already in `config`, but throwing in `config.wrong1` would be acceptable.
|
||||
checkConfigError 'It seems as if you.re trying to declare an option by placing it into .config. rather than .options.' config.wrong1 ./error-mkOption-in-config.nix
|
||||
# We currently throw this error already in `config`, but throwing in `config.nest.wrong2` would be acceptable.
|
||||
checkConfigError 'It seems as if you.re trying to declare an option by placing it into .config. rather than .options.' config.nest.wrong2 ./error-mkOption-in-config.nix
|
||||
checkConfigError 'The option .sub.wrong2. does not exist. Definition values:' config.sub ./error-mkOption-in-submodule-config.nix
|
||||
checkConfigError '.*This can happen if you e.g. declared your options in .types.submodule.' config.sub ./error-mkOption-in-submodule-config.nix
|
||||
|
||||
# types.pathInStore
|
||||
checkConfigOutput '".*/store/0lz9p8xhf89kb1c1kk6jxrzskaiygnlh-bash-5.2-p15.drv"' config.pathInStore.ok1 ./types.nix
|
||||
checkConfigOutput '".*/store/0fb3ykw9r5hpayd05sr0cizwadzq1d8q-bash-5.2-p15"' config.pathInStore.ok2 ./types.nix
|
||||
|
@ -111,6 +119,12 @@ checkConfigError 'The option .* does not exist. Definition values:\n\s*- In .*'
|
|||
checkConfigError 'while evaluating a definition from `.*/define-enable-abort.nix' config.enable ./define-enable-abort.nix
|
||||
checkConfigError 'while evaluating the error message for definitions for .enable., which is an option that does not exist' config.enable ./define-enable-abort.nix
|
||||
|
||||
# Check boolByOr type.
|
||||
checkConfigOutput '^false$' config.value.falseFalse ./boolByOr.nix
|
||||
checkConfigOutput '^true$' config.value.trueFalse ./boolByOr.nix
|
||||
checkConfigOutput '^true$' config.value.falseTrue ./boolByOr.nix
|
||||
checkConfigOutput '^true$' config.value.trueTrue ./boolByOr.nix
|
||||
|
||||
checkConfigOutput '^1$' config.bare-submodule.nested ./declare-bare-submodule.nix ./declare-bare-submodule-nested-option.nix
|
||||
checkConfigOutput '^2$' config.bare-submodule.deep ./declare-bare-submodule.nix ./declare-bare-submodule-deep-option.nix
|
||||
checkConfigOutput '^42$' config.bare-submodule.nested ./declare-bare-submodule.nix ./declare-bare-submodule-nested-option.nix ./declare-bare-submodule-deep-option.nix ./define-bare-submodule-values.nix
|
||||
|
@ -352,12 +366,12 @@ checkConfigError 'The option .* has conflicting definitions' config.value ./type
|
|||
checkConfigOutput '^0$' config.value.int ./types-anything/equal-atoms.nix
|
||||
checkConfigOutput '^false$' config.value.bool ./types-anything/equal-atoms.nix
|
||||
checkConfigOutput '^""$' config.value.string ./types-anything/equal-atoms.nix
|
||||
checkConfigOutput '^/$' config.value.path ./types-anything/equal-atoms.nix
|
||||
checkConfigOutput '^"/[^"]+"$' config.value.path ./types-anything/equal-atoms.nix
|
||||
checkConfigOutput '^null$' config.value.null ./types-anything/equal-atoms.nix
|
||||
checkConfigOutput '^0.1$' config.value.float ./types-anything/equal-atoms.nix
|
||||
# Functions can't be merged together
|
||||
checkConfigError "The option .value.multiple-lambdas.<function body>. has conflicting option types" config.applied.multiple-lambdas ./types-anything/functions.nix
|
||||
checkConfigOutput '^<LAMBDA>$' config.value.single-lambda ./types-anything/functions.nix
|
||||
checkConfigOutput '^true$' config.valueIsFunction.single-lambda ./types-anything/functions.nix
|
||||
checkConfigOutput '^null$' config.applied.merging-lambdas.x ./types-anything/functions.nix
|
||||
checkConfigOutput '^null$' config.applied.merging-lambdas.y ./types-anything/functions.nix
|
||||
# Check that all mk* modifiers are applied
|
||||
|
@ -384,7 +398,7 @@ checkConfigOutput '^"a b y z"$' config.resultFooBar ./declare-variants.nix ./def
|
|||
checkConfigOutput '^"a b c"$' config.resultFooFoo ./declare-variants.nix ./define-variant.nix
|
||||
|
||||
## emptyValue's
|
||||
checkConfigOutput "[ ]" config.list.a ./emptyValues.nix
|
||||
checkConfigOutput "\[\]" config.list.a ./emptyValues.nix
|
||||
checkConfigOutput "{}" config.attrs.a ./emptyValues.nix
|
||||
checkConfigOutput "null" config.null.a ./emptyValues.nix
|
||||
checkConfigOutput "{}" config.submodule.a ./emptyValues.nix
|
||||
|
@ -393,7 +407,7 @@ checkConfigError 'The option .int.a. is used but not defined' config.int.a ./emp
|
|||
checkConfigError 'The option .nonEmptyList.a. is used but not defined' config.nonEmptyList.a ./emptyValues.nix
|
||||
|
||||
## types.raw
|
||||
checkConfigOutput "{ foo = <CODE>; }" config.unprocessedNesting ./raw.nix
|
||||
checkConfigOutput '^true$' config.unprocessedNestingEvaluates.success ./raw.nix
|
||||
checkConfigOutput "10" config.processedToplevel ./raw.nix
|
||||
checkConfigError "The option .multiple. is defined multiple times" config.multiple ./raw.nix
|
||||
checkConfigOutput "bar" config.priorities ./raw.nix
|
||||
|
|
14
third_party/nixpkgs/lib/tests/modules/boolByOr.nix
vendored
Normal file
|
@ -0,0 +1,14 @@
|
|||
{ lib, ... }: {
|
||||
|
||||
options.value = lib.mkOption {
|
||||
type = lib.types.lazyAttrsOf lib.types.boolByOr;
|
||||
};
|
||||
|
||||
config.value = {
|
||||
falseFalse = lib.mkMerge [ false false ];
|
||||
trueFalse = lib.mkMerge [ true false ];
|
||||
falseTrue = lib.mkMerge [ false true ];
|
||||
trueTrue = lib.mkMerge [ true true ];
|
||||
};
|
||||
}
|
||||
|
14
third_party/nixpkgs/lib/tests/modules/error-mkOption-in-config.nix
vendored
Normal file
|
@ -0,0 +1,14 @@
|
|||
{ lib, ... }:
|
||||
let
|
||||
inherit (lib) mkOption;
|
||||
in
|
||||
{
|
||||
wrong1 = mkOption {
|
||||
};
|
||||
# This is not actually reported separately, so could be omitted from the test
|
||||
# but it makes the example more realistic.
|
||||
# Making it parse this _config_ as options would be too risky. What if it's not
|
||||
# options but other values, that abort, throw, diverge, etc?
|
||||
nest.wrong2 = mkOption {
|
||||
};
|
||||
}
|
12
third_party/nixpkgs/lib/tests/modules/error-mkOption-in-submodule-config.nix
vendored
Normal file
|
@ -0,0 +1,12 @@
|
|||
{ lib, ... }:
|
||||
let
|
||||
inherit (lib) mkOption;
|
||||
in
|
||||
{
|
||||
options.sub = lib.mkOption {
|
||||
type = lib.types.submodule {
|
||||
wrong2 = mkOption {};
|
||||
};
|
||||
default = {};
|
||||
};
|
||||
}
|
|
@ -1,4 +1,4 @@
|
|||
{ lib, ... }: {
|
||||
{ lib, config, ... }: {
|
||||
|
||||
options = {
|
||||
processedToplevel = lib.mkOption {
|
||||
|
@ -13,6 +13,9 @@
|
|||
priorities = lib.mkOption {
|
||||
type = lib.types.raw;
|
||||
};
|
||||
unprocessedNestingEvaluates = lib.mkOption {
|
||||
default = builtins.tryEval config.unprocessedNesting;
|
||||
};
|
||||
};
|
||||
|
||||
config = {
|
||||
|
|
|
@ -9,7 +9,7 @@
|
|||
value.int = 0;
|
||||
value.bool = false;
|
||||
value.string = "";
|
||||
value.path = /.;
|
||||
value.path = ./.;
|
||||
value.null = null;
|
||||
value.float = 0.1;
|
||||
}
|
||||
|
@ -17,7 +17,7 @@
|
|||
value.int = 0;
|
||||
value.bool = false;
|
||||
value.string = "";
|
||||
value.path = /.;
|
||||
value.path = ./.;
|
||||
value.null = null;
|
||||
value.float = 0.1;
|
||||
}
|
||||
|
|
|
@ -1,5 +1,9 @@
|
|||
{ lib, config, ... }: {
|
||||
|
||||
options.valueIsFunction = lib.mkOption {
|
||||
default = lib.mapAttrs (name: lib.isFunction) config.value;
|
||||
};
|
||||
|
||||
options.value = lib.mkOption {
|
||||
type = lib.types.anything;
|
||||
};
|
||||
|
|
17
third_party/nixpkgs/lib/tests/nix-for-tests.nix
vendored
Normal file
|
@ -0,0 +1,17 @@
|
|||
{ pkgs
|
||||
}:
|
||||
|
||||
# The aws-sdk-cpp tests are flaky. Since pull requests to staging
|
||||
# cause nix to be rebuilt, this means that staging PRs end up
|
||||
# getting false CI failures due to whatever is flaky in the AWS
|
||||
# SDK tests. Since none of our CI needs to (or should be able to)
|
||||
# contact AWS S3, let's just omit it all from the Nix that runs
|
||||
# CI. Bonus: the tests build way faster.
|
||||
#
|
||||
# See also: https://github.com/NixOS/nix/issues/7582
|
||||
|
||||
builtins.mapAttrs (_: pkg:
|
||||
if builtins.isAttrs pkg
|
||||
then pkg.override { withAWS = false; }
|
||||
else pkg)
|
||||
pkgs.nixVersions
|
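The `builtins.isAttrs` guard is there because `pkgs.nixVersions` may contain members that are not packages and therefore have no `override`; only attribute sets get the `withAWS = false` override applied. A hedged usage sketch:

```nix
# Sketch: building the AWS-free Nix set the lib tests now consume.
let
  pkgs = import ./. { };  # assumption: evaluated from the nixpkgs root
  nixes = import ./lib/tests/nix-for-tests.nix { inherit pkgs; };
in
  nixes.stable  # a nix built with `withAWS = false`
```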
2 third_party/nixpkgs/lib/tests/packages-from-directory/a.nix vendored Normal file

@ -0,0 +1,2 @@
{ }:
"a"

2 third_party/nixpkgs/lib/tests/packages-from-directory/b.nix vendored Normal file

@ -0,0 +1,2 @@
{ }:
"b"

0 third_party/nixpkgs/lib/tests/packages-from-directory/c/my-extra-feature.patch vendored Normal file

0 third_party/nixpkgs/lib/tests/packages-from-directory/c/not-a-namespace/not-a-package.nix vendored Normal file

2 third_party/nixpkgs/lib/tests/packages-from-directory/c/package.nix vendored Normal file

@ -0,0 +1,2 @@
{ }:
"c"

0 third_party/nixpkgs/lib/tests/packages-from-directory/c/support-definitions.nix vendored Normal file

2 third_party/nixpkgs/lib/tests/packages-from-directory/my-namespace/d.nix vendored Normal file

@ -0,0 +1,2 @@
{ }:
"d"

2 third_party/nixpkgs/lib/tests/packages-from-directory/my-namespace/e.nix vendored Normal file

@ -0,0 +1,2 @@
{ }:
"e"

2 third_party/nixpkgs/lib/tests/packages-from-directory/my-namespace/f/package.nix vendored Normal file

@ -0,0 +1,2 @@
{ }:
"f"

2 third_party/nixpkgs/lib/tests/packages-from-directory/my-namespace/my-sub-namespace/g.nix vendored Normal file

@ -0,0 +1,2 @@
{ }:
"g"

2 third_party/nixpkgs/lib/tests/packages-from-directory/my-namespace/my-sub-namespace/h.nix vendored Normal file

@ -0,0 +1,2 @@
{ }:
"h"
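These fixtures appear to exercise `lib.filesystem.packagesFromDirectoryRecursive`, which turns a directory tree of `<name>.nix` / `<name>/package.nix` files into a (possibly nested) attribute set by calling each file with the given `callPackage`. A hedged sketch of how the tree above might be consumed:

```nix
# Sketch: what packagesFromDirectoryRecursive might produce for the fixture
# directory above, using a trivial callPackage so each file evaluates to its
# string. The stand-in callPackage is an assumption for illustration only.
let
  lib = import ./lib;  # assumption: evaluated from the nixpkgs root
  packages = lib.filesystem.packagesFromDirectoryRecursive {
    callPackage = path: args: import path args;
    directory = ./lib/tests/packages-from-directory;
  };
in
  packages
# => { a = "a"; b = "b"; c = "c";
#      my-namespace = { d = "d"; e = "e"; f = "f";
#        my-sub-namespace = { g = "g"; h = "h"; }; }; }
```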
19 third_party/nixpkgs/lib/tests/release.nix vendored

@ -1,8 +1,9 @@
{ # The pkgs used for dependencies for the testing itself
  # Don't test properties of pkgs.lib, but rather the lib in the parent directory
  pkgs ? import ../.. {} // { lib = throw "pkgs.lib accessed, but the lib tests should use nixpkgs' lib path directly!"; },
  nix ? pkgs.nix,
  nixVersions ? [ pkgs.nixVersions.minimum nix pkgs.nixVersions.unstable ],
  nix ? pkgs-nixVersions.stable,
  nixVersions ? [ pkgs-nixVersions.minimum nix pkgs-nixVersions.unstable ],
  pkgs-nixVersions ? import ./nix-for-tests.nix { inherit pkgs; },
}:

let

@ -66,5 +67,17 @@ let
in
pkgs.symlinkJoin {
  name = "nixpkgs-lib-tests";
  paths = map testWithNix nixVersions;
  paths = map testWithNix nixVersions ++

    #
    # TEMPORARY MIGRATION MECHANISM
    #
    # This comment and the expression which follows it should be
    # removed as part of resolving this issue:
    #
    # https://github.com/NixOS/nixpkgs/issues/272591
    #
    [(import ../../pkgs/test/release {})]
  ;

}
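With the new `pkgs-nixVersions` argument, both the Nix test matrix and the AWS-free override from `nix-for-tests.nix` can be adjusted at the call site. A hedged sketch of such an invocation:

```nix
# Sketch: running the lib test suite against a single Nix version,
# reusing the AWS-free package set from nix-for-tests.nix.
let
  pkgs = import ../.. { };  # assumption: evaluated from lib/tests/
  pkgs-nixVersions = import ./nix-for-tests.nix { inherit pkgs; };
in
import ./release.nix {
  inherit pkgs pkgs-nixVersions;
  nixVersions = [ pkgs-nixVersions.stable ];
}
```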
40 third_party/nixpkgs/lib/trivial.nix vendored

@ -1,6 +1,18 @@
{ lib }:

rec {
let
  inherit (lib.trivial)
    isFunction
    isInt
    functionArgs
    pathExists
    release
    setFunctionArgs
    toBaseDigits
    version
    versionSuffix
    warn;
in {

  ## Simple (higher order) functions

@ -58,9 +70,7 @@ rec {
     of the next function, and the last function returns the
     final value.
  */
  pipe = val: functions:
    let reverseApply = x: f: f x;
    in builtins.foldl' reverseApply val functions;
  pipe = builtins.foldl' (x: f: f x);

  # note please don’t add a function like `compose = flip pipe`.
  # This would confuse users, because the order of the functions
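The point-free rewrite of `pipe` is behaviour-preserving: the value is folded through the functions from left to right. For example:

```nix
let
  lib = import ./lib;  # nixpkgs lib from this checkout
in
  lib.pipe 2 [
    (x: x + 2)   # 4
    (x: x * 2)   # 8
    toString     # "8"
  ]
# => "8"
```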
@ -195,7 +205,7 @@
     On each release the first letter is bumped and a new animal is chosen
     starting with that new letter.
  */
  codeName = "Tapir";
  codeName = "Uakari";

  /* Returns the current nixpkgs version suffix as string. */
  versionSuffix =

@ -439,7 +449,7 @@
  */
  functionArgs = f:
    if f ? __functor
    then f.__functionArgs or (lib.functionArgs (f.__functor f))
    then f.__functionArgs or (functionArgs (f.__functor f))
    else builtins.functionArgs f;

  /* Check whether something is a function or something
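The `lib.functionArgs` → `functionArgs` change just switches to the binding inherited at the top of the file; behaviour is unchanged. For reference:

```nix
let
  lib = import ./lib;
in {
  # Each formal argument is mapped to whether it has a default.
  plain = lib.functionArgs ({ a, b ? 0 }: a + b);
  # => { a = false; b = true; }

  # Functors recurse into __functor (unless __functionArgs is set explicitly).
  functor = lib.functionArgs { __functor = self: { x ? 1 }: x; };
  # => { x = true; }
}
```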
@ -510,22 +520,20 @@

     toHexString 250 => "FA"
  */
  toHexString = i:
    let
      toHexDigit = d:
        if d < 10
        then toString d
        else
          {
  toHexString = let
    hexDigits = {
      "10" = "A";
      "11" = "B";
      "12" = "C";
      "13" = "D";
      "14" = "E";
      "15" = "F";
    }.${toString d};
    in
      lib.concatMapStrings toHexDigit (toBaseDigits 16 i);
    };
    toHexDigit = d:
      if d < 10
      then toString d
      else hexDigits.${toString d};
  in i: lib.concatMapStrings toHexDigit (toBaseDigits 16 i);

  /* `toBaseDigits base i` converts the positive integer i to a list of its
     digits in the given base. For example:
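The refactor hoists the digit table out of the per-digit helper; the results are unchanged. The documented behaviour, for reference:

```nix
let
  lib = import ./lib;
in {
  digits = lib.toBaseDigits 16 250;  # most significant digit first
  # => [ 15 10 ]
  hex = lib.toHexString 250;
  # => "FA"
}
```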
39 third_party/nixpkgs/lib/types.nix vendored

@ -67,6 +67,7 @@ let
    ;
  outer_types =
    rec {
      __attrsFailEvaluation = true;
      isType = type: x: (x._type or "") == type;

      setType = typeName: value: value // {

@ -112,9 +113,14 @@
    , # Description of the type, defined recursively by embedding the wrapped type if any.
      description ? null
      # A hint for whether or not this description needs parentheses. Possible values:
      #  - "noun": a simple noun phrase such as "positive integer"
      #  - "conjunction": a phrase with a potentially ambiguous "or" connective.
      #  - "noun": a noun phrase
      #    Example description: "positive integer",
      #  - "conjunction": a phrase with a potentially ambiguous "or" connective
      #    Example description: "int or string"
      #  - "composite": a phrase with an "of" connective
      #    Example description: "list of string"
      #  - "nonRestrictiveClause": a noun followed by a comma and a clause
      #    Example description: "positive integer, meaning >0"
      #  See the `optionDescriptionPhrase` function.
    , descriptionClass ? null
    , # DO NOT USE WITHOUT KNOWING WHAT YOU ARE DOING!
@ -275,6 +281,22 @@
      merge = mergeEqualOption;
    };

    boolByOr = mkOptionType {
      name = "boolByOr";
      description = "boolean (merged using or)";
      descriptionClass = "noun";
      check = isBool;
      merge = loc: defs:
        foldl'
          (result: def:
            # Under the assumption that .check always runs before merge, we can assume that all defs.*.value
            # have been forced, and therefore we assume we don't introduce order-dependent strictness here
            result || def.value
          )
          false
          defs;
    };

    int = mkOptionType {
      name = "int";
      description = "signed integer";
|
|||
unsigned = addCheck types.int (x: x >= 0) // {
|
||||
name = "unsignedInt";
|
||||
description = "unsigned integer, meaning >=0";
|
||||
descriptionClass = "nonRestrictiveClause";
|
||||
};
|
||||
positive = addCheck types.int (x: x > 0) // {
|
||||
name = "positiveInt";
|
||||
description = "positive integer, meaning >0";
|
||||
descriptionClass = "nonRestrictiveClause";
|
||||
};
|
||||
u8 = unsign 8 256;
|
||||
u16 = unsign 16 65536;
|
||||
|
@ -366,10 +390,12 @@ rec {
|
|||
nonnegative = addCheck number (x: x >= 0) // {
|
||||
name = "numberNonnegative";
|
||||
description = "nonnegative integer or floating point number, meaning >=0";
|
||||
descriptionClass = "nonRestrictiveClause";
|
||||
};
|
||||
positive = addCheck number (x: x > 0) // {
|
||||
name = "numberPositive";
|
||||
description = "positive integer or floating point number, meaning >0";
|
||||
descriptionClass = "nonRestrictiveClause";
|
||||
};
|
||||
};
|
||||
|
||||
|
@ -446,6 +472,7 @@ rec {
|
|||
passwdEntry = entryType: addCheck entryType (str: !(hasInfix ":" str || hasInfix "\n" str)) // {
|
||||
name = "passwdEntry ${entryType.name}";
|
||||
description = "${optionDescriptionPhrase (class: class == "noun") entryType}, not containing newlines or colons";
|
||||
descriptionClass = "nonRestrictiveClause";
|
||||
};
|
||||
|
||||
attrs = mkOptionType {
|
||||
|
@ -853,7 +880,13 @@
    # Either value of type `t1` or `t2`.
    either = t1: t2: mkOptionType rec {
      name = "either";
      description = "${optionDescriptionPhrase (class: class == "noun" || class == "conjunction") t1} or ${optionDescriptionPhrase (class: class == "noun" || class == "conjunction" || class == "composite") t2}";
      description =
        if t1.descriptionClass or null == "nonRestrictiveClause"
        then
          # Plain, but add comma
          "${t1.description}, or ${optionDescriptionPhrase (class: class == "noun" || class == "conjunction") t2}"
        else
          "${optionDescriptionPhrase (class: class == "noun" || class == "conjunction") t1} or ${optionDescriptionPhrase (class: class == "noun" || class == "conjunction" || class == "composite") t2}";
      descriptionClass = "conjunction";
      check = x: t1.check x || t2.check x;
      merge = loc: defs:
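The special case keeps option descriptions readable when the left-hand type ends in a clause. A hedged illustration of the resulting descriptions:

```nix
# Sketch: how the two branches of the new `description` read.
let
  lib = import ./lib;  # assumption: evaluated from the nixpkgs root
in {
  # t1 has descriptionClass = "nonRestrictiveClause", so a comma is added:
  withClause = (lib.types.either lib.types.ints.positive lib.types.str).description;
  # => "positive integer, meaning >0, or string"

  # Otherwise the previous phrasing is kept:
  plain = (lib.types.either lib.types.int lib.types.str).description;
  # => "signed integer or string"
}
```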
7 third_party/nixpkgs/maintainers/README.md vendored

@ -165,3 +165,10 @@ team after giving the existing members a few days to respond.

*Important:* If a team says it is a closed group, do not merge additions
to the team without an approval by at least one existing member.


# Maintainer scripts

Various utility scripts, which are mainly useful for nixpkgs maintainers,
are available under `./scripts/`. See its [README](./scripts/README.md)
for further information.
825 third_party/nixpkgs/maintainers/maintainer-list.nix vendored
File diff suppressed because it is too large
62 third_party/nixpkgs/maintainers/scripts/README.md vendored Normal file

@ -0,0 +1,62 @@
# Maintainer scripts

This folder contains various executable scripts for nixpkgs maintainers,
and supporting data or nixlang files as needed.
These scripts generally aren't a stable interface and may change or be removed.

What follows is a (very incomplete) overview of the available scripts.


## Metadata

### `check-by-name.sh`

An alias for `pkgs/test/nixpkgs-check-by-name/scripts/run-local.sh`, see the [documentation](../../pkgs/test/nixpkgs-check-by-name/scripts/README.md).

### `get-maintainer.sh`

`get-maintainer.sh [selector] value` returns a JSON object describing
a given nixpkgs maintainer, equivalent to `lib.maintainers.${x} // { handle = x; }`.

This allows looking up a maintainer's attrset (including GitHub and Matrix
handles, email address, etc.) based on any of their handles, more correctly and
robustly than a text search through `maintainer-list.nix`.

```
❯ ./get-maintainer.sh nicoo
{
  "email": "nicoo@debian.org",
  "github": "nbraud",
  "githubId": 1155801,
  "keys": [
    {
      "fingerprint": "E44E 9EA5 4B8E 256A FB73 49D3 EC9D 3708 72BC 7A8C"
    }
  ],
  "name": "nicoo",
  "handle": "nicoo"
}

❯ ./get-maintainer.sh name 'Silvan Mosberger'
{
  "email": "contact@infinisil.com",
  "github": "infinisil",
  "githubId": 20525370,
  "keys": [
    {
      "fingerprint": "6C2B 55D4 4E04 8266 6B7D DA1A 422E 9EDA E015 7170"
    }
  ],
  "matrix": "@infinisil:matrix.org",
  "name": "Silvan Mosberger",
  "handle": "infinisil"
}
```

The maintainer is designated by a `selector` which must be one of:
- `handle` (default): the maintainer's attribute name in `lib.maintainers`;
- `email`, `github`, `githubId`, `matrix`, `name`:
  attributes of the maintainer's object, matched exactly;
  see [`maintainer-list.nix`] for the fields' definitions.

[`maintainer-list.nix`]: ../maintainer-list.nix
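The `lib.maintainers.${x} // { handle = x; }` equivalence the README states can also be checked directly in Nix; a hedged sketch:

```nix
# Sketch: the Nix expression the script's output mirrors.
let
  lib = import ./lib;  # assumption: evaluated from the nixpkgs root
  handle = "infinisil";
in
  lib.maintainers.${handle} // { inherit handle; }
# => attrset with email, github, githubId, keys, matrix, name, plus handle
```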
Some files were not shown because too many files have changed in this diff