Project import generated by Copybara.

GitOrigin-RevId: 34ad3ffe08adfca17fcb4e4a47bb5f3b113687be
Default email 2021-10-17 11:34:42 +02:00
parent c8bd384b6d
commit a7848c7476
636 changed files with 28470 additions and 50346 deletions

View file

@ -158,7 +158,23 @@ This can be overridden.
By default, Agda sources are files ending on `.agda`, or literate Agda files ending on `.lagda`, `.lagda.tex`, `.lagda.org`, `.lagda.md`, `.lagda.rst`. The list of recognised Agda source extensions can be extended by setting the `extraExtensions` config variable.
## Adding Agda packages to Nixpkgs {#adding-agda-packages-to-nixpkgs} ## Maintaining the Agda package set on Nixpkgs {#maintaining-the-agda-package-set-on-nixpkgs}
We aim to provide all common Agda libraries as packages on `nixpkgs`,
and to keep them up to date.
Contributions and maintenance help are always appreciated,
but the maintenance effort is typically low since the Agda ecosystem is quite small.
The `nixpkgs` Agda package set tries to take up a role similar to that of [Stackage](https://www.stackage.org/) in the Haskell world.
It is a curated set of libraries that:
1. Always work together.
2. Are as up-to-date as possible.
While the Haskell ecosystem is huge, and Stackage is highly automatised,
the Agda package set is small and can (still) be maintained by hand.
### Adding Agda packages to Nixpkgs {#adding-agda-packages-to-nixpkgs}
To add an Agda package to `nixpkgs`, the derivation should be written to `pkgs/development/libraries/agda/${library-name}/` and an entry should be added to `pkgs/top-level/agda-packages.nix`. Here it is called in a scope with access to all other Agda libraries, so the top line of the `default.nix` can look like:
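For orientation, here is a minimal sketch of such a `default.nix`, assuming a made-up library that depends only on the standard library (all names and values are placeholders):

```nix
{ mkDerivation, standard-library, lib }:

mkDerivation {
  pname = "example-agda-library";     # hypothetical package, not in nixpkgs
  version = "1.0";
  src = ./.;                          # placeholder source
  buildInputs = [ standard-library ]; # other Agda libraries are available by name in this scope
  everythingFile = "./Everything.agda";
  meta.description = "An example Agda library";
}
```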
@ -192,3 +208,49 @@ mkDerivation {
This library has a file called `.agda-lib`, and so we give an empty string to `libraryFile` as nothing precedes `.agda-lib` in the filename. This file contains `name: IAL-1.3`, and so we let `libraryName = "IAL-1.3"`. This library does not use an `Everything.agda` file and instead has a Makefile, so there is no need to set `everythingFile` and we set a custom `buildPhase`.
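A minimal sketch of the attributes this paragraph describes (version, source, and the exact build commands are placeholders, not the actual packaging):

```nix
{ mkDerivation }:

mkDerivation {
  pname = "iowa-stdlib";
  version = "1.3";   # placeholder version
  src = ./.;         # placeholder; the real derivation fetches the upstream sources

  libraryFile = "";          # nothing precedes ".agda-lib" in the file name
  libraryName = "IAL-1.3";   # matches the "name:" field of the .agda-lib file

  # No Everything.agda upstream; build through the library's Makefile instead.
  buildPhase = ''
    make
  '';
}
```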
When writing an Agda package it is essential to make sure that no `.agda-lib` file gets added to the store as a single file (for example by using `writeText`). This causes Agda to think that the Nix store is an Agda library and it will attempt to write to it whenever it typechecks something. See [https://github.com/agda/agda/issues/4613](https://github.com/agda/agda/issues/4613).
In the pull request adding this library,
you can test whether it builds correctly by writing in a comment:
```
@ofborg build agdaPackages.iowa-stdlib
```
### Maintaining Agda packages
As mentioned before, the aim is to have a compatible and up-to-date package set.
These two goals sometimes conflict with each other:
For example, if we update `agdaPackages.standard-library` because there was an upstream release,
this will typically break many reverse dependencies,
i.e. downstream Agda libraries that depend on the standard library.
In `nixpkgs` we are typically among the first to notice this,
since we have build tests in place to check this.
In a pull request updating e.g. the standard library, you should write the following comment:
```
@ofborg build agdaPackages.standard-library.passthru.tests
```
This will build all reverse dependencies of the standard library,
for example `agdaPackages.agda-categories`, or `agdaPackages.generic`.
In some cases it is useful to build _all_ Agda packages.
This can be done with the following GitHub comment:
```
@ofborg build agda.passthru.tests.allPackages
```
Sometimes, the builds of the reverse dependencies fail because they have not yet been updated and released.
You should drop the maintainers a quick issue notifying them of the breakage,
citing the build error (which you can get from the ofborg logs).
If you are motivated, you might even send a pull request that fixes it.
Usually, the maintainers will answer within a week or two with a new release.
Bumping the version of that reverse dependency should be a further commit on your PR.
In the rare case that a new release is not to be expected within an acceptable time,
simply mark the package as broken by setting `meta.broken = true;`.
This will exclude it from the build test.
The package can be re-added later, once it is fixed,
and in the meantime it does not hinder the advancement of the whole package set.
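For illustration, setting the flag in a hypothetical reverse dependency could look like this sketch (names and attributes are placeholders):

```nix
{ mkDerivation, standard-library }:

mkDerivation {
  pname = "some-agda-library";   # hypothetical reverse dependency
  version = "1.0";
  src = ./.;                     # placeholder source
  buildInputs = [ standard-library ];

  # Not yet compatible with the new standard-library release; keep it out of
  # the build tests until upstream publishes a fix.
  meta.broken = true;
}
```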

View file

@ -28,8 +28,7 @@ mkShell {
packages = [ packages = [
(with dotnetCorePackages; combinePackages [ (with dotnetCorePackages; combinePackages [
sdk_3_1 sdk_3_1
sdk_3_0 sdk_5_0
sdk_2_1
]) ])
]; ];
} }
@ -64,9 +63,9 @@ $ dotnet --info
The `dotnetCorePackages.sdk_X_Y` is preferred over the old dotnet-sdk as both major and minor version are very important for a dotnet environment. If a given minor version isn't present (or was changed), then this will likely break your ability to build a project.
## dotnetCorePackages.sdk vs dotnetCorePackages.net vs dotnetCorePackages.netcore vs dotnetCorePackages.aspnetcore {#dotnetcorepackages.sdk-vs-dotnetcorepackages.net-vs-dotnetcorepackages.netcore-vs-dotnetcorepackages.aspnetcore} ## dotnetCorePackages.sdk vs dotnetCorePackages.runtime vs dotnetCorePackages.aspnetcore {#dotnetcorepackages.sdk-vs-dotnetcorepackages.runtime-vs-dotnetcorepackages.aspnetcore}
The `dotnetCorePackages.sdk` contains both a runtime and the full sdk of a given version. The `net`, `netcore` and `aspnetcore` packages are meant to serve as minimal runtimes to deploy alongside already built applications. For runtime versions >= .NET 5 `net` is used while `netcore` is used for older .NET Core runtime version. The `dotnetCorePackages.sdk` contains both a runtime and the full sdk of a given version. The `runtime` and `aspnetcore` packages are meant to serve as minimal runtimes to deploy alongside already built applications.
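For example, a shell that only provides a runtime for an already published application could look like the following sketch (the `aspnetcore_5_0` attribute is an assumed example; check `dotnetCorePackages` for the runtimes actually available):

```nix
{ pkgs ? import <nixpkgs> { } }:

pkgs.mkShell {
  # Only a runtime, no SDK: enough to run an already built application.
  packages = [ pkgs.dotnetCorePackages.aspnetcore_5_0 ];
}
```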
## Packaging a Dotnet Application {#packaging-a-dotnet-application} ## Packaging a Dotnet Application {#packaging-a-dotnet-application}

View file

@ -237,22 +237,6 @@ where they are known to differ. But there are ways to customize the argument:
--target /nix/store/asdfasdfsadf-thumb-crazy.json # contains {"foo":"","bar":""} --target /nix/store/asdfasdfsadf-thumb-crazy.json # contains {"foo":"","bar":""}
``` ```
Finally, as an ad-hoc escape hatch, a computed target (string or JSON file
path) can be passed directly to `buildRustPackage`:
```nix
pkgs.rustPlatform.buildRustPackage {
/* ... */
target = "x86_64-fortanix-unknown-sgx";
}
```
This is useful to avoid rebuilding Rust tools, since they are actually target
agnostic and don't need to be rebuilt. But in the future, we should always
build the Rust tools and standard library crates separately so there is no
reason not to take the `stdenv.hostPlatform.rustc`-modifying approach, and the
ad-hoc escape hatch to `buildRustPackage` can be removed.
Note that currently custom targets aren't compiled with `std`, so `cargo test`
will fail. This can be ignored by adding `doCheck = false;` to your derivation.
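For instance, a derivation for such a custom target might look like the following sketch (package name, source coordinates, and hashes are placeholders):

```nix
{ lib, rustPlatform, fetchFromGitHub }:

rustPlatform.buildRustPackage rec {
  pname = "example-sgx-tool";   # placeholder package
  version = "0.1.0";

  src = fetchFromGitHub {
    owner = "example";          # placeholder source coordinates
    repo = pname;
    rev = "v${version}";
    sha256 = lib.fakeSha256;
  };

  cargoSha256 = lib.fakeSha256;

  # Custom targets are not built with `std`, so `cargo test` cannot run.
  doCheck = false;
}
```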

View file

@ -153,6 +153,11 @@ in mkLicense lset) ({
free = false; free = false;
}; };
capec = {
fullName = "Common Attack Pattern Enumeration and Classification";
url = "https://capec.mitre.org/about/termsofuse.html";
};
clArtistic = { clArtistic = {
spdxId = "ClArtistic"; spdxId = "ClArtistic";
fullName = "Clarified Artistic License"; fullName = "Clarified Artistic License";

View file

@ -1197,6 +1197,12 @@
email = "sivaraman.balaji@gmail.com"; email = "sivaraman.balaji@gmail.com";
name = "Balaji Sivaraman"; name = "Balaji Sivaraman";
}; };
balodja = {
email = "balodja@gmail.com";
github = "balodja";
githubId = 294444;
name = "Vladimir Korolev";
};
baloo = { baloo = {
email = "nixpkgs@superbaloo.net"; email = "nixpkgs@superbaloo.net";
github = "baloo"; github = "baloo";
@ -2437,6 +2443,12 @@
githubId = 4331004; githubId = 4331004;
name = "Naoya Hatta"; name = "Naoya Hatta";
}; };
dalpd = {
email = "denizalpd@ogr.iu.edu.tr";
github = "dalpd";
githubId = 16895361;
name = "Deniz Alp Durmaz";
};
DamienCassou = { DamienCassou = {
email = "damien@cassou.me"; email = "damien@cassou.me";
github = "DamienCassou"; github = "DamienCassou";
@ -4082,6 +4094,12 @@
githubId = 16470252; githubId = 16470252;
name = "Gemini Lasswell"; name = "Gemini Lasswell";
}; };
gbtb = {
email = "goodbetterthebeast3@gmail.com";
github = "gbtb";
githubId = 37017396;
name = "gbtb";
};
gebner = { gebner = {
email = "gebner@gebner.org"; email = "gebner@gebner.org";
github = "gebner"; github = "gebner";
@ -4409,6 +4427,16 @@
githubId = 54728477; githubId = 54728477;
name = "Happy River"; name = "Happy River";
}; };
hardselius = {
email = "martin@hardselius.dev";
github = "hardselius";
githubId = 1422583;
name = "Martin Hardselius";
keys = [{
longkeyid = "rsa4096/0x03A6E6F786936619";
fingerprint = "3F35 E4CA CBF4 2DE1 2E90 53E5 03A6 E6F7 8693 6619";
}];
};
haslersn = { haslersn = {
email = "haslersn@fius.informatik.uni-stuttgart.de"; email = "haslersn@fius.informatik.uni-stuttgart.de";
github = "haslersn"; github = "haslersn";
@ -6973,6 +7001,12 @@
githubId = 458783; githubId = 458783;
name = "Martin Gammelsæter"; name = "Martin Gammelsæter";
}; };
martfont = {
name = "Martino Fontana";
email = "tinozzo123@tutanota.com";
github = "SuperSamus";
githubId = 40663462;
};
marzipankaiser = { marzipankaiser = {
email = "nixos@gaisseml.de"; email = "nixos@gaisseml.de";
github = "marzipankaiser"; github = "marzipankaiser";
@ -10507,6 +10541,13 @@
githubId = 4477729; githubId = 4477729;
name = "Sergey Mironov"; name = "Sergey Mironov";
}; };
smitop = {
name = "Smitty van Bodegom";
email = "me@smitop.com";
matrix = "@smitop:kde.org";
github = "Smittyvb";
githubId = 10530973;
};
sna = { sna = {
email = "abouzahra.9@wright.edu"; email = "abouzahra.9@wright.edu";
github = "s-na"; github = "s-na";
@ -12359,6 +12400,12 @@
githubId = 452; githubId = 452;
name = "Yurii Rashkovskii"; name = "Yurii Rashkovskii";
}; };
yrd = {
name = "Yannik Rödel";
email = "nix@yannik.info";
github = "yrd";
githubId = 1820447;
};
ysndr = { ysndr = {
email = "me@ysndr.de"; email = "me@ysndr.de";
github = "ysndr"; github = "ysndr";

View file

@ -33,8 +33,7 @@ TMP_FILE="$(mktemp)"
GENERATED_NIXFILE="pkgs/development/lua-modules/generated-packages.nix" GENERATED_NIXFILE="pkgs/development/lua-modules/generated-packages.nix"
LUAROCKS_CONFIG="$NIXPKGS_PATH/maintainers/scripts/luarocks-config.lua" LUAROCKS_CONFIG="$NIXPKGS_PATH/maintainers/scripts/luarocks-config.lua"
HEADER = """ HEADER = """/* {GENERATED_NIXFILE} is an auto-generated file -- DO NOT EDIT!
/* {GENERATED_NIXFILE} is an auto-generated file -- DO NOT EDIT!
Regenerate it with: Regenerate it with:
nixpkgs$ ./maintainers/scripts/update-luarocks-packages nixpkgs$ ./maintainers/scripts/update-luarocks-packages
@ -99,9 +98,8 @@ class LuaEditor(Editor):
header2 = textwrap.dedent( header2 = textwrap.dedent(
# header2 = inspect.cleandoc( # header2 = inspect.cleandoc(
""" """
{ self, stdenv, lib, fetchurl, fetchgit, ... } @ args: { self, stdenv, lib, fetchurl, fetchgit, callPackage, ... } @ args:
self: super: final: prev:
with self;
{ {
""") """)
f.write(header2) f.write(header2)
@ -199,6 +197,7 @@ def generate_pkg_nix(plug: LuaPlugin):
log.debug("running %s", ' '.join(cmd)) log.debug("running %s", ' '.join(cmd))
output = subprocess.check_output(cmd, text=True) output = subprocess.check_output(cmd, text=True)
output = "callPackage(" + output.strip() + ") {};\n\n"
return (plug, output) return (plug, output)
def main(): def main():

View file

@ -164,6 +164,16 @@ with lib.maintainers; {
scope = "Maintain Kodi and related packages."; scope = "Maintain Kodi and related packages.";
}; };
linux-kernel = {
members = [
TredwellGit
ma27
nequissimus
qyliss
];
scope = "Maintain the Linux kernel.";
};
mate = { mate = {
members = [ members = [
j03 j03

View file

@ -83,7 +83,8 @@
which introduced some breaking changes to the experimental OCI which introduced some breaking changes to the experimental OCI
manifest format. See manifest format. See
<link xlink:href="https://github.com/helm/community/blob/main/hips/hip-0006.md">HIP <link xlink:href="https://github.com/helm/community/blob/main/hips/hip-0006.md">HIP
6</link> for more details. 6</link> for more details. <literal>helmfile</literal> also
defaults to 0.141.0, which is the minimum compatible version.
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
@ -1554,6 +1555,47 @@ Superuser created successfully.
encapsulation. encapsulation.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
Changing systemd <literal>.socket</literal> units now restarts
them and stops the service that is activated by them.
Additionally, services with
<literal>stopOnChange = false</literal> don't break anymore
when they are socket-activated.
</para>
</listitem>
<listitem>
<para>
The <literal>virtualisation.libvirtd</literal> module has been
refactored and updated with new options:
</para>
<itemizedlist spacing="compact">
<listitem>
<para>
<literal>virtualisation.libvirtd.qemu*</literal> options
(e.g.:
<literal>virtualisation.libvirtd.qemuRunAsRoot</literal>)
were moved to
<link xlink:href="options.html#opt-virtualisation.libvirtd.qemu"><literal>virtualisation.libvirtd.qemu</literal></link>
submodule,
</para>
</listitem>
<listitem>
<para>
software TPM1/TPM2 support (e.g.: Windows 11 guests)
(<link xlink:href="options.html#opt-virtualisation.libvirtd.qemu.swtpm"><literal>virtualisation.libvirtd.qemu.swtpm</literal></link>),
</para>
</listitem>
<listitem>
<para>
custom OVMF package (e.g.:
<literal>pkgs.OVMFFull</literal> with HTTP, CSM and Secure
Boot support)
(<link xlink:href="options.html#opt-virtualisation.libvirtd.qemu.ovmf.package"><literal>virtualisation.libvirtd.qemu.ovmf.package</literal></link>).
</para>
</listitem>
</itemizedlist>
</listitem>
</itemizedlist> </itemizedlist>
</section> </section>
</section> </section>

View file

@ -29,6 +29,7 @@ In addition to numerous new and upgraded packages, this release has the followin
- Pantheon desktop has been updated to version 6. Due to changes of screen locker, if locking doesn't work for you, please try `gsettings set org.gnome.desktop.lockdown disable-lock-screen false`.
- `kubernetes-helm` now defaults to 3.7.0, which introduced some breaking changes to the experimental OCI manifest format. See [HIP 6](https://github.com/helm/community/blob/main/hips/hip-0006.md) for more details.
`helmfile` also defaults to 0.141.0, which is the minimum compatible version.
- GNOME has been upgraded to 41. Please take a look at their [Release Notes](https://help.gnome.org/misc/release-notes/41.0/) for details.
@ -449,3 +450,10 @@ In addition to numerous new and upgraded packages, this release has the followin
- The `networking` module has a new `networking.fooOverUDP` option to configure Foo-over-UDP encapsulations.
- `networking.sits` now supports Foo-over-UDP encapsulation.
- Changing systemd `.socket` units now restarts them and stops the service that is activated by them. Additionally, services with `stopOnChange = false` don't break anymore when they are socket-activated.
- The `virtualisation.libvirtd` module has been refactored and updated with new options:
- `virtualisation.libvirtd.qemu*` options (e.g.: `virtualisation.libvirtd.qemuRunAsRoot`) were moved to [`virtualisation.libvirtd.qemu`](options.html#opt-virtualisation.libvirtd.qemu) submodule,
- software TPM1/TPM2 support (e.g.: Windows 11 guests) ([`virtualisation.libvirtd.qemu.swtpm`](options.html#opt-virtualisation.libvirtd.qemu.swtpm)),
- custom OVMF package (e.g.: `pkgs.OVMFFull` with HTTP, CSM and Secure Boot support) ([`virtualisation.libvirtd.qemu.ovmf.package`](options.html#opt-virtualisation.libvirtd.qemu.ovmf.package)).
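For reference, a configuration using the new option layout could look like this sketch (option names as listed above; the values are only illustrative):

```nix
{ pkgs, ... }:

{
  virtualisation.libvirtd = {
    enable = true;
    qemu = {
      runAsRoot = false;                # was: virtualisation.libvirtd.qemuRunAsRoot
      swtpm.enable = true;              # software TPM, e.g. for Windows 11 guests
      ovmf.package = pkgs.OVMFFull;     # OVMF with HTTP, CSM and Secure Boot support
    };
  };
}
```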

View file

@ -68,9 +68,8 @@ rec {
prefixLength = 24; prefixLength = 24;
} ]; } ];
}); });
in
{ key = "ip-address"; networkConfig =
config =
{ networking.hostName = mkDefault m.fst; { networking.hostName = mkDefault m.fst;
networking.interfaces = listToAttrs interfaces; networking.interfaces = listToAttrs interfaces;
@ -96,6 +95,14 @@ rec {
in flip concatMap interfacesNumbered in flip concatMap interfacesNumbered
({ fst, snd }: qemu-common.qemuNICFlags snd fst m.snd); ({ fst, snd }: qemu-common.qemuNICFlags snd fst m.snd);
}; };
in
{ key = "ip-address";
config = networkConfig // {
# Expose the networkConfig items for tests like nixops
# that need to recreate the network config.
system.build.networkConfig = networkConfig;
};
} }
) )
(getAttr m.fst nodes) (getAttr m.fst nodes)

View file

@ -83,10 +83,13 @@ let
optionsListVisible = lib.filter (opt: opt.visible && !opt.internal) (lib.optionAttrSetToDocList options); optionsListVisible = lib.filter (opt: opt.visible && !opt.internal) (lib.optionAttrSetToDocList options);
# Customly sort option list for the man page. # Customly sort option list for the man page.
# Always ensure that the sort order matches sortXML.py!
optionsList = lib.sort optionLess optionsListDesc; optionsList = lib.sort optionLess optionsListDesc;
# Convert the list of options into an XML file. # Convert the list of options into an XML file.
optionsXML = builtins.toFile "options.xml" (builtins.toXML optionsList); # This file is *not* sorted, to save on eval time, since the docbook XML
# and the manpage depend on it and thus we evaluate this on every system rebuild.
optionsXML = builtins.toFile "options.xml" (builtins.toXML optionsListDesc);
optionsNix = builtins.listToAttrs (map (o: { name = o.name; value = removeAttrs o ["name" "visible" "internal"]; }) optionsList); optionsNix = builtins.listToAttrs (map (o: { name = o.name; value = removeAttrs o ["name" "visible" "internal"]; }) optionsList);
@ -185,9 +188,10 @@ in {
exit 1 exit 1
fi fi
${pkgs.python3Minimal}/bin/python ${./sortXML.py} $optionsXML sorted.xml
${pkgs.libxslt.bin}/bin/xsltproc \ ${pkgs.libxslt.bin}/bin/xsltproc \
--stringparam revision '${revision}' \ --stringparam revision '${revision}' \
-o intermediate.xml ${./options-to-docbook.xsl} $optionsXML -o intermediate.xml ${./options-to-docbook.xsl} sorted.xml
${pkgs.libxslt.bin}/bin/xsltproc \ ${pkgs.libxslt.bin}/bin/xsltproc \
-o "$out" ${./postprocess-option-descriptions.xsl} intermediate.xml -o "$out" ${./postprocess-option-descriptions.xsl} intermediate.xml
''; '';

View file

@ -0,0 +1,28 @@
import xml.etree.ElementTree as ET
import sys
tree = ET.parse(sys.argv[1])
# the xml tree is of the form
# <expr><list> {all options, each an attrs} </list></expr>
options = list(tree.getroot().find('list'))
def sortKey(opt):
def order(s):
if s.startswith("enable"):
return 0
if s.startswith("package"):
return 1
return 2
return [
(order(p.attrib['value']), p.attrib['value'])
for p in opt.findall('attr[@name="loc"]/list/string')
]
# always ensure that the sort order matches the order used in the nix expression!
options.sort(key=sortKey)
doc = ET.Element("expr")
newOptions = ET.SubElement(doc, "list")
newOptions.extend(options)
ET.ElementTree(doc).write(sys.argv[2], encoding='utf-8')

View file

@ -1,6 +1,6 @@
let let
pkgs = (import ../../../../../../default.nix {}); pkgs = (import ../../../../../../default.nix {});
machine = import "${pkgs.path}/nixos/lib/eval-config.nix" { machine = import (pkgs.path + "/nixos/lib/eval-config.nix") {
system = "x86_64-linux"; system = "x86_64-linux";
modules = [ modules = [
({config, ...}: { imports = [ ./system.nix ]; }) ({config, ...}: { imports = [ ./system.nix ]; })

View file

@ -68,7 +68,7 @@ mount --rbind /sys "$mountPoint/sys"
fi fi
# Run the activation script. Set $LOCALE_ARCHIVE to suppress some Perl locale warnings. # Run the activation script. Set $LOCALE_ARCHIVE to suppress some Perl locale warnings.
LOCALE_ARCHIVE="$system/sw/lib/locale/locale-archive" chroot "$mountPoint" "$system/activate" 1>&2 || true LOCALE_ARCHIVE="$system/sw/lib/locale/locale-archive" IN_NIXOS_ENTER=1 chroot "$mountPoint" "$system/activate" 1>&2 || true
# Create /tmp # Create /tmp
chroot "$mountPoint" systemd-tmpfiles --create --remove --exclude-prefix=/dev 1>&2 || true chroot "$mountPoint" systemd-tmpfiles --create --remove --exclude-prefix=/dev 1>&2 || true

View file

@ -4,7 +4,9 @@
with lib; with lib;
{ let cfg = config.programs.evince;
in {
# Added 2019-08-09 # Added 2019-08-09
imports = [ imports = [
@ -22,6 +24,13 @@ with lib;
enable = mkEnableOption enable = mkEnableOption
"Evince, the GNOME document viewer"; "Evince, the GNOME document viewer";
package = mkOption {
type = types.package;
default = pkgs.evince;
defaultText = literalExpression "pkgs.evince";
description = "Evince derivation to use.";
};
}; };
}; };
@ -31,11 +40,11 @@ with lib;
config = mkIf config.programs.evince.enable { config = mkIf config.programs.evince.enable {
environment.systemPackages = [ pkgs.evince ]; environment.systemPackages = [ cfg.package ];
services.dbus.packages = [ pkgs.evince ]; services.dbus.packages = [ cfg.package ];
systemd.packages = [ pkgs.evince ]; systemd.packages = [ cfg.package ];
}; };

View file

@ -20,7 +20,7 @@ in
}; };
config = mkOption { config = mkOption {
type = types.attrs; type = with types; attrsOf (attrsOf anything);
default = { }; default = { };
example = { example = {
init.defaultBranch = "main"; init.defaultBranch = "main";

View file

@ -7,7 +7,7 @@ let
fpm = config.services.phpfpm.pools.roundcube; fpm = config.services.phpfpm.pools.roundcube;
localDB = cfg.database.host == "localhost"; localDB = cfg.database.host == "localhost";
user = cfg.database.username; user = cfg.database.username;
phpWithPspell = pkgs.php74.withExtensions ({ enabled, all }: [ all.pspell ] ++ enabled); phpWithPspell = pkgs.php80.withExtensions ({ enabled, all }: [ all.pspell ] ++ enabled);
in in
{ {
options.services.roundcube = { options.services.roundcube = {

View file

@ -109,7 +109,7 @@ let cfg = config.services.subsonic; in {
after = [ "network.target" ]; after = [ "network.target" ];
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
script = '' script = ''
${pkgs.jre}/bin/java -Xmx${toString cfg.maxMemory}m \ ${pkgs.jre8}/bin/java -Xmx${toString cfg.maxMemory}m \
-Dsubsonic.home=${cfg.home} \ -Dsubsonic.home=${cfg.home} \
-Dsubsonic.host=${cfg.listenAddress} \ -Dsubsonic.host=${cfg.listenAddress} \
-Dsubsonic.port=${toString cfg.port} \ -Dsubsonic.port=${toString cfg.port} \

View file

@ -35,10 +35,15 @@ in
${concatMapStringsSep " " (x: "--no-collector." + x) cfg.disabledCollectors} \ ${concatMapStringsSep " " (x: "--no-collector." + x) cfg.disabledCollectors} \
--web.listen-address ${cfg.listenAddress}:${toString cfg.port} ${concatStringsSep " " cfg.extraFlags} --web.listen-address ${cfg.listenAddress}:${toString cfg.port} ${concatStringsSep " " cfg.extraFlags}
''; '';
# The systemd collector needs AF_UNIX RestrictAddressFamilies = optionals (any (collector: (collector == "logind" || collector == "systemd")) cfg.enabledCollectors) [
RestrictAddressFamilies = lib.optional (lib.any (x: x == "systemd") cfg.enabledCollectors) "AF_UNIX"; # needs access to dbus via unix sockets (logind/systemd)
"AF_UNIX"
] ++ optionals (any (collector: (collector == "network_route" || collector == "wifi")) cfg.enabledCollectors) [
# needs netlink sockets for wireless collector
"AF_NETLINK"
];
# The timex collector needs to access clock APIs # The timex collector needs to access clock APIs
ProtectClock = lib.any (x: x == "timex") cfg.disabledCollectors; ProtectClock = any (collector: collector == "timex") cfg.disabledCollectors;
}; };
}; };
} }

View file

@ -87,13 +87,20 @@ in
<note> <note>
<para>If you use the firewall consider adding the following:</para> <para>If you use the firewall consider adding the following:</para>
<programlisting> <programlisting>
networking.firewall.allowedTCPPorts = [ 139 445 ]; services.samba.openFirewall = true;
networking.firewall.allowedUDPPorts = [ 137 138 ];
</programlisting> </programlisting>
</note> </note>
''; '';
}; };
openFirewall = mkOption {
type = types.bool;
default = false;
description = ''
Whether to automatically open the necessary ports in the firewall.
'';
};
enableNmbd = mkOption { enableNmbd = mkOption {
type = types.bool; type = types.bool;
default = true; default = true;
@ -235,7 +242,10 @@ in
}; };
security.pam.services.samba = {}; security.pam.services.samba = {};
environment.systemPackages = [ config.services.samba.package ]; environment.systemPackages = [ cfg.package ];
networking.firewall.allowedTCPPorts = mkIf cfg.openFirewall [ 139 445 ];
networking.firewall.allowedUDPPorts = mkIf cfg.openFirewall [ 137 138 ];
}) })
]; ];

View file

@ -272,7 +272,7 @@ in
(mkIf cfg.ldap-proxy.enable { (mkIf cfg.ldap-proxy.enable {
systemd.services.privacyidea-ldap-proxy = let systemd.services.privacyidea-ldap-proxy = let
ldap-proxy-env = pkgs.python2.withPackages (ps: [ ps.privacyidea-ldap-proxy ]); ldap-proxy-env = pkgs.python3.withPackages (ps: [ ps.privacyidea-ldap-proxy ]);
in { in {
description = "privacyIDEA LDAP proxy"; description = "privacyIDEA LDAP proxy";
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];

View file

@ -152,6 +152,8 @@ in
install -d -m '${cfg.downloadDirPermissions}' -o '${cfg.user}' -g '${cfg.group}' '${cfg.settings.download-dir}' install -d -m '${cfg.downloadDirPermissions}' -o '${cfg.user}' -g '${cfg.group}' '${cfg.settings.download-dir}'
'' + optionalString cfg.settings.incomplete-dir-enabled '' '' + optionalString cfg.settings.incomplete-dir-enabled ''
install -d -m '${cfg.downloadDirPermissions}' -o '${cfg.user}' -g '${cfg.group}' '${cfg.settings.incomplete-dir}' install -d -m '${cfg.downloadDirPermissions}' -o '${cfg.user}' -g '${cfg.group}' '${cfg.settings.incomplete-dir}'
'' + optionalString cfg.settings.watch-dir-enabled ''
install -d -m '${cfg.downloadDirPermissions}' -o '${cfg.user}' -g '${cfg.group}' '${cfg.settings.watch-dir}'
''; '';
assertions = [ assertions = [

View file

@ -539,6 +539,69 @@ in
Specify the OAuth token URL. Specify the OAuth token URL.
''; '';
}; };
baseURL = mkOption {
type = with types; nullOr str;
default = null;
description = ''
Specify the OAuth base URL.
'';
};
userProfileURL = mkOption {
type = with types; nullOr str;
default = null;
description = ''
Specify the OAuth userprofile URL.
'';
};
userProfileUsernameAttr = mkOption {
type = with types; nullOr str;
default = null;
description = ''
Specify the name of the attribute for the username from the claim.
'';
};
userProfileDisplayNameAttr = mkOption {
type = with types; nullOr str;
default = null;
description = ''
Specify the name of the attribute for the display name from the claim.
'';
};
userProfileEmailAttr = mkOption {
type = with types; nullOr str;
default = null;
description = ''
Specify the name of the attribute for the email from the claim.
'';
};
scope = mkOption {
type = with types; nullOr str;
default = null;
description = ''
Specify the OAuth scope.
'';
};
providerName = mkOption {
type = with types; nullOr str;
default = null;
description = ''
Specify the name to be displayed for this strategy.
'';
};
rolesClaim = mkOption {
type = with types; nullOr str;
default = null;
description = ''
Specify the role claim name.
'';
};
accessRole = mkOption {
type = with types; nullOr str;
default = null;
description = ''
Specify role which should be included in the ID token roles claim to grant access
'';
};
clientID = mkOption { clientID = mkOption {
type = types.str; type = types.str;
description = '' description = ''

View file

@ -144,6 +144,8 @@ in
''; '';
}; };
caddy.enable = mkEnableOption "caddy reverse proxy to expose jitsi-meet";
prosody.enable = mkOption { prosody.enable = mkOption {
type = bool; type = bool;
default = true; default = true;
@ -322,6 +324,42 @@ in
}; };
}; };
services.caddy = mkIf cfg.caddy.enable {
enable = mkDefault true;
virtualHosts.${cfg.hostName} = {
extraConfig =
let
templatedJitsiMeet = pkgs.runCommand "templated-jitsi-meet" {} ''
cp -R ${pkgs.jitsi-meet}/* .
for file in *.html **/*.html ; do
${pkgs.sd}/bin/sd '<!--#include virtual="(.*)" -->' '{{ include "$1" }}' $file
done
rm config.js
rm interface_config.js
cp -R . $out
cp ${overrideJs "${pkgs.jitsi-meet}/config.js" "config" (recursiveUpdate defaultCfg cfg.config) cfg.extraConfig} $out/config.js
cp ${overrideJs "${pkgs.jitsi-meet}/interface_config.js" "interfaceConfig" cfg.interfaceConfig ""} $out/interface_config.js
cp ./libs/external_api.min.js $out/external_api.js
'';
in ''
handle /http-bind {
header Host ${cfg.hostName}
reverse_proxy 127.0.0.1:5280
}
handle /xmpp-websocket {
reverse_proxy 127.0.0.1:5280
}
handle {
templates
root * ${templatedJitsiMeet}
try_files {path} {path}
try_files {path} /index.html
file_server
}
'';
};
};
services.jitsi-videobridge = mkIf cfg.videobridge.enable { services.jitsi-videobridge = mkIf cfg.videobridge.enable {
enable = true; enable = true;
xmppConfigs."localhost" = { xmppConfigs."localhost" = {

View file

@ -219,6 +219,7 @@ in
] config.environment.pantheon.excludePackages); ] config.environment.pantheon.excludePackages);
programs.evince.enable = mkDefault true; programs.evince.enable = mkDefault true;
programs.evince.package = pkgs.pantheon.evince;
programs.file-roller.enable = mkDefault true; programs.file-roller.enable = mkDefault true;
# Settings from elementary-default-settings # Settings from elementary-default-settings

View file

@ -11,7 +11,6 @@ use Cwd 'abs_path';
my $out = "@out@"; my $out = "@out@";
# FIXME: maybe we should use /proc/1/exe to get the current systemd.
my $curSystemd = abs_path("/run/current-system/sw/bin"); my $curSystemd = abs_path("/run/current-system/sw/bin");
# To be robust against interruption, record what units need to be started etc. # To be robust against interruption, record what units need to be started etc.
@ -19,13 +18,16 @@ my $startListFile = "/run/nixos/start-list";
my $restartListFile = "/run/nixos/restart-list"; my $restartListFile = "/run/nixos/restart-list";
my $reloadListFile = "/run/nixos/reload-list"; my $reloadListFile = "/run/nixos/reload-list";
# Parse restart/reload requests by the activation script # Parse restart/reload requests by the activation script.
# Activation scripts may write newline-separated units to this
# file and switch-to-configuration will handle them. While
# `stopIfChanged = true` is ignored, switch-to-configuration will
# handle `restartIfChanged = false` and `reloadIfChanged = true`.
# This also works for socket-activated units.
my $restartByActivationFile = "/run/nixos/activation-restart-list"; my $restartByActivationFile = "/run/nixos/activation-restart-list";
my $reloadByActivationFile = "/run/nixos/activation-reload-list";
my $dryRestartByActivationFile = "/run/nixos/dry-activation-restart-list"; my $dryRestartByActivationFile = "/run/nixos/dry-activation-restart-list";
my $dryReloadByActivationFile = "/run/nixos/dry-activation-reload-list";
make_path("/run/nixos", { mode => 0755 }); make_path("/run/nixos", { mode => oct(755) });
my $action = shift @ARGV; my $action = shift @ARGV;
@ -147,6 +149,92 @@ sub fingerprintUnit {
return abs_path($s) . (-f "${s}.d/overrides.conf" ? " " . abs_path "${s}.d/overrides.conf" : ""); return abs_path($s) . (-f "${s}.d/overrides.conf" ? " " . abs_path "${s}.d/overrides.conf" : "");
} }
sub handleModifiedUnit {
my ($unit, $baseName, $newUnitFile, $activePrev, $unitsToStop, $unitsToStart, $unitsToReload, $unitsToRestart, $unitsToSkip) = @_;
if ($unit eq "sysinit.target" || $unit eq "basic.target" || $unit eq "multi-user.target" || $unit eq "graphical.target" || $unit =~ /\.slice$/ || $unit =~ /\.path$/) {
# Do nothing. These cannot be restarted directly.
# Slices and Paths don't have to be restarted since
# properties (resource limits and inotify watches)
# seem to get applied on daemon-reload.
} elsif ($unit =~ /\.mount$/) {
# Reload the changed mount unit to force a remount.
$unitsToReload->{$unit} = 1;
recordUnit($reloadListFile, $unit);
} else {
my $unitInfo = parseUnit($newUnitFile);
if (boolIsTrue($unitInfo->{'X-ReloadIfChanged'} // "no")) {
$unitsToReload->{$unit} = 1;
recordUnit($reloadListFile, $unit);
}
elsif (!boolIsTrue($unitInfo->{'X-RestartIfChanged'} // "yes") || boolIsTrue($unitInfo->{'RefuseManualStop'} // "no") || boolIsTrue($unitInfo->{'X-OnlyManualStart'} // "no")) {
$unitsToSkip->{$unit} = 1;
} else {
# If this unit is socket-activated, then stop it instead
# of restarting it to make sure the new version of it is
# socket-activated.
my $socketActivated = 0;
if ($unit =~ /\.service$/) {
my @sockets = split / /, ($unitInfo->{Sockets} // "");
if (scalar @sockets == 0) {
@sockets = ("$baseName.socket");
}
foreach my $socket (@sockets) {
if (-e "$out/etc/systemd/system/$socket") {
$socketActivated = 1;
$unitsToStop->{$unit} = 1;
# If the socket was not running previously,
# start it now.
if (not defined $activePrev->{$socket}) {
$unitsToStart->{$socket} = 1;
}
}
}
}
# Don't do the rest of this for socket-activated units
# because we handled these above where we stop the unit.
# Since only services can be socket-activated, the
# following condition always evaluates to `true` for
# non-service units.
if ($socketActivated) {
return;
}
# If we are restarting a socket, also stop the corresponding
# service. This is required because restarting a socket
# when the service is already activated fails.
if ($unit =~ /\.socket$/) {
my $service = $unitInfo->{Service} // "";
if ($service eq "") {
$service = "$baseName.service";
}
if (defined $activePrev->{$service}) {
$unitsToStop->{$service} = 1;
}
$unitsToRestart->{$unit} = 1;
recordUnit($restartListFile, $unit);
} else {
# Always restart non-services instead of stopping and starting them
# because it doesn't make sense to stop them with a config from
# the old evaluation.
if (!boolIsTrue($unitInfo->{'X-StopIfChanged'} // "yes") || $unit !~ /\.service$/) {
# This unit should be restarted instead of
# stopped and started.
$unitsToRestart->{$unit} = 1;
recordUnit($restartListFile, $unit);
} else {
# We write to a file to ensure that the
# service gets restarted if we're interrupted.
$unitsToStart->{$unit} = 1;
recordUnit($startListFile, $unit);
$unitsToStop->{$unit} = 1;
}
}
}
}
}
# Figure out what units need to be stopped, started, restarted or reloaded. # Figure out what units need to be stopped, started, restarted or reloaded.
my (%unitsToStop, %unitsToSkip, %unitsToStart, %unitsToRestart, %unitsToReload); my (%unitsToStop, %unitsToSkip, %unitsToStart, %unitsToRestart, %unitsToReload);
@ -219,65 +307,7 @@ while (my ($unit, $state) = each %{$activePrev}) {
} }
elsif (fingerprintUnit($prevUnitFile) ne fingerprintUnit($newUnitFile)) { elsif (fingerprintUnit($prevUnitFile) ne fingerprintUnit($newUnitFile)) {
if ($unit eq "sysinit.target" || $unit eq "basic.target" || $unit eq "multi-user.target" || $unit eq "graphical.target") { handleModifiedUnit($unit, $baseName, $newUnitFile, $activePrev, \%unitsToStop, \%unitsToStart, \%unitsToReload, \%unitsToRestart, %unitsToSkip);
# Do nothing. These cannot be restarted directly.
} elsif ($unit =~ /\.mount$/) {
# Reload the changed mount unit to force a remount.
$unitsToReload{$unit} = 1;
recordUnit($reloadListFile, $unit);
} elsif ($unit =~ /\.socket$/ || $unit =~ /\.path$/ || $unit =~ /\.slice$/) {
# FIXME: do something?
} else {
my $unitInfo = parseUnit($newUnitFile);
if (boolIsTrue($unitInfo->{'X-ReloadIfChanged'} // "no")) {
$unitsToReload{$unit} = 1;
recordUnit($reloadListFile, $unit);
}
elsif (!boolIsTrue($unitInfo->{'X-RestartIfChanged'} // "yes") || boolIsTrue($unitInfo->{'RefuseManualStop'} // "no") || boolIsTrue($unitInfo->{'X-OnlyManualStart'} // "no")) {
$unitsToSkip{$unit} = 1;
} else {
if (!boolIsTrue($unitInfo->{'X-StopIfChanged'} // "yes")) {
# This unit should be restarted instead of
# stopped and started.
$unitsToRestart{$unit} = 1;
recordUnit($restartListFile, $unit);
} else {
# If this unit is socket-activated, then stop the
# socket unit(s) as well, and restart the
# socket(s) instead of the service.
my $socketActivated = 0;
if ($unit =~ /\.service$/) {
my @sockets = split / /, ($unitInfo->{Sockets} // "");
if (scalar @sockets == 0) {
@sockets = ("$baseName.socket");
}
foreach my $socket (@sockets) {
if (defined $activePrev->{$socket}) {
$unitsToStop{$socket} = 1;
# Only restart sockets that actually
# exist in new configuration:
if (-e "$out/etc/systemd/system/$socket") {
$unitsToStart{$socket} = 1;
recordUnit($startListFile, $socket);
$socketActivated = 1;
}
}
}
}
# If the unit is not socket-activated, record
# that this unit needs to be started below.
# We write this to a file to ensure that the
# service gets restarted if we're interrupted.
if (!$socketActivated) {
$unitsToStart{$unit} = 1;
recordUnit($startListFile, $unit);
}
$unitsToStop{$unit} = 1;
}
}
}
} }
} }
} }
@ -362,8 +392,6 @@ sub filterUnits {
} }
my @unitsToStopFiltered = filterUnits(\%unitsToStop); my @unitsToStopFiltered = filterUnits(\%unitsToStop);
my @unitsToStartFiltered = filterUnits(\%unitsToStart);
# Show dry-run actions. # Show dry-run actions.
if ($action eq "dry-activate") { if ($action eq "dry-activate") {
@ -375,21 +403,44 @@ if ($action eq "dry-activate") {
print STDERR "would activate the configuration...\n"; print STDERR "would activate the configuration...\n";
system("$out/dry-activate", "$out"); system("$out/dry-activate", "$out");
$unitsToRestart{$_} = 1 foreach # Handle the activation script requesting the restart or reload of a unit.
split('\n', read_file($dryRestartByActivationFile, err_mode => 'quiet') // ""); my %unitsToAlsoStop;
my %unitsToAlsoSkip;
foreach (split('\n', read_file($dryRestartByActivationFile, err_mode => 'quiet') // "")) {
my $unit = $_;
my $baseUnit = $unit;
my $newUnitFile = "$out/etc/systemd/system/$baseUnit";
$unitsToReload{$_} = 1 foreach # Detect template instances.
split('\n', read_file($dryReloadByActivationFile, err_mode => 'quiet') // ""); if (!-e $newUnitFile && $unit =~ /^(.*)@[^\.]*\.(.*)$/) {
$baseUnit = "$1\@.$2";
$newUnitFile = "$out/etc/systemd/system/$baseUnit";
}
my $baseName = $baseUnit;
$baseName =~ s/\.[a-z]*$//;
handleModifiedUnit($unit, $baseName, $newUnitFile, $activePrev, \%unitsToAlsoStop, \%unitsToStart, \%unitsToReload, \%unitsToRestart, %unitsToAlsoSkip);
}
unlink($dryRestartByActivationFile);
my @unitsToAlsoStopFiltered = filterUnits(\%unitsToAlsoStop);
if (scalar(keys %unitsToAlsoStop) > 0) {
print STDERR "would stop the following units as well: ", join(", ", @unitsToAlsoStopFiltered), "\n"
if scalar @unitsToAlsoStopFiltered;
}
print STDERR "would NOT restart the following changed units as well: ", join(", ", sort(keys %unitsToAlsoSkip)), "\n"
if scalar(keys %unitsToAlsoSkip) > 0;
print STDERR "would restart systemd\n" if $restartSystemd; print STDERR "would restart systemd\n" if $restartSystemd;
print STDERR "would restart the following units: ", join(", ", sort(keys %unitsToRestart)), "\n"
if scalar(keys %unitsToRestart) > 0;
print STDERR "would start the following units: ", join(", ", @unitsToStartFiltered), "\n"
if scalar @unitsToStartFiltered;
print STDERR "would reload the following units: ", join(", ", sort(keys %unitsToReload)), "\n" print STDERR "would reload the following units: ", join(", ", sort(keys %unitsToReload)), "\n"
if scalar(keys %unitsToReload) > 0; if scalar(keys %unitsToReload) > 0;
unlink($dryRestartByActivationFile); print STDERR "would restart the following units: ", join(", ", sort(keys %unitsToRestart)), "\n"
unlink($dryReloadByActivationFile); if scalar(keys %unitsToRestart) > 0;
my @unitsToStartFiltered = filterUnits(\%unitsToStart);
print STDERR "would start the following units: ", join(", ", @unitsToStartFiltered), "\n"
if scalar @unitsToStartFiltered;
exit 0; exit 0;
} }
@ -400,7 +451,7 @@ if (scalar (keys %unitsToStop) > 0) {
print STDERR "stopping the following units: ", join(", ", @unitsToStopFiltered), "\n" print STDERR "stopping the following units: ", join(", ", @unitsToStopFiltered), "\n"
if scalar @unitsToStopFiltered; if scalar @unitsToStopFiltered;
# Use current version of systemctl binary before daemon is reexeced. # Use current version of systemctl binary before daemon is reexeced.
system("$curSystemd/systemctl", "stop", "--", sort(keys %unitsToStop)); # FIXME: ignore errors? system("$curSystemd/systemctl", "stop", "--", sort(keys %unitsToStop));
} }
print STDERR "NOT restarting the following changed units: ", join(", ", sort(keys %unitsToSkip)), "\n" print STDERR "NOT restarting the following changed units: ", join(", ", sort(keys %unitsToSkip)), "\n"
@ -414,12 +465,38 @@ system("$out/activate", "$out") == 0 or $res = 2;
# Handle the activation script requesting the restart or reload of a unit. # Handle the activation script requesting the restart or reload of a unit.
# We can only restart and reload (not stop/start) because the units to be # We can only restart and reload (not stop/start) because the units to be
# stopped are already stopped before the activation script is run. # stopped are already stopped before the activation script is run. We do however
$unitsToRestart{$_} = 1 foreach # make an exception for services that are socket-activated and that have to be stopped
split('\n', read_file($restartByActivationFile, err_mode => 'quiet') // ""); # instead of being restarted.
my %unitsToAlsoStop;
my %unitsToAlsoSkip;
foreach (split('\n', read_file($restartByActivationFile, err_mode => 'quiet') // "")) {
my $unit = $_;
my $baseUnit = $unit;
my $newUnitFile = "$out/etc/systemd/system/$baseUnit";
$unitsToReload{$_} = 1 foreach # Detect template instances.
split('\n', read_file($reloadByActivationFile, err_mode => 'quiet') // ""); if (!-e $newUnitFile && $unit =~ /^(.*)@[^\.]*\.(.*)$/) {
$baseUnit = "$1\@.$2";
$newUnitFile = "$out/etc/systemd/system/$baseUnit";
}
my $baseName = $baseUnit;
$baseName =~ s/\.[a-z]*$//;
handleModifiedUnit($unit, $baseName, $newUnitFile, $activePrev, \%unitsToAlsoStop, \%unitsToStart, \%unitsToReload, \%unitsToRestart, %unitsToAlsoSkip);
}
unlink($restartByActivationFile);
my @unitsToAlsoStopFiltered = filterUnits(\%unitsToAlsoStop);
if (scalar(keys %unitsToAlsoStop) > 0) {
print STDERR "stopping the following units as well: ", join(", ", @unitsToAlsoStopFiltered), "\n"
if scalar @unitsToAlsoStopFiltered;
system("$curSystemd/systemctl", "stop", "--", sort(keys %unitsToAlsoStop));
}
print STDERR "NOT restarting the following changed units as well: ", join(", ", sort(keys %unitsToAlsoSkip)), "\n"
if scalar(keys %unitsToAlsoSkip) > 0;
# Restart systemd if necessary. Note that this is done using the # Restart systemd if necessary. Note that this is done using the
# current version of systemd, just in case the new one has trouble # current version of systemd, just in case the new one has trouble
@ -460,14 +537,40 @@ if (scalar(keys %unitsToReload) > 0) {
print STDERR "reloading the following units: ", join(", ", sort(keys %unitsToReload)), "\n"; print STDERR "reloading the following units: ", join(", ", sort(keys %unitsToReload)), "\n";
system("@systemd@/bin/systemctl", "reload", "--", sort(keys %unitsToReload)) == 0 or $res = 4; system("@systemd@/bin/systemctl", "reload", "--", sort(keys %unitsToReload)) == 0 or $res = 4;
unlink($reloadListFile); unlink($reloadListFile);
unlink($reloadByActivationFile);
} }
# Restart changed services (those that have to be restarted rather # Restart changed services (those that have to be restarted rather
# than stopped and started). # than stopped and started).
if (scalar(keys %unitsToRestart) > 0) { if (scalar(keys %unitsToRestart) > 0) {
print STDERR "restarting the following units: ", join(", ", sort(keys %unitsToRestart)), "\n"; print STDERR "restarting the following units: ", join(", ", sort(keys %unitsToRestart)), "\n";
system("@systemd@/bin/systemctl", "restart", "--", sort(keys %unitsToRestart)) == 0 or $res = 4;
# We split the units to be restarted into sockets and non-sockets.
# This is because restarting sockets may fail which is not bad by
# itself but which will prevent changes on the sockets. We usually
# restart the socket and stop the service before that. Restarting
# the socket will fail however when the service was re-activated
# in the meantime. There is no proper way to prevent that from happening.
my @unitsWithErrorHandling = grep { $_ !~ /\.socket$/ } sort(keys %unitsToRestart);
my @unitsWithoutErrorHandling = grep { $_ =~ /\.socket$/ } sort(keys %unitsToRestart);
if (scalar(@unitsWithErrorHandling) > 0) {
system("@systemd@/bin/systemctl", "restart", "--", @unitsWithErrorHandling) == 0 or $res = 4;
}
if (scalar(@unitsWithoutErrorHandling) > 0) {
# Don't print warnings from systemctl
no warnings 'once';
open(OLDERR, ">&", \*STDERR);
close(STDERR);
my $ret = system("@systemd@/bin/systemctl", "restart", "--", @unitsWithoutErrorHandling);
# Print stderr again
open(STDERR, ">&OLDERR");
if ($ret ne 0) {
print STDERR "warning: some sockets failed to restart. Please check your journal (journalctl -eb) and act accordingly.\n";
}
}
unlink($restartListFile); unlink($restartListFile);
unlink($restartByActivationFile); unlink($restartByActivationFile);
} }
@ -478,6 +581,7 @@ if (scalar(keys %unitsToRestart) > 0) {
# that are symlinks to other units. We shouldn't start both at the # that are symlinks to other units. We shouldn't start both at the
# same time because we'll get a "Failed to add path to set" error from # same time because we'll get a "Failed to add path to set" error from
# systemd. # systemd.
my @unitsToStartFiltered = filterUnits(\%unitsToStart);
print STDERR "starting the following units: ", join(", ", @unitsToStartFiltered), "\n" print STDERR "starting the following units: ", join(", ", @unitsToStartFiltered), "\n"
if scalar @unitsToStartFiltered; if scalar @unitsToStartFiltered;
system("@systemd@/bin/systemctl", "start", "--", sort(keys %unitsToStart)) == 0 or $res = 4; system("@systemd@/bin/systemctl", "start", "--", sort(keys %unitsToStart)) == 0 or $res = 4;
@ -485,7 +589,7 @@ unlink($startListFile);
# Print failed and new units. # Print failed and new units.
my (@failed, @new, @restarting); my (@failed, @new);
my $activeNew = getActiveUnits; my $activeNew = getActiveUnits;
while (my ($unit, $state) = each %{$activeNew}) { while (my ($unit, $state) = each %{$activeNew}) {
if ($state->{state} eq "failed") { if ($state->{state} eq "failed") {
@ -501,7 +605,9 @@ while (my ($unit, $state) = each %{$activeNew}) {
push @failed, $unit; push @failed, $unit;
} }
} }
elsif ($state->{state} ne "failed" && !defined $activePrev->{$unit}) { # Ignore scopes since they are not managed by this script but rather
# created and managed by third-party services via the systemd dbus API.
elsif ($state->{state} ne "failed" && !defined $activePrev->{$unit} && $unit !~ /\.scope$/) {
push @new, $unit; push @new, $unit;
} }
} }

View file

@ -84,6 +84,13 @@ let
export localeArchive="${config.i18n.glibcLocales}/lib/locale/locale-archive" export localeArchive="${config.i18n.glibcLocales}/lib/locale/locale-archive"
substituteAll ${./switch-to-configuration.pl} $out/bin/switch-to-configuration substituteAll ${./switch-to-configuration.pl} $out/bin/switch-to-configuration
chmod +x $out/bin/switch-to-configuration chmod +x $out/bin/switch-to-configuration
${optionalString (pkgs.stdenv.hostPlatform == pkgs.stdenv.buildPlatform) ''
if ! output=$($perl/bin/perl -c $out/bin/switch-to-configuration 2>&1); then
echo "switch-to-configuration syntax is not valid:"
echo "$output"
exit 1
fi
''}
echo -n "${toString config.system.extraDependencies}" > $out/extra-dependencies echo -n "${toString config.system.extraDependencies}" > $out/extra-dependencies

View file

@ -332,6 +332,7 @@ let
if [ $? == 0 ]; then if [ $? == 0 ]; then
echo -ne "$new_salt\n$new_iterations" > /crypt-storage${dev.yubikey.storage.path} echo -ne "$new_salt\n$new_iterations" > /crypt-storage${dev.yubikey.storage.path}
sync /crypt-storage${dev.yubikey.storage.path}
else else
echo "Warning: Could not update LUKS key, current challenge persists!" echo "Warning: Could not update LUKS key, current challenge persists!"
fi fi

View file

@ -13,23 +13,140 @@ let
''; '';
ovmfFilePrefix = if pkgs.stdenv.isAarch64 then "AAVMF" else "OVMF"; ovmfFilePrefix = if pkgs.stdenv.isAarch64 then "AAVMF" else "OVMF";
qemuConfigFile = pkgs.writeText "qemu.conf" '' qemuConfigFile = pkgs.writeText "qemu.conf" ''
${optionalString cfg.qemuOvmf '' ${optionalString cfg.qemu.ovmf.enable ''
nvram = [ "/run/libvirt/nix-ovmf/${ovmfFilePrefix}_CODE.fd:/run/libvirt/nix-ovmf/${ovmfFilePrefix}_VARS.fd" ] nvram = [ "/run/libvirt/nix-ovmf/${ovmfFilePrefix}_CODE.fd:/run/libvirt/nix-ovmf/${ovmfFilePrefix}_VARS.fd" ]
''} ''}
${optionalString (!cfg.qemuRunAsRoot) '' ${optionalString (!cfg.qemu.runAsRoot) ''
user = "qemu-libvirtd" user = "qemu-libvirtd"
group = "qemu-libvirtd" group = "qemu-libvirtd"
''} ''}
${cfg.qemuVerbatimConfig} ${cfg.qemu.verbatimConfig}
''; '';
dirName = "libvirt"; dirName = "libvirt";
subDirs = list: [ dirName ] ++ map (e: "${dirName}/${e}") list; subDirs = list: [ dirName ] ++ map (e: "${dirName}/${e}") list;
in { ovmfModule = types.submodule {
options = {
enable = mkOption {
type = types.bool;
default = true;
description = ''
Allows libvirtd to take advantage of OVMF when creating new
QEMU VMs with UEFI boot.
'';
};
package = mkOption {
type = types.package;
default = pkgs.OVMF;
defaultText = literalExpression "pkgs.OVMF";
example = literalExpression "pkgs.OVMFFull";
description = ''
OVMF package to use.
'';
};
};
};
swtpmModule = types.submodule {
options = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Allows libvirtd to use swtpm to create an emulated TPM.
'';
};
package = mkOption {
type = types.package;
default = pkgs.swtpm;
defaultText = literalExpression "pkgs.swtpm";
description = ''
swtpm package to use.
'';
};
};
};
qemuModule = types.submodule {
options = {
package = mkOption {
type = types.package;
default = pkgs.qemu;
defaultText = literalExpression "pkgs.qemu";
description = ''
Qemu package to use with libvirt.
`pkgs.qemu` can emulate alien architectures (e.g. aarch64 on x86)
`pkgs.qemu_kvm` saves disk space allowing to emulate only host architectures.
'';
};
runAsRoot = mkOption {
type = types.bool;
default = true;
description = ''
If true, libvirtd runs qemu as root.
If false, libvirtd runs qemu as unprivileged user qemu-libvirtd.
Changing this option to false may cause file permission issues
for existing guests. To fix these, manually change ownership
of affected files in /var/lib/libvirt/qemu to qemu-libvirtd.
'';
};
verbatimConfig = mkOption {
type = types.lines;
default = ''
namespaces = []
'';
description = ''
Contents written to the qemu configuration file, qemu.conf.
Make sure to include a proper namespace configuration when
supplying custom configuration.
'';
};
ovmf = mkOption {
type = ovmfModule;
default = { };
description = ''
QEMU's OVMF options.
'';
};
swtpm = mkOption {
type = swtpmModule;
default = { };
description = ''
QEMU's swtpm options.
'';
};
};
};
in
{
imports = [ imports = [
(mkRemovedOptionModule [ "virtualisation" "libvirtd" "enableKVM" ] (mkRemovedOptionModule [ "virtualisation" "libvirtd" "enableKVM" ]
"Set the option `virtualisation.libvirtd.qemuPackage' instead.") "Set the option `virtualisation.libvirtd.qemu.package' instead.")
(mkRenamedOptionModule
[ "virtualisation" "libvirtd" "qemuPackage" ]
[ "virtualisation" "libvirtd" "qemu" "package" ])
(mkRenamedOptionModule
[ "virtualisation" "libvirtd" "qemuRunAsRoot" ]
[ "virtualisation" "libvirtd" "qemu" "runAsRoot" ])
(mkRenamedOptionModule
[ "virtualisation" "libvirtd" "qemuVerbatimConfig" ]
[ "virtualisation" "libvirtd" "qemu" "verbatimConfig" ])
(mkRenamedOptionModule
[ "virtualisation" "libvirtd" "qemuOvmf" ]
[ "virtualisation" "libvirtd" "qemu" "ovmf" "enable" ])
(mkRenamedOptionModule
[ "virtualisation" "libvirtd" "qemuOvmfPackage" ]
[ "virtualisation" "libvirtd" "qemu" "ovmf" "package" ])
(mkRenamedOptionModule
[ "virtualisation" "libvirtd" "qemuSwtpm" ]
[ "virtualisation" "libvirtd" "qemu" "swtpm" "enable" ])
]; ];
###### interface ###### interface
@ -56,17 +173,6 @@ in {
''; '';
}; };
qemuPackage = mkOption {
type = types.package;
default = pkgs.qemu;
defaultText = literalExpression "pkgs.qemu";
description = ''
Qemu package to use with libvirt.
`pkgs.qemu` can emulate alien architectures (e.g. aarch64 on x86)
`pkgs.qemu_kvm` saves disk space allowing to emulate only host architectures.
'';
};
extraConfig = mkOption { extraConfig = mkOption {
type = types.lines; type = types.lines;
default = ""; default = "";
@ -76,39 +182,6 @@ in {
''; '';
}; };
qemuRunAsRoot = mkOption {
type = types.bool;
default = true;
description = ''
If true, libvirtd runs qemu as root.
If false, libvirtd runs qemu as unprivileged user qemu-libvirtd.
Changing this option to false may cause file permission issues
for existing guests. To fix these, manually change ownership
of affected files in /var/lib/libvirt/qemu to qemu-libvirtd.
'';
};
qemuVerbatimConfig = mkOption {
type = types.lines;
default = ''
namespaces = []
'';
description = ''
Contents written to the qemu configuration file, qemu.conf.
Make sure to include a proper namespace configuration when
supplying custom configuration.
'';
};
qemuOvmf = mkOption {
type = types.bool;
default = true;
description = ''
Allows libvirtd to take advantage of OVMF when creating new
QEMU VMs with UEFI boot.
'';
};
extraOptions = mkOption { extraOptions = mkOption {
type = types.listOf types.str; type = types.listOf types.str;
default = [ ]; default = [ ];
@ -119,7 +192,7 @@ in {
}; };
onBoot = mkOption { onBoot = mkOption {
type = types.enum ["start" "ignore" ]; type = types.enum [ "start" "ignore" ];
default = "start"; default = "start";
description = '' description = ''
Specifies the action to be done to / on the guests when the host boots. Specifies the action to be done to / on the guests when the host boots.
@ -131,7 +204,7 @@ in {
}; };
onShutdown = mkOption { onShutdown = mkOption {
type = types.enum ["shutdown" "suspend" ]; type = types.enum [ "shutdown" "suspend" ];
default = "suspend"; default = "suspend";
description = '' description = ''
When shutting down / restarting the host what method should When shutting down / restarting the host what method should
@ -149,6 +222,13 @@ in {
''; '';
}; };
qemu = mkOption {
type = qemuModule;
default = { };
description = ''
QEMU related options.
'';
};
}; };
@ -161,13 +241,19 @@ in {
assertion = config.security.polkit.enable; assertion = config.security.polkit.enable;
message = "The libvirtd module currently requires Polkit to be enabled ('security.polkit.enable = true')."; message = "The libvirtd module currently requires Polkit to be enabled ('security.polkit.enable = true').";
} }
{
assertion = builtins.elem "fd" cfg.qemu.ovmf.package.outputs;
message = "The option 'virtualisation.libvirtd.qemuOvmfPackage' needs a package that has an 'fd' output.";
}
]; ];
environment = { environment = {
# this file is expected in /etc/qemu and not sysconfdir (/var/lib) # this file is expected in /etc/qemu and not sysconfdir (/var/lib)
etc."qemu/bridge.conf".text = lib.concatMapStringsSep "\n" (e: etc."qemu/bridge.conf".text = lib.concatMapStringsSep "\n"
"allow ${e}") cfg.allowedBridges; (e:
systemPackages = with pkgs; [ libressl.nc iptables cfg.package cfg.qemuPackage ]; "allow ${e}")
cfg.allowedBridges;
systemPackages = with pkgs; [ libressl.nc iptables cfg.package cfg.qemu.package ];
etc.ethertypes.source = "${pkgs.ebtables}/etc/ethertypes"; etc.ethertypes.source = "${pkgs.ebtables}/etc/ethertypes";
}; };
@ -209,17 +295,17 @@ in {
cp -f ${qemuConfigFile} /var/lib/${dirName}/qemu.conf cp -f ${qemuConfigFile} /var/lib/${dirName}/qemu.conf
# stable (not GC'able as in /nix/store) paths for using in <emulator> section of xml configs # stable (not GC'able as in /nix/store) paths for using in <emulator> section of xml configs
for emulator in ${cfg.package}/libexec/libvirt_lxc ${cfg.qemuPackage}/bin/qemu-kvm ${cfg.qemuPackage}/bin/qemu-system-*; do for emulator in ${cfg.package}/libexec/libvirt_lxc ${cfg.qemu.package}/bin/qemu-kvm ${cfg.qemu.package}/bin/qemu-system-*; do
ln -s --force "$emulator" /run/${dirName}/nix-emulators/ ln -s --force "$emulator" /run/${dirName}/nix-emulators/
done done
for helper in libexec/qemu-bridge-helper bin/qemu-pr-helper; do for helper in libexec/qemu-bridge-helper bin/qemu-pr-helper; do
ln -s --force ${cfg.qemuPackage}/$helper /run/${dirName}/nix-helpers/ ln -s --force ${cfg.qemu.package}/$helper /run/${dirName}/nix-helpers/
done done
${optionalString cfg.qemuOvmf '' ${optionalString cfg.qemu.ovmf.enable ''
ln -s --force ${pkgs.OVMF.fd}/FV/${ovmfFilePrefix}_CODE.fd /run/${dirName}/nix-ovmf/ ln -s --force ${cfg.qemu.ovmf.package.fd}/FV/${ovmfFilePrefix}_CODE.fd /run/${dirName}/nix-ovmf/
ln -s --force ${pkgs.OVMF.fd}/FV/${ovmfFilePrefix}_VARS.fd /run/${dirName}/nix-ovmf/ ln -s --force ${cfg.qemu.ovmf.package.fd}/FV/${ovmfFilePrefix}_VARS.fd /run/${dirName}/nix-ovmf/
''} ''}
''; '';
@ -238,12 +324,17 @@ in {
++ optional vswitch.enable "ovs-vswitchd.service"; ++ optional vswitch.enable "ovs-vswitchd.service";
environment.LIBVIRTD_ARGS = escapeShellArgs ( environment.LIBVIRTD_ARGS = escapeShellArgs (
[ "--config" configFile [
"--timeout" "120" # from ${libvirt}/var/lib/sysconfig/libvirtd "--config"
] ++ cfg.extraOptions); configFile
"--timeout"
"120" # from ${libvirt}/var/lib/sysconfig/libvirtd
] ++ cfg.extraOptions
);
path = [ cfg.qemuPackage ] # libvirtd requires qemu-img to manage disk images path = [ cfg.qemu.package ] # libvirtd requires qemu-img to manage disk images
++ optional vswitch.enable vswitch.package; ++ optional vswitch.enable vswitch.package
++ optional cfg.qemu.swtpm.enable cfg.qemu.swtpm.package;
serviceConfig = { serviceConfig = {
Type = "notify"; Type = "notify";

View file

@ -311,6 +311,7 @@ in
nitter = handleTest ./nitter.nix {}; nitter = handleTest ./nitter.nix {};
nix-serve = handleTest ./nix-ssh-serve.nix {}; nix-serve = handleTest ./nix-ssh-serve.nix {};
nix-ssh-serve = handleTest ./nix-ssh-serve.nix {}; nix-ssh-serve = handleTest ./nix-ssh-serve.nix {};
nixops = handleTest ./nixops/default.nix {};
nixos-generate-config = handleTest ./nixos-generate-config.nix {}; nixos-generate-config = handleTest ./nixos-generate-config.nix {};
node-red = handleTest ./node-red.nix {}; node-red = handleTest ./node-red.nix {};
nomad = handleTest ./nomad.nix {}; nomad = handleTest ./nomad.nix {};

View file

@ -383,5 +383,18 @@ import ./make-test-python.nix ({ pkgs, ... }: {
docker.succeed( docker.succeed(
"tar -tf ${examples.exportBash} | grep '\./bin/bash' > /dev/null" "tar -tf ${examples.exportBash} | grep '\./bin/bash' > /dev/null"
) )
with subtest("Ensure bare paths in contents are loaded correctly"):
docker.succeed(
"docker load --input='${examples.build-image-with-path}'",
"docker run --rm build-image-with-path bash -c '[[ -e /hello.txt ]]'",
"docker rmi build-image-with-path",
)
docker.succeed(
"${examples.layered-image-with-path} | docker load",
"docker run --rm layered-image-with-path bash -c '[[ -e /hello.txt ]]'",
"docker rmi layered-image-with-path",
)
''; '';
}) })
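
The new subtest loads two example images whose `contents` list a bare path instead of a derivation. The example definitions themselves are outside this diff; a rough sketch of what they might look like (names taken from the test, file contents assumed):

```nix
{ dockerTools, bashInteractive }:
{
  # Sketch only: the real definitions live in the dockerTools examples file
  # and may differ in detail.
  build-image-with-path = dockerTools.buildImage {
    name = "build-image-with-path";
    tag = "latest";
    # ./hello.txt is a bare path, not a derivation
    contents = [ bashInteractive ./hello.txt ];
  };

  layered-image-with-path = dockerTools.streamLayeredImage {
    name = "layered-image-with-path";
    tag = "latest";
    contents = [ bashInteractive ./hello.txt ];
  };
}
```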

View file

@ -0,0 +1,115 @@
{ pkgs, ... }:
let
inherit (pkgs) lib;
tests = {
# TODO: uncomment stable
# - Blocked on https://github.com/NixOS/nixpkgs/issues/138584 which has a
# PR in staging: https://github.com/NixOS/nixpkgs/pull/139986
# - Alternatively, blocked on a NixOps 2 release
# https://github.com/NixOS/nixops/issues/1242
# stable = testsLegacyNetwork { nixopsPkg = pkgs.nixops; };
unstable = testsForPackage { nixopsPkg = pkgs.nixopsUnstable; };
# inherit testsForPackage;
};
testsForPackage = lib.makeOverridable (args: lib.recurseIntoAttrs {
legacyNetwork = testLegacyNetwork args;
});
testLegacyNetwork = { nixopsPkg }: pkgs.nixosTest ({
nodes = {
deployer = { config, lib, nodes, pkgs, ... }: {
imports = [ ../../modules/installer/cd-dvd/channel.nix ];
environment.systemPackages = [ nixopsPkg ];
nix.binaryCaches = lib.mkForce [ ];
users.users.person.isNormalUser = true;
virtualisation.writableStore = true;
virtualisation.memorySize = 1024 /*MiB*/;
virtualisation.pathsInNixDB = [
pkgs.hello
pkgs.figlet
# This includes build dependencies all the way down. Not efficient,
# but we do need build deps to an *arbitrary* depth, which is hard to
# determine.
(allDrvOutputs nodes.server.config.system.build.toplevel)
];
};
server = { lib, ... }: {
imports = [ ./legacy/base-configuration.nix ];
};
};
testScript = { nodes }:
let
deployerSetup = pkgs.writeScript "deployerSetup" ''
#!${pkgs.runtimeShell}
set -eux -o pipefail
cp --no-preserve=mode -r ${./legacy} unicorn
cp --no-preserve=mode ${../ssh-keys.nix} unicorn/ssh-keys.nix
mkdir -p ~/.ssh
cp ${snakeOilPrivateKey} ~/.ssh/id_ed25519
chmod 0400 ~/.ssh/id_ed25519
'';
serverNetworkJSON = pkgs.writeText "server-network.json"
(builtins.toJSON nodes.server.config.system.build.networkConfig);
in
''
import shlex
def deployer_do(cmd):
cmd = shlex.quote(cmd)
return deployer.succeed(f"su person -l -c {cmd} &>/dev/console")
start_all()
deployer_do("cat /etc/hosts")
deployer_do("${deployerSetup}")
deployer_do("cp ${serverNetworkJSON} unicorn/server-network.json")
# Establish that ssh works, regardless of nixops
# Easy way to accept the server host key too.
server.wait_for_open_port(22)
deployer.wait_for_unit("network.target")
# Put newlines on console, to flush the console reader's line buffer
# in case nixops' last output did not end in a newline, as is the case
# with a status line (if implemented?)
deployer.succeed("while sleep 60s; do echo [60s passed] >/dev/console; done &")
deployer_do("cd ~/unicorn; ssh -oStrictHostKeyChecking=accept-new root@server echo hi")
# Create and deploy
deployer_do("cd ~/unicorn; nixops create")
deployer_do("cd ~/unicorn; nixops deploy --confirm")
deployer_do("cd ~/unicorn; nixops ssh server 'hello | figlet'")
'';
});
inherit (import ../ssh-keys.nix pkgs) snakeOilPrivateKey snakeOilPublicKey;
/*
Return a store path with a closure containing everything including
derivations and all build dependency outputs, all the way down.
*/
allDrvOutputs = pkg:
let name = lib.strings.sanitizeDerivationName "allDrvOutputs-${pkg.pname or pkg.name or "unknown"}";
in
pkgs.runCommand name { refs = pkgs.writeReferencesToFile pkg.drvPath; } ''
touch $out
while read ref; do
case $ref in
*.drv)
cat $ref >>$out
;;
esac
done <$refs
'';
in
tests

View file

@ -0,0 +1,31 @@
{ lib, modulesPath, pkgs, ... }:
let
ssh-keys =
if builtins.pathExists ../../ssh-keys.nix
then # Outside sandbox
../../ssh-keys.nix
else # In sandbox
./ssh-keys.nix;
inherit (import ssh-keys pkgs)
snakeOilPrivateKey snakeOilPublicKey;
in
{
imports = [
(modulesPath + "/virtualisation/qemu-vm.nix")
(modulesPath + "/testing/test-instrumentation.nix")
];
virtualisation.writableStore = true;
nix.binaryCaches = lib.mkForce [ ];
virtualisation.graphics = false;
documentation.enable = false;
services.qemuGuest.enable = true;
boot.loader.grub.enable = false;
services.openssh.enable = true;
users.users.root.openssh.authorizedKeys.keys = [
snakeOilPublicKey
];
security.pam.services.sshd.limits =
[{ domain = "*"; item = "memlock"; type = "-"; value = 1024; }];
}

View file

@ -0,0 +1,15 @@
{
network = {
description = "Legacy Network using <nixpkgs> and legacy state.";
# NB this is not really what makes it a legacy network; lack of flakes is.
storage.legacy = { };
};
server = { lib, pkgs, ... }: {
deployment.targetEnv = "none";
imports = [
./base-configuration.nix
(lib.modules.importJSON ./server-network.json)
];
environment.systemPackages = [ pkgs.hello pkgs.figlet ];
};
}

View file

@ -20,6 +20,7 @@ import ./make-test-python.nix ({ pkgs, ... }:
server = server =
{ ... }: { ... }:
{ services.samba.enable = true; { services.samba.enable = true;
services.samba.openFirewall = true;
services.samba.shares.public = services.samba.shares.public =
{ path = "/public"; { path = "/public";
"read only" = true; "read only" = true;
@ -27,8 +28,6 @@ import ./make-test-python.nix ({ pkgs, ... }:
"guest ok" = "yes"; "guest ok" = "yes";
comment = "Public samba share."; comment = "Public samba share.";
}; };
networking.firewall.allowedTCPPorts = [ 139 445 ];
networking.firewall.allowedUDPPorts = [ 137 138 ];
}; };
}; };

View file

@ -7,15 +7,224 @@ import ./make-test-python.nix ({ pkgs, ...} : {
}; };
nodes = { nodes = {
machine = { ... }: { machine = { config, pkgs, lib, ... }: {
environment.systemPackages = [ pkgs.socat ]; # for the socket activation stuff
users.mutableUsers = false; users.mutableUsers = false;
specialisation = {
# A system with a simple socket-activated unit
simple-socket.configuration = {
systemd.services.socket-activated.serviceConfig = {
ExecStart = pkgs.writeScript "socket-test.py" /* python */ ''
#!${pkgs.python3}/bin/python3
from socketserver import TCPServer, StreamRequestHandler
import socket
class Handler(StreamRequestHandler):
def handle(self):
self.wfile.write("hello".encode("utf-8"))
class Server(TCPServer):
def __init__(self, server_address, handler_cls):
# Invoke base but omit bind/listen steps (performed by systemd activation!)
TCPServer.__init__(
self, server_address, handler_cls, bind_and_activate=False)
# Override socket
self.socket = socket.fromfd(3, self.address_family, self.socket_type)
if __name__ == "__main__":
server = Server(("localhost", 1234), Handler)
server.serve_forever()
'';
};
systemd.sockets.socket-activated = {
wantedBy = [ "sockets.target" ];
listenStreams = [ "/run/test.sock" ];
socketConfig.SocketMode = lib.mkDefault "0777";
};
};
# The same system but the socket is modified
modified-socket.configuration = {
imports = [ config.specialisation.simple-socket.configuration ];
systemd.sockets.socket-activated.socketConfig.SocketMode = "0666";
};
# The same system but the service is modified
modified-service.configuration = {
imports = [ config.specialisation.simple-socket.configuration ];
systemd.services.socket-activated.serviceConfig.X-Test = "test";
};
# The same system but both service and socket are modified
modified-service-and-socket.configuration = {
imports = [ config.specialisation.simple-socket.configuration ];
systemd.services.socket-activated.serviceConfig.X-Test = "some_value";
systemd.sockets.socket-activated.socketConfig.SocketMode = "0444";
};
# A system with a socket-activated service and some simple services
service-and-socket.configuration = {
imports = [ config.specialisation.simple-socket.configuration ];
systemd.services.simple-service = {
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = "${pkgs.coreutils}/bin/true";
};
};
systemd.services.simple-restart-service = {
stopIfChanged = false;
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = "${pkgs.coreutils}/bin/true";
};
};
systemd.services.simple-reload-service = {
reloadIfChanged = true;
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = "${pkgs.coreutils}/bin/true";
ExecReload = "${pkgs.coreutils}/bin/true";
};
};
systemd.services.no-restart-service = {
restartIfChanged = false;
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = "${pkgs.coreutils}/bin/true";
};
};
};
# The same system but with an activation script that restarts all services
restart-and-reload-by-activation-script.configuration = {
imports = [ config.specialisation.service-and-socket.configuration ];
system.activationScripts.restart-and-reload-test = {
supportsDryActivation = true;
deps = [];
text = ''
if [ "$NIXOS_ACTION" = dry-activate ]; then
f=/run/nixos/dry-activation-restart-list
else
f=/run/nixos/activation-restart-list
fi
cat <<EOF >> "$f"
simple-service.service
simple-restart-service.service
simple-reload-service.service
no-restart-service.service
socket-activated.service
EOF
'';
};
};
# A system with a timer
with-timer.configuration = {
systemd.timers.test-timer = {
wantedBy = [ "timers.target" ];
timerConfig.OnCalendar = "@1395716396"; # chosen by fair dice roll
};
systemd.services.test-timer = {
serviceConfig = {
Type = "oneshot";
ExecStart = "${pkgs.coreutils}/bin/true";
};
};
};
# The same system but with a different trigger time
with-timer-modified.configuration = {
imports = [ config.specialisation.with-timer.configuration ];
systemd.timers.test-timer.timerConfig.OnCalendar = lib.mkForce "Fri 2012-11-23 16:00:00";
};
# A system with a systemd mount
with-mount.configuration = {
systemd.mounts = [
{
description = "Testmount";
what = "tmpfs";
type = "tmpfs";
where = "/testmount";
options = "size=1M";
wantedBy = [ "local-fs.target" ];
}
];
};
# The same system but with a different mount size
with-mount-modified.configuration = {
systemd.mounts = [
{
description = "Testmount";
what = "tmpfs";
type = "tmpfs";
where = "/testmount";
options = "size=10M";
wantedBy = [ "local-fs.target" ];
}
];
};
# A system with a path unit
with-path.configuration = {
systemd.paths.test-watch = {
wantedBy = [ "paths.target" ];
pathConfig.PathExists = "/testpath";
};
systemd.services.test-watch = {
serviceConfig = {
Type = "oneshot";
ExecStart = "${pkgs.coreutils}/bin/touch /testpath-modified";
};
};
};
# The same system but watching another file
with-path-modified.configuration = {
imports = [ config.specialisation.with-path.configuration ];
systemd.paths.test-watch.pathConfig.PathExists = lib.mkForce "/testpath2";
};
# A system with a slice
with-slice.configuration = {
systemd.slices.testslice.sliceConfig.MemoryMax = "1"; # don't allow memory allocation
systemd.services.testservice = {
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = "${pkgs.coreutils}/bin/true";
Slice = "testslice.slice";
};
};
};
# The same system but the slice allows memory allocation
with-slice-non-crashing.configuration = {
imports = [ config.specialisation.with-slice.configuration ];
systemd.slices.testslice.sliceConfig.MemoryMax = lib.mkForce null;
};
};
}; };
other = { ... }: { other = { ... }: {
users.mutableUsers = true; users.mutableUsers = true;
}; };
}; };
testScript = {nodes, ...}: let testScript = { nodes, ... }: let
originalSystem = nodes.machine.config.system.build.toplevel; originalSystem = nodes.machine.config.system.build.toplevel;
otherSystem = nodes.other.config.system.build.toplevel; otherSystem = nodes.other.config.system.build.toplevel;
@ -27,12 +236,182 @@ import ./make-test-python.nix ({ pkgs, ...} : {
set -o pipefail set -o pipefail
exec env -i "$@" | tee /dev/stderr exec env -i "$@" | tee /dev/stderr
''; '';
in '' in /* python */ ''
def switch_to_specialisation(name, action="test"):
out = machine.succeed(f"${originalSystem}/specialisation/{name}/bin/switch-to-configuration {action} 2>&1")
assert_lacks(out, "switch-to-configuration line") # Perl warnings
return out
def assert_contains(haystack, needle):
if needle not in haystack:
print("The haystack that will cause the following exception is:")
print("---")
print(haystack)
print("---")
raise Exception(f"Expected string '{needle}' was not found")
def assert_lacks(haystack, needle):
if needle in haystack:
print("The haystack that will cause the following exception is:")
print("---")
print(haystack, end="")
print("---")
raise Exception(f"Unexpected string '{needle}' was found")
machine.succeed( machine.succeed(
"${stderrRunner} ${originalSystem}/bin/switch-to-configuration test" "${stderrRunner} ${originalSystem}/bin/switch-to-configuration test"
) )
machine.succeed( machine.succeed(
"${stderrRunner} ${otherSystem}/bin/switch-to-configuration test" "${stderrRunner} ${otherSystem}/bin/switch-to-configuration test"
) )
with subtest("systemd sockets"):
machine.succeed("${originalSystem}/bin/switch-to-configuration test")
# Simple socket is created
out = switch_to_specialisation("simple-socket")
assert_lacks(out, "stopping the following units:")
# not checking for reload because dbus gets reloaded
assert_lacks(out, "restarting the following units:")
assert_lacks(out, "\nstarting the following units:")
assert_contains(out, "the following new units were started: socket-activated.socket\n")
assert_lacks(out, "as well:")
machine.succeed("[ $(stat -c%a /run/test.sock) = 777 ]")
# Changing the socket restarts it
out = switch_to_specialisation("modified-socket")
assert_lacks(out, "stopping the following units:")
#assert_lacks(out, "reloading the following units:")
assert_contains(out, "restarting the following units: socket-activated.socket\n")
assert_lacks(out, "\nstarting the following units:")
assert_lacks(out, "the following new units were started:")
assert_lacks(out, "as well:")
machine.succeed("[ $(stat -c%a /run/test.sock) = 666 ]") # change was applied
# The unit is properly activated when the socket is accessed
if machine.succeed("socat - UNIX-CONNECT:/run/test.sock") != "hello":
raise Exception("Socket was not properly activated")
# Changing the socket restarts it and ignores the active service
out = switch_to_specialisation("simple-socket")
assert_contains(out, "stopping the following units: socket-activated.service\n")
assert_lacks(out, "reloading the following units:")
assert_contains(out, "restarting the following units: socket-activated.socket\n")
assert_lacks(out, "\nstarting the following units:")
assert_lacks(out, "the following new units were started:")
assert_lacks(out, "as well:")
machine.succeed("[ $(stat -c%a /run/test.sock) = 777 ]") # change was applied
# Changing the service does nothing when the service is not active
out = switch_to_specialisation("modified-service")
assert_lacks(out, "stopping the following units:")
assert_lacks(out, "reloading the following units:")
assert_lacks(out, "restarting the following units:")
assert_lacks(out, "\nstarting the following units:")
assert_lacks(out, "the following new units were started:")
assert_lacks(out, "as well:")
# Activating the service and modifying it stops it but leaves the socket untouched
machine.succeed("socat - UNIX-CONNECT:/run/test.sock")
out = switch_to_specialisation("simple-socket")
assert_contains(out, "stopping the following units: socket-activated.service\n")
assert_lacks(out, "reloading the following units:")
assert_lacks(out, "restarting the following units:")
assert_lacks(out, "\nstarting the following units:")
assert_lacks(out, "the following new units were started:")
assert_lacks(out, "as well:")
# Activating the service and modifying both the service and the socket stops the service and restarts the socket
machine.succeed("socat - UNIX-CONNECT:/run/test.sock")
out = switch_to_specialisation("modified-service-and-socket")
assert_contains(out, "stopping the following units: socket-activated.service\n")
assert_lacks(out, "reloading the following units:")
assert_contains(out, "restarting the following units: socket-activated.socket\n")
assert_lacks(out, "\nstarting the following units:")
assert_lacks(out, "the following new units were started:")
assert_lacks(out, "as well:")
with subtest("restart and reload by activation file"):
out = switch_to_specialisation("service-and-socket")
# Switch to a system where the example services get restarted
# by the activation script
out = switch_to_specialisation("restart-and-reload-by-activation-script")
assert_lacks(out, "stopping the following units:")
assert_contains(out, "stopping the following units as well: simple-service.service, socket-activated.service\n")
assert_contains(out, "reloading the following units: simple-reload-service.service\n")
assert_contains(out, "restarting the following units: simple-restart-service.service\n")
assert_contains(out, "\nstarting the following units: simple-service.service")
# The same, but in dry mode
switch_to_specialisation("service-and-socket")
out = switch_to_specialisation("restart-and-reload-by-activation-script", action="dry-activate")
assert_lacks(out, "would stop the following units:")
assert_contains(out, "would stop the following units as well: simple-service.service, socket-activated.service\n")
assert_contains(out, "would reload the following units: simple-reload-service.service\n")
assert_contains(out, "would restart the following units: simple-restart-service.service\n")
assert_contains(out, "\nwould start the following units: simple-service.service")
with subtest("mounts"):
switch_to_specialisation("with-mount")
out = machine.succeed("mount | grep 'on /testmount'")
assert_contains(out, "size=1024k")
out = switch_to_specialisation("with-mount-modified")
assert_lacks(out, "stopping the following units:")
assert_contains(out, "reloading the following units: testmount.mount\n")
assert_lacks(out, "restarting the following units:")
assert_lacks(out, "\nstarting the following units:")
assert_lacks(out, "the following new units were started:")
assert_lacks(out, "as well:")
# It changed
out = machine.succeed("mount | grep 'on /testmount'")
assert_contains(out, "size=10240k")
with subtest("timers"):
switch_to_specialisation("with-timer")
out = machine.succeed("systemctl show test-timer.timer")
assert_contains(out, "OnCalendar=2014-03-25 02:59:56 UTC")
out = switch_to_specialisation("with-timer-modified")
assert_lacks(out, "stopping the following units:")
assert_lacks(out, "reloading the following units:")
assert_contains(out, "restarting the following units: test-timer.timer\n")
assert_lacks(out, "\nstarting the following units:")
assert_lacks(out, "the following new units were started:")
assert_lacks(out, "as well:")
# It changed
out = machine.succeed("systemctl show test-timer.timer")
assert_contains(out, "OnCalendar=Fri 2012-11-23 16:00:00")
with subtest("paths"):
switch_to_specialisation("with-path")
machine.fail("test -f /testpath-modified")
# touch the file, unit should be triggered
machine.succeed("touch /testpath")
machine.wait_until_succeeds("test -f /testpath-modified")
machine.succeed("rm /testpath /testpath-modified")
switch_to_specialisation("with-path-modified")
machine.succeed("touch /testpath")
machine.fail("test -f /testpath-modified")
machine.succeed("touch /testpath2")
machine.wait_until_succeeds("test -f /testpath-modified")
# This test ensures that changes to slice configuration get applied.
# We test this by having a slice that allows no memory allocation at
# all and starting a service within it. If the service crashes, the slice
# limit was applied; if we then modify the slice to allow memory
# allocation, the service should start successfully.
with subtest("slices"):
machine.succeed("echo 0 > /proc/sys/vm/panic_on_oom") # allow OOMing
out = switch_to_specialisation("with-slice")
machine.fail("systemctl start testservice.service")
out = switch_to_specialisation("with-slice-non-crashing")
machine.succeed("systemctl start testservice.service")
machine.succeed("echo 1 > /proc/sys/vm/panic_on_oom") # disallow OOMing
''; '';
}) })
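
The `simple-socket` specialisation above uses `Accept=no` activation: systemd keeps the listening socket and hands it to the Python server as file descriptor 3. For comparison, a minimal sketch of the per-connection (inetd-style) variant, where systemd accepts each connection itself and spawns a template unit instance with the connection on stdin/stdout (unit name and port are made up for illustration):

```nix
{ pkgs, ... }:
{
  systemd.sockets.echo-demo = {
    wantedBy = [ "sockets.target" ];
    listenStreams = [ "127.0.0.1:1234" ];
    # Accept=yes: systemd accepts connections and starts an echo-demo@ instance per client
    socketConfig.Accept = true;
  };
  systemd.services."echo-demo@" = {
    serviceConfig = {
      # cat echoes whatever the client sends, since stdin/stdout are the connection
      ExecStart = "${pkgs.coreutils}/bin/cat";
      StandardInput = "socket";
      StandardOutput = "socket";
    };
  };
}
```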

View file

@ -7,9 +7,9 @@ stdenv.mkDerivation rec {
version = "1.3.0.1"; version = "1.3.0.1";
src = fetchFromGitHub { src = fetchFromGitHub {
rev = version;
repo = "mimic1";
owner = "MycroftAI"; owner = "MycroftAI";
repo = "mimic1";
rev = version;
sha256 = "1agwgby9ql8r3x5rd1rgx3xp9y4cdg4pi3kqlz3vanv9na8nf3id"; sha256 = "1agwgby9ql8r3x5rd1rgx3xp9y4cdg4pi3kqlz3vanv9na8nf3id";
}; };

View file

@ -14,16 +14,16 @@ let
in in
rustPlatform.buildRustPackage rec { rustPlatform.buildRustPackage rec {
pname = "ncspot"; pname = "ncspot";
version = "0.8.2"; version = "0.9.0";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "hrkfdn"; owner = "hrkfdn";
repo = "ncspot"; repo = "ncspot";
rev = "v${version}"; rev = "v${version}";
sha256 = "1rs1jy7zzfgqzr64ld8whn0wlw8n7rk1svxx0xfxm3ynmgc7sd68"; sha256 = "07qqs5q64zaxl3b2091vjihqb35fm0136cm4zibrgpx21akmbvr2";
}; };
cargoSha256 = "10g7gdi1iz751wa60vr4fs0cvfsgs3pfcp8pnywicl0vsdp25fmc"; cargoSha256 = "0sdbba32f56z2q7kha5fxw2f00hikbz9sf4zl4wfl2i9b13j7mj0";
cargoBuildFlags = [ "--no-default-features" "--features" "${lib.concatStringsSep "," features}" ]; cargoBuildFlags = [ "--no-default-features" "--features" "${lib.concatStringsSep "," features}" ];

View file

@ -1,26 +1,35 @@
{ fetchurl, fetchpatch, lib, stdenv, pkg-config, intltool, libpulseaudio, { fetchurl
gtkmm3 , libcanberra-gtk3, gnome, wrapGAppsHook }: , fetchpatch
, lib
, stdenv
, pkg-config
, intltool
, libpulseaudio
, gtkmm3
, libsigcxx
, libcanberra-gtk3
, json-glib
, gnome
, wrapGAppsHook
}:
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
pname = "pavucontrol"; pname = "pavucontrol";
version = "4.0"; version = "5.0";
src = fetchurl { src = fetchurl {
url = "https://freedesktop.org/software/pulseaudio/${pname}/${pname}-${version}.tar.xz"; url = "https://freedesktop.org/software/pulseaudio/${pname}/${pname}-${version}.tar.xz";
sha256 = "1qhlkl3g8d7h72xjskii3g1l7la2cavwp69909pzmbi2jyn5pi4g"; sha256 = "sha256-zityw7XxpwrQ3xndgXUPlFW9IIcNHTo20gU2ry6PTno=";
}; };
patches = [ buildInputs = [
# Can be removed with the next version bump libpulseaudio
# https://gitlab.freedesktop.org/pulseaudio/pavucontrol/-/merge_requests/20 gtkmm3
(fetchpatch { libsigcxx
name = "streamwidget-fix-drop-down-wayland.patch"; libcanberra-gtk3
url = "https://gitlab.freedesktop.org/pulseaudio/pavucontrol/-/commit/ae278b8643cf1089f66df18713c8154208d9a505.patch"; json-glib
sha256 = "066vhxjz6gmi2sp2n4pa1cdsxjnq6yml5js094g5n7ld34p84dpj"; gnome.adwaita-icon-theme
})]; ];
buildInputs = [ libpulseaudio gtkmm3 libcanberra-gtk3
gnome.adwaita-icon-theme ];
nativeBuildInputs = [ pkg-config intltool wrapGAppsHook ]; nativeBuildInputs = [ pkg-config intltool wrapGAppsHook ];

View file

@ -13,13 +13,13 @@
mkDerivation rec { mkDerivation rec {
pname = "ptcollab"; pname = "ptcollab";
version = "0.4.3"; version = "0.5.0";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "yuxshao"; owner = "yuxshao";
repo = "ptcollab"; repo = "ptcollab";
rev = "v${version}"; rev = "v${version}";
sha256 = "sha256-bFFWPl7yaTwCKz7/f9Vk6mg0roUnig0dFERS4IE4R7g="; sha256 = "sha256-sN3O8m+ib6Chb/RXTFbNWW6PnrolCHpmC/avRX93AH4=";
}; };
nativeBuildInputs = [ qmake pkg-config ]; nativeBuildInputs = [ qmake pkg-config ];

View file

@ -17,12 +17,14 @@
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
pname = "reaper"; pname = "reaper";
version = "6.29"; version = "6.38";
src = fetchurl { src = fetchurl {
url = "https://www.reaper.fm/files/${lib.versions.major version}.x/reaper${builtins.replaceStrings ["."] [""] version}_linux_${stdenv.targetPlatform.qemuArch}.tar.xz"; url = "https://www.reaper.fm/files/${lib.versions.major version}.x/reaper${builtins.replaceStrings ["."] [""] version}_linux_${stdenv.hostPlatform.qemuArch}.tar.xz";
hash = if stdenv.isx86_64 then "sha256-DOul6J2Y7szy4+Q4SeO0uG6PSuU+MELE7ky8W3mSpTQ=" hash = {
else "sha256-67iTi6bFlbQtyCjnPIjK8K/3aV+zaCsWBRCWmgYonM4="; x86_64-linux = "sha256-K5EnrmzP8pyW9dR1fbMzkPzpS6aHm8JF1+m3afnH4rU=";
aarch64-linux = "sha256-6wNWDXjQNyfU2l9Xi9JtmAuoKtHuIY5cvNMjYkwh2Sk=";
}.${stdenv.hostPlatform.system};
}; };
nativeBuildInputs = [ nativeBuildInputs = [
@ -76,6 +78,6 @@ stdenv.mkDerivation rec {
homepage = "https://www.reaper.fm/"; homepage = "https://www.reaper.fm/";
license = licenses.unfree; license = licenses.unfree;
platforms = [ "x86_64-linux" "aarch64-linux" ]; platforms = [ "x86_64-linux" "aarch64-linux" ];
maintainers = with maintainers; [ jfrankenau ilian ]; maintainers = with maintainers; [ jfrankenau ilian orivej ];
}; };
} }

View file

@ -1,65 +1,58 @@
{ stdenv { lib
, dpkg , stdenv
, lib
, autoPatchelfHook
, fetchurl , fetchurl
, gtk3 , autoPatchelfHook
, glib , dpkg
, desktop-file-utils
, alsa-lib , alsa-lib
, libjack2
, harfbuzz
, fribidi
, pango
, freetype , freetype
, libglvnd
, curl , curl
, libXcursor
, libXinerama
, libXrandr
, libXrender
, libjack2
}: }:
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
pname = "tonelib-gfx"; pname = "tonelib-gfx";
version = "4.6.6"; version = "4.7.0";
src = fetchurl { src = fetchurl {
url = "https://www.tonelib.net/download/0509/ToneLib-GFX-amd64.deb"; url = "https://www.tonelib.net/download/0930/ToneLib-GFX-amd64.deb";
sha256 = "sha256-wdX3SQSr0IZHsTUl+1Y0iETme3gTyryexhZ/9XHkGeo="; hash = "sha256-BcbX0dz94B4mj6QeQsnuZmwXAaXH+yJjnrUPgEYVqkU=";
}; };
nativeBuildInputs = [ autoPatchelfHook dpkg ];
buildInputs = [ buildInputs = [
dpkg stdenv.cc.cc.lib
gtk3
glib
desktop-file-utils
alsa-lib alsa-lib
libjack2
harfbuzz
fribidi
pango
freetype freetype
libglvnd
] ++ runtimeDependencies;
runtimeDependencies = map lib.getLib [
curl
libXcursor
libXinerama
libXrandr
libXrender
libjack2
]; ];
nativeBuildInputs = [ unpackCmd = "dpkg -x $curSrc source";
autoPatchelfHook
];
unpackPhase = ''
mkdir -p $TMP/ $out/
dpkg -x $src $TMP
'';
installPhase = '' installPhase = ''
cp -R $TMP/usr/* $out/ mv usr $out
mv $out/bin/ToneLib-GFX $out/bin/tonelib-gfx substituteInPlace $out/share/applications/ToneLib-GFX.desktop --replace /usr/ $out/
''; '';
runtimeDependencies = [
(lib.getLib curl)
];
meta = with lib; { meta = with lib; {
description = "Tonelib GFX is an amp and effects modeling software for electric guitar and bass."; description = "Tonelib GFX is an amp and effects modeling software for electric guitar and bass.";
homepage = "https://tonelib.net/"; homepage = "https://tonelib.net/";
license = licenses.unfree; license = licenses.unfree;
maintainers = with maintainers; [ dan4ik605743 ]; maintainers = with maintainers; [ dan4ik605743 orivej ];
platforms = platforms.linux; platforms = [ "x86_64-linux" ];
}; };
} }

View file

@ -9,16 +9,16 @@
rustPlatform.buildRustPackage rec { rustPlatform.buildRustPackage rec {
pname = "electrs"; pname = "electrs";
version = "0.9.0"; version = "0.9.1";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "romanz"; owner = "romanz";
repo = pname; repo = pname;
rev = "v${version}"; rev = "v${version}";
sha256 = "04dqbn2nfzllxfcn3v9vkfy2hn2syihijr575621r1pj65pcgf8y"; hash = "sha256-GDO8iGntQncvdJiDMBJk9GrGF9JToasbLRzju3S0TS0=";
}; };
cargoSha256 = "0hl8q62lankrab8gq9vxmkn68drs0hw5pk0q6aiq8fxsb63dzsw0"; cargoHash = "sha256-Ms785+3Z4xEUW8FRRu1FIHk7HSWYLBThKlJDFjW6j0I=";
# needed for librocksdb-sys # needed for librocksdb-sys
nativeBuildInputs = [ llvmPackages.clang ]; nativeBuildInputs = [ llvmPackages.clang ];

View file

@ -0,0 +1,39 @@
#!/usr/bin/env nix-shell
#!nix-shell -i bash -p coreutils curl jq git gnupg common-updater-scripts
set -euo pipefail
# Fetch latest release, GPG-verify the tag, update derivation
scriptDir=$(cd "${BASH_SOURCE[0]%/*}" && pwd)
nixpkgs=$(realpath "$scriptDir"/../../../..)
oldVersion=$(nix-instantiate --eval -E "(import \"$nixpkgs\" { config = {}; overlays = []; }).electrs.version" | tr -d '"')
version=$(curl -s --show-error "https://api.github.com/repos/romanz/electrs/releases/latest" | jq -r '.tag_name' | tail -c +2)
if [[ $version == $oldVersion ]]; then
echo "Already at latest version $version"
exit 0
fi
echo "New version: $version"
tmpdir=$(mktemp -d /tmp/electrs-verify-gpg.XXX)
repo=$tmpdir/repo
trap "rm -rf $tmpdir" EXIT
git clone --depth 1 --branch v${version} -c advice.detachedHead=false https://github.com/romanz/electrs $repo
export GNUPGHOME=$tmpdir
echo
echo "Fetching romanz's key"
gpg --keyserver hkps://keys.openpgp.org --recv-keys 15c8c3574ae4f1e25f3f35c587cae5fa46917cbb 2> /dev/null
echo
echo "Verifying commit"
git -C $repo verify-tag v${version}
rm -rf $repo/.git
hash=$(nix hash path $repo)
(cd "$nixpkgs" && update-source-version electrs "$version" "$hash")
sed -i 's|cargoHash = .*|cargoHash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";|' "$scriptDir/default.nix"
echo
echo "electrs: $oldVersion -> $version"

View file

@ -2,7 +2,7 @@
, fetchurl , fetchurl
, makeDesktopItem , makeDesktopItem
, curl , curl
, dotnet-netcore , dotnetCorePackages
, fontconfig , fontconfig
, krb5 , krb5
, openssl , openssl
@ -11,9 +11,10 @@
}: }:
let let
dotnet-runtime = dotnetCorePackages.runtime_5_0;
libPath = lib.makeLibraryPath [ libPath = lib.makeLibraryPath [
curl curl
dotnet-netcore dotnet-runtime
fontconfig.lib fontconfig.lib
krb5 krb5
openssl openssl

View file

@ -214,9 +214,9 @@ in runCommand
# source-code itself). # source-code itself).
platforms = [ "x86_64-linux" ]; platforms = [ "x86_64-linux" ];
maintainers = with maintainers; rec { maintainers = with maintainers; rec {
stable = [ meutraa ]; stable = [ meutraa fabianhjr ];
beta = [ meutraa ]; beta = [ meutraa fabianhjr ];
canary = [ meutraa ]; canary = [ meutraa fabianhjr ];
dev = canary; dev = canary;
}."${channel}"; }."${channel}";
}; };

View file

@ -9,8 +9,8 @@ let
inherit buildFHSUserEnv; inherit buildFHSUserEnv;
}; };
stableVersion = { stableVersion = {
version = "2020.3.1.24"; # "Android Studio Arctic Fox (2020.3.1)" version = "2020.3.1.25"; # "Android Studio Arctic Fox (2020.3.1)"
sha256Hash = "0k8jcq8vpjayvwm9wqcrjhnp7dly0h4bb8nxspck5zmi8q2ar67l"; sha256Hash = "10gpwb130bzp6a9g958cjqcb2gsm0vdgm08nm5xy45xdh54nxjfg";
}; };
betaVersion = { betaVersion = {
version = "2021.1.1.14"; # "Android Studio Bumblebee (2021.1.1) Beta 1" version = "2021.1.1.14"; # "Android Studio Bumblebee (2021.1.1) Beta 1"

View file

@ -242,12 +242,12 @@ in
clion = buildClion rec { clion = buildClion rec {
name = "clion-${version}"; name = "clion-${version}";
version = "2021.2.1"; /* updated by script */ version = "2021.2.3"; /* updated by script */
description = "C/C++ IDE. New. Intelligent. Cross-platform"; description = "C/C++ IDE. New. Intelligent. Cross-platform";
license = lib.licenses.unfree; license = lib.licenses.unfree;
src = fetchurl { src = fetchurl {
url = "https://download.jetbrains.com/cpp/CLion-${version}.tar.gz"; url = "https://download.jetbrains.com/cpp/CLion-${version}.tar.gz";
sha256 = "0knl0ca15cj0nggyfhd7s0szxr2vp7xvvp3nna3mplssfn59zf9d"; /* updated by script */ sha256 = "09qbzkxyk435s4n04s12ncjyri024wj9pwz8wgjjsswpfa69dhr5"; /* updated by script */
}; };
wmClass = "jetbrains-clion"; wmClass = "jetbrains-clion";
update-channel = "CLion RELEASE"; # channel's id as in http://www.jetbrains.com/updates/updates.xml update-channel = "CLion RELEASE"; # channel's id as in http://www.jetbrains.com/updates/updates.xml
@ -255,12 +255,12 @@ in
datagrip = buildDataGrip rec { datagrip = buildDataGrip rec {
name = "datagrip-${version}"; name = "datagrip-${version}";
version = "2021.2.2"; /* updated by script */ version = "2021.2.4"; /* updated by script */
description = "Your Swiss Army Knife for Databases and SQL"; description = "Your Swiss Army Knife for Databases and SQL";
license = lib.licenses.unfree; license = lib.licenses.unfree;
src = fetchurl { src = fetchurl {
url = "https://download.jetbrains.com/datagrip/${name}.tar.gz"; url = "https://download.jetbrains.com/datagrip/${name}.tar.gz";
sha256 = "18dammsvd43x8cx0plzwgankmzfv7j79z0nsdagd540v99c2r2v3"; /* updated by script */ sha256 = "1vj9ihzw07bh30ngy8mj027ljq9zzd904k61f8jbfpw75vknh8f6"; /* updated by script */
}; };
wmClass = "jetbrains-datagrip"; wmClass = "jetbrains-datagrip";
update-channel = "DataGrip RELEASE"; update-channel = "DataGrip RELEASE";
@ -268,12 +268,12 @@ in
goland = buildGoland rec { goland = buildGoland rec {
name = "goland-${version}"; name = "goland-${version}";
version = "2021.2.2"; /* updated by script */ version = "2021.2.3"; /* updated by script */
description = "Up and Coming Go IDE"; description = "Up and Coming Go IDE";
license = lib.licenses.unfree; license = lib.licenses.unfree;
src = fetchurl { src = fetchurl {
url = "https://download.jetbrains.com/go/${name}.tar.gz"; url = "https://download.jetbrains.com/go/${name}.tar.gz";
sha256 = "0ayqvyd24klafm09kls4fdp2acqsvh0zhm4wsrmrshlpmdqd5vjk"; /* updated by script */ sha256 = "1n0yrk05xv4pard82b6z349ksiw8k75s9525pnpa2ny1ay1klhdg"; /* updated by script */
}; };
wmClass = "jetbrains-goland"; wmClass = "jetbrains-goland";
update-channel = "GoLand RELEASE"; update-channel = "GoLand RELEASE";
@ -281,12 +281,12 @@ in
idea-community = buildIdea rec { idea-community = buildIdea rec {
name = "idea-community-${version}"; name = "idea-community-${version}";
version = "2021.2.1"; /* updated by script */ version = "2021.2.3"; /* updated by script */
description = "Integrated Development Environment (IDE) by Jetbrains, community edition"; description = "Integrated Development Environment (IDE) by Jetbrains, community edition";
license = lib.licenses.asl20; license = lib.licenses.asl20;
src = fetchurl { src = fetchurl {
url = "https://download.jetbrains.com/idea/ideaIC-${version}.tar.gz"; url = "https://download.jetbrains.com/idea/ideaIC-${version}.tar.gz";
sha256 = "1af43c51ryvqc7c9r3kz2266j0nvz50xw1vhfjbd74c3ycj8a1zz"; /* updated by script */ sha256 = "166rhssyizn40rlar7ym7gkwz2aawp58qqvrs60w3cwwvjvb0bjq"; /* updated by script */
}; };
wmClass = "jetbrains-idea-ce"; wmClass = "jetbrains-idea-ce";
update-channel = "IntelliJ IDEA RELEASE"; update-channel = "IntelliJ IDEA RELEASE";
@ -294,12 +294,12 @@ in
idea-ultimate = buildIdea rec { idea-ultimate = buildIdea rec {
name = "idea-ultimate-${version}"; name = "idea-ultimate-${version}";
version = "2021.2.1"; /* updated by script */ version = "2021.2.3"; /* updated by script */
description = "Integrated Development Environment (IDE) by Jetbrains, requires paid license"; description = "Integrated Development Environment (IDE) by Jetbrains, requires paid license";
license = lib.licenses.unfree; license = lib.licenses.unfree;
src = fetchurl { src = fetchurl {
url = "https://download.jetbrains.com/idea/ideaIU-${version}-no-jbr.tar.gz"; url = "https://download.jetbrains.com/idea/ideaIU-${version}-no-jbr.tar.gz";
sha256 = "1257a9d9h3ybdsnm74jmgzp1rfi1629gv9kr0w2nhmxj7ghhbx4w"; /* updated by script */ sha256 = "1d0kk2yydrbzvdy6dy9jqr182panidmbf2hy80gvi5ph2r5rv1qd"; /* updated by script */
}; };
wmClass = "jetbrains-idea"; wmClass = "jetbrains-idea";
update-channel = "IntelliJ IDEA RELEASE"; update-channel = "IntelliJ IDEA RELEASE";
@ -307,13 +307,13 @@ in
mps = buildMps rec { mps = buildMps rec {
name = "mps-${version}"; name = "mps-${version}";
version = "2021.1.3"; /* updated by script */ version = "2021.2.1"; /* updated by script */
versionMajorMinor = "2021.1"; /* updated by script */ versionMajorMinor = "2021.2"; /* updated by script */
description = "Create your own domain-specific language"; description = "Create your own domain-specific language";
license = lib.licenses.asl20; license = lib.licenses.asl20;
src = fetchurl { src = fetchurl {
url = "https://download.jetbrains.com/mps/${versionMajorMinor}/MPS-${version}.tar.gz"; url = "https://download.jetbrains.com/mps/${versionMajorMinor}/MPS-${version}.tar.gz";
sha256 = "0w1nchaa2d3z3mdp43mvifnbibl1ribyc98dm7grnwvrqk72pabf"; /* updated by script */ sha256 = "1yawjc5xwga1mmlsl3068ml532941mq08i9ji3dhj1nwdkyav2jz"; /* updated by script */
}; };
wmClass = "jetbrains-mps"; wmClass = "jetbrains-mps";
update-channel = "MPS RELEASE"; update-channel = "MPS RELEASE";
@ -321,12 +321,12 @@ in
phpstorm = buildPhpStorm rec { phpstorm = buildPhpStorm rec {
name = "phpstorm-${version}"; name = "phpstorm-${version}";
version = "2021.2.1"; /* updated by script */ version = "2021.2.3"; /* updated by script */
description = "Professional IDE for Web and PHP developers"; description = "Professional IDE for Web and PHP developers";
license = lib.licenses.unfree; license = lib.licenses.unfree;
src = fetchurl { src = fetchurl {
url = "https://download.jetbrains.com/webide/PhpStorm-${version}.tar.gz"; url = "https://download.jetbrains.com/webide/PhpStorm-${version}.tar.gz";
sha256 = "1iqnq38d71wbl1iqhqr5as1802s53m3220vq4g42mdjgdj296bdk"; /* updated by script */ sha256 = "1avcm4fnkn0jkw85s505yz5kjbxzk038463sjdsca04pv5yhsdp0"; /* updated by script */
}; };
wmClass = "jetbrains-phpstorm"; wmClass = "jetbrains-phpstorm";
update-channel = "PhpStorm RELEASE"; update-channel = "PhpStorm RELEASE";
@ -334,12 +334,12 @@ in
pycharm-community = buildPycharm rec { pycharm-community = buildPycharm rec {
name = "pycharm-community-${version}"; name = "pycharm-community-${version}";
version = "2021.2.1"; /* updated by script */ version = "2021.2.2"; /* updated by script */
description = "PyCharm Community Edition"; description = "PyCharm Community Edition";
license = lib.licenses.asl20; license = lib.licenses.asl20;
src = fetchurl { src = fetchurl {
url = "https://download.jetbrains.com/python/${name}.tar.gz"; url = "https://download.jetbrains.com/python/${name}.tar.gz";
sha256 = "1z59yvk3wrqn0c9581vvv62wxf4fyybha426ipyqml8c405z27y4"; /* updated by script */ sha256 = "0s9kk3n5ac6lvqi2yw9gvvm45865jchiwyrs8pq2dgdkgaligrjv"; /* updated by script */
}; };
wmClass = "jetbrains-pycharm-ce"; wmClass = "jetbrains-pycharm-ce";
update-channel = "PyCharm RELEASE"; update-channel = "PyCharm RELEASE";
@ -347,12 +347,12 @@ in
pycharm-professional = buildPycharm rec { pycharm-professional = buildPycharm rec {
name = "pycharm-professional-${version}"; name = "pycharm-professional-${version}";
version = "2021.2.1"; /* updated by script */ version = "2021.2.2"; /* updated by script */
description = "PyCharm Professional Edition"; description = "PyCharm Professional Edition";
license = lib.licenses.unfree; license = lib.licenses.unfree;
src = fetchurl { src = fetchurl {
url = "https://download.jetbrains.com/python/${name}.tar.gz"; url = "https://download.jetbrains.com/python/${name}.tar.gz";
sha256 = "0sh9kdr53dhhq171p9lmsvci3qzlds4vzyqx12mzfvfs7svri1w2"; /* updated by script */ sha256 = "0mgmmf926n3ipr8fxn6f9hsa5vkil8yrw5qlixi8nwnx7chmkp56"; /* updated by script */
}; };
wmClass = "jetbrains-pycharm"; wmClass = "jetbrains-pycharm";
update-channel = "PyCharm RELEASE"; update-channel = "PyCharm RELEASE";
@ -360,12 +360,12 @@ in
rider = buildRider rec { rider = buildRider rec {
name = "rider-${version}"; name = "rider-${version}";
version = "2021.2.1"; /* updated by script */ version = "2021.2.2"; /* updated by script */
description = "A cross-platform .NET IDE based on the IntelliJ platform and ReSharper"; description = "A cross-platform .NET IDE based on the IntelliJ platform and ReSharper";
license = lib.licenses.unfree; license = lib.licenses.unfree;
src = fetchurl { src = fetchurl {
url = "https://download.jetbrains.com/rider/JetBrains.Rider-${version}.tar.gz"; url = "https://download.jetbrains.com/rider/JetBrains.Rider-${version}.tar.gz";
sha256 = "1b5ih6q8kyds8px7gldfz1m9ap3kk27yswwxy1735c83094l2nlm"; /* updated by script */ sha256 = "17xx8mz3dr5iqlr0lsiy8a6cxz3wp5vg8z955cdv0hf8b5rncqfa"; /* updated by script */
}; };
wmClass = "jetbrains-rider"; wmClass = "jetbrains-rider";
update-channel = "Rider RELEASE"; update-channel = "Rider RELEASE";
@ -373,12 +373,12 @@ in
ruby-mine = buildRubyMine rec { ruby-mine = buildRubyMine rec {
name = "ruby-mine-${version}"; name = "ruby-mine-${version}";
version = "2021.2.1"; /* updated by script */ version = "2021.2.3"; /* updated by script */
description = "The Most Intelligent Ruby and Rails IDE"; description = "The Most Intelligent Ruby and Rails IDE";
license = lib.licenses.unfree; license = lib.licenses.unfree;
src = fetchurl { src = fetchurl {
url = "https://download.jetbrains.com/ruby/RubyMine-${version}.tar.gz"; url = "https://download.jetbrains.com/ruby/RubyMine-${version}.tar.gz";
sha256 = "09blnm6han2rmdvjbr1va081zndzvjr1i0m3njaiwcb9rf2axm32"; /* updated by script */ sha256 = "0bbq5ya1dxrgaqqqsc4in4rgv7v292hww3bb0vpzwz6dmc2jly1i"; /* updated by script */
}; };
wmClass = "jetbrains-rubymine"; wmClass = "jetbrains-rubymine";
update-channel = "RubyMine RELEASE"; update-channel = "RubyMine RELEASE";
@ -386,12 +386,12 @@ in
webstorm = buildWebStorm rec { webstorm = buildWebStorm rec {
name = "webstorm-${version}"; name = "webstorm-${version}";
version = "2021.2.1"; /* updated by script */ version = "2021.2.2"; /* updated by script */
description = "Professional IDE for Web and JavaScript development"; description = "Professional IDE for Web and JavaScript development";
license = lib.licenses.unfree; license = lib.licenses.unfree;
src = fetchurl { src = fetchurl {
url = "https://download.jetbrains.com/webstorm/WebStorm-${version}.tar.gz"; url = "https://download.jetbrains.com/webstorm/WebStorm-${version}.tar.gz";
sha256 = "12i9f5sw02gcgviflfs6gwmnxvzhgmm4v4447am0syl4nq8nyv1s"; /* updated by script */ sha256 = "1a3vlqza9nbc4a2qxrzdckmq003zx1db9dy7wx462amc8sbh6v92"; /* updated by script */
}; };
wmClass = "jetbrains-webstorm"; wmClass = "jetbrains-webstorm";
update-channel = "WebStorm RELEASE"; update-channel = "WebStorm RELEASE";

View file

@ -42,7 +42,7 @@ let
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "rstudio"; owner = "rstudio";
repo = "rstudio"; repo = "rstudio";
rev = version; rev = "v${version}";
sha256 = "sha256-9c1bNsf8kJjpcZ2cMV/pPNtXQkFOntX29a1cdnXpllE="; sha256 = "sha256-9c1bNsf8kJjpcZ2cMV/pPNtXQkFOntX29a1cdnXpllE=";
}; };

View file

@ -14,17 +14,17 @@ let
archive_fmt = if stdenv.isDarwin then "zip" else "tar.gz"; archive_fmt = if stdenv.isDarwin then "zip" else "tar.gz";
sha256 = { sha256 = {
x86_64-linux = "069jdwqs9z2z95mjs9nx58rp1516dyyqn5bc0vgr7xvlbis97lq0"; x86_64-linux = "1yfaf9qdaf6njvj8kilmivyl0nnhdvd9hbzpf8hv3kw5rfpdvy89";
x86_64-darwin = "1bd32dkpyfgknxqn76jcwpa47rac9q14glbf5sb1rh9rfav0m1m8"; x86_64-darwin = "10rx5aif61xipf5lcjzkidz9dhbm5gc2wf87c2j456nixaxbx0b4";
aarch64-linux = "1axxnys3pd2qrvj6mqpa5cih44b4dbpgi8mvn616d8d45jgdnc1r"; aarch64-linux = "13h4ffdm9y9p3jnqcjvapykbm73bkjy5jaqwhsi293f9r7jfp9rf";
aarch64-darwin = "0bdp0k20lfwpsl1a3dz6c97s0b5bp3rhb66jwgbyyc16zrz79r1z"; aarch64-darwin = "07nmrxc25rfp5ibarhg3c14ksk2ymqmsnc55hicvvhw93g2qczby";
armv7l-linux = "077w5hvc4brb56zs0w37nr4a8vlcij5z3yrv3rz16p58nnkj56hs"; armv7l-linux = "1gz1mmw2vp986l9sm7rd6hypxs70sz63sbmzyxwfqpvj973dl23q";
}.${system}; }.${system};
in in
callPackage ./generic.nix rec { callPackage ./generic.nix rec {
# Please backport all compatible updates to the stable release. # Please backport all compatible updates to the stable release.
# This is important for the extension ecosystem. # This is important for the extension ecosystem.
version = "1.61.1"; version = "1.61.2";
pname = "vscode"; pname = "vscode";
executableName = "code" + lib.optionalString isInsiders "-insiders"; executableName = "code" + lib.optionalString isInsiders "-insiders";

View file

@ -13,10 +13,10 @@ let
archive_fmt = if system == "x86_64-darwin" then "zip" else "tar.gz"; archive_fmt = if system == "x86_64-darwin" then "zip" else "tar.gz";
sha256 = { sha256 = {
x86_64-linux = "0ic7h5aq1lyplk01bydqwrvz40h59sf0n0q4gxj844k4qidy14md"; x86_64-linux = "1q260kjhyx8djl82275ii63z1mzypsz7rkz3pj1n2wjkwsnw276x";
x86_64-darwin = "15s3vj7740ksb82gdjqpxw6cyd45ymdpacamkqk800929cv715qs"; x86_64-darwin = "1scx155rm8j6dwn0i31b6ajsdxcn1n24p3k6dx248w0zyiwd5wm1";
aarch64-linux = "0n3bxggfzlr1cqarq861yfqka3qfgpwvk8j22l7dv4vki06f8jzy"; aarch64-linux = "1j788a0p767i65ying9pfg6rss8l7g76n2323dnmj12bhxs6cqd1";
armv7l-linux = "0jksfdals8xf3vh5hqrd40pj5qn8byjrakjnrv926qznxjj152bn"; armv7l-linux = "1yfwmfxpilfv2h3pp698pg4wr6dnyzwg0r266xiwsw7z38jh54fk";
}.${system}; }.${system};
sourceRoot = { sourceRoot = {
@ -31,7 +31,7 @@ in
# Please backport all compatible updates to the stable release. # Please backport all compatible updates to the stable release.
# This is important for the extension ecosystem. # This is important for the extension ecosystem.
version = "1.61.1"; version = "1.61.2";
pname = "vscodium"; pname = "vscodium";
executableName = "codium"; executableName = "codium";

View file

@ -0,0 +1,56 @@
{ lib
, rustPlatform
, fetchFromGitHub
, stdenv
, python3
, libGL
, libX11
, libXcursor
, libXi
, libXrandr
, libxcb
, libxkbcommon
, AppKit
, IOKit
}:
rustPlatform.buildRustPackage rec {
pname = "epick";
version = "0.5.1";
src = fetchFromGitHub {
owner = "vv9k";
repo = pname;
rev = version;
sha256 = "0l7m45bqx62nrwi0r4pdwxcq37s7h3nnawk9nq2zpvl9wcgnx3gc";
};
cargoSha256 = "sha256-LERV3+zwt5oVfyueGfxM7HsOha4cuWTkPyvPQwHSZqo=";
nativeBuildInputs = lib.optional stdenv.isLinux python3;
buildInputs = lib.optionals stdenv.isLinux [
libGL
libX11
libXcursor
libXi
libXrandr
libxcb
libxkbcommon
] ++ lib.optionals stdenv.isDarwin [
AppKit
IOKit
];
postFixup = lib.optionalString stdenv.isLinux ''
patchelf --set-rpath ${lib.makeLibraryPath buildInputs} $out/bin/epick
'';
meta = with lib; {
description = "Simple color picker that lets the user create harmonic palettes with ease";
homepage = "https://github.com/vv9k/epick";
changelog = "https://github.com/vv9k/epick/blob/${version}/CHANGELOG.md";
license = licenses.gpl3Only;
maintainers = with maintainers; [ figsoda ];
};
}

View file

@ -0,0 +1,33 @@
{ lib
, rustPlatform
, fetchFromGitHub
, glib
, pkg-config
, wrapGAppsHook
, gtk3
}:
rustPlatform.buildRustPackage rec {
pname = "image-roll";
version = "1.3.1";
src = fetchFromGitHub {
owner = "weclaw1";
repo = pname;
rev = version;
sha256 = "007jzmrn4cnqbi6fy5lxanbwa4pc72fbcv9irk3pfd0wspp05s8j";
};
cargoSha256 = "sha256-dRRBfdGTXtoNbp7OWqOdNECXHCpj0ipkCOvcdekW+G4=";
nativeBuildInputs = [ glib pkg-config wrapGAppsHook ];
buildInputs = [ gtk3 ];
meta = with lib; {
description = "Simple and fast GTK image viewer with basic image manipulation tools";
homepage = "https://github.com/weclaw1/image-roll";
license = licenses.mit;
maintainers = with maintainers; [ figsoda ];
};
}

View file

@ -1,7 +1,9 @@
{ lib { lib
, mkDerivation , mkDerivation
, makeDesktopItem
, fetchurl , fetchurl
, pkg-config , pkg-config
, copyDesktopItems
, cairo , cairo
, freetype , freetype
, ghostscript , ghostscript
@ -26,7 +28,7 @@ mkDerivation rec {
sourceRoot = "${pname}-${version}/src"; sourceRoot = "${pname}-${version}/src";
nativeBuildInputs = [ pkg-config ]; nativeBuildInputs = [ pkg-config copyDesktopItems ];
buildInputs = [ buildInputs = [
cairo cairo
@ -42,15 +44,35 @@ mkDerivation rec {
zlib zlib
]; ];
IPEPREFIX=placeholder "out"; IPEPREFIX = placeholder "out";
URWFONTDIR="${texlive}/texmf-dist/fonts/type1/urw/"; URWFONTDIR = "${texlive}/texmf-dist/fonts/type1/urw/";
LUA_PACKAGE = "lua"; LUA_PACKAGE = "lua";
qtWrapperArgs = [ "--prefix PATH : ${texlive}/bin" ]; qtWrapperArgs = [ "--prefix PATH : ${lib.makeBinPath [ texlive ]}" ];
enableParallelBuilding = true; enableParallelBuilding = true;
# TODO: make .desktop entry desktopItems = [
(makeDesktopItem {
name = pname;
desktopName = "Ipe";
genericName = "Drawing editor";
comment = "A drawing editor for creating figures in PDF format";
exec = "ipe";
icon = "ipe";
mimeType = "text/xml;application/pdf";
categories = "Graphics;Qt;";
extraDesktopEntries = {
StartupWMClass = "ipe";
StartupNotify = "true";
};
})
];
postInstall = ''
mkdir -p $out/share/icons/hicolor/128x128/apps
ln -s $out/share/ipe/${version}/icons/icon_128x128.png $out/share/icons/hicolor/128x128/apps/ipe.png
'';
meta = with lib; { meta = with lib; {
description = "An editor for drawing figures"; description = "An editor for drawing figures";

View file

@ -2,23 +2,26 @@
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
pname = "1password"; pname = "1password";
version = "1.11.2"; version = "1.12.2";
src = src =
if stdenv.isLinux then fetchzip { if stdenv.isLinux then
fetchzip
{
url = { url = {
"i686-linux" = "https://cache.agilebits.com/dist/1P/op/pkg/v${version}/op_linux_386_v${version}.zip"; "i686-linux" = "https://cache.agilebits.com/dist/1P/op/pkg/v${version}/op_linux_386_v${version}.zip";
"x86_64-linux" = "https://cache.agilebits.com/dist/1P/op/pkg/v${version}/op_linux_amd64_v${version}.zip"; "x86_64-linux" = "https://cache.agilebits.com/dist/1P/op/pkg/v${version}/op_linux_amd64_v${version}.zip";
"aarch64-linux" = "https://cache.agilebits.com/dist/1P/op/pkg/v${version}/op_linux_arm_v${version}.zip"; "aarch64-linux" = "https://cache.agilebits.com/dist/1P/op/pkg/v${version}/op_linux_arm_v${version}.zip";
}.${stdenv.hostPlatform.system}; }.${stdenv.hostPlatform.system};
sha256 = { sha256 = {
"i686-linux" = "0rh5bakj9qd43cf6wj5v46a3h98kcwqyc0f1yw72wvcacvjycyjz"; "i686-linux" = "tCm/vDBASPN9FBSVRJ6BrFc7hdtZWPEAgvokJhjazPg=";
"x86_64-linux" = "00nf0cb8cxk1pvzr1wq778wvikzrlzy38r3rzkq44whdpdj50jzx"; "x86_64-linux" = "3VkVMuTAfeEowkguJi2fd1kG7GwO1VN5GBPgNaH3Zv4=";
"aarch64-linux" = "1gv282z49bj3ln5na4wb1z5455a64cyd54fp5i96k8shaxd0apxf"; "aarch64-linux" = "vWoA/0ZfdwVniHmxC4nH1QIc6bjdb00+SwlkIWc9BPs=";
}.${stdenv.hostPlatform.system}; }.${stdenv.hostPlatform.system};
stripRoot = false; stripRoot = false;
} else fetchurl { } else
fetchurl {
url = "https://cache.agilebits.com/dist/1P/op/pkg/v${version}/op_apple_universal_v${version}.pkg"; url = "https://cache.agilebits.com/dist/1P/op/pkg/v${version}/op_apple_universal_v${version}.pkg";
sha256 = "1pqdjr6d23j9fpwgahb0s1ni1bpjv9jajs1hapgq5kdrww2w7nhm"; sha256 = "xG/6YZdkJxr5Py90rkIyG4mK40yFTmNSfih9jO2uF+4=";
}; };
buildInputs = lib.optionals stdenv.isDarwin [ xar cpio ]; buildInputs = lib.optionals stdenv.isDarwin [ xar cpio ];

View file

@ -11,8 +11,8 @@ in
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
pname = "authy"; pname = "authy";
version = "1.8.4"; version = "1.9.0";
rev = "6"; rev = "7";
buildInputs = [ buildInputs = [
alsa-lib alsa-lib
@ -50,7 +50,7 @@ stdenv.mkDerivation rec {
src = fetchurl { src = fetchurl {
url = "https://api.snapcraft.io/api/v1/snaps/download/H8ZpNgIoPyvmkgxOWw5MSzsXK1wRZiHn_${rev}.snap"; url = "https://api.snapcraft.io/api/v1/snaps/download/H8ZpNgIoPyvmkgxOWw5MSzsXK1wRZiHn_${rev}.snap";
sha256 = "07h4mgp229nlvw9ifiiyzph26aa61w4x4f1xya8vw580blrk1ph9"; sha256 = "10az47cc3lgsdi0ixmmna08nqf9xm7gsl1ph00wfwrxzsi05ygx3";
}; };
nativeBuildInputs = [ autoPatchelfHook makeWrapper squashfsTools ]; nativeBuildInputs = [ autoPatchelfHook makeWrapper squashfsTools ];

View file

@ -1,6 +1,7 @@
{ lib { lib
, mkDerivation , mkDerivation
, fetchurl , fetchurl
, fetchpatch
, poppler_utils , poppler_utils
, pkg-config , pkg-config
, libpng , libpng
@ -26,18 +27,21 @@
mkDerivation rec { mkDerivation rec {
pname = "calibre"; pname = "calibre";
version = "5.24.0"; version = "5.29.0";
src = fetchurl { src = fetchurl {
url = "https://download.calibre-ebook.com/${version}/${pname}-${version}.tar.xz"; url = "https://download.calibre-ebook.com/${version}/${pname}-${version}.tar.xz";
hash = "sha256:18dr577nv7ijw3ar6mrk2xrc54mlrqkaj5jrc6s5sirl0710fdfg"; sha256 = "sha256-9ymHEpTHDUM3NAGoeSETzKRLKgJLRY4eEli6N5lbZug=";
}; };
# https://sources.debian.org/patches/calibre/5.29.0+dfsg-1
patches = [ patches = [
# Plugin installation (very insecure) disabled (from Debian) # allow for plugin update check, but no calibre version check
./disable_plugins.patch (fetchpatch {
# Automatic version update disabled by default (from Debian) name = "0001_only_plugin_update.patch";
./no_updates_dialog.patch url = "https://sources.debian.org/data/main/c/calibre/5.29.0%2Bdfsg-1/debian/patches/0001-only-plugin-update.patch";
sha256 = "sha256-aGT8rJ/eQKAkmyHBWdY0ouZuWvDwtLVJU5xY6d3hY3k=";
})
] ]
++ lib.optional (!unrarSupport) ./dont_build_unrar_plugin.patch; ++ lib.optional (!unrarSupport) ./dont_build_unrar_plugin.patch;

View file

@ -1,17 +0,0 @@
Description: Disable plugin dialog. It uses a totally non-authenticated and non-trusted way of installing arbitrary code.
Author: Martin Pitt <mpitt@debian.org>
Bug-Debian: http://bugs.debian.org/640026
Index: calibre-0.8.29+dfsg/src/calibre/gui2/actions/preferences.py
===================================================================
--- calibre-0.8.29+dfsg.orig/src/calibre/gui2/actions/preferences.py 2011-12-16 05:49:14.000000000 +0100
+++ calibre-0.8.29+dfsg/src/calibre/gui2/actions/preferences.py 2011-12-20 19:29:04.798468930 +0100
@@ -28,8 +28,6 @@
pm.addAction(QIcon(I('config.png')), _('Preferences'), self.do_config)
cm('welcome wizard', _('Run welcome wizard'),
icon='wizard.png', triggered=self.gui.run_wizard)
- cm('plugin updater', _('Get plugins to enhance calibre'),
- icon='plugins/plugin_updater.png', triggered=self.get_plugins)
if not DEBUG:
pm.addSeparator()
cm('restart', _('Restart in debug mode'), icon='debug.png',


@ -1,15 +0,0 @@
diff -burN calibre-2.9.0.orig/src/calibre/gui2/main.py calibre-2.9.0/src/calibre/gui2/main.py
--- calibre-2.9.0.orig/src/calibre/gui2/main.py 2014-11-09 20:09:54.081231882 +0800
+++ calibre-2.9.0/src/calibre/gui2/main.py 2014-11-09 20:15:48.193033844 +0800
@@ -37,8 +37,9 @@
help=_('Start minimized to system tray.'))
parser.add_option('-v', '--verbose', default=0, action='count',
help=_('Ignored, do not use. Present only for legacy reasons'))
- parser.add_option('--no-update-check', default=False, action='store_true',
- help=_('Do not check for updates'))
+ parser.add_option('--update-check', dest='no_update_check', default=True,
+ action='store_false',
+ help=_('Check for updates'))
parser.add_option('--ignore-plugins', default=False, action='store_true',
help=_('Ignore custom plugins, useful if you installed a plugin'
' that is preventing calibre from starting'))


@ -18,13 +18,13 @@
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
pname = "dbeaver"; pname = "dbeaver";
version = "21.2.2"; # When updating also update fetchedMavenDeps.sha256 version = "21.2.3"; # When updating also update fetchedMavenDeps.sha256
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "dbeaver"; owner = "dbeaver";
repo = "dbeaver"; repo = "dbeaver";
rev = version; rev = version;
sha256 = "6FQd7UGX19Ez/updybia/tzl+9GYyPPzPGFsV67Enq0="; sha256 = "0xu/uMMloCUuhKs392kn6qJzlobDNuvwlHGdS/gGAB8=";
}; };
fetchedMavenDeps = stdenv.mkDerivation { fetchedMavenDeps = stdenv.mkDerivation {
@ -50,7 +50,7 @@ stdenv.mkDerivation rec {
dontFixup = true; dontFixup = true;
outputHashAlgo = "sha256"; outputHashAlgo = "sha256";
outputHashMode = "recursive"; outputHashMode = "recursive";
outputHash = "VHOIK6sOAP+O9HicUiE2avLcppRzocPUf1XIcyuGw30="; outputHash = "7Sm1hAoi5xc4MLONOD8ySLLkpao0qmlMRRva/8zR210=";
}; };
nativeBuildInputs = [ nativeBuildInputs = [


@ -13,29 +13,24 @@
, tllist , tllist
, fcft , fcft
, enableCairo ? true , enableCairo ? true
, enablePNG ? true , withPNGBackend ? "libpng"
, enableSVG ? true , withSVGBackend ? "librsvg"
# Optional dependencies # Optional dependencies
, cairo , cairo
, librsvg , librsvg
, libpng , libpng
}: }:
let
# Courtesy of sternenseemann and FRidh, commit c9a7fdfcfb420be8e0179214d0d91a34f5974c54
mesonFeatureFlag = opt: b: "-D${opt}=${if b then "enabled" else "disabled"}";
in
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
pname = "fuzzel"; pname = "fuzzel";
version = "1.6.1"; version = "1.6.4";
src = fetchFromGitea { src = fetchFromGitea {
domain = "codeberg.org"; domain = "codeberg.org";
owner = "dnkl"; owner = "dnkl";
repo = "fuzzel"; repo = "fuzzel";
rev = version; rev = version;
sha256 = "sha256-JW5sAlTprSRIdFbmSaUreGtNccERgQMGEW+WCSscYQk="; sha256 = "sha256-wl3dO6EwLXWf0XtAIml1NlNRIvpIQJuq1pxLmo/pAUE=";
}; };
nativeBuildInputs = [ nativeBuildInputs = [
@ -54,15 +49,15 @@ stdenv.mkDerivation rec {
tllist tllist
fcft fcft
] ++ lib.optional enableCairo cairo ] ++ lib.optional enableCairo cairo
++ lib.optional enablePNG libpng ++ lib.optional (withPNGBackend == "libpng") libpng
++ lib.optional enableSVG librsvg; ++ lib.optional (withSVGBackend == "librsvg") librsvg;
mesonBuildType = "release"; mesonBuildType = "release";
mesonFlags = [ mesonFlags = [
(mesonFeatureFlag "enable-cairo" enableCairo) "-Denable-cairo=${if enableCairo then "enabled" else "disabled"}"
(mesonFeatureFlag "enable-png" enablePNG) "-Dpng-backend=${withPNGBackend}"
(mesonFeatureFlag "enable-svg" enableSVG) "-Dsvg-backend=${withSVGBackend}"
]; ];
meta = with lib; { meta = with lib; {
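The fuzzel hunk above trades the boolean enablePNG/enableSVG switches for string-valued withPNGBackend/withSVGBackend options that are forwarded directly to meson. A small sketch of how a consumer might use the new interface, assuming "none" is among the backend values fuzzel's meson options accept:

fuzzel.override {
  withPNGBackend = "none";     # assumed value: drops the libpng dependency entirely
  withSVGBackend = "librsvg";  # the default shown in the hunk above
}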


@ -1,12 +1,12 @@
{ lib, stdenv, fetchurl, appimageTools, makeWrapper, electron }: { lib, stdenv, fetchurl, appimageTools, makeWrapper, electron_13 }:
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
pname = "logseq"; pname = "logseq";
version = "0.3.5"; version = "0.4.2";
src = fetchurl { src = fetchurl {
url = "https://github.com/logseq/logseq/releases/download/${version}/logseq-linux-x64-${version}.AppImage"; url = "https://github.com/logseq/logseq/releases/download/${version}/logseq-linux-x64-${version}.AppImage";
sha256 = "ruJALAI0YQNwG8An5VzoJX06Qu/pXZ9zsrPZ7EH+5Pk="; sha256 = "BEDScQtGfkp74Gx3RKK8ItNQ9JD8AJkl1zdS/gZqyXk=";
name = "${pname}-${version}.AppImage"; name = "${pname}-${version}.AppImage";
}; };
@ -36,7 +36,7 @@ stdenv.mkDerivation rec {
''; '';
postFixup = '' postFixup = ''
makeWrapper ${electron}/bin/electron $out/bin/${pname} \ makeWrapper ${electron_13}/bin/electron $out/bin/${pname} \
--add-flags $out/share/${pname}/resources/app --add-flags $out/share/${pname}/resources/app
''; '';


@ -15,6 +15,11 @@
, webkitgtk , webkitgtk
, wrapGAppsHook , wrapGAppsHook
# check inputs
, xvfb-run
, nose
, flake8
# python dependencies # python dependencies
, dbus-python , dbus-python
, distro , distro
@ -46,7 +51,7 @@
let let
# See lutris/util/linux.py # See lutris/util/linux.py
binPath = lib.makeBinPath [ requiredTools = [
xrandr xrandr
pciutils pciutils
psmisc psmisc
@ -64,6 +69,8 @@ let
xorg.xkbcomp xorg.xkbcomp
]; ];
binPath = lib.makeBinPath requiredTools;
gstDeps = with gst_all_1; [ gstDeps = with gst_all_1; [
gst-libav gst-libav
gst-plugins-bad gst-plugins-bad
@ -76,13 +83,13 @@ let
in in
buildPythonApplication rec { buildPythonApplication rec {
pname = "lutris-original"; pname = "lutris-original";
version = "0.5.8.4"; version = "0.5.9.1";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "lutris"; owner = "lutris";
repo = "lutris"; repo = "lutris";
rev = "v${version}"; rev = "v${version}";
sha256 = "sha256-5ivXIgDyM9PRvuUhPFPgziXDvggcL+p65kI2yOaiS1M="; sha256 = "sha256-ykPJneCKbFKv0x/EDo9PkRb1LkMeFeYzTDmvE3ShNe0=";
}; };
nativeBuildInputs = [ wrapGAppsHook ]; nativeBuildInputs = [ wrapGAppsHook ];
@ -111,6 +118,20 @@ buildPythonApplication rec {
python_magic python_magic
]; ];
checkInputs = [ xvfb-run nose flake8 ] ++ requiredTools;
preCheck = "export HOME=$PWD";
checkPhase = ''
runHook preCheck
xvfb-run -s '-screen 0 800x600x24' make test
runHook postCheck
'';
# unhardcodes xrandr and fixes nosetests
# upstream in progress: https://github.com/lutris/lutris/pull/3754
patches = [
./fixes.patch
];
# avoid double wrapping # avoid double wrapping
dontWrapGApps = true; dontWrapGApps = true;
makeWrapperArgs = [ makeWrapperArgs = [
@ -121,8 +142,6 @@ buildPythonApplication rec {
# see https://github.com/NixOS/nixpkgs/issues/56943 # see https://github.com/NixOS/nixpkgs/issues/56943
strictDeps = false; strictDeps = false;
preCheck = "export HOME=$PWD";
meta = with lib; { meta = with lib; {
homepage = "https://lutris.net"; homepage = "https://lutris.net";
description = "Open Source gaming platform for GNU/Linux"; description = "Open Source gaming platform for GNU/Linux";


@ -0,0 +1,67 @@
diff --git a/Makefile b/Makefile
index 821a9500..75affa77 100644
--- a/Makefile
+++ b/Makefile
@@ -25,12 +25,12 @@ release: build-source upload upload-ppa
test:
rm tests/fixtures/pga.db -f
- nosetests3
+ nosetests
cover:
rm tests/fixtures/pga.db -f
rm tests/coverage/ -rf
- nosetests3 --with-coverage --cover-package=lutris --cover-html --cover-html-dir=tests/coverage
+ nosetests --with-coverage --cover-package=lutris --cover-html --cover-html-dir=tests/coverage
pgp-renew:
osc signkey --extend home:strycore
diff --git a/lutris/util/graphics/xrandr.py b/lutris/util/graphics/xrandr.py
index f788c94c..5544dbe9 100644
--- a/lutris/util/graphics/xrandr.py
+++ b/lutris/util/graphics/xrandr.py
@@ -5,6 +5,7 @@ from collections import namedtuple
from lutris.util.log import logger
from lutris.util.system import read_process_output
+from lutris.util.linux import LINUX_SYSTEM
Output = namedtuple("Output", ("name", "mode", "position", "rotation", "primary", "rate"))
@@ -12,7 +13,7 @@ Output = namedtuple("Output", ("name", "mode", "position", "rotation", "primary"
def _get_vidmodes():
"""Return video modes from XrandR"""
logger.debug("Retrieving video modes from XrandR")
- return read_process_output(["xrandr"]).split("\n")
+ return read_process_output([LINUX_SYSTEM.get("xrandr")]).split("\n")
def get_outputs(): # pylint: disable=too-many-locals
@@ -76,7 +77,7 @@ def turn_off_except(display):
for output in get_outputs():
if output.name != display:
logger.info("Turning off %s", output[0])
- subprocess.Popen(["xrandr", "--output", output.name, "--off"])
+ subprocess.Popen([LINUX_SYSTEM.get("xrandr"), "--output", output.name, "--off"])
def get_resolutions():
@@ -111,7 +112,7 @@ def change_resolution(resolution):
logger.warning("Resolution %s doesn't exist.", resolution)
else:
logger.info("Changing resolution to %s", resolution)
- subprocess.Popen(["xrandr", "-s", resolution])
+ subprocess.Popen([LINUX_SYSTEM.get("xrandr"), "-s", resolution])
else:
for display in resolution:
logger.debug("Switching to %s on %s", display.mode, display.name)
@@ -128,7 +129,7 @@ def change_resolution(resolution):
logger.info("Switching resolution of %s to %s", display.name, display.mode)
subprocess.Popen(
[
- "xrandr",
+ LINUX_SYSTEM.get("xrandr"),
"--output",
display.name,
"--mode",


@ -32,7 +32,7 @@ stdenv.mkDerivation rec {
]; ];
meta = with lib; { meta = with lib; {
homepage = "https://github.com/alexays/waybar"; homepage = "https://hg.sr.ht/~scoopta/rootbar";
description = "A bar for Wayland WMs"; description = "A bar for Wayland WMs";
longDescription = '' longDescription = ''
Root Bar is a bar for wlroots based wayland compositors such as sway and Root Bar is a bar for wlroots based wayland compositors such as sway and


@ -15,14 +15,14 @@
python3Packages.buildPythonApplication rec { python3Packages.buildPythonApplication rec {
pname = "themechanger"; pname = "themechanger";
version = "0.10.1"; version = "0.10.2";
format = "other"; format = "other";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "ALEX11BR"; owner = "ALEX11BR";
repo = "ThemeChanger"; repo = "ThemeChanger";
rev = "v${version}"; rev = "v${version}";
sha256 = "1bxxn5bmdwaxfvyh6z2rxklwnxgvv6kh5y9m8r1k7d0n4msx1x2h"; sha256 = "00z1npm3lpvf0wc9z2v58pc4nxxh8x9m158kxf1k0qlz536jrzqr";
}; };
nativeBuildInputs = [ nativeBuildInputs = [


@ -6,15 +6,13 @@
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
pname = "upwork"; pname = "upwork";
version = "5.6.8.0"; version = "5.6.9.3";
src = fetchurl { src = fetchurl {
url = "https://upwork-usw2-desktopapp.upwork.com/binaries/v5_6_8_0_836f43f6f6be4149/${pname}_${version}_amd64.deb"; url = "https://upwork-usw2-desktopapp.upwork.com/binaries/v5_6_9_3_10c2eb9781db4d7f/${pname}_${version}_amd64.deb";
sha256 = "b3a52f773d633837882dc107b206006325722ca5d5d5a1e8bdf5453f872e1b6f"; sha256 = "0b884aa6992d438cee09f58673780218a00a823e03c114b0c753947020c0a327";
}; };
dontWrapGApps = true;
nativeBuildInputs = [ nativeBuildInputs = [
dpkg dpkg
wrapGAppsHook wrapGAppsHook
@ -31,6 +29,10 @@ stdenv.mkDerivation rec {
libPath = lib.makeLibraryPath buildInputs; libPath = lib.makeLibraryPath buildInputs;
dontWrapGApps = true;
dontBuild = true;
dontConfigure = true;
unpackPhase = '' unpackPhase = ''
dpkg-deb -x ${src} ./ dpkg-deb -x ${src} ./
''; '';


@ -1,8 +1,8 @@
{ {
"stable": { "stable": {
"version": "94.0.4606.81", "version": "95.0.4638.54",
"sha256": "16755mfqxxmvslm9ix060safrnml91ckj5p85960jj5g5hmslwbh", "sha256": "1zb1009gg9962axn2l1krycz7ml20i8z2n3ka2psxpg68pbqivry",
"sha256bin64": "1d3z5np6b6jax7afak7f0yh76kmmdggdjlrzwyhy8hgrv7c7rsdz", "sha256bin64": "0mf9jfzwz6nkz1yg8lndz1gmsvmdh1rxhqkv0vd9nr04h5x9b41a",
"deps": { "deps": {
"gn": { "gn": {
"version": "2021-08-11", "version": "2021-08-11",
@ -12,15 +12,15 @@
} }
}, },
"chromedriver": { "chromedriver": {
"version": "94.0.4606.61", "version": "95.0.4638.17",
"sha256_linux": "1l7ls8qqqd9q3a20a459q40jd9djzf67s8qfdmfj44vwgidiw0fj", "sha256_linux": "0jqq2h3rjancq9gk4w29gcr4b3z4irnkbvcj97fdsnksck9y5h2q",
"sha256_darwin": "1b43agdd6vw5zarrbbk1sgmdr6n3d9cdl4dcikk304yplh58d49v" "sha256_darwin": "0vl73i28xq3z5njg4287j08pb2sfd28amc8hkm4ddq5dgqpim0l8"
} }
}, },
"beta": { "beta": {
"version": "95.0.4638.49", "version": "95.0.4638.54",
"sha256": "11fiq6p2d99hl166pf39g83pk7m7ibi1zz19wj7qmcc7ql7006jz", "sha256": "1zb1009gg9962axn2l1krycz7ml20i8z2n3ka2psxpg68pbqivry",
"sha256bin64": "04s81fnr01jq74fpl5n6jg8iw5iw6sdwyz40zja68h1crxh5d6d6", "sha256bin64": "06d0kjnrv8z74icc6nahllxbwn3xxwn0vgc7ss47402zrqig8lch",
"deps": { "deps": {
"gn": { "gn": {
"version": "2021-08-11", "version": "2021-08-11",


@ -1,9 +1,9 @@
{ lib, buildGoModule, fetchFromGitHub, fetchzip, installShellFiles }: { lib, buildGoModule, fetchFromGitHub, fetchzip, installShellFiles }:
let let
version = "0.17.2"; version = "0.18.3";
sha256 = "0kcdx4ldnshk4pqq37a7p08xr5cpsjrbrifk9fc3jbiw39m09mhf"; sha256 = "0nvvjc0ml1irn7vxyq4m43qimp128cx8hczk21y5m39i2rg4yzx4";
manifestsSha256 = "1v6md4xh4sq1vmb5a8qvb66l101fq75lmv2s4j2z3walssb5mmgj"; manifestsSha256 = "1qgw9ij0b85vvdx03wmbbwanhq1hf69wphy58lsqwf33rdq0bb1m";
manifests = fetchzip { manifests = fetchzip {
url = "https://github.com/fluxcd/flux2/releases/download/v${version}/manifests.tar.gz"; url = "https://github.com/fluxcd/flux2/releases/download/v${version}/manifests.tar.gz";
@ -23,7 +23,7 @@ buildGoModule rec {
inherit sha256; inherit sha256;
}; };
vendorSha256 = "sha256-glifJ0V3RwS7E6EWZsCa88m0MK883RhPSXCsAmMggVs="; vendorSha256 = "0vgi5cnvmc98xa2ibpgvvqlc90hf3gj3v17yqncid596ig3dnqsc";
nativeBuildInputs = [ installShellFiles ]; nativeBuildInputs = [ installShellFiles ];


@ -20,11 +20,11 @@ setKV () {
setKV version ${VERSION} setKV version ${VERSION}
setKV sha256 ${SHA256} setKV sha256 ${SHA256}
setKV manifestsSha256 ${SPEC_SHA256} setKV manifestsSha256 ${SPEC_SHA256}
setKV vendorSha256 "" setKV vendorSha256 "0000000000000000000000000000000000000000000000000000" # The same as lib.fakeSha256
cd ../../../../../ cd ../../../../../
set +e set +e
VENDOR_SHA256=$(nix-build --no-out-link -A fluxcd 2>&1 | grep "got:" | cut -d':' -f2 | sed 's| ||g') VENDOR_SHA256=$(nix-build --no-out-link -A fluxcd 2>&1 >/dev/null | grep "got:" | cut -d':' -f2 | sed 's| ||g')
set -e set -e
cd - > /dev/null cd - > /dev/null
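The fluxcd update script above now seeds vendorSha256 with the all-zero hash (the value of lib.fakeSha256) and scrapes the real hash from the "got:" line of the failing nix-build. A minimal sketch, with hypothetical package and repository names, of the buildGoModule expression this trick is aimed at:

{ lib, buildGoModule, fetchFromGitHub }:

buildGoModule rec {
  pname = "example-tool";        # hypothetical package
  version = "1.2.3";
  src = fetchFromGitHub {
    owner = "example-org";       # hypothetical repository
    repo = "example-tool";
    rev = "v${version}";
    sha256 = lib.fakeSha256;     # placeholder; the update script fills in the real source hash
  };
  # With the fake hash, the fixed-output vendor derivation fails verification
  # and the error reports the hash that was actually produced ("got: ...");
  # the script greps that value and writes it back into the expression.
  vendorSha256 = lib.fakeSha256;
}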


@ -2,13 +2,13 @@
buildGoModule rec { buildGoModule rec {
pname = "helmfile"; pname = "helmfile";
version = "0.140.1"; version = "0.141.0";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "roboll"; owner = "roboll";
repo = "helmfile"; repo = "helmfile";
rev = "v${version}"; rev = "v${version}";
sha256 = "sha256-QnGu/EGzgWva/EA6gKrDzWgjX6OrfZKzWIhRqKbexjU="; sha256 = "sha256-UwjV3xgnZa0Emzw4FP/+gHh1ES6MTihrrlGKUBH6O9Q=";
}; };
vendorSha256 = "sha256-HKHMeDnIDmQ7AjuS2lYCMphTHGD1JgQuBYDJe2+PEk4="; vendorSha256 = "sha256-HKHMeDnIDmQ7AjuS2lYCMphTHGD1JgQuBYDJe2+PEk4=";


@ -243,6 +243,9 @@ stdenv.mkDerivation rec {
pname = "k3s"; pname = "k3s";
version = k3sVersion; version = k3sVersion;
# `src` here is a workaround for the updateScript bot. It couldn't be empty.
src = builtins.filterSource (path: type: false) ./.;
# Important utilities used by the kubelet, see # Important utilities used by the kubelet, see
# https://github.com/kubernetes/kubernetes/issues/26093#issuecomment-237202494 # https://github.com/kubernetes/kubernetes/issues/26093#issuecomment-237202494
# Note the list in that issue is stale and some aren't relevant for k3s. # Note the list in that issue is stale and some aren't relevant for k3s.


@ -12,7 +12,7 @@ LATEST_TAG_RAWFILE=${WORKDIR}/latest_tag.json
curl --silent ${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} \ curl --silent ${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} \
https://api.github.com/repos/k3s-io/k3s/releases > ${LATEST_TAG_RAWFILE} https://api.github.com/repos/k3s-io/k3s/releases > ${LATEST_TAG_RAWFILE}
LATEST_TAG_NAME=$(jq 'map(.tag_name)' ${LATEST_TAG_RAWFILE} | grep -v -e rc -e engine | sed 's/["|,| ]//g' | sort -r | head -n1) LATEST_TAG_NAME=$(jq 'map(.tag_name)' ${LATEST_TAG_RAWFILE} | grep -v -e rc -e engine | sed 's/["|,| ]//g' | sort -V -r | head -n1)
K3S_VERSION=$(echo ${LATEST_TAG_NAME} | sed 's/^v//') K3S_VERSION=$(echo ${LATEST_TAG_NAME} | sed 's/^v//')
K3S_COMMIT=$(curl --silent ${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} \ K3S_COMMIT=$(curl --silent ${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} \


@ -15,6 +15,8 @@ buildGoPackage {
goPackagePath = "github.com/bitnami/kubecfg"; goPackagePath = "github.com/bitnami/kubecfg";
ldflags = [ "-s" "-w" "-X main.version=v${version}" ];
meta = { meta = {
description = "A tool for managing Kubernetes resources as code"; description = "A tool for managing Kubernetes resources as code";
homepage = "https://github.com/bitnami/kubecfg"; homepage = "https://github.com/bitnami/kubecfg";


@ -5,13 +5,11 @@ set -x -eu -o pipefail
cd $(dirname "$0") cd $(dirname "$0")
TAG=$(curl ${GITHUB_TOKEN:+" -u \":$GITHUB_TOKEN\""} \ VERSION=$(curl ${GITHUB_TOKEN:+" -u \":$GITHUB_TOKEN\""} \
--silent https://api.github.com/repos/linkerd/linkerd2/releases | \ --silent https://api.github.com/repos/linkerd/linkerd2/releases | \
jq 'map(.tag_name)' | grep edge | sed 's/["|,| ]//g' | sort -r | head -n1) jq 'map(.tag_name)' | grep edge | sed 's/["|,| ]//g' | sed 's/edge-//' | sort -V -r | head -n1)
VERSION=$(echo ${TAG} | sed 's/^edge-//') SHA256=$(nix-prefetch-url --quiet --unpack https://github.com/linkerd/linkerd2/archive/refs/tags/edge-${VERSION}.tar.gz)
SHA256=$(nix-prefetch-url --quiet --unpack https://github.com/linkerd/linkerd2/archive/refs/tags/${TAG}.tar.gz)
setKV () { setKV () {
sed -i "s|$1 = \".*\"|$1 = \"${2:-}\"|" ./edge.nix sed -i "s|$1 = \".*\"|$1 = \"${2:-}\"|" ./edge.nix
@ -19,11 +17,11 @@ setKV () {
setKV version ${VERSION} setKV version ${VERSION}
setKV sha256 ${SHA256} setKV sha256 ${SHA256}
setKV vendorSha256 "" # Necessary to force clean build. setKV vendorSha256 "0000000000000000000000000000000000000000000000000000" # Necessary to force clean build.
cd ../../../../../ cd ../../../../../
set +e set +e
VENDOR_SHA256=$(nix-build --no-out-link -A linkerd_edge 2>&1 | grep "got:" | cut -d':' -f2 | sed 's| ||g') VENDOR_SHA256=$(nix-build --no-out-link -A linkerd_edge 2>&1 >/dev/null | grep "got:" | cut -d':' -f2 | sed 's| ||g')
set -e set -e
cd - > /dev/null cd - > /dev/null


@ -5,13 +5,11 @@ set -x -eu -o pipefail
cd $(dirname "$0") cd $(dirname "$0")
TAG=$(curl ${GITHUB_TOKEN:+" -u \":$GITHUB_TOKEN\""} \ VERSION=$(curl ${GITHUB_TOKEN:+" -u \":$GITHUB_TOKEN\""} \
--silent https://api.github.com/repos/linkerd/linkerd2/releases/latest | \ --silent https://api.github.com/repos/linkerd/linkerd2/releases | \
jq -r '.tag_name') jq 'map(.tag_name)' | grep stable | sed 's/["|,| ]//g' | sed 's/stable-//' | sort -V -r | head -n1)
VERSION=$(echo ${TAG} | sed 's/^stable-//') SHA256=$(nix-prefetch-url --quiet --unpack https://github.com/linkerd/linkerd2/archive/refs/tags/stable-${VERSION}.tar.gz)
SHA256=$(nix-prefetch-url --quiet --unpack https://github.com/linkerd/linkerd2/archive/refs/tags/${TAG}.tar.gz)
setKV () { setKV () {
sed -i "s|$1 = \".*\"|$1 = \"${2:-}\"|" ./default.nix sed -i "s|$1 = \".*\"|$1 = \"${2:-}\"|" ./default.nix
@ -19,11 +17,11 @@ setKV () {
setKV version ${VERSION} setKV version ${VERSION}
setKV sha256 ${SHA256} setKV sha256 ${SHA256}
setKV vendorSha256 "" # Necessary to force clean build. setKV vendorSha256 "0000000000000000000000000000000000000000000000000000" # Necessary to force clean build.
cd ../../../../../ cd ../../../../../
set +e set +e
VENDOR_SHA256=$(nix-build --no-out-link -A linkerd 2>&1 | grep "got:" | cut -d':' -f2 | sed 's| ||g') VENDOR_SHA256=$(nix-build --no-out-link -A linkerd 2>&1 >/dev/null | grep "got:" | cut -d':' -f2 | sed 's| ||g')
set -e set -e
cd - > /dev/null cd - > /dev/null


@ -1,4 +1,5 @@
{ pkgs { nixosTests
, pkgs
, poetry2nix , poetry2nix
, lib , lib
, overrides ? (self: super: {}) , overrides ? (self: super: {})
@ -59,10 +60,17 @@ let
} }
).python; ).python;
in interpreter.pkgs.nixops.withPlugins(ps: [ pkg = interpreter.pkgs.nixops.withPlugins(ps: [
ps.nixops-encrypted-links ps.nixops-encrypted-links
ps.nixops-virtd ps.nixops-virtd
ps.nixops-aws ps.nixops-aws
ps.nixops-gcp ps.nixops-gcp
ps.nixopsvbox ps.nixopsvbox
]) ]) // rec {
# Workaround for https://github.com/NixOS/nixpkgs/issues/119407
# TODO after #119407: Use .overrideAttrs(pkg: old: { passthru.tests = .....; })
tests = nixosTests.nixops.unstable.override { nixopsPkg = pkg; };
# Not strictly necessary, but probably expected somewhere; part of the workaround:
passthru.tests = tests;
};
in pkg
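The TODO in the nixops hunk above suggests moving the tests onto the package itself with overrideAttrs once issue 119407 is resolved. One possible reading of that TODO, sketched in terms of the pkg and nixosTests bindings already in scope above (not something the commit adds):

pkg.overrideAttrs (old: {
  passthru = (old.passthru or { }) // {
    tests = nixosTests.nixops.unstable.override { nixopsPkg = pkg; };
  };
})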


@ -2,13 +2,13 @@
buildGoModule rec { buildGoModule rec {
pname = "terragrunt"; pname = "terragrunt";
version = "0.33.0"; version = "0.35.1";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "gruntwork-io"; owner = "gruntwork-io";
repo = pname; repo = pname;
rev = "v${version}"; rev = "v${version}";
sha256 = "sha256-FvgB0jG6PEvhrT9Au/Uv9XSgKx+zNw8zETpg2dJ6QX4="; sha256 = "sha256-DCum3vCrN530Z0VW0WEoLtjN+kre/mU9O+sJxckZgfc=";
}; };
vendorSha256 = "sha256-y84EFmoJS4SeA5YFIVFU0iWa5NnjU5yvOj7OFE+jGN0="; vendorSha256 = "sha256-y84EFmoJS4SeA5YFIVFU0iWa5NnjU5yvOj7OFE+jGN0=";


@ -2,14 +2,14 @@
python3Packages.buildPythonApplication rec { python3Packages.buildPythonApplication rec {
pname = "flexget"; pname = "flexget";
version = "3.1.139"; version = "3.1.140";
# Fetch from GitHub in order to use `requirements.in` # Fetch from GitHub in order to use `requirements.in`
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "flexget"; owner = "flexget";
repo = "flexget"; repo = "flexget";
rev = "v${version}"; rev = "v${version}";
sha256 = "0gnj89q5mv5qiy6zsp85sswmwzm0y73nffjj3vrccx5lmxd955nv"; sha256 = "15ngmpqqx902l7gxg2lb6h8q8vmjk247jbqhc92l1apr1imjqcc5";
}; };
postPatch = '' postPatch = ''


@ -1,20 +1,15 @@
{ lib, stdenv, bitlbee }: { lib, runCommandLocal, bitlbee }:
with lib; with lib;
plugins: plugins: runCommandLocal "bitlbee-plugins" {
inherit plugins;
stdenv.mkDerivation {
inherit bitlbee plugins;
name = "bitlbee-plugins";
buildInputs = [ bitlbee plugins ]; buildInputs = [ bitlbee plugins ];
phases = [ "installPhase" ]; } ''
installPhase = ''
mkdir -p $out/lib/bitlbee mkdir -p $out/lib/bitlbee
for plugin in $plugins; do for plugin in $plugins; do
for thing in $(ls $plugin/lib/bitlbee); do for thing in $(ls $plugin/lib/bitlbee); do
ln -s $plugin/lib/bitlbee/$thing $out/lib/bitlbee/ ln -s $plugin/lib/bitlbee/$thing $out/lib/bitlbee/
done done
done done
''; ''
}
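The bitlbee change above replaces a phases-based mkDerivation with runCommandLocal, whose attribute-set argument becomes the build environment and whose final string is the entire build script. A self-contained sketch of the same pattern with an assumed example package, just to make the argument order explicit:

{ runCommandLocal, hello }:

runCommandLocal "hello-bin-farm" { buildInputs = [ hello ]; } ''
  # the trailing string is the whole build script; no phases are declared
  mkdir -p $out/bin
  ln -s ${hello}/bin/hello $out/bin/hello
''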


@ -4,6 +4,7 @@
, makeWrapper , makeWrapper
, makeDesktopItem , makeDesktopItem
, mkYarnPackage , mkYarnPackage
, fetchYarnDeps
, electron , electron
, element-web , element-web
, callPackage , callPackage
@ -13,27 +14,28 @@
, useWayland ? false , useWayland ? false
}: }:
# Notes for maintainers:
# * versions of `element-web` and `element-desktop` should be kept in sync.
# * the Yarn dependency expression must be updated with `./update-element-desktop.sh <git release tag>`
let let
pinData = (builtins.fromJSON (builtins.readFile ./pin.json));
executableName = "element-desktop"; executableName = "element-desktop";
version = "1.9.2"; electron_exec = if stdenv.isDarwin then "${electron}/Applications/Electron.app/Contents/MacOS/Electron" else "${electron}/bin/electron";
in
mkYarnPackage rec {
pname = "element-desktop";
inherit (pinData) version;
name = "${pname}-${version}";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "vector-im"; owner = "vector-im";
repo = "element-desktop"; repo = "element-desktop";
rev = "v${version}"; rev = "v${version}";
sha256 = "sha256-F1uyyBbs+U7tQzRtn+p923Z/BY8Nwxr/JTMYwsak8W8="; sha256 = pinData.desktopSrcHash;
}; };
electron_exec = if stdenv.isDarwin then "${electron}/Applications/Electron.app/Contents/MacOS/Electron" else "${electron}/bin/electron";
in
mkYarnPackage rec {
name = "element-desktop-${version}";
inherit version src;
packageJSON = ./element-desktop-package.json; packageJSON = ./element-desktop-package.json;
yarnNix = ./element-desktop-yarndeps.nix; offlineCache = fetchYarnDeps {
yarnLock = src + "/yarn.lock";
sha256 = pinData.desktopYarnHash;
};
nativeBuildInputs = [ makeWrapper ]; nativeBuildInputs = [ makeWrapper ];
@ -102,6 +104,8 @@ mkYarnPackage rec {
''; '';
}; };
passthru.updateScript = ./update.sh;
meta = with lib; { meta = with lib; {
description = "A feature-rich client for Matrix.org"; description = "A feature-rich client for Matrix.org";
homepage = "https://element.io/"; homepage = "https://element.io/";
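Pulling the element-desktop hunks above together: version, source hash, and yarn hash now all come from pin.json, and the generated element-desktop-yarndeps.nix is replaced by a hash-pinned fetchYarnDeps call. A condensed sketch of that wiring, restating the change rather than adding to it:

{ fetchYarnDeps, fetchFromGitHub }:

let
  # pin.json is the single file the update script rewrites
  pinData = builtins.fromJSON (builtins.readFile ./pin.json);
  src = fetchFromGitHub {
    owner = "vector-im";
    repo = "element-desktop";
    rev = "v${pinData.version}";
    sha256 = pinData.desktopSrcHash;
  };
in
fetchYarnDeps {
  # hash-pinned offline cache instead of a generated yarn.nix expression
  yarnLock = src + "/yarn.lock";
  sha256 = pinData.desktopYarnHash;
}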


@ -1,9 +1,7 @@
{ lib, stdenv, fetchurl, writeText, jq, conf ? {} }: { lib, stdenv, fetchurl, writeText, jq, conf ? {} }:
# Note for maintainers:
# Versions of `element-web` and `element-desktop` should be kept in sync.
let let
pinData = (builtins.fromJSON (builtins.readFile ./pin.json));
noPhoningHome = { noPhoningHome = {
disable_guests = true; # disable automatic guest account registration at matrix.org disable_guests = true; # disable automatic guest account registration at matrix.org
piwik = false; # disable analytics piwik = false; # disable analytics
@ -12,11 +10,11 @@ let
in stdenv.mkDerivation rec { in stdenv.mkDerivation rec {
pname = "element-web"; pname = "element-web";
version = "1.9.2"; inherit (pinData) version;
src = fetchurl { src = fetchurl {
url = "https://github.com/vector-im/element-web/releases/download/v${version}/element-v${version}.tar.gz"; url = "https://github.com/vector-im/element-web/releases/download/v${version}/element-v${version}.tar.gz";
sha256 = "sha256-Qkn0vyZGvBAeOfTzxydWzjFQJwY39INAhwZNX4xsM7U="; sha256 = pinData.webHash;
}; };
installPhase = '' installPhase = ''


@ -1,15 +1,18 @@
{ lib, stdenv, fetchFromGitHub, nodejs-14_x, python3, callPackage { lib, stdenv, fetchFromGitHub, nodejs-14_x, python3, callPackage
, fixup_yarn_lock, yarn, pkg-config, libsecret, xcbuild, Security, AppKit }: , fixup_yarn_lock, yarn, pkg-config, libsecret, xcbuild, Security, AppKit, fetchYarnDeps }:
stdenv.mkDerivation rec { let
pinData = (builtins.fromJSON (builtins.readFile ./pin.json));
in stdenv.mkDerivation rec {
pname = "keytar"; pname = "keytar";
version = "7.7.0"; inherit (pinData) version;
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "atom"; owner = "atom";
repo = "node-keytar"; repo = "node-keytar";
rev = "v${version}"; rev = "v${version}";
sha256 = "0ajvr4kjbyw2shb1y14c0dsghdlnq30f19hk2sbzj6n9y3xa3pmi"; sha256 = pinData.srcHash;
}; };
nativeBuildInputs = [ nodejs-14_x python3 yarn pkg-config ] nativeBuildInputs = [ nodejs-14_x python3 yarn pkg-config ]
@ -19,7 +22,10 @@ stdenv.mkDerivation rec {
npm_config_nodedir = nodejs-14_x; npm_config_nodedir = nodejs-14_x;
yarnOfflineCache = (callPackage ./yarn.nix {}).offline_cache; yarnOfflineCache = fetchYarnDeps {
yarnLock = ./yarn.lock;
sha256 = pinData.yarnHash;
};
buildPhase = '' buildPhase = ''
cp ${./yarn.lock} ./yarn.lock cp ${./yarn.lock} ./yarn.lock


@ -0,0 +1,5 @@
{
"version": "7.7.0",
"srcHash": "sd6h+vDJGvmXFhOm4MDAljb4dAOMBB8W1IL7JSfJWyo=",
"yarnHash": "1m75hvl06mcj260hicbmv75p94h73gw5d24zpm5wxwc0q8v8wzfl"
}


@ -1,19 +1,38 @@
#!/usr/bin/env nix-shell #!/usr/bin/env nix-shell
#!nix-shell -I nixpkgs=../ -i bash -p wget yarn2nix yarn #!nix-shell -I nixpkgs=../../../../../../ -i bash -p wget prefetch-yarn-deps yarn
set -euo pipefail if [ "$#" -gt 1 ] || [[ "$1" == -* ]]; then
echo "Regenerates packaging data for the seshat package."
if [ "$#" -ne 1 ] || [[ "$1" == -* ]]; then echo "Usage: $0 [git release tag]"
echo "Regenerates the Yarn dependency lock files."
echo "Usage: $0 <git release tag>"
exit 1 exit 1
fi fi
SRC="https://raw.githubusercontent.com/atom/node-keytar/$1" version="$1"
set -euo pipefail
if [ -z "$version" ]; then
version="$(wget -O- "https://api.github.com/repos/atom/node-keytar/releases?per_page=1" | jq -r '.[0].tag_name')"
fi
# strip leading "v"
version="${version#v}"
SRC="https://raw.githubusercontent.com/atom/node-keytar/v$version"
wget "$SRC/package-lock.json" wget "$SRC/package-lock.json"
wget "$SRC/package.json" wget "$SRC/package.json"
rm -f yarn.lock rm -f yarn.lock
yarn import yarn import
yarn2nix > yarn.nix
rm -rf node_modules package.json package-lock.json rm -rf node_modules package.json package-lock.json
yarn_hash=$(prefetch-yarn-deps yarn.lock)
src_hash=$(nix-prefetch-github atom node-keytar --rev v${version} | jq -r .sha256)
cat > pin.json << EOF
{
"version": "$version",
"srcHash": "$src_hash",
"yarnHash": "$yarn_hash"
}
EOF


@ -0,0 +1,6 @@
{
"version": "1.9.2",
"desktopSrcHash": "F1uyyBbs+U7tQzRtn+p923Z/BY8Nwxr/JTMYwsak8W8=",
"desktopYarnHash": "0iwbszhaxaxggymixljzjb2gqrsij67fwakxhd3yj9g1zds49ghh",
"webHash": "1d9kdj65yk86hx087x1p0qkm0cffaqkwgwzl74g11g264szz8ja2"
}


@ -1,14 +1,17 @@
{ lib, stdenv, rustPlatform, fetchFromGitHub, callPackage, sqlcipher, nodejs-14_x, python3, yarn, fixup_yarn_lock, CoreServices }: { lib, stdenv, rustPlatform, fetchFromGitHub, callPackage, sqlcipher, nodejs-14_x, python3, yarn, fixup_yarn_lock, CoreServices, fetchYarnDeps }:
rustPlatform.buildRustPackage rec { let
pinData = (builtins.fromJSON (builtins.readFile ./pin.json));
in rustPlatform.buildRustPackage rec {
pname = "seshat-node"; pname = "seshat-node";
version = "2.3.0"; inherit (pinData) version;
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "matrix-org"; owner = "matrix-org";
repo = "seshat"; repo = "seshat";
rev = version; rev = version;
sha256 = "0zigrz59mhih9asmbbh38z2fg0sii2342q6q0500qil2a0rssai7"; sha256 = pinData.srcHash;
}; };
sourceRoot = "source/seshat-node/native"; sourceRoot = "source/seshat-node/native";
@ -18,7 +21,10 @@ rustPlatform.buildRustPackage rec {
npm_config_nodedir = nodejs-14_x; npm_config_nodedir = nodejs-14_x;
yarnOfflineCache = (callPackage ./yarn.nix {}).offline_cache; yarnOfflineCache = fetchYarnDeps {
yarnLock = src + "/seshat-node/yarn.lock";
sha256 = pinData.yarnHash;
};
buildPhase = '' buildPhase = ''
cd .. cd ..
@ -42,5 +48,5 @@ rustPlatform.buildRustPackage rec {
cp -r . $out cp -r . $out
''; '';
cargoSha256 = "0habjf85mzqxwf8k15msm4cavd7ldq4zpxddkwd4inl2lkvlffqj"; cargoSha256 = pinData.cargoHash;
} }


@ -0,0 +1,6 @@
{
"version": "2.3.0",
"srcHash": "JyqtM1CCRgxAAdhgQYaIUYPnxEcDrlW1SjDCmsrPL34=",
"yarnHash": "0bym6i1f0i3bs4fncbiwzwmbxp7j14rz1v4kyvsl02qs97qw1jac",
"cargoHash": "sha256-EjtH96SC2kgan631+wlu9LStGKm6ljCR4x3/WpCTS0E="
}


@ -1,16 +1,49 @@
#!/usr/bin/env nix-shell #!/usr/bin/env nix-shell
#!nix-shell -I nixpkgs=../ -i bash -p wget yarn2nix #!nix-shell -I nixpkgs=../../../../../../ -i bash -p wget prefetch-yarn-deps yarn nix-prefetch
set -euo pipefail if [ "$#" -gt 1 ] || [[ "$1" == -* ]]; then
echo "Regenerates packaging data for the seshat package."
if [ "$#" -ne 1 ] || [[ "$1" == -* ]]; then echo "Usage: $0 [git release tag]"
echo "Regenerates the Yarn dependency lock files."
echo "Usage: $0 <git release tag>"
exit 1 exit 1
fi fi
SRC="https://raw.githubusercontent.com/matrix-org/seshat/$1" version="$1"
set -euo pipefail
if [ -z "$version" ]; then
version="$(wget -O- "https://api.github.com/repos/matrix-org/seshat/tags" | jq -r '.[] | .name' | sort --version-sort | tail -1)"
fi
SRC="https://raw.githubusercontent.com/matrix-org/seshat/$version"
tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT
pushd $tmpdir
wget "$SRC/seshat-node/yarn.lock" wget "$SRC/seshat-node/yarn.lock"
yarn2nix > yarn.nix yarn_hash=$(prefetch-yarn-deps yarn.lock)
rm yarn.lock popd
src_hash=$(nix-prefetch-github matrix-org seshat --rev ${version} | jq -r .sha256)
cat > pin.json << EOF
{
"version": "$version",
"srcHash": "$src_hash",
"yarnHash": "$yarn_hash",
"cargoHash": "0000000000000000000000000000000000000000000000000000"
}
EOF
cargo_hash=$(nix-prefetch "{ sha256 }: (import ../../../../../.. {}).element-desktop.seshat.cargoDeps")
cat > pin.json << EOF
{
"version": "$version",
"srcHash": "$src_hash",
"yarnHash": "$yarn_hash",
"cargoHash": "$cargo_hash"
}
EOF


@ -1,17 +0,0 @@
#!/usr/bin/env nix-shell
#!nix-shell -I nixpkgs=../../../../../ -i bash -p wget yarn2nix nix-prefetch-git
set -euo pipefail
if [ "$#" -ne 1 ] || [[ "$1" == -* ]]; then
echo "Regenerates the Yarn dependency lock files for the element-desktop package."
echo "Usage: $0 <git release tag>"
exit 1
fi
RIOT_WEB_SRC="https://raw.githubusercontent.com/vector-im/element-desktop/$1"
wget "$RIOT_WEB_SRC/package.json" -O element-desktop-package.json
wget "$RIOT_WEB_SRC/yarn.lock" -O element-desktop-yarndeps.lock
yarn2nix --no-patch --lockfile=element-desktop-yarndeps.lock > element-desktop-yarndeps.nix
rm element-desktop-yarndeps.lock


@ -0,0 +1,43 @@
#!/usr/bin/env nix-shell
#!nix-shell -I nixpkgs=../../../../../ -i bash -p nix wget prefetch-yarn-deps nix-prefetch-github
if [ "$#" -gt 1 ] || [[ "$1" == -* ]]; then
echo "Regenerates packaging data for the element packages."
echo "Usage: $0 [git release tag]"
exit 1
fi
version="$1"
set -euo pipefail
if [ -z "$version" ]; then
version="$(wget -O- "https://api.github.com/repos/vector-im/element-desktop/releases?per_page=1" | jq -r '.[0].tag_name')"
fi
# strip leading "v"
version="${version#v}"
desktop_src="https://raw.githubusercontent.com/vector-im/element-desktop/v$version"
desktop_src_hash=$(nix-prefetch-github vector-im element-desktop --rev v${version} | jq -r .sha256)
web_hash=$(nix-prefetch-url "https://github.com/vector-im/element-web/releases/download/v$version/element-v$version.tar.gz")
wget "$desktop_src/package.json" -O element-desktop-package.json
tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT
pushd $tmpdir
wget "$desktop_src/yarn.lock"
desktop_yarn_hash=$(prefetch-yarn-deps yarn.lock)
popd
cat > pin.json << EOF
{
"version": "$version",
"desktopSrcHash": "$desktop_src_hash",
"desktopYarnHash": "$desktop_yarn_hash",
"webHash": "$web_hash"
}
EOF


@ -17,11 +17,11 @@
let unwrapped = stdenv.mkDerivation rec { let unwrapped = stdenv.mkDerivation rec {
pname = "pidgin"; pname = "pidgin";
majorVersion = "2"; majorVersion = "2";
version = "${majorVersion}.14.6"; version = "${majorVersion}.14.8";
src = fetchurl { src = fetchurl {
url = "mirror://sourceforge/pidgin/${pname}-${version}.tar.bz2"; url = "mirror://sourceforge/pidgin/${pname}-${version}.tar.bz2";
sha256 = "bb45f7c032f9efd6922a5dbf2840995775e5584771b23992d04f6eff7dff5336"; sha256 = "1jjc15pfyw3012q5ffv7q4r88wv07ndqh0wakyxa2k0w4708b01z";
}; };
nativeBuildInputs = [ makeWrapper ]; nativeBuildInputs = [ makeWrapper ];


@ -0,0 +1,92 @@
{ lib, stdenv, pkgs, fetchurl }:
let
libPathNative = { packages }: lib.makeLibraryPath packages;
in
stdenv.mkDerivation rec {
pname = "rocketchat-desktop";
version = "3.5.7";
src = fetchurl {
url = "https://github.com/RocketChat/Rocket.Chat.Electron/releases/download/${version}/rocketchat_${version}_amd64.deb";
sha256 = "1ri8a60fsbqgq83f8wkyfnd59nqk4d0gpz1vanj54769zflpl71s";
};
buildInputs = with pkgs; [
gtk3
stdenv.cc.cc
zlib
glib
dbus
atk
pango
freetype
libgnome-keyring3
fontconfig
gdk-pixbuf
cairo
cups
expat
libgpg-error
alsa-lib
nspr
nss
xorg.libXrender
xorg.libX11
xorg.libXext
xorg.libXdamage
xorg.libXtst
xorg.libXcomposite
xorg.libXi
xorg.libXfixes
xorg.libXrandr
xorg.libXcursor
xorg.libxkbfile
xorg.libXScrnSaver
systemd
libnotify
xorg.libxcb
at-spi2-atk
at-spi2-core
libdbusmenu
libdrm
mesa
xorg.libxshmfence
libxkbcommon
];
dontBuild = true;
dontConfigure = true;
unpackPhase = ''
ar p $src data.tar.xz | tar xJ ./opt/ ./usr/
'';
installPhase = ''
runHook preInstall
mkdir -p $out/bin
mv opt $out
mv usr/share $out
ln -s $out/opt/Rocket.Chat/rocketchat-desktop $out/bin/rocketchat-desktop
runHook postInstall
'';
postFixup =
let
libpath = libPathNative { packages = buildInputs; };
in
''
app=$out/opt/Rocket.Chat
patchelf --set-interpreter "$(cat $NIX_CC/nix-support/dynamic-linker)" \
--set-rpath "${libpath}:$app" \
$app/rocketchat-desktop
sed -i -e "s|Exec=.*$|Exec=$out/bin/rocketchat-desktop|" $out/share/applications/rocketchat-desktop.desktop
'';
meta = with lib; {
description = "Official Desktop client for Rocket.Chat";
homepage = "https://github.com/RocketChat/Rocket.Chat.Electron";
license = licenses.mit;
maintainers = with maintainers; [ gbtb ];
platforms = platforms.x86_64;
};
}


@ -23,7 +23,7 @@ let
--set LC_MESSAGES "${spellcheckerLanguage}"''); --set LC_MESSAGES "${spellcheckerLanguage}"'');
in stdenv.mkDerivation rec { in stdenv.mkDerivation rec {
pname = "signal-desktop"; pname = "signal-desktop";
version = "5.19.0"; # Please backport all updates to the stable channel. version = "5.20.0"; # Please backport all updates to the stable channel.
# All releases have a limited lifetime and "expire" 90 days after the release. # All releases have a limited lifetime and "expire" 90 days after the release.
# When releases "expire" the application becomes unusable until an update is # When releases "expire" the application becomes unusable until an update is
# applied. The expiration date for the current release can be extracted with: # applied. The expiration date for the current release can be extracted with:
@ -33,7 +33,7 @@ in stdenv.mkDerivation rec {
src = fetchurl { src = fetchurl {
url = "https://updates.signal.org/desktop/apt/pool/main/s/signal-desktop/signal-desktop_${version}_amd64.deb"; url = "https://updates.signal.org/desktop/apt/pool/main/s/signal-desktop/signal-desktop_${version}_amd64.deb";
sha256 = "0avns5axcfs8x9sv7hyjxi1cr7gag00avfj0h99wgn251b313g1a"; sha256 = "0a57gajxjqkp7zcmjc3iiys06b7v53nd81gkwrsfn2gmshihlzkd";
}; };
nativeBuildInputs = [ nativeBuildInputs = [


@ -33,6 +33,7 @@
, nspr , nspr
, nss , nss
, pango , pango
, pipewire
, systemd , systemd
, xdg-utils , xdg-utils
, xorg , xorg
@ -119,6 +120,7 @@ let
nspr nspr
nss nss
pango pango
pipewire
stdenv.cc.cc stdenv.cc.cc
systemd systemd
xorg.libX11 xorg.libX11


@ -15,15 +15,15 @@
rustPlatform.buildRustPackage rec { rustPlatform.buildRustPackage rec {
pname = "meli"; pname = "meli";
version = "alpha-0.7.1"; version = "alpha-0.7.2";
src = fetchgit { src = fetchgit {
url = "https://git.meli.delivery/meli/meli.git"; url = "https://git.meli.delivery/meli/meli.git";
rev = version; rev = version;
sha256 = "00iai2z5zydx9bw0ii0n6d7zwm5rrkj03b4ymic0ybwjahqzvyfq"; sha256 = "sha256-cbigEJhX6vL+gHa40cxplmPsDhsqujkzQxe0Dr6+SK0=";
}; };
cargoSha256 = "1r54a51j91iv0ziasjygzw30vqqvqibcnwnkih5xv0ijbaly61n0"; cargoSha256 = "sha256-ZE653OtXyZ9454bKPApmuL2kVko/hGBWEAya1L1KIoc=";
cargoBuildFlags = lib.optional withNotmuch "--features=notmuch"; cargoBuildFlags = lib.optional withNotmuch "--features=notmuch";


@ -9,11 +9,11 @@ let
in stdenv.mkDerivation rec { in stdenv.mkDerivation rec {
pname = "msmtp"; pname = "msmtp";
version = "1.8.16"; version = "1.8.17";
src = fetchurl { src = fetchurl {
url = "https://marlam.de/${pname}/releases/${pname}-${version}.tar.xz"; url = "https://marlam.de/${pname}/releases/${pname}-${version}.tar.xz";
sha256 = "1n271yr83grpki9szdirnk6wb5rcc319f0gmfabyw3fzyf4msjy0"; sha256 = "sha256-D92+dMGp3PZGG0obDbPk00JmGEUAxAPX8QetQttOxNM=";
}; };
patches = [ patches = [

Some files were not shown because too many files have changed in this diff.