Project import generated by Copybara.

GitOrigin-RevId: 8536aeb4154f5458994080bc4cf542695c144739
This commit is contained in:
Default email 2020-05-03 19:38:23 +02:00
parent 27f2c9edb7
commit 9a250f78df
806 changed files with 16966 additions and 11172 deletions

View file

@ -176,6 +176,7 @@
# PHP
/doc/languages-frameworks/php.section.md @etu
/nixos/tests/php @etu
/pkgs/build-support/build-pecl.nix @etu
/pkgs/development/interpreters/php @etu
/pkgs/top-level/php-packages.nix @etu
/pkgs/build-support/build-pecl.nix @etu

View file

@ -50,12 +50,13 @@ For package version upgrades and such a one-line commit message is usually suffi
## Backporting changes
To [backport a change into a release branch](https://nixos.org/nixpkgs/manual/#submitting-changes-stable-release-branches):
Follow these steps to backport a change into a release branch in compliance with the [commit policy](https://nixos.org/nixpkgs/manual/#submitting-changes-stable-release-branches).
1. Take note of the commit in which the change was introduced into `master`.
1. Take note of the commits in which the change was introduced into the `master` branch.
2. Check out the target _release branch_, e.g. `release-20.03`. Do not use a _channel branch_ like `nixos-20.03` or `nixpkgs-20.03`.
3. Use `git cherry-pick -x <original commit>`.
4. Open your backport PR. Make sure to select the release branch (e.g. `release-20.03`) as the target branch of the PR, and link to the PR in which the original change was made to `master`.
3. Create a branch for your change, e.g. `git checkout -b backport`.
4. When the reason to backport is not obvious from the original commit message, use `git cherry-pick -xe <original commit>` and add a reason. Otherwise use `git cherry-pick -x <original commit>`; that is fine for minor version updates that only include security and bug fixes, for commits that fix an otherwise broken package, and similar cases.
5. Push to GitHub and open a backport pull request. Make sure to select the release branch (e.g. `release-20.03`) as the target branch of the pull request, and link to the pull request in which the original change was committed to `master`. The pull request title should be the commit title with the release version as prefix, e.g. `[20.03]`. A sketch of the full command sequence is shown below.
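For instance, a rough sketch of the full sequence for the 20.03 release branch (remote and branch names are placeholders, and `<original commit>` is the hash noted in step 1):
```sh
# check out the release branch (not a channel branch)
git checkout release-20.03
# create a working branch for the backport
git checkout -b backport
# cherry-pick the original commit; add -e to append a reason when needed
git cherry-pick -x <original commit>
# push and open the pull request against release-20.03
git push --set-upstream origin backport
```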
## Reviewing contributions

View file

@ -1,4 +1,13 @@
<!-- Nixpkgs has a lot of new incoming Pull Requests, but not enough people to review this constant stream. Even if you aren't a committer, we would appreciate reviews of other PRs, especially simple ones like package updates. Just testing the relevant package/service and leaving a comment saying what you tested, how you tested it and whether it worked would be great. List of open PRs: <https://github.com/NixOS/nixpkgs/pulls>, for more about reviewing contributions: <https://hydra.nixos.org/job/nixpkgs/trunk/manual/latest/download/1/nixpkgs/manual.html#chap-reviewing-contributions>. Reviewing isn't mandatory, but it would help out a lot and reduce the average time-to-merge for all of us. Thanks a lot if you do! -->
<!--
To help with the large amounts of pull requests, we would appreciate your
reviews of other pull requests, especially simple package updates. Just leave a
comment describing what you have tested in the relevant package/service.
Reviewing helps to reduce the average time-to-merge for everyone.
Thanks a lot if you do!
List of open PRs: https://github.com/NixOS/nixpkgs/pulls
Reviewing guidelines: https://hydra.nixos.org/job/nixpkgs/trunk/manual/latest/download/1/nixpkgs/manual.html#chap-reviewing-contributions
-->
###### Motivation for this change

View file

@ -407,23 +407,47 @@ Additional information.
<section xml:id="submitting-changes-stable-release-branches">
<title>Stable release branches</title>
<itemizedlist>
<para>
For cherry-picking a commit to a stable release branch (<quote>backporting</quote>), use <literal>git cherry-pick -x &lt;original commit&gt;</literal> so that the original commit id is included in the commit.
</para>
<para>
Add a reason for the backport by using <literal>git cherry-pick -xe &lt;original commit&gt;</literal> instead when it is not obvious from the original commit message. It is not needed when it's a minor version update that only includes security and bug fixes and doesn't add new features, or when the commit fixes an otherwise broken package.
</para>
<para>
Here is an example of a cherry-picked commit message with good reason description:
</para>
<screen>
zfs: Keep trying root import until it works
Works around #11003.
(cherry picked from commit 98b213a11041af39b39473906b595290e2a4e2f9)
Reason: several people cannot boot with ZFS on NVMe
</screen>
<para>
Other examples of reasons are:
</para>
<itemizedlist spacing="compact">
<listitem>
<para>
If you're cherry-picking a commit to a stable release branch (“backporting”), always use <command>git cherry-pick -xe</command> and ensure the message contains a clear description about why this needs to be included in the stable branch.
Previously the build would fail due to, e.g., <literal>getaddrinfo</literal> not being defined
</para>
</listitem>
<listitem>
<para>
An example of a cherry-picked commit would look like this:
The previous download links were all broken
</para>
</listitem>
<listitem>
<para>
Crash when starting on some X11 systems
</para>
<screen>
nixos: Refactor the world.
The original commit message describing the reason why the world was torn apart.
(cherry picked from commit abcdef)
Reason: I just had a gut feeling that this would also be wanted by people from
the stone age.
</screen>
</listitem>
</itemizedlist>
</section>

View file

@ -21,6 +21,7 @@
<xi:include href="node.section.xml" />
<xi:include href="ocaml.xml" />
<xi:include href="perl.xml" />
<xi:include href="php.section.xml" />
<xi:include href="python.section.xml" />
<xi:include href="qt.xml" />
<xi:include href="r.section.xml" />

View file

@ -1,26 +1,30 @@
# PHP
# PHP {#sec-php}
## User Guide
## User Guide {#ssec-php-user-guide}
### Using PHP
#### Overview
### Overview {#ssec-php-user-guide-overview}
Several versions of PHP are available on Nix, each of which has a
wide variety of extensions and libraries available.
The attribute `php` refers to the version of PHP considered most
stable and thoroughly tested in nixpkgs for any given release of
NixOS. Note that while this version of PHP may not be the latest major
release from upstream, any version of PHP supported in nixpkgs may be
utilized by specifying the desired attribute by version, such as
`php74`.
The different versions of PHP that nixpkgs provides are located under
attributes named based on major and minor version number; e.g.,
`php74` is PHP 7.4.
Only versions of PHP that are supported by upstream for the entirety
of a given NixOS release will be included in that release of
NixOS. See [PHP Supported
Versions](https://www.php.net/supported-versions.php).
The attribute `php` refers to the version of PHP considered most
stable and thoroughly tested in nixpkgs for any given release of
NixOS - not necessarily the latest major release from upstream.
All available PHP attributes are wrappers around their respective
binary PHP package and provide commonly used extensions this way. The
real PHP 7.4 package, i.e. the unwrapped one, is available as
`php74.unwrapped`; see the next section for more details.
Interactive tools built on PHP are put in `php.packages`; composer is
for example available at `php.packages.composer`.
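For instance, a minimal sketch (assuming a NixOS configuration; the
module boilerplate around it is not part of this guide) that makes
composer available system-wide:
```nix
{ pkgs, ... }:

{
  # composer built against the default `php` package and its default extensions
  environment.systemPackages = [ pkgs.php.packages.composer ];
}
```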
@ -30,39 +34,44 @@ opcache extension shipped with PHP is available at
`php.extensions.opcache` and the third-party ImageMagick extension at
`php.extensions.imagick`.
The different versions of PHP that nixpkgs provides are located under
attributes named based on major and minor version number; e.g.,
`php74` is PHP 7.4 with commonly used extensions installed,
`php74base` is the same PHP runtime without extensions.
#### Installing PHP with packages
### Installing PHP with extensions {#ssec-php-user-guide-installing-with-extensions}
A PHP package with specific extensions enabled can be built using
`php.withExtensions`. This is a function which accepts an anonymous
function as its only argument; the function should take one argument,
the set of all extensions, and return a list of wanted extensions. For
example, a PHP package with the opcache and ImageMagick extensions
enabled:
function as its only argument; the function should accept two named
parameters: `enabled` - a list of currently enabled extensions and
`all` - the set of all extensions, and return a list of wanted
extensions. For example, a PHP package with all default extensions and
ImageMagick enabled:
```nix
php.withExtensions (e: with e; [ imagick opcache ])
php.withExtensions ({ enabled, all }:
enabled ++ [ all.imagick ])
```
Note that this will give you a package with _only_ opcache and
ImageMagick; none of the other extensions which are enabled by default
in the `php` package will be available.
To enable building on a previous PHP package, the currently enabled
extensions are made available in its `enabledExtensions`
attribute. For example, to generate a package with all default
extensions enabled, except opcache, but with ImageMagick:
To exclude some, but not all, of the default extensions, you can
filter the `enabled` list like this:
```nix
php.withExtensions (e:
(lib.filter (e: e != php.extensions.opcache) php.enabledExtensions)
++ [ e.imagick ])
php.withExtensions ({ enabled, all }:
(lib.filter (e: e != php.extensions.opcache) enabled)
++ [ all.imagick ])
```
To build your list of extensions from the ground up, you can simply
ignore `enabled`:
```nix
php.withExtensions ({ all, ... }: with all; [ imagick opcache ])
```
`php.withExtensions` provides extensions by wrapping a minimal php
base package, providing a `php.ini` file listing all extensions to be
loaded. You can access this package through the `php.unwrapped`
attribute; useful if you, for example, need access to the `dev`
output. The generated `php.ini` file can be accessed through the
`php.phpIni` attribute.
If you want a PHP build with extra configuration in the `php.ini`
file, you can use `php.buildEnv`. This function takes two named and
optional parameters: `extensions` and `extraConfig`. `extensions`
@ -73,19 +82,19 @@ and ImageMagick extensions enabled, and `memory_limit` set to `256M`:
```nix
php.buildEnv {
extensions = e: with e; [ imagick opcache ];
extensions = { all, ... }: with all; [ imagick opcache ];
extraConfig = "memory_limit=256M";
}
```
##### Example setup for `phpfpm`
#### Example setup for `phpfpm` {#ssec-php-user-guide-installing-with-extensions-phpfpm}
You can use the previous examples in a `phpfpm` pool called `foo` as
follows:
```nix
let
myPhp = php.withExtensions (e: with e; [ imagick opcache ]);
myPhp = php.withExtensions ({ all, ... }: with all; [ imagick opcache ]);
in {
services.phpfpm.pools."foo".phpPackage = myPhp;
};
@ -94,7 +103,7 @@ in {
```nix
let
myPhp = php.buildEnv {
extensions = e: with e; [ imagick opcache ];
extensions = { all, ... }: with all; [ imagick opcache ];
extraConfig = "memory_limit=256M";
};
in {
@ -102,11 +111,27 @@ in {
};
```
##### Example usage with `nix-shell`
#### Example usage with `nix-shell` {#ssec-php-user-guide-installing-with-extensions-nix-shell}
This brings up a temporary environment that contains a PHP interpreter
with the extensions `imagick` and `opcache` enabled.
with the extensions `imagick` and `opcache` enabled:
```sh
nix-shell -p 'php.buildEnv { extensions = e: with e; [ imagick opcache ]; }'
nix-shell -p 'php.withExtensions ({ all, ... }: with all; [ imagick opcache ])'
```
### Installing PHP packages with extensions {#ssec-php-user-guide-installing-packages-with-extensions}
All interactive tools use the PHP package you get them from, so all
packages at `php.packages.*` use the `php` package with its default
extensions. Sometimes this default set of extensions isn't enough and
you may want to extend it. A common case of this is the `composer`
package: a project may depend on certain extensions and `composer`
won't work with that project unless those extensions are loaded.
Example of building `composer` with additional extensions:
```nix
(php.withExtensions ({ all, enabled }:
enabled ++ (with all; [ imagick redis ]))
).packages.composer
```

View file

@ -1,10 +1,11 @@
# to run these tests:
# nix-build nixpkgs/lib/tests/maintainers.nix
# If nothing is output, all tests passed
{ pkgs ? import ../.. {} }:
# to run these tests (and the others)
# nix-build nixpkgs/lib/tests/release.nix
{ # The pkgs used for dependencies for the testing itself
pkgs
, lib
}:
let
inherit (pkgs) lib;
inherit (lib) types;
maintainerModule = { config, ... }: {

View file

@ -1,8 +1,17 @@
{ pkgs ? import ../.. {} }:
{ # The pkgs used for dependencies for the testing itself
# Don't test properties of pkgs.lib, but rather the lib in the parent directory
pkgs ? import ../.. {} // { lib = throw "pkgs.lib accessed, but the lib tests should use nixpkgs' lib path directly!"; }
}:
pkgs.runCommandNoCC "nixpkgs-lib-tests" {
buildInputs = [ pkgs.nix (import ./check-eval.nix) (import ./maintainers.nix { inherit pkgs; }) ];
NIX_PATH = "nixpkgs=${toString pkgs.path}";
buildInputs = [
pkgs.nix
(import ./check-eval.nix)
(import ./maintainers.nix {
inherit pkgs;
lib = import ../.;
})
];
} ''
datadir="${pkgs.nix}/share"
export TEST_ROOT=$(pwd)/test-tmp

View file

@ -758,7 +758,7 @@
name = "Jonathan Glines";
};
avaq = {
email = "avaq+nixos@xs4all.nl";
email = "nixpkgs@account.avaq.it";
github = "avaq";
githubId = 1217745;
name = "Aldwin Vlasblom";
@ -1406,6 +1406,16 @@
githubId = 1103294;
name = "Christopher Rosset";
};
christianharke = {
email = "christian@harke.ch";
github = "christianharke";
githubId = 13007345;
name = "Christian Harke";
keys = [{
longkeyid = "rsa4096/0x830A9728630966F4";
fingerprint = "4EBB 30F1 E89A 541A A7F2 52BE 830A 9728 6309 66F4";
}];
};
christopherpoole = {
email = "mail@christopherpoole.net";
github = "christopherpoole";
@ -1470,6 +1480,12 @@
githubId = 848609;
name = "Michael Bishop";
};
cmacrae = {
email = "hi@cmacr.ae";
github = "cmacrae";
githubId = 3392199;
name = "Calum MacRae";
};
cmcdragonkai = {
email = "roger.qiu@matrix.ai";
github = "cmcdragonkai";
@ -4068,6 +4084,12 @@
githubId = 6346418;
name = "Kolby Crouch";
};
kolloch = {
email = "info@eigenvalue.net";
github = "kolloch";
githubId = 339354;
name = "Peter Kolloch";
};
konimex = {
email = "herdiansyah@netc.eu";
github = "konimex";
@ -5608,6 +5630,12 @@
githubId = 369111;
name = "Morgan Jones";
};
numkem = {
name = "Sebastien Bariteau";
email = "numkem@numkem.org";
github = "numkem";
githubId = 332423;
};
nyanloutre = {
email = "paul@nyanlout.re";
github = "nyanloutre";
@ -8247,6 +8275,12 @@
githubId = 483465;
name = "Mateusz Wykurz";
};
wulfsta = {
email = "wulfstawulfsta@gmail.com";
github = "Wulfsta";
githubId = 13378502;
name = "Wulfsta";
};
wyvie = {
email = "elijahrum@gmail.com";
github = "wyvie";

View file

@ -17,6 +17,18 @@
{ lib }:
with lib.maintainers; {
acme = {
members = [
aanderse
andrew-d
arianvp
emily
flokli
m1cr0man
];
scope = "Maintain ACME-related packages and modules.";
};
freedesktop = {
members = [ jtojnar worldofpeace ];
scope = "Maintain Freedesktop.org packages for graphical desktop.";
@ -31,6 +43,17 @@ with lib.maintainers; {
scope = "Maintain GNOME desktop environment and platform.";
};
php = {
members = [
aanderse
etu
globin
ma27
talyz
];
scope = "Maintain PHP related packages and extensions.";
};
podman = {
members = [
adisbladis

View file

@ -31,6 +31,7 @@
<xref linkend="opt-services.xserver.windowManager.twm.enable"/> = true;
<xref linkend="opt-services.xserver.windowManager.icewm.enable"/> = true;
<xref linkend="opt-services.xserver.windowManager.i3.enable"/> = true;
<xref linkend="opt-services.xserver.windowManager.herbstluftwm.enable"/> = true;
</programlisting>
</para>
<para>

View file

@ -41,6 +41,11 @@
neo</command>!)
</para>
<para>
If the text is too small to be legible, try <command>setfont ter-132n</command>
to increase the font size.
</para>
<section xml:id="sec-installation-booting-networking">
<title>Networking in the installer</title>

View file

@ -26,6 +26,11 @@
<listitem>
<para>GNOME desktop environment was upgraded to 3.36, see its <link xlink:href="https://help.gnome.org/misc/release-notes/3.36/">release notes</link>.</para>
</listitem>
<listitem>
<para>
We now distribute a GNOME ISO.
</para>
</listitem>
<listitem>
<para>
PHP now defaults to PHP 7.4, updated from 7.3.
@ -140,69 +145,69 @@
</listitem>
<listitem>
<para>
Since this release there's an easy way to customize your PHP install to get a much smaller
base PHP with only wanted extensions enabled. See the following snippet installing a smaller PHP
with the extensions <literal>imagick</literal>, <literal>opcache</literal> and
Since this release there's an easy way to customize your PHP
install to get a much smaller base PHP with only wanted
extensions enabled. See the following snippet installing a
smaller PHP with the extensions <literal>imagick</literal>,
<literal>opcache</literal>, <literal>pdo</literal> and
<literal>pdo_mysql</literal> loaded:
<programlisting>
environment.systemPackages = [
(pkgs.php.buildEnv { extensions = pp: with pp; [
imagick
opcache
pdo_mysql
]; })
(pkgs.php.withExtensions
({ all, ... }: with all; [
imagick
opcache
pdo
pdo_mysql
])
)
];</programlisting>
The default <literal>php</literal> attribute hasn't lost any extensions -
the <literal>opcache</literal> extension was added there.
The default <literal>php</literal> attribute hasn't lost any
extensions. The <literal>opcache</literal> extension has been
added.
All upstream PHP extensions are available under <package><![CDATA[php.extensions.<name?>]]></package>.
</para>
<para>
The updated <literal>php</literal> attribute is now easily customizable to your liking
by using extensions instead of writing config files or changing configure flags.
Therefore we have removed the following configure flags:
All PHP <literal>config</literal> flags have been removed for
the following reasons:
<itemizedlist>
<title>PHP <literal>config</literal> flags that we don't read anymore:</title>
<listitem><para><literal>config.php.argon2</literal></para></listitem>
<listitem><para><literal>config.php.bcmath</literal></para></listitem>
<listitem><para><literal>config.php.bz2</literal></para></listitem>
<listitem><para><literal>config.php.calendar</literal></para></listitem>
<listitem><para><literal>config.php.curl</literal></para></listitem>
<listitem><para><literal>config.php.exif</literal></para></listitem>
<listitem><para><literal>config.php.ftp</literal></para></listitem>
<listitem><para><literal>config.php.gd</literal></para></listitem>
<listitem><para><literal>config.php.gettext</literal></para></listitem>
<listitem><para><literal>config.php.gmp</literal></para></listitem>
<listitem><para><literal>config.php.imap</literal></para></listitem>
<listitem><para><literal>config.php.intl</literal></para></listitem>
<listitem><para><literal>config.php.ldap</literal></para></listitem>
<listitem><para><literal>config.php.libxml2</literal></para></listitem>
<listitem><para><literal>config.php.libzip</literal></para></listitem>
<listitem><para><literal>config.php.mbstring</literal></para></listitem>
<listitem><para><literal>config.php.mysqli</literal></para></listitem>
<listitem><para><literal>config.php.mysqlnd</literal></para></listitem>
<listitem><para><literal>config.php.openssl</literal></para></listitem>
<listitem><para><literal>config.php.pcntl</literal></para></listitem>
<listitem><para><literal>config.php.pdo_mysql</literal></para></listitem>
<listitem><para><literal>config.php.pdo_odbc</literal></para></listitem>
<listitem><para><literal>config.php.pdo_pgsql</literal></para></listitem>
<listitem><para><literal>config.php.phpdbg</literal></para></listitem>
<listitem><para><literal>config.php.postgresql</literal></para></listitem>
<listitem><para><literal>config.php.readline</literal></para></listitem>
<listitem><para><literal>config.php.soap</literal></para></listitem>
<listitem><para><literal>config.php.sockets</literal></para></listitem>
<listitem><para><literal>config.php.sodium</literal></para></listitem>
<listitem><para><literal>config.php.sqlite</literal></para></listitem>
<listitem><para><literal>config.php.tidy</literal></para></listitem>
<listitem><para><literal>config.php.xmlrpc</literal></para></listitem>
<listitem><para><literal>config.php.xsl</literal></para></listitem>
<listitem><para><literal>config.php.zip</literal></para></listitem>
<listitem><para><literal>config.php.zlib</literal></para></listitem>
<listitem>
<para>
The updated <literal>php</literal> attribute is now easily
customizable to your liking by using
<literal>php.withExtensions</literal> or
<literal>php.buildEnv</literal> instead of writing config files
or changing configure flags.
</para>
</listitem>
<listitem>
<para>
The remaining configuration flags can now be set directly on
the <literal>php</literal> attribute. For example, instead of
<programlisting>
php.override {
config.php.embed = true;
config.php.apxs2 = false;
}
</programlisting>
you should now write
<programlisting>
php.override {
embedSupport = true;
apxs2Support = false;
}
</programlisting>
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
@ -266,6 +271,46 @@ environment.systemPackages = [
</programlisting>
</para>
</listitem>
<listitem>
<para>
The httpd web server previously started its main process with root
privileges, then ran worker processes as a less privileged user.
This was changed to start all of httpd as a less privileged user (defined by
<xref linkend="opt-services.httpd.user"/> and
<xref linkend="opt-services.httpd.group"/>). As a consequence, all files that
are needed for httpd to run (including configuration fragments, SSL
certificates and keys, etc.) must now be readable by this less privileged
user/group.
</para>
<para>
The default value for <xref linkend="opt-services.httpd.mpm"/>
has been changed from <literal>prefork</literal> to <literal>event</literal>. Along with
this change the default value for
<link linkend="opt-services.httpd.virtualHosts">services.httpd.virtualHosts.&lt;name&gt;.http2</link>
has been set to <literal>true</literal>.
</para>
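<para>
A minimal sketch (using a hypothetical virtual host name) of how the previous defaults can be restored explicitly, should your setup depend on them:
</para>
<programlisting>
services.httpd.mpm = "prefork";
services.httpd.virtualHosts."example.org".http2 = false;
</programlisting>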
</listitem>
<listitem>
<para>
The <literal>systemd-networkd</literal> option
<literal>systemd.network.networks.&lt;name&gt;.dhcp.CriticalConnection</literal>
has been removed following upstream systemd's deprecation of the same. It is recommended to use
<literal>systemd.network.networks.&lt;name&gt;.networkConfig.KeepConfiguration</literal> instead.
See <citerefentry><refentrytitle>systemd.network</refentrytitle>
<manvolnum>5</manvolnum></citerefentry> for details.
</para>
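<para>
As a rough illustration (the network name <literal>10-wan</literal> is hypothetical), a former <literal>dhcp.CriticalConnection = true;</literal> could be approximated with:
</para>
<programlisting>
systemd.network.networks."10-wan".networkConfig.KeepConfiguration = "yes";
</programlisting>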
</listitem>
<listitem>
<para>
The <literal>systemd-networkd</literal> option
<literal>systemd.network.networks._name_.dhcpConfig</literal>
has been renamed to
<xref linkend="opt-systemd.network.networks._name_.dhcpV4Config"/>
following upstream systemd's documentation change.
See <citerefentry><refentrytitle>systemd.network</refentrytitle>
<manvolnum>5</manvolnum></citerefentry> for details.
</para>
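<para>
As an illustration (with a hypothetical network name), a setting such as
</para>
<programlisting>
systemd.network.networks."10-wan".dhcpConfig.UseDNS = true;
</programlisting>
<para>
is now written as
</para>
<programlisting>
systemd.network.networks."10-wan".dhcpV4Config.UseDNS = true;
</programlisting>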
</listitem>
</itemizedlist>
</section>

View file

@ -85,8 +85,6 @@ CHAR_TO_KEY = {
}
# Forward references
nr_tests: int
failed_tests: list
log: "Logger"
machines: "List[Machine]"
@ -882,33 +880,16 @@ def run_tests() -> None:
if machine.is_up():
machine.execute("sync")
if nr_tests != 0:
nr_succeeded = nr_tests - len(failed_tests)
eprint("{} out of {} tests succeeded".format(nr_succeeded, nr_tests))
if len(failed_tests) > 0:
eprint(
"The following tests have failed:\n - {}".format(
"\n - ".join(failed_tests)
)
)
sys.exit(1)
@contextmanager
def subtest(name: str) -> Iterator[None]:
global nr_tests
global failed_tests
with log.nested(name):
nr_tests += 1
try:
yield
return True
except Exception as e:
failed_tests.append(
'Test "{}" failed with error: "{}"'.format(name, str(e))
)
log.log("error: {}".format(str(e)))
log.log(f'Test "{name}" failed with error: "{e}"')
raise e
return False
@ -928,9 +909,6 @@ if __name__ == "__main__":
]
exec("\n".join(machine_eval))
nr_tests = 0
failed_tests = []
@atexit.register
def clean_up() -> None:
with log.nested("cleaning up"):

View file

@ -1,5 +1,5 @@
let
pkgs = (import <nixpkgs> {});
pkgs = (import ../../../../../../default.nix {});
machine = import "${pkgs.path}/nixos/lib/eval-config.nix" {
system = "x86_64-linux";
modules = [

View file

@ -25,6 +25,7 @@ in
fonts = {
enableFontDir = mkOption {
type = types.bool;
default = false;
description = ''
Whether to create a directory with links to all fonts in

View file

@ -9,6 +9,7 @@ with lib;
fonts = {
enableGhostscriptFonts = mkOption {
type = types.bool;
default = false;
description = ''
Whether to add the fonts provided by Ghostscript (such as

View file

@ -88,6 +88,7 @@ in
};
useTLS = mkOption {
type = types.bool;
default = false;
description = ''
If enabled, use TLS (encryption) over an LDAP (port 389)
@ -109,6 +110,7 @@ in
daemon = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to let the nslcd daemon (nss-pam-ldapd) handle the

View file

@ -10,35 +10,34 @@ let
canLoadExternalModules = config.services.nscd.enable;
myhostname = canLoadExternalModules;
mymachines = canLoadExternalModules;
# XXX Move these to their respective modules
nssmdns = canLoadExternalModules && config.services.avahi.nssmdns;
nsswins = canLoadExternalModules && config.services.samba.nsswins;
ldap = canLoadExternalModules && (config.users.ldap.enable && config.users.ldap.nsswitch);
sssd = canLoadExternalModules && config.services.sssd.enable;
resolved = canLoadExternalModules && config.services.resolved.enable;
googleOsLogin = canLoadExternalModules && config.security.googleOsLogin.enable;
hostArray = [ "files" ]
++ optional mymachines "mymachines"
++ optional nssmdns "mdns_minimal [NOTFOUND=return]"
++ optional nsswins "wins"
++ optional resolved "resolve [!UNAVAIL=return]"
++ [ "dns" ]
++ optional nssmdns "mdns"
++ optional myhostname "myhostname";
hostArray = mkMerge [
(mkBefore [ "files" ])
(mkIf mymachines [ "mymachines" ])
(mkIf nssmdns [ "mdns_minimal [NOTFOUND=return]" ])
(mkIf nsswins [ "wins" ])
(mkIf resolved [ "resolve [!UNAVAIL=return]" ])
(mkAfter [ "dns" ])
(mkIf nssmdns (mkOrder 1501 [ "mdns" ])) # 1501 to ensure it's after dns
(mkIf myhostname (mkOrder 1600 [ "myhostname" ])) # 1600 to ensure it's always the last
];
passwdArray = [ "files" ]
++ optional sssd "sss"
++ optional ldap "ldap"
++ optional mymachines "mymachines"
++ optional googleOsLogin "cache_oslogin oslogin"
++ [ "systemd" ];
passwdArray = mkMerge [
(mkBefore [ "files" ])
(mkIf ldap [ "ldap" ])
(mkIf mymachines [ "mymachines" ])
(mkIf canLoadExternalModules (mkAfter [ "systemd" ]))
];
shadowArray = [ "files" ]
++ optional sssd "sss"
++ optional ldap "ldap";
servicesArray = [ "files" ]
++ optional sssd "sss";
shadowArray = mkMerge [
(mkBefore [ "files" ])
(mkIf ldap [ "ldap" ])
];
in {
options = {
@ -61,17 +60,73 @@ in {
};
};
system.nssHosts = mkOption {
type = types.listOf types.str;
default = [];
example = [ "mdns" ];
description = ''
List of host entries to configure in <filename>/etc/nsswitch.conf</filename>.
'';
};
system.nssDatabases = {
passwd = mkOption {
type = types.listOf types.str;
description = ''
List of passwd entries to configure in <filename>/etc/nsswitch.conf</filename>.
Note that "files" is always prepended while "systemd" is appended if nscd is enabled.
This option only takes effect if nscd is enabled.
'';
default = [];
};
group = mkOption {
type = types.listOf types.str;
description = ''
List of group entries to configure in <filename>/etc/nsswitch.conf</filename>.
Note that "files" is always prepended while "systemd" is appended if nscd is enabled.
This option only takes effect if nscd is enabled.
'';
default = [];
};
shadow = mkOption {
type = types.listOf types.str;
description = ''
List of shadow entries to configure in <filename>/etc/nsswitch.conf</filename>.
Note that "files" is always prepended.
This option only takes effect if nscd is enabled.
'';
default = [];
};
hosts = mkOption {
type = types.listOf types.str;
description = ''
List of hosts entries to configure in <filename>/etc/nsswitch.conf</filename>.
Note that "files" is always prepended, and "dns" and "myhostname" are always appended.
This option only takes effect if nscd is enabled.
'';
default = [];
};
services = mkOption {
type = types.listOf types.str;
description = ''
List of services entries to configure in <filename>/etc/nsswitch.conf</filename>.
Note that "files" is always prepended.
This option only takes effect if nscd is enabled.
'';
default = [];
};
};
};
imports = [
(mkRenamedOptionModule [ "system" "nssHosts" ] [ "system" "nssDatabases" "hosts" ])
];
config = {
assertions = [
{
@ -87,30 +142,34 @@ in {
];
# Name Service Switch configuration file. Required by the C
# library. !!! Factor out the mdns stuff. The avahi module
# should define an option used by this module.
# library.
environment.etc."nsswitch.conf".text = ''
passwd: ${concatStringsSep " " passwdArray}
group: ${concatStringsSep " " passwdArray}
shadow: ${concatStringsSep " " shadowArray}
passwd: ${concatStringsSep " " config.system.nssDatabases.passwd}
group: ${concatStringsSep " " config.system.nssDatabases.group}
shadow: ${concatStringsSep " " config.system.nssDatabases.shadow}
hosts: ${concatStringsSep " " config.system.nssHosts}
hosts: ${concatStringsSep " " config.system.nssDatabases.hosts}
networks: files
ethers: files
services: ${concatStringsSep " " servicesArray}
services: ${concatStringsSep " " config.system.nssDatabases.services}
protocols: files
rpc: files
'';
system.nssHosts = hostArray;
system.nssDatabases = {
passwd = passwdArray;
group = passwdArray;
shadow = shadowArray;
hosts = hostArray;
services = mkBefore [ "files" ];
};
# Systemd provides nss-myhostname to ensure that our hostname
# always resolves to a valid IP address. It returns all locally
# configured IP addresses, or ::1 and 127.0.0.2 as
# fallbacks. Systemd also provides nss-mymachines to return IP
# addresses of local containers.
system.nssModules = (optionals canLoadExternalModules [ config.systemd.package.out ])
++ optional googleOsLogin pkgs.google-compute-engine-oslogin.out;
system.nssModules = (optionals canLoadExternalModules [ config.systemd.package.out ]);
};
}

View file

@ -1,7 +1,7 @@
# This module contains the basic configuration for building a NixOS
# installation CD.
{ config, lib, pkgs, ... }:
{ config, lib, options, pkgs, ... }:
with lib;
@ -15,6 +15,9 @@ with lib;
../../profiles/installation-device.nix
];
# Adds terminus_font for people with HiDPI displays
console.packages = options.console.packages.default ++ [ pkgs.terminus_font ];
# ISO naming.
isoImage.isoName = "${config.isoImage.isoBaseName}-${config.system.nixos.label}-${pkgs.stdenv.hostPlatform.system}.iso";

View file

@ -196,7 +196,6 @@
./security/pam_usb.nix
./security/pam_mount.nix
./security/polkit.nix
./security/prey.nix
./security/rngd.nix
./security/rtkit.nix
./security/wrappers/default.nix

View file

@ -2,6 +2,8 @@
with lib;
let inherit (pkgs) writeScript; in
let
pkgs2storeContents = l : map (x: { object = x; symlink = "none"; }) l;
@ -30,7 +32,12 @@ in {
];
# Some container managers like lxc need these
extraCommands = "mkdir -p proc sys dev";
extraCommands =
let script = writeScript "extra-commands.sh" ''
rm etc
mkdir -p proc sys dev etc
'';
in script;
};
boot.isContainer = true;

View file

@ -9,9 +9,7 @@ let
HomepageLocation = cfg.homepageLocation;
DefaultSearchProviderSearchURL = cfg.defaultSearchProviderSearchURL;
DefaultSearchProviderSuggestURL = cfg.defaultSearchProviderSuggestURL;
ExtensionInstallForcelist = map (extension:
"${extension};https://clients2.google.com/service/update2/crx"
) cfg.extensions;
ExtensionInstallForcelist = cfg.extensions;
};
in
@ -28,7 +26,11 @@ in
List of chromium extensions to install.
For a list of plugin ids, see the id in the URL of the extension on
<link xlink:href="https://chrome.google.com/webstore/category/extensions">chrome web store</link>
page.
page. To install a chromium extension not included in the chrome web
store, append to the extension id a semicolon ";" followed by a URL
pointing to an Update Manifest XML file. See
<link xlink:href="https://www.chromium.org/administrators/policy-list-3#ExtensionInstallForcelist">ExtensionInstallForcelist</link>
for additional details.
'';
default = [];
example = literalExample ''

View file

@ -178,6 +178,10 @@ in
set -l post (string join0 $fish_complete_path | string match --regex "[^\x00]*generated_completions.*" | string split0 | string match -er ".")
set fish_complete_path $prev "/etc/fish/generated_completions" $post
end
# prevent fish from generating completions on first run
if not test -d $__fish_user_data_dir/generated_completions
${pkgs.coreutils}/bin/mkdir $__fish_user_data_dir/generated_completions
end
'';
environment.etc."fish/generated_completions".source =

View file

@ -45,7 +45,32 @@ in
config = mkIf cfg.enable {
environment.etc.xonshrc.text = cfg.config;
environment.etc.xonshrc.text = ''
# /etc/xonshrc: DO NOT EDIT -- this file has been generated automatically.
if not ''${...}.get('__NIXOS_SET_ENVIRONMENT_DONE'):
# The NixOS environment and thereby also $PATH
# haven't been fully set up at this point. But
# `source-bash` below requires `bash` to be on $PATH,
# so add an entry with bash's location:
$PATH.add('${pkgs.bash}/bin')
# Stash xonsh's ls alias, so that we don't get a collision
# with Bash's ls alias from environment.shellAliases:
_ls_alias = aliases.pop('ls', None)
# Source the NixOS environment config.
source-bash "${config.system.build.setEnvironment}"
# Restore xonsh's ls alias, overriding that from Bash (if any).
if _ls_alias is not None:
aliases['ls'] = _ls_alias
del _ls_alias
${cfg.config}
'';
environment.systemPackages = [ cfg.package ];

View file

@ -49,6 +49,10 @@ with lib;
simply add the brightnessctl package to environment.systemPackages.
'')
(mkRemovedOptionModule ["services" "prey" ] ''
prey-bash-client is deprecated upstream
'')
# Do NOT add any option renames here, see top of the file
];
}

View file

@ -99,7 +99,7 @@ let
keyType = mkOption {
type = types.str;
default = "ec384";
default = "ec256";
description = ''
Key type to use for private keys.
For an up to date list of supported values check the --key-type option
@ -458,7 +458,7 @@ in
];
meta = {
maintainers = with lib.maintainers; [ abbradar fpletz globin m1cr0man ];
maintainers = lib.teams.acme.members;
doc = ./acme.xml;
};
}

View file

@ -9,6 +9,7 @@ with lib;
];
options.security.apparmor.confineSUIDApplications = mkOption {
type = types.bool;
default = true;
description = ''
Install AppArmor profiles for commonly-used SUID application

View file

@ -49,6 +49,7 @@ in
# enable the nss module, so user lookups etc. work
system.nssModules = [ package ];
system.nssDatabases.passwd = [ "cache_oslogin" "oslogin" ];
# Ugly: sshd refuses to start if a store path is given because /nix/store is group-writable.
# So indirect by a symlink.

View file

@ -219,6 +219,14 @@ let
'';
};
nodelay = mkOption {
default = false;
type = types.bool;
description = ''
Whether the delay after typing a wrong password should be disabled.
'';
};
requireWheel = mkOption {
default = false;
type = types.bool;
@ -366,7 +374,7 @@ let
|| cfg.enableGnomeKeyring
|| cfg.googleAuthenticator.enable
|| cfg.duoSecurity.enable)) ''
auth required pam_unix.so ${optionalString cfg.allowNullPassword "nullok"} likeauth
auth required pam_unix.so ${optionalString cfg.allowNullPassword "nullok"} ${optionalString cfg.nodelay "nodelay"} likeauth
${optionalString config.security.pam.enableEcryptfs
"auth optional ${pkgs.ecryptfs}/lib/security/pam_ecryptfs.so unwrap"}
${optionalString cfg.pamMount
@ -382,7 +390,7 @@ let
"auth required ${pkgs.duo-unix}/lib/security/pam_duo.so"}
'') + ''
${optionalString cfg.unixAuth
"auth sufficient pam_unix.so ${optionalString cfg.allowNullPassword "nullok"} likeauth try_first_pass"}
"auth sufficient pam_unix.so ${optionalString cfg.allowNullPassword "nullok"} ${optionalString cfg.nodelay "nodelay"} likeauth try_first_pass"}
${optionalString cfg.otpwAuth
"auth sufficient ${pkgs.otpw}/lib/security/pam_otpw.so"}
${optionalString use_ldap
@ -545,6 +553,7 @@ in
};
security.pam.enableSSHAgentAuth = mkOption {
type = types.bool;
default = false;
description =
''
@ -555,12 +564,7 @@ in
'';
};
security.pam.enableOTPW = mkOption {
default = false;
description = ''
Enable the OTPW (one-time password) PAM module.
'';
};
security.pam.enableOTPW = mkEnableOption "the OTPW (one-time password) PAM module";
security.pam.u2f = {
enable = mkOption {
@ -719,12 +723,7 @@ in
};
};
security.pam.enableEcryptfs = mkOption {
default = false;
description = ''
Enable eCryptfs PAM module (mounting ecryptfs home directory on login).
'';
};
security.pam.enableEcryptfs = mkEnableOption "eCryptfs PAM module (mounting ecryptfs home directory on login)";
users.motd = mkOption {
default = null;

View file

@ -1,51 +0,0 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.prey;
myPrey = pkgs.prey-bash-client.override {
apiKey = cfg.apiKey;
deviceKey = cfg.deviceKey;
};
in {
options = {
services.prey = {
enable = mkOption {
default = false;
type = types.bool;
description = ''
Enables the <link xlink:href="http://preyproject.com/" />
shell client. Be sure to specify both API and device keys.
Once enabled, a <command>cron</command> job will run every 15
minutes to report status information.
'';
};
deviceKey = mkOption {
type = types.str;
description = ''
<literal>Device key</literal> obtained by visiting
<link xlink:href="https://panel.preyproject.com/devices" />
and clicking on your device.
'';
};
apiKey = mkOption {
type = types.str;
description = ''
<literal>API key</literal> obtained from
<link xlink:href="https://panel.preyproject.com/profile" />.
'';
};
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ myPrey ];
services.cron.systemCronJobs = [ "*/15 * * * * root ${myPrey}/prey.sh" ];
};
}

View file

@ -81,8 +81,8 @@ in
after = mkIf cfg.docker [ "docker.service" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
# Taken from https://github.com/rancher/k3s/blob/v1.17.4+k3s1/contrib/ansible/roles/k3s/node/templates/k3s.service.j2
Type = "notify";
# See: https://github.com/rancher/k3s/blob/dddbd16305284ae4bd14c0aade892412310d7edc/install.sh#L197
Type = if cfg.role == "agent" then "exec" else "notify";
KillMode = "process";
Delegate = "yes";
Restart = "always";

View file

@ -1,160 +1,494 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.gitlab-runner;
configFile =
if (cfg.configFile == null) then
(pkgs.runCommand "config.toml" {
buildInputs = [ pkgs.remarshal ];
preferLocalBuild = true;
} ''
remarshal -if json -of toml \
< ${pkgs.writeText "config.json" (builtins.toJSON cfg.configOptions)} \
> $out
'')
else
cfg.configFile;
hasDocker = config.virtualisation.docker.enable;
hashedServices = with builtins; (mapAttrs' (name: service: nameValuePair
"${name}_${config.networking.hostName}_${
substring 0 12
(hashString "md5" (unsafeDiscardStringContext (toJSON service)))}"
service)
cfg.services);
configPath = "$HOME/.gitlab-runner/config.toml";
configureScript = pkgs.writeShellScriptBin "gitlab-runner-configure" (
if (cfg.configFile != null) then ''
mkdir -p $(dirname ${configPath})
cp ${cfg.configFile} ${configPath}
# make config file readable by service
chown -R --reference=$HOME $(dirname ${configPath})
'' else ''
export CONFIG_FILE=${configPath}
mkdir -p $(dirname ${configPath})
# remove no longer existing services
gitlab-runner verify --delete
# current and desired state
NEEDED_SERVICES=$(echo ${concatStringsSep " " (attrNames hashedServices)} | tr " " "\n")
REGISTERED_SERVICES=$(gitlab-runner list 2>&1 | grep 'Executor' | awk '{ print $1 }')
# difference between current and desired state
NEW_SERVICES=$(grep -vxF -f <(echo "$REGISTERED_SERVICES") <(echo "$NEEDED_SERVICES") || true)
OLD_SERVICES=$(grep -vxF -f <(echo "$NEEDED_SERVICES") <(echo "$REGISTERED_SERVICES") || true)
# register new services
${concatStringsSep "\n" (mapAttrsToList (name: service: ''
if echo "$NEW_SERVICES" | grep -xq ${name}; then
bash -c ${escapeShellArg (concatStringsSep " \\\n " ([
"set -a && source ${service.registrationConfigFile} &&"
"gitlab-runner register"
"--non-interactive"
"--name ${name}"
"--executor ${service.executor}"
"--limit ${toString service.limit}"
"--request-concurrency ${toString service.requestConcurrency}"
"--maximum-timeout ${toString service.maximumTimeout}"
] ++ service.registrationFlags
++ optional (service.buildsDir != null)
"--builds-dir ${service.buildsDir}"
++ optional (service.preCloneScript != null)
"--pre-clone-script ${service.preCloneScript}"
++ optional (service.preBuildScript != null)
"--pre-build-script ${service.preBuildScript}"
++ optional (service.postBuildScript != null)
"--post-build-script ${service.postBuildScript}"
++ optional (service.tagList != [ ])
"--tag-list ${concatStringsSep "," service.tagList}"
++ optional service.runUntagged
"--run-untagged"
++ optional service.protected
"--access-level ref_protected"
++ optional service.debugTraceDisabled
"--debug-trace-disabled"
++ map (e: "--env ${escapeShellArg e}") (mapAttrsToList (name: value: "${name}=${value}") service.environmentVariables)
++ optionals (service.executor == "docker") (
assert (
assertMsg (service.dockerImage != null)
"dockerImage option is required for docker executor (${name})");
[ "--docker-image ${service.dockerImage}" ]
++ optional service.dockerDisableCache
"--docker-disable-cache"
++ optional service.dockerPrivileged
"--docker-privileged"
++ map (v: "--docker-volumes ${escapeShellArg v}") service.dockerVolumes
++ map (v: "--docker-extra-hosts ${escapeShellArg v}") service.dockerExtraHosts
++ map (v: "--docker-allowed-images ${escapeShellArg v}") service.dockerAllowedImages
++ map (v: "--docker-allowed-services ${escapeShellArg v}") service.dockerAllowedServices
)
))} && sleep 1
fi
'') hashedServices)}
# unregister old services
for NAME in $(echo "$OLD_SERVICES")
do
[ ! -z "$NAME" ] && gitlab-runner unregister \
--name "$NAME" && sleep 1
done
# update global options
remarshal --if toml --of json ${configPath} \
| jq -cM '.check_interval = ${toString cfg.checkInterval} |
.concurrent = ${toString cfg.concurrent}' \
| remarshal --if json --of toml \
| sponge ${configPath}
# make config file readable by service
chown -R --reference=$HOME $(dirname ${configPath})
'');
startScript = pkgs.writeShellScriptBin "gitlab-runner-start" ''
export CONFIG_FILE=${configPath}
exec gitlab-runner run --working-directory $HOME
'';
in
{
options.services.gitlab-runner = {
enable = mkEnableOption "Gitlab Runner";
configFile = mkOption {
type = types.nullOr types.path;
default = null;
description = ''
Configuration file for gitlab-runner.
Use this option in favor of configOptions to avoid placing CI tokens in the nix store.
<option>configFile</option> takes precedence over <option>configOptions</option>.
<option>configFile</option> takes precedence over <option>services</option>.
<option>checkInterval</option> and <option>concurrent</option> will be ignored too.
Warning: Not using <option>configFile</option> will potentially result in secrets
leaking into the WORLD-READABLE nix store.
This option is deprecated, please use <option>services</option> instead.
You can use <option>registrationConfigFile</option> and
<option>registrationFlags</option>
for settings not covered by this module.
'';
type = types.nullOr types.path;
};
configOptions = mkOption {
checkInterval = mkOption {
type = types.int;
default = 0;
example = literalExample "with lib; (length (attrNames config.services.gitlab-runner.services)) * 3";
description = ''
Configuration for gitlab-runner
<option>configFile</option> will take precedence over this option.
Warning: all Configuration, especially CI token, will be stored in a
WORLD-READABLE file in the Nix Store.
If you want to protect your CI token use <option>configFile</option> instead.
Defines the interval length, in seconds, between checks for new jobs.
The default value is 3;
if set to 0 or lower, the default value will be used.
See <link xlink:href="https://docs.gitlab.com/runner/configuration/advanced-configuration.html#how-check_interval-works">runner documentation</link> for more information.
'';
};
concurrent = mkOption {
type = types.int;
default = 1;
example = literalExample "config.nix.maxJobs";
description = ''
Limits how many jobs globally can be run concurrently.
The most upper limit of jobs using all defined runners.
0 does not mean unlimited.
'';
type = types.attrs;
example = {
concurrent = 2;
runners = [{
name = "docker-nix-1.11";
url = "https://CI/";
token = "TOKEN";
executor = "docker";
builds_dir = "";
docker = {
host = "";
image = "nixos/nix:1.11";
privileged = true;
disable_cache = true;
cache_dir = "";
};
}];
};
};
gracefulTermination = mkOption {
default = false;
type = types.bool;
default = false;
description = ''
Finish all remaining jobs before stopping, restarting or reconfiguring.
If not set, gitlab-runner will stop immediately without waiting for jobs to finish,
which will lead to failed builds.
Finish all remaining jobs before stopping.
If not set, gitlab-runner will stop immediately without waiting
for jobs to finish, which will lead to failed builds.
'';
};
gracefulTimeout = mkOption {
default = "infinity";
type = types.str;
default = "infinity";
example = "5min 20s";
description = ''Time to wait until a graceful shutdown is turned into a forceful one.'';
description = ''
Time to wait until a graceful shutdown is turned into a forceful one.
'';
};
workDir = mkOption {
default = "/var/lib/gitlab-runner";
type = types.path;
description = "The working directory used";
};
package = mkOption {
description = "Gitlab Runner package to use";
type = types.package;
default = pkgs.gitlab-runner;
defaultText = "pkgs.gitlab-runner";
type = types.package;
example = literalExample "pkgs.gitlab-runner_1_11";
description = "Gitlab Runner package to use.";
};
packages = mkOption {
default = [ pkgs.bash pkgs.docker-machine ];
defaultText = "[ pkgs.bash pkgs.docker-machine ]";
extraPackages = mkOption {
type = types.listOf types.package;
default = [ ];
description = ''
Packages to add to PATH for the gitlab-runner process.
Extra packages to add to PATH for the gitlab-runner process.
'';
};
services = mkOption {
description = "GitLab Runner services.";
default = { };
example = literalExample ''
{
# runner for building in docker via host's nix-daemon
# nix store will be readable in runner, might be insecure
nix = {
# File should contain at least these two variables:
# `CI_SERVER_URL`
# `REGISTRATION_TOKEN`
registrationConfigFile = "/run/secrets/gitlab-runner-registration";
dockerImage = "alpine";
dockerVolumes = [
"/nix/store:/nix/store:ro"
"/nix/var/nix/db:/nix/var/nix/db:ro"
"/nix/var/nix/daemon-socket:/nix/var/nix/daemon-socket:ro"
];
dockerDisableCache = true;
preBuildScript = pkgs.writeScript "setup-container" '''
mkdir -p -m 0755 /nix/var/log/nix/drvs
mkdir -p -m 0755 /nix/var/nix/gcroots
mkdir -p -m 0755 /nix/var/nix/profiles
mkdir -p -m 0755 /nix/var/nix/temproots
mkdir -p -m 0755 /nix/var/nix/userpool
mkdir -p -m 1777 /nix/var/nix/gcroots/per-user
mkdir -p -m 1777 /nix/var/nix/profiles/per-user
mkdir -p -m 0755 /nix/var/nix/profiles/per-user/root
mkdir -p -m 0700 "$HOME/.nix-defexpr"
. ''${pkgs.nix}/etc/profile.d/nix.sh
''${pkgs.nix}/bin/nix-env -i ''${concatStringsSep " " (with pkgs; [ nix cacert git openssh ])}
''${pkgs.nix}/bin/nix-channel --add https://nixos.org/channels/nixpkgs-unstable
''${pkgs.nix}/bin/nix-channel --update nixpkgs
''';
environmentVariables = {
ENV = "/etc/profile";
USER = "root";
NIX_REMOTE = "daemon";
PATH = "/nix/var/nix/profiles/default/bin:/nix/var/nix/profiles/default/sbin:/bin:/sbin:/usr/bin:/usr/sbin";
NIX_SSL_CERT_FILE = "/nix/var/nix/profiles/default/etc/ssl/certs/ca-bundle.crt";
};
tagList = [ "nix" ];
};
# runner for building docker images
docker-images = {
# File should contain at least these two variables:
# `CI_SERVER_URL`
# `REGISTRATION_TOKEN`
registrationConfigFile = "/run/secrets/gitlab-runner-registration";
dockerImage = "docker:stable";
dockerVolumes = [
"/var/run/docker.sock:/var/run/docker.sock"
];
tagList = [ "docker-images" ];
};
# runner for executing stuff on host system (very insecure!)
# make sure to add required packages (including git!)
# to `environment.systemPackages`
shell = {
# File should contain at least these two variables:
# `CI_SERVER_URL`
# `REGISTRATION_TOKEN`
registrationConfigFile = "/run/secrets/gitlab-runner-registration";
executor = "shell";
tagList = [ "shell" ];
};
# runner for everything else
default = {
# File should contain at least these two variables:
# `CI_SERVER_URL`
# `REGISTRATION_TOKEN`
registrationConfigFile = "/run/secrets/gitlab-runner-registration";
dockerImage = "debian:stable";
};
}
'';
type = types.attrsOf (types.submodule {
options = {
registrationConfigFile = mkOption {
type = types.path;
description = ''
Absolute path to a file with environment variables
used for gitlab-runner registration.
A list of all supported environment variables can be found in
<literal>gitlab-runner register --help</literal>.
The ones that you probably want to set are
<literal>CI_SERVER_URL=&lt;CI server URL&gt;</literal> and
<literal>REGISTRATION_TOKEN=&lt;registration secret&gt;</literal>.
'';
};
registrationFlags = mkOption {
type = types.listOf types.str;
default = [ ];
example = [ "--docker-helper-image my/gitlab-runner-helper" ];
description = ''
Extra command-line flags passed to
<literal>gitlab-runner register</literal>.
Execute <literal>gitlab-runner register --help</literal>
for a list of supported flags.
'';
};
environmentVariables = mkOption {
type = types.attrsOf types.str;
default = { };
example = { NAME = "value"; };
description = ''
Custom environment variables injected to build environment.
For secrets you can use <option>registrationConfigFile</option>
with <literal>RUNNER_ENV</literal> variable set.
'';
};
executor = mkOption {
type = types.str;
default = "docker";
description = ''
Select the executor, e.g. shell, docker, etc.
See <link xlink:href="https://docs.gitlab.com/runner/executors/README.html">runner documentation</link> for more information.
'';
};
buildsDir = mkOption {
type = types.nullOr types.path;
default = null;
example = "/var/lib/gitlab-runner/builds";
description = ''
Absolute path to a directory where builds will be stored
in the context of the selected executor (locally, in Docker, or over SSH).
'';
};
dockerImage = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
Docker image to be used.
'';
};
dockerVolumes = mkOption {
type = types.listOf types.str;
default = [ ];
example = [ "/var/run/docker.sock:/var/run/docker.sock" ];
description = ''
Bind-mount a volume and create it
if it doesn't exist prior to mounting.
'';
};
dockerDisableCache = mkOption {
type = types.bool;
default = false;
description = ''
Disable all container caching.
'';
};
dockerPrivileged = mkOption {
type = types.bool;
default = false;
description = ''
Give extended privileges to container.
'';
};
dockerExtraHosts = mkOption {
type = types.listOf types.str;
default = [ ];
example = [ "other-host:127.0.0.1" ];
description = ''
Add a custom host-to-IP mapping.
'';
};
dockerAllowedImages = mkOption {
type = types.listOf types.str;
default = [ ];
example = [ "ruby:*" "python:*" "php:*" "my.registry.tld:5000/*:*" ];
description = ''
Whitelist allowed images.
'';
};
dockerAllowedServices = mkOption {
type = types.listOf types.str;
default = [ ];
example = [ "postgres:9" "redis:*" "mysql:*" ];
description = ''
Whitelist allowed services.
'';
};
preCloneScript = mkOption {
type = types.nullOr types.path;
default = null;
description = ''
Runner-specific command script executed before code is pulled.
'';
};
preBuildScript = mkOption {
type = types.nullOr types.path;
default = null;
description = ''
Runner-specific command script executed after code is pulled,
just before build executes.
'';
};
postBuildScript = mkOption {
type = types.nullOr types.path;
default = null;
description = ''
Runner-specific command script executed after code is pulled
and just after build executes.
'';
};
tagList = mkOption {
type = types.listOf types.str;
default = [ ];
description = ''
Tag list.
'';
};
runUntagged = mkOption {
type = types.bool;
default = false;
description = ''
Register to run untagged builds; defaults to
<literal>true</literal> when <option>tagList</option> is empty.
'';
};
limit = mkOption {
type = types.int;
default = 0;
description = ''
Limit how many jobs can be handled concurrently by this service.
0 (default) simply means don't limit.
'';
};
requestConcurrency = mkOption {
type = types.int;
default = 0;
description = ''
Limit number of concurrent requests for new jobs from GitLab.
'';
};
maximumTimeout = mkOption {
type = types.int;
default = 0;
description = ''
The maximum timeout (in seconds) that will be set for a
job when using this runner. 0 (default) simply means don't limit.
'';
};
protected = mkOption {
type = types.bool;
default = false;
description = ''
When set to true, the runner will only run on pipelines
triggered on protected branches.
'';
};
debugTraceDisabled = mkOption {
type = types.bool;
default = false;
description = ''
When set to true, the runner will disable the possibility of
using the <literal>CI_DEBUG_TRACE</literal> feature.
'';
};
};
});
};
};
config = mkIf cfg.enable {
warnings = optional (cfg.configFile != null) "services.gitlab-runner.`configFile` is deprecated, please use services.gitlab-runner.`services`.";
environment.systemPackages = [ cfg.package ];
systemd.services.gitlab-runner = {
path = cfg.packages;
environment = config.networking.proxy.envVars // {
# Gitlab runner will not start if the HOME variable is not set
HOME = cfg.workDir;
};
description = "Gitlab Runner";
documentation = [ "https://docs.gitlab.com/runner/" ];
after = [ "network.target" ]
++ optional hasDocker "docker.service";
requires = optional hasDocker "docker.service";
wantedBy = [ "multi-user.target" ];
environment = config.networking.proxy.envVars // {
HOME = "/var/lib/gitlab-runner";
};
path = with pkgs; [
bash
gawk
jq
moreutils
remarshal
utillinux
cfg.package.bin
] ++ cfg.extraPackages;
reloadIfChanged = true;
restartTriggers = [
config.environment.etc."gitlab-runner/config.toml".source
];
serviceConfig = {
# Set `DynamicUser` under `systemd.services.gitlab-runner.serviceConfig`
# to `lib.mkForce false` in your configuration to run this service as root.
# You can also set `User` and `Group` options to run this service as desired user.
# Make sure to restart service or changes won't apply.
DynamicUser = true;
StateDirectory = "gitlab-runner";
ExecReload= "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
ExecStart = ''${cfg.package.bin}/bin/gitlab-runner run \
--working-directory ${cfg.workDir} \
--config /etc/gitlab-runner/config.toml \
--service gitlab-runner \
--user gitlab-runner \
'';
} // optionalAttrs (cfg.gracefulTermination) {
SupplementaryGroups = optional hasDocker "docker";
ExecStartPre = "!${configureScript}/bin/gitlab-runner-configure";
ExecStart = "${startScript}/bin/gitlab-runner-start";
ExecReload = "!${configureScript}/bin/gitlab-runner-configure";
} // optionalAttrs (cfg.gracefulTermination) {
TimeoutStopSec = "${cfg.gracefulTimeout}";
KillSignal = "SIGQUIT";
KillMode = "process";
};
};
# Make the gitlab-runner command available so users can query the runner
environment.systemPackages = [ cfg.package ];
# Make sure the config can be reloaded on change
environment.etc."gitlab-runner/config.toml".source = configFile;
users.users.gitlab-runner = {
group = "gitlab-runner";
extraGroups = optional hasDocker "docker";
uid = config.ids.uids.gitlab-runner;
home = cfg.workDir;
createHome = true;
};
users.groups.gitlab-runner.gid = config.ids.gids.gitlab-runner;
# Enable docker if `docker` executor is used in any service
virtualisation.docker.enable = mkIf (
any (s: s.executor == "docker") (attrValues cfg.services)
) (mkDefault true);
};
imports = [
(mkRenamedOptionModule [ "services" "gitlab-runner" "packages" ] [ "services" "gitlab-runner" "extraPackages" ] )
(mkRemovedOptionModule [ "services" "gitlab-runner" "configOptions" ] "Use services.gitlab-runner.services option instead" )
(mkRemovedOptionModule [ "services" "gitlab-runner" "workDir" ] "You should move contents of workDir (if any) to /var/lib/gitlab-runner" )
];
}

View file

@ -269,6 +269,7 @@ in
};
enableSmtp = mkOption {
type = types.bool;
default = true;
description = "Whether to enable smtp in master.cf.";
};

View file

@ -7,7 +7,7 @@ let
fpm = config.services.phpfpm.pools.roundcube;
localDB = cfg.database.host == "localhost";
user = cfg.database.username;
phpWithPspell = pkgs.php.withExtensions (e: [ e.pspell ] ++ pkgs.php.enabledExtensions);
phpWithPspell = pkgs.php.withExtensions ({ enabled, all }: [ all.pspell ] ++ enabled);
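# Editorial note (a hedged sketch, not part of this change): the new
# `withExtensions` signature takes an attribute set with `enabled` (the
# extensions already enabled for this PHP derivation) and `all` (every
# available extension), so adding an extension generally looks like:
#   pkgs.php.withExtensions ({ enabled, all }: enabled ++ [ all.imagick ])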
in
{
options.services.roundcube = {

View file

@ -15,6 +15,7 @@ in
enable = mkEnableOption "the SpamAssassin daemon";
debug = mkOption {
type = types.bool;
default = false;
description = "Whether to run the SpamAssassin daemon in debug mode";
};

View file

@ -57,6 +57,7 @@ in
};
debug = mkOption {
type = types.bool;
default = false;
description = ''
Pass -d and -7 to automount and write log to the system journal.

View file

@ -25,10 +25,7 @@ in
description = "Whether to support multi-user mode by enabling the Disnix D-Bus service";
};
useWebServiceInterface = mkOption {
default = false;
description = "Whether to enable the DisnixWebService interface running on Apache Tomcat";
};
useWebServiceInterface = mkEnableOption "the DisnixWebService interface running on Apache Tomcat";
package = mkOption {
type = types.path;

View file

@ -180,7 +180,7 @@ let
${optionalString (cfg.smtp.passwordFile != null) ''password: "@smtpPassword@",''}
domain: "${cfg.smtp.domain}",
${optionalString (cfg.smtp.authentication != null) "authentication: :${cfg.smtp.authentication},"}
enable_starttls_auto: ${toString cfg.smtp.enableStartTLSAuto},
enable_starttls_auto: ${boolToString cfg.smtp.enableStartTLSAuto},
ca_file: "/etc/ssl/certs/ca-certificates.crt",
openssl_verify_mode: '${cfg.smtp.opensslVerifyMode}'
}

View file

@ -17,9 +17,9 @@ let
cfgUpdate = pkgs.writeText "octoprint-config.yaml" (builtins.toJSON fullConfig);
pluginsEnv = pkgs.python.buildEnv.override {
extraLibs = cfg.plugins pkgs.octoprint-plugins;
};
pluginsEnv = package.python.withPackages (ps: [ps.octoprint] ++ (cfg.plugins ps));
package = pkgs.octoprint;
in
{
@ -106,7 +106,6 @@ in
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
path = [ pluginsEnv ];
environment.PYTHONPATH = makeSearchPathOutput "lib" pkgs.python.sitePackages [ pluginsEnv ];
preStart = ''
if [ -e "${cfg.stateDir}/config.yaml" ]; then
@ -119,7 +118,7 @@ in
'';
serviceConfig = {
ExecStart = "${pkgs.octoprint}/bin/octoprint serve -b ${cfg.stateDir}";
ExecStart = "${pluginsEnv}/bin/octoprint serve -b ${cfg.stateDir}";
User = cfg.user;
Group = cfg.group;
};

View file

@ -82,6 +82,7 @@ in {
]);
ProtectHome = "tmpfs";
WorkingDirectory = libDir;
SyslogIdentifier = "pykms";
Restart = "on-failure";
MemoryLimit = cfg.memoryLimit;
};

View file

@ -75,6 +75,11 @@ in {
};
system.nssModules = optional cfg.enable pkgs.sssd;
system.nssDatabases = {
passwd = [ "sss" ];
shadow = [ "sss" ];
services = [ "sss" ];
};
services.dbus.packages = [ pkgs.sssd ];
})

View file

@ -19,6 +19,7 @@ in
'';
};
autorun = mkOption {
type = types.bool;
default = true;
description = ''
Whether to automatically start the tunnel.

View file

@ -72,6 +72,7 @@ in
};
noScan = mkOption {
type = types.bool;
default = false;
description = ''
Do not scan for overlapping BSSs in HT40+/- mode.
@ -127,6 +128,7 @@ in
};
wpa = mkOption {
type = types.bool;
default = true;
description = ''
Enable WPA (IEEE 802.11i/D3.0) to authenticate with the access point.

View file

@ -12,6 +12,7 @@ with lib;
enable = mkEnableOption "OpenFire XMPP server";
usePostgreSQL = mkOption {
type = types.bool;
default = true;
description = "
Whether you use the PostgreSQL service for your storage back-end.

View file

@ -1,9 +1,7 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.prosody;
sslOpts = { ... }: {
@ -30,8 +28,21 @@ let
};
};
discoOpts = {
options = {
url = mkOption {
type = types.str;
description = "URL of the endpoint you want to make discoverable";
};
description = mkOption {
type = types.str;
description = "A short description of the endpoint you want to advertise";
};
};
};
moduleOpts = {
# Generally required
# Required for compliance with https://compliance.conversations.im/about/
roster = mkOption {
type = types.bool;
default = true;
@ -69,6 +80,18 @@ let
description = "Keep multiple clients in sync";
};
csi = mkOption {
type = types.bool;
default = true;
description = "Implements the CSI protocol that allows clients to report their active/inactive state to the server";
};
cloud_notify = mkOption {
type = types.bool;
default = true;
description = "Push notifications to inform users of new messages or other pertinent information even when they have no XMPP clients online";
};
pep = mkOption {
type = types.bool;
default = true;
@ -89,10 +112,22 @@ let
vcard = mkOption {
type = types.bool;
default = true;
default = false;
description = "Allow users to set vCards";
};
vcard_legacy = mkOption {
type = types.bool;
default = true;
description = "Converts users profiles and Avatars between old and new formats";
};
bookmarks = mkOption {
type = types.bool;
default = true;
description = "Allows interop between older clients that use XEP-0048: Bookmarks in its 1.0 version and recent clients which use it in PEP";
};
# Nice to have
version = mkOption {
type = types.bool;
@ -126,10 +161,16 @@ let
mam = mkOption {
type = types.bool;
default = false;
default = true;
description = "Store messages in an archive and allow users to access it";
};
smacks = mkOption {
type = types.bool;
default = true;
description = "Allow a client to resume a disconnected session, and prevent message loss";
};
# Admin interfaces
admin_adhoc = mkOption {
type = types.bool;
@ -137,6 +178,18 @@ let
description = "Allows administration via an XMPP client that supports ad-hoc commands";
};
http_files = mkOption {
type = types.bool;
default = true;
description = "Serve static files from a directory over HTTP";
};
proxy65 = mkOption {
type = types.bool;
default = true;
description = "Enables a file transfer proxy service which clients behind NAT can use";
};
admin_telnet = mkOption {
type = types.bool;
default = false;
@ -156,12 +209,6 @@ let
description = "Enable WebSocket support";
};
http_files = mkOption {
type = types.bool;
default = false;
description = "Serve static files from a directory over HTTP";
};
# Other specific functionality
limits = mkOption {
type = types.bool;
@ -210,13 +257,6 @@ let
default = false;
description = "Legacy authentication. Only used by some old clients and bots";
};
proxy65 = mkOption {
type = types.bool;
default = false;
description = "Enables a file transfer proxy service which clients behind NAT can use";
};
};
toLua = x:
@ -235,6 +275,153 @@ let
};
'';
mucOpts = { ... }: {
options = {
domain = mkOption {
type = types.str;
description = "Domain name of the MUC";
};
name = mkOption {
type = types.str;
description = "The name to return in service discovery responses for the MUC service itself";
default = "Prosody Chatrooms";
};
restrictRoomCreation = mkOption {
type = types.enum [ true false "admin" "local" ];
default = false;
description = "Restrict room creation to server admins";
};
maxHistoryMessages = mkOption {
type = types.int;
default = 20;
description = "Specifies a limit on what each room can be configured to keep";
};
roomLocking = mkOption {
type = types.bool;
default = true;
description = ''
Enables room locking, which means that a room must be
configured before it can be used. Locked rooms are invisible
and cannot be entered by anyone but the creator
'';
};
roomLockTimeout = mkOption {
type = types.int;
default = 300;
description = ''
Timeout after which the room is destroyed or unlocked if not
configured, in seconds
'';
};
tombstones = mkOption {
type = types.bool;
default = true;
description = ''
When a room is destroyed, it leaves behind a tombstone which
prevents the room being entered or recreated. It also allows
anyone who was not in the room at the time it was destroyed
to learn about it, and to update their bookmarks. Tombstones
prevent the case where someone could recreate a previously
semi-anonymous room in order to learn the real JIDs of those
who often join there.
'';
};
tombstoneExpiry = mkOption {
type = types.int;
default = 2678400;
description = ''
This setting controls how long a tombstone is considered
valid. It defaults to 31 days. After this time, the room in
question can be created again.
'';
};
vcard_muc = mkOption {
type = types.bool;
default = true;
description = "Adds the ability to set vCard for Multi User Chat rooms";
};
# Extra parameters. Defaulting to prosody default values.
# Adding them explicitly to make them visible from the options
# documentation.
#
# See https://prosody.im/doc/modules/mod_muc for more details.
roomDefaultPublic = mkOption {
type = types.bool;
default = true;
description = "If set, the MUC rooms will be public by default.";
};
roomDefaultMembersOnly = mkOption {
type = types.bool;
default = false;
description = "If set, the MUC rooms will only be accessible to the members by default.";
};
roomDefaultModerated = mkOption {
type = types.bool;
default = false;
description = "If set, the MUC rooms will be moderated by default.";
};
roomDefaultPublicJids = mkOption {
type = types.bool;
default = false;
description = "If set, the MUC rooms will display the public JIDs by default.";
};
roomDefaultChangeSubject = mkOption {
type = types.bool;
default = false;
description = "If set, the rooms will display the public JIDs by default.";
};
roomDefaultHistoryLength = mkOption {
type = types.int;
default = 20;
description = "Number of history message sent to participants by default.";
};
roomDefaultLanguage = mkOption {
type = types.str;
default = "en";
description = "Default room language.";
};
};
};
uploadHttpOpts = { ... }: {
options = {
domain = mkOption {
type = types.nullOr types.str;
description = "Domain name for the http-upload service";
};
uploadFileSizeLimit = mkOption {
type = types.str;
default = "50 * 1024 * 1024";
description = "Maximum file size, in bytes. Defaults to 50MB.";
};
uploadExpireAfter = mkOption {
type = types.str;
default = "60 * 60 * 24 * 7";
description = "Max age of a file before it gets deleted, in seconds.";
};
userQuota = mkOption {
type = types.nullOr types.int;
default = null;
example = 1234;
description = ''
Maximum size of all uploaded files per user, in bytes. There
will be no quota if this option is set to null.
'';
};
httpUploadPath = mkOption {
type = types.str;
description = ''
Directory where the uploaded files will be stored. By
default, uploaded files are put in a sub-directory of the
default Prosody storage path (usually /var/lib/prosody).
'';
default = "/var/lib/prosody";
};
};
};
vHostOpts = { ... }: {
options = {
@ -283,6 +470,27 @@ in
description = "Whether to enable the prosody server";
};
xmppComplianceSuite = mkOption {
type = types.bool;
default = true;
description = ''
XEP-0423 defines a set of recommended XEPs to implement
for a server. It's generally a good idea to implement this
set of extensions if you want to provide your users with a
good XMPP experience.
This NixOS module aims to provide an "advanced server"
experience as defined in the XEP-0423 [1] specification.
Setting this option to true will prevent you from building a
NixOS configuration which won't comply with this standard.
You can explicitly decide to ignore this standard if you
know what you are doing by setting this option to false.
[1] https://xmpp.org/extensions/xep-0423.html
'';
};
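# A hedged sketch (editorial, not part of the module): a configuration that
# deliberately opts out of the XEP-0423 checks described above would set
#   services.prosody.xmppComplianceSuite = false;
# as the description suggests, at the cost of the compliance assertions
# further down in this file.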
package = mkOption {
type = types.package;
description = "Prosody package to use";
@ -302,6 +510,12 @@ in
default = "/var/lib/prosody";
};
disco_items = mkOption {
type = types.listOf (types.submodule discoOpts);
default = [];
description = "List of discoverable items you want to advertise.";
};
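# A hedged sketch (editorial): each entry follows the discoOpts submodule
# declared earlier in this file (a url plus a description), for example:
#   services.prosody.disco_items = [
#     { url = "upload.example.org"; description = "HTTP upload endpoint"; }
#   ];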
user = mkOption {
type = types.str;
default = "prosody";
@ -320,6 +534,31 @@ in
description = "Allow account creation";
};
# HTTP server-related options
httpPorts = mkOption {
type = types.listOf types.int;
description = "Listening HTTP ports list for this service.";
default = [ 5280 ];
};
httpInterfaces = mkOption {
type = types.listOf types.str;
default = [ "*" "::" ];
description = "Interfaces on which the HTTP server will listen on.";
};
httpsPorts = mkOption {
type = types.listOf types.int;
description = "Listening HTTPS ports list for this service.";
default = [ 5281 ];
};
httpsInterfaces = mkOption {
type = types.listOf types.str;
default = [ "*" "::" ];
description = "Interfaces on which the HTTPS server will listen on.";
};
c2sRequireEncryption = mkOption {
type = types.bool;
default = true;
@ -387,6 +626,26 @@ in
description = "Addtional path in which to look find plugins/modules";
};
uploadHttp = mkOption {
description = ''
Configures the Prosody built-in HTTP server to handle user uploads.
'';
type = types.nullOr (types.submodule uploadHttpOpts);
default = null;
example = {
domain = "uploads.my-xmpp-example-host.org";
};
};
muc = mkOption {
type = types.listOf (types.submodule mucOpts);
default = [ ];
example = [ {
domain = "conference.my-xmpp-example-host.org";
} ];
description = "Multi User Chat (MUC) configuration";
};
virtualHosts = mkOption {
description = "Define the virtual hosts";
@ -443,9 +702,44 @@ in
config = mkIf cfg.enable {
assertions = let
genericErrMsg = ''
Running a server that is not XEP-0423-compliant might make your XMPP
experience terrible. See the NixOS manual for further
information.
If you know what you're doing, you can disable this warning by
setting config.services.prosody.xmppComplianceSuite to false.
'';
errors = [
{ assertion = (builtins.length cfg.muc > 0) || !cfg.xmppComplianceSuite;
message = ''
You need to set up at least one MUC domain to comply with
XEP-0423.
'' + genericErrMsg;}
{ assertion = cfg.uploadHttp != null || !cfg.xmppComplianceSuite;
message = ''
You need to set up the uploadHttp module through
config.services.prosody.uploadHttp to comply with
XEP-0423.
'' + genericErrMsg;}
];
in errors;
environment.systemPackages = [ cfg.package ];
environment.etc."prosody/prosody.cfg.lua".text = ''
environment.etc."prosody/prosody.cfg.lua".text =
let
httpDiscoItems = if (cfg.uploadHttp != null)
then [{ url = cfg.uploadHttp.domain; description = "HTTP upload endpoint";}]
else [];
mucDiscoItems = builtins.foldl'
(acc: muc: [{ url = muc.domain; description = "${muc.domain} MUC endpoint";}] ++ acc)
[]
cfg.muc;
discoItems = cfg.disco_items ++ httpDiscoItems ++ mucDiscoItems;
in ''
pidfile = "/run/prosody/prosody.pid"
@ -472,6 +766,10 @@ in
${ lib.concatStringsSep "\n" (map (x: "${toLua x};") cfg.extraModules)}
};
disco_items = {
${ lib.concatStringsSep "\n" (builtins.map (x: ''{ "${x.url}", "${x.description}"};'') discoItems)}
};
allow_registration = ${toLua cfg.allowRegistration}
c2s_require_encryption = ${toLua cfg.c2sRequireEncryption}
@ -486,6 +784,42 @@ in
authentication = ${toLua cfg.authentication}
http_interfaces = ${toLua cfg.httpInterfaces}
https_interfaces = ${toLua cfg.httpsInterfaces}
http_ports = ${toLua cfg.httpPorts}
https_ports = ${toLua cfg.httpsPorts}
${lib.concatMapStrings (muc: ''
Component ${toLua muc.domain} "muc"
modules_enabled = { "muc_mam"; ${optionalString muc.vcard_muc ''"vcard_muc";'' } }
name = ${toLua muc.name}
restrict_room_creation = ${toLua muc.restrictRoomCreation}
max_history_messages = ${toLua muc.maxHistoryMessages}
muc_room_locking = ${toLua muc.roomLocking}
muc_room_lock_timeout = ${toLua muc.roomLockTimeout}
muc_tombstones = ${toLua muc.tombstones}
muc_tombstone_expiry = ${toLua muc.tombstoneExpiry}
muc_room_default_public = ${toLua muc.roomDefaultPublic}
muc_room_default_members_only = ${toLua muc.roomDefaultMembersOnly}
muc_room_default_moderated = ${toLua muc.roomDefaultModerated}
muc_room_default_public_jids = ${toLua muc.roomDefaultPublicJids}
muc_room_default_change_subject = ${toLua muc.roomDefaultChangeSubject}
muc_room_default_history_length = ${toLua muc.roomDefaultHistoryLength}
muc_room_default_language = ${toLua muc.roomDefaultLanguage}
'') cfg.muc}
${ lib.optionalString (cfg.uploadHttp != null) ''
Component ${toLua cfg.uploadHttp.domain} "http_upload"
http_upload_file_size_limit = ${cfg.uploadHttp.uploadFileSizeLimit}
http_upload_expire_after = ${cfg.uploadHttp.uploadExpireAfter}
${lib.optionalString (cfg.uploadHttp.userQuota != null) "http_upload_quota = ${toLua cfg.uploadHttp.userQuota}"}
http_upload_path = ${toLua cfg.uploadHttp.httpUploadPath}
''}
${ cfg.extraConfig }
${ lib.concatStringsSep "\n" (lib.mapAttrsToList (n: v: ''
@ -522,9 +856,22 @@ in
PIDFile = "/run/prosody/prosody.pid";
ExecStart = "${cfg.package}/bin/prosodyctl start";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
MemoryDenyWriteExecute = true;
PrivateDevices = true;
PrivateMounts = true;
PrivateTmp = true;
ProtectControlGroups = true;
ProtectHome = true;
ProtectHostname = true;
ProtectKernelModules = true;
ProtectKernelTunables = true;
RestrictNamespaces = true;
RestrictRealtime = true;
RestrictSUIDSGID = true;
};
};
};
meta.doc = ./prosody.xml;
}

View file

@ -0,0 +1,88 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="module-services-prosody">
<title>Prosody</title>
<para>
<link xlink:href="https://prosody.im/">Prosody</link> is an open-source, modern XMPP server.
</para>
<section xml:id="module-services-prosody-basic-usage">
<title>Basic usage</title>
<para>
A common struggle for most XMPP newcomers is to find the right set
of XMPP Extensions (XEPs) to set up. Forget to activate a few of
those and your XMPP experience might turn into a nightmare!
</para>
<para>
The XMPP community tackles this problem by creating a meta-XEP
listing a decent set of XEPs you should implement. This meta-XEP
is issued every year, the 2020 edition being
<link xlink:href="https://xmpp.org/extensions/xep-0423.html">XEP-0423</link>.
</para>
<para>
The NixOS Prosody module will implement most of these recommended XEPs out of
the box. That being said, two components still require some
manual configuration: the
<link xlink:href="https://xmpp.org/extensions/xep-0045.html">Multi User Chat (MUC)</link>
and the <link xlink:href="https://xmpp.org/extensions/xep-0363.html">HTTP File Upload</link> ones.
You'll need to create a DNS subdomain for each of those. The current convention is to name your
MUC endpoint <literal>conference.example.org</literal> and your HTTP upload domain <literal>upload.example.org</literal>.
</para>
<para>
A good configuration to start with, including a
<link xlink:href="https://xmpp.org/extensions/xep-0045.html">Multi User Chat (MUC)</link>
endpoint as well as an <link xlink:href="https://xmpp.org/extensions/xep-0363.html">HTTP File Upload</link>
endpoint, will look like this:
<programlisting>
services.prosody = {
<link linkend="opt-services.prosody.enable">enable</link> = true;
<link linkend="opt-services.prosody.admins">admins</link> = [ "root@example.org" ];
<link linkend="opt-services.prosody.ssl.cert">ssl.cert</link> = "/var/lib/acme/example.org/fullchain.pem";
<link linkend="opt-services.prosody.ssl.key">ssl.key</link> = "/var/lib/acme/example.org/key.pem";
<link linkend="opt-services.prosody.virtualHosts">virtualHosts</link>."example.org" = {
<link linkend="opt-services.prosody.virtualHosts._name__.enabled">enabled</link> = true;
<link linkend="opt-services.prosody.virtualHosts._name__.domain">domain</link> = "example.org";
<link linkend="opt-services.prosody.virtualHosts._name__.ssl.cert">ssl.cert</link> = "/var/lib/acme/example.org/fullchain.pem";
<link linkend="opt-services.prosody.virtualHosts._name__.ssl.key">ssl.key</link> = "/var/lib/acme/example.org/key.pem";
};
<link linkend="opt-services.prosody.muc">muc</link> = [ {
<link linkend="opt-services.prosody.muc">domain</link> = "conference.example.org";
} ];
<link linkend="opt-services.prosody.uploadHttp">uploadHttp</link> = {
<link linkend="opt-services.prosody.uploadHttp.domain">domain</link> = "upload.example.org";
};
};</programlisting>
</para>
</section>
<section xml:id="module-services-prosody-letsencrypt">
<title>Let's Encrypt Configuration</title>
<para>
As you can see in the code snippet from the
<link linkend="module-services-prosody-basic-usage">previous section</link>,
you'll need a single TLS certificate covering your main endpoint,
the MUC one as well as the HTTP Upload one. We can generate such a
certificate by leveraging the ACME
<link linkend="opt-security.acme.certs._name_.extraDomains">extraDomains</link> module option.
</para>
<para>
Given the setup detailed in the previous section, you'll need the following ACME configuration to generate
a TLS certificate for the three endpoints:
<programlisting>
security.acme = {
<link linkend="opt-security.acme.email">email</link> = "root@example.org";
<link linkend="opt-security.acme.acceptTerms">acceptTerms</link> = true;
<link linkend="opt-security.acme.certs">certs</link> = {
"example.org" = {
<link linkend="opt-security.acme.certs._name_.webroot">webroot</link> = "/var/www/example.org";
<link linkend="opt-security.acme.certs._name_.email">email</link> = "root@example.org";
<link linkend="opt-security.acme.certs._name_.extraDomains">extraDomains."conference.example.org"</link> = null;
<link linkend="opt-security.acme.certs._name_.extraDomains">extraDomains."upload.example.org"</link> = null;
};
};
};</programlisting>
</para>
</section>
</chapter>

View file

@ -54,21 +54,25 @@ in
};
syslog = mkOption {
type = types.bool;
default = true;
description = ''Whether to enable syslog output.'';
};
passwordAuthentication = mkOption {
type = types.bool;
default = true;
description = ''Whether to enable password authentication.'';
};
publicKeyAuthentication = mkOption {
type = types.bool;
default = true;
description = ''Whether to enable public key authentication.'';
};
rootLogin = mkOption {
type = types.bool;
default = false;
description = ''Whether to enable remote root login.'';
};
@ -90,11 +94,13 @@ in
};
tcpForwarding = mkOption {
type = types.bool;
default = true;
description = ''Whether to enable TCP/IP forwarding.'';
};
x11Forwarding = mkOption {
type = types.bool;
default = true;
description = ''Whether to enable X11 forwarding.'';
};

View file

@ -15,6 +15,7 @@ in
options = {
networking.tcpcrypt.enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable opportunistic TCP encryption. If the other end

View file

@ -62,7 +62,6 @@ in {
systemd.services.thelounge = {
description = "The Lounge web IRC client";
wantedBy = [ "multi-user.target" ];
environment = { THELOUNGE_HOME = dataDir; };
preStart = "ln -sf ${pkgs.writeText "config.js" configJsData} ${dataDir}/config.js";
serviceConfig = {
User = "thelounge";

View file

@ -9,6 +9,7 @@ with lib;
options = {
networking.wicd.enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to start <command>wicd</command>. Wired and

View file

@ -83,6 +83,14 @@ in {
'';
};
group = mkOption {
type = types.str;
default = "root";
example = "wheel";
description =
"Group to grant acces to the Yggdrasil control socket.";
};
openMulticastPort = mkOption {
type = bool;
default = false;
@ -144,8 +152,9 @@ in {
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
Restart = "always";
Group = cfg.group;
RuntimeDirectory = "yggdrasil";
RuntimeDirectoryMode = "0700";
RuntimeDirectoryMode = "0750";
BindReadOnlyPaths = mkIf configFileProvided
[ "${cfg.configFile}" ];

View file

@ -153,6 +153,16 @@ in
'';
};
allowFrom = mkOption {
type = types.listOf types.str;
default = [ "localhost" ];
example = [ "all" ];
apply = concatMapStringsSep "\n" (x: "Allow ${x}");
description = ''
From which hosts to allow unconditional access.
'';
};
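# A hedged sketch (editorial; the enclosing option prefix is not visible in
# this hunk and is assumed to be the CUPS printing service): each element is
# rendered as an "Allow <host>" line by the `apply` function above, so
#   allowFrom = [ "localhost" "10.0.0.0/24" ];
# would emit "Allow localhost" and "Allow 10.0.0.0/24" inside the
# <Location> blocks further down in this file.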
bindirCmds = mkOption {
type = types.lines;
internal = true;
@ -403,19 +413,19 @@ in
<Location />
Order allow,deny
Allow localhost
${cfg.allowFrom}
</Location>
<Location /admin>
Order allow,deny
Allow localhost
${cfg.allowFrom}
</Location>
<Location /admin/conf>
AuthType Basic
Require user @SYSTEM
Order allow,deny
Allow localhost
${cfg.allowFrom}
</Location>
<Policy default>

View file

@ -24,7 +24,11 @@ in
enable = mkOption {
type = types.bool;
default = true;
description = "Whether to enable the Name Service Cache Daemon.";
description = ''
Whether to enable the Name Service Cache Daemon.
Disabling this is strongly discouraged, as this effectively disables NSS lookups
from all non-glibc NSS modules, including the ones provided by systemd.
'';
};
config = mkOption {

View file

@ -142,7 +142,7 @@ in {
description = ''
Extra packages available at runtime to enable Deluge's plugins. For example,
extraction utilities are required for the built-in "Extractor" plugin.
This always contains unzip, gnutar, xz, p7zip and bzip2.
This always contains unzip, gnutar, xz and bzip2.
'';
};
@ -187,7 +187,7 @@ in {
);
# Provide a default set of `extraPackages`.
services.deluge.extraPackages = with pkgs; [ unzip gnutar xz p7zip bzip2 ];
services.deluge.extraPackages = with pkgs; [ unzip gnutar xz bzip2 ];
systemd.tmpfiles.rules = [
"d '${cfg.dataDir}' 0770 ${cfg.user} ${cfg.group}"

View file

@ -29,7 +29,7 @@ let
'') cfg.skins)}
${concatStringsSep "\n" (mapAttrsToList (k: v: ''
ln -s ${v} $out/share/mediawiki/extensions/${k}
ln -s ${if v != null then v else "$src/share/mediawiki/extensions/${k}"} $out/share/mediawiki/extensions/${k}
'') cfg.extensions)}
'';
};
@ -204,17 +204,28 @@ in
default = {};
type = types.attrsOf types.path;
description = ''
List of paths whose content is copied to the 'skins'
subdirectory of the MediaWiki installation.
Attribute set of paths whose content is copied to the <filename>skins</filename>
subdirectory of the MediaWiki installation in addition to the default skins.
'';
};
extensions = mkOption {
default = {};
type = types.attrsOf types.path;
type = types.attrsOf (types.nullOr types.path);
description = ''
List of paths whose content is copied to the 'extensions'
subdirectory of the MediaWiki installation.
Attribute set of paths whose content is copied to the <filename>extensions</filename>
subdirectory of the MediaWiki installation and enabled in configuration.
Use <literal>null</literal> instead of a path to enable extensions that are part of MediaWiki.
'';
example = literalExample ''
{
Matomo = pkgs.fetchzip {
url = "https://github.com/DaSchTour/matomo-mediawiki-extension/archive/v4.0.1.tar.gz";
sha256 = "0g5rd3zp0avwlmqagc59cg9bbkn3r7wx7p6yr80s644mj6dlvs1b";
};
ParserFunctions = null;
}
'';
};

View file

@ -11,8 +11,8 @@ let
base = pkgs.php74;
in
base.buildEnv {
extensions = e: with e;
base.enabledExtensions ++ [
extensions = { enabled, all }: with all;
enabled ++ [
apcu redis memcached imagick
];
extraConfig = phpOptionsStr;

View file

@ -41,9 +41,9 @@ let
"mime" "autoindex" "negotiation" "dir"
"alias" "rewrite"
"unixd" "slotmem_shm" "socache_shmcb"
"mpm_${cfg.multiProcessingModule}"
"mpm_${cfg.mpm}"
]
++ (if cfg.multiProcessingModule == "prefork" then [ "cgi" ] else [ "cgid" ])
++ (if cfg.mpm == "prefork" then [ "cgi" ] else [ "cgid" ])
++ optional enableHttp2 "http2"
++ optional enableSSL "ssl"
++ optional enableUserDir "userdir"
@ -264,7 +264,7 @@ let
PidFile ${runtimeDir}/httpd.pid
${optionalString (cfg.multiProcessingModule != "prefork") ''
${optionalString (cfg.mpm != "prefork") ''
# mod_cgid requires this.
ScriptSock ${runtimeDir}/cgisock
''}
@ -338,7 +338,7 @@ let
}
''
cat ${php}/etc/php.ini > $out
cat ${php}/lib/custom-php.ini > $out
cat ${php.phpIni} > $out
echo "$options" >> $out
'';
@ -350,6 +350,7 @@ in
imports = [
(mkRemovedOptionModule [ "services" "httpd" "extraSubservices" ] "Most existing subservices have been ported to the NixOS module system. Please update your configuration accordingly.")
(mkRemovedOptionModule [ "services" "httpd" "stateDir" ] "The httpd module now uses /run/httpd as a runtime directory.")
(mkRenamedOptionModule [ "services" "httpd" "multiProcessingModule" ] [ "services" "httpd" "mpm" ])
# virtualHosts options
(mkRemovedOptionModule [ "services" "httpd" "documentRoot" ] "Please define a virtual host using `services.httpd.virtualHosts`.")
@ -454,7 +455,13 @@ in
type = types.str;
default = "wwwrun";
description = ''
User account under which httpd runs.
User account under which httpd child processes run.
If you require the main httpd process to run as
<literal>root</literal> add the following configuration:
<programlisting>
systemd.services.httpd.serviceConfig.User = lib.mkForce "root";
</programlisting>
'';
};
@ -462,7 +469,7 @@ in
type = types.str;
default = "wwwrun";
description = ''
Group under which httpd runs.
Group under which httpd child processes run.
'';
};
@ -539,20 +546,19 @@ in
'';
};
multiProcessingModule = mkOption {
mpm = mkOption {
type = types.enum [ "event" "prefork" "worker" ];
default = "prefork";
default = "event";
example = "worker";
description =
''
Multi-processing module to be used by Apache. Available
modules are <literal>prefork</literal> (the default;
handles each request in a separate child process),
<literal>worker</literal> (hybrid approach that starts a
number of child processes each running a number of
threads) and <literal>event</literal> (a recent variant of
<literal>worker</literal> that handles persistent
connections more efficiently).
modules are <literal>prefork</literal> (handles each
request in a separate child process), <literal>worker</literal>
(hybrid approach that starts a number of child processes
each running a number of threads) and <literal>event</literal>
(the default; a recent variant of <literal>worker</literal>
that handles persistent connections more efficiently).
'';
};
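# A hedged sketch (editorial): together with the rename registered above, a
# configuration that wants to keep the previous behaviour would set
#   services.httpd.mpm = "prefork";
# while leaving the option unset now selects the "event" MPM by default.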
@ -652,7 +658,7 @@ in
services.httpd.phpOptions =
''
; Needed for PHP's mail() function.
sendmail_path = sendmail -t -i
sendmail_path = ${pkgs.system-sendmail}/bin/sendmail -t -i
; Don't advertise PHP
expose_php = off
@ -703,9 +709,7 @@ in
wants = concatLists (map (hostOpts: [ "acme-${hostOpts.hostName}.service" "acme-selfsigned-${hostOpts.hostName}.service" ]) vhostsACME);
after = [ "network.target" "fs.target" ] ++ map (hostOpts: "acme-selfsigned-${hostOpts.hostName}.service") vhostsACME;
path =
[ pkg pkgs.coreutils pkgs.gnugrep ]
++ optional cfg.enablePHP pkgs.system-sendmail; # Needed for PHP's mail() function.
path = [ pkg pkgs.coreutils pkgs.gnugrep ];
environment =
optionalAttrs cfg.enablePHP { PHPRC = phpIni; }
@ -725,7 +729,7 @@ in
ExecStart = "@${pkg}/bin/httpd httpd -f ${httpdConf}";
ExecStop = "${pkg}/bin/httpd -f ${httpdConf} -k graceful-stop";
ExecReload = "${pkg}/bin/httpd -f ${httpdConf} -k graceful";
User = "root";
User = cfg.user;
Group = cfg.group;
Type = "forking";
PIDFile = "${runtimeDir}/httpd.pid";
@ -733,6 +737,7 @@ in
RestartSec = "5s";
RuntimeDirectory = "httpd httpd/runtime";
RuntimeDirectoryMode = "0750";
AmbientCapabilities = [ "CAP_NET_BIND_SERVICE" ];
};
};

View file

@ -137,7 +137,7 @@ in
http2 = mkOption {
type = types.bool;
default = false;
default = true;
description = ''
Whether to enable HTTP 2. HTTP/2 is supported in all multi-processing modules that come with httpd. <emphasis>However, if you use the prefork mpm, there will
be severe restrictions.</emphasis> Refer to <link xlink:href="https://httpd.apache.org/docs/2.4/howto/http2.html#mpm-config"/> for details.

View file

@ -60,6 +60,7 @@ in
};
useJK = mkOption {
type = types.bool;
default = false;
description = "Whether to use to connector to the Apache HTTP server";
};

View file

@ -1,7 +1,7 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-pantheon">
<title>Pantheon Destkop</title>
<title>Pantheon Desktop</title>
<para>
Pantheon is the desktop environment created for the elementary OS distribution. It is written from scratch in Vala, utilizing GNOME technologies with GTK 3 and Granite.
</para>

View file

@ -93,16 +93,17 @@ in
};
wayland = mkOption {
type = types.bool;
default = true;
description = ''
Allow GDM to run on Wayland instead of Xserver.
Note that to enable Wayland with Nvidia you need to
enable the <option>nvidiaWayland</option> option.
'';
type = types.bool;
};
nvidiaWayland = mkOption {
type = types.bool;
default = false;
description = ''
Whether to allow wayland to be used with the proprietary

View file

@ -16,12 +16,7 @@ in
services.xserver.digimend = {
enable = mkOption {
default = false;
description = ''
Whether to enable the digimend drivers for Huion/XP-Pen/etc. tablets.
'';
};
enable = mkEnableOption "the digimend drivers for Huion/XP-Pen/etc. tablets";
};

View file

@ -205,7 +205,7 @@ let
"IPv6HopLimit" "IPv4ProxyARP" "IPv6ProxyNDP" "IPv6ProxyNDPAddress"
"IPv6PrefixDelegation" "IPv6MTUBytes" "Bridge" "Bond" "VRF" "VLAN"
"IPVLAN" "MACVLAN" "VXLAN" "Tunnel" "ActiveSlave" "PrimarySlave"
"ConfigureWithoutCarrier" "Xfrm"
"ConfigureWithoutCarrier" "Xfrm" "KeepConfiguration"
])
# Note: For DHCP the values both, none, v4, v6 are deprecated
(assertValueOneOf "DHCP" ["yes" "no" "ipv4" "ipv6" "both" "none" "v4" "v6"])
@ -228,6 +228,7 @@ let
(assertValueOneOf "ActiveSlave" boolValues)
(assertValueOneOf "PrimarySlave" boolValues)
(assertValueOneOf "ConfigureWithoutCarrier" boolValues)
(assertValueOneOf "KeepConfiguration" (boolValues ++ ["static" "dhcp-on-stop" "dhcp"]))
];
checkAddress = checkUnitConfig "Address" [
@ -274,15 +275,16 @@ let
])
];
checkDhcp = checkUnitConfig "DHCP" [
checkDhcpV4 = checkUnitConfig "DHCPv4" [
(assertOnlyFields [
"UseDNS" "UseNTP" "UseMTU" "Anonymize" "SendHostname" "UseHostname"
"Hostname" "UseDomains" "UseRoutes" "UseTimezone" "CriticalConnection"
"ClientIdentifier" "VendorClassIdentifier" "UserClass" "DUIDType"
"DUIDRawData" "IAID" "RequestBroadcast" "RouteMetric" "RouteTable"
"ListenPort" "RapidCommit"
"UseDNS" "RoutesToDNS" "UseNTP" "UseMTU" "Anonymize" "SendHostname" "UseHostname"
"Hostname" "UseDomains" "UseRoutes" "UseTimezone"
"ClientIdentifier" "VendorClassIdentifier" "UserClass" "MaxAttempts"
"DUIDType" "DUIDRawData" "IAID" "RequestBroadcast" "RouteMetric" "RouteTable"
"ListenPort" "SendRelease"
])
(assertValueOneOf "UseDNS" boolValues)
(assertValueOneOf "RoutesToDNS" boolValues)
(assertValueOneOf "UseNTP" boolValues)
(assertValueOneOf "UseMTU" boolValues)
(assertValueOneOf "Anonymize" boolValues)
@ -291,13 +293,50 @@ let
(assertValueOneOf "UseDomains" ["yes" "no" "route"])
(assertValueOneOf "UseRoutes" boolValues)
(assertValueOneOf "UseTimezone" boolValues)
(assertValueOneOf "CriticalConnection" boolValues)
(assertMinimum "MaxAttempts" 0)
(assertValueOneOf "RequestBroadcast" boolValues)
(assertInt "RouteTable")
(assertMinimum "RouteTable" 0)
(assertValueOneOf "RapidCommit" boolValues)
(assertValueOneOf "SendRelease" boolValues)
];
checkDhcpV6 = checkUnitConfig "DHCPv6" [
(assertOnlyFields [
"UseDns" "UseNTP" "RapidCommit" "ForceDHCPv6PDOtherInformation"
"PrefixDelegationHint"
])
(assertValueOneOf "UseDNS" boolValues)
(assertValueOneOf "UseNTP" boolValues)
(assertValueOneOf "RapidCommit" boolValues)
(assertValueOneOf "ForceDHCPv6PDOtherInformation" boolValues)
];
checkIpv6PrefixDelegation = checkUnitConfig "IPv6PrefixDelegation" [
(assertOnlyFields [
"Managed" "OtherInformation" "RouterLifetimeSec"
"RouterPreference" "EmitDNS" "DNS" "EmitDomains" "Domains"
"DNSLifetimeSec"
])
(assertValueOneOf "Managed" boolValues)
(assertValueOneOf "OtherInformation" boolValues)
(assertValueOneOf "RouterPreference" ["high" "medium" "low" "normal" "default"])
(assertValueOneOf "EmitDNS" boolValues)
(assertValueOneOf "EmitDomains" boolValues)
(assertMinimum "DNSLifetimeSec" 0)
];
checkIpv6Prefix = checkUnitConfig "IPv6Prefix" [
(assertOnlyFields [
"AddressAutoconfiguration" "OnLink" "Prefix"
"PreferredLifetimeSec" "ValidLifetimeSec"
])
(assertValueOneOf "AddressAutoconfiguration" boolValues)
(assertValueOneOf "OnLink" boolValues)
(assertMinimum "PreferredLifetimeSec" 0)
(assertMinimum "ValidLifetimeSec" 0)
];
checkDhcpServer = checkUnitConfig "DHCPServer" [
(assertOnlyFields [
"PoolOffset" "PoolSize" "DefaultLeaseTimeSec" "MaxLeaseTimeSec"
@ -621,6 +660,22 @@ let
};
};
ipv6PrefixOptions = {
options = {
ipv6PrefixConfig = mkOption {
default = {};
example = { Prefix = "fd00::/64"; };
type = types.addCheck (types.attrsOf unitOption) checkIpv6Prefix;
description = ''
Each attribute in this set specifies an option in the
<literal>[IPv6Prefix]</literal> section of the unit. See
<citerefentry><refentrytitle>systemd.network</refentrytitle>
<manvolnum>5</manvolnum></citerefentry> for details.
'';
};
};
};
networkOptions = commonNetworkOptions // {
@ -636,13 +691,55 @@ let
'';
};
# systemd.network.networks.*.dhcpConfig has been deprecated in favor of ….dhcpV4Config
# Produce a nice warning message so users know it is gone.
dhcpConfig = mkOption {
visible = false;
apply = _: throw "The option `systemd.network.networks.*.dhcpConfig` can no longer be used since it's been removed. Please use `systemd.network.networks.*.dhcpV4Config` instead.";
};
dhcpV4Config = mkOption {
default = {};
example = { UseDNS = true; UseRoutes = true; };
type = types.addCheck (types.attrsOf unitOption) checkDhcp;
type = types.addCheck (types.attrsOf unitOption) checkDhcpV4;
description = ''
Each attribute in this set specifies an option in the
<literal>[DHCP]</literal> section of the unit. See
<literal>[DHCPv4]</literal> section of the unit. See
<citerefentry><refentrytitle>systemd.network</refentrytitle>
<manvolnum>5</manvolnum></citerefentry> for details.
'';
};
dhcpV6Config = mkOption {
default = {};
example = { UseDNS = true; UseRoutes = true; };
type = types.addCheck (types.attrsOf unitOption) checkDhcpV6;
description = ''
Each attribute in this set specifies an option in the
<literal>[DHCPv6]</literal> section of the unit. See
<citerefentry><refentrytitle>systemd.network</refentrytitle>
<manvolnum>5</manvolnum></citerefentry> for details.
'';
};
ipv6PrefixDelegationConfig = mkOption {
default = {};
example = { EmitDNS = true; Managed = true; OtherInformation = true; };
type = types.addCheck (types.attrsOf unitOption) checkIpv6PrefixDelegation;
description = ''
Each attribute in this set specifies an option in the
<literal>[IPv6PrefixDelegation]</literal> section of the unit. See
<citerefentry><refentrytitle>systemd.network</refentrytitle>
<manvolnum>5</manvolnum></citerefentry> for details.
'';
};
ipv6Prefixes = mkOption {
default = [];
example = [ { ipv6PrefixConfig = { AddressAutoconfiguration = true; OnLink = true; }; } ];
type = with types; listOf (submodule ipv6PrefixOptions);
description = ''
A list of ipv6Prefix sections to be added to the unit. See
<citerefentry><refentrytitle>systemd.network</refentrytitle>
<manvolnum>5</manvolnum></citerefentry> for details.
'';
@ -973,11 +1070,26 @@ let
${concatStringsSep "\n" (map (s: "Tunnel=${s}") def.tunnel)}
${concatStringsSep "\n" (map (s: "Xfrm=${s}") def.xfrm)}
${optionalString (def.dhcpConfig != { }) ''
[DHCP]
${attrsToSection def.dhcpConfig}
${optionalString (def.dhcpV4Config != { }) ''
[DHCPv4]
${attrsToSection def.dhcpV4Config}
''}
${optionalString (def.dhcpV6Config != {}) ''
[DHCPv6]
${attrsToSection def.dhcpV6Config}
''}
${optionalString (def.ipv6PrefixDelegationConfig != {}) ''
[IPv6PrefixDelegation]
${attrsToSection def.ipv6PrefixDelegationConfig}
''}
${flip concatMapStrings def.ipv6Prefixes (x: ''
[IPv6Prefix]
${attrsToSection x.ipv6PrefixConfig}
'')}
${optionalString (def.dhcpServerConfig != { }) ''
[DHCPServer]
${attrsToSection def.dhcpServerConfig}
@ -1054,6 +1166,7 @@ in
};
config = mkMerge [
# .link units are honored by udev, no matter if systemd-networkd is enabled or not.
{
systemd.network.units = mapAttrs' (n: v: nameValuePair "${n}.link" (linkToUnit n v)) cfg.links;
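
A hedged sketch of the new per-network sections declared above (editorial example; the attribute names are the ones added in this hunk, the interface name and values are illustrative):

  systemd.network.networks."eth1" = {
    # Rendered into the [DHCPv4] section of the generated .network unit.
    dhcpV4Config.UseDNS = true;
    # Rendered into the [DHCPv6] section.
    dhcpV6Config.PrefixDelegationHint = "::/60";
    # Rendered into the [IPv6PrefixDelegation] section.
    ipv6PrefixDelegationConfig = { Managed = true; EmitDNS = true; };
    # Each list element becomes one [IPv6Prefix] section.
    ipv6Prefixes = [ { ipv6PrefixConfig.Prefix = "fd00::/64"; } ];
  };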

View file

@ -291,21 +291,43 @@ let self = {
"19.03".sa-east-1.hvm-ebs = "ami-0c6a43c6e0ad1f4e2";
"19.03".ap-south-1.hvm-ebs = "ami-0303deb1b5890f878";
# 19.09.981.205691b7cbe
"19.09".eu-west-1.hvm-ebs = "ami-0ebd3156e21e9642f";
"19.09".eu-west-2.hvm-ebs = "ami-02a2b5480a79084b7";
"19.09".eu-west-3.hvm-ebs = "ami-09aa175c7588734f7";
"19.09".eu-central-1.hvm-ebs = "ami-00a7fafd7e237a330";
"19.09".us-east-1.hvm-ebs = "ami-00a8eeaf232a74f84";
"19.09".us-east-2.hvm-ebs = "ami-093efd3a57a1e03a8";
"19.09".us-west-1.hvm-ebs = "ami-0913e9a2b677fac30";
"19.09".us-west-2.hvm-ebs = "ami-02d9a19f77b47882a";
"19.09".ca-central-1.hvm-ebs = "ami-0627dd3f7b3627a29";
"19.09".ap-southeast-1.hvm-ebs = "ami-083614e4d08f2164d";
"19.09".ap-southeast-2.hvm-ebs = "ami-0048c704185ded6dc";
"19.09".ap-northeast-1.hvm-ebs = "ami-0329e7fc2d7f60bd0";
"19.09".ap-northeast-2.hvm-ebs = "ami-03d4ae7d0b5fc364f";
"19.09".ap-south-1.hvm-ebs = "ami-0b599690b35aeef23";
# 19.09.2243.84af403f54f
"19.09".eu-west-1.hvm-ebs = "ami-071082f0fa035374f";
"19.09".eu-west-2.hvm-ebs = "ami-0d9dc33c54d1dc4c3";
"19.09".eu-west-3.hvm-ebs = "ami-09566799591d1bfed";
"19.09".eu-central-1.hvm-ebs = "ami-015f8efc2be419b79";
"19.09".eu-north-1.hvm-ebs = "ami-07fc0a32d885e01ed";
"19.09".us-east-1.hvm-ebs = "ami-03330d8b51287412f";
"19.09".us-east-2.hvm-ebs = "ami-0518b4c84972e967f";
"19.09".us-west-1.hvm-ebs = "ami-06ad07e61a353b4a6";
"19.09".us-west-2.hvm-ebs = "ami-0e31e30925cf3ce4e";
"19.09".ca-central-1.hvm-ebs = "ami-07df50fc76702a36d";
"19.09".ap-southeast-1.hvm-ebs = "ami-0f71ae5d4b0b78d95";
"19.09".ap-southeast-2.hvm-ebs = "ami-057bbf2b4bd62d210";
"19.09".ap-northeast-1.hvm-ebs = "ami-02a62555ca182fb5b";
"19.09".ap-northeast-2.hvm-ebs = "ami-0219dde0e6b7b7b93";
"19.09".ap-south-1.hvm-ebs = "ami-066f7f2a895c821a1";
"19.09".ap-east-1.hvm-ebs = "ami-055b2348db2827ff1";
"19.09".sa-east-1.hvm-ebs = "ami-018aab68377227e06";
latest = self."19.09";
# 20.03.1554.94e39623a49
"20.03".eu-west-1.hvm-ebs = "ami-02c34db5766cc7013";
"20.03".eu-west-2.hvm-ebs = "ami-0e32bd8c7853883f1";
"20.03".eu-west-3.hvm-ebs = "ami-061edb1356c1d69fd";
"20.03".eu-central-1.hvm-ebs = "ami-0a1a94722dcbff94c";
"20.03".eu-north-1.hvm-ebs = "ami-02699abfacbb6464b";
"20.03".us-east-1.hvm-ebs = "ami-0c5e7760748b74e85";
"20.03".us-east-2.hvm-ebs = "ami-030296bb256764655";
"20.03".us-west-1.hvm-ebs = "ami-050be818e0266b741";
"20.03".us-west-2.hvm-ebs = "ami-06562f78dca68eda2";
"20.03".ca-central-1.hvm-ebs = "ami-02365684a173255c7";
"20.03".ap-southeast-1.hvm-ebs = "ami-0dbf353e168d155f7";
"20.03".ap-southeast-2.hvm-ebs = "ami-04c0f3a75f63daddd";
"20.03".ap-northeast-1.hvm-ebs = "ami-093d9cc49c191eb6c";
"20.03".ap-northeast-2.hvm-ebs = "ami-0087df91a7b6ebd45";
"20.03".ap-south-1.hvm-ebs = "ami-0a1a6b569af04af9d";
"20.03".ap-east-1.hvm-ebs = "ami-0d18fdd309cdefa86";
"20.03".sa-east-1.hvm-ebs = "ami-09859378158ae971d";
latest = self."20.03";
}; in self

View file

@ -546,7 +546,7 @@ in
Note that this option might require some adjustments to the container configuration,
e.g. you might want to set
<varname>systemd.network.networks.$interface.dhcpConfig.ClientIdentifier</varname> to "mac"
<varname>systemd.network.networks.$interface.dhcpV4Config.ClientIdentifier</varname> to "mac"
if you use the <varname>macvlans</varname> option.
This way the DHCP client identifier will be stable across container restarts.
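
As a concrete (hedged) illustration of the renamed option mentioned above, with a purely hypothetical network unit name:

  systemd.network.networks."40-mv-eth0".dhcpV4Config.ClientIdentifier = "mac";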

View file

@ -4,18 +4,20 @@ let
inherit (lib) mkOption types;
podmanPackage = (pkgs.podman.override { inherit (cfg) extraPackages; });
# Provides a fake "docker" binary mapping to podman
dockerCompat = pkgs.runCommandNoCC "${pkgs.podman.pname}-docker-compat-${pkgs.podman.version}" {
dockerCompat = pkgs.runCommandNoCC "${podmanPackage.pname}-docker-compat-${podmanPackage.version}" {
outputs = [ "out" "bin" "man" ];
inherit (pkgs.podman) meta;
inherit (podmanPackage) meta;
} ''
mkdir $out
mkdir -p $bin/bin
ln -s ${pkgs.podman.bin}/bin/podman $bin/bin/docker
ln -s ${podmanPackage.bin}/bin/podman $bin/bin/docker
mkdir -p $man/share/man/man1
for f in ${pkgs.podman.man}/share/man/man1/*; do
for f in ${podmanPackage.man}/share/man/man1/*; do
basename=$(basename $f | sed s/podman/docker/g)
ln -s $f $man/share/man/man1/$basename
done
@ -54,6 +56,19 @@ in
'';
};
extraPackages = mkOption {
type = with types; listOf package;
default = [ ];
example = lib.literalExample ''
[
pkgs.gvisor
]
'';
description = ''
Extra packages to be installed in the Podman wrapper.
'';
};
libpod = mkOption {
default = {};
description = "Libpod configuration";
@ -77,29 +92,24 @@ in
config = lib.mkIf cfg.enable {
environment.systemPackages = [
pkgs.podman # Docker compat
pkgs.runc # Default container runtime
pkgs.crun # Default container runtime (cgroups v2)
pkgs.conmon # Container runtime monitor
pkgs.slirp4netns # User-mode networking for unprivileged namespaces
pkgs.fuse-overlayfs # CoW for images, much faster than default vfs
pkgs.utillinux # nsenter
pkgs.iptables
]
++ lib.optional cfg.dockerCompat dockerCompat;
environment.systemPackages = [ podmanPackage ]
++ lib.optional cfg.dockerCompat dockerCompat;
environment.etc."containers/libpod.conf".text = ''
cni_plugin_dir = ["${pkgs.cni-plugins}/bin/"]
cni_config_dir = "/etc/cni/net.d/"
'' + cfg.libpod.extraConfig;
environment.etc."cni/net.d/87-podman-bridge.conflist".source = copyFile "${pkgs.podman.src}/cni/87-podman-bridge.conflist";
environment.etc."cni/net.d/87-podman-bridge.conflist".source = copyFile "${pkgs.podman-unwrapped.src}/cni/87-podman-bridge.conflist";
# Enable common /etc/containers configuration
virtualisation.containers.enable = true;
assertions = [{
assertion = cfg.dockerCompat -> !config.virtualisation.docker.enable;
message = "Option dockerCompat conflicts with docker";
}];
};
}
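
A hedged sketch of enabling this module as wired up above (editorial example; the option names are the ones referenced in this file, the gvisor entry mirrors the extraPackages example):

  virtualisation.podman = {
    enable = true;
    # Installs the "docker" compatibility alias built by dockerCompat above.
    dockerCompat = true;
    # Extra tools made available inside the Podman wrapper.
    extraPackages = [ pkgs.gvisor ];
  };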

View file

@ -499,7 +499,7 @@ in
# FIXME: Consolidate this one day.
virtualisation.qemu.options = mkMerge [
(mkIf (pkgs.stdenv.isi686 || pkgs.stdenv.isx86_64) [
"-vga std" "-usb" "-device usb-tablet,bus=usb-bus.0"
"-usb" "-device usb-tablet,bus=usb-bus.0"
])
(mkIf (pkgs.stdenv.isAarch32 || pkgs.stdenv.isAarch64) [
"-device virtio-gpu-pci" "-device usb-ehci,id=usb0" "-device usb-kbd" "-device usb-tablet"

View file

@ -103,6 +103,7 @@ in
};
forwardDns = mkOption {
type = types.bool;
default = false;
description = ''
If set to <literal>true</literal>, the DNS queries from the
@ -135,14 +136,8 @@ in
};
};
virtualisation.xen.trace =
mkOption {
default = false;
description =
''
Enable Xen tracing.
'';
};
virtualisation.xen.trace = mkEnableOption "Xen tracing";
};

View file

@ -50,6 +50,7 @@ in rec {
(onFullSupported "nixos.dummy")
(onAllSupported "nixos.iso_minimal")
(onSystems ["x86_64-linux"] "nixos.iso_plasma5")
(onSystems ["x86_64-linux"] "nixos.iso_gnome")
(onFullSupported "nixos.manual")
(onSystems ["x86_64-linux"] "nixos.ova")
(onSystems ["aarch64-linux"] "nixos.sd_image")
@ -110,6 +111,7 @@ in rec {
(onFullSupported "nixos.tests.networking.scripted.sit")
(onFullSupported "nixos.tests.networking.scripted.static")
(onFullSupported "nixos.tests.networking.scripted.vlan")
(onFullSupported "nixos.tests.systemd-networkd-ipv6-prefix-delegation")
(onFullSupported "nixos.tests.nfs3.simple")
(onFullSupported "nixos.tests.nfs4.simple")
(onFullSupported "nixos.tests.openssh")

View file

@ -155,6 +155,12 @@ in rec {
inherit system;
});
iso_gnome = forMatchingSystems [ "x86_64-linux" ] (system: makeIso {
module = ./modules/installer/cd-dvd/installation-cd-graphical-gnome.nix;
type = "gnome";
inherit system;
});
# A variant with a more recent (but possibly less stable) kernel
# that might support more hardware.
iso_minimal_new_kernel = forMatchingSystems [ "x86_64-linux" "aarch64-linux" ] (system: makeIso {
@ -308,9 +314,9 @@ in rec {
lapp = makeClosure ({ pkgs, ... }:
{ services.httpd.enable = true;
services.httpd.adminAddr = "foo@example.org";
services.httpd.enablePHP = true;
services.postgresql.enable = true;
services.postgresql.package = pkgs.postgresql;
environment.systemPackages = [ pkgs.php ];
});
};
}

View file

@ -12,8 +12,9 @@ let
fi
'';
in import ./make-test-python.nix {
in import ./make-test-python.nix ({ lib, ... }: {
name = "acme";
meta.maintainers = lib.teams.acme.members;
nodes = rec {
acme = { nodes, lib, ... }: {
@ -207,4 +208,4 @@ in import ./make-test-python.nix {
"curl --cacert /tmp/ca.crt https://c.example.test/ | grep -qF 'hello world'"
)
'';
}
})

View file

@ -287,6 +287,7 @@ in
snapper = handleTest ./snapper.nix {};
solr = handleTest ./solr.nix {};
spacecookie = handleTest ./spacecookie.nix {};
spike = handleTest ./spike.nix {};
sonarr = handleTest ./sonarr.nix {};
strongswan-swanctl = handleTest ./strongswan-swanctl.nix {};
sudo = handleTest ./sudo.nix {};
@ -301,6 +302,7 @@ in
systemd-networkd-vrf = handleTest ./systemd-networkd-vrf.nix {};
systemd-networkd = handleTest ./systemd-networkd.nix {};
systemd-networkd-dhcpserver = handleTest ./systemd-networkd-dhcpserver.nix {};
systemd-networkd-ipv6-prefix-delegation = handleTest ./systemd-networkd-ipv6-prefix-delegation.nix {};
systemd-nspawn = handleTest ./systemd-nspawn.nix {};
pdns-recursor = handleTest ./pdns-recursor.nix {};
taskserver = handleTest ./taskserver.nix {};

View file

@ -101,6 +101,7 @@ let
prefixed indices. Ignore the error if the filter does not result in an
actionable list of indices (ignore_empty_list) and exit cleanly.
options:
allow_ilm_indices: true
ignore_empty_list: True
disable_action: False
filters:

View file

@ -22,6 +22,8 @@ in {
client = { ... }: {};
};
testScript = ''
MOCKUSER = "mockuser_nixos_org"
MOCKADMIN = "mockadmin_nixos_org"
start_all()
server.wait_for_unit("mock-google-metadata.service")
@ -29,10 +31,10 @@ in {
# mockserver should return a non-expired ssh key for both mockuser and mockadmin
server.succeed(
'${pkgs.google-compute-engine-oslogin}/bin/google_authorized_keys mockuser | grep -q "${snakeOilPublicKey}"'
f'${pkgs.google-compute-engine-oslogin}/bin/google_authorized_keys {MOCKUSER} | grep -q "${snakeOilPublicKey}"'
)
server.succeed(
'${pkgs.google-compute-engine-oslogin}/bin/google_authorized_keys mockadmin | grep -q "${snakeOilPublicKey}"'
f'${pkgs.google-compute-engine-oslogin}/bin/google_authorized_keys {MOCKADMIN} | grep -q "${snakeOilPublicKey}"'
)
# install snakeoil ssh key on the client, and provision .ssh/config file
@ -50,20 +52,22 @@ in {
client.fail("ssh ghost@server 'true'")
# we should be able to connect as mockuser
client.succeed("ssh mockuser@server 'true'")
client.succeed(f"ssh {MOCKUSER}@server 'true'")
# but we shouldn't be able to sudo
client.fail(
"ssh mockuser@server '/run/wrappers/bin/sudo /run/current-system/sw/bin/id' | grep -q 'root'"
f"ssh {MOCKUSER}@server '/run/wrappers/bin/sudo /run/current-system/sw/bin/id' | grep -q 'root'"
)
# we should also be able to log in as mockadmin
client.succeed("ssh mockadmin@server 'true'")
client.succeed(f"ssh {MOCKADMIN}@server 'true'")
# pam_oslogin_admin.so should now have generated a sudoers file
server.succeed("find /run/google-sudoers.d | grep -q '/run/google-sudoers.d/mockadmin'")
server.succeed(
f"find /run/google-sudoers.d | grep -q '/run/google-sudoers.d/{MOCKADMIN}'"
)
# and we should be able to sudo
client.succeed(
"ssh mockadmin@server '/run/wrappers/bin/sudo /run/current-system/sw/bin/id' | grep -q 'root'"
f"ssh {MOCKADMIN}@server '/run/wrappers/bin/sudo /run/current-system/sw/bin/id' | grep -q 'root'"
)
'';
})

View file

@ -7,24 +7,29 @@ import hashlib
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from typing import Dict
SNAKEOIL_PUBLIC_KEY = os.environ['SNAKEOIL_PUBLIC_KEY']
MOCKUSER="mockuser_nixos_org"
MOCKADMIN="mockadmin_nixos_org"
def w(msg):
def w(msg: bytes):
sys.stderr.write(f"{msg}\n")
sys.stderr.flush()
def gen_fingerprint(pubkey):
def gen_fingerprint(pubkey: str):
decoded_key = base64.b64decode(pubkey.encode("ascii").split()[1])
return hashlib.sha256(decoded_key).hexdigest()
def gen_email(username):
def gen_email(username: str):
"""username seems to be a 21 characters long number string, so mimic that in a reproducible way"""
return str(int(hashlib.sha256(username.encode()).hexdigest(), 16))[0:21]
def gen_mockuser(username: str, uid: str, gid: str, home_directory: str, snakeoil_pubkey: str) -> Dict:
snakeoil_pubkey_fingerprint = gen_fingerprint(snakeoil_pubkey)
# seems to be a 21-character-long number string, so mimic that in a reproducible way
@ -56,7 +61,8 @@ def gen_mockuser(username: str, uid: str, gid: str, home_directory: str, snakeoi
class ReqHandler(BaseHTTPRequestHandler):
def _send_json_ok(self, data):
def _send_json_ok(self, data: dict):
self.send_response(200)
self.send_header('Content-type', 'application/json')
self.end_headers()
@ -64,29 +70,62 @@ class ReqHandler(BaseHTTPRequestHandler):
w(out)
self.wfile.write(out)
def _send_json_success(self, success=True):
self.send_response(200)
self.send_header('Content-type', 'application/json')
self.end_headers()
out = json.dumps({"success": success}).encode()
w(out)
self.wfile.write(out)
def _send_404(self):
self.send_response(404)
self.end_headers()
def do_GET(self):
p = str(self.path)
# mockuser and mockadmin are allowed to login, both use the same snakeoil public key
if p == '/computeMetadata/v1/oslogin/users?username=mockuser' \
or p == '/computeMetadata/v1/oslogin/users?uid=1009719690':
self._send_json_ok(gen_mockuser(username='mockuser', uid='1009719690', gid='1009719690',
home_directory='/home/mockuser', snakeoil_pubkey=SNAKEOIL_PUBLIC_KEY))
elif p == '/computeMetadata/v1/oslogin/users?username=mockadmin' \
or p == '/computeMetadata/v1/oslogin/users?uid=1009719691':
self._send_json_ok(gen_mockuser(username='mockadmin', uid='1009719691', gid='1009719691',
home_directory='/home/mockadmin', snakeoil_pubkey=SNAKEOIL_PUBLIC_KEY))
pu = urlparse(p)
params = parse_qs(pu.query)
# mockuser is allowed to login
elif p == f"/computeMetadata/v1/oslogin/authorize?email={gen_email('mockuser')}&policy=login":
self._send_json_ok({'success': True})
# users endpoint
if pu.path == "/computeMetadata/v1/oslogin/users":
# mockuser and mockadmin are allowed to login, both use the same snakeoil public key
if params.get('username') == [MOCKUSER] or params.get('uid') == ["1009719690"]:
username = MOCKUSER
uid = "1009719690"
elif params.get('username') == [MOCKADMIN] or params.get('uid') == ["1009719691"]:
username = MOCKADMIN
uid = "1009719691"
else:
self._send_404()
return
# mockadmin may also become root
elif p == f"/computeMetadata/v1/oslogin/authorize?email={gen_email('mockadmin')}&policy=login" or p == f"/computeMetadata/v1/oslogin/authorize?email={gen_email('mockadmin')}&policy=adminLogin":
self._send_json_ok({'success': True})
self._send_json_ok(gen_mockuser(username=username, uid=uid, gid=uid, home_directory=f"/home/{username}", snakeoil_pubkey=SNAKEOIL_PUBLIC_KEY))
return
# authorize endpoint
elif pu.path == "/computeMetadata/v1/oslogin/authorize":
# is user allowed to login?
if params.get("policy") == ["login"]:
# mockuser and mockadmin are allowed to login
if params.get('email') == [gen_email(MOCKUSER)] or params.get('email') == [gen_email(MOCKADMIN)]:
self._send_json_success()
return
self._send_json_success(False)
return
# is user allowed to become root?
elif params.get("policy") == ["adminLogin"]:
# only mockadmin is allowed to become admin
self._send_json_success((params['email'] == [gen_email(MOCKADMIN)]))
return
# send 404 for other policies
else:
self._send_404()
return
else:
sys.stderr.write(f"Unhandled path: {p}\n")
sys.stderr.flush()
self.send_response(501)
self.send_response(404)
self.end_headers()
self.wfile.write(b'')

View file

@ -8,6 +8,13 @@ import ./make-test-python.nix ({ pkgs, lib, ... }: {
services.mediawiki.virtualHost.hostName = "localhost";
services.mediawiki.virtualHost.adminAddr = "root@example.com";
services.mediawiki.passwordFile = pkgs.writeText "password" "correcthorsebatterystaple";
services.mediawiki.extensions = {
Matomo = pkgs.fetchzip {
url = "https://github.com/DaSchTour/matomo-mediawiki-extension/archive/v4.0.1.tar.gz";
sha256 = "0g5rd3zp0avwlmqagc59cg9bbkn3r7wx7p6yr80s644mj6dlvs1b";
};
ParserFunctions = null;
};
};
testScript = ''

View file

@ -1,6 +1,6 @@
import ../make-test-python.nix ({pkgs, ...}: {
import ../make-test-python.nix ({pkgs, lib, ...}: {
name = "php-fpm-nginx-test";
meta.maintainers = with pkgs.stdenv.lib.maintainers; [ etu ];
meta.maintainers = lib.teams.php.members;
machine = { config, lib, pkgs, ... }: {
services.nginx = {

View file

@ -1,6 +1,6 @@
import ../make-test-python.nix ({pkgs, ...}: {
import ../make-test-python.nix ({pkgs, lib, ...}: {
name = "php-httpd-test";
meta.maintainers = with pkgs.stdenv.lib.maintainers; [ etu ];
meta.maintainers = lib.teams.php.members;
machine = { config, lib, pkgs, ... }: {
services.httpd = {

View file

@ -1,7 +1,9 @@
let
testString = "can-use-subgroups";
in import ../make-test-python.nix ({ ...}: {
in import ../make-test-python.nix ({lib, ...}: {
name = "php-httpd-pcre-jit-test";
meta.maintainers = lib.teams.php.members;
machine = { lib, pkgs, ... }: {
time.timeZone = "UTC";
services.httpd = {

View file

@ -0,0 +1,295 @@
# This test verifies that we can request and assign IPv6 prefixes from upstream
# (e.g. ISP) routers.
# The setup consists of three VMs: one for the ISP, one as your residential router
# and the third as a client machine in the residential network.
#
# There are two VLANs in this test:
# - VLAN 1 is the connection between the ISP and the router
# - VLAN 2 is the connection between the router and the client
import ./make-test-python.nix ({pkgs, ...}: {
name = "systemd-networkd-ipv6-prefix-delegation";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ andir ];
};
nodes = {
# The ISP router's job is to delegate IPv6 prefixes via DHCPv6. Like with
# regular IPv6 auto-configuration it will also emit IPv6 router
# advertisements (RAs). Those RAs will not carry a prefix but instead
# just set the "Other" flag to indicate to the receiving nodes that they
# should attempt DHCPv6.
#
# Note: On the ISP's device we don't really care if we are using networkd in
# this example. That being said, we can't use it (yet) as networkd doesn't
# implement the serving side of DHCPv6. We will use ISC's well-aged dhcpd6
# for that task.
isp = { lib, pkgs, ... }: {
virtualisation.vlans = [ 1 ];
networking = {
useDHCP = false;
firewall.enable = false;
interfaces.eth1.ipv4.addresses = lib.mkForce []; # no need for legacy IP
interfaces.eth1.ipv6.addresses = lib.mkForce [
{ address = "2001:DB8::"; prefixLength = 64; }
];
};
# Since we want to program the routes that we delegate to the "customer"
# into our routing table we must have a way to gain the required privs.
# This security wrapper will do in our test setup.
#
# DO NOT COPY THIS TO PRODUCTION AS IS. Think about it at least twice.
# Everyone on the "isp" machine will be able to add routes to the kernel.
security.wrappers.add-dhcpd-lease = {
source = pkgs.writeShellScript "add-dhcpd-lease" ''
exec ${pkgs.iproute}/bin/ip -6 route replace "$1" via "$2"
'';
capabilities = "cap_net_admin+ep";
};
services = {
# Configure the DHCPv6 server
#
# We will hand out /48 prefixes from the subnet 2001:DB8:F000::/36.
# That gives us ~4k prefixes, which should be more than enough for this test.
#
# Since (usually) you will not receive a prefix with the router
# advertisements we also hand out /128 leases from the range
# 2001:DB8:0000:0000:FFFF::/112.
dhcpd6 = {
enable = true;
interfaces = [ "eth1" ];
extraConfig = ''
subnet6 2001:DB8::/36 {
range6 2001:DB8:0000:0000:FFFF:: 2001:DB8:0000:0000:FFFF::FFFF;
prefix6 2001:DB8:F000:: 2001:DB8:FFFF:: /48;
}
# This is the secret sauce. We have to extract the prefix and the
# next hop when committing the lease to the database. dhcpd6
# (rightfully) has no concept of adding routes to the system's
# routing table, as how to do that really depends on the setup.
#
# In a production environment your DHCPv6 server is likely not the
# router. You might want to consider BGP, custom NetConf calls, …
# in those cases.
on commit {
set IP = pick-first-value(binary-to-ascii(16, 16, ":", substring(option dhcp6.ia-na, 16, 16)), "n/a");
set Prefix = pick-first-value(binary-to-ascii(16, 16, ":", suffix(option dhcp6.ia-pd, 16)), "n/a");
set PrefixLength = pick-first-value(binary-to-ascii(10, 8, ":", substring(suffix(option dhcp6.ia-pd, 17), 0, 1)), "n/a");
log(concat(IP, " ", Prefix, " ", PrefixLength));
execute("/run/wrappers/bin/add-dhcpd-lease", concat(Prefix,"/",PrefixLength), IP);
}
'';
};
# Finally we have to set up the router advertisements. While we could be
# using networkd or bird for this task `radvd` is probably the most
# venerable of them all. It was made explicitly for this purpose and
# the configuration is much more straightforward than what networkd
# requires.
# As outlined above we will have to set the `Managed` flag as otherwise
# the clients will not know if they should do DHCPv6. (Some do
# anyway/always)
radvd = {
enable = true;
config = ''
interface eth1 {
AdvSendAdvert on;
AdvManagedFlag on;
AdvOtherConfigFlag off; # we don't really have DNS or NTP or anything like that to distribute
prefix ::/64 {
AdvOnLink on;
AdvAutonomous on;
};
};
'';
};
};
};
# This will be our (residential) router that receives the IPv6 prefix (IA_PD)
# and /128 (IA_NA) allocation.
#
# Here we will actually start using networkd.
router = {
virtualisation.vlans = [ 1 2 ];
systemd.services.systemd-networkd.environment.SYSTEMD_LOG_LEVEL = "debug";
boot.kernel.sysctl = {
# we want to forward packets from the ISP to the client and back.
"net.ipv6.conf.all.forwarding" = 1;
};
networking = {
useNetworkd = true;
useDHCP = false;
# Consider enabling this in production and generating firewall rules
# for forwarding/input from the configured interfaces so you do not have
# to manage them in multiple places
firewall.enable = false;
};
systemd.network = {
networks = {
# systemd-networkd will load the first network unit file
# that matches, ordered lexicographically by filename.
# /etc/systemd/network/{40-eth1,99-main}.network already
# exist. This network unit must be the one loaded for the
# test, which is why it is named to sort before them.
# Configuration of the interface to the ISP.
# We must accept RAs and request the PD prefix.
"01-eth1" = {
name = "eth1";
networkConfig = {
Description = "ISP interface";
IPv6AcceptRA = true;
#DHCP = false; # no need for legacy IP
};
linkConfig = {
# We care about this interface when talking about being "online".
# If this interface is in the `routable` state we can reach
# others and they should be able to reach us.
RequiredForOnline = "routable";
};
# This configures the DHCPv6 client part towards the ISP's DHCPv6 server.
dhcpV6Config = {
# We have to include a request for a prefix in our DHCPv6 client
# request packets.
# Otherwise the upstream DHCPv6 server wouldn't know if we want a
# prefix or not. Note: On some installations it makes sense to
# always force that option on the DHCPv6 server since there are
# certain CPEs that just do not set this field but happily
# accept the delegated prefix.
PrefixDelegationHint = "::/48";
};
ipv6PrefixDelegationConfig = {
# Let networkd know that we would very much like to use DHCPv6
# to obtain the "managed" information. Not sure why they can't
# just take that from the upstream RAs.
Managed = true;
};
};
# Interface to the client. Here we should redistribute a /64 from
# the prefix we received from the ISP.
"01-eth2" = {
name = "eth2";
networkConfig = {
Description = "Client interface";
# the client shouldn't be allowed to send us RAs, that would be weird.
IPv6AcceptRA = false;
# Just delegate prefixes from the DHCPv6 PD pool.
# If you also want to distribute a local ULA prefix you want to
# set this to `yes` as that includes both static prefixes as well
# as PD prefixes.
IPv6PrefixDelegation = "dhcpv6";
};
# finally "act as router" (according to systemd.network(5))
ipv6PrefixDelegationConfig = {
RouterLifetimeSec = 300; # required, as otherwise no RAs are emitted
# In a production environment you should consider setting these as well:
#EmitDNS = true;
#EmitDomains = true;
#DNS = "fe80::1"; # or whatever "well known" IP your router will have on the inside.
};
# This adds a "random" ULA prefix to the interface that is being
# advertised to the clients.
# Not used in this test.
# ipv6Prefixes = [
# {
# ipv6PrefixConfig = {
# AddressAutoconfiguration = true;
# PreferredLifetimeSec = 1800;
# ValidLifetimeSec = 1800;
# };
# }
# ];
};
# Finally we add a static IPv6 unique local address to the "lo"
# interface. It will serve as an ICMPv6 echo target to verify
# connectivity from the client to the router.
"01-lo" = {
name = "lo";
addresses = [
{ addressConfig.Address = "FD42::1/128"; }
];
};
};
};
# make the network-online target a requirement, we wait for it in our test script
systemd.targets.network-online.wantedBy = [ "multi-user.target" ];
};
# This is the client behind the router. We should be receiving router
# advertisements for both the ULA and the delegated prefix.
# All we have to do is boot with the default (networkd) configuration.
client = {
virtualisation.vlans = [ 2 ];
systemd.services.systemd-networkd.environment.SYSTEMD_LOG_LEVEL = "debug";
networking = {
useNetworkd = true;
useDHCP = false;
};
# make the network-online target a requirement, we wait for it in our test script
systemd.targets.network-online.wantedBy = [ "multi-user.target" ];
};
};
testScript = ''
# First start the router and wait for it to reach a state where we are
# certain networkd is up and it is able to send out RAs
router.start()
router.wait_for_unit("systemd-networkd.service")
# After that we can boot the client and wait for the network online target.
# Since we only care about IPv6 that should not involve waiting for legacy
# IP leases.
client.start()
client.wait_for_unit("network-online.target")
# the static address on the router should now be reachable
client.wait_until_succeeds("ping -6 -c 1 FD42::1")
# the global IP of the ISP router should still not be reachable
router.fail("ping -6 -c 1 2001:DB8::")
# Once we have internal connectivity boot up the ISP
isp.start()
# Since for the ISP "being online" should have no real meaning we just
# wait for the target where all the units have been started.
# It probably still takes a few more seconds for all the RA timers to
# fire, etc.
isp.wait_for_unit("multi-user.target")
# wait until the uplink interface has a good status
router.wait_for_unit("network-online.target")
router.wait_until_succeeds("ping -6 -c1 2001:DB8::")
# shortly after that the client should have received its global IPv6
# address and thus be able to ping the ISP
client.wait_until_succeeds("ping -6 -c1 2001:DB8::")
# verify that we got a globally scoped address on eth1 from the
# documentation prefix
ip_output = client.succeed("ip --json -6 address show dev eth1")
import json
ip_json = json.loads(ip_output)[0]
assert any(
addr["local"].upper().startswith("2001:DB8:")
for addr in ip_json["addr_info"]
if addr["scope"] == "global"
)
'';
})
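
As a quick sanity check of the pool sizing discussed in the dhcpd6 comments above, a standalone Python sketch (not part of the test) confirms the delegation and IA_NA range sizes using only the stdlib ipaddress module:

import ipaddress

# /48 delegations available in the configured 2001:DB8:F000::/36 pool:
# 2 ** (48 - 36) == 4096, i.e. roughly 4k prefixes.
pool = ipaddress.IPv6Network("2001:DB8:F000::/36")
print(sum(1 for _ in pool.subnets(new_prefix=48)))  # 4096

# The IA_NA range 2001:DB8:0:0:FFFF::/112 used for the /128 leases
# holds 65536 single addresses.
print(ipaddress.IPv6Network("2001:DB8:0:0:FFFF::/112").num_addresses)  # 65536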

View file

@ -1,27 +1,80 @@
import ../make-test-python.nix {
name = "prosody";
let
cert = pkgs: pkgs.runCommandNoCC "selfSignedCerts" { buildInputs = [ pkgs.openssl ]; } ''
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -nodes -subj '/CN=example.com/CN=uploads.example.com/CN=conference.example.com'
mkdir -p $out
cp key.pem cert.pem $out
'';
createUsers = pkgs: pkgs.writeScriptBin "create-prosody-users" ''
#!${pkgs.bash}/bin/bash
set -e
# Creates and sets passwords for the two XMPP test users.
#
# Doing that in a bash script instead of in the test script
# allows us to easily provision the users when running this
# test interactively.
prosodyctl register cthon98 example.com nothunter2
prosodyctl register azurediamond example.com hunter2
'';
delUsers = pkgs: pkgs.writeScriptBin "delete-prosody-users" ''
#!${pkgs.bash}/bin/bash
set -e
# Deletes the test users.
#
# Doing that in a bash script instead of in the test script
# allows us to easily clean up the users when running this
# test interactively.
prosodyctl deluser cthon98@example.com
prosodyctl deluser azurediamond@example.com
'';
in import ../make-test-python.nix {
name = "prosody";
nodes = {
client = { nodes, pkgs, ... }: {
client = { nodes, pkgs, config, ... }: {
security.pki.certificateFiles = [ "${cert pkgs}/cert.pem" ];
console.keyMap = "fr-bepo";
networking.extraHosts = ''
${nodes.server.config.networking.primaryIPAddress} example.com
${nodes.server.config.networking.primaryIPAddress} conference.example.com
${nodes.server.config.networking.primaryIPAddress} uploads.example.com
'';
environment.systemPackages = [
(pkgs.callPackage ./xmpp-sendmessage.nix { connectTo = nodes.server.config.networking.primaryIPAddress; })
];
};
server = { config, pkgs, ... }: {
security.pki.certificateFiles = [ "${cert pkgs}/cert.pem" ];
console.keyMap = "fr-bepo";
networking.extraHosts = ''
${config.networking.primaryIPAddress} example.com
${config.networking.primaryIPAddress} conference.example.com
${config.networking.primaryIPAddress} uploads.example.com
'';
networking.firewall.enable = false;
environment.systemPackages = [
(createUsers pkgs)
(delUsers pkgs)
];
services.prosody = {
enable = true;
# TODO: use a self-signed certificate
c2sRequireEncryption = false;
extraConfig = ''
storage = "sql"
'';
virtualHosts.test = {
ssl.cert = "${cert pkgs}/cert.pem";
ssl.key = "${cert pkgs}/key.pem";
virtualHosts.example = {
domain = "example.com";
enabled = true;
ssl.cert = "${cert pkgs}/cert.pem";
ssl.key = "${cert pkgs}/key.pem";
};
muc = [
{
domain = "conference.example.com";
}
];
uploadHttp = {
domain = "uploads.example.com";
};
};
};
@ -31,16 +84,8 @@ import ../make-test-python.nix {
server.wait_for_unit("prosody.service")
server.succeed('prosodyctl status | grep "Prosody is running"')
# set password to 'nothunter2' (it's asked twice)
server.succeed("yes nothunter2 | prosodyctl adduser cthon98@example.com")
# set password to 'y'
server.succeed("yes | prosodyctl adduser azurediamond@example.com")
# correct password to "hunter2"
server.succeed("yes hunter2 | prosodyctl passwd azurediamond@example.com")
client.succeed("send-message")
server.succeed("prosodyctl deluser cthon98@example.com")
server.succeed("prosodyctl deluser azurediamond@example.com")
server.succeed("create-prosody-users")
client.succeed('send-message 2>&1 | grep "XMPP SCRIPT TEST SUCCESS"')
server.succeed("delete-prosody-users")
'';
}

View file

@ -1,46 +1,61 @@
{ writeScriptBin, python3, connectTo ? "localhost" }:
writeScriptBin "send-message" ''
#!${(python3.withPackages (ps: [ ps.sleekxmpp ])).interpreter}
# Based on the sleekxmpp send_client example, look there for more details:
# https://github.com/fritzy/SleekXMPP/blob/develop/examples/send_client.py
import sleekxmpp
{ writeScriptBin, writeText, python3, connectTo ? "localhost" }:
let
dummyFile = writeText "dummy-file" ''
Dear dog,
class SendMsgBot(sleekxmpp.ClientXMPP):
"""
A basic SleekXMPP bot that will log in, send a message,
and then log out.
"""
def __init__(self, jid, password, recipient, message):
sleekxmpp.ClientXMPP.__init__(self, jid, password)
Please find this *really* important attachment.
self.recipient = recipient
self.msg = message
Yours truly,
John
'';
in writeScriptBin "send-message" ''
#!${(python3.withPackages (ps: [ ps.slixmpp ])).interpreter}
import logging
import sys
from types import MethodType
self.add_event_handler("session_start", self.start, threaded=True)
def start(self, event):
self.send_presence()
self.get_roster()
self.send_message(mto=self.recipient,
mbody=self.msg,
mtype='chat')
self.disconnect(wait=True)
from slixmpp import ClientXMPP
from slixmpp.exceptions import IqError, IqTimeout
if __name__ == '__main__':
xmpp = SendMsgBot("cthon98@example.com", "nothunter2", "azurediamond@example.com", "hey, if you type in your pw, it will show as stars")
xmpp.register_plugin('xep_0030') # Service Discovery
xmpp.register_plugin('xep_0199') # XMPP Ping
class CthonTest(ClientXMPP):
# TODO: verify certificate
# If you want to verify the SSL certificates offered by a server:
# xmpp.ca_certs = "path/to/ca/cert"
def __init__(self, jid, password):
ClientXMPP.__init__(self, jid, password)
self.add_event_handler("session_start", self.session_start)
if xmpp.connect(('${connectTo}', 5222)):
xmpp.process(block=True)
else:
print("Unable to connect.")
sys.exit(1)
async def session_start(self, event):
log = logging.getLogger(__name__)
self.send_presence()
self.get_roster()
# Sending a test message
self.send_message(mto="azurediamond@example.com", mbody="Hello, this is dog.", mtype="chat")
log.info('Message sent')
# Test http upload (XEP_0363)
def timeout_callback(arg):
log.error("ERROR: Cannot upload file. XEP_0363 seems broken")
sys.exit(1)
url = await self['xep_0363'].upload_file("${dummyFile}",timeout=10, timeout_callback=timeout_callback)
log.info('Upload success!')
# Test MUC
self.plugin['xep_0045'].join_muc('testMucRoom', 'cthon98', wait=True)
log.info('MUC join success!')
log.info('XMPP SCRIPT TEST SUCCESS')
self.disconnect(wait=True)
if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG,
format='%(levelname)-8s %(message)s')
ct = CthonTest('cthon98@example.com', 'nothunter2')
ct.register_plugin('xep_0071')
ct.register_plugin('xep_0128')
# HTTP Upload
ct.register_plugin('xep_0363')
# MUC
ct.register_plugin('xep_0045')
ct.connect(("server", 5222))
ct.process(forever=False)
''
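
Because the removed sleekxmpp lines and the added slixmpp lines are interleaved in the hunk above, the new script is easier to read as a standalone sketch. The upload path and the inlined timeout callback are simplifications (the real script uploads the Nix-provided dummy file and logs before exiting), so take this as an approximation of the added code rather than a verbatim copy:

import logging
import sys

from slixmpp import ClientXMPP

class CthonTest(ClientXMPP):
    def __init__(self, jid, password):
        ClientXMPP.__init__(self, jid, password)
        self.add_event_handler("session_start", self.session_start)

    async def session_start(self, event):
        log = logging.getLogger(__name__)
        self.send_presence()
        self.get_roster()
        # 1:1 test message
        self.send_message(mto="azurediamond@example.com", mbody="Hello, this is dog.", mtype="chat")
        # HTTP upload (XEP-0363); abort if the upload component is broken
        await self["xep_0363"].upload_file("/tmp/dummy-file", timeout=10,
                                           timeout_callback=lambda arg: sys.exit(1))
        # Multi-user chat (XEP-0045)
        self.plugin["xep_0045"].join_muc("testMucRoom", "cthon98", wait=True)
        log.info("XMPP SCRIPT TEST SUCCESS")
        self.disconnect(wait=True)

if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG, format="%(levelname)-8s %(message)s")
    ct = CthonTest("cthon98@example.com", "nothunter2")
    for plugin in ("xep_0071", "xep_0128", "xep_0363", "xep_0045"):
        ct.register_plugin(plugin)
    ct.connect(("server", 5222))
    ct.process(forever=False)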

View file

@ -91,7 +91,7 @@ let
'';
meta = with stdenv.lib; {
homepage = "http://www.clementine-player.org";
homepage = "https://www.clementine-player.org";
description = "A multiplatform music player";
license = licenses.gpl3Plus;
platforms = platforms.linux;
@ -130,7 +130,7 @@ let
'';
enableParallelBuilding = true;
meta = with stdenv.lib; {
homepage = "http://www.clementine-player.org";
homepage = "https://www.clementine-player.org";
description = "Spotify integration for Clementine";
# The blob itself is Apache-licensed, although libspotify is unfree.
license = licenses.asl20;

View file

@ -7,13 +7,13 @@
stdenv.mkDerivation rec {
pname = "ft2-clone";
version = "1.15";
version = "1.23";
src = fetchFromGitHub {
owner = "8bitbubsy";
repo = "ft2-clone";
rev = "v${version}";
sha256 = "19xgdaij71gpvq216zjlp60zmfdl2a8kf8sc3bpk8a4d4xh4n151";
sha256 = "03prdifc2nz7smmzdy19flp33m927vb7j5bhdc46gak753pikw7d";
};
nativeBuildInputs = [ cmake ];

View file

@ -99,6 +99,11 @@ in stdenv.mkDerivation rec {
)
'';
# Meson is no longer able to pick up Boost automatically.
# https://github.com/NixOS/nixpkgs/issues/86131
BOOST_INCLUDEDIR = "${stdenv.lib.getDev boost}/include";
BOOST_LIBRARYDIR = "${stdenv.lib.getLib boost}/lib";
meta = with stdenv.lib; {
description = "Limiter, compressor, reverberation, equalizer and auto volume effects for Pulseaudio applications";
homepage = "https://github.com/wwmm/pulseeffects";

View file

@ -4,11 +4,11 @@
stdenv.mkDerivation rec {
pname = "puredata";
version = "0.49-0";
version = "0.50-2";
src = fetchurl {
url = "http://msp.ucsd.edu/Software/pd-${version}.src.tar.gz";
sha256 = "18rzqbpgnnvyslap7k0ly87aw1bbxkb0rk5agpr423ibs9slxq6j";
sha256 = "0dz6r6jy0zfs1xy1xspnrxxks8kddi9c7pxz4vpg2ygwv83ghpg5";
};
nativeBuildInputs = [ autoreconfHook gettext makeWrapper ];

View file

@ -1,12 +1,12 @@
{ mkDerivation, lib, fetchurl, pkgconfig, qtbase, qttools, alsaLib, libjack2 }:
mkDerivation rec {
version = "0.6.1";
version = "0.6.2";
pname = "qmidinet";
src = fetchurl {
url = "mirror://sourceforge/qmidinet/${pname}-${version}.tar.gz";
sha256 = "1nvbvx3wg2s6s7r4x6m2pm9nx7pdz00ghw9h10wfqi2s474mwip0";
sha256 = "0siqzyhwg3l9av7jbca3bqdww7xspjlpi9ya4mkj211xc3a3a1d6";
};
hardeningDisable = [ "format" ];

View file

@ -1,30 +1,29 @@
{ stdenv, fetchurl, alsaLib, python, SDL }:
{ stdenv, fetchFromGitHub
, autoreconfHook
, alsaLib, python, SDL }:
stdenv.mkDerivation rec {
version = "20120105";
pname = "schismtracker";
version = "20190805";
src = fetchurl {
url = "http://schismtracker.org/dl/${pname}-${version}.tar.bz2";
sha256 = "1ny7wv2wxm1av299wvpskall6438wjjpadphmqc7c0h6d0zg5kii";
src = fetchFromGitHub {
owner = pname;
repo = pname;
rev = version;
sha256 = "0qqps20vvn3rgpg8174bjrrm38gqcci2z5z4c1r1vhbccclahgsd";
};
preConfigure = ''
# Build fails on Linux with windres.
export ac_cv_prog_ac_ct_WINDRES=
'';
configureFlags = [ "--enable-dependency-tracking" ];
buildInputs = [ alsaLib python SDL ];
nativeBuildInputs = [ autoreconfHook python ];
enableParallelBuilding = true;
buildInputs = [ alsaLib SDL ];
meta = {
meta = with stdenv.lib; {
description = "Music tracker application, free reimplementation of Impulse Tracker";
homepage = "http://schismtracker.org/";
license = stdenv.lib.licenses.gpl2;
license = licenses.gpl2;
platforms = [ "x86_64-linux" "i686-linux" ];
maintainers = [ stdenv.lib.maintainers.ftrvxmtrx ];
maintainers = with maintainers; [ ftrvxmtrx ];
};
}

View file

@ -7,13 +7,13 @@ with stdenv.lib;
mkDerivation rec {
name = "bitcoin" + (toString (optional (!withGui) "d")) + "-abc-" + version;
version = "0.21.3";
version = "0.21.5";
src = fetchFromGitHub {
owner = "bitcoin-ABC";
repo = "bitcoin-abc";
rev = "v${version}";
sha256 = "1pzdgghbsss2qjfgl42lvkbs5yc5q6jnzqnp24lljmrh341g2zn4";
sha256 = "1jx33n8dhn16iaxvmc56cxw0i5qk0ga5nf7qf9frwwq6zkglknga";
};
patches = [ ./fix-bitcoin-qt-build.patch ];

View file

@ -4,11 +4,11 @@
with stdenv.lib;
stdenv.mkDerivation rec {
pname = "clightning";
version = "0.8.1";
version = "0.8.2";
src = fetchurl {
url = "https://github.com/ElementsProject/lightning/releases/download/v${version}/clightning-v${version}.zip";
sha256 = "079d3yx7yr7qrilqgaayvn18lxl8h6a1gwwbsgm5xsyxj4vdlz7r";
sha256 = "1w5l3r3pnhnwz3x7mjgd69cw9a18fpyjwj7kmfka7cf9hdgcwp9x";
};
enableParallelBuilding = true;

View file

@ -2,16 +2,18 @@
buildGoModule rec {
pname = "lnd";
version = "0.9.0-beta";
version = "0.9.2-beta";
src = fetchFromGitHub {
owner = "lightningnetwork";
repo = "lnd";
rev = "v${version}";
sha256 = "1hq105s9ykp6nsn4iicjnl3mwspqkbfsswkx7sgzv3jggg08fkq9";
sha256 = "0gm33z89fiqv231ks2mkpsblskcsijipq8fcmip6m6jy8g06b1gb";
};
modSha256 = "1pvcvpiz6ck8xkgpypchrq9kgkik0jxd7f3jhihbgldsh4zaqiaq";
modSha256 = "1khxplvyaqgaddrx1nna1fw0nb1xz9bmqpxpfifif4f5nmx90gbr";
subPackages = ["cmd/lncli" "cmd/lnd"];
meta = with lib; {
description = "Lightning Network Daemon";

View file

@ -5,7 +5,7 @@
, qtquickcontrols, qtquickcontrols2
, monero, unbound, readline, boost, libunwind
, libsodium, pcsclite, zeromq, cppzmq
, hidapi, libusb, protobuf, randomx
, hidapi, libusb-compat-0_1, protobuf, randomx
}:
with stdenv.lib;
@ -29,7 +29,7 @@ stdenv.mkDerivation rec {
qtxmlpatterns
monero unbound readline
boost libunwind libsodium pcsclite zeromq
cppzmq hidapi libusb protobuf randomx
cppzmq hidapi libusb-compat-0_1 protobuf randomx
];
NIX_CFLAGS_COMPILE = [ "-Wno-error=format-security" ];

View file

@ -2,7 +2,7 @@
, cmake, pkgconfig
, boost, miniupnpc, openssl, unbound, cppzmq
, zeromq, pcsclite, readline, libsodium, hidapi
, pythonProtobuf, randomx, rapidjson, libusb
, pythonProtobuf, randomx, rapidjson, libusb-compat-0_1
, CoreData, IOKit, PCSC
}:
@ -26,7 +26,7 @@ stdenv.mkDerivation rec {
boost miniupnpc openssl unbound
cppzmq zeromq pcsclite readline
libsodium hidapi randomx rapidjson
pythonProtobuf libusb
pythonProtobuf libusb-compat-0_1
] ++ stdenv.lib.optionals stdenv.isDarwin [ IOKit CoreData PCSC ];
cmakeFlags = [

View file

@ -19,7 +19,7 @@ with stdenv.lib;
stdenv.mkDerivation rec {
pname = "vertcoin";
version = "0.14.0";
version = "0.15.0.1";
name = pname + toString (optional (!withGui) "d") + "-" + version;
@ -27,7 +27,7 @@ stdenv.mkDerivation rec {
owner = pname + "-project";
repo = pname + "-core";
rev = version;
sha256 = "00vnmrhn5mad58dyiz8rxgsrn0663ii6fdbcqm20mv1l313k4882";
sha256 = "09q7qicw52gv225hq6wlpsf4zr4hjc8miyim5cygi5nxxrlw7kd3";
};
nativeBuildInputs = [

View file

@ -15,7 +15,7 @@ rustPlatform.buildRustPackage rec {
installPhase = ''
mkdir -p $out/lib
cp target/release/librustzcash.a $out/lib/
cp $releaseDir/librustzcash.a $out/lib/
mkdir -p $out/include
cp librustzcash/include/librustzcash.h $out/include/
'';

View file

@ -13,14 +13,14 @@ let
sha256Hash = "0apxmp341m7mbpm2df3qvsbaifwy6yqq746kbhbwlw8bn9hrzv1k";
};
betaVersion = {
version = "4.0.0.13"; # "Android Studio 4.0 Beta 4"
build = "193.6348893";
sha256Hash = "0lchi3l50826n1af1z24yclpf27v2q5p1zjbvcmn37wz46d4s4g2";
version = "4.0.0.14"; # "Android Studio 4.0 Beta 5"
build = "193.6401094";
sha256Hash = "11fmpf58z44i78ldkapzivz6md65744vqczzbwv8mkjkv9nz95rs";
};
latestVersion = { # canary & dev
version = "4.1.0.6"; # "Android Studio 4.1 Canary 6"
build = "193.6381907";
sha256Hash = "0sa5plr96m90wv5hi9bqwa11j6k8k9wa0ji8qmlimdhnpyzhsdrx";
version = "4.1.0.7"; # "Android Studio 4.1 Canary 7"
build = "193.6401718";
sha256Hash = "1xa61rhi7dgxm0y6yl5dxd09x530mzyxvx9bp1jprzfwvc7s0byh";
};
in {
# Attributes are named by their corresponding release channels

View file

@ -925,10 +925,10 @@
elpaBuild {
pname = "ebdb";
ename = "ebdb";
version = "0.6.13";
version = "0.6.16";
src = fetchurl {
url = "https://elpa.gnu.org/packages/ebdb-0.6.13.tar";
sha256 = "1nxbp7w4xxij07q8manc15b896sl10yh2h1cg88prdqbw1wk62qr";
url = "https://elpa.gnu.org/packages/ebdb-0.6.16.tar";
sha256 = "0yn0nqjp68kwlrd4qs9fg3xizm9jnddkkyw25l0llq04b53zgjdl";
};
packageRequires = [ cl-lib emacs seq ];
meta = {
@ -1005,10 +1005,10 @@
elpaBuild {
pname = "eglot";
ename = "eglot";
version = "1.5";
version = "1.6";
src = fetchurl {
url = "https://elpa.gnu.org/packages/eglot-1.5.tar";
sha256 = "00ifgz9r9xvy19zsz1yfls6n1acvms14p86nbw0x6ldjgvpf279i";
url = "https://elpa.gnu.org/packages/eglot-1.6.tar";
sha256 = "15hd6sx7qrpvlvhwwkcgdiki8pswwf4mm7hkm0xvznskfcp44spx";
};
packageRequires = [ emacs flymake jsonrpc ];
meta = {
@ -1367,10 +1367,10 @@
elpaBuild {
pname = "gnorb";
ename = "gnorb";
version = "1.6.5";
version = "1.6.6";
src = fetchurl {
url = "https://elpa.gnu.org/packages/gnorb-1.6.5.tar";
sha256 = "1har3j8gb65mawrwn93939jg157wbap138qa1z1myznrrish6vzc";
url = "https://elpa.gnu.org/packages/gnorb-1.6.6.tar";
sha256 = "1vlb9q7a622qylrgip5ld2yrzp4l58gl543i2jdxr7jxvamy22bp";
};
packageRequires = [ cl-lib ];
meta = {
@ -2011,10 +2011,10 @@
elpaBuild {
pname = "modus-operandi-theme";
ename = "modus-operandi-theme";
version = "0.6.0";
version = "0.7.0";
src = fetchurl {
url = "https://elpa.gnu.org/packages/modus-operandi-theme-0.6.0.el";
sha256 = "10smvzaxp90lsg0g61s2nzmfxwnlrxq9dv4rn771vlhra249y08v";
url = "https://elpa.gnu.org/packages/modus-operandi-theme-0.7.0.el";
sha256 = "17zvcqplbl3rk39k61v43ganzv06j49rlyickanwll5m1a3iibw2";
};
packageRequires = [ emacs ];
meta = {
@ -2026,10 +2026,10 @@
elpaBuild {
pname = "modus-vivendi-theme";
ename = "modus-vivendi-theme";
version = "0.6.0";
version = "0.7.0";
src = fetchurl {
url = "https://elpa.gnu.org/packages/modus-vivendi-theme-0.6.0.el";
sha256 = "1b7wkz779f020gpil4spbdzmg2fx6l48wk1138564cv9kx3nkkz2";
url = "https://elpa.gnu.org/packages/modus-vivendi-theme-0.7.0.el";
sha256 = "1w4vrg39dghghkvll3h4kmzykc3zpp6pbychb39gcc13z2b06v8g";
};
packageRequires = [ emacs ];
meta = {
@ -2215,10 +2215,10 @@
elpaBuild {
pname = "oauth2";
ename = "oauth2";
version = "0.12";
version = "0.13";
src = fetchurl {
url = "https://elpa.gnu.org/packages/oauth2-0.12.el";
sha256 = "1rfyfy0h7shr3fmd8lh6s2i3ahfh28wb5fqiqlsjwspn5h77ll29";
url = "https://elpa.gnu.org/packages/oauth2-0.13.el";
sha256 = "0y5nbdwxz2hfr09xgsqgyv60vgx0rsaisibcpkz00klvgg26w33r";
};
packageRequires = [];
meta = {
@ -2320,10 +2320,10 @@
elpaBuild {
pname = "orgalist";
ename = "orgalist";
version = "1.11";
version = "1.12";
src = fetchurl {
url = "https://elpa.gnu.org/packages/orgalist-1.11.el";
sha256 = "0zbqkk540rax32s8szp5zgz3a02zw88fc1dmjmyw6h3ls04m91kl";
url = "https://elpa.gnu.org/packages/orgalist-1.12.el";
sha256 = "1hwm7j0hbv2pg9w885ky1c9qga3grcfq8v216jv2ivkw8xzavysd";
};
packageRequires = [ emacs ];
meta = {
@ -2455,10 +2455,10 @@
elpaBuild {
pname = "phps-mode";
ename = "phps-mode";
version = "0.3.38";
version = "0.3.43";
src = fetchurl {
url = "https://elpa.gnu.org/packages/phps-mode-0.3.38.tar";
sha256 = "1m8f1z259c66k0hf0cfjqidfd0cra2c2mb7k5lj71v1kfckwj6bh";
url = "https://elpa.gnu.org/packages/phps-mode-0.3.43.tar";
sha256 = "099s7c0ll8bbfgynijjaciv2qnyg4r2akajkhlmchh7y10kp5ii4";
};
packageRequires = [ emacs ];
meta = {
@ -2500,10 +2500,10 @@
elpaBuild {
pname = "posframe";
ename = "posframe";
version = "0.6.0";
version = "0.7.0";
src = fetchurl {
url = "https://elpa.gnu.org/packages/posframe-0.6.0.el";
sha256 = "14x2jgjn8di03rrad4x4mn8fhcqibk1j5c0ya0vmv8648fki6i9d";
url = "https://elpa.gnu.org/packages/posframe-0.7.0.el";
sha256 = "1kwl83jb5k1hnx0s2qw972v0gjqbbvk4sdcdb1qbdxsyw36sylc9";
};
packageRequires = [ emacs ];
meta = {
@ -2575,10 +2575,10 @@
elpaBuild {
pname = "rainbow-mode";
ename = "rainbow-mode";
version = "1.0.3";
version = "1.0.4";
src = fetchurl {
url = "https://elpa.gnu.org/packages/rainbow-mode-1.0.3.el";
sha256 = "0cpwqllhv3cb0gii22cj9i731rk3sbf2drm5m52w5yclm8sfr339";
url = "https://elpa.gnu.org/packages/rainbow-mode-1.0.4.el";
sha256 = "0rp76gix1ph1wrmdax6y2m3i9y1dmgv7ikjz8xsl5lizkygsy9cg";
};
packageRequires = [];
meta = {
@ -2857,6 +2857,21 @@
license = lib.licenses.free;
};
}) {};
scanner = callPackage ({ dash, elpaBuild, emacs, fetchurl, lib }:
elpaBuild {
pname = "scanner";
ename = "scanner";
version = "0.1";
src = fetchurl {
url = "https://elpa.gnu.org/packages/scanner-0.1.tar";
sha256 = "0hv4w7yzfdnz8vrfhw6i6agj9hs09vzsqr63nrp6dd93q0gk71mw";
};
packageRequires = [ dash emacs ];
meta = {
homepage = "https://elpa.gnu.org/packages/scanner.html";
license = lib.licenses.free;
};
}) {};
scroll-restore = callPackage ({ elpaBuild, fetchurl, lib }:
elpaBuild {
pname = "scroll-restore";
@ -2947,6 +2962,21 @@
license = lib.licenses.free;
};
}) {};
sm-c-mode = callPackage ({ elpaBuild, fetchurl, lib }:
elpaBuild {
pname = "sm-c-mode";
ename = "sm-c-mode";
version = "1.0";
src = fetchurl {
url = "https://elpa.gnu.org/packages/sm-c-mode-1.0.el";
sha256 = "1lq65dhcvrh6ybla37lvni7wmbjb5nhm75ja9cl79148da1zrg91";
};
packageRequires = [];
meta = {
homepage = "https://elpa.gnu.org/packages/sm-c-mode.html";
license = lib.licenses.free;
};
}) {};
smalltalk-mode = callPackage ({ elpaBuild, fetchurl, lib }:
elpaBuild {
pname = "smalltalk-mode";
@ -3539,10 +3569,10 @@
elpaBuild {
pname = "web-server";
ename = "web-server";
version = "0.1.1";
version = "0.1.2";
src = fetchurl {
url = "https://elpa.gnu.org/packages/web-server-0.1.1.tar";
sha256 = "1q51fhqw5al4iycdlighwv7jqgdpjb1a66glwd5jnc9b651yk42n";
url = "https://elpa.gnu.org/packages/web-server-0.1.2.tar";
sha256 = "10lcsl4dg2yr9zjd99gq9jz150wvvh6r5y9pd88l8y9vz16f2lim";
};
packageRequires = [ emacs ];
meta = {

View file

@ -0,0 +1,43 @@
{ stdenv, fetchurl, makeWrapper, emacs, tcl, tclx, espeak-ng }:
stdenv.mkDerivation rec {
pname = "emacspeak";
version = "51.0";
src = fetchurl {
url = "https://github.com/tvraman/emacspeak/releases/download/${version}/${pname}-${version}.tar.bz2";
sha256 = "09a0ywxlqa8jmc0wmvhaf7bdydnkyhy9nqfsdqcpbsgdzj6qpg90";
};
nativeBuildInputs = [ makeWrapper emacs ];
buildInputs = [ tcl tclx espeak-ng ];
preConfigure = ''
make config
'';
postBuild = ''
make -C servers/native-espeak PREFIX=$out "TCL_INCLUDE=${tcl}/include"
'';
postInstall = ''
make -C servers/native-espeak PREFIX=$out install
local d=$out/share/emacs/site-lisp/emacspeak/
install -d -- "$d"
cp -a . "$d"
find "$d" \( -type d -or \( -type f -executable \) \) -execdir chmod 755 {} +
find "$d" -type f -not -executable -execdir chmod 644 {} +
makeWrapper ${emacs}/bin/emacs $out/bin/emacspeak \
--set DTK_PROGRAM "${espeak-ng}/bin/espeak" \
--add-flags '-l "${placeholder "out"}/share/emacs/site-lisp/emacspeak/lisp/emacspeak-setup.elc"'
'';
meta = with stdenv.lib; {
homepage = "https://github.com/tvraman/emacspeak/";
description = "Emacs extension that provides spoken output";
license = licenses.gpl2;
maintainers = with maintainers; [ dema ];
platforms = platforms.linux;
};
}

View file

@ -1,22 +0,0 @@
{ fetchurl, melpaBuild }:
melpaBuild {
pname = "filesets-plus";
version = "20170222.55";
src = fetchurl {
url = "https://www.emacswiki.org/emacs/download/filesets%2b.el";
sha256 = "0iajkgh0n3pbrwwxx9rmrrwz8dw2m7jsp4mggnhq7zsb20ighs00";
name = "filesets+.el";
};
recipe = fetchurl {
url = "https://raw.githubusercontent.com/milkypostman/melpa/a5d15f875b0080b12ce45cf696c581f6bbf061ba/recipes/filesets-plus+";
sha256 = "1wn99cb53ykds87lg9mrlfpalrmjj177nwskrnp9wglyqs65lk4g";
name = "filesets-plus";
};
meta = {
homepage = "https://melpa.org/#/filesets+";
};
}

Some files were not shown because too many files have changed in this diff.