mirror of
https://forgejo.ellis.link/continuwuation/continuwuity.git
synced 2025-09-11 17:53:01 +02:00
Compare commits: 28 commits (34d0599a13...e76753b113)
Commits:

- e76753b113
- ac3f2e9e9d
- 2cd966db33
- 5484eff931
- cb275b910b
- 13bd3edbca
- 59042ed096
- 5dafe80527
- a61fd287ef
- e147f0f274
- bf3dd254e8
- bb4b625f63
- a62e658e65
- ae8127d44b
- 3b5d5dcefa
- 843e501902
- 0a8c13ffd2
- a89ceb93d8
- 13de0ac822
- 4a5b122d77
- 2655acf269
- 3c320f6d6e
- 946449d3e5
- b17f278803
- 6a4905271e
- cfc64ddb40
- 6aceac3833
- 5bf20db8e7
29 changed files with 750 additions and 291 deletions
@@ -1,25 +1,15 @@
 # Contributing guide

 This page is about contributing to Continuwuity. The
-[development](./development.md) page may be of interest for you as well.
+[development](./development.md) and [code style guide](./development/code_style.md) pages may be of interest for you as well.

 If you would like to work on an [issue][issues] that is not assigned, preferably
 ask in the Matrix room first at [#continuwuity:continuwuity.org][continuwuity-matrix],
 and comment on it.

-### Linting and Formatting
+### Code Style

-It is mandatory all your changes satisfy the lints (clippy, rustc, rustdoc, etc)
-and your code is formatted via the **nightly** rustfmt (`cargo +nightly fmt`). A lot of the
-`rustfmt.toml` features depend on nightly toolchain. It would be ideal if they
-weren't nightly-exclusive features, but they currently still are. CI's rustfmt
-uses nightly.
-
-If you need to allow a lint, please make sure it's either obvious as to why
-(e.g. clippy saying redundant clone but it's actually required) or it has a
-comment saying why. Do not write inefficient code for the sake of satisfying
-lints. If a lint is wrong and provides a more inefficient solution or
-suggestion, allow the lint and mention that in a comment.
+Please review and follow the [code style guide](./development/code_style.md) for formatting, linting, naming conventions, and other code standards.

 ### Pre-commit Checks
@@ -36,6 +26,10 @@ You can run these checks locally by installing [prefligit](https://github.com/j1

 ```bash
+# Requires UV: https://docs.astral.sh/uv/getting-started/installation/
+# Mac/linux: curl -LsSf https://astral.sh/uv/install.sh | sh
+# Windows: powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
+
 # Install prefligit using cargo-binstall
 cargo binstall prefligit
@@ -48,6 +42,8 @@ prefligit --all-files

 Alternatively, you can use [pre-commit](https://pre-commit.com/):
 ```bash
+# Requires python
+
 # Install pre-commit
 pip install pre-commit
@@ -58,7 +54,7 @@ pre-commit install

 pre-commit run --all-files
 ```

-These same checks are run in CI via the prefligit-checks workflow to ensure consistency.
+These same checks are run in CI via the prefligit-checks workflow to ensure consistency. These must pass before the PR is merged.

 ### Running tests locally
@@ -107,37 +103,13 @@ To build the documentation locally:

 The output of the mdbook generation is in `public/`. You can open the HTML files directly in your browser without needing a web server.

-### Inclusivity and Diversity
-
-All **MUST** code and write with inclusivity and diversity in mind. See the
-[following page by Google on writing inclusive code and
-documentation](https://developers.google.com/style/inclusive-documentation).
-
-This **EXPLICITLY** forbids usage of terms like "blacklist"/"whitelist" and
-"master"/"slave", [forbids gender-specific words and
-phrases](https://developers.google.com/style/pronouns#gender-neutral-pronouns),
-forbids ableist language like "sanity-check", "cripple", or "insane", and
-forbids culture-specific language (e.g. US-only holidays or cultures).
-
-No exceptions are allowed. Dependencies that may use these terms are allowed but
-[do not replicate the name in your functions or
-variables](https://developers.google.com/style/inclusive-documentation#write-around).
-
-In addition to language, write and code with the user experience in mind. This
-is software that intends to be used by everyone, so make it easy and comfortable
-for everyone to use. 🏳️‍⚧️
-
-### Variable, comment, function, etc standards
-
-Rust's default style and standards with regards to [function names, variable
-names, comments](https://rust-lang.github.io/api-guidelines/naming.html), etc
-applies here.
-
 ### Commit Messages

 Continuwuity follows the [Conventional Commits](https://www.conventionalcommits.org/) specification for commit messages. This provides a standardized format that makes the commit history more readable and enables automated tools to generate changelogs.

 The basic structure is:

 ```
 <type>[(optional scope)]: <description>
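A subject line of that shape can be checked mechanically. The sketch below is illustrative only: the `check_subject` helper and its list of accepted types are assumptions, not part of Continuwuity's tooling or the Conventional Commits specification's mandatory type set.

```shell
# Hypothetical helper: tests whether a commit subject matches the
# Conventional Commits shape "<type>[(optional scope)]: <description>".
# The accepted types below are common defaults, not an official list.
check_subject() {
  echo "$1" | grep -Eq '^(feat|fix|docs|style|refactor|perf|test|build|ci|chore)(\([a-z0-9_-]+\))?!?: .+'
}

check_subject "fix(federation): retry stalled outbound requests" && echo "conventional"
check_subject "updated some stuff" || echo "not conventional"
```

A hook of this kind could be wired into a `commit-msg` git hook if you want local enforcement before CI sees the commit.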
@@ -178,11 +150,10 @@ of it, especially when the CI completed successfully and everything so it

 Before submitting a pull request, please ensure:
 1. Your code passes all CI checks (formatting, linting, typo detection, etc.)
-2. Your commit messages follow the conventional commits format
-3. Tests are added for new functionality
-4. Documentation is updated if needed
+2. Your code follows the [code style guide](./development/code_style.md)
+3. Your commit messages follow the conventional commits format
+4. Tests are added for new functionality
+5. Documentation is updated if needed

 Direct all PRs/MRs to the `main` branch.
@@ -18,7 +18,7 @@ StandardInput=tty-force
 StandardOutput=tty
 StandardError=journal+console

-Environment="CONTINUWUITY_LOG_TO_JOURNALD=1"
+Environment="CONTINUWUITY_LOG_TO_JOURNALD=true"
 Environment="CONTINUWUITY_JOURNALD_IDENTIFIER=%N"

 TTYReset=yes
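If you run a packaged version of this unit locally, one way to apply the changed journald settings without editing the unit file itself is a systemd drop-in. This is a sketch assuming the unit is named `conduwuit.service`; the drop-in path follows systemd's standard convention:

```ini
# /etc/systemd/system/conduwuit.service.d/override.conf
[Service]
Environment="CONTINUWUITY_LOG_TO_JOURNALD=true"
Environment="CONTINUWUITY_JOURNALD_IDENTIFIER=%N"
```

After saving the drop-in, run `systemctl daemon-reload` so systemd picks it up.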
@@ -1041,7 +1041,7 @@
 # 3 to 5 = Statistics with possible performance impact.
 # 6 = All statistics.
 #
-#rocksdb_stats_level = 1
+#rocksdb_stats_level = 3

 # This is a password that can be configured that will let you login to the
 # server bot account (currently `@conduit`) for emergency troubleshooting
@@ -1655,11 +1655,9 @@
 #stream_amplification = 1024

 # Number of sender task workers; determines sender parallelism. Default is
-# '0' which means the value is determined internally, likely matching the
-# number of tokio worker-threads or number of cores, etc. Override by
-# setting a non-zero value.
+# number of CPU cores. Override by setting a different value.
 #
-#sender_workers = 0
+#sender_workers = 4

 # Enables listener sockets; can be set to false to disable listening. This
 # option is intended for developer/diagnostic purposes only.
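Uncommented in a hand-written config, the two changed defaults above would look like this. This is a sketch assuming the usual `[global]` section of `conduwuit.toml`; adjust the values to your deployment.

```toml
[global]
# Statistics with possible performance impact (the new documented default).
rocksdb_stats_level = 3
# Fixed sender parallelism instead of an internally determined value.
sender_workers = 4
```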
debian/README.md (vendored): 22 changes
@@ -1,29 +1,23 @@
 # Continuwuity for Debian

-Information about downloading and deploying the Debian package. This may also be
-referenced for other `apt`-based distros such as Ubuntu.
+This document provides information about downloading and deploying the Debian package. You can also use this guide for other `apt`-based distributions such as Ubuntu.

 ### Installation

-It is recommended to see the [generic deployment guide](../deploying/generic.md)
-for further information if needed as usage of the Debian package is generally
-related.
+See the [generic deployment guide](../deploying/generic.md) for additional information about using the Debian package.

-No `apt` repository is currently offered yet, it is in the works/development.
+No `apt` repository is currently available. This feature is in development.

 ### Configuration

-When installed, the example config is placed at `/etc/conduwuit/conduwuit.toml`
-as the default config. The config mentions things required to be changed before
-starting.
+After installation, Continuwuity places the example configuration at `/etc/conduwuit/conduwuit.toml` as the default configuration file. The configuration file indicates which settings you must change before starting the service.

-You can tweak more detailed settings by uncommenting and setting the config
-options in `/etc/conduwuit/conduwuit.toml`.
+You can customize additional settings by uncommenting and modifying the configuration options in `/etc/conduwuit/conduwuit.toml`.

 ### Running

-The package uses the [`conduwuit.service`](../configuration/examples.md#example-systemd-unit-file) systemd unit file to start and stop Continuwuity. The binary is installed at `/usr/sbin/conduwuit`.
+The package uses the [`conduwuit.service`](../configuration/examples.md#example-systemd-unit-file) systemd unit file to start and stop Continuwuity. The binary installs at `/usr/sbin/conduwuit`.

-This package assumes by default that conduwuit will be placed behind a reverse proxy. The default config options apply (listening on `localhost` and TCP port `6167`). Matrix federation requires a valid domain name and TLS, so you will need to set up TLS certificates and renewal for it to work properly if you intend to federate.
+By default, this package assumes that Continuwuity runs behind a reverse proxy. The default configuration options apply (listening on `localhost` and TCP port `6167`). Matrix federation requires a valid domain name and TLS. To federate properly, you must set up TLS certificates and certificate renewal.

-Consult various online documentation and guides on setting up a reverse proxy and TLS. Caddy is documented at the [generic deployment guide](../deploying/generic.md#setting-up-the-reverse-proxy) as it's the easiest and most user friendly.
+For information about setting up a reverse proxy and TLS, consult online documentation and guides. The [generic deployment guide](../deploying/generic.md#setting-up-the-reverse-proxy) documents Caddy, which is the most user-friendly option for reverse proxy configuration.
debian/conduwuit.service (vendored): 2 changes
@@ -14,7 +14,7 @@ Type=notify

 Environment="CONTINUWUITY_CONFIG=/etc/conduwuit/conduwuit.toml"

-Environment="CONTINUWUITY_LOG_TO_JOURNALD=1"
+Environment="CONTINUWUITY_LOG_TO_JOURNALD=true"
 Environment="CONTINUWUITY_JOURNALD_IDENTIFIER=%N"

 ExecStart=/usr/sbin/conduwuit
@@ -18,6 +18,7 @@
 - [Admin Command Reference](admin_reference.md)
 - [Development](development.md)
 - [Contributing](contributing.md)
+- [Code Style Guide](development/code_style.md)
 - [Testing](development/testing.md)
 - [Hot Reloading ("Live" Development)](development/hot_reload.md)
 - [Community (and Guidelines)](community.md)
@@ -1,5 +1,5 @@
 # Continuwuity for Arch Linux

-Continuwuity is available on the `archlinuxcn` repository and AUR, with the same package name `continuwuity`, which includes latest taggged version. The development version is available on AUR as `continuwuity-git`
+Continuwuity is available in the `archlinuxcn` repository and AUR with the same package name `continuwuity`, which includes the latest tagged version. The development version is available on AUR as `continuwuity-git`.

-Simply install the `continuwuity` package. Configure the service in `/etc/conduwuit/conduwuit.toml`, then enable/start the continuwuity.service.
+Simply install the `continuwuity` package. Configure the service in `/etc/conduwuit/conduwuit.toml`, then enable and start the continuwuity.service.
@@ -2,7 +2,7 @@

 ## Docker

-To run Continuwuity with Docker you can either build the image yourself or pull it
+To run Continuwuity with Docker, you can either build the image yourself or pull it
 from a registry.

 ### Use a registry

@@ -26,7 +26,7 @@ to pull it to your machine.

 ### Run

-When you have the image you can simply run it with
+When you have the image, you can simply run it with

 ```bash
 docker run -d -p 8448:6167 \

@@ -36,7 +36,7 @@ docker run -d -p 8448:6167 \
 --name continuwuity $LINK
 ```

-or you can use [docker compose](#docker-compose).
+or you can use [Docker Compose](#docker-compose).

 The `-d` flag lets the container run in detached mode. You may supply an
 optional `continuwuity.toml` config file, the example config can be found

@@ -46,15 +46,15 @@ using env vars. For an overview of possible values, please take a look at the
 [`docker-compose.yml`](docker-compose.yml) file.

 If you just want to test Continuwuity for a short time, you can use the `--rm`
-flag, which will clean up everything related to your container after you stop
+flag, which cleans up everything related to your container after you stop
 it.

 ### Docker-compose

-If the `docker run` command is not for you or your setup, you can also use one
+If the `docker run` command is not suitable for you or your setup, you can also use one
 of the provided `docker-compose` files.

-Depending on your proxy setup, you can use one of the following files;
+Depending on your proxy setup, you can use one of the following files:

 - If you already have a `traefik` instance set up, use
 [`docker-compose.for-traefik.yml`](docker-compose.for-traefik.yml)

@@ -65,7 +65,7 @@ Depending on your proxy setup, you can use one of the following files;
 `example.com` placeholders with your own domain
 - For any other reverse proxy, use [`docker-compose.yml`](docker-compose.yml)

-When picking the traefik-related compose file, rename it so it matches
+When picking the Traefik-related compose file, rename it to
 `docker-compose.yml`, and rename the override file to
 `docker-compose.override.yml`. Edit the latter with the values you want for your
 server.

@@ -77,18 +77,18 @@ create the `caddy` network before spinning up the containers:
 docker network create caddy
 ```

-After that, you can rename it so it matches `docker-compose.yml` and spin up the
+After that, you can rename it to `docker-compose.yml` and spin up the
 containers!

 Additional info about deploying Continuwuity can be found [here](generic.md).

 ### Build

-Official Continuwuity images are built using **Docker Buildx** and the Dockerfile found at [`docker/Dockerfile`][dockerfile-path]. This approach uses common Docker tooling and enables multi-platform builds efficiently.
+Official Continuwuity images are built using **Docker Buildx** and the Dockerfile found at [`docker/Dockerfile`][dockerfile-path]. This approach uses common Docker tooling and enables efficient multi-platform builds.

-The resulting images are broadly compatible with Docker and other container runtimes like Podman or containerd.
+The resulting images are widely compatible with Docker and other container runtimes like Podman or containerd.

-The images *do not contain a shell*. They contain only the Continuwuity binary, required libraries, TLS certificates and metadata. Please refer to the [`docker/Dockerfile`][dockerfile-path] for the specific details of the image composition.
+The images *do not contain a shell*. They contain only the Continuwuity binary, required libraries, TLS certificates, and metadata. Please refer to the [`docker/Dockerfile`][dockerfile-path] for the specific details of the image composition.

 To build an image locally using Docker Buildx, you can typically run a command like:

@@ -109,8 +109,8 @@ Refer to the Docker Buildx documentation for more advanced build options.

 ### Run

-If you already have built the image or want to use one from the registries, you
-can just start the container and everything else in the compose file in detached
+If you have already built the image or want to use one from the registries, you
+can start the container and everything else in the compose file in detached
 mode with:

 ```bash

@@ -121,22 +121,24 @@ docker compose up -d

 ### Use Traefik as Proxy

-As a container user, you probably know about Traefik. It is a easy to use
-reverse proxy for making containerized app and services available through the
+As a container user, you probably know about Traefik. It is an easy-to-use
+reverse proxy for making containerized apps and services available through the
 web. With the two provided files,
 [`docker-compose.for-traefik.yml`](docker-compose.for-traefik.yml) (or
 [`docker-compose.with-traefik.yml`](docker-compose.with-traefik.yml)) and
 [`docker-compose.override.yml`](docker-compose.override.yml), it is equally easy
-to deploy and use Continuwuity, with a little caveat. If you already took a look at
-the files, then you should have seen the `well-known` service, and that is the
-little caveat. Traefik is simply a proxy and loadbalancer and is not able to
-serve any kind of content, but for Continuwuity to federate, we need to either
-expose ports `443` and `8448` or serve two endpoints `.well-known/matrix/client`
+to deploy and use Continuwuity, with a small caveat. If you have already looked at
+the files, you should have seen the `well-known` service, which is the
+small caveat. Traefik is simply a proxy and load balancer and cannot
+serve any kind of content. For Continuwuity to federate, we need to either
+expose ports `443` and `8448` or serve two endpoints: `.well-known/matrix/client`
 and `.well-known/matrix/server`.

-With the service `well-known` we use a single `nginx` container that will serve
+With the service `well-known`, we use a single `nginx` container that serves
 those two files.

+Alternatively, you can use Continuwuity's built-in delegation file capability. Set up the delegation files in the configuration file, and then proxy paths under `/.well-known/matrix` to continuwuity. For example, the label ``traefik.http.routers.continuwuity.rule=(Host(`matrix.ellis.link`) || (Host(`ellis.link`) && PathPrefix(`/.well-known/matrix`)))`` does this for the domain `ellis.link`.
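The two `.well-known` endpoints carry small JSON documents whose keys come from the Matrix specification. A sketch of their shape, with placeholder domains:

```python
import json

# /.well-known/matrix/client points clients at the client-server API base URL;
# /.well-known/matrix/server points federating servers at a host:port.
# "matrix.example.com" is a placeholder, not a real deployment.
client_doc = {"m.homeserver": {"base_url": "https://matrix.example.com"}}
server_doc = {"m.server": "matrix.example.com:443"}

print(json.dumps(client_doc))
print(json.dumps(server_doc))
```

Whichever component serves these (nginx, Traefik middleware, or Continuwuity itself), the response bodies should be exactly this kind of JSON with your own domain substituted.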
 ## Voice communication

 See the [TURN](../turn.md) page.
@@ -1,5 +1,5 @@
 # Continuwuity for FreeBSD

-Continuwuity at the moment does not provide FreeBSD builds or have FreeBSD packaging, however Continuwuity does build and work on FreeBSD using the system-provided RocksDB.
+Continuwuity currently does not provide FreeBSD builds or FreeBSD packaging. However, Continuwuity does build and work on FreeBSD using the system-provided RocksDB.

-Contributions for getting Continuwuity packaged are welcome.
+Contributions to get Continuwuity packaged for FreeBSD are welcome.
@@ -13,31 +13,42 @@
 You may simply download the binary that fits your machine architecture (x86_64
 or aarch64). Run `uname -m` to see what you need.

-Prebuilt fully static musl binaries can be downloaded from the latest tagged
+You can download prebuilt fully static musl binaries from the latest tagged
 release [here](https://forgejo.ellis.link/continuwuation/continuwuity/releases/latest) or
-`main` CI branch workflow artifact output. These also include Debian/Ubuntu
+from the `main` CI branch workflow artifact output. These also include Debian/Ubuntu
 packages.

-These can be curl'd directly from. `ci-bins` are CI workflow binaries by commit
+You can download these directly using curl. The `ci-bins` are CI workflow binaries organized by commit
 hash/revision, and `releases` are tagged releases. Sort by descending last
-modified for the latest.
+modified date to find the latest.

 These binaries have jemalloc and io_uring statically linked and included with
 them, so no additional dynamic dependencies need to be installed.

-For the **best** performance; if using an `x86_64` CPU made in the last ~15 years,
-we recommend using the `-haswell-` optimised binaries. This sets
-`-march=haswell` which is the most compatible and highest performance with
-optimised binaries. The database backend, RocksDB, most benefits from this as it
-will then use hardware accelerated CRC32 hashing/checksumming which is critical
+For the **best** performance: if you are using an `x86_64` CPU made in the last ~15 years,
+we recommend using the `-haswell-` optimized binaries. These set
+`-march=haswell`, which provides the most compatible and highest performance with
+optimized binaries. The database backend, RocksDB, benefits most from this as it
+uses hardware-accelerated CRC32 hashing/checksumming, which is critical
 for performance.

 ### Compiling

-Alternatively, you may compile the binary yourself. We recommend using
-Nix (or [Lix](https://lix.systems)) to build Continuwuity as this has the most
-guaranteed reproducibiltiy and easiest to get a build environment and output
-going. This also allows easy cross-compilation.
+Alternatively, you may compile the binary yourself.
+
+### Building with the Rust toolchain
+
+If wanting to build using standard Rust toolchains, make sure you install:
+
+- (On linux) `liburing-dev` on the compiling machine, and `liburing` on the target host
+- (On linux) `pkg-config` on the compiling machine to allow finding `liburing`
+- A C++ compiler and (on linux) `libclang` for RocksDB
+
+You can build Continuwuity using `cargo build --release --all-features`.
+
+### Building with Nix
+
+If you prefer, you can use Nix (or [Lix](https://lix.systems)) to build Continuwuity. This provides improved reproducibility and makes it easy to set up a build environment and generate output. This approach also allows for easy cross-compilation.

 You can run the `nix build -L .#static-x86_64-linux-musl-all-features` or
 `nix build -L .#static-aarch64-linux-musl-all-features` commands based
@@ -45,17 +56,11 @@ on architecture to cross-compile the necessary static binary located at
 `result/bin/conduwuit`. This is reproducible with the static binaries produced
 in our CI.

-If wanting to build using standard Rust toolchains, make sure you install:
-- `liburing-dev` on the compiling machine, and `liburing` on the target host
-- LLVM and libclang for RocksDB
-
-You can build Continuwuity using `cargo build --release --all-features`
-
 ## Adding a Continuwuity user

-While Continuwuity can run as any user it is better to use dedicated users for
-different services. This also allows you to make sure that the file permissions
-are correctly set up.
+While Continuwuity can run as any user, it is better to use dedicated users for
+different services. This also ensures that the file permissions
+are set up correctly.

 In Debian, you can use this command to create a Continuwuity user:
@@ -71,18 +76,18 @@ sudo useradd -r --shell /usr/bin/nologin --no-create-home continuwuity

 ## Forwarding ports in the firewall or the router

-Matrix's default federation port is port 8448, and clients must be using port 443.
-If you would like to use only port 443, or a different port, you will need to setup
-delegation. Continuwuity has config options for doing delegation, or you can configure
-your reverse proxy to manually serve the necessary JSON files to do delegation
+Matrix's default federation port is 8448, and clients must use port 443.
+If you would like to use only port 443 or a different port, you will need to set up
+delegation. Continuwuity has configuration options for delegation, or you can configure
+your reverse proxy to manually serve the necessary JSON files for delegation
 (see the `[global.well_known]` config section).

 If Continuwuity runs behind a router or in a container and has a different public
-IP address than the host system these public ports need to be forwarded directly
-or indirectly to the port mentioned in the config.
+IP address than the host system, you need to forward these public ports directly
+or indirectly to the port mentioned in the configuration.

-Note for NAT users; if you have trouble connecting to your server from the inside
-of your network, you need to research your router and see if it supports "NAT
+Note for NAT users: if you have trouble connecting to your server from inside
+your network, check if your router supports "NAT
 hairpinning" or "NAT loopback".

 If your router does not support this feature, you need to research doing local
@ -92,19 +97,19 @@ on the network level, consider something like NextDNS or Pi-Hole.
|
||||||
|
|
||||||
## Setting up a systemd service
|
## Setting up a systemd service
|
||||||
|
|
||||||
Two example systemd units for Continuwuity can be found
|
You can find two example systemd units for Continuwuity
|
||||||
[on the configuration page](../configuration/examples.md#debian-systemd-unit-file).
|
[on the configuration page](../configuration/examples.md#debian-systemd-unit-file).
|
||||||
You may need to change the `ExecStart=` path to where you placed the Continuwuity
|
You may need to change the `ExecStart=` path to match where you placed the Continuwuity
|
||||||
binary if it is not `/usr/bin/conduwuit`.
|
binary if it is not in `/usr/bin/conduwuit`.
|
||||||
|
|
||||||
On systems where rsyslog is used alongside journald (i.e. Red Hat-based distros
|
On systems where rsyslog is used alongside journald (i.e. Red Hat-based distros
|
||||||
and OpenSUSE), put `$EscapeControlCharactersOnReceive off` inside
|
and OpenSUSE), put `$EscapeControlCharactersOnReceive off` inside
|
||||||
`/etc/rsyslog.conf` to allow color in logs.
|
`/etc/rsyslog.conf` to allow color in logs.
|
||||||
|
|
||||||
If you are using a different `database_path` other than the systemd unit
|
If you are using a different `database_path` than the systemd unit's
|
||||||
configured default `/var/lib/conduwuit`, you need to add your path to the
|
configured default `/var/lib/conduwuit`, you need to add your path to the
|
||||||
systemd unit's `ReadWritePaths=`. This can be done by either directly editing
|
systemd unit's `ReadWritePaths=`. You can do this by either directly editing
|
||||||
`conduwuit.service` and reloading systemd, or running `systemctl edit conduwuit.service`
|
`conduwuit.service` and reloading systemd, or by running `systemctl edit conduwuit.service`
|
||||||
and entering the following:
|
and entering the following:
|
||||||
|
|
||||||
```
|
```
|
||||||
|
@ -114,8 +119,8 @@ ReadWritePaths=/path/to/custom/database/path
|
||||||
|
|
||||||
## Creating the Continuwuity configuration file
|
## Creating the Continuwuity configuration file
|
||||||
|
|
||||||
Now we need to create the Continuwuity's config file in
|
Now you need to create the Continuwuity configuration file in
|
||||||
`/etc/continuwuity/continuwuity.toml`. The example config can be found at
|
`/etc/continuwuity/continuwuity.toml`. You can find an example configuration at
|
||||||
[conduwuit-example.toml](../configuration/examples.md).
|
[conduwuit-example.toml](../configuration/examples.md).
|
||||||
|
|
||||||
**Please take a moment to read the config. You need to change at least the
|
**Please take a moment to read the config. You need to change at least the
|
||||||
|
@ -125,8 +130,8 @@ RocksDB is the only supported database backend.
|
||||||
|
|
||||||
## Setting the correct file permissions
|
## Setting the correct file permissions
|
||||||
|
|
||||||
If you are using a dedicated user for Continuwuity, you will need to allow it to
|
If you are using a dedicated user for Continuwuity, you need to allow it to
|
||||||
read the config. To do that you can run this:
|
read the configuration. To do this, run:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
sudo chown -R root:root /etc/conduwuit
|
sudo chown -R root:root /etc/conduwuit
|
||||||
|
@ -143,13 +148,13 @@ sudo chmod 700 /var/lib/conduwuit/
|
||||||
|
|
||||||
## Setting up the Reverse Proxy
|
## Setting up the Reverse Proxy
|
||||||
|
|
||||||
We recommend Caddy as a reverse proxy, as it is trivial to use, handling TLS certificates, reverse proxy headers, etc transparently with proper defaults.
|
We recommend Caddy as a reverse proxy because it is trivial to use and handles TLS certificates, reverse proxy headers, etc. transparently with proper defaults.
|
||||||
For other software, please refer to their respective documentation or online guides.
|
For other software, please refer to their respective documentation or online guides.
|
||||||
|
|
||||||
### Caddy
|
### Caddy
|
||||||
|
|
||||||
After installing Caddy via your preferred method, create `/etc/caddy/conf.d/conduwuit_caddyfile`
|
After installing Caddy via your preferred method, create `/etc/caddy/conf.d/conduwuit_caddyfile`
|
||||||
and enter this (substitute for your server name).
|
and enter the following (substitute your actual server name):
|
||||||
|
|
||||||
```caddyfile
|
```caddyfile
|
||||||
your.server.name, your.server.name:8448 {
|
your.server.name, your.server.name:8448 {
|
||||||
|
@ -168,11 +173,11 @@ sudo systemctl enable --now caddy
|
||||||
|
|
||||||
### Other Reverse Proxies
|
### Other Reverse Proxies
|
||||||
|
|
||||||
As we would prefer our users to use Caddy, we will not provide configuration files for other proxys.
|
As we prefer our users to use Caddy, we do not provide configuration files for other proxies.
|
||||||
|
|
||||||
You will need to reverse proxy everything under following routes:
|
You will need to reverse proxy everything under the following routes:
|
||||||
- `/_matrix/` - core Matrix C-S and S-S APIs
|
- `/_matrix/` - core Matrix C-S and S-S APIs
|
||||||
- `/_conduwuit/` - ad-hoc Continuwuity routes such as `/local_user_count` and
|
- `/_conduwuit/` and/or `/_continuwuity/` - ad-hoc Continuwuity routes such as `/local_user_count` and
|
||||||
`/server_version`
|
`/server_version`
|
||||||
|
|
||||||
You can optionally reverse proxy the following individual routes:
|
You can optionally reverse proxy the following individual routes:
|
||||||
|
@ -193,16 +198,16 @@ Examples of delegation:
|
||||||
|
|
||||||
For Apache and Nginx there are many examples available online.
|
For Apache and Nginx there are many examples available online.
|
||||||
|
|
||||||
Lighttpd is not supported as it seems to mess with the `X-Matrix` Authorization
|
Lighttpd is not supported as it appears to interfere with the `X-Matrix` Authorization
|
||||||
header, making federation non-functional. If a workaround is found, feel free to share to get it added to the documentation here.
|
header, making federation non-functional. If you find a workaround, please share it so we can add it to this documentation.
|
||||||
|
|
||||||
If using Apache, you need to use `nocanon` in your `ProxyPass` directive to prevent httpd from messing with the `X-Matrix` header (note that Apache isn't very good as a general reverse proxy and we discourage the usage of it if you can).
|
If using Apache, you need to use `nocanon` in your `ProxyPass` directive to prevent httpd from interfering with the `X-Matrix` header (note that Apache is not ideal as a general reverse proxy, so we discourage using it if alternatives are available).
|
||||||
|
|
||||||
If using Nginx, you need to give Continuwuity the request URI using `$request_uri`, or like so:
|
If using Nginx, you need to pass the request URI to Continuwuity using `$request_uri`, like this:
|
||||||
- `proxy_pass http://127.0.0.1:6167$request_uri;`
|
- `proxy_pass http://127.0.0.1:6167$request_uri;`
|
||||||
- `proxy_pass http://127.0.0.1:6167;`
|
- `proxy_pass http://127.0.0.1:6167;`
|
||||||
|
|
||||||
Nginx users need to increase `client_max_body_size` (default is 1M) to match
|
Nginx users need to increase the `client_max_body_size` setting (default is 1M) to match the
|
||||||
`max_request_size` defined in conduwuit.toml.
|
`max_request_size` defined in conduwuit.toml.
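For reference, a minimal nginx `server` block combining these points might look like this (a sketch; names, ports, and sizes are illustrative and must match your setup):

```nginx
server {
    listen 443 ssl;
    listen 8448 ssl;
    server_name your.server.name;

    # Must be at least as large as max_request_size in conduwuit.toml
    client_max_body_size 20M;

    location /_matrix/ {
        # $request_uri forwards the raw request URI unmodified
        proxy_pass http://127.0.0.1:6167$request_uri;
    }
}
```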

 ## You're done

@@ -222,7 +227,7 @@ sudo systemctl enable conduwuit

 ## How do I know it works?

 You can open [a Matrix client](https://matrix.org/ecosystem/clients), enter your
-homeserver and try to register.
+homeserver address, and try to register.

 You can also use these commands as a quick health check (replace
 `your.server.name`).

@@ -237,10 +242,10 @@ curl https://your.server.name:8448/_conduwuit/server_version

 curl https://your.server.name:8448/_matrix/federation/v1/version
 ```

-- To check if your server can talk with other homeservers, you can use the
+- To check if your server can communicate with other homeservers, use the
   [Matrix Federation Tester](https://federationtester.matrix.org/). If you can
-register but cannot join federated rooms check your config again and also check
-if the port 8448 is open and forwarded correctly.
+register but cannot join federated rooms, check your configuration and verify
+that port 8448 is open and forwarded correctly.

 # What's next?
@@ -1,9 +1,9 @@

 # Continuwuity for Kubernetes

 Continuwuity doesn't support horizontal scalability or distributed loading
-natively, however a community maintained Helm Chart is available here to run
+natively. However, a community-maintained Helm Chart is available here to run
 conduwuit on Kubernetes: <https://gitlab.cronce.io/charts/conduwuit>

-This should be compatible with continuwuity, but you will need to change the image reference.
+This should be compatible with Continuwuity, but you will need to change the image reference.

-Should changes need to be made, please reach out to the maintainer as this is not maintained/controlled by the Continuwuity maintainers.
+If changes need to be made, please reach out to the maintainer, as this is not maintained or controlled by the Continuwuity maintainers.
@@ -1,75 +1,130 @@

 # Continuwuity for NixOS

-Continuwuity can be acquired by Nix (or [Lix][lix]) from various places:
+NixOS packages Continuwuity as `matrix-continuwuity`. This package includes both the Continuwuity software and a dedicated NixOS module for configuration and deployment.

-* The `flake.nix` at the root of the repo
-* The `default.nix` at the root of the repo
-* From Continuwuity's binary cache
+## Installation methods

-### NixOS module
+You can acquire Continuwuity with Nix (or [Lix][lix]) from these sources:

-The `flake.nix` and `default.nix` do not currently provide a NixOS module (contributions
-welcome!), so [`services.matrix-conduit`][module] from Nixpkgs can be used to configure
-Continuwuity.
+* Directly from Nixpkgs using the official package (`pkgs.matrix-continuwuity`)
+* The `flake.nix` at the root of the Continuwuity repo
+* The `default.nix` at the root of the Continuwuity repo

-### Conduit NixOS Config Module and SQLite
+## NixOS module

-Beware! The [`services.matrix-conduit`][module] module defaults to SQLite as a database backend.
-Continuwuity dropped SQLite support in favor of exclusively supporting the much faster RocksDB.
-Make sure that you are using the RocksDB backend before migrating!
+Continuwuity now has an official NixOS module that simplifies configuration and deployment. The module is available in Nixpkgs as `services.matrix-continuwuity` from NixOS 25.05.

-There is a [tool to migrate a Conduit SQLite database to
-RocksDB](https://github.com/ShadowJonathan/conduit_toolbox/).
+Here's a basic example of how to use the module:

-If you want to run the latest code, you should get Continuwuity from the `flake.nix`
-or `default.nix` and set [`services.matrix-conduit.package`][package]
-appropriately to use Continuwuity instead of Conduit.
+```nix
+{ config, pkgs, ... }:
+
+{
+  services.matrix-continuwuity = {
+    enable = true;
+    settings = {
+      global = {
+        server_name = "example.com";
+        # Listening on localhost by default
+        # address and port are handled automatically
+        allow_registration = false;
+        allow_encryption = true;
+        allow_federation = true;
+        trusted_servers = [ "matrix.org" ];
+      };
+    };
+  };
+}
+```
+
+### Available options
+
+The NixOS module provides these configuration options:
+
+- `enable`: Enable the Continuwuity service
+- `user`: The user to run Continuwuity as (defaults to "continuwuity")
+- `group`: The group to run Continuwuity as (defaults to "continuwuity")
+- `extraEnvironment`: Extra environment variables to pass to the Continuwuity server
+- `package`: The Continuwuity package to use
+- `settings`: The Continuwuity configuration (in TOML format)
+
+Use the `settings` option to configure Continuwuity itself. See the [example configuration file](../configuration/examples.md#example-configuration) for all available options.

 ### UNIX sockets

-Due to the lack of a Continuwuity NixOS module, when using the `services.matrix-conduit` module
-a workaround like the one below is necessary to use UNIX sockets. This is because the UNIX
-socket option does not exist in Conduit, and the module forcibly sets the `address` and
-`port` config options.
+The NixOS module natively supports UNIX sockets through the `global.unix_socket_path` option. When using UNIX sockets, set `global.address` to `null`:

 ```nix
-options.services.matrix-conduit.settings = lib.mkOption {
-  apply = old: old // (
-    if (old.global ? "unix_socket_path")
-    then { global = builtins.removeAttrs old.global [ "address" "port" ]; }
-    else { }
-  );
-};
+services.matrix-continuwuity = {
+  enable = true;
+  settings = {
+    global = {
+      server_name = "example.com";
+      address = null; # Must be null when using unix_socket_path
+      unix_socket_path = "/run/continuwuity/continuwuity.sock";
+      unix_socket_perms = 660; # Default permissions for the socket
+      # ...
+    };
+  };
+};
 ```
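If a reverse proxy fronts the socket, point it at the socket path instead of a TCP port. For nginx, that looks roughly like this (a sketch; the socket path matches the example above, and nginx needs permission to read and write the socket per `unix_socket_perms`):

```nginx
location /_matrix/ {
    # Proxy over the UNIX socket instead of 127.0.0.1:6167
    proxy_pass http://unix:/run/continuwuity/continuwuity.sock;
}
```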

-Additionally, the [`matrix-conduit` systemd unit][systemd-unit] in the module does not allow
-the `AF_UNIX` socket address family in their systemd unit's `RestrictAddressFamilies=` which
-disallows the namespace from accessing or creating UNIX sockets and has to be enabled like so:
+The module automatically sets the correct `RestrictAddressFamilies` in the systemd service configuration to allow access to UNIX sockets.

-```nix
-systemd.services.conduit.serviceConfig.RestrictAddressFamilies = [ "AF_UNIX" ];
-```
+### RocksDB database

-Even though those workarounds are feasible a Continuwuity NixOS configuration module, developed and
-published by the community, would be appreciated.
+Continuwuity exclusively uses RocksDB as its database backend. The system configures the database path automatically to `/var/lib/continuwuity/` and you cannot change it due to the service's reliance on systemd's StateDir.
+
+If you're migrating from Conduit with SQLite, use this [tool to migrate a Conduit SQLite database to RocksDB](https://github.com/ShadowJonathan/conduit_toolbox/).

 ### jemalloc and hardened profile

-Continuwuity uses jemalloc by default. This may interfere with the [`hardened.nix` profile][hardened.nix]
-due to them using `scudo` by default. You must either disable/hide `scudo` from Continuwuity, or
-disable jemalloc like so:
+Continuwuity uses jemalloc by default. This may interfere with the [`hardened.nix` profile][hardened.nix] because it uses `scudo` by default. Either disable/hide `scudo` from Continuwuity or disable jemalloc like this:

 ```nix
-let
-  conduwuit = pkgs.unstable.conduwuit.override {
-    enableJemalloc = false;
-  };
-in
+services.matrix-continuwuity = {
+  enable = true;
+  package = pkgs.matrix-continuwuity.override {
+    enableJemalloc = false;
+  };
+  # ...
+};
 ```

+## Upgrading from Conduit
+
+If you previously used Conduit with the `services.matrix-conduit` module:
+
+1. Ensure your Conduit uses the RocksDB backend, or migrate from SQLite using the [migration tool](https://github.com/ShadowJonathan/conduit_toolbox/)
+2. Switch to the new module by changing `services.matrix-conduit` to `services.matrix-continuwuity` in your configuration
+3. Update any custom configuration to match the new module's structure
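Step 2 is a mechanical rename; as a sketch (assuming an otherwise minimal configuration):

```nix
{
  # Before, with the Conduit module:
  # services.matrix-conduit.settings.global.server_name = "example.com";

  # After, with the Continuwuity module:
  services.matrix-continuwuity = {
    enable = true;
    settings.global.server_name = "example.com";
  };
}
```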

+## Reverse proxy configuration
+
+You'll need to set up a reverse proxy (like nginx or caddy) to expose Continuwuity to the internet. Configure your reverse proxy to forward requests to `/_matrix` on port 443 and 8448 to your Continuwuity instance.
+
+Here's an example nginx configuration:
+
+```nginx
+server {
+    listen 443 ssl;
+    listen [::]:443 ssl;
+    listen 8448 ssl;
+    listen [::]:8448 ssl;
+
+    server_name example.com;
+
+    # SSL configuration here...
+
+    location /_matrix/ {
+        proxy_pass http://127.0.0.1:6167$request_uri;
+        proxy_set_header Host $host;
+        proxy_set_header X-Real-IP $remote_addr;
+        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+        proxy_set_header X-Forwarded-Proto $scheme;
+    }
+}
+```

 [lix]: https://lix.systems/
-[module]: https://search.nixos.org/options?channel=unstable&query=services.matrix-conduit
-[package]: https://search.nixos.org/options?channel=unstable&query=services.matrix-conduit.package
-[hardened.nix]: https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/profiles/hardened.nix#L22
-[systemd-unit]: https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/matrix/conduit.nix#L132
+[hardened.nix]: https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/profiles/hardened.nix
@@ -2,7 +2,7 @@

 Information about developing the project. If you are only interested in using
 it, you can safely ignore this page. If you plan on contributing, see the
-[contributor's guide](./contributing.md).
+[contributor's guide](./contributing.md) and [code style guide](./development/code_style.md).

 ## Continuwuity project layout
docs/development/code_style.md (new file, 331 lines)
@@ -0,0 +1,331 @@

# Code Style Guide

This guide outlines the coding standards and best practices for Continuwuity development. These guidelines help avoid bugs and maintain code consistency, readability, and quality across the project.

These guidelines apply to new code on a best-effort basis. When modifying existing code, follow existing patterns in the immediate area you're changing, and then gradually improve code style when making substantial changes.

## General Principles

- **Clarity over cleverness**: Write code that is easy to understand and maintain
- **Consistency**: Pragmatically follow existing patterns in the codebase, rather than adding new dependencies.
- **Safety**: Prefer safe, explicit code over unsafe code with implicit requirements
- **Performance**: Consider performance implications, but not at the expense of correctness or maintainability

## Formatting and Linting

All code must satisfy lints (clippy, rustc, rustdoc, etc.) and be formatted using **nightly** rustfmt (`cargo +nightly fmt`). Many of the `rustfmt.toml` features depend on the nightly toolchain.

If you need to allow a lint, ensure it's either obvious why (e.g. clippy flags a clone as redundant, but it's actually required) or add a comment explaining the reason. Do not write inefficient code just to satisfy lints. If a lint is wrong and provides a less efficient solution, allow the lint and mention that in a comment.

If making large formatting changes across unrelated files, create a separate commit so it can be added to the `.git-blame-ignore-revs` file.
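As a small runnable sketch of the "allow with a reason" pattern described above (the lint name and surrounding code are illustrative, not from the codebase):

```rs
fn main() {
    // Allowed deliberately: collecting into an owned `Vec` looks
    // redundant to clippy here, but we keep it to show how an
    // `#[allow]` should carry an explanatory comment.
    #[allow(clippy::needless_collect)]
    let lines: Vec<&str> = "one\ntwo\nthree".lines().collect();

    println!("{}", lines.len());
}
```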

## Rust-Specific Guidelines

### Naming Conventions

Follow standard Rust naming conventions as outlined in the [Rust API Guidelines](https://rust-lang.github.io/api-guidelines/naming.html):

- Use `snake_case` for functions, variables, and modules
- Use `PascalCase` for types, traits, and enum variants
- Use `SCREAMING_SNAKE_CASE` for constants and statics
- Use descriptive names that clearly indicate purpose

```rs
// Good
fn process_user_request(user_id: &UserId) -> Result<Response, Error> { ... }

const MAX_RETRY_ATTEMPTS: usize = 3;

struct UserSession {
    session_id: String,
    created_at: SystemTime,
}

// Avoid
fn proc_reqw(id: &str) -> Result<Resp, Err> { ... }
```

### Error Handling

- Use `Result<T, E>` for operations that can fail
- Prefer specific error types over generic ones
- Use the `?` operator for error propagation
- Provide meaningful error messages
- If needed, create or use an error enum.

```rs
// Good
fn parse_server_name(input: &str) -> Result<ServerName, InvalidServerNameError> {
    ServerName::parse(input)
        .map_err(|_| InvalidServerNameError::new(input))
}

// Avoid
fn parse_server_name(input: &str) -> Result<ServerName, Box<dyn Error>> {
    Ok(ServerName::parse(input).unwrap())
}
```

### Option Handling

- Prefer explicit `Option` handling over unwrapping
- Use combinators like `map`, `and_then`, `unwrap_or_else` when appropriate

```rs
// Good
let display_name = user.display_name
    .as_ref()
    .map(|name| name.trim())
    .filter(|name| !name.is_empty())
    .unwrap_or(&user.localpart);

// Avoid
let display_name = if user.display_name.is_some() {
    user.display_name.as_ref().unwrap()
} else {
    &user.localpart
};
```

## Logging Guidelines

### Structured Logging

**Always use structured logging instead of string interpolation.** This improves log parsing, filtering, and observability.

```rs
// Good - structured parameters
debug!(
    room_id = %room_id,
    user_id = %user_id,
    event_type = ?event.event_type(),
    "Processing room event"
);

info!(
    server_name = %server_name,
    response_time_ms = response_time.as_millis(),
    "Federation request completed successfully"
);

// Avoid - string interpolation
debug!("Processing room event for {room_id} from {user_id}");
info!("Federation request to {server_name} took {response_time:?}");
```

### Log Levels

Use appropriate log levels:

- `error!`: Unrecoverable errors that affect functionality
- `warn!`: Potentially problematic situations that don't stop execution
- `info!`: General information about application flow
- `debug!`: Detailed information for debugging
- `trace!`: Very detailed information, typically only useful during development

Keep in mind the frequency with which the log will be reached, and its relevancy to a server operator.

```rs
// Good
error!(
    error = %err,
    room_id = %room_id,
    "Failed to send event to room"
);

warn!(
    server_name = %server_name,
    attempt = retry_count,
    "Federation request failed, retrying"
);

info!(
    user_id = %user_id,
    "User registered successfully"
);

debug!(
    event_id = %event_id,
    auth_events = ?auth_event_ids,
    "Validating event authorization"
);
```

### Sensitive Information

Never log sensitive information such as:

- Access tokens
- Passwords
- Private keys
- Personal user data (unless specifically needed for debugging)

```rs
// Good
debug!(
    user_id = %user_id,
    session_id = %session_id,
    "Processing authenticated request"
);

// Avoid
debug!(
    user_id = %user_id,
    access_token = %access_token,
    "Processing authenticated request"
);
```

## Lock Management

### Explicit Lock Scopes

**Always use closure guards instead of implicitly dropped guards.** This makes lock scopes explicit and helps prevent deadlocks.

Use the `WithLock` trait from `core::utils::with_lock`:

```rs
use conduwuit::utils::with_lock::WithLock;

// Good - explicit closure guard
shared_data.with_lock(|data| {
    data.counter += 1;
    data.last_updated = SystemTime::now();
    // Lock is explicitly released here
});

// Avoid - implicit guard
{
    let mut data = shared_data.lock().unwrap();
    data.counter += 1;
    data.last_updated = SystemTime::now();
    // Lock released when guard goes out of scope - less explicit
}
```

For async contexts, use the async variant:

```rs
use conduwuit::utils::with_lock::WithLockAsync;

// Good - async closure guard
async_shared_data.with_lock(|data| {
    data.process_async_update();
}).await;
```

### Lock Ordering

When acquiring multiple locks, always acquire them in a consistent order to prevent deadlocks:

```rs
// Good - consistent ordering (e.g., by memory address or logical hierarchy)
let mut locks = [&lock_a, &lock_b, &lock_c];
locks.sort_by_key(|lock| *lock as *const _ as usize);

for lock in locks {
    lock.with_lock(|data| {
        // Process data
    });
}

// Avoid - inconsistent ordering that can cause deadlocks
lock_b.with_lock(|data_b| {
    lock_a.with_lock(|data_a| {
        // Deadlock risk if another thread acquires in A->B order
    });
});
```

## Documentation

### Code Comments

- Reference related documentation or parts of the specification
- When a task has multiple ways of being achieved, explain your reasoning for your decision
- Update comments when code changes

```rs
/// Processes a federation request with automatic retries and backoff.
///
/// Implements exponential backoff to handle temporary
/// network issues and server overload gracefully.
pub async fn send_federation_request(
    destination: &ServerName,
    request: FederationRequest,
) -> Result<FederationResponse, FederationError> {
    // Retry with exponential backoff because federation can be flaky
    // due to network issues or temporary server overload
    let mut retry_delay = Duration::from_millis(100);

    for attempt in 1..=MAX_RETRIES {
        match try_send_request(destination, &request).await {
            Ok(response) => return Ok(response),
            Err(err) if err.is_retriable() && attempt < MAX_RETRIES => {
                warn!(
                    destination = %destination,
                    attempt = attempt,
                    error = %err,
                    retry_delay_ms = retry_delay.as_millis(),
                    "Federation request failed, retrying"
                );

                tokio::time::sleep(retry_delay).await;
                retry_delay *= 2; // Exponential backoff
            }
            Err(err) => return Err(err),
        }
    }

    unreachable!("Loop should have returned or failed by now")
}
```

### Async Patterns

- Use `async`/`await` appropriately
- Avoid blocking operations in async contexts
- Consider using `tokio::task::spawn_blocking` for CPU-intensive work

```rs
// Good - non-blocking async operation
pub async fn fetch_user_profile(
    &self,
    user_id: &UserId,
) -> Result<UserProfile, Error> {
    let profile = self.db
        .get_user_profile(user_id)
        .await?;

    Ok(profile)
}

// Good - CPU-intensive work moved to blocking thread
pub async fn generate_thumbnail(
    &self,
    image_data: Vec<u8>,
) -> Result<Vec<u8>, Error> {
    tokio::task::spawn_blocking(move || {
        image::generate_thumbnail(image_data)
    })
    .await
    .map_err(|_| Error::TaskJoinError)?
}
```

## Inclusivity and Diversity Guidelines

All code and documentation must be written with inclusivity and diversity in mind. This ensures our software is welcoming and accessible to all users and contributors. Follow the [Google guide on writing inclusive code and documentation](https://developers.google.com/style/inclusive-documentation) for comprehensive guidance.

The following types of language are explicitly forbidden in all code, comments, documentation, and commit messages:

**Ableist language:** Avoid terms like "sanity check", "crazy", "insane", "cripple", or "blind to". Use alternatives like "validation", "unexpected", "disable", or "unaware of".

**Socially-charged technical terms:** Replace overly divisive terminology with neutral alternatives:

- "whitelist/blacklist" → "allowlist/denylist" or "permitted/blocked"
- "master/slave" → "primary/replica", "controller/worker", or "parent/child"

When working with external dependencies that use non-inclusive terminology, avoid propagating them in your own APIs and variable names.

Use diverse examples in documentation that avoid culturally-specific references, assumptions about user demographics, or unnecessarily gendered language. Design with accessibility and inclusivity in mind by providing clear error messages and considering diverse user needs.

This software is intended to be used by everyone regardless of background, identity, or ability. Write code and documentation that reflects this commitment to inclusivity.
24 docs/turn.md

@@ -68,3 +68,27 @@ documentation](https://github.com/coturn/coturn/blob/master/docker/coturn/README

For security recommendations see Synapse's [Coturn
documentation](https://element-hq.github.io/synapse/latest/turn-howto.html).

### Testing

To make sure TURN credentials are being correctly served to clients, you can manually make an HTTP request to the turnServer endpoint.

`curl "https://<matrix.example.com>/_matrix/client/r0/voip/turnServer" -H 'Authorization: Bearer <your_client_token>' | jq`

You should get a response like this:

```json
{
  "username": "1752792167:@jade:example.com",
  "password": "KjlDlawdPbU9mvP4bhdV/2c/h65=",
  "uris": [
    "turns:coturn.example.com?transport=udp",
    "turns:coturn.example.com?transport=tcp",
    "turn:coturn.example.com?transport=udp",
    "turn:coturn.example.com?transport=tcp"
  ],
  "ttl": 86400
}
```

You can test that these credentials work using [Trickle ICE](https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/).

@@ -187,6 +187,7 @@ pub fn build(router: Router<State>, server: &Server) -> Router<State> {
 		.ruma_route(&client::well_known_support)
 		.ruma_route(&client::well_known_client)
 		.route("/_conduwuit/server_version", get(client::conduwuit_server_version))
+		.route("/_continuwuity/server_version", get(client::conduwuit_server_version))
 		.ruma_route(&client::room_initial_sync_route)
 		.route("/client/server.json", get(client::syncv3_client_server_json));

@@ -226,13 +227,15 @@ pub fn build(router: Router<State>, server: &Server) -> Router<State> {
 		.ruma_route(&server::well_known_server)
 		.ruma_route(&server::get_content_route)
 		.ruma_route(&server::get_content_thumbnail_route)
-		.route("/_conduwuit/local_user_count", get(client::conduwuit_local_user_count));
+		.route("/_conduwuit/local_user_count", get(client::conduwuit_local_user_count))
+		.route("/_continuwuity/local_user_count", get(client::conduwuit_local_user_count));
 	} else {
 		router = router
 			.route("/_matrix/federation/*path", any(federation_disabled))
 			.route("/.well-known/matrix/server", any(federation_disabled))
 			.route("/_matrix/key/*path", any(federation_disabled))
-			.route("/_conduwuit/local_user_count", any(federation_disabled));
+			.route("/_conduwuit/local_user_count", any(federation_disabled))
+			.route("/_continuwuity/local_user_count", any(federation_disabled));
 	}

 	if config.allow_legacy_media {

@@ -1207,7 +1207,7 @@ pub struct Config {
 	/// 3 to 5 = Statistics with possible performance impact.
 	/// 6 = All statistics.
 	///
-	/// default: 1
+	/// default: 3
 	#[serde(default = "default_rocksdb_stats_level")]
 	pub rocksdb_stats_level: u8,

@@ -1889,12 +1889,10 @@ pub struct Config {
 	pub stream_amplification: usize,

 	/// Number of sender task workers; determines sender parallelism. Default is
-	/// '0' which means the value is determined internally, likely matching the
-	/// number of tokio worker-threads or number of cores, etc. Override by
-	/// setting a non-zero value.
+	/// core count. Override by setting a different value.
 	///
-	/// default: 0
-	#[serde(default)]
+	/// default: core count
+	#[serde(default = "default_sender_workers")]
 	pub sender_workers: usize,

 	/// Enables listener sockets; can be set to false to disable listening. This

@@ -2125,45 +2123,48 @@ fn default_database_backups_to_keep() -> i16 { 1 }

 fn default_db_write_buffer_capacity_mb() -> f64 { 48.0 + parallelism_scaled_f64(4.0) }

-fn default_db_cache_capacity_mb() -> f64 { 128.0 + parallelism_scaled_f64(64.0) }
+fn default_db_cache_capacity_mb() -> f64 { 512.0 + parallelism_scaled_f64(512.0) }

-fn default_pdu_cache_capacity() -> u32 { parallelism_scaled_u32(10_000).saturating_add(100_000) }
+fn default_pdu_cache_capacity() -> u32 { parallelism_scaled_u32(50_000).saturating_add(100_000) }

 fn default_cache_capacity_modifier() -> f64 { 1.0 }

 fn default_auth_chain_cache_capacity() -> u32 {
-	parallelism_scaled_u32(10_000).saturating_add(100_000)
+	parallelism_scaled_u32(50_000).saturating_add(100_000)
 }

 fn default_shorteventid_cache_capacity() -> u32 {
-	parallelism_scaled_u32(50_000).saturating_add(100_000)
+	parallelism_scaled_u32(100_000).saturating_add(100_000)
 }

 fn default_eventidshort_cache_capacity() -> u32 {
-	parallelism_scaled_u32(25_000).saturating_add(100_000)
+	parallelism_scaled_u32(50_000).saturating_add(100_000)
 }

 fn default_eventid_pdu_cache_capacity() -> u32 {
-	parallelism_scaled_u32(25_000).saturating_add(100_000)
+	parallelism_scaled_u32(50_000).saturating_add(100_000)
 }

 fn default_shortstatekey_cache_capacity() -> u32 {
-	parallelism_scaled_u32(10_000).saturating_add(100_000)
+	parallelism_scaled_u32(50_000).saturating_add(100_000)
 }

 fn default_statekeyshort_cache_capacity() -> u32 {
-	parallelism_scaled_u32(10_000).saturating_add(100_000)
+	parallelism_scaled_u32(50_000).saturating_add(100_000)
 }

 fn default_servernameevent_data_cache_capacity() -> u32 {
-	parallelism_scaled_u32(100_000).saturating_add(500_000)
+	parallelism_scaled_u32(100_000).saturating_add(100_000)
 }

-fn default_stateinfo_cache_capacity() -> u32 { parallelism_scaled_u32(100) }
+fn default_stateinfo_cache_capacity() -> u32 {
+	parallelism_scaled_u32(500).clamp(100, 12000)
+}

-fn default_roomid_spacehierarchy_cache_capacity() -> u32 { parallelism_scaled_u32(1000) }
+fn default_roomid_spacehierarchy_cache_capacity() -> u32 {
+	parallelism_scaled_u32(500).clamp(100, 12000)
+}

-fn default_dns_cache_entries() -> u32 { 32768 }
+fn default_dns_cache_entries() -> u32 { 327680 }

 fn default_dns_min_ttl() -> u64 { 60 * 180 }

@@ -2265,7 +2266,7 @@ fn default_typing_client_timeout_max_s() -> u64 { 45 }

 fn default_rocksdb_recovery_mode() -> u8 { 1 }

-fn default_rocksdb_log_level() -> String { "error".to_owned() }
+fn default_rocksdb_log_level() -> String { "info".to_owned() }

 fn default_rocksdb_log_time_to_roll() -> usize { 0 }

@@ -2297,7 +2298,7 @@ fn default_rocksdb_compression_level() -> i32 { 32767 }

 #[allow(clippy::doc_markdown)]
 fn default_rocksdb_bottommost_compression_level() -> i32 { 32767 }

-fn default_rocksdb_stats_level() -> u8 { 1 }
+fn default_rocksdb_stats_level() -> u8 { 3 }

 // I know, it's a great name
 #[must_use]

@@ -2352,14 +2353,13 @@ fn default_admin_log_capture() -> String {

 fn default_admin_room_tag() -> String { "m.server_notice".to_owned() }

 #[allow(clippy::as_conversions, clippy::cast_precision_loss)]
-fn parallelism_scaled_f64(val: f64) -> f64 { val * (sys::available_parallelism() as f64) }
+pub fn parallelism_scaled_f64(val: f64) -> f64 { val * (sys::available_parallelism() as f64) }

-fn parallelism_scaled_u32(val: u32) -> u32 {
-	let val = val.try_into().expect("failed to cast u32 to usize");
-	parallelism_scaled(val).try_into().unwrap_or(u32::MAX)
-}
+pub fn parallelism_scaled_u32(val: u32) -> u32 { val.saturating_mul(sys::available_parallelism() as u32) }

-fn parallelism_scaled(val: usize) -> usize { val.saturating_mul(sys::available_parallelism()) }
+pub fn parallelism_scaled_i32(val: i32) -> i32 { val.saturating_mul(sys::available_parallelism() as i32) }
+
+pub fn parallelism_scaled(val: usize) -> usize { val.saturating_mul(sys::available_parallelism()) }

 fn default_trusted_server_batch_size() -> usize { 256 }

@@ -2379,6 +2379,8 @@ fn default_stream_width_scale() -> f32 { 1.0 }

 fn default_stream_amplification() -> usize { 1024 }

+fn default_sender_workers() -> usize { parallelism_scaled(1) }
+
 fn default_client_receive_timeout() -> u64 { 75 }

 fn default_client_request_timeout() -> u64 { 180 }

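To make the effect of these parallelism-scaled defaults concrete, here is a standalone sketch of the scaling math (the core count is passed in explicitly here for illustration; the real code reads it from `sys::available_parallelism()`):

```rust
// Sketch of the new parallelism_scaled_u32: a saturating multiply by the
// core count, so defaults grow with the machine but never overflow.
fn parallelism_scaled_u32(val: u32, parallelism: u32) -> u32 {
    val.saturating_mul(parallelism)
}

fn main() {
    // e.g. the new default_pdu_cache_capacity on an 8-core machine:
    let pdu = parallelism_scaled_u32(50_000, 8).saturating_add(100_000);
    assert_eq!(pdu, 500_000);

    // saturating_mul caps at u32::MAX instead of panicking or wrapping:
    assert_eq!(parallelism_scaled_u32(u32::MAX, 2), u32::MAX);

    println!("pdu cache capacity on 8 cores: {pdu}");
}
```

This is why the diff can drop the old `try_into().expect(...)` dance: saturating arithmetic on `u32` never needs the usize round-trip.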
@@ -13,6 +13,7 @@ use ruma::{
 		power_levels::RoomPowerLevelsEventContent,
 		third_party_invite::RoomThirdPartyInviteEventContent,
 	},
+	EventId,
 	int,
 	serde::{Base64, Raw},
 };

@@ -21,7 +22,6 @@ use serde::{
 	de::{Error as _, IgnoredAny},
 };
 use serde_json::{from_str as from_json_str, value::RawValue as RawJsonValue};

 use super::{
 	Error, Event, Result, StateEventType, StateKey, TimelineEventType,
 	power_levels::{

@@ -217,8 +217,9 @@ where
 	}

 	/*
-	// TODO: In the past this code caused problems federating with synapse, maybe this has been
-	// resolved already. Needs testing.
+	// TODO: In the past this code was commented as it caused problems with Synapse. This is no
+	// longer the case. This needs to be implemented.
+	// See also: https://github.com/ruma/ruma/pull/2064
 	//
 	// 2. Reject if auth_events
 	//    a. auth_events cannot have duplicate keys since it's a BTree

@@ -241,20 +242,44 @@ where
 	}
 	*/

-	let (room_create_event, power_levels_event, sender_member_event) = join3(
-		fetch_state(&StateEventType::RoomCreate, ""),
-		fetch_state(&StateEventType::RoomPowerLevels, ""),
-		fetch_state(&StateEventType::RoomMember, sender.as_str()),
-	)
-	.await;
+	// let (room_create_event, power_levels_event, sender_member_event) = join3(
+	// 	fetch_state(&StateEventType::RoomCreate, ""),
+	// 	fetch_state(&StateEventType::RoomPowerLevels, ""),
+	// 	fetch_state(&StateEventType::RoomMember, sender.as_str()),
+	// )
+	// .await;
+
+	let room_create_event = fetch_state(&StateEventType::RoomCreate, "").await;
+	let power_levels_event = fetch_state(&StateEventType::RoomPowerLevels, "").await;
+	let sender_member_event = fetch_state(&StateEventType::RoomMember, sender.as_str()).await;

 	let room_create_event = match room_create_event {
 		| None => {
-			warn!("no m.room.create event in auth chain");
+			error!(
+				create_event = room_create_event.as_ref().map(Event::event_id).unwrap_or(<&EventId>::try_from("$unknown").unwrap()).as_str(),
+				power_levels = power_levels_event.as_ref().map(Event::event_id).unwrap_or(<&EventId>::try_from("$unknown").unwrap()).as_str(),
+				member_event = sender_member_event.as_ref().map(Event::event_id).unwrap_or(<&EventId>::try_from("$unknown").unwrap()).as_str(),
+				"no m.room.create event found for {} ({})!",
+				incoming_event.event_id().as_str(),
+				incoming_event.room_id().as_str()
+			);
 			return Ok(false);
 		},
 		| Some(e) => e,
 	};

+	// just re-check 1.2 to work around a bug
+	let Some(room_id_server_name) = incoming_event.room_id().server_name() else {
+		warn!("room ID has no servername");
+		return Ok(false);
+	};
+
+	if room_id_server_name != room_create_event.sender().server_name() {
+		warn!(
+			"servername of room ID origin ({}) does not match servername of m.room.create sender ({})",
+			room_id_server_name,
+			room_create_event.sender().server_name());
+		return Ok(false);
+	}
+
 	if incoming_event.room_id() != room_create_event.room_id() {
 		warn!("room_id of incoming event does not match room_id of m.room.create event");

@@ -733,8 +733,12 @@ where
 	Fut: Future<Output = Option<E>> + Send,
 	E: Event + Send + Sync,
 {
+	let mut room_id = None;
 	while let Some(sort_ev) = event {
 		debug!(event_id = sort_ev.event_id().as_str(), "mainline");
+		if room_id.is_none() {
+			room_id = Some(sort_ev.room_id().to_owned());
+		}

 		let id = sort_ev.event_id();
 		if let Some(depth) = mainline_map.get(id) {

@@ -753,7 +757,7 @@ where
 			}
 		}
 	}
-	// Did not find a power level event so we default to zero
+	warn!("could not find a power event in the mainline map for {room_id:?}, defaulting to zero depth");
 	Ok(0)
 }

@@ -19,6 +19,7 @@ pub mod sys;
 #[cfg(test)]
 mod tests;
 pub mod time;
+pub mod with_lock;

 pub use ::conduwuit_macros::implement;
 pub use ::ctor::{ctor, dtor};

65 src/core/utils/with_lock.rs (new file)

@@ -0,0 +1,65 @@
+//! Traits for explicitly scoping the lifetime of locks.
+
+use std::sync::{Arc, Mutex};
+
+pub trait WithLock<T> {
+	/// Acquires a lock and executes the given closure with the locked data.
+	fn with_lock<F>(&self, f: F)
+	where
+		F: FnMut(&mut T);
+}
+
+impl<T> WithLock<T> for Mutex<T> {
+	fn with_lock<F>(&self, mut f: F)
+	where
+		F: FnMut(&mut T),
+	{
+		// The locking and unlocking logic is hidden inside this function.
+		let mut data_guard = self.lock().unwrap();
+		f(&mut data_guard);
+		// Lock is released here when `data_guard` goes out of scope.
+	}
+}
+
+impl<T> WithLock<T> for Arc<Mutex<T>> {
+	fn with_lock<F>(&self, mut f: F)
+	where
+		F: FnMut(&mut T),
+	{
+		// The locking and unlocking logic is hidden inside this function.
+		let mut data_guard = self.lock().unwrap();
+		f(&mut data_guard);
+		// Lock is released here when `data_guard` goes out of scope.
+	}
+}
+
+pub trait WithLockAsync<T> {
+	/// Acquires a lock and executes the given closure with the locked data.
+	fn with_lock<F>(&self, f: F) -> impl Future<Output = ()>
+	where
+		F: FnMut(&mut T);
+}
+
+impl<T> WithLockAsync<T> for futures::lock::Mutex<T> {
+	async fn with_lock<F>(&self, mut f: F)
+	where
+		F: FnMut(&mut T),
+	{
+		// The locking and unlocking logic is hidden inside this function.
+		let mut data_guard = self.lock().await;
+		f(&mut data_guard);
+		// Lock is released here when `data_guard` goes out of scope.
+	}
+}
+
+impl<T> WithLockAsync<T> for Arc<futures::lock::Mutex<T>> {
+	async fn with_lock<F>(&self, mut f: F)
+	where
+		F: FnMut(&mut T),
+	{
+		// The locking and unlocking logic is hidden inside this function.
+		let mut data_guard = self.lock().await;
+		f(&mut data_guard);
+		// Lock is released here when `data_guard` goes out of scope.
+	}
+}

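A minimal, self-contained usage sketch of this `WithLock` pattern (sync side only; the trait and impl are restated here so the sketch compiles on its own, and the `Arc` case works via auto-deref):

```rust
use std::sync::{Arc, Mutex};

// Restated from the new with_lock.rs so this example is standalone.
pub trait WithLock<T> {
    fn with_lock<F>(&self, f: F)
    where
        F: FnMut(&mut T);
}

impl<T> WithLock<T> for Mutex<T> {
    fn with_lock<F>(&self, mut f: F)
    where
        F: FnMut(&mut T),
    {
        let mut guard = self.lock().unwrap();
        f(&mut guard);
        // Guard dropped here: the lock can never outlive the closure call.
    }
}

fn main() {
    let counter = Arc::new(Mutex::new(0u32));
    // Each call acquires, mutates, and releases within one expression,
    // making it impossible to accidentally hold the lock across an await
    // or a long critical section.
    counter.with_lock(|n| *n += 1);
    counter.with_lock(|n| *n += 1);
    assert_eq!(*counter.lock().unwrap(), 2);
    println!("final count: {}", *counter.lock().unwrap());
}
```

The design choice here is scope discipline: by forcing all mutation through a closure, the guard's lifetime is bounded by the call, which is the stated purpose of the module ("explicitly scoping the lifetime of locks").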
@@ -29,7 +29,7 @@ fn descriptor_cf_options(
 	set_table_options(&mut opts, &desc, cache)?;

 	opts.set_min_write_buffer_number(1);
-	opts.set_max_write_buffer_number(2);
+	opts.set_max_write_buffer_number(3);
 	opts.set_write_buffer_size(desc.write_size);

 	opts.set_target_file_size_base(desc.file_size);

@@ -1,8 +1,6 @@
-use std::{cmp, convert::TryFrom};
-
-use conduwuit::{Config, Result, utils};
+use conduwuit::{Config, Result};
 use rocksdb::{Cache, DBRecoveryMode, Env, LogLevel, Options, statistics::StatsLevel};
+use conduwuit::config::{parallelism_scaled_i32, parallelism_scaled_u32};

 use super::{cf_opts::cache_size_f64, logger::handle as handle_log};

 /// Create database-wide options suitable for opening the database. This also

@@ -23,8 +21,8 @@ pub(crate) fn db_options(config: &Config, env: &Env, row_cache: &Cache) -> Resul
 	set_logging_defaults(&mut opts, config);

 	// Processing
-	opts.set_max_background_jobs(num_threads::<i32>(config)?);
-	opts.set_max_subcompactions(num_threads::<u32>(config)?);
+	opts.set_max_background_jobs(parallelism_scaled_i32(1));
+	opts.set_max_subcompactions(parallelism_scaled_u32(1));
 	opts.set_avoid_unnecessary_blocking_io(true);
 	opts.set_max_file_opening_threads(0);

@@ -126,15 +124,3 @@ fn set_logging_defaults(opts: &mut Options, config: &Config) {
 		opts.set_callback_logger(rocksdb_log_level, &handle_log);
 	}
 }
-
-fn num_threads<T: TryFrom<usize>>(config: &Config) -> Result<T> {
-	const MIN_PARALLELISM: usize = 2;
-
-	let requested = if config.rocksdb_parallelism_threads != 0 {
-		config.rocksdb_parallelism_threads
-	} else {
-		utils::available_parallelism()
-	};
-
-	utils::math::try_into::<T, usize>(cmp::max(MIN_PARALLELISM, requested))
-}

@@ -306,28 +306,25 @@ impl super::Service {

 	#[tracing::instrument(name = "srv", level = "debug", skip(self))]
 	async fn query_srv_record(&self, hostname: &'_ str) -> Result<Option<FedDest>> {
-		let hostnames =
-			[format!("_matrix-fed._tcp.{hostname}."), format!("_matrix._tcp.{hostname}.")];
-
-		for hostname in hostnames {
-			self.services.server.check_running()?;
-
-			debug!("querying SRV for {hostname:?}");
-			let hostname = hostname.trim_end_matches('.');
+		self.services.server.check_running()?;
+
+		debug!("querying SRV for {hostname:?}");
+
+		let hostname_suffix = format!("_matrix-fed._tcp.{hostname}.");
+		let hostname = hostname_suffix.trim_end_matches('.');
 		match self.resolver.resolver.srv_lookup(hostname).await {
 			| Err(e) => Self::handle_resolve_error(&e, hostname)?,
 			| Ok(result) => {
 				return Ok(result.iter().next().map(|result| {
 					FedDest::Named(
 						result.target().to_string().trim_end_matches('.').to_owned(),
 						format!(":{}", result.port())
 							.as_str()
 							.try_into()
 							.unwrap_or_else(|_| FedDest::default_port()),
 					)
 				}));
 			},
 		}
-		}

 		Ok(None)

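The string handling in the new version can be sketched in isolation (a hypothetical standalone helper for illustration; the real code feeds the result straight into `srv_lookup`):

```rust
// Builds the single SRV query name the resolver now uses: only the
// modern `_matrix-fed._tcp` prefix (the legacy `_matrix._tcp` fallback
// is dropped), with the trailing dot trimmed before lookup.
fn srv_query_name(hostname: &str) -> String {
    let with_suffix = format!("_matrix-fed._tcp.{hostname}.");
    with_suffix.trim_end_matches('.').to_owned()
}

fn main() {
    assert_eq!(srv_query_name("example.com"), "_matrix-fed._tcp.example.com");
    println!("{}", srv_query_name("example.com"));
}
```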
@@ -122,10 +122,7 @@ where
 	}

 	// The original create event must be in the auth events
-	if !matches!(
-		auth_events.get(&(StateEventType::RoomCreate, String::new().into())),
-		Some(_) | None
-	) {
+	if !auth_events.contains_key(&(StateEventType::RoomCreate, String::new().into())) {
 		return Err!(Request(InvalidParam("Incoming event refers to wrong create event.")));
 	}

@@ -6,6 +6,7 @@ use conduwuit::{
 	trace,
 	utils::stream::{BroadbandExt, ReadyExt},
 	warn,
+	info
 };
 use futures::{FutureExt, StreamExt, future::ready};
 use ruma::{CanonicalJsonValue, RoomId, ServerName, events::StateEventType};

@@ -149,7 +150,7 @@ where
 	let extremities: Vec<_> = self
 		.services
 		.state
-		.get_forward_extremities(room_id)
+		.get_forward_extremities(room_id, &state_lock)
 		.map(ToOwned::to_owned)
 		.ready_filter(|event_id| {
 			// Remove any that are referenced by this incoming event's prev_events

@@ -167,6 +168,8 @@ where
 		.collect()
 		.await;

+	if extremities.len() == 0 { info!("Retained zero extremities when upgrading outlier PDU to timeline PDU with {} previous events, event id: {}", incoming_pdu.prev_events.len(), incoming_pdu.event_id) }
+
 	debug!(
 		"Retained {} extremities checked against {} prev_events",
 		extremities.len(),

@@ -388,6 +388,7 @@ impl Service {
 	pub fn get_forward_extremities<'a>(
 		&'a self,
 		room_id: &'a RoomId,
+		_state_lock: &'a RoomMutexGuard,
 	) -> impl Stream<Item = &EventId> + Send + '_ {
 		let prefix = (room_id, Interfix);

@@ -42,7 +42,7 @@ pub async fn create_hash_and_sign_event(
 	let prev_events: Vec<OwnedEventId> = self
 		.services
 		.state
-		.get_forward_extremities(room_id)
+		.get_forward_extremities(room_id, _mutex_lock)
 		.take(20)
 		.map(Into::into)
 		.collect()

@@ -401,16 +401,10 @@ impl Service {

 	fn num_senders(args: &crate::Args<'_>) -> usize {
 		const MIN_SENDERS: usize = 1;
-		// Limit the number of senders to the number of workers threads or number of
-		// cores, conservatively.
-		let max_senders = args
-			.server
-			.metrics
-			.num_workers()
-			.min(available_parallelism());
+		// Limit the maximum number of senders to the number of cores.
+		let max_senders = available_parallelism();

-		// If the user doesn't override the default 0, this is intended to then default
-		// to 1 for now as multiple senders is experimental.
+		// default is 4 senders. clamp between 1 and core count.
 		args.server
 			.config
 			.sender_workers

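The tail of this hunk is cut off in the comparison view, so the final expression is not shown. Going only by the comment "clamp between 1 and core count", the logic presumably resembles this hypothetical sketch (the function shape and names here are assumptions, not code from the diff):

```rust
// Hypothetical sketch of the sender-count selection described by the
// comment above; the real code reads these values from the server config
// and runtime rather than taking them as parameters.
fn num_senders(configured: usize, cores: usize) -> usize {
    const MIN_SENDERS: usize = 1;
    // Limit the maximum number of senders to the number of cores.
    let max_senders = cores;
    configured.clamp(MIN_SENDERS, max_senders)
}

fn main() {
    assert_eq!(num_senders(4, 8), 4);  // configured value within bounds
    assert_eq!(num_senders(0, 8), 1);  // floored at MIN_SENDERS
    assert_eq!(num_senders(64, 8), 8); // capped at the core count
}
```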