Compare commits

..

32 commits

Author SHA1 Message Date
Jacob Taylor
f6f9730e19 sender_workers scaling. this time, with feeling!
2025-06-21 08:13:30 -07:00
Jacob Taylor
3fae827e2b vehicle loan documentation now available at window 7 2025-06-21 08:06:15 -07:00
Jacob Taylor
d157ff25a4 lock the getter instead ??? c/o M 2025-06-21 08:04:00 -07:00
Jacob Taylor
d88dd042f4 make fetching key room events less smart 2025-06-21 08:02:52 -07:00
Jacob Taylor
af5330f642 change rocksdb stats level to 3
scale rocksdb background jobs and subcompactions

change rocksdb default log level to info from error

delete unused num_threads function

fix warns from cargo
2025-06-21 08:02:49 -07:00
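For context, the knobs this commit adjusts map onto the rust-rocksdb Options API roughly as follows. This is a hedged sketch, not the actual patch, and the statistics-level setting is omitted because its exact binding is uncertain:

    use rocksdb::{LogLevel, Options};

    fn tuned_db_options(threads: i32) -> Options {
        let mut opts = Options::default();
        // Scale background compaction/flush jobs with available parallelism.
        opts.set_max_background_jobs(threads);
        // Let a single compaction be split into concurrent subcompactions.
        opts.set_max_subcompactions(threads as u32);
        // Log at Info rather than Error so operational messages are visible.
        opts.set_log_level(LogLevel::Info);
        opts
    }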
nexy7574
da659f17ab modify more log strings so they're more useful than not 2025-06-21 08:02:28 -07:00
nexy7574
d3321390df When in doubt, log all the things 2025-06-21 08:02:28 -07:00
nexy7574
a02a82f316 log which room struggled to get mainline depth 2025-06-21 08:02:28 -07:00
nexy7574
4e92962694 more logs 2025-06-21 08:02:28 -07:00
nexy7574
86f369d8d6 Unsafe, untested, and potentially overeager PDU sanity checks 2025-06-21 08:02:28 -07:00
nexy7574
01d1224d1c Fix room ID check 2025-06-21 08:02:28 -07:00
nexy7574
65e2061447 Kick up a fuss when m.room.create is unfindable 2025-06-21 08:02:28 -07:00
nexy7574
9246f3be44 Note about ruma#2064 in TODO 2025-06-21 08:02:28 -07:00
nexy7574
c23912fd42 fix an auth rule not applying correctly 2025-06-21 08:02:28 -07:00
nexy7574
94c4d85716 Always calculate state diff IDs in syncv3
seemingly fixes #779
2025-06-21 08:02:28 -07:00
Jacob Taylor
6ef00341ea upgrade some settings to enable 5g in continuwuity
enable converged 6g at the edge in continuwuity

better stateinfo_cache_capacity default

better roomid_spacehierarchy_cache_capacity

make sender workers default better and clamp value to core count

update sender workers documentation

add more parallelism_scaled and make them public

update 1 document
2025-06-21 08:02:05 -07:00
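A plausible sketch of a parallelism_scaled helper and the core-count default/clamp described above; the helper's body and the clamp function are assumptions, while the default shown matches the config diff further down:

    use std::thread::available_parallelism;

    // Hypothetical helper: scale a base value by the detected core count.
    pub fn parallelism_scaled(base: usize) -> usize {
        base.saturating_mul(available_parallelism().map_or(1, |n| n.get()))
    }

    // Default for sender_workers as seen in the config diff below.
    fn default_sender_workers() -> usize { parallelism_scaled(1) }

    // Hypothetical clamp of a configured value to 1..=core count.
    fn clamp_sender_workers(configured: usize) -> usize {
        configured.clamp(1, parallelism_scaled(1))
    }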
Jacob Taylor
45ddec699a add futures::FutureExt to make cb15ac3c01 work 2025-06-21 07:52:20 -07:00
Jason Volk
ec8abacbf3 Mitigate large futures
Signed-off-by: Jason Volk <jason@zemos.net>
2025-06-21 07:52:20 -07:00
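These two commits go together: boxing a future via futures::FutureExt is a common way to keep a very large future's state machine off the caller's stack. A minimal illustration, not the project's code:

    use futures::future::BoxFuture;
    use futures::FutureExt; // provides .boxed()

    async fn heavy_work() -> u64 {
        // imagine a future with a very large state machine here
        42
    }

    // Boxing moves the big state machine onto the heap, so anything that
    // awaits it only holds a pointer-sized future.
    fn mitigated() -> BoxFuture<'static, u64> {
        heavy_work().boxed()
    }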
Jacob Taylor
43c5f2572e bump the number of allowed immutable memtables by 1, to allow for greater flood protection
this should probably not be applied if you have rocksdb_atomic_flush = false (the default)
2025-06-21 07:52:20 -07:00
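The relevant rust-rocksdb knobs look roughly like this (an illustration with made-up values, not the actual change):

    use rocksdb::Options;

    fn memtable_options() -> Options {
        let mut opts = Options::default();
        // One extra immutable memtable gives writes more headroom before a
        // pending flush starts stalling them ("flood protection").
        opts.set_max_write_buffer_number(3); // illustrative: previous value + 1
        // Per the note above, this is aimed at setups with atomic flush enabled.
        opts.set_atomic_flush(true);
        opts
    }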
Jacob Taylor
d263c52ddc probably incorrectly delete support for non-standardized matrix srv record 2025-06-21 07:52:20 -07:00
Jacob Taylor
4b0f2c3dfa Fix spaces rooms list load error. rev2 2025-06-21 07:52:20 -07:00
Jade Ellis
b5a6b56d53 fix: Filter out invalid replacements from bundled aggregations 2025-06-21 07:52:20 -07:00
Jade Ellis
5528911e2a feat: Add bundled aggregations support
Add support for the m.replace and m.reference bundled
aggregations.
This should fix plenty of subtle client issues.
Threads are not included in the new code as they have
historically been written to the database. Replacing the
old system would result in issues when switching away from
continuwuity, so saved for later.
Some TODOs have been left re event visibility and ignored users.
These should be OK for now, though.
2025-06-21 07:52:20 -07:00
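For reference, the bundle a server attaches under an event's unsigned.m.relations looks roughly like this (a sketch of the Matrix format with placeholder IDs, not continuwuity's code):

    use serde_json::{json, Value};

    fn example_bundled_relations() -> Value {
        json!({
            "m.relations": {
                // m.replace: a stripped copy of the most recent edit
                "m.replace": {
                    "event_id": "$latest_edit_event",
                    "origin_server_ts": 1_750_000_000_000_u64,
                    "sender": "@alice:example.org"
                },
                // m.reference: events that referenced this one
                "m.reference": {
                    "chunk": [
                        { "event_id": "$referencing_event" }
                    ]
                }
            }
        })
    }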
Jade Ellis
1b08e0e33d refactor: Promote handling unsigned data out of timeline
Also fixes:
- Transaction IDs leaking in event route
- Age not being set for event relations or threads
- Both of the above for search results

Notes down concern with relations table
2025-06-21 07:52:20 -07:00
Jade Ellis
add5c7052c
chore: Update lockfile
2025-06-20 21:51:53 +01:00
Jade Ellis
01200d9b54
build: Allow specifying build profile
Additionally splits caches by target CPU
2025-06-20 21:48:37 +01:00
Jade Ellis
0ba4a265be
build: Upgrade to Rust 1.87 2025-06-20 21:45:29 +01:00
Jade Ellis
08fbcbba69
build: Use newer LLVM for rust 1.87 2025-06-20 21:35:48 +01:00
Jade Ellis
b526935d45
build: Specify debian version 2025-06-20 21:35:03 +01:00
Jade Ellis
a737d845a4
chore: Don't specify targets in rust-toolchain 2025-06-20 21:25:34 +01:00
nex
e508b1197f feat: allow overriding the "most recent event" when forcing a state download (#853)
Add an option to select which event to set the state at, for the force-set-room-state admin command.

This allows us to work around issues where the latest PDU is one that remote servers don't know about (e.g. failed federation for whatever reason).

Closes #852

Reviewed-on: https://forgejo.ellis.link/continuwuation/continuwuity/pulls/853
Reviewed-by: Jade Ellis <jade@ellis.link>
Co-authored-by: nex <nex@noreply.forgejo.ellis.link>
Co-committed-by: nex <nex@noreply.forgejo.ellis.link>
2025-06-19 21:27:50 +00:00
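An invocation with the new optional event argument might look like the line below; the exact admin-command syntax is an assumption based on the diff, not quoted from documentation:

    !admin debug force-set-room-state-from-server !room:example.org example.org $known_good_event_id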
Kimiblock
d6fd30393c Update docs/deploying/arch-linux.md 2025-06-19 12:36:49 +00:00
9 changed files with 475 additions and 273 deletions

View file

@ -49,6 +49,7 @@ jobs:
const platforms = ['linux/amd64', 'linux/arm64']
core.setOutput('build_matrix', JSON.stringify({
platform: platforms,
target_cpu: ['base'],
include: platforms.map(platform => { return {
platform,
slug: platform.replace('/', '-')
@ -66,6 +67,8 @@ jobs:
strategy:
matrix:
{
"target_cpu": ["base"],
"profile": ["release"],
"include":
[
{ "platform": "linux/amd64", "slug": "linux-amd64" },
@ -73,6 +76,7 @@ jobs:
],
"platform": ["linux/amd64", "linux/arm64"],
}
steps:
- name: Echo strategy
run: echo '${{ toJSON(fromJSON(needs.define-variables.outputs.build_matrix)) }}'
@ -140,8 +144,8 @@ jobs:
uses: actions/cache@v3
with:
path: |
cargo-target-${{ matrix.slug }}
key: cargo-target-${{ matrix.slug }}-${{hashFiles('**/Cargo.lock') }}-${{steps.rust-toolchain.outputs.rustc_version}}
cargo-target-${{ matrix.target_cpu }}-${{ matrix.slug }}-${{ matrix.profile }}
key: cargo-target-${{ matrix.target_cpu }}-${{ matrix.slug }}-${{ matrix.profile }}-${{hashFiles('**/Cargo.lock') }}-${{steps.rust-toolchain.outputs.rustc_version}}
- name: Cache apt cache
id: cache-apt
uses: actions/cache@v3
@ -163,9 +167,9 @@ jobs:
{
".cargo/registry": "/usr/local/cargo/registry",
".cargo/git/db": "/usr/local/cargo/git/db",
"cargo-target-${{ matrix.slug }}": {
"cargo-target-${{ matrix.target_cpu }}-${{ matrix.slug }}-${{ matrix.profile }}": {
"target": "/app/target",
"id": "cargo-target-${{ matrix.platform }}"
"id": "cargo-target-${{ matrix.target_cpu }}-${{ matrix.slug }}-${{ matrix.profile }}"
},
"var-cache-apt-${{ matrix.slug }}": "/var/cache/apt",
"var-lib-apt-${{ matrix.slug }}": "/var/lib/apt"

Cargo.lock (generated): 647 changes

File diff suppressed because it is too large

View file

@ -1,15 +1,16 @@
ARG RUST_VERSION=1
ARG DEBIAN_VERSION=bookworm
FROM --platform=$BUILDPLATFORM docker.io/tonistiigi/xx AS xx
FROM --platform=$BUILDPLATFORM rust:${RUST_VERSION}-slim-bookworm AS base
FROM --platform=$BUILDPLATFORM rust:${RUST_VERSION}-slim-bookworm AS toolchain
FROM --platform=$BUILDPLATFORM rust:${RUST_VERSION}-slim-${DEBIAN_VERSION} AS base
FROM --platform=$BUILDPLATFORM rust:${RUST_VERSION}-slim-${DEBIAN_VERSION} AS toolchain
# Prevent deletion of apt cache
RUN rm -f /etc/apt/apt.conf.d/docker-clean
# Match Rustc version as close as possible
# rustc -vV
ARG LLVM_VERSION=19
ARG LLVM_VERSION=20
# ENV RUSTUP_TOOLCHAIN=${RUST_VERSION}
# Install repo tools
@ -19,10 +20,18 @@ ARG LLVM_VERSION=19
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y \
clang-${LLVM_VERSION} lld-${LLVM_VERSION} pkg-config make jq \
curl git \
pkg-config make jq \
curl git software-properties-common \
file
# LLVM packages
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
curl https://apt.llvm.org/llvm.sh > llvm.sh && \
chmod +x llvm.sh && \
./llvm.sh ${LLVM_VERSION} && \
rm llvm.sh
# Create symlinks for LLVM tools
RUN <<EOF
set -o xtrace
@ -39,7 +48,7 @@ EOF
# Developer tool versions
# renovate: datasource=github-releases depName=cargo-bins/cargo-binstall
ENV BINSTALL_VERSION=1.12.3
ENV BINSTALL_VERSION=1.13.0
# renovate: datasource=github-releases depName=psastras/sbom-rs
ENV CARGO_SBOM_VERSION=0.9.1
# renovate: datasource=crate depName=lddtree
@ -140,11 +149,12 @@ ENV GIT_REMOTE_COMMIT_URL=$GIT_REMOTE_COMMIT_URL
ENV CONDUWUIT_VERSION_EXTRA=$CONDUWUIT_VERSION_EXTRA
ENV CONTINUWUITY_VERSION_EXTRA=$CONTINUWUITY_VERSION_EXTRA
ARG RUST_PROFILE=release
# Build the binary
RUN --mount=type=cache,target=/usr/local/cargo/registry \
--mount=type=cache,target=/usr/local/cargo/git/db \
--mount=type=cache,target=/app/target,id=cargo-target-${TARGETPLATFORM} \
--mount=type=cache,target=/app/target,id=cargo-target-${TARGET_CPU}-${TARGETPLATFORM}-${RUST_PROFILE} \
bash <<'EOF'
set -o allexport
set -o xtrace
@ -153,7 +163,7 @@ RUN --mount=type=cache,target=/usr/local/cargo/registry \
jq -r ".target_directory"))
mkdir /out/sbin
PACKAGE=conduwuit
xx-cargo build --locked --release \
xx-cargo build --locked --profile ${RUST_PROFILE} \
-p $PACKAGE;
BINARIES=($(cargo metadata --no-deps --format-version 1 | \
jq -r ".packages[] | select(.name == \"$PACKAGE\") | .targets[] | select( .kind | map(. == \"bin\") | any ) | .name"))

View file

@ -1,3 +1,5 @@
# Continuwuity for Arch Linux
Continuwuity does not have any Arch Linux packages at this time.
Continuwuity is available on the `archlinuxcn` repository and AUR, with the same package name `continuwuity`, which includes the latest tagged version. The development version is available on AUR as `continuwuity-git`.
Simply install the `continuwuity` package. Configure the service in `/etc/conduwuit/conduwuit.toml`, then enable/start the continuwuity.service.

View file

@ -9,7 +9,7 @@
# If you're having trouble making the relevant changes, bug a maintainer.
[toolchain]
channel = "1.86.0"
channel = "1.87.0"
profile = "minimal"
components = [
# For rust-analyzer
@ -19,11 +19,3 @@ components = [
"rustfmt",
"clippy",
]
targets = [
#"x86_64-apple-darwin",
"x86_64-unknown-linux-gnu",
"x86_64-unknown-linux-musl",
"aarch64-unknown-linux-musl",
"aarch64-unknown-linux-gnu",
#"aarch64-apple-darwin",
]

View file

@ -239,10 +239,11 @@ pub(super) async fn get_remote_pdu(
})
.await
{
| Err(e) =>
| Err(e) => {
return Err!(
"Remote server did not have PDU or failed sending request to remote server: {e}"
),
);
},
| Ok(response) => {
let json: CanonicalJsonObject =
serde_json::from_str(response.pdu.get()).map_err(|e| {
@ -384,8 +385,9 @@ pub(super) async fn change_log_level(&self, filter: Option<String>, reset: bool)
.reload
.reload(&old_filter_layer, Some(handles))
{
| Err(e) =>
return Err!("Failed to modify and reload the global tracing log level: {e}"),
| Err(e) => {
return Err!("Failed to modify and reload the global tracing log level: {e}");
},
| Ok(()) => {
let value = &self.services.server.config.log;
let out = format!("Successfully changed log level back to config value {value}");
@ -408,8 +410,9 @@ pub(super) async fn change_log_level(&self, filter: Option<String>, reset: bool)
.reload(&new_filter_layer, Some(handles))
{
| Ok(()) => return self.write_str("Successfully changed log level").await,
| Err(e) =>
return Err!("Failed to modify and reload the global tracing log level: {e}"),
| Err(e) => {
return Err!("Failed to modify and reload the global tracing log level: {e}");
},
}
}
@ -529,6 +532,7 @@ pub(super) async fn force_set_room_state_from_server(
&self,
room_id: OwnedRoomId,
server_name: OwnedServerName,
at_event: Option<OwnedEventId>,
) -> Result {
if !self
.services
@ -540,13 +544,18 @@ pub(super) async fn force_set_room_state_from_server(
return Err!("We are not participating in the room / we don't know about the room ID.");
}
let first_pdu = self
.services
.rooms
.timeline
.latest_pdu_in_room(&room_id)
.await
.map_err(|_| err!(Database("Failed to find the latest PDU in database")))?;
let at_event_id = match at_event {
| Some(event_id) => event_id,
| None => self
.services
.rooms
.timeline
.latest_pdu_in_room(&room_id)
.await
.map_err(|_| err!(Database("Failed to find the latest PDU in database")))?
.event_id
.clone(),
};
let room_version = self.services.rooms.state.get_room_version(&room_id).await?;
@ -557,7 +566,7 @@ pub(super) async fn force_set_room_state_from_server(
.sending
.send_federation_request(&server_name, get_room_state::v1::Request {
room_id: room_id.clone(),
event_id: first_pdu.event_id.clone(),
event_id: at_event_id,
})
.await?;

View file

@ -177,6 +177,9 @@ pub(super) enum DebugCommand {
room_id: OwnedRoomId,
/// The server we will use to query the room state for
server_name: OwnedServerName,
/// The event ID of the latest known PDU in the room. Will be found
/// automatically if not provided.
event_id: Option<OwnedEventId>,
},
/// - Runs a server name through conduwuit's true destination resolution

View file

@ -1823,9 +1823,9 @@ pub struct Config {
pub stream_amplification: usize,
/// Number of sender task workers; determines sender parallelism. Default is
/// '4'. Override by setting a different value. Values clamped 1 to core count.
/// core count. Override by setting a different value.
///
/// default: 4
/// default: core count
#[serde(default = "default_sender_workers")]
pub sender_workers: usize,
@ -2312,7 +2312,7 @@ fn default_stream_width_scale() -> f32 { 1.0 }
fn default_stream_amplification() -> usize { 1024 }
fn default_sender_workers() -> usize { 4 }
fn default_sender_workers() -> usize { parallelism_scaled(1) }
fn default_client_receive_timeout() -> u64 { 75 }

View file

@ -976,9 +976,8 @@ impl Service {
state_lock: &'a RoomMutexGuard,
) -> Result<Option<RawPduId>>
where
Leaves: Iterator<Item = &'a EventId> + Send + Clone + 'a,
Leaves: Iterator<Item = &'a EventId> + Send + 'a,
{
assert!(new_room_leaves.clone().count() > 0, "extremities are empty");
// We append to state before appending the pdu, so we don't have a moment in
// time with the pdu without it's state. This is okay because append_pdu can't
// fail.