Compare commits


38 commits

Author SHA1 Message Date
Jacob Taylor
8a596b272d vehicle loan documentation now available at window 7
Some checks failed
Release Docker Image / define-variables (push) Failing after 1s
Release Docker Image / build-image (linux/amd64, linux-amd64) (push) Has been skipped
Release Docker Image / build-image (linux/arm64, linux-arm64) (push) Has been skipped
Release Docker Image / merge (push) Has been skipped
Rust Checks / Format (push) Failing after 1s
Rust Checks / Clippy (push) Failing after 8s
Rust Checks / Cargo Test (push) Failing after 17s
2025-06-21 06:52:40 -07:00
Jacob Taylor
e42a01eecf partially revert assertions
Some checks failed
Release Docker Image / define-variables (push) Failing after 7s
Release Docker Image / build-image (linux/amd64, linux-amd64) (push) Has been skipped
Release Docker Image / build-image (linux/arm64, linux-arm64) (push) Has been skipped
Release Docker Image / merge (push) Has been skipped
Rust Checks / Format (push) Failing after 6s
Rust Checks / Clippy (push) Failing after 28s
Rust Checks / Cargo Test (push) Failing after 28s
2025-06-19 20:23:49 -07:00
nexy7574
d02e032cbd Add suggested assertions to prevent potentially broken extremities
Some checks failed
Release Docker Image / define-variables (push) Failing after 1s
Release Docker Image / build-image (linux/amd64, linux-amd64) (push) Has been skipped
Release Docker Image / build-image (linux/arm64, linux-arm64) (push) Has been skipped
Release Docker Image / merge (push) Has been skipped
Rust Checks / Format (push) Failing after 2s
Rust Checks / Clippy (push) Failing after 32s
Rust Checks / Cargo Test (push) Failing after 31s
2025-06-19 08:09:54 -07:00
Jacob Taylor
660c260b8d lock the getter instead ??? c/o M
Some checks failed
Release Docker Image / define-variables (push) Failing after 1s
Release Docker Image / build-image (linux/amd64, linux-amd64) (push) Has been skipped
Release Docker Image / build-image (linux/arm64, linux-arm64) (push) Has been skipped
Release Docker Image / merge (push) Has been skipped
Rust Checks / Format (push) Failing after 1s
Rust Checks / Clippy (push) Failing after 24s
Rust Checks / Cargo Test (push) Failing after 25s
2025-06-18 16:57:19 -07:00
Jacob Taylor
cd29b06221 Revert "I am calling you today about your car's extended warranty"
Some checks failed
Rust Checks / Format (push) Failing after 2s
Release Docker Image / define-variables (push) Failing after 5s
Release Docker Image / build-image (linux/amd64, linux-amd64) (push) Has been skipped
Release Docker Image / build-image (linux/arm64, linux-arm64) (push) Has been skipped
Release Docker Image / merge (push) Has been skipped
Rust Checks / Clippy (push) Failing after 48s
Rust Checks / Cargo Test (push) Failing after 48s
the ringing was too loud

This reverts commit a3361f215b.
2025-06-18 16:10:20 -07:00
Jacob Taylor
a3361f215b I am calling you today about your car's extended warranty 2025-06-18 15:51:23 -07:00
Jacob Taylor
540f8c2100 fix warns from cargo 2025-06-18 15:51:23 -07:00
Jacob Taylor
bc515aad31 update 1 document 2025-06-18 15:51:23 -07:00
Jacob Taylor
37d17f603e delete unused num_threads function 2025-06-18 15:51:23 -07:00
Jacob Taylor
29265473b8 make fetching key room events less smart 2025-06-18 14:39:08 -07:00
Jacob Taylor
e4a4ed71e2 change rocksdb default error level to info from error 2025-06-18 14:39:08 -07:00
Jacob Taylor
4e89fe8882 scale rocksdb background jobs and subcompactions 2025-06-18 14:39:08 -07:00
Jacob Taylor
461c5976d6 change rocksdb stats level to 3 2025-06-18 14:39:08 -07:00
Jacob Taylor
d7e2c263d9 add more parallelism_scaled and make them public 2025-06-18 14:39:08 -07:00
Jacob Taylor
14174a79ba update sender workers documentation 2025-06-18 14:39:08 -07:00
nexy7574
ba43217696 modify more log strings so they're more useful than not 2025-06-18 14:39:08 -07:00
nexy7574
2b49cd72fe When in doubt, log all the things 2025-06-18 14:39:08 -07:00
Jacob Taylor
e7478f1eac make sender workers default better and clamp value to core count 2025-06-18 14:39:08 -07:00
Jacob Taylor
b9248c879d better roomid_spacehierarchy_cache_capacity 2025-06-18 14:39:08 -07:00
nexy7574
3db5837696 log which room struggled to get mainline depth 2025-06-18 14:39:08 -07:00
nexy7574
7adb1d9d30 more logs 2025-06-18 14:39:08 -07:00
nexy7574
9875e25e1e Unsafe, untested, and potentially overeager PDU sanity checks 2025-06-18 14:39:08 -07:00
nexy7574
c76dc56f5b Fix room ID check 2025-06-18 14:39:08 -07:00
nexy7574
bc2c77f56a Kick up a fuss when m.room.create is unfindable 2025-06-18 14:39:08 -07:00
nexy7574
4d5434dd1d Note about ruma#2064 in TODO 2025-06-18 14:39:08 -07:00
nexy7574
da7fb29696 fix an auth rule not applying correctly 2025-06-18 14:39:08 -07:00
Jacob Taylor
844f058756 better stateinfo_cache_capacity default 2025-06-18 14:39:08 -07:00
Jacob Taylor
d013e0bded enable converged 6g at the edge in continuwuity 2025-06-18 14:39:08 -07:00
nexy7574
2deb8df924 Always calculate state diff IDs in syncv3
seemingly fixes #779
2025-06-18 14:39:08 -07:00
Jacob Taylor
604cb9657f upgrade some settings to enable 5g in continuwuity 2025-06-18 14:39:08 -07:00
Jacob Taylor
43f9339bec add futures::FutureExt to make cb15ac3c01 work 2025-06-18 14:39:08 -07:00
Jason Volk
3ecd496af0 Mitigate large futures
Signed-off-by: Jason Volk <jason@zemos.net>
2025-06-18 14:39:08 -07:00
Jacob Taylor
61feec28c6 bump the number of allowed immutable memtables by 1, to allow for greater flood protection
this should probably not be applied if you have rocksdb_atomic_flush = false (the default)
2025-06-18 14:39:08 -07:00
Jacob Taylor
9ff45a5587 probably incorrectly delete support for non-standardized matrix srv record 2025-06-18 14:39:08 -07:00
Jacob Taylor
fa74c3487a Fix spaces rooms list load error. rev2 2025-06-18 14:39:08 -07:00
Jade Ellis
908efef692 fix: Filter out invalid replacements from bundled aggregations 2025-06-18 14:39:08 -07:00
Jade Ellis
69c66af0ea feat: Add bundled aggregations support
Add support for the m.replace and m.reference bundled
aggregations.
This should fix plenty of subtle client issues.
Threads are not included in the new code as they have
historically been written to the database. Replacing the
old system would result in issues when switching away from
continuwuity, so saved for later.
Some TODOs have been left re event visibility and ignored users.
These should be OK for now, though.
2025-06-18 14:39:08 -07:00
Jade Ellis
44497898f7 refactor: Promote handling unsigned data out of timeline
Also fixes:
- Transaction IDs leaking in event route
- Age not being set for event relations or threads
- Both of the above for search results

Notes down concern with relations table
2025-06-18 14:39:08 -07:00
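
Two of the commit messages above carry enough technical detail to be worth illustrating. First, 69c66af0ea describes bundled aggregations: per the Matrix specification, the server folds a summary of related events into the original event's `unsigned` field under `m.relations`. The following is a hedged sketch of that wire shape only, not an excerpt of continuwuity's implementation; the identifiers and timestamp are placeholders.

// Hedged example of the event shape clients receive with bundled aggregations.
// Shape follows the Matrix spec; all values are placeholders.
use serde_json::{json, Value};

fn example_event_with_bundled_aggregations() -> Value {
    json!({
        "event_id": "$original_event",
        "type": "m.room.message",
        "content": { "msgtype": "m.text", "body": "original text" },
        "unsigned": {
            "m.relations": {
                // m.replace: a stripped-down copy of the most recent edit
                "m.replace": {
                    "event_id": "$latest_edit",
                    "origin_server_ts": 1750282748000_u64,
                    "sender": "@alice:example.org"
                },
                // m.reference: the events that reference this one
                "m.reference": {
                    "chunk": [ { "event_id": "$referencing_event" } ]
                }
            }
        }
    })
}

Second, 61feec28c6 bumps the number of allowed immutable memtables and notes the caveat about rocksdb_atomic_flush. Below is a minimal sketch of the RocksDB options that commit message alludes to, using setter names from the rust-rocksdb crate; whether continuwuity applies them exactly this way is an assumption, and the base value is a placeholder.

// Hypothetical tuning sketch, not continuwuity's actual database code.
use rocksdb::Options;

fn tuned_options(base_memtables: i32, atomic_flush: bool) -> Options {
    let mut opts = Options::default();
    // max_write_buffer_number counts the active memtable plus the immutable
    // (flushed-but-not-yet-persisted) ones; +1 leaves room for one extra
    // immutable memtable to queue up during a write flood.
    opts.set_max_write_buffer_number(base_memtables + 1);
    // The commit note warns the bump may not be wanted when
    // rocksdb_atomic_flush = false (the default).
    opts.set_atomic_flush(atomic_flush);
    opts
}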
9 changed files with 270 additions and 472 deletions

View file

@@ -49,7 +49,6 @@ jobs:
 const platforms = ['linux/amd64', 'linux/arm64']
 core.setOutput('build_matrix', JSON.stringify({
 platform: platforms,
-target_cpu: ['base'],
 include: platforms.map(platform => { return {
 platform,
 slug: platform.replace('/', '-')
@@ -67,8 +66,6 @@ jobs:
 strategy:
 matrix:
 {
-"target_cpu": ["base"],
-"profile": ["release"],
 "include":
 [
 { "platform": "linux/amd64", "slug": "linux-amd64" },
@@ -76,7 +73,6 @@ jobs:
 ],
 "platform": ["linux/amd64", "linux/arm64"],
 }
 steps:
 - name: Echo strategy
 run: echo '${{ toJSON(fromJSON(needs.define-variables.outputs.build_matrix)) }}'
@@ -144,8 +140,8 @@ jobs:
 uses: actions/cache@v3
 with:
 path: |
-cargo-target-${{ matrix.target_cpu }}-${{ matrix.slug }}-${{ matrix.profile }}
-key: cargo-target-${{ matrix.target_cpu }}-${{ matrix.slug }}-${{ matrix.profile }}-${{hashFiles('**/Cargo.lock') }}-${{steps.rust-toolchain.outputs.rustc_version}}
+cargo-target-${{ matrix.slug }}
+key: cargo-target-${{ matrix.slug }}-${{hashFiles('**/Cargo.lock') }}-${{steps.rust-toolchain.outputs.rustc_version}}
 - name: Cache apt cache
 id: cache-apt
 uses: actions/cache@v3
@@ -167,9 +163,9 @@ jobs:
 {
 ".cargo/registry": "/usr/local/cargo/registry",
 ".cargo/git/db": "/usr/local/cargo/git/db",
-"cargo-target-${{ matrix.target_cpu }}-${{ matrix.slug }}-${{ matrix.profile }}": {
+"cargo-target-${{ matrix.slug }}": {
 "target": "/app/target",
-"id": "cargo-target-${{ matrix.target_cpu }}-${{ matrix.slug }}-${{ matrix.profile }}"
+"id": "cargo-target-${{ matrix.platform }}"
 },
 "var-cache-apt-${{ matrix.slug }}": "/var/cache/apt",
 "var-lib-apt-${{ matrix.slug }}": "/var/lib/apt"

Cargo.lock (generated): 641 lines changed

File diff suppressed because it is too large.

View file

@@ -1,16 +1,15 @@
 ARG RUST_VERSION=1
-ARG DEBIAN_VERSION=bookworm
 FROM --platform=$BUILDPLATFORM docker.io/tonistiigi/xx AS xx
-FROM --platform=$BUILDPLATFORM rust:${RUST_VERSION}-slim-${DEBIAN_VERSION} AS base
-FROM --platform=$BUILDPLATFORM rust:${RUST_VERSION}-slim-${DEBIAN_VERSION} AS toolchain
+FROM --platform=$BUILDPLATFORM rust:${RUST_VERSION}-slim-bookworm AS base
+FROM --platform=$BUILDPLATFORM rust:${RUST_VERSION}-slim-bookworm AS toolchain
 # Prevent deletion of apt cache
 RUN rm -f /etc/apt/apt.conf.d/docker-clean
 # Match Rustc version as close as possible
 # rustc -vV
-ARG LLVM_VERSION=20
+ARG LLVM_VERSION=19
 # ENV RUSTUP_TOOLCHAIN=${RUST_VERSION}
 # Install repo tools
@@ -20,18 +19,10 @@ ARG LLVM_VERSION=20
 RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
 --mount=type=cache,target=/var/lib/apt,sharing=locked \
 apt-get update && apt-get install -y \
-pkg-config make jq \
-curl git software-properties-common \
+clang-${LLVM_VERSION} lld-${LLVM_VERSION} pkg-config make jq \
+curl git \
 file
-# LLVM packages
-RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
---mount=type=cache,target=/var/lib/apt,sharing=locked \
-curl https://apt.llvm.org/llvm.sh > llvm.sh && \
-chmod +x llvm.sh && \
-./llvm.sh ${LLVM_VERSION} && \
-rm llvm.sh
 # Create symlinks for LLVM tools
 RUN <<EOF
 set -o xtrace
@@ -48,7 +39,7 @@ EOF
 # Developer tool versions
 # renovate: datasource=github-releases depName=cargo-bins/cargo-binstall
-ENV BINSTALL_VERSION=1.13.0
+ENV BINSTALL_VERSION=1.12.3
 # renovate: datasource=github-releases depName=psastras/sbom-rs
 ENV CARGO_SBOM_VERSION=0.9.1
 # renovate: datasource=crate depName=lddtree
@@ -149,12 +140,11 @@ ENV GIT_REMOTE_COMMIT_URL=$GIT_REMOTE_COMMIT_URL
 ENV CONDUWUIT_VERSION_EXTRA=$CONDUWUIT_VERSION_EXTRA
 ENV CONTINUWUITY_VERSION_EXTRA=$CONTINUWUITY_VERSION_EXTRA
-ARG RUST_PROFILE=release
 # Build the binary
 RUN --mount=type=cache,target=/usr/local/cargo/registry \
 --mount=type=cache,target=/usr/local/cargo/git/db \
---mount=type=cache,target=/app/target,id=cargo-target-${TARGET_CPU}-${TARGETPLATFORM}-${RUST_PROFILE} \
+--mount=type=cache,target=/app/target,id=cargo-target-${TARGETPLATFORM} \
 bash <<'EOF'
 set -o allexport
 set -o xtrace
@@ -163,7 +153,7 @@ RUN --mount=type=cache,target=/usr/local/cargo/registry \
 jq -r ".target_directory"))
 mkdir /out/sbin
 PACKAGE=conduwuit
-xx-cargo build --locked --profile ${RUST_PROFILE} \
+xx-cargo build --locked --release \
 -p $PACKAGE;
 BINARIES=($(cargo metadata --no-deps --format-version 1 | \
 jq -r ".packages[] | select(.name == \"$PACKAGE\") | .targets[] | select( .kind | map(. == \"bin\") | any ) | .name"))

View file

@@ -1,5 +1,3 @@
 # Continuwuity for Arch Linux
-Continuwuity is available on the `archlinuxcn` repository and AUR, with the same package name `continuwuity`, which includes latest taggged version. The development version is available on AUR as `continuwuity-git`
-Simply install the `continuwuity` package. Configure the service in `/etc/conduwuit/conduwuit.toml`, then enable/start the continuwuity.service.
+Continuwuity does not have any Arch Linux packages at this time.

View file

@@ -9,7 +9,7 @@
 # If you're having trouble making the relevant changes, bug a maintainer.
 [toolchain]
-channel = "1.87.0"
+channel = "1.86.0"
 profile = "minimal"
 components = [
 # For rust-analyzer
@@ -19,3 +19,11 @@ components = [
 "rustfmt",
 "clippy",
 ]
+targets = [
+#"x86_64-apple-darwin",
+"x86_64-unknown-linux-gnu",
+"x86_64-unknown-linux-musl",
+"aarch64-unknown-linux-musl",
+"aarch64-unknown-linux-gnu",
+#"aarch64-apple-darwin",
+]

View file

@@ -239,11 +239,10 @@ pub(super) async fn get_remote_pdu(
 })
 .await
 {
-| Err(e) => {
+| Err(e) =>
 return Err!(
 "Remote server did not have PDU or failed sending request to remote server: {e}"
-);
-},
+),
 | Ok(response) => {
 let json: CanonicalJsonObject =
 serde_json::from_str(response.pdu.get()).map_err(|e| {
@@ -385,9 +384,8 @@ pub(super) async fn change_log_level(&self, filter: Option<String>, reset: bool)
 .reload
 .reload(&old_filter_layer, Some(handles))
 {
-| Err(e) => {
-return Err!("Failed to modify and reload the global tracing log level: {e}");
-},
+| Err(e) =>
+return Err!("Failed to modify and reload the global tracing log level: {e}"),
 | Ok(()) => {
 let value = &self.services.server.config.log;
 let out = format!("Successfully changed log level back to config value {value}");
@@ -410,9 +408,8 @@ pub(super) async fn change_log_level(&self, filter: Option<String>, reset: bool)
 .reload(&new_filter_layer, Some(handles))
 {
 | Ok(()) => return self.write_str("Successfully changed log level").await,
-| Err(e) => {
-return Err!("Failed to modify and reload the global tracing log level: {e}");
-},
+| Err(e) =>
+return Err!("Failed to modify and reload the global tracing log level: {e}"),
 }
 }
@@ -532,7 +529,6 @@ pub(super) async fn force_set_room_state_from_server(
 &self,
 room_id: OwnedRoomId,
 server_name: OwnedServerName,
-at_event: Option<OwnedEventId>,
 ) -> Result {
 if !self
 .services
@@ -544,18 +540,13 @@ pub(super) async fn force_set_room_state_from_server(
 return Err!("We are not participating in the room / we don't know about the room ID.");
 }
-let at_event_id = match at_event {
-| Some(event_id) => event_id,
-| None => self
+let first_pdu = self
 .services
 .rooms
 .timeline
 .latest_pdu_in_room(&room_id)
 .await
-.map_err(|_| err!(Database("Failed to find the latest PDU in database")))?
-.event_id
-.clone(),
-};
+.map_err(|_| err!(Database("Failed to find the latest PDU in database")))?;
 let room_version = self.services.rooms.state.get_room_version(&room_id).await?;
@@ -566,7 +557,7 @@ pub(super) async fn force_set_room_state_from_server(
 .sending
 .send_federation_request(&server_name, get_room_state::v1::Request {
 room_id: room_id.clone(),
-event_id: at_event_id,
+event_id: first_pdu.event_id.clone(),
 })
 .await?;

View file

@@ -177,9 +177,6 @@ pub(super) enum DebugCommand {
 room_id: OwnedRoomId,
 /// The server we will use to query the room state for
 server_name: OwnedServerName,
-/// The event ID of the latest known PDU in the room. Will be found
-/// automatically if not provided.
-event_id: Option<OwnedEventId>,
 },
 /// - Runs a server name through conduwuit's true destination resolution

View file

@@ -1823,9 +1823,9 @@ pub struct Config {
 pub stream_amplification: usize,
 /// Number of sender task workers; determines sender parallelism. Default is
-/// core count. Override by setting a different value.
+/// '4'. Override by setting a different value. Values clamped 1 to core count.
 ///
-/// default: core count
+/// default: 4
 #[serde(default = "default_sender_workers")]
 pub sender_workers: usize,
@@ -2312,7 +2312,7 @@ fn default_stream_width_scale() -> f32 { 1.0 }
 fn default_stream_amplification() -> usize { 1024 }
-fn default_sender_workers() -> usize { parallelism_scaled(1) }
+fn default_sender_workers() -> usize { 4 }
 fn default_client_receive_timeout() -> u64 { 75 }
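
The incoming side of the sender_workers hunks above documents a default of 4 with the configured value clamped between 1 and the core count, presumably the behaviour introduced by commit e7478f1eac in the list above. A minimal sketch of such a clamp, assuming it is applied wherever the config value is read; the function name and placement are illustrative, not continuwuity's exact code.

// Illustrative only; the real clamping site and names may differ.
use std::thread::available_parallelism;

fn effective_sender_workers(configured: usize) -> usize {
    let cores = available_parallelism().map(|n| n.get()).unwrap_or(1);
    // "Values clamped 1 to core count": never zero workers, never more than cores.
    configured.clamp(1, cores)
}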

View file

@@ -976,8 +976,9 @@ impl Service {
 state_lock: &'a RoomMutexGuard,
 ) -> Result<Option<RawPduId>>
 where
-Leaves: Iterator<Item = &'a EventId> + Send + 'a,
+Leaves: Iterator<Item = &'a EventId> + Send + Clone + 'a,
 {
+assert!(new_room_leaves.clone().count() > 0, "extremities are empty");
 // We append to state before appending the pdu, so we don't have a moment in
 // time with the pdu without it's state. This is okay because append_pdu can't
 // fail.
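
A note on the final hunk: Iterator::count consumes the iterator it is called on, which is why the added assertion clones new_room_leaves first and why the Leaves bound gains Clone. A self-contained illustration of the pattern (types simplified to plain &str items; the real signature uses &EventId and returns a PDU result):

// Simplified illustration of the clone-then-count assertion pattern.
fn check_extremities<'a, Leaves>(new_room_leaves: Leaves) -> Leaves
where
    Leaves: Iterator<Item = &'a str> + Clone + 'a,
{
    // count() consumes an iterator, so clone before counting to keep the
    // original usable afterwards.
    assert!(new_room_leaves.clone().count() > 0, "extremities are empty");
    new_room_leaves
}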