92 Commits

Author SHA1 Message Date
f7dd8b24f4 fix(ci): disable GitHub Actions cache and attestation for Gitea
Some checks failed
release server / resolve build metadata (push) Successful in 6s
release server / create release (push) Has been cancelled
release server / release summary (push) Has been cancelled
release server / build fluxer server (push) Has been cancelled
The GHA cache and attestation features don't exist in Gitea and were
causing the workflow to timeout/fail even though the build succeeded.
Disabled these GitHub-specific features.
2026-03-01 13:09:34 -05:00
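In docker/build-push-action terms, the change described above might look like the following (a hedged sketch — the step name, tags, and action version are assumptions, not the repository's actual workflow):

```yaml
# Hypothetical build step with GitHub-only features disabled so the
# workflow also runs on Gitea Actions.
- name: Build and push
  uses: docker/build-push-action@v6
  with:
    push: true
    tags: git.i5.wtf/owner/fluxer-server:latest
    # No cache-from/cache-to with type=gha — the GHA cache API does
    # not exist on Gitea, so those options hang or fail the step.
    provenance: false  # attestation/provenance is GitHub-specific
```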
84ec7653d2 docs: document successful Docker build
Successfully built fluxer-server image for both amd64 and arm64.
Image pushed to Gitea container registry with multiple tags.
All critical patches from third-party guide applied.
2026-03-01 12:55:08 -05:00
2e9010da53 fix(docker): allow fluxer_app/scripts/build directory
Some checks failed
test cassandra-backup / Test latest Cassandra backup (push) Has been cancelled
The **/build pattern was blocking the scripts/build/ directory
which contains critical rspack build scripts like lingui.mjs.
Explicitly allowing this directory to be included.
2026-03-01 12:16:31 -05:00
a890b11bf2 fix(docker): reinstall dependencies after copying source
Some checks failed
release server / resolve build metadata (push) Successful in 8s
release server / build fluxer server (push) Failing after 3m21s
release server / create release (push) Has been skipped
release server / release summary (push) Failing after 5s
After copying the full package source, we need to run pnpm install again
to ensure all dependencies are properly linked and TypeScript can find
all required modules like idna-uts46-hx.
2026-03-01 12:00:53 -05:00
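The two-phase install the message describes can be sketched as a Dockerfile fragment (hypothetical — file names and flags are assumptions, not the project's actual Dockerfile):

```dockerfile
# First install from manifests only, to warm the pnpm store.
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
RUN pnpm install --frozen-lockfile
# Then copy the full source and install again so workspace packages
# are linked against the real sources and TypeScript can resolve
# modules like idna-uts46-hx.
COPY . .
RUN pnpm install --frozen-lockfile
```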
7c903e72e0 fix(docker): generate locale files before app build
Some checks failed
release server / resolve build metadata (push) Successful in 6s
release server / build fluxer server (push) Failing after 3m19s
release server / create release (push) Has been skipped
release server / release summary (push) Failing after 5s
The build script runs TypeScript compilation (tsgo) before lingui:compile,
causing it to fail when looking for @app/locales/*/messages.mjs files.
Running lingui:compile separately before the build fixes this.
2026-03-01 11:55:43 -05:00
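The ordering fix might look like this in the Dockerfile (a sketch — the package filter and script names are assumed from the commit message, not confirmed):

```dockerfile
# Generate @app/locales/*/messages.mjs before the main build, since
# the build script runs TypeScript compilation (tsgo) first and fails
# if the compiled locale files don't exist yet.
RUN pnpm --filter fluxer_app lingui:compile
RUN pnpm --filter fluxer_app build
```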
482b7dee25 fix(docker): properly allow locale and data files
Fixed .dockerignore syntax - was using '#!' instead of proper negation.
Now explicitly allowing fluxer_app/src/data/** and fluxer_app/src/locales/**
with proper negation syntax.
2026-03-01 11:38:01 -05:00
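The syntax error the message refers to is a minimal one-character issue: in .dockerignore (as in .gitignore), `#` starts a comment, so `#!pattern` is silently ignored rather than negating anything. Illustration:

```gitignore
# Wrong: '#' makes the whole line a comment, so nothing is re-included
#!fluxer_app/src/locales/**

# Right: a leading '!' re-includes paths excluded by earlier patterns
!fluxer_app/src/data/**
!fluxer_app/src/locales/**
```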
a258752adc fix(docker): allow emoji data and locale files in build
The .dockerignore was blocking critical files needed for frontend build:
- fluxer_app/src/data/emojis.json
- fluxer_app/src/locales/*/messages.*

These files must be included for the app build to succeed.
2026-03-01 11:31:32 -05:00
d848765cc2 fix(docker): skip typecheck in build - already validated in CI
Some checks failed
release server / resolve build metadata (push) Successful in 6s
release server / build fluxer server (push) Failing after 3m11s
release server / create release (push) Has been skipped
release server / release summary (push) Failing after 5s
The typecheck step was failing due to missing module declarations
and is redundant since CI already runs typecheck on every PR.
Removing this speeds up the Docker build.
2026-03-01 11:17:18 -05:00
3ad2ca08c3 fix(docker): apply critical patches from third-party guide
Some checks failed
release server / resolve build metadata (push) Successful in 6s
release server / build fluxer server (push) Failing after 1m31s
release server / create release (push) Has been skipped
release server / release summary (push) Failing after 5s
Based on https://gist.github.com/PaulMColeman/e7ef82e05035b24300d2ea1954527f10

Changes:
- Add ca-certificates for rustup HTTPS downloads
- Install Rust and WASM toolchain for frontend WebAssembly compilation
- Copy config directory needed for FLUXER_CONFIG env var
- Set FLUXER_CONFIG so rspack can derive API endpoints
- Update ENTRYPOINT to target @fluxer/server package specifically
- Fix .dockerignore to allow build scripts and locale files

These fixes address the most critical build issues documented in the
community guide for self-hosting Fluxer.
2026-03-01 11:12:56 -05:00
2f443dc661 fix(docker): correct package path from app to app_proxy
Some checks failed
release server / build fluxer server (push) Failing after 1m32s
release server / create release (push) Has been skipped
release server / release summary (push) Failing after 5s
release server / resolve build metadata (push) Successful in 7s
The packages/app directory doesn't exist; it should be packages/app_proxy
2026-03-01 11:06:05 -05:00
d977b35636 fix: use registry_token secret for Gitea authentication
Some checks failed
release server / resolve build metadata (push) Successful in 12s
release server / build fluxer server (push) Failing after 3m40s
release server / create release (push) Has been skipped
release server / release summary (push) Failing after 5s
test cassandra-backup / Test latest Cassandra backup (push) Has been cancelled
Gitea doesn't allow secrets with GITEA_ or GITHUB_ prefixes.
Using registry_token secret instead.
2026-03-01 10:51:58 -05:00
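A login step using the renamed secret might look like this (hedged sketch — the action version and username expression are assumptions):

```yaml
# Gitea rejects secrets whose names start with GITEA_ or GITHUB_,
# so the registry token is stored under a neutral name instead.
- name: Log in to Gitea registry
  uses: docker/login-action@v3
  with:
    registry: git.i5.wtf
    username: ${{ github.actor }}
    password: ${{ secrets.registry_token }}
```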
3577f5fb95 fix: update Docker registry authentication for Gitea
Some checks failed
release server / resolve build metadata (push) Successful in 6s
release server / build fluxer server (push) Failing after 13s
release server / create release (push) Has been skipped
release server / release summary (push) Failing after 4s
- Use GITEA_TOKEN or github.token for authentication
- Add documentation for manual token setup if needed
2026-03-01 10:47:04 -05:00
0aed4041b8 chore: adapt CI workflows for Gitea and uwu.lc deployment
Some checks failed
release server / resolve build metadata (push) Successful in 6s
release server / build fluxer server (push) Failing after 2m56s
release server / create release (push) Has been skipped
release server / release summary (push) Failing after 4s
- Update all workflow runners from blacksmith to ubuntu-latest
- Configure release-server.yaml for Gitea container registry (git.i5.wtf)
- Change trigger branch from canary to uwu
- Add NOTES.md documenting branch setup and changes
2026-03-01 10:34:43 -05:00
16b88bca3f chore: update LFS config for self-hosted Gitea instance 2026-03-01 10:30:04 -05:00
Jiralite
77a6897180 chore: add status page disclaimer to issue templates
Some checks failed
test cassandra-backup / Test latest Cassandra backup (push) Has been cancelled
2026-02-27 20:24:23 +00:00
Sky Walters
9e8a9dafb8 fix(api): Fix nats filter subject 2026-02-27 03:28:30 -05:00
Hampus Kraft
7b1aa6ff2e fix(api): various subtle memory leaks
(and some not so subtle ones, *cough* ReportService *cough*)
2026-02-27 04:51:36 +00:00
M0N7Y5
848269a4d4 Added support for developing in devcontainer (#480) 2026-02-23 00:24:48 +01:00
Hampus Kraft
fd59bc219c [skip ci] fix(gateway): perf optimisations 2026-02-22 18:47:45 +00:00
Hampus Kraft
d843d6f3f8 fix(gateway): harden REQUEST_GUILD_MEMBERS path against DoS floods 2026-02-22 13:52:53 +00:00
Hampus Kraft
4f5704fa1f fix(docs): remove unreferenced image 2026-02-21 16:57:09 +00:00
Hampus Kraft
f54f62ae3c fix(docs): mintlify-produced syntax error & self-produced typos 2026-02-21 16:57:09 +00:00
Hampus Kraft
2db53689a1 fix(app): add masking of phones to settings 2026-02-21 16:57:09 +00:00
Hampus Kraft
a129b162b7 fix(app): remove broken app icon from oauth2 page 2026-02-21 16:57:09 +00:00
Hampus Kraft
5b8ceff991 fix(app): remove dummy endpoint hack 2026-02-21 16:57:09 +00:00
Hampus Kraft
575d50807a fix(app): accent colour not working 2026-02-21 16:57:08 +00:00
Hampus Kraft
3d0631dd85 [skip ci] 2026-02-21 16:57:08 +00:00
Hampus Kraft
5e41f81752 fix: various fixes to sentry-reported errors 2026-02-21 16:56:50 +00:00
Hampus Kraft
e437426c91 Revert "chore(i18n): update Russian via community contribution"
This reverts commit a974d11a47.
2026-02-21 16:56:49 +00:00
Hampus Kraft
aac20f41ce fix: channel ordering 2026-02-21 16:56:49 +00:00
Hampus Kraft
56258fdb10 chore(i18n): update Russian via community contribution 2026-02-21 16:56:49 +00:00
Hampus Kraft
0e7f370a1e fix: various fixes to sentry-reported errors 2026-02-21 16:56:32 +00:00
Hampus Kraft
c670406751 Revert "chore(i18n): update Russian via community contribution"
This reverts commit a974d11a47.
2026-02-21 16:56:23 +00:00
Hampus Kraft
3bc85d32f6 feat: make custom notification sounds free for all 2026-02-21 16:56:22 +00:00
Hampus Kraft
2a025f9ed2 fix: various fixes to things + simplify app proxy sentry setup 2026-02-21 16:56:22 +00:00
Hampus Kraft
96a4be1ad8 fix: channel ordering 2026-02-21 16:56:22 +00:00
Hampus Kraft
b0275953bd chore(i18n): update Russian via community contribution 2026-02-21 16:56:10 +00:00
Hampus Kraft
f351767205 feat: make custom notification sounds free for all 2026-02-21 16:53:57 +00:00
Hampus Kraft
3fd2e8c623 fix: various fixes to things + simplify app proxy sentry setup 2026-02-21 16:41:57 +00:00
Hampus Kraft
d90464c381 refactor: squash branch changes 2026-02-21 16:41:56 +00:00
Hampus
c2b69be17d Add images and content to quickstart guide 2026-02-21 16:27:52 +01:00
Hampus Kraft
38146cc2ba chore(i18n): sync i18n 2026-02-21 02:21:13 +00:00
Hampus Kraft
24e9a1529d fix: various fixes to sentry-reported errors 2026-02-21 01:32:33 +00:00
Hampus Kraft
eb194ae5be chore: sync openapi & i18n 2026-02-20 23:52:08 +00:00
Hampus Kraft
1730fc10a0 Revert "chore(i18n): update Russian via community contribution"
This reverts commit a974d11a47.
2026-02-20 23:39:12 +00:00
Hampus Kraft
76ec07da6e feat: new lifetime badge design 2026-02-20 23:35:21 +00:00
Hampus Kraft
e11f9bc52e fix: channel ordering 2026-02-20 23:35:21 +00:00
Hampus Kraft
fcc8463cd8 fix(api): enforce gdm call membership and ring recipient validation 2026-02-20 23:35:21 +00:00
Hampus Kraft
68ae760fa8 fix(gateway): safely unsubscribe from presence 2026-02-20 23:35:20 +00:00
Hampus Kraft
5fceaa79f3 fix(markdown): parse mentions inside brackets correctly 2026-02-20 23:35:20 +00:00
Hampus Kraft
b178c90879 fix(gateway): harden manager calls and guild circuit-breaker ETS handling 2026-02-20 23:35:20 +00:00
Hampus Kraft
2264f1decb feat: make custom notification sounds free for all 2026-02-20 23:32:58 +00:00
Hampus Kraft
d22fbd695a fix: various fixes to things + simplify app proxy sentry setup 2026-02-20 23:32:55 +00:00
Hampus Kraft
17306abec6 fix(email): add dns validation of email addresses 2026-02-19 19:34:23 +00:00
Hampus Kraft
3cc07f5e9f fix(email): handle sweego complaints as hard bounces 2026-02-19 19:34:22 +00:00
Hampus Kraft
61984cd1a6 fix(app): pronoun localisation
they/them is difficult to localise correctly in many non-Western languages
2026-02-19 18:23:44 +00:00
Hampus Kraft
a974d11a47 chore(i18n): update Russian via community contribution 2026-02-19 18:23:44 +00:00
Hampus Kraft
f964a4eabb fix(api): prevent startup failure from meilisearch outage 2026-02-19 18:23:43 +00:00
Hampus Kraft
868ddecda4 feat: screenshare hardware acceleration 2026-02-19 16:48:26 +00:00
Hampus Kraft
1a1d13b571 chore: remove chunked uploads for now 2026-02-19 14:59:46 +00:00
Hampus Kraft
bc2a78e5af feat: make custom notification sounds free for all 2026-02-19 13:52:46 +00:00
Hampus Kraft
74e4fc594f fix: don't truncate username in chat 2026-02-19 01:23:38 +00:00
Hampus Kraft
cf06cadcfc chore: cleanup guild rpc 2026-02-19 01:23:38 +00:00
Hampus Kraft
528e4e0d7f fix: various fixes to things + simplify app proxy sentry setup 2026-02-19 01:23:38 +00:00
Jiralite
ff1d15f7aa chore: use discussions for feature requests 2026-02-18 23:20:58 +00:00
Hampus Kraft
ffffd24372 chore(i18n): sync 2026-02-18 21:49:56 +00:00
Hampus Kraft
9b59f90f64 chore(openapi): sync 2026-02-18 21:43:10 +00:00
Hampus Kraft
bf92bb6fd3 chore: format code 2026-02-18 21:42:56 +00:00
Hampus Kraft
9f9d67b8aa feat: add lowercase fallback to invites 2026-02-18 21:42:43 +00:00
Hampus Kraft
f1bfd080e2 fix: allow disabling member lists 2026-02-18 21:42:30 +00:00
Hampus Kraft
67267d509d feat: improve guild collection rpcs 2026-02-18 20:50:11 +00:00
Hampus Kraft
571a8af29d fix(i18n): hungarian localisation 2026-02-18 20:22:47 +00:00
Hampus Kraft
fc8ed0934b fix: firefox bug causing badges not to show 2026-02-18 19:50:46 +00:00
Hampus Kraft
08eaad6f76 feat(admin): add resend verification email op 2026-02-18 19:04:25 +00:00
Hampus Kraft
2be274e762 various gw fixes 2026-02-18 18:33:38 +00:00
Hampus Kraft
133c0ad619 chore: remove unused code path 2026-02-18 17:29:07 +00:00
Hampus Kraft
261d4bd84c fix: eliminate more gateway bottlenecks 2026-02-18 17:27:51 +00:00
Hampus Kraft
dee08930f4 [skip ci] 2026-02-18 17:11:17 +00:00
Hampus Kraft
22ba6945ed [skip ci] fix: startup issues in gateway 2026-02-18 16:53:32 +00:00
Hampus Kraft
fda2962148 fix: marketing cors 2026-02-18 16:42:08 +00:00
Hampus Kraft
ac2aba7f0e fix: stripe refund webhook errors 2026-02-18 16:42:04 +00:00
Hampus Kraft
3dce089fe2 chore: remove CLA 2026-02-18 16:41:53 +00:00
Hampus Kraft
23f2d529f0 chore: remove unused workflow 2026-02-18 15:56:49 +00:00
Hampus Kraft
0517a966a3 fix: various fixes to sentry-reported errors and more 2026-02-18 15:38:51 +00:00
Hampus Kraft
302c0d2a0c feat(discovery): more work on discovery plus a few fixes 2026-02-17 15:41:08 +00:00
Hampus Kraft
b19e9fb243 fix(openapi): use a regular int64 type for visionary slot user id 2026-02-17 15:18:40 +00:00
Hampus Kraft
5eb02e272d fix(openapi): simplify nullable union schemas for codegen compat 2026-02-17 14:29:49 +00:00
Hampus Kraft
b227bd0a85 fix(openapi): flatten nullable unions to avoid nested anyOf/oneOf 2026-02-17 14:06:58 +00:00
Hampus Kraft
2db02ec255 fix(openapi): incorrect types on some fields 2026-02-17 13:58:14 +00:00
Hampus Kraft
d5abd1a7e4 refactor progress 2026-02-17 12:22:36 +00:00
Hampus
cb31608523 chore: add note about upcoming refactor to readme 2026-02-12 18:14:00 +01:00
Jiralite
17404f0e41 chore: update issue templates 2026-02-12 13:27:12 +00:00
8315 changed files with 1209831 additions and 762790 deletions

.devcontainer/Caddyfile.dev Normal file

@@ -0,0 +1,54 @@
# Like dev/Caddyfile.dev, but LiveKit and Mailpit are referenced by their
# Docker Compose hostnames instead of 127.0.0.1.
{
auto_https off
admin off
}
:48763 {
handle /_caddy_health {
respond "OK" 200
}
@gateway path /gateway /gateway/*
handle @gateway {
uri strip_prefix /gateway
reverse_proxy 127.0.0.1:49107
}
@marketing path /marketing /marketing/*
handle @marketing {
uri strip_prefix /marketing
reverse_proxy 127.0.0.1:49531
}
@server path /admin /admin/* /api /api/* /s3 /s3/* /queue /queue/* /media /media/* /_health /_ready /_live /.well-known/fluxer
handle @server {
reverse_proxy 127.0.0.1:49319
}
@livekit path /livekit /livekit/*
handle @livekit {
uri strip_prefix /livekit
reverse_proxy livekit:7880
}
redir /mailpit /mailpit/
handle_path /mailpit/* {
rewrite * /mailpit{path}
reverse_proxy mailpit:8025
}
handle {
reverse_proxy 127.0.0.1:49427 {
header_up Connection {http.request.header.Connection}
header_up Upgrade {http.request.header.Upgrade}
}
}
log {
output stdout
format console
}
}
}

.devcontainer/Dockerfile Normal file

@@ -0,0 +1,40 @@
# Language runtimes (Node.js, Go, Rust, Python) are installed via devcontainer
# features. This Dockerfile handles Erlang/OTP (no feature available) and
# tools like Caddy, process-compose, rebar3, uv, ffmpeg, and exiftool.
FROM erlang:28-slim AS erlang
FROM mcr.microsoft.com/devcontainers/base:debian-13
ARG DEBIAN_FRONTEND=noninteractive
ARG REBAR3_VERSION=3.24.0
ARG PROCESS_COMPOSE_VERSION=1.90.0
# Both erlang:28-slim and debian-13 are Trixie-based, so OpenSSL versions match.
COPY --from=erlang /usr/local/lib/erlang /usr/local/lib/erlang
RUN ln -sf /usr/local/lib/erlang/bin/* /usr/local/bin/
RUN apt-get update && apt-get install -y --no-install-recommends \
libncurses6 libsctp1 \
build-essential pkg-config \
ffmpeg libimage-exiftool-perl \
sqlite3 libsqlite3-dev \
libssl-dev openssl \
gettext-base lsof iproute2 \
&& rm -rf /var/lib/apt/lists/*
RUN curl -fsSL "https://github.com/erlang/rebar3/releases/download/${REBAR3_VERSION}/rebar3" \
-o /usr/local/bin/rebar3 \
&& chmod +x /usr/local/bin/rebar3
RUN curl -fsSL "https://caddyserver.com/api/download?os=linux&arch=amd64" \
-o /usr/local/bin/caddy \
&& chmod +x /usr/local/bin/caddy
RUN curl -fsSL "https://github.com/F1bonacc1/process-compose/releases/download/v${PROCESS_COMPOSE_VERSION}/process-compose_linux_amd64.tar.gz" \
| tar xz -C /usr/local/bin process-compose \
&& chmod +x /usr/local/bin/process-compose
RUN curl -fsSL "https://github.com/astral-sh/uv/releases/latest/download/uv-x86_64-unknown-linux-gnu.tar.gz" \
| tar xz --strip-components=1 -C /usr/local/bin \
&& chmod +x /usr/local/bin/uv /usr/local/bin/uvx

.devcontainer/devcontainer.json Normal file

@@ -0,0 +1,75 @@
{
"name": "Fluxer",
"dockerComposeFile": "docker-compose.yml",
"service": "app",
"workspaceFolder": "/workspace",
"features": {
"ghcr.io/devcontainers/features/node:1": {
"version": "24",
"pnpmVersion": "10.29.3"
},
"ghcr.io/devcontainers/features/go:1": {
"version": "1.24"
},
"ghcr.io/devcontainers/features/rust:1": {
"version": "1.93.0",
"targets": "wasm32-unknown-unknown"
},
"ghcr.io/devcontainers/features/python:1": {
"version": "os-provided",
"installTools": false
}
},
"onCreateCommand": ".devcontainer/on-create.sh",
"remoteEnv": {
"FLUXER_CONFIG": "${containerWorkspaceFolder}/config/config.json",
"FLUXER_DATABASE": "sqlite"
},
"forwardPorts": [48763, 6379, 7700, 7880],
"portsAttributes": {
"48763": {
"label": "Fluxer (Caddy)",
"onAutoForward": "openBrowser",
"protocol": "http"
},
"6379": {
"label": "Valkey",
"onAutoForward": "silent"
},
"7700": {
"label": "Meilisearch",
"onAutoForward": "silent"
},
"7880": {
"label": "LiveKit",
"onAutoForward": "silent"
},
"9229": {
"label": "Node.js Debugger",
"onAutoForward": "silent"
}
},
"customizations": {
"vscode": {
"extensions": [
"TypeScriptTeam.native-preview",
"biomejs.biome",
"clinyong.vscode-css-modules",
"pgourlain.erlang",
"golang.go",
"rust-lang.rust-analyzer"
],
"settings": {
"typescript.preferences.includePackageJsonAutoImports": "auto",
"typescript.suggest.autoImports": true,
"typescript.experimental.useTsgo": true
}
}
}
}

.devcontainer/docker-compose.yml Normal file

@@ -0,0 +1,64 @@
services:
app:
build:
context: .
dockerfile: Dockerfile
volumes:
- ..:/workspace:cached
command: sleep infinity
valkey:
image: valkey/valkey:8-alpine
restart: unless-stopped
command: ['valkey-server', '--appendonly', 'yes', '--save', '60', '1', '--loglevel', 'warning']
volumes:
- valkey-data:/data
healthcheck:
test: ['CMD', 'valkey-cli', 'ping']
interval: 10s
timeout: 5s
retries: 5
meilisearch:
image: getmeili/meilisearch:v1.14
restart: unless-stopped
environment:
MEILI_NO_ANALYTICS: 'true'
MEILI_ENV: development
MEILI_MASTER_KEY: fluxer-devcontainer-meili-master-key
volumes:
- meilisearch-data:/meili_data
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:7700/health']
interval: 10s
timeout: 5s
retries: 5
livekit:
image: livekit/livekit-server:v1.9
restart: unless-stopped
command: --config /etc/livekit.yaml
volumes:
- ./livekit.yaml:/etc/livekit.yaml:ro
mailpit:
image: axllent/mailpit:latest
restart: unless-stopped
command: ['--webroot', '/mailpit/']
nats-core:
image: nats:2-alpine
restart: unless-stopped
command: ['--port', '4222']
nats-jetstream:
image: nats:2-alpine
restart: unless-stopped
command: ['--port', '4223', '--jetstream', '--store_dir', '/data']
volumes:
- nats-jetstream-data:/data
volumes:
valkey-data:
meilisearch-data:
nats-jetstream-data:

.devcontainer/livekit.yaml Normal file

@@ -0,0 +1,30 @@
# Credentials here must match the values on-create.sh writes to config.json.
port: 7880
keys:
fluxer-devcontainer-key: fluxer-devcontainer-secret-key-00000000
rtc:
tcp_port: 7881
port_range_start: 50000
port_range_end: 50100
use_external_ip: false
node_ip: 127.0.0.1
turn:
enabled: true
domain: localhost
udp_port: 3478
webhook:
api_key: fluxer-devcontainer-key
urls:
- http://app:49319/api/webhooks/livekit
room:
auto_create: true
max_participants: 100
empty_timeout: 300
development: true

.devcontainer/on-create.sh Executable file

@@ -0,0 +1,70 @@
#!/usr/bin/env bash
# Runs once when the container is first created.
set -euo pipefail
REPO_ROOT="$(cd "$(dirname "$0")/.." && pwd)"
export FLUXER_CONFIG="${FLUXER_CONFIG:-$REPO_ROOT/config/config.json}"
GREEN='\033[0;32m'
NC='\033[0m'
info() { printf "%b\n" "${GREEN}[devcontainer]${NC} $1"; }
info "Installing pnpm dependencies..."
pnpm install
# Codegen outputs (e.g. MasterZodSchema.generated.tsx) are gitignored.
info "Generating config schema..."
pnpm --filter @fluxer/config generate
if [ ! -f "$FLUXER_CONFIG" ]; then
info "Creating config from development template..."
cp "$REPO_ROOT/config/config.dev.template.json" "$FLUXER_CONFIG"
fi
# Point services at Docker Compose hostnames and adjust settings that differ
# from the default dev template.
info "Patching config for Docker Compose networking..."
jq '
# rspack defaults public_scheme to "https" when unset
.domain.public_scheme = "http" |
# Relative path so the app works on any hostname (localhost, 127.0.0.1, etc.)
.app_public.bootstrap_api_endpoint = "/api" |
.internal.kv = "redis://valkey:6379/0" |
.integrations.search.url = "http://meilisearch:7700" |
.integrations.search.api_key = "fluxer-devcontainer-meili-master-key" |
# Credentials must match .devcontainer/livekit.yaml
.integrations.voice.url = "ws://livekit:7880" |
.integrations.voice.webhook_url = "http://app:49319/api/webhooks/livekit" |
.integrations.voice.api_key = "fluxer-devcontainer-key" |
.integrations.voice.api_secret = "fluxer-devcontainer-secret-key-00000000" |
.integrations.email.smtp.host = "mailpit" |
.integrations.email.smtp.port = 1025 |
.services.nats.core_url = "nats://nats-core:4222" |
.services.nats.jetstream_url = "nats://nats-jetstream:4223" |
# Bluesky OAuth requires HTTPS + loopback IPs (RFC 8252), incompatible with
# the HTTP-only devcontainer setup.
.auth.bluesky.enabled = false
' "$FLUXER_CONFIG" > "$FLUXER_CONFIG.tmp" && mv "$FLUXER_CONFIG.tmp" "$FLUXER_CONFIG"
info "Running bootstrap..."
"$REPO_ROOT/scripts/dev_bootstrap.sh"
info "Pre-compiling Erlang gateway dependencies..."
(cd "$REPO_ROOT/fluxer_gateway" && rebar3 compile) || {
info "Gateway pre-compilation failed (non-fatal, will compile on first start)"
}
info "Devcontainer setup complete."
info ""
info " Start all dev processes: process-compose -f .devcontainer/process-compose.yml up"
info " Open the app: http://127.0.0.1:48763"
info " Dev email inbox: http://127.0.0.1:48763/mailpit/"
info ""

.devcontainer/process-compose.yml Normal file

@@ -0,0 +1,57 @@
# Application processes only — backing services (Valkey, Meilisearch, LiveKit,
# Mailpit, NATS) run via Docker Compose.
# process-compose -f .devcontainer/process-compose.yml up
is_tui_disabled: false
log_level: info
log_configuration:
flush_each_line: true
processes:
caddy:
command: caddy run --config .devcontainer/Caddyfile.dev --adapter caddyfile
log_location: dev/logs/caddy.log
readiness_probe:
http_get:
host: 127.0.0.1
port: 48763
path: /_caddy_health
availability:
restart: always
fluxer_server:
command: pnpm --filter fluxer_server dev
log_location: dev/logs/fluxer_server.log
availability:
restart: always
fluxer_app:
command: ./scripts/dev_fluxer_app.sh
environment:
- FORCE_COLOR=1
- FLUXER_APP_DEV_PORT=49427
log_location: dev/logs/fluxer_app.log
availability:
restart: always
fluxer_gateway:
command: ./scripts/dev_gateway.sh
environment:
- FLUXER_GATEWAY_NO_SHELL=1
log_location: dev/logs/fluxer_gateway.log
availability:
restart: always
marketing_dev:
command: pnpm --filter fluxer_marketing dev
environment:
- FORCE_COLOR=1
log_location: dev/logs/marketing_dev.log
availability:
restart: always
css_watch:
command: ./scripts/dev_css_watch.sh
log_location: dev/logs/css_watch.log
availability:
restart: always

.dockerignore

@@ -1,54 +1,55 @@
# Generated artifacts and caches
**/*.dump
**/*.lock
**/*.log
**/*.swo
**/*.swp
**/*.tmp
**/*~
**/.cache
**/.dev.vars
**/.DS_Store
**/.env
**/.env.*.local
**/.env.local
**/.git
**/.idea
**/.pnpm-store
**/.rebar
**/.rebar3
**/.turbo
**/.vscode
**/_build
**/_checkouts
**/_vendor
**/build
**/certificates
**/coverage
**/dist
**/erl_crash.dump
**/generated
**/.cache
**/.pnpm-store
**/target
**/certificates
**/node_modules
# Tooling & editor metadata
**/.idea
**/.vscode
**/.DS_Store
**/Thumbs.db
**/.git
**/.astro/
**/.env
**/.env.local
**/.env.*.local
**/.dev.vars
# Logs & temporary files
**/*.dump
**/*.lock
**/*.log
**/*.tmp
**/*.swo
**/*.swp
**/*~
**/log
**/logs
**/node_modules
**/npm-debug.log*
**/pnpm-debug.log*
**/rebar3.crashdump
**/target
**/Thumbs.db
**/yarn-debug.log*
**/yarn-error.log*
**/rebar3.crashdump
**/erl_crash.dump
# Runtime config
# Original exclusions for emojis/locales commented out - needed for build
# /fluxer_app/src/data/emojis.json
# /fluxer_app/src/locales/*/messages.js
dev
**/.rebar
**/.rebar3
/fluxer_app/src/data/emojis.json
/fluxer_app/src/locales/*/messages.js
!fluxer_app/dist
!fluxer_app/dist/**
!fluxer_devops/cassandra/migrations
!scripts/cassandra-migrate/Cargo.lock
# Explicitly allow critical build data (trailing slash means directory)
!fluxer_app/src/data
!fluxer_app/src/data/**
!fluxer_app/src/locales
!fluxer_app/src/locales/**
!**/scripts/
# Allow build scripts directory (not blocked by **/build pattern)
!fluxer_app/scripts/build
!fluxer_app/scripts/build/**

.envrc Normal file

@@ -0,0 +1,4 @@
#!/usr/bin/env bash
eval "$(devenv direnvrc)"
use devenv

.github/DISCUSSION_TEMPLATE/ideas.yaml vendored Normal file

@@ -0,0 +1,39 @@
body:
- type: markdown
attributes:
value: |
Thanks for the suggestion.
For larger changes, please align with maintainers before investing time.
Security issues should go to https://fluxer.app/security.
- type: textarea
id: problem
attributes:
label: Problem
description: What problem are you trying to solve, and for whom?
placeholder: "Right now, users can't ..., which causes ..."
validations:
required: true
- type: textarea
id: proposal
attributes:
label: Proposed solution
description: What would you like to see happen?
placeholder: "Add ..., so that ..."
validations:
required: true
- type: textarea
id: notes
attributes:
label: Notes (optional)
description: Constraints, rough plan, or links to relevant code.
placeholder: "Notes: ...\nPotential files/areas: ..."
validations:
required: false
- type: checkboxes
id: checks
attributes:
label: Checks
options:
- label: I searched for existing discussions and didn't find a duplicate.
required: true

.github/FUNDING.yml vendored

@@ -1 +0,0 @@
custom: ['https://fluxer.app/donate']


@@ -1,100 +1,57 @@
name: Bug report
description: Report a reproducible problem in fluxer
title: 'bug: '
description: Report a reproducible problem in Fluxer
labels: ['bug']
body:
- type: markdown
attributes:
value: |
Thanks for the report!
Thanks for the report.
**Security note:** Please do not report security issues here. Use https://fluxer.app/security instead.
Before filing, please check for existing issues and include enough detail for someone else to reproduce.
Please check our status page at https://fluxerstatus.com and search for existing issues before filing.
Security issues should go to https://fluxer.app/security.
- type: textarea
id: summary
attributes:
label: Summary
description: What happened, in one or two sentences?
placeholder: 'When I ..., the app ..., but I expected ...'
description: What happened, and what did you expect instead?
placeholder: "When I ..., the app ..., but I expected ..."
validations:
required: true
- type: textarea
id: repro
attributes:
label: Steps to reproduce
description: Provide clear, numbered steps. Include any relevant data/inputs.
description: Provide clear, numbered steps.
placeholder: |
1. Go to ...
2. Click ...
3. See error ...
3. See ...
validations:
required: true
- type: textarea
id: expected
attributes:
label: Expected behavior
placeholder: 'It should ...'
validations:
required: true
- type: textarea
id: actual
attributes:
label: Actual behavior
placeholder: 'Instead, it ...'
validations:
required: true
- type: dropdown
id: area
attributes:
label: Area
description: Where does this bug appear?
options:
- Backend / API
- Frontend / Web
- Mobile
- CLI / tooling
- CI / build
- Docs
- Not sure
validations:
required: true
- type: textarea
id: environment
attributes:
label: Environment
description: Include versions that matter (commit SHA/tag, OS, runtime, browser/device).
label: Environment (optional)
description: Include versions that matter (commit/tag, OS, runtime, browser/device).
placeholder: |
- Commit/Tag:
- OS:
- Runtime (node/go/python/etc):
- Runtime:
- Browser (if applicable):
- Deployment (local/dev/prod):
validations:
required: true
required: false
- type: textarea
id: logs
attributes:
label: Logs / screenshots
description: Paste logs (redact secrets) and/or attach screenshots/recordings.
placeholder: 'Paste stack traces, console output, network errors, etc.'
label: Logs or screenshots (optional)
description: Paste logs (redact secrets) or attach screenshots/recordings.
placeholder: "Paste stack traces, console output, network errors, etc."
validations:
required: false
- type: checkboxes
id: checks
attributes:
label: Pre-flight checks
label: Checks
options:
- label: I searched for existing issues and didn't find a duplicate.
required: true
- label: This is not a security vulnerability report (those go to https://fluxer.app/security).
required: true
- label: I included enough information to reproduce the issue.
required: true


@@ -1,5 +1,8 @@
blank_issues_enabled: false
contact_links:
- name: Feature requests
url: https://github.com/orgs/fluxerapp/discussions
about: Suggest an improvement or new capability.
- name: Security vulnerability report
url: https://fluxer.app/security
about: Please report security issues privately using our security policy.


@@ -1,45 +1,42 @@
name: Documentation
description: Report a docs issue or suggest an improvement
title: 'docs: '
labels: ['docs']
body:
- type: markdown
attributes:
value: |
Thanks! Clear docs save everyone time.
Thanks.
**Security note:** Please do not report security issues here. Use https://fluxer.app/security instead.
Please check our status page at https://fluxerstatus.com and search for existing issues before filing.
Security issues should go to https://fluxer.app/security.
- type: textarea
id: issue
attributes:
label: What's wrong or missing?
description: Describe the docs gap, error, ambiguity, or outdated info.
placeholder: 'The README says ..., but actually ...'
label: What needs fixing?
description: Describe the gap, error, or outdated content.
placeholder: "The README says ..., but actually ..."
validations:
required: true
- type: textarea
id: location
attributes:
label: Where is it?
label: Where is it? (optional)
description: Link the file/section if possible.
placeholder: "File: ...\nSection/heading: ...\nLink: ..."
validations:
required: false
- type: textarea
id: suggestion
attributes:
label: Suggested improvement (optional)
description: If you already know how it should read, propose wording.
placeholder: 'Proposed text: ...'
label: Suggested wording (optional)
description: If you already know how it should read, propose text.
placeholder: "Proposed text: ..."
validations:
required: false
- type: checkboxes
id: checks
attributes:
label: Pre-flight checks
label: Checks
options:
- label: I searched for existing issues and didn't find a duplicate.
required: true


@@ -1,82 +0,0 @@
name: Feature request
description: Suggest an improvement or new capability
title: 'feat: '
labels: ['enhancement']
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to propose an improvement.
If this is **larger work** (new feature, meaningful refactor, new dependency, new API surface, behavior change),
it's best to align with maintainers early; an issue like this is a great place to do that.
**Security note:** Please do not report security issues here. Use https://fluxer.app/security instead.
- type: textarea
id: problem
attributes:
label: Problem / motivation
description: What problem are you trying to solve? Who is it for?
placeholder: "Right now, users can't ..., which causes ..."
validations:
required: true
- type: textarea
id: proposal
attributes:
label: Proposed solution
description: What would you like to see happen? Include UX/API shape if relevant.
placeholder: 'Add ..., so that ...'
validations:
required: true
- type: textarea
id: alternatives
attributes:
label: Alternatives considered
description: Other approaches you considered, and why they're worse.
placeholder: "- Option A: ...\n- Option B: ..."
validations:
required: false
- type: dropdown
id: impact
attributes:
label: Impact
description: Roughly how big is this change?
options:
- Small (localized, low risk)
- Medium (touches multiple areas)
- Large (new surface area / refactor / dependency)
- Not sure
validations:
required: true
- type: checkboxes
id: compatibility
attributes:
label: Compatibility
options:
- label: This might be a breaking change (behavior/API).
required: false
- label: This introduces a new dependency.
required: false
- type: textarea
id: scope
attributes:
label: Scope / implementation notes
description: Constraints, rough plan, or links to relevant code.
placeholder: "Notes: ...\nPotential files/areas: ..."
validations:
required: false
- type: checkboxes
id: contribution
attributes:
label: Contribution
options:
- label: I'm willing to open a PR for this (after maintainer alignment).
required: false
- label: I can help test/verify a PR for this.
required: false


@@ -16,19 +16,16 @@
## Tests
<!-- List what you ran, or explain why tests werent added/changed. -->
<!-- List what you ran, or explain why tests weren't added/changed. -->
- [ ] Added/updated tests (where it makes sense)
- [ ] Unit tests:
- [ ] Integration tests:
- [ ] Added/updated unit tests (where it makes sense)
- [ ] Manual verification:
## Checklist
- [ ] PR targets `canary`
- [ ] PR title follows Conventional Commits (mostly lowercase)
- [ ] CI is green (or Im actively addressing failures)
- [ ] CLA signed (the bot will guide you on first PR)
- [ ] CI is green (or I'm actively addressing failures)
## Screenshots / recordings (UI changes)


@@ -72,85 +72,59 @@ concurrency:
env:
CHANNEL: ${{ inputs.channel }}
BUILD_CHANNEL: ${{ inputs.channel == 'canary' && 'canary' || 'stable' }}
SOURCE_REF: ${{ inputs.ref && inputs.ref || (inputs.channel == 'canary' && 'canary' || 'main') }}
jobs:
meta:
name: Resolve build metadata
runs-on: blacksmith-2vcpu-ubuntu-2404
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
outputs:
version: ${{ steps.meta.outputs.version }}
pub_date: ${{ steps.meta.outputs.pub_date }}
channel: ${{ steps.meta.outputs.channel }}
build_channel: ${{ steps.meta.outputs.build_channel }}
source_ref: ${{ steps.meta.outputs.source_ref }}
steps:
- name: Checkout repository
uses: actions/checkout@v6
with:
sparse-checkout: scripts/ci
sparse-checkout-cone-mode: false
- name: Set metadata
id: meta
shell: bash
run: |
set -euo pipefail
VERSION="0.0.${GITHUB_RUN_NUMBER}"
PUB_DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
echo "version=${VERSION}" >> "$GITHUB_OUTPUT"
echo "pub_date=${PUB_DATE}" >> "$GITHUB_OUTPUT"
echo "channel=${{ inputs.channel }}" >> "$GITHUB_OUTPUT"
echo "build_channel=${{ inputs.channel == 'canary' && 'canary' || 'stable' }}" >> "$GITHUB_OUTPUT"
echo "source_ref=${{ (inputs.ref && inputs.ref) || (inputs.channel == 'canary' && 'canary' || 'main') }}" >> "$GITHUB_OUTPUT"
run: >-
python3 scripts/ci/workflows/build_desktop.py
--step set_metadata
--channel "${{ inputs.channel }}"
--ref "${{ inputs.ref }}"
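The inline bash this step replaces derived four values from the run context. A minimal sketch of what a `set_metadata` step like this might compute (function names and shapes are illustrative; the real `scripts/ci/workflows/build_desktop.py` is not part of this diff):

```python
import os
from datetime import datetime, timezone

def set_metadata(channel: str, ref: str, run_number: str) -> dict:
    """Reproduce the outputs the removed inline bash wrote to GITHUB_OUTPUT."""
    build_channel = "canary" if channel == "canary" else "stable"
    return {
        "version": f"0.0.{run_number}",
        "pub_date": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "channel": channel,
        "build_channel": build_channel,
        # Fall back to the channel's default branch when no ref is given.
        "source_ref": ref or ("canary" if channel == "canary" else "main"),
    }

def write_outputs(outputs: dict) -> None:
    # GitHub Actions convention: append key=value lines to $GITHUB_OUTPUT.
    with open(os.environ["GITHUB_OUTPUT"], "a", encoding="utf-8") as fh:
        for key, value in outputs.items():
            fh.write(f"{key}={value}\n")
```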
matrix:
name: Resolve build matrix
runs-on: blacksmith-2vcpu-ubuntu-2404
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- name: Checkout repository
uses: actions/checkout@v6
with:
sparse-checkout: scripts/ci
sparse-checkout-cone-mode: false
- name: Build platform matrix
id: set-matrix
shell: bash
run: |
set -euo pipefail
PLATFORMS='[
{"platform":"windows","arch":"x64","os":"windows-latest","electron_arch":"x64"},
{"platform":"windows","arch":"arm64","os":"windows-11-arm","electron_arch":"arm64"},
{"platform":"macos","arch":"x64","os":"macos-15-intel","electron_arch":"x64"},
{"platform":"macos","arch":"arm64","os":"macos-15","electron_arch":"arm64"},
{"platform":"linux","arch":"x64","os":"ubuntu-24.04","electron_arch":"x64"},
{"platform":"linux","arch":"arm64","os":"ubuntu-24.04-arm","electron_arch":"arm64"}
]'
FILTERED="$(echo "$PLATFORMS" | jq -c \
--argjson skipWin '${{ inputs.skip_windows }}' \
--argjson skipWinX64 '${{ inputs.skip_windows_x64 }}' \
--argjson skipWinArm '${{ inputs.skip_windows_arm64 }}' \
--argjson skipMac '${{ inputs.skip_macos }}' \
--argjson skipMacX64 '${{ inputs.skip_macos_x64 }}' \
--argjson skipMacArm '${{ inputs.skip_macos_arm64 }}' \
--argjson skipLinux '${{ inputs.skip_linux }}' \
--argjson skipLinuxX64 '${{ inputs.skip_linux_x64 }}' \
--argjson skipLinuxArm '${{ inputs.skip_linux_arm64 }}' '
[.[] | select(
(
((.platform == "windows") and (
$skipWin or
((.arch == "x64") and $skipWinX64) or
((.arch == "arm64") and $skipWinArm)
)) or
((.platform == "macos") and (
$skipMac or
((.arch == "x64") and $skipMacX64) or
((.arch == "arm64") and $skipMacArm)
)) or
((.platform == "linux") and (
$skipLinux or
((.arch == "x64") and $skipLinuxX64) or
((.arch == "arm64") and $skipLinuxArm)
))
) | not
)]
')"
echo "matrix={\"include\":$FILTERED}" >> "$GITHUB_OUTPUT"
run: >-
python3 scripts/ci/workflows/build_desktop.py
--step set_matrix
--skip-windows "${{ inputs.skip_windows }}"
--skip-windows-x64 "${{ inputs.skip_windows_x64 }}"
--skip-windows-arm64 "${{ inputs.skip_windows_arm64 }}"
--skip-macos "${{ inputs.skip_macos }}"
--skip-macos-x64 "${{ inputs.skip_macos_x64 }}"
--skip-macos-arm64 "${{ inputs.skip_macos_arm64 }}"
--skip-linux "${{ inputs.skip_linux }}"
--skip-linux-x64 "${{ inputs.skip_linux_x64 }}"
--skip-linux-arm64 "${{ inputs.skip_linux_arm64 }}"
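The replaced jq filter kept every platform/arch pair whose skip flags were all false. The same filtering can be sketched in Python (a hypothetical shape for the `--step set_matrix` logic; the actual script contents are not shown in this diff):

```python
import json

PLATFORMS = [
    {"platform": "windows", "arch": "x64", "os": "windows-latest", "electron_arch": "x64"},
    {"platform": "windows", "arch": "arm64", "os": "windows-11-arm", "electron_arch": "arm64"},
    {"platform": "macos", "arch": "x64", "os": "macos-15-intel", "electron_arch": "x64"},
    {"platform": "macos", "arch": "arm64", "os": "macos-15", "electron_arch": "arm64"},
    {"platform": "linux", "arch": "x64", "os": "ubuntu-24.04", "electron_arch": "x64"},
    {"platform": "linux", "arch": "arm64", "os": "ubuntu-24.04-arm", "electron_arch": "arm64"},
]

def build_matrix(skips: dict) -> str:
    """Drop any platform/arch combination whose skip flag is set.

    `skips` maps keys like "windows" (whole platform) or "windows-x64"
    (single arch) to booleans, mirroring the workflow_dispatch inputs.
    """
    include = [
        entry for entry in PLATFORMS
        if not (skips.get(entry["platform"], False)
                or skips.get(f'{entry["platform"]}-{entry["arch"]}', False))
    ]
    # The workflow consumes this via fromJson(needs.matrix.outputs.matrix).
    return json.dumps({"include": include})
```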
build:
name: Build ${{ matrix.platform }} (${{ matrix.arch }})
@@ -158,70 +132,55 @@ jobs:
- meta
- matrix
runs-on: ${{ matrix.os }}
timeout-minutes: 25
strategy:
fail-fast: false
matrix: ${{ fromJson(needs.matrix.outputs.matrix) }}
env:
APP_WORKDIR: fluxer_app
CHANNEL: ${{ needs.meta.outputs.channel }}
BUILD_CHANNEL: ${{ needs.meta.outputs.build_channel }}
SOURCE_REF: ${{ needs.meta.outputs.source_ref }}
VERSION: ${{ needs.meta.outputs.version }}
PUB_DATE: ${{ needs.meta.outputs.pub_date }}
PLATFORM: ${{ matrix.platform }}
ARCH: ${{ matrix.arch }}
ELECTRON_ARCH: ${{ matrix.electron_arch }}
steps:
- name: Checkout source
uses: actions/checkout@v6
with:
ref: ${{ env.SOURCE_REF }}
ref: ${{ inputs.ref || '' }}
- name: Shorten Windows paths (workspace + temp for Squirrel) and pin pnpm store
if: runner.os == 'Windows'
shell: pwsh
run: |
subst W: "$env:GITHUB_WORKSPACE"
"APP_WORKDIR=W:\fluxer_app" | Out-File -FilePath $env:GITHUB_ENV -Append -Encoding utf8
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step windows_paths
New-Item -ItemType Directory -Force "C:\t" | Out-Null
New-Item -ItemType Directory -Force "C:\sq" | Out-Null
New-Item -ItemType Directory -Force "C:\ebcache" | Out-Null
"TEMP=C:\t" | Out-File -FilePath $env:GITHUB_ENV -Append -Encoding utf8
"TMP=C:\t" | Out-File -FilePath $env:GITHUB_ENV -Append -Encoding utf8
"SQUIRREL_TEMP=C:\sq" | Out-File -FilePath $env:GITHUB_ENV -Append -Encoding utf8
"ELECTRON_BUILDER_CACHE=C:\ebcache" | Out-File -FilePath $env:GITHUB_ENV -Append -Encoding utf8
New-Item -ItemType Directory -Force "C:\pnpm-store" | Out-Null
"NPM_CONFIG_STORE_DIR=C:\pnpm-store" | Out-File -FilePath $env:GITHUB_ENV -Append -Encoding utf8
"npm_config_store_dir=C:\pnpm-store" | Out-File -FilePath $env:GITHUB_ENV -Append -Encoding utf8
"store-dir=C:\pnpm-store" | Set-Content -Path "W:\.npmrc" -Encoding ascii
git config --global core.longpaths true
- name: Set workdir (Unix)
if: runner.os != 'Windows'
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step set_workdir_unix
- name: Set up pnpm
uses: pnpm/action-setup@v4
with:
version: 10.26.0
- name: Set up Node.js
uses: actions/setup-node@v6
with:
node-version: 20
node-version: 24
- name: Resolve pnpm store path (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: |
$store = pnpm store path --silent
"PNPM_STORE_PATH=$store" | Out-File -FilePath $env:GITHUB_ENV -Append -Encoding utf8
New-Item -ItemType Directory -Force $store | Out-Null
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step resolve_pnpm_store_windows
- name: Resolve pnpm store path (Unix)
if: runner.os != 'Windows'
shell: bash
run: |
set -euo pipefail
store="$(pnpm store path --silent)"
echo "PNPM_STORE_PATH=$store" >> "$GITHUB_ENV"
mkdir -p "$store"
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step resolve_pnpm_store_unix
- name: Cache pnpm store
uses: actions/cache@v4
@@ -233,44 +192,58 @@ jobs:
- name: Install Python setuptools (Windows ARM64)
if: matrix.platform == 'windows' && matrix.arch == 'arm64'
shell: pwsh
run: |
python -m pip install --upgrade pip
python -m pip install "setuptools>=69" wheel
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step install_setuptools_windows_arm64
- name: Install Python setuptools (macOS)
if: matrix.platform == 'macos'
run: brew install python-setuptools
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step install_setuptools_macos
- name: Install Linux dependencies
if: matrix.platform == 'linux'
env:
DEBIAN_FRONTEND: noninteractive
run: |
sudo apt-get update
sudo apt-get install -y \
libx11-dev libxtst-dev libxt-dev libxinerama-dev libxkbcommon-dev libxrandr-dev \
ruby ruby-dev build-essential rpm \
libpixman-1-dev libcairo2-dev libpango1.0-dev libjpeg-dev libgif-dev librsvg2-dev
sudo gem install --no-document fpm
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step install_linux_deps
- name: Install dependencies
working-directory: ${{ env.APP_WORKDIR }}
run: pnpm install --frozen-lockfile
working-directory: ${{ env.WORKDIR }}/fluxer_desktop
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step install_dependencies
- name: Update version
working-directory: ${{ env.APP_WORKDIR }}
run: pnpm version "${{ env.VERSION }}" --no-git-tag-version --allow-same-version
working-directory: ${{ env.WORKDIR }}/fluxer_desktop
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step update_version
- name: Build Electron main process
working-directory: ${{ env.APP_WORKDIR }}
- name: Set build channel
working-directory: ${{ env.WORKDIR }}/fluxer_desktop
env:
BUILD_CHANNEL: ${{ env.BUILD_CHANNEL }}
run: pnpm electron:compile
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step set_build_channel
- name: Build Electron main process
working-directory: ${{ env.WORKDIR }}/fluxer_desktop
env:
BUILD_CHANNEL: ${{ env.BUILD_CHANNEL }}
TURBO_API: https://turborepo.fluxer.dev
TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
TURBO_TEAM: team_fluxer
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step build_electron_main
- name: Build Electron app (macOS)
if: matrix.platform == 'macos'
working-directory: ${{ env.APP_WORKDIR }}
working-directory: ${{ env.WORKDIR }}/fluxer_desktop
env:
BUILD_CHANNEL: ${{ env.BUILD_CHANNEL }}
CSC_LINK: ${{ secrets.APPLE_CERTIFICATE }}
@@ -278,176 +251,82 @@ jobs:
APPLE_ID: ${{ secrets.APPLE_ID }}
APPLE_APP_SPECIFIC_PASSWORD: ${{ secrets.APPLE_PASSWORD }}
APPLE_TEAM_ID: ${{ secrets.APPLE_TEAM_ID }}
run: pnpm exec electron-builder --config electron-builder.config.cjs --mac --${{ matrix.electron_arch }}
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step build_app_macos
- name: Verify macOS bundle ID (fail fast if wrong channel)
if: matrix.platform == 'macos'
working-directory: ${{ env.APP_WORKDIR }}
shell: bash
working-directory: ${{ env.WORKDIR }}/fluxer_desktop
env:
BUILD_CHANNEL: ${{ env.BUILD_CHANNEL }}
run: |
set -euo pipefail
DIST="dist-electron"
ZIP="$(ls -1 "$DIST"/*"${{ matrix.electron_arch }}"*.zip | head -n1)"
tmp="$(mktemp -d)"
ditto -xk "$ZIP" "$tmp"
APP="$(find "$tmp" -maxdepth 2 -name "*.app" -print -quit)"
BID=$(/usr/libexec/PlistBuddy -c 'Print :CFBundleIdentifier' "$APP/Contents/Info.plist")
expected="app.fluxer"
if [[ "${BUILD_CHANNEL:-stable}" == "canary" ]]; then expected="app.fluxer.canary"; fi
echo "Bundle id in zip: $BID (expected: $expected)"
test "$BID" = "$expected"
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step verify_bundle_id
- name: Build Electron app (Windows)
if: matrix.platform == 'windows'
working-directory: ${{ env.APP_WORKDIR }}
working-directory: ${{ env.WORKDIR }}/fluxer_desktop
env:
BUILD_CHANNEL: ${{ env.BUILD_CHANNEL }}
TEMP: C:\t
TMP: C:\t
SQUIRREL_TEMP: C:\sq
ELECTRON_BUILDER_CACHE: C:\ebcache
run: pnpm exec electron-builder --config electron-builder.config.cjs --win --${{ matrix.electron_arch }}
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step build_app_windows
- name: Analyze Squirrel nupkg for long paths
if: matrix.platform == 'windows'
working-directory: ${{ env.APP_WORKDIR }}
shell: pwsh
working-directory: ${{ env.WORKDIR }}/fluxer_desktop
env:
BUILD_VERSION: ${{ env.VERSION }}
MAX_WINDOWS_PATH_LEN: 260
PATH_HEADROOM: 10
run: |
$primaryDir = if ("${{ matrix.arch }}" -eq "arm64") { "dist-electron/squirrel-windows-arm64" } else { "dist-electron/squirrel-windows" }
$fallbackDir = if ("${{ matrix.arch }}" -eq "arm64") { "dist-electron/squirrel-windows" } else { "dist-electron/squirrel-windows-arm64" }
$dirs = @($primaryDir, $fallbackDir)
$nupkg = $null
foreach ($d in $dirs) {
if (Test-Path $d) {
$nupkg = Get-ChildItem -Path "$d/*.nupkg" -ErrorAction SilentlyContinue | Select-Object -First 1
if ($nupkg) { break }
}
}
if (-not $nupkg) {
throw "No Squirrel nupkg found in: $($dirs -join ', ')"
}
Write-Host "Analyzing Windows installer $($nupkg.FullName)"
$env:NUPKG_PATH = $nupkg.FullName
$lines = @(
'import os'
'import zipfile'
''
'path = os.environ["NUPKG_PATH"]'
'build_ver = os.environ["BUILD_VERSION"]'
'prefix = os.path.join(os.environ["LOCALAPPDATA"], "fluxer_app", f"app-{build_ver}", "resources", "app.asar.unpacked")'
'max_len = int(os.environ.get("MAX_WINDOWS_PATH_LEN", "260"))'
'headroom = int(os.environ.get("PATH_HEADROOM", "10"))'
'limit = max_len - headroom'
''
'with zipfile.ZipFile(path) as archive:'
' entries = []'
' for info in archive.infolist():'
' normalized = info.filename.lstrip("/\\\\")'
' total_len = len(os.path.join(prefix, normalized)) if normalized else len(prefix)'
' entries.append((total_len, info.filename))'
''
'if not entries:'
' raise SystemExit("nupkg archive contains no entries")'
''
'entries.sort(reverse=True)'
'print(f"Assumed install prefix: {prefix} ({len(prefix)} chars). Maximum allowed path length: {limit} (max {max_len}, headroom {headroom}).")'
'print("Top 20 longest archived paths (length includes prefix):")'
'for length, name in entries[:20]:'
' print(f"{length:4d} {name}")'
''
'longest_len, longest_name = entries[0]'
'if longest_len > limit:'
' raise SystemExit(f"Longest path {longest_len} for {longest_name} exceeds limit {limit}")'
'print(f"Longest archived path {longest_len} is within the limit of {limit}.")'
)
$scriptPath = Join-Path $env:TEMP "nupkg-long-path-check.py"
Set-Content -Path $scriptPath -Value $lines -Encoding utf8
python $scriptPath
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step analyse_squirrel_paths
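The removed inline check listed the Squirrel nupkg's entries and verified none would exceed Windows' MAX_PATH once installed. Its core can be sketched as follows (a simplified reimplementation under the same assumptions, not the script's actual code):

```python
import os
import zipfile

def longest_install_path(nupkg_path: str, prefix: str) -> tuple[int, str]:
    """Return (length, entry name) for the archive entry that yields the
    longest on-disk path once extracted under `prefix`."""
    with zipfile.ZipFile(nupkg_path) as archive:
        entries = [
            # Strip leading slashes so joining with the prefix is well-formed.
            (len(os.path.join(prefix, info.filename.lstrip("/\\"))), info.filename)
            for info in archive.infolist()
        ]
    if not entries:
        raise SystemExit("nupkg archive contains no entries")
    return max(entries)
```

The workflow would then fail the build when the returned length exceeds `MAX_WINDOWS_PATH_LEN` minus the configured headroom.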
- name: Build Electron app (Linux)
if: matrix.platform == 'linux'
working-directory: ${{ env.APP_WORKDIR }}
working-directory: ${{ env.WORKDIR }}/fluxer_desktop
env:
BUILD_CHANNEL: ${{ env.BUILD_CHANNEL }}
USE_SYSTEM_FPM: true
run: pnpm exec electron-builder --config electron-builder.config.cjs --linux --${{ matrix.electron_arch }}
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step build_app_linux
- name: Prepare artifacts (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: |
New-Item -ItemType Directory -Force upload_staging | Out-Null
$dist = Join-Path $env:APP_WORKDIR "dist-electron"
$sqDirName = if ("${{ matrix.arch }}" -eq "arm64") { "squirrel-windows-arm64" } else { "squirrel-windows" }
$sqFallbackName = if ($sqDirName -eq "squirrel-windows") { "squirrel-windows-arm64" } else { "squirrel-windows" }
$sq = Join-Path $dist $sqDirName
$sqFallback = Join-Path $dist $sqFallbackName
$picked = $null
if (Test-Path $sq) { $picked = $sq }
elseif (Test-Path $sqFallback) { $picked = $sqFallback }
if ($picked) {
Copy-Item -Force -ErrorAction SilentlyContinue "$picked\*.exe" "upload_staging\"
Copy-Item -Force -ErrorAction SilentlyContinue "$picked\*.exe.blockmap" "upload_staging\"
Copy-Item -Force -ErrorAction SilentlyContinue "$picked\RELEASES*" "upload_staging\"
Copy-Item -Force -ErrorAction SilentlyContinue "$picked\*.nupkg" "upload_staging\"
Copy-Item -Force -ErrorAction SilentlyContinue "$picked\*.nupkg.blockmap" "upload_staging\"
}
if (Test-Path $dist) {
Copy-Item -Force -ErrorAction SilentlyContinue "$dist\*.yml" "upload_staging\"
Copy-Item -Force -ErrorAction SilentlyContinue "$dist\*.zip" "upload_staging\"
Copy-Item -Force -ErrorAction SilentlyContinue "$dist\*.zip.blockmap" "upload_staging\"
}
if (-not (Get-ChildItem upload_staging -Filter *.exe -ErrorAction SilentlyContinue)) {
throw "No installer .exe staged. Squirrel outputs were not copied."
}
Get-ChildItem -Force upload_staging | Format-Table -AutoSize
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step prepare_artifacts_windows
- name: Prepare artifacts (Unix)
if: runner.os != 'Windows'
shell: bash
run: |
set -euo pipefail
mkdir -p upload_staging
DIST="${{ env.APP_WORKDIR }}/dist-electron"
cp -f "$DIST"/*.dmg upload_staging/ 2>/dev/null || true
cp -f "$DIST"/*.zip upload_staging/ 2>/dev/null || true
cp -f "$DIST"/*.zip.blockmap upload_staging/ 2>/dev/null || true
cp -f "$DIST"/*.yml upload_staging/ 2>/dev/null || true
cp -f "$DIST"/*.AppImage upload_staging/ 2>/dev/null || true
cp -f "$DIST"/*.deb upload_staging/ 2>/dev/null || true
cp -f "$DIST"/*.rpm upload_staging/ 2>/dev/null || true
cp -f "$DIST"/*.tar.gz upload_staging/ 2>/dev/null || true
ls -la upload_staging/
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step prepare_artifacts_unix
- name: Normalize updater YAML (arm64)
if: matrix.arch == 'arm64'
shell: bash
run: |
set -euo pipefail
cd upload_staging
[[ "${{ matrix.platform }}" == "macos" && -f latest-mac.yml && ! -f latest-mac-arm64.yml ]] && mv latest-mac.yml latest-mac-arm64.yml || true
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step normalise_updater_yaml
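The removed one-liner renamed the macOS arm64 updater feed so the two arch-specific `latest-mac` files do not clobber each other. A Python equivalent of that rename (an illustrative sketch, not the script's actual code):

```python
import os

def normalise_updater_yaml(staging: str, platform: str, arch: str) -> None:
    """On macOS arm64 builds, rename latest-mac.yml to
    latest-mac-arm64.yml; every other platform/arch is a no-op."""
    if platform != "macos" or arch != "arm64":
        return
    src = os.path.join(staging, "latest-mac.yml")
    dst = os.path.join(staging, "latest-mac-arm64.yml")
    # Only rename when the target does not already exist.
    if os.path.isfile(src) and not os.path.isfile(dst):
        os.rename(src, dst)
```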
- name: Generate SHA256 checksums (Unix)
if: runner.os != 'Windows'
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step generate_checksums_unix
- name: Generate SHA256 checksums (Windows)
if: runner.os == 'Windows'
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/build_desktop.py
--step generate_checksums_windows
- name: Upload artifacts
uses: actions/upload-artifact@v4
@@ -456,16 +335,24 @@ jobs:
path: |
upload_staging/*.exe
upload_staging/*.exe.blockmap
upload_staging/*.exe.sha256
upload_staging/*.dmg
upload_staging/*.dmg.sha256
upload_staging/*.zip
upload_staging/*.zip.blockmap
upload_staging/*.zip.sha256
upload_staging/*.AppImage
upload_staging/*.AppImage.sha256
upload_staging/*.deb
upload_staging/*.deb.sha256
upload_staging/*.rpm
upload_staging/*.rpm.sha256
upload_staging/*.tar.gz
upload_staging/*.tar.gz.sha256
upload_staging/*.yml
upload_staging/*.nupkg
upload_staging/*.nupkg.blockmap
upload_staging/*.nupkg.sha256
upload_staging/RELEASES*
retention-days: 30
@@ -474,16 +361,25 @@ jobs:
needs:
- meta
- build
runs-on: blacksmith-2vcpu-ubuntu-2404
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
env:
CHANNEL: ${{ needs.meta.outputs.build_channel }}
DISPLAY_CHANNEL: ${{ needs.meta.outputs.channel }}
VERSION: ${{ needs.meta.outputs.version }}
PUB_DATE: ${{ needs.meta.outputs.pub_date }}
S3_ENDPOINT: https://s3.us-east-va.io.cloud.ovh.us
S3_BUCKET: fluxer-downloads
PUBLIC_DL_BASE: https://api.fluxer.app/dl
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
steps:
- name: Checkout repository
uses: actions/checkout@v6
with:
sparse-checkout: scripts/ci
sparse-checkout-cone-mode: false
- name: Download all artifacts
uses: actions/download-artifact@v4
with:
@@ -491,175 +387,29 @@ jobs:
pattern: fluxer-desktop-${{ needs.meta.outputs.build_channel }}-*
- name: Install rclone
run: |
set -euo pipefail
if ! command -v rclone >/dev/null 2>&1; then
curl -fsSL https://rclone.org/install.sh | sudo bash
fi
run: >-
python3 scripts/ci/workflows/build_desktop.py
--step install_rclone
- name: Configure rclone (OVH S3)
run: |
set -euo pipefail
mkdir -p ~/.config/rclone
cat > ~/.config/rclone/rclone.conf <<'RCLONEEOF'
[ovh]
type = s3
provider = Other
env_auth = true
endpoint = https://s3.us-east-va.io.cloud.ovh.us
acl = private
RCLONEEOF
run: >-
python3 scripts/ci/workflows/build_desktop.py
--step configure_rclone
- name: Build S3 payload layout (+ manifest.json)
env:
VERSION: ${{ needs.meta.outputs.version }}
PUB_DATE: ${{ needs.meta.outputs.pub_date }}
run: |
set -euo pipefail
mkdir -p s3_payload
shopt -s nullglob
for dir in artifacts/fluxer-desktop-${CHANNEL}-*; do
[ -d "$dir" ] || continue
base="$(basename "$dir")"
if [[ "$base" =~ ^fluxer-desktop-[a-z]+-([a-z]+)-([a-z0-9]+)$ ]]; then
platform="${BASH_REMATCH[1]}"
arch="${BASH_REMATCH[2]}"
else
echo "Skipping unrecognized artifact dir: $base"
continue
fi
case "$platform" in
windows) plat="win32" ;;
macos) plat="darwin" ;;
linux) plat="linux" ;;
*)
echo "Unknown platform: $platform"
continue
;;
esac
dest="s3_payload/desktop/${CHANNEL}/${plat}/${arch}"
mkdir -p "$dest"
cp -av "$dir"/* "$dest/" || true
if [[ "$plat" == "darwin" ]]; then
zip_file=""
for z in "$dest"/*.zip; do
zip_file="$z"
break
done
if [[ -z "$zip_file" ]]; then
echo "No .zip found for macOS $arch in $dest (auto-update requires zip artifacts)."
else
zip_name="$(basename "$zip_file")"
url="${PUBLIC_DL_BASE}/desktop/${CHANNEL}/${plat}/${arch}/${zip_name}"
cat > "$dest/RELEASES.json" <<EOF
{
"currentRelease": "${VERSION}",
"releases": [
{
"version": "${VERSION}",
"updateTo": {
"version": "${VERSION}",
"pub_date": "${PUB_DATE}",
"notes": "",
"name": "${VERSION}",
"url": "${url}"
}
}
]
}
EOF
cp -f "$dest/RELEASES.json" "$dest/releases.json"
fi
fi
setup_file=""
dmg_file=""
zip_file2=""
appimage_file=""
deb_file=""
rpm_file=""
targz_file=""
if [[ "$plat" == "win32" ]]; then
setup_file="$(ls -1 "$dest"/*.exe 2>/dev/null | grep -i 'setup' | head -n1 || true)"
if [[ -z "$setup_file" ]]; then
setup_file="$(ls -1 "$dest"/*.exe 2>/dev/null | head -n1 || true)"
fi
fi
if [[ "$plat" == "darwin" ]]; then
dmg_file="$(ls -1 "$dest"/*.dmg 2>/dev/null | head -n1 || true)"
zip_file2="$(ls -1 "$dest"/*.zip 2>/dev/null | head -n1 || true)"
fi
if [[ "$plat" == "linux" ]]; then
appimage_file="$(ls -1 "$dest"/*.AppImage 2>/dev/null | head -n1 || true)"
deb_file="$(ls -1 "$dest"/*.deb 2>/dev/null | head -n1 || true)"
rpm_file="$(ls -1 "$dest"/*.rpm 2>/dev/null | head -n1 || true)"
targz_file="$(ls -1 "$dest"/*.tar.gz 2>/dev/null | head -n1 || true)"
fi
jq -n \
--arg channel "${CHANNEL}" \
--arg platform "${plat}" \
--arg arch "${arch}" \
--arg version "${VERSION}" \
--arg pub_date "${PUB_DATE}" \
--arg setup "$(basename "${setup_file:-}")" \
--arg dmg "$(basename "${dmg_file:-}")" \
--arg zip "$(basename "${zip_file2:-}")" \
--arg appimage "$(basename "${appimage_file:-}")" \
--arg deb "$(basename "${deb_file:-}")" \
--arg rpm "$(basename "${rpm_file:-}")" \
--arg tar_gz "$(basename "${targz_file:-}")" \
'{
channel: $channel,
platform: $platform,
arch: $arch,
version: $version,
pub_date: $pub_date,
files: {
setup: $setup,
dmg: $dmg,
zip: $zip,
appimage: $appimage,
deb: $deb,
rpm: $rpm,
tar_gz: $tar_gz
}
}' > "$dest/manifest.json"
done
echo "Payload tree:"
find s3_payload -maxdepth 6 -type f | sort
run: >-
python3 scripts/ci/workflows/build_desktop.py
--step build_payload
- name: Upload payload to S3
run: |
set -euo pipefail
rclone copy s3_payload/desktop "ovh:${S3_BUCKET}/desktop" \
--transfers 32 \
--checkers 16 \
--fast-list \
--s3-upload-concurrency 8 \
--s3-chunk-size 16M \
-v
run: >-
python3 scripts/ci/workflows/build_desktop.py
--step upload_payload
- name: Build summary
run: |
{
echo "## Desktop ${DISPLAY_CHANNEL^} Upload Complete"
echo ""
echo "**Version:** ${{ needs.meta.outputs.version }}"
echo ""
echo "**S3 prefix:** desktop/${CHANNEL}/"
echo ""
echo "**Redirect endpoint shape:** /dl/desktop/${CHANNEL}/{plat}/{arch}/{format}"
} >> "$GITHUB_STEP_SUMMARY"
run: >-
python3 scripts/ci/workflows/build_desktop.py
--step build_summary


@@ -8,15 +8,9 @@ on:
github_ref_name:
type: string
required: false
github_ref:
type: string
required: false
workflow_dispatch_channel:
type: string
required: false
workflow_dispatch_ref:
type: string
required: false
outputs:
channel:
@@ -25,9 +19,6 @@ on:
is_canary:
description: 'Whether this is a canary deploy (true|false)'
value: ${{ jobs.emit.outputs.is_canary }}
source_ref:
description: 'Git ref to check out for the deploy'
value: ${{ jobs.emit.outputs.source_ref }}
stack_suffix:
description: "Suffix for stack/image names ('' or '-canary')"
value: ${{ jobs.emit.outputs.stack_suffix }}
@@ -35,60 +26,23 @@ on:
jobs:
emit:
runs-on: ubuntu-latest
timeout-minutes: 25
outputs:
channel: ${{ steps.compute.outputs.channel }}
is_canary: ${{ steps.compute.outputs.is_canary }}
source_ref: ${{ steps.compute.outputs.source_ref }}
stack_suffix: ${{ steps.compute.outputs.stack_suffix }}
steps:
- name: Checkout repository
uses: actions/checkout@v6
with:
sparse-checkout: scripts/ci
sparse-checkout-cone-mode: false
- name: Determine channel
id: compute
shell: bash
run: |
set -euo pipefail
event_name="${{ inputs.github_event_name }}"
ref_name="${{ inputs.github_ref_name || '' }}"
ref="${{ inputs.github_ref || '' }}"
dispatch_channel="${{ inputs.workflow_dispatch_channel || '' }}"
dispatch_ref="${{ inputs.workflow_dispatch_ref || '' }}"
channel="stable"
if [[ "${event_name}" == "push" ]]; then
if [[ "${ref_name}" == "canary" ]]; then
channel="canary"
fi
else
if [[ "${dispatch_channel}" == "canary" ]]; then
channel="canary"
fi
fi
if [[ "${event_name}" == "push" ]]; then
source_ref="${ref:-refs/heads/${ref_name:-main}}"
else
if [[ -n "${dispatch_ref}" ]]; then
source_ref="${dispatch_ref}"
else
if [[ "${channel}" == "canary" ]]; then
source_ref="refs/heads/canary"
else
source_ref="refs/heads/main"
fi
fi
fi
stack_suffix=""
if [[ "${channel}" == "canary" ]]; then
stack_suffix="-canary"
fi
is_canary="false"
if [[ "${channel}" == "canary" ]]; then
is_canary="true"
fi
printf 'channel=%s\n' "${channel}" >> "$GITHUB_OUTPUT"
printf 'is_canary=%s\n' "${is_canary}" >> "$GITHUB_OUTPUT"
printf 'source_ref=%s\n' "${source_ref}" >> "$GITHUB_OUTPUT"
printf 'stack_suffix=%s\n' "${stack_suffix}" >> "$GITHUB_OUTPUT"
run: >-
python3 scripts/ci/workflows/channel_vars.py
--event-name "${{ inputs.github_event_name }}"
--ref-name "${{ inputs.github_ref_name || '' }}"
--dispatch-channel "${{ inputs.workflow_dispatch_channel || '' }}"
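The replaced bash reduces to a small decision table; `channel_vars.py` presumably implements something like the sketch below (illustrative only; note the new script no longer emits `source_ref`, matching the removed output above):

```python
def compute_channel_vars(event_name: str, ref_name: str, dispatch_channel: str) -> dict:
    """Canary on pushes to the canary branch, or when the dispatch
    input asks for it; stable otherwise."""
    if event_name == "push":
        channel = "canary" if ref_name == "canary" else "stable"
    else:
        channel = "canary" if dispatch_channel == "canary" else "stable"
    is_canary = channel == "canary"
    return {
        "channel": channel,
        "is_canary": "true" if is_canary else "false",
        "stack_suffix": "-canary" if is_canary else "",
    }
```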

.github/workflows/ci.yaml

@@ -0,0 +1,137 @@
name: CI
on:
pull_request:
types: [opened, reopened, synchronize]
jobs:
typecheck:
runs-on: ubuntu-latest
timeout-minutes: 25
steps:
- name: Checkout code
uses: actions/checkout@v6
- name: Install pnpm
uses: pnpm/action-setup@v4
- name: Install Node.js
uses: actions/setup-node@v4
with:
node-version: '24'
cache: 'pnpm'
- name: Install dependencies
run: python3 scripts/ci/workflows/ci.py --step install_dependencies
- name: Run typecheck
run: python3 scripts/ci/workflows/ci.py --step typecheck
env:
TURBO_API: https://turborepo.fluxer.dev
TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
TURBO_TEAM: team_fluxer
test:
runs-on: ubuntu-latest
timeout-minutes: 25
steps:
- name: Checkout code
uses: actions/checkout@v6
- name: Install pnpm
uses: pnpm/action-setup@v4
- name: Install Node.js
uses: actions/setup-node@v4
with:
node-version: '24'
cache: 'pnpm'
- name: Install dependencies
run: python3 scripts/ci/workflows/ci.py --step install_dependencies
- name: Run tests
run: python3 scripts/ci/workflows/ci.py --step test
env:
FLUXER_CONFIG: config/config.test.json
TURBO_API: https://turborepo.fluxer.dev
TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
TURBO_TEAM: team_fluxer
gateway:
runs-on: ubuntu-latest
timeout-minutes: 25
steps:
- name: Checkout code
uses: actions/checkout@v6
- name: Set up Erlang
uses: erlef/setup-beam@v1
with:
otp-version: '28'
rebar3-version: '3.24.0'
- name: Cache rebar3 dependencies
uses: actions/cache@v4
with:
path: |
fluxer_gateway/_build
~/.cache/rebar3
key: rebar3-${{ runner.os }}-${{ hashFiles('fluxer_gateway/rebar.lock') }}
restore-keys: |
rebar3-${{ runner.os }}-
- name: Compile
run: python3 scripts/ci/workflows/ci.py --step gateway_compile
- name: Run dialyzer
run: python3 scripts/ci/workflows/ci.py --step gateway_dialyzer
- name: Run eunit tests
run: python3 scripts/ci/workflows/ci.py --step gateway_eunit
env:
FLUXER_CONFIG: ../config/config.test.json
knip:
runs-on: ubuntu-latest
timeout-minutes: 25
steps:
- name: Checkout code
uses: actions/checkout@v6
- name: Install pnpm
uses: pnpm/action-setup@v4
- name: Install Node.js
uses: actions/setup-node@v4
with:
node-version: '24'
cache: 'pnpm'
- name: Install dependencies
run: python3 scripts/ci/workflows/ci.py --step install_dependencies
- name: Run knip
run: python3 scripts/ci/workflows/ci.py --step knip
env:
TURBO_API: https://turborepo.fluxer.dev
TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
TURBO_TEAM: team_fluxer
ci-scripts:
runs-on: ubuntu-latest
timeout-minutes: 25
steps:
- name: Checkout code
uses: actions/checkout@v6
- name: Set up uv
uses: astral-sh/setup-uv@v7
with:
python-version: "3.12"
- name: Sync ci python dependencies
run: python3 scripts/ci/workflows/ci_scripts.py --step sync
- name: Run ci python tests
run: python3 scripts/ci/workflows/ci_scripts.py --step test


@@ -16,12 +16,12 @@ on:
- stable
- canary
default: stable
description: Channel to deploy
description: Release channel to deploy
ref:
type: string
required: false
default: ''
description: Optional git ref to deploy (defaults to main/canary based on channel)
description: Optional git ref (defaults to the triggering branch)
concurrency:
group: deploy-fluxer-admin-${{ github.event_name == 'workflow_dispatch' && inputs.channel || (github.ref_name == 'canary' && 'canary') || 'stable' }}
@@ -35,43 +35,33 @@ jobs:
with:
github_event_name: ${{ github.event_name }}
github_ref_name: ${{ github.ref_name }}
github_ref: ${{ github.ref }}
workflow_dispatch_channel: ${{ github.event_name == 'workflow_dispatch' && inputs.channel || '' }}
workflow_dispatch_ref: ${{ github.event_name == 'workflow_dispatch' && inputs.ref || '' }}
deploy:
name: Deploy admin
needs: channel-vars
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 10
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
env:
CHANNEL: ${{ needs.channel-vars.outputs.channel }}
IS_CANARY: ${{ needs.channel-vars.outputs.is_canary }}
SOURCE_REF: ${{ needs.channel-vars.outputs.source_ref }}
STACK_SUFFIX: ${{ needs.channel-vars.outputs.stack_suffix }}
STACK: ${{ format('fluxer-admin{0}', needs.channel-vars.outputs.stack_suffix) }}
CACHE_SCOPE: ${{ format('deploy-fluxer-admin{0}', needs.channel-vars.outputs.stack_suffix) }}
CADDY_DOMAIN: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'admin.canary.fluxer.app' || 'admin.fluxer.app' }}
APP_ENDPOINT: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'https://web.canary.fluxer.app' || 'https://web.fluxer.app' }}
API_PUBLIC_ENDPOINT: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'https://api.canary.fluxer.app' || 'https://api.fluxer.app' }}
ADMIN_ENDPOINT: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'https://admin.canary.fluxer.app' || 'https://admin.fluxer.app' }}
ADMIN_REDIRECT_URI: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'https://admin.canary.fluxer.app/oauth2_callback' || 'https://admin.fluxer.app/oauth2_callback' }}
REPLICAS: ${{ needs.channel-vars.outputs.is_canary == 'true' && 1 || 2 }}
RELEASE_CHANNEL: ${{ needs.channel-vars.outputs.channel }}
steps:
- uses: actions/checkout@v6
with:
ref: ${{ env.SOURCE_REF }}
ref: ${{ inputs.ref || '' }}
fetch-depth: 0
- name: Record deploy commit
run: |
set -euo pipefail
sha=$(git rev-parse HEAD)
echo "Deploying commit ${sha}"
printf 'DEPLOY_SHA=%s\n' "$sha" >> "$GITHUB_ENV"
run: python3 scripts/ci/workflows/deploy_admin.py --step record_deploy_commit
- name: Set build timestamp
run: echo "BUILD_TIMESTAMP=$(date -u +%s)" >> "$GITHUB_ENV"
run: python3 scripts/ci/workflows/deploy_admin.py --step set_build_timestamp
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -85,7 +75,7 @@ jobs:
- name: Build image
uses: docker/build-push-action@v6
with:
context: fluxer_admin
context: .
file: fluxer_admin/Dockerfile
tags: ${{ env.STACK }}:${{ env.DEPLOY_SHA }}
load: true
@@ -93,18 +83,16 @@ jobs:
cache-from: type=gha,scope=${{ env.CACHE_SCOPE }}
cache-to: type=gha,mode=max,scope=${{ env.CACHE_SCOPE }}
build-args: |
BUILD_SHA=${{ env.DEPLOY_SHA }}
BUILD_NUMBER=${{ github.run_number }}
BUILD_TIMESTAMP=${{ env.BUILD_TIMESTAMP }}
RELEASE_CHANNEL=${{ env.RELEASE_CHANNEL }}
env:
DOCKER_BUILD_SUMMARY: false
DOCKER_BUILD_RECORD_UPLOAD: false
- name: Install docker-pussh
run: |
set -euo pipefail
mkdir -p ~/.docker/cli-plugins
curl -fsSL https://raw.githubusercontent.com/psviderski/unregistry/v0.3.1/docker-pussh \
-o ~/.docker/cli-plugins/docker-pussh
chmod +x ~/.docker/cli-plugins/docker-pussh
run: python3 scripts/ci/workflows/deploy_admin.py --step install_docker_pussh
- name: Set up SSH agent
uses: webfactory/ssh-agent@v0.9.1
@@ -112,96 +100,13 @@ jobs:
ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY_SERVER }}
- name: Add server to known hosts
run: |
set -euo pipefail
mkdir -p ~/.ssh
ssh-keyscan -H ${{ secrets.SERVER_IP }} >> ~/.ssh/known_hosts
run: python3 scripts/ci/workflows/deploy_admin.py --step add_known_hosts --server-ip ${{ secrets.SERVER_IP }}
- name: Push image and deploy
env:
IMAGE_TAG: ${{ env.STACK }}:${{ env.DEPLOY_SHA }}
SERVER: ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_IP }}
STACK: ${{ env.STACK }}
APP_ENDPOINT: ${{ env.APP_ENDPOINT }}
API_PUBLIC_ENDPOINT: ${{ env.API_PUBLIC_ENDPOINT }}
ADMIN_ENDPOINT: ${{ env.ADMIN_ENDPOINT }}
ADMIN_REDIRECT_URI: ${{ env.ADMIN_REDIRECT_URI }}
CADDY_DOMAIN: ${{ env.CADDY_DOMAIN }}
REPLICAS: ${{ env.REPLICAS }}
run: |
set -euo pipefail
docker pussh "${IMAGE_TAG}" "${SERVER}"
ssh "${SERVER}" \
"IMAGE_TAG=${IMAGE_TAG} STACK=${STACK} APP_ENDPOINT=${APP_ENDPOINT} API_PUBLIC_ENDPOINT=${API_PUBLIC_ENDPOINT} ADMIN_ENDPOINT=${ADMIN_ENDPOINT} ADMIN_REDIRECT_URI=${ADMIN_REDIRECT_URI} CADDY_DOMAIN=${CADDY_DOMAIN} REPLICAS=${REPLICAS} bash" << 'EOF'
set -euo pipefail
sudo mkdir -p "/opt/${STACK}"
sudo chown -R "${USER}:${USER}" "/opt/${STACK}"
cd "/opt/${STACK}"
cat > compose.yaml << COMPOSEEOF
x-deploy-base: &deploy_base
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
update_config:
parallelism: 1
delay: 10s
order: start-first
rollback_config:
parallelism: 1
delay: 10s
x-healthcheck: &healthcheck
test: ['CMD', 'curl', '-f', 'http://localhost:8080/']
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
services:
app:
image: ${IMAGE_TAG}
env_file:
- /etc/fluxer/fluxer.env
environment:
FLUXER_API_PUBLIC_ENDPOINT: ${API_PUBLIC_ENDPOINT}
FLUXER_APP_ENDPOINT: ${APP_ENDPOINT}
FLUXER_MEDIA_ENDPOINT: https://fluxerusercontent.com
FLUXER_CDN_ENDPOINT: https://fluxerstatic.com
FLUXER_ADMIN_ENDPOINT: ${ADMIN_ENDPOINT}
FLUXER_PATH_ADMIN: /
APP_MODE: admin
FLUXER_ADMIN_PORT: 8080
ADMIN_OAUTH2_REDIRECT_URI: ${ADMIN_REDIRECT_URI}
ADMIN_OAUTH2_CLIENT_ID: 1440355698178071552
ADMIN_OAUTH2_AUTO_CREATE: "false"
FLUXER_METRICS_HOST: fluxer-metrics_app:8080
deploy:
<<: *deploy_base
replicas: ${REPLICAS}
labels:
- "caddy=${CADDY_DOMAIN}"
- 'caddy.reverse_proxy={{upstreams 8080}}'
- 'caddy.header.X-Robots-Tag="noindex, nofollow, nosnippet, noimageindex"'
- 'caddy.header.Strict-Transport-Security="max-age=31536000; includeSubDomains; preload"'
- 'caddy.header.X-Xss-Protection="1; mode=block"'
- 'caddy.header.X-Content-Type-Options=nosniff'
- 'caddy.header.Referrer-Policy=strict-origin-when-cross-origin'
- 'caddy.header.X-Frame-Options=DENY'
networks: [fluxer-shared]
healthcheck: *healthcheck
networks:
fluxer-shared:
external: true
COMPOSEEOF
docker stack deploy \
--with-registry-auth \
--detach=false \
--resolve-image never \
-c compose.yaml \
"${STACK}"
EOF
run: python3 scripts/ci/workflows/deploy_admin.py --step push_and_deploy
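The removed shell above pushed the image with `docker pussh` and then ran a heredoc on the server via `ssh`, passing settings as `KEY=VAL … bash` prefixes. A sketch of how `push_and_deploy` might rebuild that invocation (hypothetical; the real script is not shown):

```python
# Hypothetical sketch of push_and_deploy's remote invocation, mirroring
# the removed `docker pussh` + `ssh ... "VAR=... bash" << 'EOF'` shell.
import shlex
import subprocess


def remote_command(env: dict[str, str], command: str = "bash") -> str:
    """Build the 'KEY=VAL KEY2=VAL2 bash' string the old step used."""
    assignments = " ".join(f"{key}={shlex.quote(val)}" for key, val in env.items())
    return f"{assignments} {command}"


def push_and_deploy(server: str, image_tag: str,
                    env: dict[str, str], script: str) -> None:
    # Transfer the locally built image straight to the host (no registry).
    subprocess.run(["docker", "pussh", image_tag, server], check=True)
    # Feed the deploy script on stdin, as the heredoc did.
    subprocess.run(["ssh", server, remote_command(env, "bash")],
                   input=script, text=True, check=True)
```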


@@ -16,12 +16,12 @@ on:
- stable
- canary
default: stable
description: Channel to deploy
description: Release channel to deploy
ref:
type: string
required: false
default: ''
description: Optional git ref to deploy (defaults to main/canary based on channel)
description: Optional git ref (defaults to the triggering branch)
concurrency:
group: deploy-fluxer-api-${{ github.event_name == 'workflow_dispatch' && inputs.channel || (github.ref_name == 'canary' && 'canary') || 'stable' }}
@@ -36,48 +36,33 @@ jobs:
with:
github_event_name: ${{ github.event_name }}
github_ref_name: ${{ github.ref_name }}
github_ref: ${{ github.ref }}
workflow_dispatch_channel: ${{ github.event_name == 'workflow_dispatch' && inputs.channel || '' }}
workflow_dispatch_ref: ${{ github.event_name == 'workflow_dispatch' && inputs.ref || '' }}
deploy:
name: Deploy api
needs: channel-vars
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 10
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
env:
CHANNEL: ${{ needs.channel-vars.outputs.channel }}
IS_CANARY: ${{ needs.channel-vars.outputs.is_canary }}
SOURCE_REF: ${{ needs.channel-vars.outputs.source_ref }}
STACK_SUFFIX: ${{ needs.channel-vars.outputs.stack_suffix }}
STACK: ${{ format('fluxer-api{0}', needs.channel-vars.outputs.stack_suffix) }}
WORKER_STACK: ${{ format('fluxer-api-worker{0}', needs.channel-vars.outputs.stack_suffix) }}
WORKER_STACK: fluxer-api-worker
CANARY_WORKER_REPLICAS: 3
CACHE_SCOPE: ${{ format('deploy-fluxer-api{0}', needs.channel-vars.outputs.stack_suffix) }}
API_PUBLIC_ENDPOINT: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'https://api.canary.fluxer.app' || 'https://api.fluxer.app' }}
API_CLIENT_ENDPOINT: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'https://web.canary.fluxer.app/api' || 'https://web.fluxer.app/api' }}
APP_ENDPOINT: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'https://web.canary.fluxer.app' || 'https://web.fluxer.app' }}
MARKETING_ENDPOINT: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'https://canary.fluxer.app' || 'https://fluxer.app' }}
ADMIN_ENDPOINT: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'https://admin.canary.fluxer.app' || 'https://admin.fluxer.app' }}
ADMIN_REDIRECT_URI: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'https://admin.canary.fluxer.app/oauth2_callback' || 'https://admin.fluxer.app/oauth2_callback' }}
CADDY_DOMAIN: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'api.canary.fluxer.app' || 'api.fluxer.app' }}
RELEASE_CHANNEL: ${{ needs.channel-vars.outputs.channel }}
steps:
- uses: actions/checkout@v6
with:
ref: ${{ env.SOURCE_REF }}
ref: ${{ inputs.ref || '' }}
fetch-depth: 0
- name: Record deploy commit
run: |
set -euo pipefail
sha=$(git rev-parse HEAD)
echo "Deploying commit ${sha}"
printf 'DEPLOY_SHA=%s\n' "$sha" >> "$GITHUB_ENV"
run: python3 scripts/ci/workflows/deploy_api.py --step record_deploy_commit
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -91,7 +76,7 @@ jobs:
- name: Build image(s)
uses: docker/build-push-action@v6
with:
context: fluxer_api
context: .
file: fluxer_api/Dockerfile
tags: |
${{ env.STACK }}:${{ env.DEPLOY_SHA }}
@@ -100,17 +85,17 @@ jobs:
platforms: linux/amd64
cache-from: type=gha,scope=${{ env.CACHE_SCOPE }}
cache-to: type=gha,mode=max,scope=${{ env.CACHE_SCOPE }}
build-args: |
BUILD_SHA=${{ env.SENTRY_BUILD_SHA }}
BUILD_NUMBER=${{ env.SENTRY_BUILD_NUMBER }}
BUILD_TIMESTAMP=${{ env.SENTRY_BUILD_TIMESTAMP }}
RELEASE_CHANNEL=${{ env.RELEASE_CHANNEL }}
env:
DOCKER_BUILD_SUMMARY: false
DOCKER_BUILD_RECORD_UPLOAD: false
- name: Install docker-pussh
run: |
set -euo pipefail
mkdir -p ~/.docker/cli-plugins
curl -fsSL https://raw.githubusercontent.com/psviderski/unregistry/v0.3.1/docker-pussh \
-o ~/.docker/cli-plugins/docker-pussh
chmod +x ~/.docker/cli-plugins/docker-pussh
run: python3 scripts/ci/workflows/deploy_api.py --step install_docker_pussh
- name: Set up SSH agent
uses: webfactory/ssh-agent@v0.9.1
@@ -118,240 +103,17 @@ jobs:
ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY_SERVER }}
- name: Add server to known hosts
run: |
set -euo pipefail
mkdir -p ~/.ssh
ssh-keyscan -H ${{ secrets.SERVER_IP }} >> ~/.ssh/known_hosts
run: python3 scripts/ci/workflows/deploy_api.py --step add_known_hosts --server-ip ${{ secrets.SERVER_IP }}
- name: Push image(s) and deploy
env:
SERVER: ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_IP }}
IMAGE_TAG_APP: ${{ env.STACK }}:${{ env.DEPLOY_SHA }}
IMAGE_TAG_WORKER: ${{ env.WORKER_STACK }}:${{ env.DEPLOY_SHA }}
run: |
set -euo pipefail
docker pussh "${IMAGE_TAG_APP}" "${SERVER}"
if [[ "${IS_CANARY}" == "true" ]]; then
docker pussh "${IMAGE_TAG_WORKER}" "${SERVER}"
fi
ssh "${SERVER}" \
"IMAGE_TAG_APP=${IMAGE_TAG_APP} IMAGE_TAG_WORKER=${IMAGE_TAG_WORKER} STACK=${STACK} WORKER_STACK=${WORKER_STACK} IS_CANARY=${IS_CANARY} APP_ENDPOINT=${APP_ENDPOINT} API_PUBLIC_ENDPOINT=${API_PUBLIC_ENDPOINT} API_CLIENT_ENDPOINT=${API_CLIENT_ENDPOINT} MARKETING_ENDPOINT=${MARKETING_ENDPOINT} ADMIN_ENDPOINT=${ADMIN_ENDPOINT} ADMIN_REDIRECT_URI=${ADMIN_REDIRECT_URI} CADDY_DOMAIN=${CADDY_DOMAIN} bash" << 'EOF'
set -euo pipefail
write_runtime_env() {
local dir="$1"
cat > "${dir}/runtime.env" << ENVEOF
NODE_ENV=production
FLUXER_API_PORT=8080
SENTRY_DSN=https://bb16e8b823b82d788db49a666b3b4b90@o4510149383094272.ingest.us.sentry.io/4510205804019712
CASSANDRA_HOSTS=cassandra
CASSANDRA_KEYSPACE=fluxer
CASSANDRA_LOCAL_DC=dc1
FLUXER_GATEWAY_RPC_HOST=fluxer-gateway_app
FLUXER_GATEWAY_RPC_PORT=8081
FLUXER_MEDIA_PROXY_HOST=fluxer-media-proxy_app
FLUXER_MEDIA_PROXY_PORT=8080
FLUXER_METRICS_HOST=fluxer-metrics_app:8080
FLUXER_API_CLIENT_ENDPOINT=${API_CLIENT_ENDPOINT}
FLUXER_APP_ENDPOINT=${APP_ENDPOINT}
FLUXER_CDN_ENDPOINT=https://fluxerstatic.com
FLUXER_MEDIA_ENDPOINT=https://fluxerusercontent.com
FLUXER_INVITE_ENDPOINT=https://fluxer.gg
FLUXER_GIFT_ENDPOINT=https://fluxer.gift
AWS_S3_ENDPOINT=https://s3.us-east-va.io.cloud.ovh.us
AWS_S3_BUCKET_CDN=fluxer
AWS_S3_BUCKET_UPLOADS=fluxer-uploads
AWS_S3_BUCKET_REPORTS=fluxer-reports
AWS_S3_BUCKET_HARVESTS=fluxer-harvests
AWS_S3_BUCKET_DOWNLOADS=fluxer-downloads
SENDGRID_FROM_EMAIL=noreply@fluxer.app
SENDGRID_FROM_NAME=Fluxer
SENDGRID_WEBHOOK_PUBLIC_KEY=MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEoeqQS37o9s8ZcLBJUtT4hghAmI5RqsvcQ0OvsUn3XPfl7GkjxljufyxuL8+m1mCHP2IA1jdYT3kJQoQYXP6ZpQ==
FLUXER_API_PUBLIC_ENDPOINT=${API_PUBLIC_ENDPOINT}
FLUXER_GATEWAY_ENDPOINT=wss://gateway.fluxer.app
FLUXER_MARKETING_ENDPOINT=${MARKETING_ENDPOINT}
FLUXER_PATH_MARKETING=/
FLUXER_ADMIN_ENDPOINT=${ADMIN_ENDPOINT}
FLUXER_PATH_ADMIN=/
ADMIN_OAUTH2_CLIENT_ID=1440355698178071552
ADMIN_OAUTH2_REDIRECT_URI=${ADMIN_REDIRECT_URI}
ADMIN_OAUTH2_AUTO_CREATE=false
PASSKEYS_ENABLED=true
PASSKEY_RP_NAME=Fluxer
PASSKEY_RP_ID=fluxer.app
PASSKEY_ALLOWED_ORIGINS=https://web.fluxer.app,https://web.canary.fluxer.app
CAPTCHA_ENABLED=true
CAPTCHA_PRIMARY_PROVIDER=turnstile
HCAPTCHA_SITE_KEY=9cbad400-df84-4e0c-bda6-e65000be78aa
TURNSTILE_SITE_KEY=0x4AAAAAAB_lAoDdTWznNHMq
EMAIL_ENABLED=true
SMS_ENABLED=true
VOICE_ENABLED=true
SEARCH_ENABLED=true
MEILISEARCH_URL=http://meilisearch:7700
STRIPE_ENABLED=true
STRIPE_PRICE_ID_MONTHLY_USD=price_1SJHZzFPC94Os7FdzBgvz0go
STRIPE_PRICE_ID_YEARLY_USD=price_1SJHabFPC94Os7FdhSOWVfcr
STRIPE_PRICE_ID_VISIONARY_USD=price_1SJHGTFPC94Os7FdWTyqvJZ8
STRIPE_PRICE_ID_MONTHLY_EUR=price_1SJHaFFPC94Os7FdmcrGicXa
STRIPE_PRICE_ID_YEARLY_EUR=price_1SJHarFPC94Os7Fddbyzr5I8
STRIPE_PRICE_ID_VISIONARY_EUR=price_1SJHGnFPC94Os7FdZn23KkYB
STRIPE_PRICE_ID_GIFT_VISIONARY_USD=price_1SKhWqFPC94Os7FdxRmQrg3k
STRIPE_PRICE_ID_GIFT_VISIONARY_EUR=price_1SKhXrFPC94Os7FdcepLrJqr
STRIPE_PRICE_ID_GIFT_1_MONTH_USD=price_1SJHHKFPC94Os7FdGwUs1EQg
STRIPE_PRICE_ID_GIFT_1_YEAR_USD=price_1SJHHrFPC94Os7FdWrQN5tKl
STRIPE_PRICE_ID_GIFT_1_MONTH_EUR=price_1SJHHaFPC94Os7FdwwpwhliW
STRIPE_PRICE_ID_GIFT_1_YEAR_EUR=price_1SJHI5FPC94Os7Fd3DpLxb0D
FLUXER_VISIONARIES_GUILD_ID=1428504839258075143
FLUXER_OPERATORS_GUILD_ID=1434192442151473226
CLOUDFLARE_PURGE_ENABLED=true
CLAMAV_ENABLED=true
CLAMAV_HOST=clamav
CLAMAV_PORT=3310
MAXMIND_DB_PATH=/data/GeoLite2-City.mmdb
VAPID_PUBLIC_KEY=BEIwQxIwfj6m90tLYAR0AU_GJWU4kw8J_zJcHQG55NCUWSyRy-dzMOgvxk8yEDwdVyJZa6xUL4fmwngijq8T2pY
ENVEOF
}
deploy_api_stack() {
sudo mkdir -p "/opt/${STACK}"
sudo chown -R "${USER}:${USER}" "/opt/${STACK}"
cd "/opt/${STACK}"
write_runtime_env "$(pwd)"
cat > compose.yaml << COMPOSEEOF
x-deploy-base: &deploy_base
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
update_config:
parallelism: 1
delay: 10s
order: start-first
rollback_config:
parallelism: 1
delay: 10s
x-healthcheck: &healthcheck
test: ['CMD', 'curl', '-f', 'http://localhost:8080/_health']
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
services:
app:
image: ${IMAGE_TAG_APP}
command: ['npm', 'run', 'start']
env_file:
- /etc/fluxer/fluxer.env
- ./runtime.env
volumes:
- /opt/geoip/GeoLite2-City.mmdb:/data/GeoLite2-City.mmdb:ro
deploy:
<<: *deploy_base
replicas: 2
labels:
- "caddy=${CADDY_DOMAIN}"
- 'caddy.reverse_proxy={{upstreams 8080}}'
- 'caddy.header.Strict-Transport-Security="max-age=31536000; includeSubDomains; preload"'
- 'caddy.header.X-Xss-Protection="1; mode=block"'
- 'caddy.header.X-Content-Type-Options=nosniff'
- 'caddy.header.Referrer-Policy=strict-origin-when-cross-origin'
- 'caddy.header.X-Frame-Options=DENY'
- 'caddy.header.Expect-Ct="max-age=86400, report-uri=\\"https://o4510149383094272.ingest.us.sentry.io/api/4510205804019712/security/?sentry_key=bb16e8b823b82d788db49a666b3b4b90\\""'
networks: [fluxer-shared]
healthcheck: *healthcheck
networks:
fluxer-shared:
external: true
COMPOSEEOF
docker stack deploy \
--with-registry-auth \
--detach=false \
--resolve-image never \
-c compose.yaml \
"${STACK}"
}
deploy_worker_stack_canary_only() {
if [[ "${IS_CANARY}" != "true" ]]; then
return 0
fi
sudo mkdir -p "/opt/${WORKER_STACK}"
sudo chown -R "${USER}:${USER}" "/opt/${WORKER_STACK}"
cd "/opt/${WORKER_STACK}"
write_runtime_env "$(pwd)"
cat > compose.yaml << COMPOSEEOF
x-deploy-base: &deploy_base
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
update_config:
parallelism: 1
delay: 10s
order: start-first
rollback_config:
parallelism: 1
delay: 10s
services:
worker:
image: ${IMAGE_TAG_WORKER}
command: ['npm', 'run', 'start:worker']
env_file:
- /etc/fluxer/fluxer.env
- ./runtime.env
deploy:
<<: *deploy_base
replicas: 1
networks: [fluxer-shared]
networks:
fluxer-shared:
external: true
COMPOSEEOF
docker stack deploy \
--with-registry-auth \
--detach=false \
--resolve-image never \
-c compose.yaml \
"${WORKER_STACK}"
}
deploy_api_stack
deploy_worker_stack_canary_only
EOF
CANARY_WORKER_REPLICAS: ${{ env.CANARY_WORKER_REPLICAS }}
SENTRY_BUILD_SHA: ${{ env.SENTRY_BUILD_SHA }}
SENTRY_BUILD_NUMBER: ${{ env.SENTRY_BUILD_NUMBER }}
SENTRY_BUILD_TIMESTAMP: ${{ env.SENTRY_BUILD_TIMESTAMP }}
RELEASE_CHANNEL: ${{ env.CHANNEL }}
SENTRY_RELEASE: ${{ format('fluxer-api@{0}', env.SENTRY_BUILD_SHA) }}
run: python3 scripts/ci/workflows/deploy_api.py --step push_and_deploy
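The removed `write_runtime_env()` above generated `runtime.env` from a heredoc of `KEY=value` lines. A hypothetical sketch of the same file writer in `deploy_api.py`'s style (the real script is not shown):

```python
# Hypothetical sketch of regenerating runtime.env from a dict, matching
# the KEY=value format of the removed heredoc; real script not shown.
from pathlib import Path


def write_runtime_env(directory: str, values: dict[str, str]) -> Path:
    """Write one KEY=value line per entry, as the old heredoc did."""
    path = Path(directory) / "runtime.env"
    lines = [f"{key}={value}" for key, value in values.items()]
    path.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return path
```

The compose services then pick the file up via `env_file: [./runtime.env]`, unchanged from the old flow.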


@@ -7,6 +7,7 @@ on:
- canary
paths:
- fluxer_app/**
- fluxer_app_proxy/**
- .github/workflows/deploy-app.yaml
workflow_dispatch:
inputs:
@@ -16,12 +17,12 @@ on:
- stable
- canary
default: stable
description: Channel to deploy
description: Release channel to deploy
ref:
type: string
required: false
default: ''
description: Optional git ref to deploy (defaults to main/canary based on channel)
description: Optional git ref (defaults to the triggering branch)
concurrency:
group: deploy-fluxer-app-${{ github.event_name == 'workflow_dispatch' && inputs.channel || (github.ref_name == 'canary' && 'canary') || 'stable' }}
@@ -36,50 +37,33 @@ jobs:
with:
github_event_name: ${{ github.event_name }}
github_ref_name: ${{ github.ref_name }}
github_ref: ${{ github.ref }}
workflow_dispatch_channel: ${{ github.event_name == 'workflow_dispatch' && inputs.channel || '' }}
workflow_dispatch_ref: ${{ github.event_name == 'workflow_dispatch' && inputs.ref || '' }}
deploy:
name: Deploy app
needs: channel-vars
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 10
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
env:
CHANNEL: ${{ needs.channel-vars.outputs.channel }}
IS_CANARY: ${{ needs.channel-vars.outputs.is_canary }}
SOURCE_REF: ${{ needs.channel-vars.outputs.source_ref }}
STACK_SUFFIX: ${{ needs.channel-vars.outputs.stack_suffix }}
SERVICE_NAME: ${{ format('fluxer-app{0}', needs.channel-vars.outputs.stack_suffix) }}
DOCKERFILE: fluxer_app/proxy/Dockerfile
SENTRY_PROXY_PATH: /error-reporting-proxy
DOCKERFILE: fluxer_app_proxy/Dockerfile
CACHE_SCOPE: ${{ format('fluxer-app{0}', needs.channel-vars.outputs.stack_suffix) }}
PUBLIC_BOOTSTRAP_API_ENDPOINT: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'https://web.canary.fluxer.app/api' || 'https://web.fluxer.app/api' }}
PUBLIC_BOOTSTRAP_API_PUBLIC_ENDPOINT: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'https://api.canary.fluxer.app' || 'https://api.fluxer.app' }}
PUBLIC_PROJECT_ENV: ${{ needs.channel-vars.outputs.channel }}
PUBLIC_SENTRY_DSN: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'https://59ced0e2666ab83dd1ddb056cdd22d1b@sentry.web.canary.fluxer.app/4510205815291904' || 'https://59ced0e2666ab83dd1ddb056cdd22d1b@sentry.web.fluxer.app/4510205815291904' }}
SENTRY_REPORT_HOST: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'https://sentry.web.canary.fluxer.app' || 'https://sentry.web.fluxer.app' }}
API_TARGET: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'fluxer-api-canary_app' || 'fluxer-api_app' }}
CADDY_APP_DOMAIN: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'web.canary.fluxer.app' || 'web.fluxer.app' }}
SENTRY_CADDY_DOMAIN: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'sentry.web.canary.fluxer.app' || 'sentry.web.fluxer.app' }}
RELEASE_CHANNEL: ${{ needs.channel-vars.outputs.channel }}
APP_REPLICAS: ${{ needs.channel-vars.outputs.is_canary == 'true' && 1 || 2 }}
steps:
- uses: actions/checkout@v6
with:
ref: ${{ env.SOURCE_REF }}
ref: ${{ inputs.ref || '' }}
fetch-depth: 0
- name: Set up pnpm
uses: pnpm/action-setup@v4
with:
version: 10.26.0
- name: Set up Node.js
uses: actions/setup-node@v6
@@ -88,25 +72,18 @@ jobs:
cache: pnpm
cache-dependency-path: fluxer_app/pnpm-lock.yaml
- name: Set up Go
uses: actions/setup-go@v6
with:
go-version: '1.25.5'
- name: Install dependencies
working-directory: fluxer_app
run: pnpm install --frozen-lockfile
run: python3 scripts/ci/workflows/deploy_app.py --step install_dependencies
- name: Run Lingui i18n tasks
working-directory: fluxer_app
run: pnpm lingui:extract && pnpm lingui:compile --strict
run: python3 scripts/ci/workflows/deploy_app.py --step run_lingui
env:
TURBO_API: https://turborepo.fluxer.dev
TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
TURBO_TEAM: team_fluxer
- name: Record deploy commit
run: |
set -euo pipefail
sha=$(git rev-parse HEAD)
echo "Deploying commit ${sha}"
printf 'DEPLOY_SHA=%s\n' "$sha" >> "$GITHUB_ENV"
run: python3 scripts/ci/workflows/deploy_app.py --step record_deploy_commit
- name: Set up Rust
uses: dtolnay/rust-toolchain@stable
@@ -127,74 +104,45 @@ jobs:
${{ runner.os }}-cargo-
- name: Install wasm-pack
run: |
set -euo pipefail
if ! command -v wasm-pack >/dev/null 2>&1; then
cargo install wasm-pack --version 0.13.1
fi
run: python3 scripts/ci/workflows/deploy_app.py --step install_wasm_pack
- name: Generate wasm artifacts
working-directory: fluxer_app
run: pnpm wasm:codegen
run: python3 scripts/ci/workflows/deploy_app.py --step generate_wasm
env:
TURBO_API: https://turborepo.fluxer.dev
TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
TURBO_TEAM: team_fluxer
- name: Set up SSH agent
uses: webfactory/ssh-agent@v0.9.1
with:
ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY_SERVER }}
- name: Add server to known hosts
run: python3 scripts/ci/workflows/deploy_app.py --step add_known_hosts --server-ip ${{ secrets.SERVER_IP }}
- name: Fetch deployment config
env:
SERVER: ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_IP }}
RELEASE_CHANNEL: ${{ env.RELEASE_CHANNEL }}
run: python3 scripts/ci/workflows/deploy_app.py --step fetch_deployment_config
- name: Build application
working-directory: fluxer_app
env:
NODE_ENV: production
PUBLIC_BOOTSTRAP_API_ENDPOINT: ${{ env.PUBLIC_BOOTSTRAP_API_ENDPOINT }}
PUBLIC_BOOTSTRAP_API_PUBLIC_ENDPOINT: ${{ env.PUBLIC_BOOTSTRAP_API_PUBLIC_ENDPOINT }}
PUBLIC_API_VERSION: 1
PUBLIC_PROJECT_ENV: ${{ env.PUBLIC_PROJECT_ENV }}
PUBLIC_SENTRY_PROJECT_ID: 4510205815291904
PUBLIC_SENTRY_PUBLIC_KEY: 59ced0e2666ab83dd1ddb056cdd22d1b
PUBLIC_SENTRY_DSN: ${{ env.PUBLIC_SENTRY_DSN }}
PUBLIC_SENTRY_PROXY_PATH: ${{ env.SENTRY_PROXY_PATH }}
PUBLIC_BUILD_NUMBER: ${{ github.run_number }}
run: |
set -euo pipefail
export PUBLIC_BUILD_SHA=$(git rev-parse --short HEAD)
export PUBLIC_BUILD_TIMESTAMP=$(date +%s)
pnpm build
cat > dist/version.json << EOF
{
"sha": "$PUBLIC_BUILD_SHA",
"buildNumber": $PUBLIC_BUILD_NUMBER,
"timestamp": $PUBLIC_BUILD_TIMESTAMP,
"env": "$PUBLIC_PROJECT_ENV"
}
EOF
FLUXER_CONFIG: ${{ github.workspace }}/fluxer_app/config.json
TURBO_API: https://turborepo.fluxer.dev
TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
TURBO_TEAM: team_fluxer
run: python3 scripts/ci/workflows/deploy_app.py --step build_application
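The removed build step above emitted `dist/version.json` with the sha, build number, timestamp, and channel. A sketch of the equivalent writer inside `deploy_app.py`'s `build_application` step (hypothetical; the real script is not shown):

```python
# Hypothetical sketch of the dist/version.json writer that replaced the
# removed inline heredoc; the real deploy_app.py is not shown.
import json
from pathlib import Path


def write_version_json(dist: str, sha: str, build_number: int,
                       timestamp: int, env: str) -> Path:
    path = Path(dist) / "version.json"
    payload = {
        "sha": sha,              # short git sha, e.g. from rev-parse --short
        "buildNumber": build_number,
        "timestamp": timestamp,  # unix seconds
        "env": env,              # release channel
    }
    path.write_text(json.dumps(payload, indent=2) + "\n", encoding="utf-8")
    return path
```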
- name: Install rclone
run: |
set -euo pipefail
if ! command -v rclone >/dev/null 2>&1; then
curl -fsSL https://rclone.org/install.sh | sudo bash
fi
run: python3 scripts/ci/workflows/deploy_app.py --step install_rclone
- name: Upload assets to S3 static bucket
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
run: |
set -euo pipefail
mkdir -p ~/.config/rclone
cat > ~/.config/rclone/rclone.conf << RCLONEEOF
[ovh]
type = s3
provider = Other
env_auth = true
endpoint = https://s3.us-east-va.io.cloud.ovh.us
acl = public-read
RCLONEEOF
rclone copy fluxer_app/dist/assets ovh:fluxer-static/assets \
--transfers 32 \
--checkers 16 \
--size-only \
--fast-list \
--s3-upload-concurrency 8 \
--s3-chunk-size 16M \
-v
run: python3 scripts/ci/workflows/deploy_app.py --step upload_assets
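The removed shell here wrote an `~/.config/rclone/rclone.conf` and ran a tuned `rclone copy`. A sketch of how `upload_assets` might rebuild the same config and command (hypothetical; the real script is not shown):

```python
# Hypothetical sketch of upload_assets: the config and flags are copied
# from the removed shell; the real deploy_app.py is not shown.
RCLONE_CONF = """[ovh]
type = s3
provider = Other
env_auth = true
endpoint = https://s3.us-east-va.io.cloud.ovh.us
acl = public-read
"""


def rclone_copy_cmd(src: str, dest: str) -> list[str]:
    """Rebuild the removed `rclone copy` invocation as an argv list."""
    return [
        "rclone", "copy", src, dest,
        "--transfers", "32",
        "--checkers", "16",
        "--size-only",
        "--fast-list",
        "--s3-upload-concurrency", "8",
        "--s3-chunk-size", "16M",
        "-v",
    ]
```

With `env_auth = true`, rclone reads `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY` from the environment, which is why the step only passes those two secrets.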
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -205,6 +153,9 @@ jobs:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Set build timestamp
run: python3 scripts/ci/workflows/deploy_app.py --step set_build_timestamp
- name: Build image
uses: docker/build-push-action@v6
with:
@@ -215,28 +166,17 @@ jobs:
platforms: linux/amd64
cache-from: type=gha,scope=${{ env.CACHE_SCOPE }}
cache-to: type=gha,mode=max,scope=${{ env.CACHE_SCOPE }}
build-args: |
BUILD_SHA=${{ env.DEPLOY_SHA }}
BUILD_NUMBER=${{ github.run_number }}
BUILD_TIMESTAMP=${{ env.BUILD_TIMESTAMP }}
RELEASE_CHANNEL=${{ env.RELEASE_CHANNEL }}
env:
DOCKER_BUILD_SUMMARY: false
DOCKER_BUILD_RECORD_UPLOAD: false
- name: Install docker-pussh
run: |
set -euo pipefail
mkdir -p ~/.docker/cli-plugins
curl -fsSL https://raw.githubusercontent.com/psviderski/unregistry/v0.3.1/docker-pussh \
-o ~/.docker/cli-plugins/docker-pussh
chmod +x ~/.docker/cli-plugins/docker-pussh
- name: Set up SSH agent
uses: webfactory/ssh-agent@v0.9.1
with:
ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY_SERVER }}
- name: Add server to known hosts
run: |
set -euo pipefail
mkdir -p ~/.ssh
ssh-keyscan -H ${{ secrets.SERVER_IP }} >> ~/.ssh/known_hosts
run: python3 scripts/ci/workflows/deploy_app.py --step install_docker_pussh
- name: Push image and deploy
env:
@@ -246,108 +186,6 @@ jobs:
SERVICE_NAME: ${{ env.SERVICE_NAME }}
COMPOSE_STACK: ${{ env.SERVICE_NAME }}
SENTRY_DSN: https://59ced0e2666ab83dd1ddb056cdd22d1b@o4510149383094272.ingest.us.sentry.io/4510205815291904
SENTRY_PROXY_PATH: ${{ env.SENTRY_PROXY_PATH }}
SENTRY_REPORT_HOST: ${{ env.SENTRY_REPORT_HOST }}
CADDY_APP_DOMAIN: ${{ env.CADDY_APP_DOMAIN }}
SENTRY_CADDY_DOMAIN: ${{ env.SENTRY_CADDY_DOMAIN }}
API_TARGET: ${{ env.API_TARGET }}
RELEASE_CHANNEL: ${{ env.RELEASE_CHANNEL }}
APP_REPLICAS: ${{ env.APP_REPLICAS }}
run: |
set -euo pipefail
docker pussh "${IMAGE_TAG}" "${SERVER}"
ssh "${SERVER}" \
"IMAGE_TAG=${IMAGE_TAG} SERVICE_NAME=${SERVICE_NAME} COMPOSE_STACK=${COMPOSE_STACK} SENTRY_DSN=${SENTRY_DSN} SENTRY_PROXY_PATH=${SENTRY_PROXY_PATH} SENTRY_REPORT_HOST=${SENTRY_REPORT_HOST} CADDY_APP_DOMAIN=${CADDY_APP_DOMAIN} SENTRY_CADDY_DOMAIN=${SENTRY_CADDY_DOMAIN} API_TARGET=${API_TARGET} RELEASE_CHANNEL=${RELEASE_CHANNEL} APP_REPLICAS=${APP_REPLICAS} bash" << 'EOF'
set -euo pipefail
sudo mkdir -p "/opt/${SERVICE_NAME}"
sudo chown -R "${USER}:${USER}" "/opt/${SERVICE_NAME}"
cd "/opt/${SERVICE_NAME}"
cat > compose.yaml << COMPOSEEOF
x-deploy-base: &deploy_base
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
update_config:
parallelism: 1
delay: 10s
order: start-first
rollback_config:
parallelism: 1
delay: 10s
x-common-caddy-headers: &common_caddy_headers
caddy.header.Strict-Transport-Security: "max-age=31536000; includeSubDomains; preload"
caddy.header.X-Xss-Protection: "1; mode=block"
caddy.header.X-Content-Type-Options: "nosniff"
caddy.header.Referrer-Policy: "strict-origin-when-cross-origin"
caddy.header.X-Frame-Options: "DENY"
caddy.header.Expect-Ct: "max-age=86400, report-uri=\\"${SENTRY_REPORT_HOST}/api/4510205815291904/security/?sentry_key=59ced0e2666ab83dd1ddb056cdd22d1b\\""
caddy.header.Cache-Control: "no-store, no-cache, must-revalidate"
caddy.header.Pragma: "no-cache"
caddy.header.Expires: "0"
x-env-base: &env_base
PORT: 8080
RELEASE_CHANNEL: ${RELEASE_CHANNEL}
FLUXER_METRICS_HOST: fluxer-metrics_app:8080
SENTRY_DSN: ${SENTRY_DSN}
SENTRY_REPORT_HOST: ${SENTRY_REPORT_HOST}
x-healthcheck: &healthcheck
test: ['CMD', 'curl', '-f', 'http://localhost:8080/_health']
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
services:
app:
image: ${IMAGE_TAG}
deploy:
<<: *deploy_base
replicas: ${APP_REPLICAS}
labels:
<<: *common_caddy_headers
caddy: ${CADDY_APP_DOMAIN}
caddy.handle_path_0: /api*
caddy.handle_path_0.reverse_proxy: "http://${API_TARGET}:8080"
caddy.reverse_proxy: "{{upstreams 8080}}"
environment:
<<: *env_base
SENTRY_PROXY_PATH: ${SENTRY_PROXY_PATH}
networks: [fluxer-shared]
healthcheck: *healthcheck
sentry:
image: ${IMAGE_TAG}
deploy:
<<: *deploy_base
replicas: 1
labels:
<<: *common_caddy_headers
caddy: ${SENTRY_CADDY_DOMAIN}
caddy.reverse_proxy: "{{upstreams 8080}}"
environment:
<<: *env_base
SENTRY_PROXY_PATH: /
networks: [fluxer-shared]
healthcheck: *healthcheck
networks:
fluxer-shared:
external: true
COMPOSEEOF
docker stack deploy \
--with-registry-auth \
--detach=false \
--resolve-image never \
-c compose.yaml \
"${COMPOSE_STACK}"
EOF
run: python3 scripts/ci/workflows/deploy_app.py --step push_and_deploy


@@ -1,193 +0,0 @@
name: deploy docs
on:
push:
branches:
- main
- canary
paths:
- fluxer_docs/**
- .github/workflows/deploy-docs.yaml
workflow_dispatch:
inputs:
channel:
type: choice
options:
- stable
- canary
default: stable
description: Channel to deploy
ref:
type: string
required: false
default: ''
description: Optional git ref to deploy (defaults to main/canary based on channel)
concurrency:
group: deploy-fluxer-docs-${{ github.event_name == 'workflow_dispatch' && inputs.channel || (github.ref_name == 'canary' && 'canary') || 'stable' }}
cancel-in-progress: true
permissions:
contents: read
jobs:
channel-vars:
uses: ./.github/workflows/channel-vars.yaml
with:
github_event_name: ${{ github.event_name }}
github_ref_name: ${{ github.ref_name }}
github_ref: ${{ github.ref }}
workflow_dispatch_channel: ${{ github.event_name == 'workflow_dispatch' && inputs.channel || '' }}
workflow_dispatch_ref: ${{ github.event_name == 'workflow_dispatch' && inputs.ref || '' }}
deploy:
name: Deploy docs
needs: channel-vars
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 10
env:
CHANNEL: ${{ needs.channel-vars.outputs.channel }}
IS_CANARY: ${{ needs.channel-vars.outputs.is_canary }}
SOURCE_REF: ${{ needs.channel-vars.outputs.source_ref }}
STACK_SUFFIX: ${{ needs.channel-vars.outputs.stack_suffix }}
STACK: ${{ format('fluxer-docs{0}', needs.channel-vars.outputs.stack_suffix) }}
CACHE_SCOPE: ${{ format('deploy-fluxer-docs{0}', needs.channel-vars.outputs.stack_suffix) }}
CADDY_DOMAIN: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'docs.canary.fluxer.app' || 'docs.fluxer.app' }}
steps:
- uses: actions/checkout@v6
with:
ref: ${{ env.SOURCE_REF }}
fetch-depth: 0
- name: Record deploy commit
run: |
set -euo pipefail
sha=$(git rev-parse HEAD)
echo "Deploying commit ${sha}"
printf 'DEPLOY_SHA=%s\n' "$sha" >> "$GITHUB_ENV"
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build image
uses: docker/build-push-action@v6
with:
context: fluxer_docs
file: fluxer_docs/Dockerfile
tags: ${{ env.STACK }}:${{ env.DEPLOY_SHA }}
load: true
platforms: linux/amd64
cache-from: type=gha,scope=${{ env.CACHE_SCOPE }}
cache-to: type=gha,mode=max,scope=${{ env.CACHE_SCOPE }}
env:
DOCKER_BUILD_SUMMARY: false
DOCKER_BUILD_RECORD_UPLOAD: false
- name: Install docker-pussh
run: |
set -euo pipefail
mkdir -p ~/.docker/cli-plugins
curl -fsSL https://raw.githubusercontent.com/psviderski/unregistry/v0.3.1/docker-pussh \
-o ~/.docker/cli-plugins/docker-pussh
chmod +x ~/.docker/cli-plugins/docker-pussh
- name: Set up SSH agent
uses: webfactory/ssh-agent@v0.9.1
with:
ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY_SERVER }}
- name: Add server to known hosts
run: |
set -euo pipefail
mkdir -p ~/.ssh
ssh-keyscan -H ${{ secrets.SERVER_IP }} >> ~/.ssh/known_hosts
- name: Push image and deploy
env:
IMAGE_TAG: ${{ env.STACK }}:${{ env.DEPLOY_SHA }}
SERVER: ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_IP }}
STACK: ${{ env.STACK }}
CADDY_DOMAIN: ${{ env.CADDY_DOMAIN }}
IS_CANARY: ${{ env.IS_CANARY }}
run: |
set -euo pipefail
docker pussh "${IMAGE_TAG}" "${SERVER}"
ssh "${SERVER}" "IMAGE_TAG=${IMAGE_TAG} STACK=${STACK} CADDY_DOMAIN=${CADDY_DOMAIN} IS_CANARY=${IS_CANARY} bash" << 'EOF'
set -euo pipefail
sudo mkdir -p "/opt/${STACK}"
sudo chown -R "${USER}:${USER}" "/opt/${STACK}"
cd "/opt/${STACK}"
cat > compose.yaml << COMPOSEEOF
x-deploy-base: &deploy_base
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
update_config:
parallelism: 1
delay: 10s
order: start-first
rollback_config:
parallelism: 1
delay: 10s
services:
app:
image: ${IMAGE_TAG}
env_file:
- /etc/fluxer/fluxer.env
environment:
- NODE_ENV=production
deploy:
<<: *deploy_base
replicas: 2
labels:
caddy: "${CADDY_DOMAIN}"
caddy.reverse_proxy: "{{upstreams 3000}}"
COMPOSEEOF
if [[ "${IS_CANARY}" == "true" ]]; then
cat >> compose.yaml << 'COMPOSEEOF'
caddy.header.X-Robots-Tag: "noindex, nofollow, nosnippet, noimageindex"
COMPOSEEOF
fi
cat >> compose.yaml << 'COMPOSEEOF'
caddy.header.Strict-Transport-Security: "max-age=31536000; includeSubDomains; preload"
caddy.header.X-Xss-Protection: "1; mode=block"
caddy.header.X-Content-Type-Options: "nosniff"
caddy.header.Referrer-Policy: "strict-origin-when-cross-origin"
caddy.header.X-Frame-Options: "DENY"
networks:
- fluxer-shared
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:3000']
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
networks:
fluxer-shared:
external: true
COMPOSEEOF
docker stack deploy \
--with-registry-auth \
--detach=false \
--resolve-image never \
-c compose.yaml \
"${STACK}"
EOF
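The deploy step above writes compose.yaml in three heredoc passes so the canary-only `X-Robots-Tag` label can be spliced between the common labels. A minimal Python sketch of that conditional assembly (hypothetical helper, not part of the repo):

```python
def build_compose(image_tag: str, caddy_domain: str, is_canary: bool) -> str:
    """Assemble the swarm compose file the way the heredoc passes do:
    common head, optional canary-only label, then the shared tail."""
    head = f"""services:
  app:
    image: {image_tag}
    deploy:
      replicas: 2
      labels:
        caddy: "{caddy_domain}"
        caddy.reverse_proxy: "{{{{upstreams 3000}}}}"
"""
    # Only canary deployments ask crawlers to stay away.
    canary_only = '        caddy.header.X-Robots-Tag: "noindex, nofollow, nosnippet, noimageindex"\n'
    tail = '        caddy.header.X-Frame-Options: "DENY"\n'
    return head + (canary_only if is_canary else "") + tail
```

Note the workflow's own distinction: the first heredoc uses an unquoted delimiter (`<< COMPOSEEOF`) so `${IMAGE_TAG}` expands on the server, while the appended fragments use the quoted form (`<< 'COMPOSEEOF'`) because they contain no variables to expand.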

View File

@@ -2,6 +2,12 @@ name: deploy gateway
on:
workflow_dispatch:
inputs:
ref:
type: string
required: false
default: ''
description: Optional git ref (defaults to the triggering branch)
push:
branches:
- canary
@@ -18,13 +24,16 @@ permissions:
jobs:
deploy:
name: Deploy (hot patch)
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 10
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
steps:
- uses: actions/checkout@v6
with:
sparse-checkout: fluxer_gateway
ref: ${{ inputs.ref || '' }}
sparse-checkout: |
fluxer_gateway
scripts/ci
- name: Set up Erlang
uses: erlef/setup-beam@v1
@@ -33,10 +42,7 @@ jobs:
rebar3-version: '3.24.0'
- name: Compile
working-directory: fluxer_gateway
run: |
set -euo pipefail
rebar3 as prod compile
run: python3 scripts/ci/workflows/deploy_gateway.py --step compile
- name: Set up SSH
uses: webfactory/ssh-agent@v0.9.1
@@ -44,234 +50,13 @@ jobs:
ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY_SERVER }}
- name: Add server to known hosts
run: |
set -euo pipefail
mkdir -p ~/.ssh
ssh-keyscan -H ${{ secrets.SERVER_IP }} >> ~/.ssh/known_hosts
run: python3 scripts/ci/workflows/deploy_gateway.py --step add_known_hosts --server-ip ${{ secrets.SERVER_IP }}
- name: Record deploy commit
run: python3 scripts/ci/workflows/deploy_gateway.py --step record_deploy_commit
- name: Deploy
env:
SERVER: ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_IP }}
GATEWAY_ADMIN_SECRET: ${{ secrets.GATEWAY_ADMIN_SECRET }}
run: |
set -euo pipefail
CONTAINER_ID="$(ssh "${SERVER}" "docker ps -q --filter label=com.docker.swarm.service.name=fluxer-gateway_app | head -1")"
if [ -z "${CONTAINER_ID}" ]; then
echo "::error::No running container found for service fluxer-gateway_app"
ssh "${SERVER}" "docker ps --filter 'name=fluxer-gateway_app' --format '{{.ID}} {{.Names}} {{.Status}}'" || true
exit 1
fi
echo "Container: ${CONTAINER_ID}"
LOCAL_MD5_LINES="$(
erl -noshell -eval '
Files = filelib:wildcard("fluxer_gateway/_build/prod/lib/fluxer_gateway/ebin/*.beam"),
lists:foreach(
fun(F) ->
{ok, {M, Md5}} = beam_lib:md5(F),
Hex = binary:encode_hex(Md5, lowercase),
io:format("~s ~s ~s~n", [atom_to_list(M), binary_to_list(Hex), F])
end,
Files
),
halt().'
)"
REMOTE_MD5_LINES="$(
ssh "${SERVER}" "docker exec ${CONTAINER_ID} /opt/fluxer_gateway/bin/fluxer_gateway eval '
Mods = hot_reload:get_loaded_modules(),
lists:foreach(
fun(M) ->
case hot_reload:get_module_info(M) of
{ok, Info} ->
V = maps:get(loaded_md5, Info),
S = case V of
null -> \"null\";
B when is_binary(B) -> binary_to_list(B)
end,
io:format(\"~s ~s~n\", [atom_to_list(M), S]);
_ ->
ok
end
end,
Mods
),
ok.
' " | tr -d '\r'
)"
LOCAL_MD5_FILE="$(mktemp)"
REMOTE_MD5_FILE="$(mktemp)"
CHANGED_FILE_LIST="$(mktemp)"
CHANGED_MAIN_LIST="$(mktemp)"
CHANGED_SELF_LIST="$(mktemp)"
RELOAD_RESULT_MAIN="$(mktemp)"
RELOAD_RESULT_SELF="$(mktemp)"
trap 'rm -f "${LOCAL_MD5_FILE}" "${REMOTE_MD5_FILE}" "${CHANGED_FILE_LIST}" "${CHANGED_MAIN_LIST}" "${CHANGED_SELF_LIST}" "${RELOAD_RESULT_MAIN}" "${RELOAD_RESULT_SELF}"' EXIT
printf '%s' "${LOCAL_MD5_LINES}" > "${LOCAL_MD5_FILE}"
printf '%s' "${REMOTE_MD5_LINES}" > "${REMOTE_MD5_FILE}"
python3 - <<'PY' "${LOCAL_MD5_FILE}" "${REMOTE_MD5_FILE}" "${CHANGED_FILE_LIST}"
import sys
local_path, remote_path, out_path = sys.argv[1:4]
remote = {}
with open(remote_path, "r", encoding="utf-8") as f:
for line in f:
line = line.strip()
if not line:
continue
parts = line.split(None, 1)
if len(parts) != 2:
continue
mod, md5 = parts
remote[mod] = md5.strip()
changed_paths = []
with open(local_path, "r", encoding="utf-8") as f:
for line in f:
line = line.strip()
if not line:
continue
parts = line.split(" ", 2)
if len(parts) != 3:
continue
mod, md5, path = parts
r = remote.get(mod)
if r is None or r == "null" or r != md5:
changed_paths.append(path)
with open(out_path, "w", encoding="utf-8") as f:
for p in changed_paths:
f.write(p + "\n")
PY
mapfile -t CHANGED_FILES < "${CHANGED_FILE_LIST}"
if [ "${#CHANGED_FILES[@]}" -eq 0 ]; then
echo "No BEAM changes detected, nothing to hot-reload."
exit 0
fi
echo "Changed modules count: ${#CHANGED_FILES[@]}"
while IFS= read -r p; do
[ -n "${p}" ] || continue
m="$(basename "${p}")"
m="${m%.beam}"
if [ "${m}" = "hot_reload" ] || [ "${m}" = "hot_reload_handler" ]; then
printf '%s\n' "${p}" >> "${CHANGED_SELF_LIST}"
else
printf '%s\n' "${p}" >> "${CHANGED_MAIN_LIST}"
fi
done < "${CHANGED_FILE_LIST}"
build_json() {
python3 - "$1" <<'PY'
import sys, json, base64, os
list_path = sys.argv[1]
beams = []
with open(list_path, "r", encoding="utf-8") as f:
for path in f:
path = path.strip()
if not path:
continue
mod = os.path.basename(path)
if not mod.endswith(".beam"):
continue
mod = mod[:-5]
with open(path, "rb") as bf:
b = bf.read()
beams.append({"module": mod, "beam_b64": base64.b64encode(b).decode("ascii")})
print(json.dumps({"beams": beams, "purge": "soft"}, separators=(",", ":")))
PY
}
strict_verify() {
python3 -c '
import json, sys
raw = sys.stdin.read()
if not raw.strip():
print("::error::Empty reload response")
raise SystemExit(1)
try:
data = json.loads(raw)
except Exception as e:
print(f"::error::Invalid JSON reload response: {e}")
raise SystemExit(1)
results = data.get("results", [])
if not isinstance(results, list):
print("::error::Reload response missing results array")
raise SystemExit(1)
bad = [
r for r in results
if r.get("status") != "ok"
or r.get("verified") is not True
or r.get("purged_old_code") is not True
or (r.get("lingering_count") or 0) != 0
]
if bad:
print("::error::Hot reload verification failed")
print(json.dumps(bad, indent=2))
raise SystemExit(1)
print(f"Verified {len(results)} modules")
'
}
self_verify() {
python3 -c '
import json, sys
raw = sys.stdin.read()
if not raw.strip():
print("::error::Empty reload response")
raise SystemExit(1)
try:
data = json.loads(raw)
except Exception as e:
print(f"::error::Invalid JSON reload response: {e}")
raise SystemExit(1)
results = data.get("results", [])
if not isinstance(results, list):
print("::error::Reload response missing results array")
raise SystemExit(1)
bad = [
r for r in results
if r.get("status") != "ok"
or r.get("verified") is not True
]
if bad:
print("::error::Hot reload verification failed")
print(json.dumps(bad, indent=2))
raise SystemExit(1)
warns = [
r for r in results
if r.get("purged_old_code") is not True
or (r.get("lingering_count") or 0) != 0
]
if warns:
print("::warning::Self-reload modules may linger until request completes")
print(json.dumps(warns, indent=2))
print(f"Verified {len(results)} self modules")
'
}
if [ -s "${CHANGED_MAIN_LIST}" ]; then
if ! build_json "${CHANGED_MAIN_LIST}" | ssh "${SERVER}" "docker exec -i ${CONTAINER_ID} curl -fsS -X POST -H 'Authorization: Bearer ${GATEWAY_ADMIN_SECRET}' -H 'Content-Type: application/json' --data @- http://localhost:8081/_admin/reload" | tee "${RELOAD_RESULT_MAIN}" | strict_verify; then
echo "::group::Hot reload response (main)"
cat "${RELOAD_RESULT_MAIN}" || true
echo "::endgroup::"
exit 1
fi
fi
if [ -s "${CHANGED_SELF_LIST}" ]; then
if ! build_json "${CHANGED_SELF_LIST}" | ssh "${SERVER}" "docker exec -i ${CONTAINER_ID} curl -fsS -X POST -H 'Authorization: Bearer ${GATEWAY_ADMIN_SECRET}' -H 'Content-Type: application/json' --data @- http://localhost:8081/_admin/reload" | tee "${RELOAD_RESULT_SELF}" | self_verify; then
echo "::group::Hot reload response (self)"
cat "${RELOAD_RESULT_SELF}" || true
echo "::endgroup::"
exit 1
fi
fi
run: python3 scripts/ci/workflows/deploy_gateway.py --step deploy
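The inline script removed above compared `beam_lib:md5/1` hashes of the freshly compiled BEAM files against the `loaded_md5` values reported by the running node, treating a module as changed when it was missing remotely, reported as `"null"`, or carried a different hash. The core of that diff, sketched as a standalone function (a reimplementation for illustration, not the repo's `deploy_gateway.py`):

```python
def changed_modules(local: dict[str, str], remote: dict[str, str]) -> list[str]:
    """Return modules whose local BEAM md5 is absent, unknown ('null'),
    or different on the running node -- the hot-reload candidate set."""
    changed = []
    for mod, md5 in local.items():
        r = remote.get(mod)
        if r is None or r == "null" or r != md5:
            changed.append(mod)
    return changed
```

The script then split this set into "self" modules (`hot_reload`, `hot_reload_handler`) and everything else, since reloading the reloader itself only gets the relaxed verification.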

View File

@@ -16,12 +16,12 @@ on:
- stable
- canary
default: stable
description: Channel to deploy
description: Release channel to deploy
ref:
type: string
required: false
default: ''
description: Optional git ref to deploy (defaults to main/canary based on channel)
description: Optional git ref (defaults to the triggering branch)
concurrency:
group: deploy-fluxer-marketing-${{ github.event_name == 'workflow_dispatch' && inputs.channel || (github.ref_name == 'canary' && 'canary') || 'stable' }}
@@ -36,46 +36,35 @@ jobs:
with:
github_event_name: ${{ github.event_name }}
github_ref_name: ${{ github.ref_name }}
github_ref: ${{ github.ref }}
workflow_dispatch_channel: ${{ github.event_name == 'workflow_dispatch' && inputs.channel || '' }}
workflow_dispatch_ref: ${{ github.event_name == 'workflow_dispatch' && inputs.ref || '' }}
deploy:
name: Deploy marketing
needs: channel-vars
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 10
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
env:
CHANNEL: ${{ needs.channel-vars.outputs.channel }}
IS_CANARY: ${{ needs.channel-vars.outputs.is_canary }}
SOURCE_REF: ${{ needs.channel-vars.outputs.source_ref }}
STACK_SUFFIX: ${{ needs.channel-vars.outputs.stack_suffix }}
STACK: ${{ format('fluxer-marketing{0}', needs.channel-vars.outputs.stack_suffix) }}
IMAGE_NAME: ${{ format('fluxer-marketing{0}', needs.channel-vars.outputs.stack_suffix) }}
CACHE_SCOPE: ${{ format('deploy-fluxer-marketing{0}', needs.channel-vars.outputs.stack_suffix) }}
APP_REPLICAS: ${{ needs.channel-vars.outputs.is_canary == 'true' && 1 || 2 }}
API_PUBLIC_ENDPOINT: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'https://api.canary.fluxer.app' || 'https://api.fluxer.app' }}
API_HOST: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'fluxer-api-canary_app:8080' || 'fluxer-api_app:8080' }}
APP_ENDPOINT: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'https://web.canary.fluxer.app' || 'https://web.fluxer.app' }}
MARKETING_ENDPOINT: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'https://canary.fluxer.app' || 'https://fluxer.app' }}
CADDY_DOMAIN: ${{ needs.channel-vars.outputs.is_canary == 'true' && 'canary.fluxer.app' || 'fluxer.app' }}
RELEASE_CHANNEL: ${{ needs.channel-vars.outputs.channel }}
steps:
- uses: actions/checkout@v6
with:
ref: ${{ env.SOURCE_REF }}
ref: ${{ inputs.ref || '' }}
fetch-depth: 0
- name: Record deploy commit
run: |-
set -euo pipefail
sha=$(git rev-parse HEAD)
echo "Deploying commit ${sha}"
printf 'DEPLOY_SHA=%s\n' "$sha" >> "$GITHUB_ENV"
run: python3 scripts/ci/workflows/deploy_marketing.py --step record_deploy_commit
- name: Set build timestamp
run: echo "BUILD_TIMESTAMP=$(date -u +%s)" >> "$GITHUB_ENV"
run: python3 scripts/ci/workflows/deploy_marketing.py --step set_build_timestamp
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -89,7 +78,7 @@ jobs:
- name: Build image
uses: docker/build-push-action@v6
with:
context: fluxer_marketing
context: .
file: fluxer_marketing/Dockerfile
tags: ${{ env.IMAGE_NAME }}:${{ env.DEPLOY_SHA }}
load: true
@@ -97,18 +86,16 @@ jobs:
cache-from: type=gha,scope=${{ env.CACHE_SCOPE }}
cache-to: type=gha,mode=max,scope=${{ env.CACHE_SCOPE }}
build-args: |
BUILD_SHA=${{ env.DEPLOY_SHA }}
BUILD_NUMBER=${{ github.run_number }}
BUILD_TIMESTAMP=${{ env.BUILD_TIMESTAMP }}
RELEASE_CHANNEL=${{ env.RELEASE_CHANNEL }}
env:
DOCKER_BUILD_SUMMARY: false
DOCKER_BUILD_RECORD_UPLOAD: false
- name: Install docker-pussh
run: |-
set -euo pipefail
mkdir -p ~/.docker/cli-plugins
curl -fsSL https://raw.githubusercontent.com/psviderski/unregistry/v0.3.1/docker-pussh \
-o ~/.docker/cli-plugins/docker-pussh
chmod +x ~/.docker/cli-plugins/docker-pussh
run: python3 scripts/ci/workflows/deploy_marketing.py --step install_docker_pussh
- name: Set up SSH agent
uses: webfactory/ssh-agent@v0.9.1
@@ -116,10 +103,7 @@ jobs:
ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY_SERVER }}
- name: Add server to known hosts
run: |-
set -euo pipefail
mkdir -p ~/.ssh
ssh-keyscan -H ${{ secrets.SERVER_IP }} >> ~/.ssh/known_hosts
run: python3 scripts/ci/workflows/deploy_marketing.py --step add_known_hosts --server-ip ${{ secrets.SERVER_IP }}
- name: Push image and deploy
env:
@@ -127,114 +111,7 @@ jobs:
SERVER: ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_IP }}
STACK: ${{ env.STACK }}
IS_CANARY: ${{ env.IS_CANARY }}
API_PUBLIC_ENDPOINT: ${{ env.API_PUBLIC_ENDPOINT }}
API_HOST: ${{ env.API_HOST }}
APP_ENDPOINT: ${{ env.APP_ENDPOINT }}
MARKETING_ENDPOINT: ${{ env.MARKETING_ENDPOINT }}
CADDY_DOMAIN: ${{ env.CADDY_DOMAIN }}
RELEASE_CHANNEL: ${{ env.RELEASE_CHANNEL }}
APP_REPLICAS: ${{ env.APP_REPLICAS }}
run: |-
set -euo pipefail
docker pussh "${IMAGE_TAG}" "${SERVER}"
ssh "${SERVER}" \
"IMAGE_TAG=${IMAGE_TAG} STACK=${STACK} IS_CANARY=${IS_CANARY} API_PUBLIC_ENDPOINT=${API_PUBLIC_ENDPOINT} API_HOST=${API_HOST} APP_ENDPOINT=${APP_ENDPOINT} MARKETING_ENDPOINT=${MARKETING_ENDPOINT} CADDY_DOMAIN=${CADDY_DOMAIN} RELEASE_CHANNEL=${RELEASE_CHANNEL} APP_REPLICAS=${APP_REPLICAS} bash" << 'EOF'
set -euo pipefail
sudo mkdir -p "/opt/${STACK}"
sudo chown -R "${USER}:${USER}" "/opt/${STACK}"
cd "/opt/${STACK}"
cat > compose.yaml << COMPOSEEOF
services:
app:
image: ${IMAGE_TAG}
env_file:
- /etc/fluxer/fluxer.env
environment:
- FLUXER_API_PUBLIC_ENDPOINT=${API_PUBLIC_ENDPOINT}
- FLUXER_API_HOST=${API_HOST}
- FLUXER_APP_ENDPOINT=${APP_ENDPOINT}
- FLUXER_CDN_ENDPOINT=https://fluxerstatic.com
- FLUXER_MARKETING_ENDPOINT=${MARKETING_ENDPOINT}
- FLUXER_MARKETING_PORT=8080
- FLUXER_PATH_MARKETING=/
- RELEASE_CHANNEL=${RELEASE_CHANNEL}
- FLUXER_METRICS_HOST=fluxer-metrics_app:8080
deploy:
replicas: ${APP_REPLICAS}
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
update_config:
parallelism: 1
delay: 10s
order: start-first
rollback_config:
parallelism: 1
delay: 10s
labels:
caddy: "${CADDY_DOMAIN}"
caddy.reverse_proxy: "{{upstreams 8080}}"
caddy.header.Strict-Transport-Security: "max-age=31536000; includeSubDomains; preload"
caddy.header.X-Xss-Protection: "1; mode=block"
caddy.header.X-Content-Type-Options: "nosniff"
caddy.header.Referrer-Policy: "strict-origin-when-cross-origin"
caddy.header.X-Frame-Options: "DENY"
COMPOSEEOF
if [[ "${IS_CANARY}" == "true" ]]; then
cat >> compose.yaml << 'COMPOSEEOF'
caddy.header.X-Robots-Tag: "noindex, nofollow, nosnippet, noimageindex"
caddy.@channels.path: "/channels /channels/*"
caddy.redir: "@channels https://web.canary.fluxer.app{uri}"
COMPOSEEOF
else
cat >> compose.yaml << 'COMPOSEEOF'
caddy.redir_0: "/channels/* https://web.fluxer.app{uri}"
caddy.redir_1: "/channels https://web.fluxer.app{uri}"
caddy.redir_2: "/delete-my-account https://fluxer.app/help/articles/1445724566704881664 302"
caddy.redir_3: "/delete-my-data https://fluxer.app/help/articles/1445730947679911936 302"
caddy.redir_4: "/export-my-data https://fluxer.app/help/articles/1445731738851475456 302"
caddy.redir_5: "/bugs https://fluxer.app/help/articles/1447264362996695040 302"
caddy_1: "www.fluxer.app"
caddy_1.redir: "https://fluxer.app{uri}"
caddy_3: "fluxer.gg"
caddy_3.redir: "https://web.fluxer.app/invite{uri}"
caddy_4: "fluxer.gift"
caddy_4.redir: "https://web.fluxer.app/gift{uri}"
caddy_5: "fluxerapp.com"
caddy_5.redir: "https://fluxer.app{uri}"
caddy_6: "www.fluxerapp.com"
caddy_6.redir: "https://fluxer.app{uri}"
caddy_7: "fluxer.dev"
caddy_7.redir: "https://docs.fluxer.app{uri}"
caddy_8: "www.fluxer.dev"
caddy_8.redir: "https://docs.fluxer.app{uri}"
COMPOSEEOF
fi
cat >> compose.yaml << 'COMPOSEEOF'
networks:
- fluxer-shared
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:8080/']
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
networks:
fluxer-shared:
external: true
COMPOSEEOF
docker stack deploy \
--with-registry-auth \
--detach=false \
--resolve-image never \
-c compose.yaml \
"${STACK}"
EOF
run: python3 scripts/ci/workflows/deploy_marketing.py --step push_and_deploy
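The marketing workflow derives every channel-dependent value from `is_canary` via `${{ cond && a || b }}` expressions. That idiom silently picks `b` whenever `a` is falsy (`0`, `''`), which is why none of the `a` values in the env block are. The mapping can be sketched as a plain ternary table (illustrative helper, not the repo's code):

```python
def channel_vars(is_canary: bool) -> dict:
    """Mirror the workflow's `cond && a || b` ternaries as real ternaries."""
    c = is_canary
    return {
        "APP_REPLICAS": 1 if c else 2,
        "API_PUBLIC_ENDPOINT": "https://api.canary.fluxer.app" if c else "https://api.fluxer.app",
        "APP_ENDPOINT": "https://web.canary.fluxer.app" if c else "https://web.fluxer.app",
        "CADDY_DOMAIN": "canary.fluxer.app" if c else "fluxer.app",
    }
```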

View File

@@ -7,7 +7,13 @@ on:
paths:
- fluxer_media_proxy/**
- .github/workflows/deploy-media-proxy.yaml
workflow_dispatch: {}
workflow_dispatch:
inputs:
ref:
type: string
required: false
default: ''
description: Optional git ref (defaults to the triggering branch)
concurrency:
group: deploy-fluxer-media-proxy
@@ -25,17 +31,17 @@ env:
jobs:
deploy:
name: Deploy media proxy
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 10
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
env:
RELEASE_CHANNEL: stable
steps:
- uses: actions/checkout@v6
with:
ref: ${{ inputs.ref || '' }}
- name: Record deploy commit
run: |
set -euo pipefail
sha=$(git rev-parse HEAD)
echo "Deploying commit ${sha}"
printf 'DEPLOY_SHA=%s\n' "$sha" >> "$GITHUB_ENV"
run: python3 scripts/ci/workflows/deploy_media_proxy.py --step record_deploy_commit
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -46,27 +52,30 @@ jobs:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Set build timestamp
run: python3 scripts/ci/workflows/deploy_media_proxy.py --step set_build_timestamp
- name: Build image
uses: docker/build-push-action@v6
with:
context: ${{ env.CONTEXT_DIR }}
context: .
file: ${{ env.CONTEXT_DIR }}/Dockerfile
tags: ${{ env.IMAGE_NAME }}:${{ env.DEPLOY_SHA }}
load: true
platforms: linux/amd64
cache-from: type=gha,scope=${{ env.SERVICE_NAME }}
cache-to: type=gha,mode=max,scope=${{ env.SERVICE_NAME }}
build-args: |
BUILD_SHA=${{ env.DEPLOY_SHA }}
BUILD_NUMBER=${{ github.run_number }}
BUILD_TIMESTAMP=${{ env.BUILD_TIMESTAMP }}
RELEASE_CHANNEL=${{ env.RELEASE_CHANNEL }}
env:
DOCKER_BUILD_SUMMARY: false
DOCKER_BUILD_RECORD_UPLOAD: false
- name: Install docker-pussh
run: |
set -euo pipefail
mkdir -p ~/.docker/cli-plugins
curl -fsSL https://raw.githubusercontent.com/psviderski/unregistry/v0.3.1/docker-pussh \
-o ~/.docker/cli-plugins/docker-pussh
chmod +x ~/.docker/cli-plugins/docker-pussh
run: python3 scripts/ci/workflows/deploy_media_proxy.py --step install_docker_pussh
- name: Set up SSH agent
uses: webfactory/ssh-agent@v0.9.1
@@ -74,77 +83,10 @@ jobs:
ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY_SERVER }}
- name: Add server to known hosts
run: |
set -euo pipefail
mkdir -p ~/.ssh
ssh-keyscan -H ${{ secrets.SERVER_IP }} >> ~/.ssh/known_hosts
run: python3 scripts/ci/workflows/deploy_media_proxy.py --step add_known_hosts --server-ip ${{ secrets.SERVER_IP }}
- name: Push image and deploy
env:
IMAGE_TAG: ${{ env.IMAGE_NAME }}:${{ env.DEPLOY_SHA }}
SERVER: ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_IP }}
run: |
set -euo pipefail
docker pussh "${IMAGE_TAG}" "${SERVER}"
ssh "${SERVER}" "IMAGE_TAG=${IMAGE_TAG} SERVICE_NAME=${SERVICE_NAME} COMPOSE_STACK=${COMPOSE_STACK} bash" << 'EOF'
set -euo pipefail
sudo mkdir -p "/opt/${SERVICE_NAME}"
sudo chown -R "${USER}:${USER}" "/opt/${SERVICE_NAME}"
cd "/opt/${SERVICE_NAME}"
cat > compose.yaml << COMPOSEEOF
services:
app:
image: ${IMAGE_TAG}
command: ['pnpm', 'start']
env_file:
- /etc/fluxer/fluxer.env
environment:
- NODE_ENV=production
- FLUXER_MEDIA_PROXY_PORT=8080
- FLUXER_MEDIA_PROXY_REQUIRE_CLOUDFLARE=true
- SENTRY_DSN=https://2670068cd12b6a62f3a30a7f0055f0f1@o4510149383094272.ingest.us.sentry.io/4510205811556352
- AWS_S3_ENDPOINT=https://s3.us-east-va.io.cloud.ovh.us
- AWS_S3_BUCKET_CDN=fluxer
- AWS_S3_BUCKET_UPLOADS=fluxer-uploads
- FLUXER_METRICS_HOST=fluxer-metrics_app:8080
deploy:
replicas: 2
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
update_config:
parallelism: 1
delay: 10s
order: start-first
rollback_config:
parallelism: 1
delay: 10s
labels:
- 'caddy=http://fluxerusercontent.com'
- 'caddy.reverse_proxy={{upstreams 8080}}'
- 'caddy.header.X-Robots-Tag="noindex, nofollow, nosnippet, noimageindex"'
- 'caddy.header.Strict-Transport-Security="max-age=31536000; includeSubDomains; preload"'
- 'caddy.header.X-Xss-Protection="1; mode=block"'
- 'caddy.header.X-Content-Type-Options=nosniff'
- 'caddy.header.Referrer-Policy=strict-origin-when-cross-origin'
- 'caddy.header.X-Frame-Options=DENY'
- 'caddy.header.Expect-Ct="max-age=86400, report-uri=\"https://o4510149383094272.ingest.us.sentry.io/api/4510205811556352/security/?sentry_key=2670068cd12b6a62f3a30a7f0055f0f1\""'
networks:
- fluxer-shared
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:8080/_health']
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
networks:
fluxer-shared:
external: true
COMPOSEEOF
docker stack deploy --with-registry-auth --detach=false --resolve-image never -c compose.yaml "${COMPOSE_STACK}"
EOF
run: python3 scripts/ci/workflows/deploy_media_proxy.py --step push_and_deploy
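Note the media-proxy compose uses the list form of `labels` (`- 'caddy=...'`) while the server compose uses the mapping form; in the list form, values containing spaces or semicolons need an extra inner layer of double quotes so Caddy receives them intact. A small sketch of that rendering rule (hypothetical helper, shown only to illustrate the quoting):

```python
def labels_as_list(labels: dict[str, str]) -> list[str]:
    """Render compose labels in list form; values with spaces or
    semicolons get inner double quotes, as in the diff above."""
    rendered = []
    for key, value in labels.items():
        needs_quotes = any(ch in value for ch in " ;,")
        rendered.append(f'{key}="{value}"' if needs_quotes else f"{key}={value}")
    return rendered
```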

View File

@@ -1,131 +0,0 @@
name: deploy metrics
on:
push:
branches:
- main
paths:
- fluxer_metrics/**
- .github/workflows/deploy-metrics.yaml
workflow_dispatch: {}
concurrency:
group: deploy-fluxer-metrics
cancel-in-progress: true
permissions:
contents: read
jobs:
deploy:
name: Deploy metrics
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 10
steps:
- uses: actions/checkout@v6
- name: Record deploy commit
run: |
set -euo pipefail
sha=$(git rev-parse HEAD)
echo "Deploying commit ${sha}"
printf 'DEPLOY_SHA=%s\n' "$sha" >> "$GITHUB_ENV"
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build image
uses: docker/build-push-action@v6
with:
context: fluxer_metrics
file: fluxer_metrics/Dockerfile
tags: fluxer-metrics:${{ env.DEPLOY_SHA }}
load: true
platforms: linux/amd64
cache-from: type=gha,scope=deploy-fluxer-metrics
cache-to: type=gha,mode=max,scope=deploy-fluxer-metrics
env:
DOCKER_BUILD_SUMMARY: false
DOCKER_BUILD_RECORD_UPLOAD: false
- name: Install docker-pussh
run: |
set -euo pipefail
mkdir -p ~/.docker/cli-plugins
curl -fsSL https://raw.githubusercontent.com/psviderski/unregistry/v0.3.1/docker-pussh \
-o ~/.docker/cli-plugins/docker-pussh
chmod +x ~/.docker/cli-plugins/docker-pussh
- name: Set up SSH agent
uses: webfactory/ssh-agent@v0.9.1
with:
ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY_SERVER }}
- name: Add server to known hosts
run: |
set -euo pipefail
mkdir -p ~/.ssh
ssh-keyscan -H ${{ secrets.SERVER_IP }} >> ~/.ssh/known_hosts
- name: Push image and deploy
env:
IMAGE_TAG: fluxer-metrics:${{ env.DEPLOY_SHA }}
SERVER: ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_IP }}
run: |
set -euo pipefail
docker pussh "${IMAGE_TAG}" "${SERVER}"
ssh "${SERVER}" "IMAGE_TAG=${IMAGE_TAG} bash" << 'EOF'
set -euo pipefail
sudo mkdir -p /opt/fluxer-metrics
sudo chown -R "${USER}:${USER}" /opt/fluxer-metrics
cd /opt/fluxer-metrics
cat > compose.yaml << 'COMPOSEEOF'
services:
app:
image: ${IMAGE_TAG}
env_file:
- /etc/fluxer/fluxer.env
environment:
- METRICS_PORT=8080
- CLICKHOUSE_URL=http://clickhouse:8123
- CLICKHOUSE_DATABASE=fluxer_metrics
- CLICKHOUSE_USER=fluxer
- FLUXER_ADMIN_ENDPOINT=https://admin.fluxer.app
- ANOMALY_DETECTION_ENABLED=true
deploy:
replicas: 1
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
update_config:
parallelism: 1
delay: 10s
order: start-first
rollback_config:
parallelism: 1
delay: 10s
networks:
- fluxer-shared
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:8080/_health']
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
networks:
fluxer-shared:
external: true
COMPOSEEOF
docker stack deploy --with-registry-auth --detach=false --resolve-image never -c compose.yaml fluxer-metrics
EOF
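With the healthcheck settings used throughout these stacks (`interval: 30s`, `retries: 3`, plus a `start_period` grace window), a container that never answers is flagged unhealthy within a bounded window. A rough upper-bound calculation (my approximation, not the engine's exact scheduling):

```python
def unhealthy_after(interval_s: int, retries: int, start_period_s: int,
                    timeout_s: int = 10) -> int:
    """Worst-case seconds until swarm marks a task unhealthy: the grace
    period, then `retries` probes, each taking up to `timeout` and
    spaced `interval` apart."""
    return start_period_s + retries * (interval_s + timeout_s)
```

For the metrics service above (`start_period: 10s`) that bound is 130 seconds; the services using `start_period: 40s` take up to 160.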

View File

@@ -0,0 +1,91 @@
name: deploy relay directory
on:
push:
branches:
- canary
paths:
- fluxer_relay_directory/**
- .github/workflows/deploy-relay-directory.yaml
workflow_dispatch:
inputs:
ref:
type: string
required: false
default: ''
description: Optional git ref (defaults to the triggering branch)
concurrency:
group: deploy-fluxer-relay-directory
cancel-in-progress: true
permissions:
contents: read
jobs:
deploy:
name: Deploy relay directory
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
env:
STACK: fluxer-relay-directory
CACHE_SCOPE: deploy-fluxer-relay-directory
IS_CANARY: true
steps:
- uses: actions/checkout@v6
with:
ref: ${{ inputs.ref || '' }}
fetch-depth: 0
- name: Record deploy commit
run: python3 scripts/ci/workflows/deploy_relay_directory.py --step record_deploy_commit
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Set build timestamp
run: python3 scripts/ci/workflows/deploy_relay_directory.py --step set_build_timestamp
- name: Build image
uses: docker/build-push-action@v6
with:
context: .
file: fluxer_relay_directory/Dockerfile
tags: |
${{ env.STACK }}:${{ env.DEPLOY_SHA }}
load: true
platforms: linux/amd64
cache-from: type=gha,scope=${{ env.CACHE_SCOPE }}
cache-to: type=gha,mode=max,scope=${{ env.CACHE_SCOPE }}
build-args: |
BUILD_SHA=${{ env.DEPLOY_SHA }}
BUILD_NUMBER=${{ github.run_number }}
BUILD_TIMESTAMP=${{ env.BUILD_TIMESTAMP }}
RELEASE_CHANNEL=canary
env:
DOCKER_BUILD_SUMMARY: false
DOCKER_BUILD_RECORD_UPLOAD: false
- name: Install docker-pussh
run: python3 scripts/ci/workflows/deploy_relay_directory.py --step install_docker_pussh
- name: Set up SSH agent
uses: webfactory/ssh-agent@v0.9.1
with:
ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY_SERVER }}
- name: Add server to known hosts
run: python3 scripts/ci/workflows/deploy_relay_directory.py --step add_known_hosts --server-ip ${{ secrets.SERVER_IP }}
- name: Push image and deploy
env:
SERVER: ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_IP }}
IMAGE_TAG: ${{ env.STACK }}:${{ env.DEPLOY_SHA }}
run: python3 scripts/ci/workflows/deploy_relay_directory.py --step push_and_deploy

.github/workflows/deploy-relay.yaml (new file)
View File

@@ -0,0 +1,62 @@
name: deploy relay
on:
workflow_dispatch:
inputs:
ref:
type: string
required: false
default: ''
description: Optional git ref (defaults to the triggering branch)
push:
branches:
- canary
paths:
- 'fluxer_relay/**'
concurrency:
group: deploy-relay
cancel-in-progress: true
permissions:
contents: read
jobs:
deploy:
name: Deploy (hot patch)
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
steps:
- uses: actions/checkout@v6
with:
ref: ${{ inputs.ref || '' }}
sparse-checkout: |
fluxer_relay
scripts/ci
- name: Set up Erlang
uses: erlef/setup-beam@v1
with:
otp-version: '28'
rebar3-version: '3.24.0'
- name: Compile
run: python3 scripts/ci/workflows/deploy_relay.py --step compile
- name: Set up SSH
uses: webfactory/ssh-agent@v0.9.1
with:
ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY_SERVER }}
- name: Add server to known hosts
run: python3 scripts/ci/workflows/deploy_relay.py --step add_known_hosts --server-ip ${{ secrets.SERVER_IP }}
- name: Record deploy commit
run: python3 scripts/ci/workflows/deploy_relay.py --step record_deploy_commit
- name: Deploy
env:
SERVER: ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_IP }}
RELAY_ADMIN_SECRET: ${{ secrets.RELAY_ADMIN_SECRET }}
run: python3 scripts/ci/workflows/deploy_relay.py --step deploy

View File

@@ -7,7 +7,13 @@ on:
paths:
- fluxer_media_proxy/**
- .github/workflows/deploy-static-proxy.yaml
workflow_dispatch: {}
workflow_dispatch:
inputs:
ref:
type: string
required: false
default: ''
description: Optional git ref (defaults to the triggering branch)
concurrency:
group: deploy-fluxer-static-proxy
@@ -25,17 +31,17 @@ env:
jobs:
deploy:
name: Deploy static proxy
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 10
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
env:
RELEASE_CHANNEL: stable
steps:
- uses: actions/checkout@v6
with:
ref: ${{ inputs.ref || '' }}
- name: Record deploy commit
run: |
set -euo pipefail
sha=$(git rev-parse HEAD)
echo "Deploying commit ${sha}"
printf 'DEPLOY_SHA=%s\n' "$sha" >> "$GITHUB_ENV"
run: python3 scripts/ci/workflows/deploy_static_proxy.py --step record_deploy_commit
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -46,27 +52,30 @@ jobs:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Set build timestamp
run: python3 scripts/ci/workflows/deploy_static_proxy.py --step set_build_timestamp
- name: Build image
uses: docker/build-push-action@v6
with:
context: ${{ env.CONTEXT_DIR }}
context: .
file: ${{ env.CONTEXT_DIR }}/Dockerfile
tags: ${{ env.IMAGE_NAME }}:${{ env.DEPLOY_SHA }}
load: true
platforms: linux/amd64
cache-from: type=gha,scope=${{ env.SERVICE_NAME }}
cache-to: type=gha,mode=max,scope=${{ env.SERVICE_NAME }}
build-args: |
BUILD_SHA=${{ env.DEPLOY_SHA }}
BUILD_NUMBER=${{ github.run_number }}
BUILD_TIMESTAMP=${{ env.BUILD_TIMESTAMP }}
RELEASE_CHANNEL=${{ env.RELEASE_CHANNEL }}
env:
DOCKER_BUILD_SUMMARY: false
DOCKER_BUILD_RECORD_UPLOAD: false
- name: Install docker-pussh
run: |
set -euo pipefail
mkdir -p ~/.docker/cli-plugins
curl -fsSL https://raw.githubusercontent.com/psviderski/unregistry/v0.3.1/docker-pussh \
-o ~/.docker/cli-plugins/docker-pussh
chmod +x ~/.docker/cli-plugins/docker-pussh
run: python3 scripts/ci/workflows/deploy_static_proxy.py --step install_docker_pussh
- name: Set up SSH agent
uses: webfactory/ssh-agent@v0.9.1
@@ -74,77 +83,10 @@ jobs:
ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY_SERVER }}
- name: Add server to known hosts
run: |
set -euo pipefail
mkdir -p ~/.ssh
ssh-keyscan -H ${{ secrets.SERVER_IP }} >> ~/.ssh/known_hosts
run: python3 scripts/ci/workflows/deploy_static_proxy.py --step add_known_hosts --server-ip ${{ secrets.SERVER_IP }}
- name: Push image and deploy
env:
IMAGE_TAG: ${{ env.IMAGE_NAME }}:${{ env.DEPLOY_SHA }}
SERVER: ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_IP }}
run: |
set -euo pipefail
docker pussh "${IMAGE_TAG}" "${SERVER}"
ssh "${SERVER}" "IMAGE_TAG=${IMAGE_TAG} SERVICE_NAME=${SERVICE_NAME} COMPOSE_STACK=${COMPOSE_STACK} bash" << 'EOF'
set -euo pipefail
sudo mkdir -p "/opt/${SERVICE_NAME}"
sudo chown -R "${USER}:${USER}" "/opt/${SERVICE_NAME}"
cd "/opt/${SERVICE_NAME}"
cat > compose.yaml << COMPOSEEOF
services:
app:
image: ${IMAGE_TAG}
command: ['pnpm', 'start']
env_file:
- /etc/fluxer/fluxer.env
environment:
- NODE_ENV=production
- FLUXER_MEDIA_PROXY_PORT=8080
- FLUXER_MEDIA_PROXY_STATIC_MODE=true
- FLUXER_MEDIA_PROXY_REQUIRE_CLOUDFLARE=true
- AWS_S3_ENDPOINT=https://s3.us-east-va.io.cloud.ovh.us
- AWS_S3_BUCKET_CDN=fluxer
- AWS_S3_BUCKET_UPLOADS=fluxer-uploads
- AWS_S3_BUCKET_STATIC=fluxer-static
deploy:
replicas: 2
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
update_config:
parallelism: 1
delay: 10s
order: start-first
rollback_config:
parallelism: 1
delay: 10s
labels:
- 'caddy=http://fluxerstatic.com'
- 'caddy.reverse_proxy={{upstreams 8080}}'
- 'caddy.header.X-Robots-Tag="noindex, nofollow, nosnippet, noimageindex"'
- 'caddy.header.Strict-Transport-Security="max-age=31536000; includeSubDomains; preload"'
- 'caddy.header.X-Xss-Protection="1; mode=block"'
- 'caddy.header.X-Content-Type-Options=nosniff'
- 'caddy.header.Referrer-Policy=strict-origin-when-cross-origin'
- 'caddy.header.X-Frame-Options=DENY'
- 'caddy.header.Expect-Ct="max-age=86400, report-uri=\"https://o4510149383094272.ingest.us.sentry.io/api/4510205811556352/security/?sentry_key=2670068cd12b6a62f3a30a7f0055f0f1\""'
networks:
- fluxer-shared
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:8080/_health']
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
networks:
fluxer-shared:
external: true
COMPOSEEOF
docker stack deploy --with-registry-auth --detach=false --resolve-image never -c compose.yaml "${COMPOSE_STACK}"
EOF
run: python3 scripts/ci/workflows/deploy_static_proxy.py --step push_and_deploy

View File

@@ -18,37 +18,26 @@ permissions:
jobs:
migrate:
name: Run database migrations
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 10
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
steps:
- uses: actions/checkout@v6
- name: Set up Rust
uses: dtolnay/rust-toolchain@stable
- name: Set up pnpm
uses: pnpm/action-setup@v4
- name: Cache Rust dependencies
uses: actions/cache@v5
- name: Set up Node.js
uses: actions/setup-node@v4
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
scripts/cassandra-migrate/target/
key: ${{ runner.os }}-cargo-${{ hashFiles('scripts/cassandra-migrate/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-cargo-
node-version: 24
cache: pnpm
cache-dependency-path: pnpm-lock.yaml
- name: Build migration tool
run: |
set -euo pipefail
cd scripts/cassandra-migrate
cargo build --release
- name: Install dependencies
run: python3 scripts/ci/workflows/migrate_cassandra.py --step install_dependencies
- name: Validate migrations
run: |
set -euo pipefail
./scripts/cassandra-migrate/target/release/cassandra-migrate check
run: python3 scripts/ci/workflows/migrate_cassandra.py --step validate_migrations
- name: Set up SSH agent
uses: webfactory/ssh-agent@v0.9.1
@@ -56,70 +45,23 @@ jobs:
ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY_SERVER }}
- name: Add server to known hosts
run: |
set -euo pipefail
mkdir -p ~/.ssh
ssh-keyscan -H ${{ secrets.SERVER_IP }} >> ~/.ssh/known_hosts
run: python3 scripts/ci/workflows/migrate_cassandra.py --step add_known_hosts --server-ip ${{ secrets.SERVER_IP }}
- name: Set up SSH tunnel for Cassandra
run: |
set -euo pipefail
nohup ssh -N -o ConnectTimeout=30 -o ServerAliveInterval=10 -o ServerAliveCountMax=30 -o ExitOnForwardFailure=yes -L 9042:localhost:9042 ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_IP }} > /tmp/ssh-tunnel.log 2>&1 &
SSH_TUNNEL_PID=$!
printf 'SSH_TUNNEL_PID=%s\n' "$SSH_TUNNEL_PID" >> "$GITHUB_ENV"
for i in {1..30}; do
if timeout 1 bash -c "echo > /dev/tcp/localhost/9042" 2>/dev/null; then
echo "SSH tunnel established"
break
elif command -v ss >/dev/null 2>&1 && ss -tln | grep -q ":9042 "; then
echo "SSH tunnel established"
break
elif command -v netstat >/dev/null 2>&1 && netstat -tln | grep -q ":9042 "; then
echo "SSH tunnel established"
break
fi
if [ $i -eq 30 ]; then
cat /tmp/ssh-tunnel.log || true
exit 1
fi
sleep 1
done
ps -p $SSH_TUNNEL_PID > /dev/null || exit 1
run: python3 scripts/ci/workflows/migrate_cassandra.py --step setup_tunnel --server-user ${{ secrets.SERVER_USER }} --server-ip ${{ secrets.SERVER_IP }}
- name: Test Cassandra connection
env:
CASSANDRA_USERNAME: ${{ secrets.CASSANDRA_USERNAME }}
CASSANDRA_PASSWORD: ${{ secrets.CASSANDRA_PASSWORD }}
run: |
set -euo pipefail
./scripts/cassandra-migrate/target/release/cassandra-migrate \
--host localhost \
--port 9042 \
--username "${CASSANDRA_USERNAME}" \
--password "${CASSANDRA_PASSWORD}" \
test
run: python3 scripts/ci/workflows/migrate_cassandra.py --step test_connection
- name: Run migrations
env:
CASSANDRA_USERNAME: ${{ secrets.CASSANDRA_USERNAME }}
CASSANDRA_PASSWORD: ${{ secrets.CASSANDRA_PASSWORD }}
run: |
set -euo pipefail
./scripts/cassandra-migrate/target/release/cassandra-migrate \
--host localhost \
--port 9042 \
--username "${CASSANDRA_USERNAME}" \
--password "${CASSANDRA_PASSWORD}" \
up
run: python3 scripts/ci/workflows/migrate_cassandra.py --step run_migrations
- name: Close SSH tunnel
if: always()
run: |
set -euo pipefail
if [ -n "${SSH_TUNNEL_PID:-}" ]; then
kill "$SSH_TUNNEL_PID" 2>/dev/null || true
fi
pkill -f "ssh.*9042:localhost:9042" || true
rm -f /tmp/ssh-tunnel.log || true
run: python3 scripts/ci/workflows/migrate_cassandra.py --step close_tunnel
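The `setup_tunnel` step replaces the inline bash above (background `ssh -N -L`, then poll port 9042 for up to 30 seconds). A minimal sketch of the same logic, assuming the Python helper preserves the bash behaviour (function names are illustrative):

```python
import socket
import subprocess
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until a TCP connection to host:port succeeds, mirroring the
    /dev/tcp retry loop in the removed bash step."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.5)
    return False

def setup_tunnel(user: str, server_ip: str, port: int = 9042) -> subprocess.Popen:
    """Start a background ssh port-forward and wait for it to come up."""
    proc = subprocess.Popen([
        "ssh", "-N",
        "-o", "ConnectTimeout=30",
        "-o", "ExitOnForwardFailure=yes",
        "-L", f"{port}:localhost:{port}",
        f"{user}@{server_ip}",
    ])
    if not wait_for_port("localhost", port):
        proc.kill()
        raise RuntimeError("SSH tunnel did not come up in time")
    return proc
```

The returned `Popen` handle makes the `close_tunnel` cleanup step a simple `proc.kill()` instead of the bash `pkill -f` pattern match.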


@@ -25,13 +25,13 @@ permissions:
jobs:
promote:
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 10
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
steps:
- name: Create GitHub App token
id: app-token
uses: actions/create-github-app-token@v1
uses: actions/create-github-app-token@v2
with:
app-id: ${{ secrets.PROMOTE_APP_ID }}
private-key: ${{ secrets.PROMOTE_APP_PRIVATE_KEY }}
@@ -45,49 +45,23 @@ jobs:
- name: Verify ff-only + summarize
id: verify
run: |
set -euo pipefail
src="${{ inputs.src }}"
dst="${{ inputs.dst }}"
git fetch origin "${dst}" "${src}" --prune
# Ensure HEAD is exactly origin/src
git reset --hard "origin/${src}"
# FF-only requirement: dst must be an ancestor of src
if ! git merge-base --is-ancestor "origin/${dst}" "origin/${src}"; then
echo "::error::Cannot fast-forward: origin/${dst} is not an ancestor of origin/${src} (branches diverged)."
exit 1
fi
ahead="$(git rev-list --count "origin/${dst}..origin/${src}")"
echo "ahead=$ahead" >> "$GITHUB_OUTPUT"
{
echo "## Promote \`${src}\` → \`${dst}\` (ff-only)"
echo ""
echo "- \`${dst}\`: \`$(git rev-parse "origin/${dst}")\`"
echo "- \`${src}\`: \`$(git rev-parse "origin/${src}")\`"
echo "- Commits to promote: **${ahead}**"
echo ""
echo "### Commits"
if [ "$ahead" -eq 0 ]; then
echo "_Nothing to promote._"
else
git log --oneline --decorate "origin/${dst}..origin/${src}"
fi
} >> "$GITHUB_STEP_SUMMARY"
run: >-
python3 scripts/ci/workflows/promote_canary_to_main.py
--step verify
--src "${{ inputs.src }}"
--dst "${{ inputs.dst }}"
- name: Push fast-forward
if: ${{ steps.verify.outputs.ahead != '0' && inputs.dry_run != true }}
run: |
set -euo pipefail
dst="${{ inputs.dst }}"
# Push src HEAD to dst (no merge commit, same SHAs)
git push origin "HEAD:refs/heads/${dst}"
run: >-
python3 scripts/ci/workflows/promote_canary_to_main.py
--step push
--dst "${{ inputs.dst }}"
- name: Dry run / no-op
if: ${{ steps.verify.outputs.ahead == '0' || inputs.dry_run == true }}
run: |
echo "No push performed (dry_run=${{ inputs.dry_run }}, ahead=${{ steps.verify.outputs.ahead }})."
run: >-
python3 scripts/ci/workflows/promote_canary_to_main.py
--step dry_run
--dry-run "${{ inputs.dry_run }}"
--ahead "${{ steps.verify.outputs.ahead }}"
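The core of the removed `verify` bash is `git merge-base --is-ancestor` plus `git rev-list --count`, which the Python helper presumably wraps. A sketch under that assumption (helper names are illustrative):

```python
import subprocess

def is_ancestor(ancestor: str, descendant: str, cwd: str = ".") -> bool:
    """True when `ancestor` is reachable from `descendant`, i.e. pushing
    descendant to ancestor's branch is a pure fast-forward."""
    result = subprocess.run(
        ["git", "merge-base", "--is-ancestor", ancestor, descendant],
        cwd=cwd,
    )
    return result.returncode == 0

def count_ahead(dst: str, src: str, cwd: str = ".") -> int:
    """Number of commits on src that dst lacks (the step's `ahead` output)."""
    result = subprocess.run(
        ["git", "rev-list", "--count", f"{dst}..{src}"],
        cwd=cwd, capture_output=True, text=True, check=True,
    )
    return int(result.stdout.strip())
```

`merge-base --is-ancestor` communicates its answer solely through the exit code, which is why the sketch inspects `returncode` rather than stdout.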


@@ -0,0 +1,151 @@
name: release livekitctl
on:
push:
tags:
- 'livekitctl-v*'
workflow_dispatch:
inputs:
version:
description: Version to release (e.g., 1.0.0)
required: true
type: string
permissions:
contents: write
concurrency:
group: release-livekitctl
cancel-in-progress: false
env:
GO_VERSION: '1.24'
jobs:
build:
name: Build ${{ matrix.goos }}/${{ matrix.goarch }}
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
strategy:
fail-fast: false
matrix:
include:
- goos: linux
goarch: amd64
- goos: linux
goarch: arm64
steps:
- name: Checkout
uses: actions/checkout@v6
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache-dependency-path: fluxer_devops/livekitctl/go.sum
- name: Determine version
id: version
run: >-
python3 scripts/ci/workflows/release_livekitctl.py
--step determine_version
--event-name "${{ github.event_name }}"
--input-version "${{ inputs.version }}"
--ref-name "${{ github.ref_name }}"
- name: Build binary
env:
GOOS: ${{ matrix.goos }}
GOARCH: ${{ matrix.goarch }}
CGO_ENABLED: 0
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/release_livekitctl.py
--step build_binary
- name: Upload artifact
uses: actions/upload-artifact@v4
with:
name: livekitctl-${{ matrix.goos }}-${{ matrix.goarch }}
path: fluxer_devops/livekitctl/livekitctl-${{ matrix.goos }}-${{ matrix.goarch }}
retention-days: 1
release:
name: Create release
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
needs: build
steps:
- name: Checkout
uses: actions/checkout@v6
- name: Determine version
id: version
run: >-
python3 scripts/ci/workflows/release_livekitctl.py
--step determine_version
--event-name "${{ github.event_name }}"
--input-version "${{ inputs.version }}"
--ref-name "${{ github.ref_name }}"
- name: Download all artifacts
uses: actions/download-artifact@v4
with:
path: artifacts
- name: Prepare release assets
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/release_livekitctl.py
--step prepare_release_assets
- name: Generate checksums
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/release_livekitctl.py
--step generate_checksums
--release-dir release
- name: Create tag (workflow_dispatch only)
if: github.event_name == 'workflow_dispatch'
run: >-
python3 ${{ github.workspace }}/scripts/ci/workflows/release_livekitctl.py
--step create_tag
--tag "${{ steps.version.outputs.tag }}"
--version "${{ steps.version.outputs.version }}"
- name: Create GitHub release
uses: softprops/action-gh-release@v2
with:
tag_name: ${{ steps.version.outputs.tag }}
name: livekitctl v${{ steps.version.outputs.version }}
body: |
## livekitctl v${{ steps.version.outputs.version }}
Self-hosted LiveKit bootstrap and operations CLI.
### Installation
```bash
curl -fsSL https://fluxer.app/get/livekitctl | sudo bash
```
### Manual download
Download the appropriate binary for your system:
- `livekitctl-linux-amd64` - Linux x86_64
- `livekitctl-linux-arm64` - Linux ARM64
Then make it executable and move to your PATH:
```bash
chmod +x livekitctl-linux-*
sudo mv livekitctl-linux-* /usr/local/bin/livekitctl
```
### Checksums
See `checksums.txt` for SHA256 checksums.
files: |
release/livekitctl-linux-amd64
release/livekitctl-linux-arm64
release/checksums.txt
draft: false
prerelease: false
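The `generate_checksums` step presumably emits a `sha256sum`-compatible `checksums.txt`, since the release body points users at it for SHA256 verification. A sketch under that assumption (the function name is illustrative):

```python
import hashlib
from pathlib import Path

def generate_checksums(release_dir: str) -> Path:
    """Write checksums.txt in `sha256sum` format: "<digest>  <name>",
    one line per release asset."""
    root = Path(release_dir)
    lines = []
    for path in sorted(root.iterdir()):
        # Skip the output file itself and anything that is not a regular file.
        if path.name == "checksums.txt" or not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        lines.append(f"{digest}  {path.name}")
    out = root / "checksums.txt"
    out.write_text("\n".join(lines) + "\n")
    return out
```

Keeping the two-space `sha256sum` separator lets users verify downloads with a plain `sha256sum -c checksums.txt`.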


@@ -0,0 +1,259 @@
name: release relay directory
on:
push:
branches: [canary]
paths:
- fluxer_relay_directory/**
- .github/workflows/release-relay-directory.yaml
workflow_dispatch:
inputs:
channel:
description: Release channel
type: choice
options: [stable, nightly]
default: nightly
required: false
ref:
description: Git ref (branch, tag, or commit SHA)
type: string
default: ''
required: false
version:
description: Stable version (e.g. 1.0.0). Defaults to 0.0.<run_number>
type: string
required: false
permissions:
contents: write
packages: write
id-token: write
attestations: write
concurrency:
group: release-relay-directory-${{ (github.event_name == 'workflow_dispatch' && github.event.inputs.channel) || 'nightly' }}
cancel-in-progress: true
defaults:
run:
shell: bash
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository_owner }}/fluxer-relay-directory
CHANNEL: ${{ (github.event_name == 'workflow_dispatch' && github.event.inputs.channel) || 'nightly' }}
SOURCE_REF: >-
${{ (github.event_name == 'workflow_dispatch' && github.event.inputs.ref)
|| ((github.event_name == 'workflow_dispatch' && github.event.inputs.channel == 'stable') && 'main')
|| 'canary' }}
jobs:
meta:
name: resolve build metadata
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
outputs:
version: ${{ steps.meta.outputs.version }}
channel: ${{ steps.meta.outputs.channel }}
source_ref: ${{ steps.meta.outputs.source_ref }}
sha_short: ${{ steps.meta.outputs.sha_short }}
timestamp: ${{ steps.meta.outputs.timestamp }}
date: ${{ steps.meta.outputs.date }}
build_number: ${{ steps.meta.outputs.build_number }}
steps:
- name: checkout
uses: actions/checkout@v6
with:
ref: ${{ env.SOURCE_REF }}
- name: metadata
id: meta
run: >-
python3 scripts/ci/workflows/release_relay_directory.py
--step metadata
--version-input "${{ github.event.inputs.version }}"
--channel "${{ env.CHANNEL }}"
--source-ref "${{ env.SOURCE_REF }}"
build:
name: build fluxer relay directory
needs: meta
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
outputs:
image_tags: ${{ steps.docker_meta.outputs.tags }}
image_digest: ${{ steps.build.outputs.digest }}
steps:
- name: checkout
uses: actions/checkout@v6
with:
ref: ${{ needs.meta.outputs.source_ref }}
- name: set up buildx
uses: docker/setup-buildx-action@v3
- name: login
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: docker metadata
id: docker_meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=raw,value=nightly,enable=${{ needs.meta.outputs.channel == 'nightly' }}
type=raw,value=nightly-${{ needs.meta.outputs.date }},enable=${{ needs.meta.outputs.channel == 'nightly' }}
type=raw,value=sha-${{ needs.meta.outputs.sha_short }},enable=${{ needs.meta.outputs.channel == 'nightly' }}
type=raw,value=stable,enable=${{ needs.meta.outputs.channel == 'stable' }}
type=raw,value=latest,enable=${{ needs.meta.outputs.channel == 'stable' }}
type=raw,value=v${{ needs.meta.outputs.version }},enable=${{ needs.meta.outputs.channel == 'stable' }}
type=semver,pattern={{version}},value=${{ needs.meta.outputs.version }},enable=${{ needs.meta.outputs.channel == 'stable' && !startsWith(needs.meta.outputs.version, '0.0.') }}
type=semver,pattern={{major}}.{{minor}},value=${{ needs.meta.outputs.version }},enable=${{ needs.meta.outputs.channel == 'stable' && !startsWith(needs.meta.outputs.version, '0.0.') }}
- name: build and push
id: build
uses: docker/build-push-action@v6
with:
context: .
file: fluxer_relay_directory/Dockerfile
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ steps.docker_meta.outputs.tags }}
labels: |
${{ steps.docker_meta.outputs.labels }}
org.opencontainers.image.version=v${{ needs.meta.outputs.version }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.created=${{ needs.meta.outputs.timestamp }}
org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
dev.fluxer.build.channel=${{ needs.meta.outputs.channel }}
dev.fluxer.build.number=${{ needs.meta.outputs.build_number }}
dev.fluxer.build.sha=${{ github.sha }}
dev.fluxer.build.short_sha=${{ needs.meta.outputs.sha_short }}
dev.fluxer.build.date=${{ needs.meta.outputs.date }}
build-args: |
BUILD_SHA=${{ github.sha }}
BUILD_NUMBER=${{ needs.meta.outputs.build_number }}
BUILD_TIMESTAMP=${{ needs.meta.outputs.timestamp }}
RELEASE_CHANNEL=${{ needs.meta.outputs.channel }}
cache-from: type=gha,scope=relay-directory-${{ needs.meta.outputs.channel }}
cache-to: type=gha,mode=max,scope=relay-directory-${{ needs.meta.outputs.channel }}
provenance: true
sbom: true
- name: attest
uses: actions/attest-build-provenance@v2
with:
subject-name: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
subject-digest: ${{ steps.build.outputs.digest }}
push-to-registry: true
create-release:
name: create release
needs: [meta, build]
if: |
always() &&
needs.meta.outputs.version != '' &&
needs.build.result == 'success'
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
steps:
- name: checkout
uses: actions/checkout@v6
with:
ref: ${{ needs.meta.outputs.source_ref }}
- name: stable release
if: needs.meta.outputs.channel == 'stable'
uses: softprops/action-gh-release@v2
with:
tag_name: relay-directory-v${{ needs.meta.outputs.version }}
name: Fluxer Relay Directory v${{ needs.meta.outputs.version }}
draft: false
prerelease: false
generate_release_notes: true
body: |
Fluxer Relay Directory
Pull:
```bash
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:v${{ needs.meta.outputs.version }}
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
```
Build:
- version: v${{ needs.meta.outputs.version }}
- build: ${{ needs.meta.outputs.build_number }}
- sha: ${{ github.sha }}
- time: ${{ needs.meta.outputs.timestamp }}
- channel: stable
Docs: https://docs.fluxer.app/federation
- name: nightly release
if: needs.meta.outputs.channel == 'nightly'
uses: softprops/action-gh-release@v2
with:
tag_name: relay-directory-nightly-${{ needs.meta.outputs.date }}-${{ needs.meta.outputs.sha_short }}
name: Relay Directory nightly ${{ needs.meta.outputs.date }} (${{ needs.meta.outputs.sha_short }})
draft: false
prerelease: true
generate_release_notes: true
body: |
Nightly Fluxer Relay Directory image from canary.
Pull:
```bash
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:nightly
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:nightly-${{ needs.meta.outputs.date }}
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:sha-${{ needs.meta.outputs.sha_short }}
```
Build:
- version: v${{ needs.meta.outputs.version }}
- build: ${{ needs.meta.outputs.build_number }}
- sha: ${{ github.sha }}
- time: ${{ needs.meta.outputs.timestamp }}
- channel: nightly
- branch: canary
release-summary:
name: release summary
needs: [meta, build]
if: always()
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
steps:
- name: Checkout repository
uses: actions/checkout@v6
with:
sparse-checkout: scripts/ci
sparse-checkout-cone-mode: false
- name: summary
run: >-
python3 scripts/ci/workflows/release_relay_directory.py
--step summary
--build-result "${{ needs.build.result }}"
--channel "${{ needs.meta.outputs.channel }}"
--version "${{ needs.meta.outputs.version }}"
--build-number "${{ needs.meta.outputs.build_number }}"
--sha-short "${{ needs.meta.outputs.sha_short }}"
--timestamp "${{ needs.meta.outputs.timestamp }}"
--date-ymd "${{ needs.meta.outputs.date }}"
--source-ref "${{ needs.meta.outputs.source_ref }}"
--image-tags "${{ needs.build.outputs.image_tags }}"
--image-digest "${{ needs.build.outputs.image_digest }}"
--registry "${{ env.REGISTRY }}"
--image-name "${{ env.IMAGE_NAME }}"
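The `meta` job's outputs (`version`, `sha_short`, `date`, and so on) are produced by the `--step metadata` helper. Assuming it follows the standard `$GITHUB_OUTPUT` key=value protocol that the `steps.meta.outputs.*` references depend on, a sketch (function names and the version fallback are inferred from the workflow's input description, not taken from the real script):

```python
import os
from datetime import datetime, timezone

def resolve_metadata(version_input: str, channel: str, run_number: str,
                     sha: str) -> dict:
    """Derive the outputs the meta job exposes; version falls back to
    0.0.<run_number> as the workflow_dispatch input documents."""
    now = datetime.now(timezone.utc)
    return {
        "version": version_input or f"0.0.{run_number}",
        "channel": channel,
        "sha_short": sha[:7],
        "timestamp": now.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "date": now.strftime("%Y%m%d"),
        "build_number": run_number,
    }

def write_outputs(outputs: dict) -> None:
    """Append key=value lines to the file named by $GITHUB_OUTPUT, which is
    how downstream `needs.meta.outputs.*` expressions pick the values up."""
    with open(os.environ["GITHUB_OUTPUT"], "a") as fh:
        for key, value in outputs.items():
            fh.write(f"{key}={value}\n")
```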

.github/workflows/release-relay.yaml

@@ -0,0 +1,259 @@
name: release relay
on:
push:
branches: [canary]
paths:
- fluxer_relay/**
- .github/workflows/release-relay.yaml
workflow_dispatch:
inputs:
channel:
description: Release channel
type: choice
options: [stable, nightly]
default: nightly
required: false
ref:
description: Git ref (branch, tag, or commit SHA)
type: string
default: ''
required: false
version:
description: Stable version (e.g. 1.0.0). Defaults to 0.0.<run_number>
type: string
required: false
permissions:
contents: write
packages: write
id-token: write
attestations: write
concurrency:
group: release-relay-${{ (github.event_name == 'workflow_dispatch' && github.event.inputs.channel) || 'nightly' }}
cancel-in-progress: true
defaults:
run:
shell: bash
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository_owner }}/fluxer-relay
CHANNEL: ${{ (github.event_name == 'workflow_dispatch' && github.event.inputs.channel) || 'nightly' }}
SOURCE_REF: >-
${{ (github.event_name == 'workflow_dispatch' && github.event.inputs.ref)
|| ((github.event_name == 'workflow_dispatch' && github.event.inputs.channel == 'stable') && 'main')
|| 'canary' }}
jobs:
meta:
name: resolve build metadata
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
outputs:
version: ${{ steps.meta.outputs.version }}
channel: ${{ steps.meta.outputs.channel }}
source_ref: ${{ steps.meta.outputs.source_ref }}
sha_short: ${{ steps.meta.outputs.sha_short }}
timestamp: ${{ steps.meta.outputs.timestamp }}
date: ${{ steps.meta.outputs.date }}
build_number: ${{ steps.meta.outputs.build_number }}
steps:
- name: checkout
uses: actions/checkout@v6
with:
ref: ${{ env.SOURCE_REF }}
- name: metadata
id: meta
run: >-
python3 scripts/ci/workflows/release_relay.py
--step metadata
--version-input "${{ github.event.inputs.version }}"
--channel "${{ env.CHANNEL }}"
--source-ref "${{ env.SOURCE_REF }}"
build:
name: build fluxer relay
needs: meta
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
outputs:
image_tags: ${{ steps.docker_meta.outputs.tags }}
image_digest: ${{ steps.build.outputs.digest }}
steps:
- name: checkout
uses: actions/checkout@v6
with:
ref: ${{ needs.meta.outputs.source_ref }}
- name: set up buildx
uses: docker/setup-buildx-action@v3
- name: login
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: docker metadata
id: docker_meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=raw,value=nightly,enable=${{ needs.meta.outputs.channel == 'nightly' }}
type=raw,value=nightly-${{ needs.meta.outputs.date }},enable=${{ needs.meta.outputs.channel == 'nightly' }}
type=raw,value=sha-${{ needs.meta.outputs.sha_short }},enable=${{ needs.meta.outputs.channel == 'nightly' }}
type=raw,value=stable,enable=${{ needs.meta.outputs.channel == 'stable' }}
type=raw,value=latest,enable=${{ needs.meta.outputs.channel == 'stable' }}
type=raw,value=v${{ needs.meta.outputs.version }},enable=${{ needs.meta.outputs.channel == 'stable' }}
type=semver,pattern={{version}},value=${{ needs.meta.outputs.version }},enable=${{ needs.meta.outputs.channel == 'stable' && !startsWith(needs.meta.outputs.version, '0.0.') }}
type=semver,pattern={{major}}.{{minor}},value=${{ needs.meta.outputs.version }},enable=${{ needs.meta.outputs.channel == 'stable' && !startsWith(needs.meta.outputs.version, '0.0.') }}
- name: build and push
id: build
uses: docker/build-push-action@v6
with:
context: fluxer_relay
file: fluxer_relay/Dockerfile
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ steps.docker_meta.outputs.tags }}
labels: |
${{ steps.docker_meta.outputs.labels }}
org.opencontainers.image.version=v${{ needs.meta.outputs.version }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.created=${{ needs.meta.outputs.timestamp }}
org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
dev.fluxer.build.channel=${{ needs.meta.outputs.channel }}
dev.fluxer.build.number=${{ needs.meta.outputs.build_number }}
dev.fluxer.build.sha=${{ github.sha }}
dev.fluxer.build.short_sha=${{ needs.meta.outputs.sha_short }}
dev.fluxer.build.date=${{ needs.meta.outputs.date }}
build-args: |
BUILD_SHA=${{ github.sha }}
BUILD_NUMBER=${{ needs.meta.outputs.build_number }}
BUILD_TIMESTAMP=${{ needs.meta.outputs.timestamp }}
RELEASE_CHANNEL=${{ needs.meta.outputs.channel }}
cache-from: type=gha,scope=relay-${{ needs.meta.outputs.channel }}
cache-to: type=gha,mode=max,scope=relay-${{ needs.meta.outputs.channel }}
provenance: true
sbom: true
- name: attest
uses: actions/attest-build-provenance@v2
with:
subject-name: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
subject-digest: ${{ steps.build.outputs.digest }}
push-to-registry: true
create-release:
name: create release
needs: [meta, build]
if: |
always() &&
needs.meta.outputs.version != '' &&
needs.build.result == 'success'
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
steps:
- name: checkout
uses: actions/checkout@v6
with:
ref: ${{ needs.meta.outputs.source_ref }}
- name: stable release
if: needs.meta.outputs.channel == 'stable'
uses: softprops/action-gh-release@v2
with:
tag_name: relay-v${{ needs.meta.outputs.version }}
name: Fluxer Relay v${{ needs.meta.outputs.version }}
draft: false
prerelease: false
generate_release_notes: true
body: |
Fluxer Relay
Pull:
```bash
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:v${{ needs.meta.outputs.version }}
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
```
Build:
- version: v${{ needs.meta.outputs.version }}
- build: ${{ needs.meta.outputs.build_number }}
- sha: ${{ github.sha }}
- time: ${{ needs.meta.outputs.timestamp }}
- channel: stable
Docs: https://docs.fluxer.app/federation
- name: nightly release
if: needs.meta.outputs.channel == 'nightly'
uses: softprops/action-gh-release@v2
with:
tag_name: relay-nightly-${{ needs.meta.outputs.date }}-${{ needs.meta.outputs.sha_short }}
name: Relay nightly ${{ needs.meta.outputs.date }} (${{ needs.meta.outputs.sha_short }})
draft: false
prerelease: true
generate_release_notes: true
body: |
Nightly Fluxer Relay image from canary.
Pull:
```bash
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:nightly
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:nightly-${{ needs.meta.outputs.date }}
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:sha-${{ needs.meta.outputs.sha_short }}
```
Build:
- version: v${{ needs.meta.outputs.version }}
- build: ${{ needs.meta.outputs.build_number }}
- sha: ${{ github.sha }}
- time: ${{ needs.meta.outputs.timestamp }}
- channel: nightly
- branch: canary
release-summary:
name: release summary
needs: [meta, build]
if: always()
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
steps:
- name: Checkout repository
uses: actions/checkout@v6
with:
sparse-checkout: scripts/ci
sparse-checkout-cone-mode: false
- name: summary
run: >-
python3 scripts/ci/workflows/release_relay.py
--step summary
--build-result "${{ needs.build.result }}"
--channel "${{ needs.meta.outputs.channel }}"
--version "${{ needs.meta.outputs.version }}"
--build-number "${{ needs.meta.outputs.build_number }}"
--sha-short "${{ needs.meta.outputs.sha_short }}"
--timestamp "${{ needs.meta.outputs.timestamp }}"
--date-ymd "${{ needs.meta.outputs.date }}"
--source-ref "${{ needs.meta.outputs.source_ref }}"
--image-tags "${{ needs.build.outputs.image_tags }}"
--image-digest "${{ needs.build.outputs.image_digest }}"
--registry "${{ env.REGISTRY }}"
--image-name "${{ env.IMAGE_NAME }}"

.github/workflows/release-server.yaml

@@ -0,0 +1,280 @@
name: release server
on:
push:
branches: [uwu]
paths:
- packages/**
- fluxer_server/**
- fluxer_gateway/**
- pnpm-lock.yaml
- .github/workflows/release-server.yaml
workflow_dispatch:
inputs:
channel:
description: Release channel
type: choice
options: [stable, nightly]
default: nightly
required: false
ref:
description: Git ref (branch, tag, or commit SHA)
type: string
default: ''
required: false
version:
description: Stable version (e.g. 1.0.0). Defaults to 0.0.<run_number>
type: string
required: false
build_server:
description: Build Fluxer Server
type: boolean
default: true
required: false
permissions:
contents: write
packages: write
id-token: write
attestations: write
concurrency:
group: release-server-${{ (github.event_name == 'workflow_dispatch' && github.event.inputs.channel) || 'nightly' }}
cancel-in-progress: true
defaults:
run:
shell: bash
env:
REGISTRY: git.i5.wtf
IMAGE_NAME_SERVER: fluxerapp/fluxer-server
CHANNEL: ${{ (github.event_name == 'workflow_dispatch' && github.event.inputs.channel) || 'nightly' }}
SOURCE_REF: >-
${{ (github.event_name == 'workflow_dispatch' && github.event.inputs.ref)
|| ((github.event_name == 'workflow_dispatch' && github.event.inputs.channel == 'stable') && 'main')
|| 'uwu' }}
jobs:
meta:
name: resolve build metadata
runs-on: ubuntu-latest
timeout-minutes: 25
outputs:
version: ${{ steps.meta.outputs.version }}
channel: ${{ steps.meta.outputs.channel }}
source_ref: ${{ steps.meta.outputs.source_ref }}
sha_short: ${{ steps.meta.outputs.sha_short }}
timestamp: ${{ steps.meta.outputs.timestamp }}
date: ${{ steps.meta.outputs.date }}
build_number: ${{ steps.meta.outputs.build_number }}
build_server: ${{ steps.should_build.outputs.server }}
steps:
- name: checkout
uses: actions/checkout@v6
with:
ref: ${{ env.SOURCE_REF }}
- name: metadata
id: meta
run: >-
python3 scripts/ci/workflows/release_server.py
--step metadata
--version-input "${{ github.event.inputs.version }}"
--channel "${{ env.CHANNEL }}"
--source-ref "${{ env.SOURCE_REF }}"
- name: determine build targets
id: should_build
run: >-
python3 scripts/ci/workflows/release_server.py
--step determine_build_targets
--event-name "${{ github.event_name }}"
--build-server-input "${{ github.event.inputs.build_server }}"
build-server:
name: build fluxer server
needs: meta
if: needs.meta.outputs.build_server == 'true'
runs-on: ubuntu-latest
timeout-minutes: 25
outputs:
image_tags: ${{ steps.docker_meta.outputs.tags }}
image_digest: ${{ steps.build.outputs.digest }}
steps:
- name: checkout
uses: actions/checkout@v6
with:
ref: ${{ needs.meta.outputs.source_ref }}
- name: set up buildx
uses: docker/setup-buildx-action@v3
- name: login
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.registry_token }}
- name: docker metadata
id: docker_meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME_SERVER }}
tags: |
type=raw,value=nightly,enable=${{ needs.meta.outputs.channel == 'nightly' }}
type=raw,value=nightly-${{ needs.meta.outputs.date }},enable=${{ needs.meta.outputs.channel == 'nightly' }}
type=raw,value=sha-${{ needs.meta.outputs.sha_short }},enable=${{ needs.meta.outputs.channel == 'nightly' }}
type=raw,value=stable,enable=${{ needs.meta.outputs.channel == 'stable' }}
type=raw,value=latest,enable=${{ needs.meta.outputs.channel == 'stable' }}
type=raw,value=v${{ needs.meta.outputs.version }},enable=${{ needs.meta.outputs.channel == 'stable' }}
type=semver,pattern={{version}},value=${{ needs.meta.outputs.version }},enable=${{ needs.meta.outputs.channel == 'stable' && !startsWith(needs.meta.outputs.version, '0.0.') }}
type=semver,pattern={{major}}.{{minor}},value=${{ needs.meta.outputs.version }},enable=${{ needs.meta.outputs.channel == 'stable' && !startsWith(needs.meta.outputs.version, '0.0.') }}
- name: build and push
id: build
uses: docker/build-push-action@v6
with:
context: .
file: fluxer_server/Dockerfile
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ steps.docker_meta.outputs.tags }}
labels: |
${{ steps.docker_meta.outputs.labels }}
org.opencontainers.image.version=v${{ needs.meta.outputs.version }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.created=${{ needs.meta.outputs.timestamp }}
org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
dev.fluxer.build.channel=${{ needs.meta.outputs.channel }}
dev.fluxer.build.number=${{ needs.meta.outputs.build_number }}
dev.fluxer.build.sha=${{ github.sha }}
dev.fluxer.build.short_sha=${{ needs.meta.outputs.sha_short }}
dev.fluxer.build.date=${{ needs.meta.outputs.date }}
build-args: |
BUILD_SHA=${{ github.sha }}
BUILD_NUMBER=${{ needs.meta.outputs.build_number }}
BUILD_TIMESTAMP=${{ needs.meta.outputs.timestamp }}
RELEASE_CHANNEL=${{ needs.meta.outputs.channel }}
# GitHub Actions cache not available in Gitea - disabled
# cache-from: type=gha,scope=server-${{ needs.meta.outputs.channel }}
# cache-to: type=gha,mode=max,scope=server-${{ needs.meta.outputs.channel }}
provenance: false
sbom: false
# GitHub-specific attestation - not available in Gitea
# - name: attest
# uses: actions/attest-build-provenance@v2
# with:
# subject-name: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME_SERVER }}
# subject-digest: ${{ steps.build.outputs.digest }}
# push-to-registry: true
create-release:
name: create release
needs: [meta, build-server]
if: |
always() &&
needs.meta.outputs.version != '' &&
(needs.build-server.result == 'success' || needs.build-server.result == 'skipped')
runs-on: ubuntu-latest
timeout-minutes: 25
steps:
- name: checkout
uses: actions/checkout@v6
with:
ref: ${{ needs.meta.outputs.source_ref }}
- name: stable release
if: needs.meta.outputs.channel == 'stable'
uses: softprops/action-gh-release@v2
with:
tag_name: v${{ needs.meta.outputs.version }}
name: Fluxer Server v${{ needs.meta.outputs.version }}
draft: false
prerelease: false
generate_release_notes: true
body: |
Fluxer Server
Pull:
```bash
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME_SERVER }}:v${{ needs.meta.outputs.version }}
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME_SERVER }}:latest
```
Build:
- version: v${{ needs.meta.outputs.version }}
- build: ${{ needs.meta.outputs.build_number }}
- sha: ${{ github.sha }}
- time: ${{ needs.meta.outputs.timestamp }}
- channel: stable
Docs: https://docs.fluxer.app/self-hosting
- name: nightly release
if: needs.meta.outputs.channel == 'nightly'
uses: softprops/action-gh-release@v2
with:
tag_name: nightly-${{ needs.meta.outputs.date }}-${{ needs.meta.outputs.sha_short }}
name: Nightly build ${{ needs.meta.outputs.date }} (${{ needs.meta.outputs.sha_short }})
draft: false
prerelease: true
generate_release_notes: true
body: |
Nightly Fluxer Server image from the uwu branch.
Pull:
```bash
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME_SERVER }}:nightly
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME_SERVER }}:nightly-${{ needs.meta.outputs.date }}
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME_SERVER }}:sha-${{ needs.meta.outputs.sha_short }}
```
Build:
- version: v${{ needs.meta.outputs.version }}
- build: ${{ needs.meta.outputs.build_number }}
- sha: ${{ github.sha }}
- time: ${{ needs.meta.outputs.timestamp }}
- channel: nightly
- branch: uwu
release-summary:
name: release summary
needs: [meta, build-server]
if: always()
runs-on: ubuntu-latest
timeout-minutes: 25
steps:
- name: Checkout repository
uses: actions/checkout@v6
with:
sparse-checkout: scripts/ci
sparse-checkout-cone-mode: false
- name: summary
run: >-
python3 scripts/ci/workflows/release_server.py
--step summary
--build-result "${{ needs.build-server.result }}"
--channel "${{ needs.meta.outputs.channel }}"
--version "${{ needs.meta.outputs.version }}"
--build-number "${{ needs.meta.outputs.build_number }}"
--sha-short "${{ needs.meta.outputs.sha_short }}"
--timestamp "${{ needs.meta.outputs.timestamp }}"
--date-ymd "${{ needs.meta.outputs.date }}"
--source-ref "${{ needs.meta.outputs.source_ref }}"
--image-tags "${{ needs.build-server.outputs.image_tags }}"
--image-digest "${{ needs.build-server.outputs.image_digest }}"
--registry "${{ env.REGISTRY }}"
--image-name-server "${{ env.IMAGE_NAME_SERVER }}"

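The diffs above replace inline shell with calls of the form `python3 scripts/ci/workflows/<name>.py --step <step> --flag value`. The repository's actual scripts are not shown here, but a minimal sketch of the step-dispatcher pattern these invocations imply might look like the following (all names and the summary format are assumptions, not the real implementation):

```python
import argparse


def summarize(build_result: str, version: str, build_number: str) -> str:
    """Build a one-line summary; the real script presumably writes richer
    markdown to GITHUB_STEP_SUMMARY instead of stdout."""
    status = "succeeded" if build_result == "success" else "failed"
    return f"release v{version} (build {build_number}) {status}"


# Map each --step value to a handler, mirroring the CLI shape in the workflow.
STEPS = {
    "summary": lambda a: print(summarize(a.build_result, a.version, a.build_number)),
}


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--step", required=True, choices=sorted(STEPS))
    parser.add_argument("--build-result", default="")
    parser.add_argument("--version", default="")
    parser.add_argument("--build-number", default="")
    args = parser.parse_args()
    STEPS[args.step](args)


if __name__ == "__main__":
    main()
```

This keeps each workflow step declarative in YAML while the logic lives in one testable module per workflow.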

@@ -20,28 +20,22 @@ env:
IMAGE_NAME: fluxer-gateway
CONTEXT_DIR: fluxer_gateway
COMPOSE_STACK: fluxer-gateway
RELEASE_CHANNEL: ${{ github.ref_name == 'canary' && 'staging' || 'production' }}
jobs:
restart:
name: Restart gateway
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 10
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
steps:
- name: Validate confirmation
if: ${{ github.event.inputs.confirmation != 'RESTART' }}
run: |
echo "::error::Confirmation failed. You must type 'RESTART' to proceed with a full restart."
echo "::error::For regular updates, use deploy-gateway.yaml instead."
exit 1
run: python3 scripts/ci/workflows/restart_gateway.py --step validate_confirmation --confirmation "${{ github.event.inputs.confirmation }}"
- uses: actions/checkout@v6
- name: Record deploy commit
run: |
set -euo pipefail
sha=$(git rev-parse HEAD)
echo "Deploying commit ${sha}"
printf 'DEPLOY_SHA=%s\n' "$sha" >> "$GITHUB_ENV"
run: python3 scripts/ci/workflows/restart_gateway.py --step record_deploy_commit
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -67,12 +61,7 @@ jobs:
DOCKER_BUILD_RECORD_UPLOAD: false
- name: Install docker-pussh
run: |
set -euo pipefail
mkdir -p ~/.docker/cli-plugins
curl -fsSL https://raw.githubusercontent.com/psviderski/unregistry/v0.3.1/docker-pussh \
-o ~/.docker/cli-plugins/docker-pussh
chmod +x ~/.docker/cli-plugins/docker-pussh
run: python3 scripts/ci/workflows/restart_gateway.py --step install_docker_pussh
- name: Set up SSH agent
uses: webfactory/ssh-agent@v0.9.1
@@ -80,70 +69,10 @@ jobs:
ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY_SERVER }}
- name: Add server to known hosts
run: |
set -euo pipefail
mkdir -p ~/.ssh
ssh-keyscan -H ${{ secrets.SERVER_IP }} >> ~/.ssh/known_hosts
run: python3 scripts/ci/workflows/restart_gateway.py --step add_known_hosts --server-ip ${{ secrets.SERVER_IP }}
- name: Push image and deploy
env:
IMAGE_TAG: ${{ env.IMAGE_NAME }}:${{ env.DEPLOY_SHA }}
SERVER: ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_IP }}
run: |
set -euo pipefail
docker pussh "${IMAGE_TAG}" "${SERVER}"
ssh "${SERVER}" "IMAGE_TAG=${IMAGE_TAG} SERVICE_NAME=${SERVICE_NAME} COMPOSE_STACK=${COMPOSE_STACK} bash" << 'EOF'
set -euo pipefail
sudo mkdir -p "/opt/${SERVICE_NAME}"
sudo chown -R "${USER}:${USER}" "/opt/${SERVICE_NAME}"
cd "/opt/${SERVICE_NAME}"
cat > compose.yaml << COMPOSEEOF
services:
app:
image: ${IMAGE_TAG}
hostname: "{{.Node.Hostname}}-{{.Task.Slot}}"
env_file:
- /etc/fluxer/fluxer.env
environment:
- API_HOST=fluxer-api_app:8080
- API_CANARY_HOST=fluxer-api-canary_app:8080
- RELEASE_NODE=fluxer_gateway@{{.Node.Hostname}}-{{.Task.Slot}}
- LOGGER_LEVEL=info
- VAPID_PUBLIC_KEY=BEIwQxIwfj6m90tLYAR0AU_GJWU4kw8J_zJcHQG55NCUWSyRy-dzMOgvxk8yEDwdVyJZa6xUL4fmwngijq8T2pY
- FLUXER_METRICS_HOST=fluxer-metrics_app:8080
- MEDIA_PROXY_ENDPOINT=https://fluxerusercontent.com
deploy:
replicas: 1
endpoint_mode: dnsrr
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
update_config:
parallelism: 1
delay: 10s
order: start-first
rollback_config:
parallelism: 1
delay: 10s
labels:
- 'caddy_gw=gateway.fluxer.app'
- 'caddy_gw.reverse_proxy={{upstreams 8080}}'
networks:
- fluxer-shared
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:8080/_health']
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
networks:
fluxer-shared:
external: true
COMPOSEEOF
docker stack deploy --with-registry-auth --detach=false --resolve-image never -c compose.yaml "${COMPOSE_STACK}"
EOF
run: python3 scripts/ci/workflows/restart_gateway.py --step push_and_deploy

.github/workflows/sync-desktop.yaml vendored Normal file

@@ -0,0 +1,102 @@
name: sync desktop
on:
push:
branches:
- main
- canary
paths:
- 'fluxer_desktop/**'
workflow_dispatch:
inputs:
branch:
description: Branch to sync (main or canary)
required: false
default: ''
type: string
concurrency:
group: sync-desktop-${{ github.ref_name }}
cancel-in-progress: true
permissions:
contents: read
jobs:
sync:
name: Sync to fluxerapp/fluxer_desktop
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
steps:
- name: Checkout CI scripts
uses: actions/checkout@v6
with:
sparse-checkout: scripts/ci
sparse-checkout-cone-mode: false
- name: Create GitHub App token
id: app-token
uses: actions/create-github-app-token@v2
with:
app-id: ${{ secrets.SYNC_APP_ID }}
private-key: ${{ secrets.SYNC_APP_PRIVATE_KEY }}
owner: fluxerapp
repositories: fluxer_desktop
- name: Get GitHub App user ID
id: get-user-id
run: >-
python3 scripts/ci/workflows/sync_desktop.py
--step get_user_id
--app-slug "${{ steps.app-token.outputs.app-slug }}"
env:
GH_TOKEN: ${{ steps.app-token.outputs.token }}
- name: Checkout source repository
uses: actions/checkout@v6
with:
path: source
fetch-depth: 1
- name: Determine target branch
id: branch
run: >-
python3 scripts/ci/workflows/sync_desktop.py
--step determine_branch
--input-branch "${{ inputs.branch }}"
--ref-name "${{ github.ref_name }}"
- name: Clone target repository
run: >-
python3 scripts/ci/workflows/sync_desktop.py
--step clone_target
--token "${{ steps.app-token.outputs.token }}"
- name: Configure git
run: >-
python3 scripts/ci/workflows/sync_desktop.py
--step configure_git
--app-slug "${{ steps.app-token.outputs.app-slug }}"
--user-id "${{ steps.get-user-id.outputs.user-id }}"
- name: Checkout or create target branch
run: >-
python3 scripts/ci/workflows/sync_desktop.py
--step checkout_or_create_branch
--branch-name "${{ steps.branch.outputs.name }}"
- name: Sync files
run: python3 scripts/ci/workflows/sync_desktop.py --step sync_files
- name: Commit and push
run: >-
python3 scripts/ci/workflows/sync_desktop.py
--step commit_and_push
--branch-name "${{ steps.branch.outputs.name }}"
- name: Summary
run: >-
python3 scripts/ci/workflows/sync_desktop.py
--step summary
--branch-name "${{ steps.branch.outputs.name }}"


@@ -15,7 +15,7 @@ concurrency:
jobs:
push:
runs-on: ubuntu-latest
timeout-minutes: 10
timeout-minutes: 25
permissions:
contents: read
env:
@@ -33,27 +33,10 @@ jobs:
fetch-depth: 0
- name: Install rclone
run: |
set -euo pipefail
if ! command -v rclone >/dev/null 2>&1; then
curl -fsSL https://rclone.org/install.sh | sudo bash
fi
run: python3 scripts/ci/workflows/sync_static.py --step install_rclone
- name: Push repo contents to bucket
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
run: |
set -euo pipefail
mkdir -p ~/.config/rclone
cat > ~/.config/rclone/rclone.conf << RCLONEEOF
[ovh]
type = s3
provider = Other
env_auth = true
endpoint = $RCLONE_ENDPOINT
acl = private
RCLONEEOF
mkdir -p "$RCLONE_SOURCE_DIR"
rclone sync "$RCLONE_SOURCE" "$RCLONE_REMOTE:$RCLONE_BUCKET" --create-empty-src-dirs --exclude "assets/**"
run: python3 scripts/ci/workflows/sync_static.py --step push


@@ -15,7 +15,7 @@ permissions:
jobs:
test-backup:
name: Test latest Cassandra backup
runs-on: blacksmith-2vcpu-ubuntu-2404
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 45
env:
@@ -32,275 +32,59 @@ jobs:
uses: actions/checkout@v6
- name: Set temp paths
run: |
set -euo pipefail
: "${RUNNER_TEMP:?RUNNER_TEMP is not set}"
echo "WORKDIR=$RUNNER_TEMP/cassandra-restore-test" >> "$GITHUB_ENV"
run: >-
python3 scripts/ci/workflows/test_cassandra_backup.py
--step set_temp_paths
- name: Pre-clean
run: |
set -euo pipefail
docker rm -f "${CASS_CONTAINER}" "${UTIL_CONTAINER}" 2>/dev/null || true
docker volume rm "${CASS_VOLUME}" 2>/dev/null || true
docker volume rm "${BACKUP_VOLUME}" 2>/dev/null || true
rm -rf "${WORKDIR}" 2>/dev/null || true
run: >-
python3 scripts/ci/workflows/test_cassandra_backup.py
--step pre_clean
- name: Install tools
run: |
set -euo pipefail
sudo apt-get update -y
sudo apt-get install -y --no-install-recommends rclone age ca-certificates
run: >-
python3 scripts/ci/workflows/test_cassandra_backup.py
--step install_tools
- name: Find latest backup, validate freshness, download, decrypt, extract into Docker volume
env:
B2_KEY_ID: ${{ secrets.B2_KEY_ID }}
B2_APPLICATION_KEY: ${{ secrets.B2_APPLICATION_KEY }}
AGE_PRIVATE_KEY: ${{ secrets.CASSANDRA_AGE_PRIVATE_KEY }}
run: |
set -euo pipefail
rm -rf "$WORKDIR"
mkdir -p "$WORKDIR"
export RCLONE_CONFIG_B2S3_TYPE=s3
export RCLONE_CONFIG_B2S3_PROVIDER=Other
export RCLONE_CONFIG_B2S3_ACCESS_KEY_ID="${B2_KEY_ID}"
export RCLONE_CONFIG_B2S3_SECRET_ACCESS_KEY="${B2_APPLICATION_KEY}"
export RCLONE_CONFIG_B2S3_ENDPOINT="https://s3.eu-central-003.backblazeb2.com"
export RCLONE_CONFIG_B2S3_REGION="eu-central-003"
export RCLONE_CONFIG_B2S3_FORCE_PATH_STYLE=true
LATEST_BACKUP="$(
rclone lsf "B2S3:fluxer" --recursive --files-only --fast-list \
| grep -E '(^|/)cassandra-backup-[0-9]{8}-[0-9]{6}\.tar\.age$' \
| sort -r \
| head -n 1
)"
if [ -z "${LATEST_BACKUP}" ]; then
echo "Error: No backup found in bucket"
exit 1
fi
echo "LATEST_BACKUP=${LATEST_BACKUP}" >> "$GITHUB_ENV"
base="$(basename "${LATEST_BACKUP}")"
ts="${base#cassandra-backup-}"
ts="${ts%.tar.age}"
if ! [[ "$ts" =~ ^[0-9]{8}-[0-9]{6}$ ]]; then
echo "Error: Could not extract timestamp from backup filename: ${base}"
exit 1
fi
BACKUP_EPOCH="$(date -u -d "${ts:0:8} ${ts:9:2}:${ts:11:2}:${ts:13:2}" +%s)"
CURRENT_EPOCH="$(date -u +%s)"
AGE_HOURS=$(( (CURRENT_EPOCH - BACKUP_EPOCH) / 3600 ))
echo "Backup age: ${AGE_HOURS} hours"
if [ "${AGE_HOURS}" -ge 3 ]; then
echo "Error: Latest backup is ${AGE_HOURS} hours old (threshold: 3 hours)"
exit 1
fi
rclone copyto "B2S3:fluxer/${LATEST_BACKUP}" "${WORKDIR}/backup.tar.age" --fast-list
umask 077
printf '%s' "${AGE_PRIVATE_KEY}" > "${WORKDIR}/age.key"
docker volume create "${BACKUP_VOLUME}"
age -d -i "${WORKDIR}/age.key" "${WORKDIR}/backup.tar.age" \
| docker run --rm -i \
-v "${BACKUP_VOLUME}:/backup" \
--entrypoint bash \
"${CASSANDRA_IMAGE}" -lc '
set -euo pipefail
rm -rf /backup/*
mkdir -p /backup/_tmp
tar -C /backup/_tmp -xf -
top="$(find /backup/_tmp -maxdepth 1 -mindepth 1 -type d -name "cassandra-backup-*" | head -n 1 || true)"
if [ -n "$top" ] && [ -f "$top/schema.cql" ]; then
cp -a "$top"/. /backup/
elif [ -f /backup/_tmp/schema.cql ]; then
cp -a /backup/_tmp/. /backup/
else
echo "Error: schema.cql not found after extraction"
find /backup/_tmp -maxdepth 3 -type f -print | sed -n "1,80p" || true
exit 1
fi
rm -rf /backup/_tmp
'
docker run --rm \
-v "${BACKUP_VOLUME}:/backup:ro" \
--entrypoint bash \
"${CASSANDRA_IMAGE}" -lc '
set -euo pipefail
test -f /backup/schema.cql
echo "Extracted backup layout (top 3 levels):"
find /backup -maxdepth 3 -type d -print | sed -n "1,200p" || true
echo "Sample SSTables (*Data.db):"
find /backup -type f -name "*Data.db" | sed -n "1,30p" || true
'
run: >-
python3 scripts/ci/workflows/test_cassandra_backup.py
--step fetch_backup
- name: Create data volume
run: |
set -euo pipefail
docker volume create "${CASS_VOLUME}"
run: >-
python3 scripts/ci/workflows/test_cassandra_backup.py
--step create_data_volume
- name: Restore keyspaces into volume and promote snapshot SSTables
run: |
set -euo pipefail
docker run --rm \
--name "${UTIL_CONTAINER}" \
-v "${CASS_VOLUME}:/var/lib/cassandra" \
-v "${BACKUP_VOLUME}:/backup:ro" \
--entrypoint bash \
"${CASSANDRA_IMAGE}" -lc '
set -euo pipefail
shopt -s nullglob
BASE=/var/lib/cassandra
DATA_DIR="$BASE/data"
mkdir -p "$DATA_DIR" "$BASE/commitlog" "$BASE/hints" "$BASE/saved_caches"
ROOT=/backup
if [ -d "$ROOT/cassandra_data" ]; then ROOT="$ROOT/cassandra_data"; fi
if [ -d "$ROOT/data" ]; then ROOT="$ROOT/data"; fi
echo "Using backup ROOT=$ROOT"
echo "Restoring into DATA_DIR=$DATA_DIR"
restored=0
for keyspace_dir in "$ROOT"/*/; do
[ -d "$keyspace_dir" ] || continue
ks="$(basename "$keyspace_dir")"
if [ "$ks" = "system_schema" ] || ! [[ "$ks" =~ ^system ]]; then
echo "Restoring keyspace: $ks"
rm -rf "$DATA_DIR/$ks"
cp -a "$keyspace_dir" "$DATA_DIR/"
restored=$((restored + 1))
fi
done
if [ "$restored" -le 0 ]; then
echo "Error: No keyspaces restored from backup root: $ROOT"
echo "Debug: listing $ROOT:"
ls -la "$ROOT" || true
find "$ROOT" -maxdepth 2 -type d -print | sed -n "1,100p" || true
exit 1
fi
promoted=0
for ks_dir in "$DATA_DIR"/*/; do
[ -d "$ks_dir" ] || continue
ks="$(basename "$ks_dir")"
if [ "$ks" != "system_schema" ] && [[ "$ks" =~ ^system ]]; then
continue
fi
for table_dir in "$ks_dir"*/; do
[ -d "$table_dir" ] || continue
snap_root="$table_dir/snapshots"
[ -d "$snap_root" ] || continue
latest_snap="$(ls -1d "$snap_root"/*/ 2>/dev/null | sort -r | head -n 1 || true)"
[ -n "$latest_snap" ] || continue
files=( "$latest_snap"* )
if [ "${#files[@]}" -gt 0 ]; then
cp -av "${files[@]}" "$table_dir"
promoted=$((promoted + $(ls -1 "$latest_snap"/*Data.db 2>/dev/null | wc -l || true)))
fi
done
done
chown -R cassandra:cassandra "$BASE"
echo "Promoted Data.db files: $promoted"
if [ "$promoted" -le 0 ]; then
echo "Error: No *Data.db files were promoted out of snapshots"
echo "Debug: first snapshot dirs found:"
find "$DATA_DIR" -type d -path "*/snapshots/*" | sed -n "1,50p" || true
exit 1
fi
'
run: >-
python3 scripts/ci/workflows/test_cassandra_backup.py
--step restore_keyspaces
- name: Start Cassandra
run: |
set -euo pipefail
docker run -d \
--name "${CASS_CONTAINER}" \
-v "${CASS_VOLUME}:/var/lib/cassandra" \
-e MAX_HEAP_SIZE="${MAX_HEAP_SIZE}" \
-e HEAP_NEWSIZE="${HEAP_NEWSIZE}" \
-e JVM_OPTS="-Dcassandra.disable_mlock=true" \
"${CASSANDRA_IMAGE}"
for i in $(seq 1 150); do
status="$(docker inspect -f '{{.State.Status}}' "${CASS_CONTAINER}" 2>/dev/null || true)"
if [ "${status}" != "running" ]; then
docker inspect "${CASS_CONTAINER}" --format 'ExitCode={{.State.ExitCode}} OOMKilled={{.State.OOMKilled}} Error={{.State.Error}}' || true
docker logs --tail 300 "${CASS_CONTAINER}" || true
exit 1
fi
if docker exec "${CASS_CONTAINER}" cqlsh -e "SELECT now() FROM system.local;" >/dev/null 2>&1; then
break
fi
sleep 2
done
docker exec "${CASS_CONTAINER}" cqlsh -e "SELECT now() FROM system.local;" >/dev/null 2>&1
run: >-
python3 scripts/ci/workflows/test_cassandra_backup.py
--step start_cassandra
- name: Verify data
run: |
set -euo pipefail
USER_COUNT=""
for i in $(seq 1 20); do
USER_COUNT="$(
docker exec "${CASS_CONTAINER}" cqlsh -e "SELECT COUNT(*) FROM fluxer.users;" 2>/dev/null \
| awk "/^[[:space:]]*[0-9]+[[:space:]]*$/ {print \$1; exit}" || true
)"
if [ -n "${USER_COUNT}" ]; then
break
fi
sleep 2
done
if [ -n "${USER_COUNT}" ] && [ "${USER_COUNT}" -gt 0 ] 2>/dev/null; then
echo "Backup restore verification passed"
else
echo "Backup restore verification failed"
docker logs --tail 300 "${CASS_CONTAINER}" || true
exit 1
fi
run: >-
python3 scripts/ci/workflows/test_cassandra_backup.py
--step verify_data
- name: Cleanup
if: always()
run: |
set -euo pipefail
docker rm -f "${CASS_CONTAINER}" 2>/dev/null || true
docker volume rm "${CASS_VOLUME}" 2>/dev/null || true
docker volume rm "${BACKUP_VOLUME}" 2>/dev/null || true
rm -rf "${WORKDIR}" 2>/dev/null || true
run: >-
python3 scripts/ci/workflows/test_cassandra_backup.py
--step cleanup
- name: Report status
if: always()
run: |
set -euo pipefail
LATEST_BACKUP_NAME="${LATEST_BACKUP:-unknown}"
if [ "${{ job.status }}" = "success" ]; then
echo "Backup ${LATEST_BACKUP_NAME} is valid and restorable"
else
echo "Backup ${LATEST_BACKUP_NAME} test failed"
fi
env:
JOB_STATUS: ${{ job.status }}
run: >-
python3 scripts/ci/workflows/test_cassandra_backup.py
--step report_status
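The freshness check shown in the removed shell (parse the `cassandra-backup-YYYYMMDD-HHMMSS.tar.age` name, compute its age, fail at the 3-hour threshold) now lives in `test_cassandra_backup.py`. A sketch of how that logic translates to Python, under the assumption that the script mirrors the shell arithmetic (function names here are hypothetical):

```python
import re
from datetime import datetime, timezone

# Matches the backup naming scheme used by the bucket listing above.
BACKUP_RE = re.compile(r"cassandra-backup-(\d{8})-(\d{6})\.tar\.age$")


def backup_age_hours(filename: str, now: datetime) -> int:
    """Parse the UTC timestamp embedded in the backup name and return its
    age in whole hours."""
    m = BACKUP_RE.search(filename)
    if not m:
        raise ValueError(f"cannot extract timestamp from {filename!r}")
    ts = datetime.strptime(m.group(1) + m.group(2), "%Y%m%d%H%M%S")
    ts = ts.replace(tzinfo=timezone.utc)
    return int((now - ts).total_seconds() // 3600)


def check_freshness(filename: str, now: datetime, threshold_hours: int = 3) -> None:
    """Fail the step when the latest backup is at or past the threshold."""
    age = backup_age_hours(filename, now)
    if age >= threshold_hours:
        raise SystemExit(f"latest backup is {age} hours old (threshold: {threshold_hours})")
```

Compared with the `date -u -d` string slicing in the shell version, `strptime` makes the parse failure mode explicit.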


@@ -7,8 +7,8 @@ on:
jobs:
update-word-lists:
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 10
runs-on: blacksmith-8vcpu-ubuntu-2404
timeout-minutes: 25
permissions:
contents: write
pull-requests: write
@@ -20,31 +20,15 @@ jobs:
ref: canary
- name: Download latest word lists
run: |
set -euo pipefail
curl -fsSL https://raw.githubusercontent.com/tailscale/tailscale/refs/heads/main/words/scales.txt -o /tmp/scales.txt
curl -fsSL https://raw.githubusercontent.com/tailscale/tailscale/refs/heads/main/words/tails.txt -o /tmp/tails.txt
run: python3 scripts/ci/workflows/update_word_lists.py --step download
- name: Check for changes
id: check_changes
run: |
set -euo pipefail
# Compare the downloaded files with the existing ones
if ! diff -q /tmp/scales.txt fluxer_api/src/words/scales.txt > /dev/null 2>&1 || \
! diff -q /tmp/tails.txt fluxer_api/src/words/tails.txt > /dev/null 2>&1; then
printf 'changes_detected=true\n' >> "$GITHUB_OUTPUT"
echo "Changes detected in word lists"
else
printf 'changes_detected=false\n' >> "$GITHUB_OUTPUT"
echo "No changes detected in word lists"
fi
run: python3 scripts/ci/workflows/update_word_lists.py --step check_changes
- name: Update word lists
if: steps.check_changes.outputs.changes_detected == 'true'
run: |
set -euo pipefail
cp /tmp/scales.txt fluxer_api/src/words/scales.txt
cp /tmp/tails.txt fluxer_api/src/words/tails.txt
run: python3 scripts/ci/workflows/update_word_lists.py --step update
- name: Create pull request for updated word lists
if: steps.check_changes.outputs.changes_detected == 'true'
@@ -70,4 +54,4 @@ jobs:
- name: No changes detected
if: steps.check_changes.outputs.changes_detected == 'false'
run: echo "Word lists are already up to date."
run: python3 scripts/ci/workflows/update_word_lists.py --step no_changes

.gitignore vendored

@@ -1,93 +1,95 @@
# Build artifacts
**/_build
**/_checkouts
**/_vendor
**/.astro/
**/coverage
**/dist
**/generated
**/target
**/ebin
**/certificates
/fluxer_admin/build
/fluxer_marketing/build
# Caches & editor metadata
**/.cache
**/.*cache
**/.pnpm-store
**/.swc
**/.DS_Store
**/Thumbs.db
**/.idea
**/.vscode
# Environment and credentials
**/.env
**/.env.local
**/.env.*.local
**/.dev.vars
**/.erlang.cookie
**/.eunit
**/.rebar
**/.rebar3
**/fluxer.env
**/secrets.env
/dev/fluxer.env
# Logs, temporary files, and binaries
*.tsbuildinfo
**/*.beam
**/*.css.d.ts
**/*.dump
**/dump.rdb
**/*.iml
**/*.log
**/*.o
**/*.plt
**/*.source
**/*.swo
**/*.swp
**/*.tmp
**/*~
**/log
**/logs
**/npm-debug.log*
**/pnpm-debug.log*
**/yarn-debug.log*
**/yarn-error.log*
**/rebar3.crashdump
**/erl_crash.dump
## Dependencies
**/node_modules
# Framework & tooling buckets
**/.*cache
**/.cache
**/__pycache__
**/.dev-runner/
**/.devenv
.devenv.flake.nix
devenv.local.nix
**/.direnv
/dev/livekit.yaml
/dev/bluesky_oauth_key.pem
/dev/meilisearch_master_key
/dev/data/
**/.dev.vars
**/.DS_Store
**/.env
**/.env.*.local
**/.env.local
**/.erlang.cookie
**/.eunit
**/.idea
**/.next
**/.next/cache
**/.vercel
**/out
**/.pnp
**/.pnp.js
*.tsbuildinfo
next-env.d.ts
# Source files we never want tracked
**/.pnpm-store
**/.rebar
**/.rebar3
**/.source
**/*.source
# Project-specific artifacts
/fluxer_admin/priv/static/app.css
**/.swc
**/.turbo
**/.vercel
**/_build
**/_checkouts
**/_vendor
**/certificates
**/coverage
**/dist
**/ebin
**/erl_crash.dump
**/fluxer.env
**/generated
**/log
**/logs
**/node_modules
**/npm-debug.log*
**/out
**/pnpm-debug.log*
**/rebar3.crashdump
**/secrets.env
**/target
**/test-results.json
**/Thumbs.db
**/yarn-debug.log*
**/yarn-error.log*
/.devserver-cache.json
**/.devserver-cache.json
/.fluxer/
/config/config.json
/fluxer_app/src/assets/emoji-sprites/
/fluxer_app/src/locales/*/messages.js
/fluxer_app/src/locales/*/messages.mjs
/fluxer_gateway/config/sys.config
/fluxer_gateway/config/vm.args
/fluxer_marketing/priv/static/app.css
/fluxer_marketing/priv/locales
geoip_data
livekit.yaml
fluxer.yaml
# Generated CSS type definitions
**/*.css.d.ts
# Generated UI components
/fluxer_app/src/components/uikit/AvatarStatusGeometry.ts
/fluxer_app/src/components/uikit/SVGMasks.tsx
/fluxer_app/src/locales/*/messages.js
/fluxer_app/src/locales/*/messages.mjs
/fluxer_app/src/locales/*/messages.ts
/fluxer_admin/public/static/app.css
/fluxer_gateway/config/sys.config
/fluxer_gateway/config/vm.args
/fluxer_marketing/public/static/app.css
/fluxer_server/data/
/packages/admin/public/static/app.css
/packages/marketing/public/static/app.css
/packages/config/src/ConfigSchema.json
/packages/config/src/MasterZodSchema.generated.tsx
AGENTS.md
CLAUDE.md
fluxer.yaml
GEMINI.md
geoip_data
next-env.d.ts
.github/agents
.github/prompts

.ignore

@@ -1,85 +1,73 @@
# Build artifact directories
**/_build
**/_checkouts
**/_vendor
**/.astro/
**/build
!fluxer_app/scripts/build
**/coverage
**/dist
fluxer_app/dist/
**/generated
**/target
**/ebin
**/certificates
# Dependency directories
**/node_modules
**/.pnpm-store
# Framework & tooling buckets
**/.next
**/.next/cache
**/.vercel
**/out
**/.pnp
**/.pnp.js
*.tsbuildinfo
next-env.d.ts
# Cache directories
**/.cache
**/.*cache
**/.swc
# Logs and temporary files
**/*.beam
**/*.css.d.ts
**/*.dump
**/dump.rdb
**/*.iml
**/*.lock
**/*.log
**/*.o
**/*.plt
**/*.source
**/*.swo
**/*.swp
**/*.tmp
**/*~
**/*.lock
**/log
**/logs
**/npm-debug.log*
**/pnpm-debug.log*
**/yarn-debug.log*
**/yarn-error.log*
**/rebar3.crashdump
**/erl_crash.dump
# Source files we never want tracked
**/.source
**/*.source
# Environment files
**/.env
**/.env.local
**/.env.*.local
**/.*cache
**/.cache
**/__pycache__
**/.dev.vars
**/.direnv
.devenv.flake.nix
**/.env
**/.env.*.local
**/.env.local
**/.erlang.cookie
**/.eunit
**/.next
**/.next/cache
**/.pnp
**/.pnp.js
**/.pnpm-store
**/.rebar
**/.rebar3
**/.source
**/.swc
**/.turbo
**/.vercel
**/_build
**/_checkouts
**/_vendor
**/build
**/certificates
**/coverage
**/dist
**/ebin
**/erl_crash.dump
**/fluxer.env
**/generated
**/log
**/logs
**/node_modules
**/npm-debug.log*
**/out
**/pnpm-debug.log*
**/rebar3.crashdump
**/secrets.env
/dev/fluxer.env
# Project-specific artifacts
**/target
**/yarn-debug.log*
**/yarn-error.log*
/.fluxer/
/fluxer_app/src/assets/emoji-sprites/
/fluxer_app/src/locales/*/messages.js
/fluxer_admin/priv/static/app.css
/fluxer_marketing/priv/static/app.css
app.css
/fluxer_admin/public/static/app.css
fluxer.yaml
fluxer_app/dist/
/fluxer_marketing/public/static/app.css
/fluxer_server/data/
fluxer_static
geoip_data
livekit.yaml
fluxer.yaml
# Generated CSS type definitions
**/*.css.d.ts
next-env.d.ts
/packages/marketing/public/static/app.css


@@ -1,2 +1,2 @@
[lfs]
url = https://github.com/fluxerapp/fluxer.git/info/lfs
url = https://git.i5.wtf/fluxerapp/fluxer.git/info/lfs

.npmrc Normal file

@@ -0,0 +1 @@
update-notifier=false

.nvmrc Normal file

@@ -0,0 +1 @@
24


@@ -1,25 +1,22 @@
node_modules
**/node_modules
*.log
**/*.css.d.ts
**/.cache
**/.pnpm-store
**/.swc
**/.turbo
**/node_modules
**/package-lock.json
**/pnpm-lock.yaml
.fluxer/
fluxer_app/dist
fluxer_app/src/assets/emoji-sprites
fluxer_app/src/locales/*/messages.js
fluxer_app/pkgs/libfluxcore
fluxer_app/pkgs/libfluxcore/**
fluxer_app/proxy/assets
fluxer_app/src/assets/emoji-sprites
fluxer_app/src/locales/*/messages.js
fluxer_app_proxy/assets
fluxer_gateway/_build
fluxer_marketing/build
fluxer_docs/.next
fluxer_docs/.next/cache
fluxer_docs/out
fluxer_docs/.vercel
fluxer_docs/.cache
fluxer_docs/coverage
fluxer_static/**
dev/geoip_data
dev/livekit.yaml
dev/fluxer.yaml
*.log
**/*.css.d.ts
node_modules
package-lock.json
pnpm-lock.yaml

.vscode/extensions.json vendored Normal file

@@ -0,0 +1,10 @@
{
"recommendations": [
"TypeScriptTeam.native-preview",
"biomejs.biome",
"clinyong.vscode-css-modules",
"pgourlain.erlang",
"golang.go",
"rust-lang.rust-analyzer"
]
}

.vscode/launch.json vendored Normal file

@@ -0,0 +1,84 @@
{
"version": "0.2.0",
"configurations": [
{
"type": "node",
"request": "launch",
"name": "Debug: fluxer_server",
"program": "${workspaceFolder}/fluxer_server/src/startServer.tsx",
"runtimeArgs": ["--import", "tsx"],
"cwd": "${workspaceFolder}/fluxer_server",
"env": {
"FLUXER_CONFIG": "${workspaceFolder}/config/config.json",
"FLUXER_DATABASE": "sqlite"
},
"console": "integratedTerminal",
"skipFiles": ["<node_internals>/**", "**/node_modules/**"]
},
{
"type": "node",
"request": "launch",
"name": "Debug: fluxer_api (standalone)",
"program": "${workspaceFolder}/fluxer_api/src/AppEntrypoint.tsx",
"runtimeArgs": ["--import", "tsx"],
"cwd": "${workspaceFolder}/fluxer_api",
"env": {
"FLUXER_CONFIG": "${workspaceFolder}/config/config.json",
"FLUXER_DATABASE": "sqlite"
},
"console": "integratedTerminal",
"skipFiles": ["<node_internals>/**", "**/node_modules/**"]
},
{
"type": "node",
"request": "launch",
"name": "Debug: fluxer_marketing",
"program": "${workspaceFolder}/fluxer_marketing/src/index.tsx",
"runtimeArgs": ["--import", "tsx"],
"cwd": "${workspaceFolder}/fluxer_marketing",
"env": {
"FLUXER_CONFIG": "${workspaceFolder}/config/config.json"
},
"console": "integratedTerminal",
"skipFiles": ["<node_internals>/**", "**/node_modules/**"]
},
{
"type": "node",
"request": "launch",
"name": "Debug: fluxer_app (DevServer)",
"program": "${workspaceFolder}/fluxer_app/scripts/DevServer.tsx",
"runtimeArgs": ["--import", "tsx"],
"cwd": "${workspaceFolder}/fluxer_app",
"env": {
"FLUXER_APP_DEV_PORT": "49427",
"FORCE_COLOR": "1"
},
"console": "integratedTerminal",
"skipFiles": ["<node_internals>/**", "**/node_modules/**"]
},
{
"type": "node",
"request": "launch",
"name": "Debug: Test Current File",
"program": "${workspaceFolder}/node_modules/vitest/vitest.mjs",
"args": ["run", "--no-coverage", "${relativeFile}"],
"autoAttachChildProcesses": true,
"console": "integratedTerminal",
"skipFiles": ["<node_internals>/**", "**/node_modules/**"]
},
{
"type": "node",
"request": "attach",
"name": "Attach to Node Process",
"port": 9229,
"restart": true,
"skipFiles": ["<node_internals>/**", "**/node_modules/**"]
}
],
"compounds": [
{
"name": "Debug: Server + App",
"configurations": ["Debug: fluxer_server", "Debug: fluxer_app (DevServer)"]
}
]
}

.vscode/settings.json vendored Normal file

@@ -0,0 +1,5 @@
{
"typescript.preferences.includePackageJsonAutoImports": "auto",
"typescript.suggest.autoImports": true,
"typescript.experimental.useTsgo": true
}


@@ -1,12 +1,12 @@
# Contributing to fluxerapp/fluxer
# Contributing to Fluxer
Thanks for contributing. This document explains how we work so your changes can land smoothly, and so nobody wastes time on work we cannot merge.
Thanks for contributing. This document explains how we work so your changes can land smoothly and nobody wastes time on work we can't merge.
## Quick rules (please read)
### 1) All PRs must target `canary`
`canary` is our trunk branch. Open all pull requests against `canary`. PRs targeting other branches will be closed or you will be asked to retarget.
`canary` is our trunk branch. Open all pull requests against `canary`. PRs targeting other branches will be closed or retargeted.
### 2) All PRs must include a short description
@@ -16,24 +16,32 @@ Every PR must include a short description covering:
- why it changed
- anything reviewers should pay attention to
A few bullets is perfect.
A few bullets is fine.
### 3) Coordinate before starting larger work
### 3) Open an issue before submitting a PR
If you are planning anything beyond a small, obvious fix (new feature, meaningful refactor, new dependency, new API surface, behavior changes), coordinate with the maintainers first.
We strongly prefer that every PR addresses an existing issue. If one doesn't exist yet, open one describing the problem or improvement and your proposed approach. This gives maintainers a chance to weigh in on direction before you invest time, and avoids the mutual displeasure of:
This avoids the mutual displeasure of:
- you doing significant work, and
- us having to reject or postpone the change because it doesn't align with current goals, or because we aren't ready to maintain what it introduces
- you investing significant time, and
- us having to reject or postpone the change because it does not align with current goals, or because we are not ready to maintain what it introduces
For small, obvious fixes (typos, broken links, trivial one-liners) you can skip the issue and go straight to a PR.
Ways to coordinate:
Ways to coordinate on larger work:
- open an issue describing the problem and your proposed approach
- open a draft PR early to confirm direction
- discuss with a maintainer in any channel you already share
If you are unsure whether something counts as "larger work", ask first.
If you're unsure whether something needs an issue first, it probably does.
### 4) Understand the code you submit
You must have sufficient understanding of every change in your PR to explain it and defend it during review. You don't need to write an essay, but you should be able to give a short summary of what the patch does and why it's correct.
**LLM-assisted contributions.** You're welcome to use LLMs as a tool for automating mechanical work. We don't ask you to disclose this, since we assume you're acting in good faith: you're the one who signs off on the patch you submit in your own name, and you have the technical understanding to verify that it's accurate.
That said, don't use LLMs on areas of the codebase you don't understand well enough to verify the output. If part of your change touches code you aren't confident reviewing yourself, say so in the issue you opened beforehand and defer that work to someone else. The maintainers will be happy to help.
## Workflow
@@ -48,10 +56,10 @@ We strongly prefer small, focused PRs that are easy to review.
### Commit style and history
We squash-merge PRs, and the PR title becomes the single commit message on `canary`. For that reason:
We squash-merge PRs, so the PR title becomes the single commit message on `canary`. For that reason:
- PR titles must follow Conventional Commits.
- Individual commits inside the PR do not need to follow Conventional Commits.
- Individual commits inside the PR don't need to follow Conventional Commits.
If you like to commit in small increments, feel free. If you prefer a tidier PR history, force-pushes are welcome (for example, to squash or reorder commits before review). Just avoid rewriting history in a way that makes it hard for reviewers to follow along.
@@ -87,17 +95,14 @@ We care about confidence more than ceremony. Add tests when they provide real va
### Backend changes
For backend changes, we suggest adding an integration or unit test.
For backend changes, add a unit test.
- If a unit test would require heavy mocking to be meaningful, either:
- restructure the code so it can be tested without excessive mocking, or
- prefer an integration test if restructuring is not practical
- If you are unsure which route is best, discuss it with a maintainer before investing time.
- If a unit test would require heavy mocking to be meaningful, restructure the code so it can be tested cleanly through its interfaces.
- If you're unsure how to approach this, discuss it with a maintainer before investing time.
### Frontend changes
We generally do not encourage new unit tests for frontend code unless:
We don't generally encourage new unit tests for frontend code unless:
- the area already has unit tests, or
- the change is complex or sensitive, and a unit test clearly reduces risk
@@ -106,31 +111,9 @@ In most cases, clear PR notes and practical verification are more valuable.
## Formatting and linting
Don't block on formatting or linting before opening a PR. CI enforces required checks and will tell you what needs fixing before merge.
Open the PR when it's ready for review, then iterate based on CI and feedback.
## PR checklist
Before requesting review:
- [ ] PR targets `canary`
- [ ] PR title follows Conventional Commits (mostly lowercase)
- [ ] PR includes a short description of what/why
- [ ] You understand every change in the PR and can explain it during review
- [ ] Tests added or updated where it makes sense (especially backend changes)
- [ ] CLA signed (the bot will guide you)
- [ ] CI is green (or you're actively addressing failures)
Optional but helpful:
Optional but helpful:
## Code of Conduct
This project follows a Code of Conduct. By participating, you're expected to uphold it:
- See [`CODE_OF_CONDUCT.md`](./CODE_OF_CONDUCT.md)
## Security
Please don't report security issues via public GitHub issues.
Use our security policy and reporting instructions here:

LICENSING.md
# Licensing
Fluxer is licensed under the **GNU Affero General Public License v3.0 (AGPLv3)**. See [`LICENSE`](./LICENSE).
AGPLv3 is a strong copyleft licence designed to keep improvements available to the community, including when the software is used over a network.
## Self-hosting: fully unlocked
If you self-host Fluxer on your own hardware, all features are available by default. We don't charge to unlock functionality, remove limits, or increase instance caps for deployments you run yourself.
If Fluxer is useful to you, please consider [donating to support development](https://fluxer.app/donate).
## Commercial licensing
Some organisations can't use AGPLv3 due to policy or compliance requirements, or because they don't want to take on AGPL obligations for private modifications.
In these cases, Fluxer Platform AB can offer Fluxer under a separate commercial licence (sometimes called dual licensing). This is the same software, but the commercial terms remove AGPLv3's copyleft obligations for internal deployments.
Fluxer remains AGPLv3 and publicly available. The only difference is your obligations for private modifications. Under the commercial licence, you may keep internal modifications private rather than being required to publish them solely because you run the modified software.
A core requirement of the commercial licence is internal use only. You may not redistribute a modified version (or your modifications) to third parties under the commercial licence.
If you want to share changes, you can upstream them to this repository under Fluxer's AGPLv3 licence. The commercial licence makes upstreaming optional rather than required, but it doesn't grant permission to distribute modifications under any other licence.
To request a commercial licence, email [support@fluxer.app](mailto:support@fluxer.app) and include your employee count so we can provide an initial estimate. Commercial licences are offered at a custom price point.
## Contributor License Agreement
Code contributions require a signed contributor licence agreement: see [`CLA.md`](./CLA.md). You will be prompted to sign electronically via CLA Assistant when you open your first pull request.
Our CLA is based on the widely used Harmony Individual CLA. It is intended to be clear and fair:
- You keep ownership of your contribution and can still use it elsewhere.
- You grant Fluxer Platform AB the rights needed to distribute your contribution as part of Fluxer, including a patent licence to reduce patent-related risk for users.
- It includes standard warranty and liability disclaimers that protect contributors.
It also includes an outbound licensing clause. If Fluxer Platform AB relicenses your contribution (including commercially), Fluxer Platform AB will continue to license your contribution under the project licence(s) that applied when you contributed. Signing the CLA doesn't remove Fluxer from the community.
## Our FOSS commitment
Fluxer is committed to remaining 100% FOSS for public development and distribution.
The CLA doesn't change that. It ensures Fluxer Platform AB has the legal permission to offer a commercial licence to organisations that need different terms, while keeping the community version open, fully featured, and AGPLv3-licensed.

NOTES.md
# uwu.lc Self-Hosting Notes
## Branch Setup
**Current branch**: `uwu` (based on `refactor`)
- **Tracks**: `origin/refactor` for rebasing upstream changes
- **Pushes to**: `i5/uwu` on your Gitea instance at git.i5.wtf
- **Current state**: 1 commit ahead (LFS config change)
## Workflow
### Pull upstream changes and rebase
```bash
git fetch origin
git rebase origin/refactor
```
### Push your changes to Gitea
```bash
git push i5 uwu
# If you've rebased, use: git push i5 uwu --force-with-lease
```
### View your changes
```bash
git log origin/refactor..uwu # Show commits you've added
```
## Why track `refactor` branch?
The `refactor` branch is a complete rewrite that:
- Is simpler and lighter to self-host
- Uses SQLite instead of complex database setup
- Removes payment/Plutonium stuff for self-hosted deployments
- Is much better documented
- Is where active development happens
The old `main`/`canary` branches have the legacy stack that's harder to self-host.
## Configuration Changes Made
1. **LFS Config** (`.lfsconfig`): Updated to point to Gitea instance
- Old: `https://github.com/fluxerapp-old/fluxer-private.git/info/lfs`
- New: `https://git.i5.wtf/fluxerapp/fluxer.git/info/lfs`
2. **CI Workflows**: Updated for Gitea compatibility
- Changed all runners from `blacksmith-8vcpu-ubuntu-2404` to `ubuntu-latest`
- `ci.yaml`: Main CI workflow (typecheck, test, gateway, knip, ci-scripts)
- `release-server.yaml`: Docker build workflow
- Registry: `ghcr.io` → `git.i5.wtf`
- Image: `fluxerapp/fluxer-server`
- Trigger branch: `canary` → `uwu`
- Default source ref: `canary` → `uwu`
## Gitea Setup Requirements
### Container Registry Authentication
The workflow tries to use `secrets.GITEA_TOKEN` or `github.token` for registry auth.
**Required**: Create a Gitea Personal Access Token:
1. Go to Gitea Settings → Applications → Generate New Token
2. Name: `CI_Container_Registry`
3. Permissions: Select `package` (write access)
4. Add to repository secrets as `registry_token` (Note: Can't use GITEA_ or GITHUB_ prefix)
**Alternative**: Update the workflow to use username/password:
- Create a secret `REGISTRY_USERNAME` with your Gitea username
- Create a secret `REGISTRY_PASSWORD` with a personal access token
### Container Registry URL Format
Gitea registry format is typically:
- `git.i5.wtf/fluxerapp/fluxer-server:tag`
If the registry requires a different format, check your Gitea container registry settings.
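Assuming the default Gitea registry layout (`host/owner/image:tag`), the full reference can be composed like this; the `docker login`/`tag`/`push` steps are left as comments since they need a valid token:

```shell
# Compose the fully qualified image reference for the Gitea registry.
# Values match this repo's setup; adjust for your instance.
REGISTRY=git.i5.wtf
OWNER=fluxerapp
IMAGE=fluxer-server
TAG=nightly

REF="${REGISTRY}/${OWNER}/${IMAGE}:${TAG}"
echo "$REF"

# With a personal access token that has package:write, the push would be:
#   docker login "$REGISTRY" -u <username> -p <personal-access-token>
#   docker tag fluxer-server:local "$REF"
#   docker push "$REF"
```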
## Docker Build Fixes Applied
Successfully built fluxer-server Docker image! Fixes applied:
1. ✅ Fixed package path (app → app_proxy)
2. ✅ Added Rust/WASM toolchain for frontend
3. ✅ Added ca-certificates
4. ✅ Fixed .dockerignore (locale files, emoji data, build scripts)
5. ✅ Set FLUXER_CONFIG environment variable
6. ✅ Updated ENTRYPOINT to target @fluxer/server
7. ✅ Removed redundant typecheck step
8. ✅ Generated locale files before build (lingui:compile)
9. ✅ Reinstalled dependencies after copying source
10. ✅ Allowed scripts/build directory in .dockerignore
**Built image tags:**
- `git.i5.wtf/fluxerapp/fluxer-server:nightly`
- `git.i5.wtf/fluxerapp/fluxer-server:nightly-20260301`
- `git.i5.wtf/fluxerapp/fluxer-server:sha-2e9010d`
## TODO
- [x] Modify GitHub Actions workflows for Gitea compatibility
- [x] Fix container registry authentication
- [x] Apply patches from third-party guide
- [x] Build Docker image
- [ ] Configure for uwu.lc domain
- [ ] Deploy to production
- [ ] Set up backing services (Valkey, NATS, Meilisearch, LiveKit)
## Resources
- **Third-party self-hosting guide**: https://gist.github.com/PaulMColeman/e7ef82e05035b24300d2ea1954527f10
- Documents 20 gotchas and fixes for building/deploying Fluxer
- Critical for successful Docker build
- Domain: uwu.lc
- Gitea: git.i5.wtf
## Known Build Issues from Third-Party Guide
The guide documents these critical Dockerfile fixes needed:
1. ✅ Fix package path (app → app_proxy)
2. ✅ Add Rust/WASM toolchain (frontend needs WebAssembly)
3. ✅ Add ca-certificates (for rustup HTTPS download)
4. ✅ Fix .dockerignore (unblock build scripts and locale files)
5. ✅ Set FLUXER_CONFIG env var (rspack needs this)
6. ✅ Copy config directory for build process
7. ✅ Update ENTRYPOINT to target fluxer_server package
Additional fixes that may be needed (will address if they come up):
- Empty CDN endpoint handling (frontend code)
- Content Security Policy adjustments
- NATS configuration
- LiveKit webhook configuration

README.md
> [!NOTE]
> Learn about the developer behind Fluxer, the goals of the project, the tech stack, and what's coming next.
>
> [Read the launch blog post](https://blog.fluxer.app/how-i-built-fluxer-a-discord-like-chat-app/) · [View full roadmap](https://blog.fluxer.app/roadmap-2026/)
---
<p align="center">
<img src="./media/logo-graphic.png" alt="Fluxer graphic logo" width="400">
</p>
<p align="center">
<a href="https://fluxer.app/donate">
<img src="https://img.shields.io/badge/Donate-fluxer.app%2Fdonate-brightgreen" alt="Donate" /></a>
<a href="https://docs.fluxer.app">
<img src="https://img.shields.io/badge/Docs-docs.fluxer.app-blue" alt="Documentation" /></a>
<a href="./LICENSE">
<img src="https://img.shields.io/badge/License-AGPLv3-purple" alt="AGPLv3 License" /></a>
</p>
<div align="left" style="margin:16px 0 0; width:100%;">
<img
src="./media/app-showcase.png"
alt="Fluxer app showcase"
style="display:block; width:100%; max-width:1200px; box-sizing:border-box;"
>
</div>
# Fluxer
---
Fluxer is a **free and open source instant messaging and VoIP platform** for friends, groups, and communities. Self-host it and every feature is unlocked.
## Quick links
- [Self-hosting guide](https://docs.fluxer.app/self-hosting)
- [Documentation](https://docs.fluxer.app)
- [Donate to support development](https://fluxer.app/donate)
- [Security](https://fluxer.app/security)
## Features
<img src="./media/app-showcase.png" alt="Fluxer showcase" align="right" width="45%" />
**Real-time messaging**: typing indicators, reactions, and threaded replies.
**Voice & video**: calls in communities and DMs with screen sharing, powered by LiveKit.
**Rich media**: link previews, image and video attachments, and GIF search via KLIPY.
**Communities and channels**: text and voice channels organised into categories with granular permissions.
**Custom expressions**: upload custom emojis and stickers for your community.
**Self-hostable**: run your own instance with full control of your data and no vendor lock-in.
> [!NOTE]
> Native mobile apps and federation are top priorities. If you'd like to support this work, [donations](https://fluxer.app/donate) are greatly appreciated. You can also share feedback by emailing developers@fluxer.app.
## Self-hosting
> [!NOTE]
> New to Fluxer? Follow the [self-hosting guide](https://docs.fluxer.app/self-hosting) for step-by-step setup instructions.
### Deployment helpers
- [`livekitctl`](./fluxer_devops/livekitctl/README.md): bootstrap a LiveKit SFU for voice and video
## Development
### Tech stack
- [TypeScript](https://www.typescriptlang.org/) and [Node.js](https://nodejs.org/) for backend services
- [Hono](https://hono.dev/) as the web framework for all HTTP services
- [Erlang/OTP](https://www.erlang.org/) for the real-time WebSocket gateway (message routing and presence)
- [React](https://react.dev/) and [Electron](https://www.electronjs.org/) for the desktop and web client
- [Rust](https://www.rust-lang.org/) compiled to WebAssembly for performance-critical client code
- [SQLite](https://www.sqlite.org/) for storage by default, with optional [Cassandra](https://cassandra.apache.org/) for distributed deployments
- [Meilisearch](https://www.meilisearch.com/) for full-text search and indexing
- [Valkey](https://valkey.io/) (Redis-compatible) for caching, rate limiting, and ephemeral coordination
- [LiveKit](https://livekit.io/) for voice and video infrastructure
### Devenv development environment
Fluxer supports development through **devenv** only. It provides a reproducible Nix environment and a single, declarative process manager for the dev stack.
1. Install Nix and devenv using the [devenv getting started guide](https://devenv.sh/getting-started/).
2. Enter the environment:
```bash
devenv shell
```
If you use direnv, the repo includes a `.envrc` that loads devenv automatically; run `direnv allow` once.
### Getting started
Start all services and the development server with:
```bash
devenv up
```
Open the instance in a browser at your dev server URL (e.g. `http://localhost:48763/`).
Emails sent during development (verification codes, notifications, etc.) are captured by a local Mailpit instance. Access the inbox at your dev server URL + `/mailpit/` (e.g. `http://localhost:48763/mailpit/`).
### Voice on a remote VM
If you develop on a remote VM behind Cloudflare Tunnels (or a similar HTTP-only tunnel), voice and video won't work out of the box. Cloudflare Tunnels only proxy HTTP/WebSocket traffic, so WebRTC media transport needs a direct path to the server. Open these ports on the VM's firewall:
| Port | Protocol | Purpose |
| ----------- | -------- | ---------------- |
| 3478 | UDP | TURN/STUN |
| 7881 | TCP | ICE-TCP fallback |
| 50000-50100 | UDP | RTP/RTCP media |
The bootstrap script configures LiveKit automatically based on `domain.base_domain` in your `config.json`. When set to a non-localhost domain, it enables external IP discovery so clients can connect directly for media while signaling continues through the tunnel.
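If the VM uses `ufw`, the table above translates into rules like the following. This sketch prints the commands rather than executing them, so you can review before applying (adapt the syntax for other firewalls):

```shell
# Print the ufw commands matching the port table above (assumes ufw;
# run each printed line manually once you've reviewed it).
for rule in 3478/udp 7881/tcp 50000:50100/udp; do
  echo "ufw allow $rule"
done
```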
### Devcontainer (experimental)
There is experimental support for developing in a **VS Code Dev Container** / GitHub Codespace without Nix. The `.devcontainer/` directory provides a Docker Compose setup with all required tooling and backing services.
```bash
# Inside the dev container, start all processes:
process-compose -f .devcontainer/process-compose.yml up
```
Open the app at `http://localhost:48763` and the dev email inbox at `http://localhost:48763/mailpit/`. Predefined VS Code debugging targets are available in `.vscode/launch.json`.
> [!WARNING]
> Bluesky OAuth is disabled in the devcontainer because it requires HTTPS. All other features work normally.
### Documentation
To develop the documentation site with live preview:
```bash
pnpm dev:docs
```
## Contributing
Fluxer is **free and open source software** licensed under **AGPLv3**. Contributions are welcome.
See [`CONTRIBUTING.md`](./CONTRIBUTING.md) for development processes and how to propose changes, and [`CODE_OF_CONDUCT.md`](./CODE_OF_CONDUCT.md) for community guidelines.
## Security
Report vulnerabilities at [fluxer.app/security](https://fluxer.app/security). Do not use public issues for security reports.
<details>
<summary><strong>License</strong></summary>
<br>
Copyright (c) 2026 Fluxer Contributors
Licensed under the [GNU Affero General Public License v3](./LICENSE):
```text
Copyright (c) 2026 Fluxer Contributors
This program is free software: you can redistribute it and/or modify it under
the terms of the GNU Affero General Public License as published by the Free
Software Foundation, either version 3 of the License, or (at your option) any
later version.
This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
details.
You should have received a copy of the GNU Affero General Public License along
with this program. If not, see https://www.gnu.org/licenses/
```
See [`LICENSING.md`](./LICENSING.md) for details on commercial licensing and the CLA.
</details>


@@ -40,7 +40,8 @@
"quoteStyle": "single"
},
"parser": {
"cssModules": true,
"tailwindDirectives": true
}
},
"linter": {
@@ -49,7 +50,8 @@
"recommended": true,
"complexity": {
"noForEach": "off",
"noImportantStyles": "off",
"useLiteralKeys": "off"
},
"correctness": {
"noUndeclaredVariables": "error",
@@ -61,6 +63,7 @@
"noArrayIndexKey": "off",
"noAssignInExpressions": "off",
"noExplicitAny": "off",
"noThenProperty": "off",
"noDoubleEquals": {
"level": "error",
"options": {
@@ -103,7 +106,7 @@
"noAutofocus": "warn",
"noAccessKey": "warn",
"useAriaActivedescendantWithTabindex": "error",
"noSvgWithoutTitle": "off"
},
"nursery": {
"useSortedClasses": "error"
@@ -121,6 +124,7 @@
"**",
"!**/.git",
"!**/app.css",
"!fluxer_admin/public/static/app.css",
"!**/build",
"fluxer_app/scripts/build",
"!**/dist",
@@ -132,7 +136,10 @@
"!**/*.html",
"!**/*.module.css.d.ts",
"!**/fluxer_app/src/components/uikit/SVGMasks.tsx",
"!fluxer_marketing/public/static/app.css",
"!packages/marketing/public/static/app.css",
"!fluxer_static",
"!fluxer_docs/api-reference/openapi.json"
],
"ignoreUnknown": true
}

compose.yaml
x-logging: &default-logging
driver: json-file
options:
max-size: '10m'
max-file: '5'
services:
valkey:
image: valkey/valkey:8.0.6-alpine
container_name: valkey
restart: unless-stopped
command: ['valkey-server', '--appendonly', 'yes', '--save', '60', '1', '--loglevel', 'warning']
volumes:
- valkey_data:/data
healthcheck:
test: ['CMD', 'valkey-cli', 'ping']
interval: 10s
timeout: 5s
retries: 5
logging: *default-logging
fluxer_server:
image: ${FLUXER_SERVER_IMAGE:-ghcr.io/fluxerapp/fluxer-server:stable}
container_name: fluxer_server
restart: unless-stopped
init: true
environment:
FLUXER_CONFIG: /usr/src/app/config/config.json
NODE_ENV: production
ports:
- '${FLUXER_HTTP_PORT:-8080}:8080'
depends_on:
valkey:
condition: service_healthy
volumes:
- ./config:/usr/src/app/config:ro
- fluxer_data:/usr/src/app/data
healthcheck:
test: ['CMD-SHELL', 'curl -fsS http://127.0.0.1:8080/_health || exit 1']
interval: 15s
timeout: 5s
retries: 5
start_period: 15s
logging: *default-logging
meilisearch:
image: getmeili/meilisearch:v1.14
container_name: meilisearch
profiles: ['search']
restart: unless-stopped
environment:
MEILI_ENV: production
MEILI_MASTER_KEY: ${MEILI_MASTER_KEY:?Set MEILI_MASTER_KEY in .env or environment}
MEILI_DB_PATH: /meili_data
MEILI_HTTP_ADDR: 0.0.0.0:7700
ports:
- '${MEILI_PORT:-7700}:7700'
volumes:
- meilisearch_data:/meili_data
healthcheck:
test: ['CMD-SHELL', 'curl -fsS http://127.0.0.1:7700/health || exit 1']
interval: 15s
timeout: 5s
retries: 5
logging: *default-logging
elasticsearch:
image: elasticsearch:8.19.11
container_name: elasticsearch
profiles: ['search']
restart: unless-stopped
environment:
discovery.type: single-node
xpack.security.enabled: 'false'
xpack.security.http.ssl.enabled: 'false'
ES_JAVA_OPTS: '-Xms512m -Xmx512m'
ports:
- '${ELASTICSEARCH_PORT:-9200}:9200'
volumes:
- elasticsearch_data:/usr/share/elasticsearch/data
healthcheck:
test: ['CMD-SHELL', 'curl -fsS http://127.0.0.1:9200/_cluster/health || exit 1']
interval: 15s
timeout: 5s
retries: 5
logging: *default-logging
livekit:
image: livekit/livekit-server:v1.9.11
container_name: livekit
profiles: ['voice']
restart: unless-stopped
command: ['--config', '/etc/livekit/livekit.yaml']
volumes:
- ./config/livekit.yaml:/etc/livekit/livekit.yaml:ro
ports:
- '${LIVEKIT_PORT:-7880}:7880'
- '7881:7881'
- '3478:3478/udp'
- '50000-50100:50000-50100/udp'
healthcheck:
test: ['CMD-SHELL', 'wget -qO- http://127.0.0.1:7880 || exit 1']
interval: 15s
timeout: 5s
retries: 5
logging: *default-logging
volumes:
valkey_data:
fluxer_data:
meilisearch_data:
elasticsearch_data:
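The compose file above reads a few variables from the environment (or a `.env` file next to it). A minimal example, with placeholder values you should replace:

```bash
# .env — read by `docker compose` from the same directory as compose.yaml.
# Every variable here either has a default or is validated in the compose file.
FLUXER_SERVER_IMAGE=ghcr.io/fluxerapp/fluxer-server:stable
FLUXER_HTTP_PORT=8080

# Only needed with --profile search:
MEILI_MASTER_KEY=replace-with-a-long-random-key
MEILI_PORT=7700
ELASTICSEARCH_PORT=9200

# Only needed with --profile voice:
LIVEKIT_PORT=7880
```

Optional services are gated behind profiles: plain `docker compose up -d` starts only Valkey and the server, while `docker compose --profile search --profile voice up -d` starts everything.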


@@ -0,0 +1,116 @@
{
"$schema": "../packages/config/src/ConfigSchema.json",
"env": "development",
"domain": {
"base_domain": "localhost",
"public_port": 48763
},
"database": {
"backend": "sqlite",
"sqlite_path": "./data/dev.db"
},
"internal": {
"kv": "redis://127.0.0.1:6379/0",
"kv_mode": "standalone"
},
"s3": {
"access_key_id": "",
"secret_access_key": "",
"endpoint": "http://127.0.0.1:49319/s3"
},
"services": {
"server": {
"port": 49319,
"host": "0.0.0.0"
},
"media_proxy": {
"secret_key": ""
},
"admin": {
"secret_key_base": "",
"oauth_client_secret": ""
},
"marketing": {
"enabled": true,
"port": 49531,
"host": "0.0.0.0",
"secret_key_base": ""
},
"gateway": {
"port": 49107,
"admin_reload_secret": "",
"media_proxy_endpoint": "http://localhost:49319/media",
"logger_level": "debug"
},
"nats": {
"core_url": "nats://127.0.0.1:4222",
"jetstream_url": "nats://127.0.0.1:4223"
}
},
"auth": {
"sudo_mode_secret": "",
"connection_initiation_secret": "",
"vapid": {
"public_key": "",
"private_key": ""
},
"bluesky": {
"enabled": true,
"keys": []
}
},
"discovery": {
"min_member_count": 1
},
"dev": {
"disable_rate_limits": true
},
"integrations": {
"email": {
"enabled": true,
"provider": "smtp",
"from_email": "noreply@localhost",
"smtp": {
"host": "localhost",
"port": 49621,
"username": "dev",
"password": "",
"secure": false
}
},
"gif": {
"provider": "klipy"
},
"klipy": {
"api_key": ""
},
"tenor": {
"api_key": ""
},
"voice": {
"enabled": true,
"api_key": "",
"api_secret": "",
"url": "ws://localhost:7880",
"webhook_url": "http://localhost:49319/api/webhooks/livekit",
"default_region": {
"id": "default",
"name": "Default",
"emoji": "\ud83c\udf10",
"latitude": 0.0,
"longitude": 0.0
}
},
"search": {
"engine": "meilisearch",
"url": "http://127.0.0.1:7700",
"api_key": ""
}
},
"instance": {
"private_key_path": ""
},
"federation": {
"enabled": false
}
}


@@ -0,0 +1,64 @@
{
"$schema": "../packages/config/src/ConfigSchema.json",
"env": "production",
"domain": {
"base_domain": "chat.example.com",
"public_scheme": "https",
"public_port": 443
},
"database": {
"backend": "sqlite",
"sqlite_path": "./data/fluxer.db"
},
"internal": {
"kv": "redis://valkey:6379/0",
"kv_mode": "standalone"
},
"s3": {
"access_key_id": "YOUR_S3_ACCESS_KEY",
"secret_access_key": "YOUR_S3_SECRET_KEY",
"endpoint": "http://127.0.0.1:8080/s3"
},
"services": {
"server": {
"port": 8080,
"host": "0.0.0.0"
},
"media_proxy": {
"secret_key": "GENERATE_A_64_CHAR_HEX_SECRET"
},
"admin": {
"secret_key_base": "GENERATE_A_64_CHAR_HEX_SECRET",
"oauth_client_secret": "GENERATE_A_64_CHAR_HEX_SECRET"
},
"marketing": {
"enabled": true,
"secret_key_base": "GENERATE_A_64_CHAR_HEX_SECRET"
},
"gateway": {
"port": 8082,
"admin_reload_secret": "GENERATE_A_64_CHAR_HEX_SECRET",
"media_proxy_endpoint": "http://127.0.0.1:8080/media"
},
"nats": {
"core_url": "nats://nats:4222",
"jetstream_url": "nats://nats:4222",
"auth_token": "GENERATE_A_NATS_AUTH_TOKEN"
}
},
"auth": {
"sudo_mode_secret": "GENERATE_A_64_CHAR_HEX_SECRET",
"connection_initiation_secret": "GENERATE_A_64_CHAR_HEX_SECRET",
"vapid": {
"public_key": "YOUR_VAPID_PUBLIC_KEY",
"private_key": "YOUR_VAPID_PRIVATE_KEY"
}
},
"integrations": {
"search": {
"engine": "meilisearch",
"url": "http://meilisearch:7700",
"api_key": "YOUR_MEILISEARCH_API_KEY"
}
}
}
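The `GENERATE_A_64_CHAR_HEX_SECRET` placeholders can be filled with any cryptographically random 64-character hex string. One common way, assuming `openssl` is installed:

```shell
# 32 random bytes, hex-encoded, gives 64 hex characters.
SECRET="$(openssl rand -hex 32)"
echo "${#SECRET}"   # prints 64
```

Generate a fresh value for each placeholder rather than reusing one secret across services.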


@@ -0,0 +1,4 @@
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$ref": "../packages/config/src/ConfigSchema.json"
}

config/config.test.json
{
"env": "test",
"instance": {
"self_hosted": false
},
"domain": {
"base_domain": "localhost"
},
"database": {
"backend": "sqlite",
"sqlite_path": "./data/test.db"
},
"s3": {
"access_key_id": "test-access-key",
"secret_access_key": "test-secret-key"
},
"services": {
"media_proxy": {
"secret_key": "test-media-proxy-secret-key-minimum-32-chars"
},
"admin": {
"secret_key_base": "test-admin-secret-key-base-minimum-32-chars",
"oauth_client_secret": "test-oauth-client-secret"
},
"gateway": {
"admin_reload_secret": "test-gateway-admin-reload-secret-32-chars",
"media_proxy_endpoint": "http://localhost:8088/media"
}
},
"auth": {
"sudo_mode_secret": "test-sudo-mode-secret-minimum-32-chars",
"connection_initiation_secret": "test-connection-initiation-secret-32ch",
"vapid": {
"public_key": "test-vapid-public-key",
"private_key": "test-vapid-private-key"
},
"bluesky": {
"enabled": true,
"keys": []
}
},
"discovery": {
"min_member_count": 1
},
"dev": {
"disable_rate_limits": true,
"test_mode_enabled": true,
"relax_registration_rate_limits": true
},
"proxy": {
"trust_cf_connecting_ip": false
},
"integrations": {
"search": {
"url": "http://127.0.0.1:7700",
"api_key": "test-meilisearch-master-key"
},
"photo_dna": {
"enabled": true,
"hash_service_url": "https://api.microsoftmoderator.com/photodna/v1.0/Hash",
"hash_service_timeout_ms": 30000,
"match_endpoint": "https://api.microsoftmoderator.com/photodna/v1.0/Match",
"subscription_key": "test-subscription-key",
"match_enhance": false,
"rate_limit_rps": 10
},
"stripe": {
"enabled": true,
"secret_key": "sk_test_mock_key_for_testing",
"webhook_secret": "whsec_test_mock_webhook_secret"
}
}
}


@@ -0,0 +1,16 @@
port: 7880
keys:
'<replace-with-api-key>': '<replace-with-api-secret>'
rtc:
tcp_port: 7881
turn:
enabled: true
udp_port: 3478
room:
auto_create: true
max_participants: 100
empty_timeout: 300


@@ -1,166 +0,0 @@
NODE_ENV=development
FLUXER_API_PUBLIC_ENDPOINT=http://127.0.0.1:8088/api
FLUXER_API_CLIENT_ENDPOINT=
FLUXER_APP_ENDPOINT=http://localhost:8088
FLUXER_GATEWAY_ENDPOINT=ws://127.0.0.1:8088/gateway
FLUXER_MEDIA_ENDPOINT=http://127.0.0.1:8088/media
FLUXER_CDN_ENDPOINT=https://fluxerstatic.com
FLUXER_MARKETING_ENDPOINT=http://127.0.0.1:8088
FLUXER_ADMIN_ENDPOINT=http://127.0.0.1:8088
FLUXER_INVITE_ENDPOINT=http://fluxer.gg
FLUXER_GIFT_ENDPOINT=http://fluxer.gift
FLUXER_API_HOST=api:8080
FLUXER_API_PORT=8080
FLUXER_GATEWAY_WS_PORT=8080
FLUXER_GATEWAY_RPC_PORT=8081
FLUXER_MEDIA_PROXY_PORT=8080
FLUXER_ADMIN_PORT=8080
FLUXER_MARKETING_PORT=8080
FLUXER_PATH_GATEWAY=/gateway
FLUXER_PATH_ADMIN=/admin
FLUXER_PATH_MARKETING=/marketing
API_HOST=api:8080
FLUXER_GATEWAY_RPC_HOST=
FLUXER_GATEWAY_PUSH_ENABLED=false
FLUXER_GATEWAY_PUSH_USER_GUILD_SETTINGS_CACHE_MB=1024
FLUXER_GATEWAY_PUSH_SUBSCRIPTIONS_CACHE_MB=1024
FLUXER_GATEWAY_PUSH_BLOCKED_IDS_CACHE_MB=1024
FLUXER_GATEWAY_IDENTIFY_RATE_LIMIT_ENABLED=false
FLUXER_MEDIA_PROXY_HOST=
MEDIA_PROXY_ENDPOINT=
VAPID_PUBLIC_KEY=
VAPID_PRIVATE_KEY=
VAPID_EMAIL=support@fluxer.app
SUDO_MODE_SECRET=
PASSKEYS_ENABLED=true
PASSKEY_RP_NAME=Fluxer
PASSKEY_RP_ID=127.0.0.1
PASSKEY_ALLOWED_ORIGINS=http://127.0.0.1:8088,http://localhost:8088
ADMIN_OAUTH2_CLIENT_ID=
ADMIN_OAUTH2_CLIENT_SECRET=
ADMIN_OAUTH2_AUTO_CREATE=false
ADMIN_OAUTH2_REDIRECT_URI=http://127.0.0.1:8088/admin/oauth2_callback
RELEASE_CHANNEL=stable
DATABASE_URL=postgresql://postgres:postgres@postgres:5432/fluxer
REDIS_URL=redis://redis:6379
CASSANDRA_HOSTS=cassandra
CASSANDRA_KEYSPACE=fluxer
CASSANDRA_LOCAL_DC=datacenter1
CASSANDRA_USERNAME=cassandra
CASSANDRA_PASSWORD=cassandra
AWS_S3_ENDPOINT=http://minio:9000
AWS_ACCESS_KEY_ID=minioadmin
AWS_SECRET_ACCESS_KEY=minioadmin
AWS_S3_BUCKET_CDN=fluxer
AWS_S3_BUCKET_UPLOADS=fluxer-uploads
AWS_S3_BUCKET_DOWNLOADS=fluxer-downloads
AWS_S3_BUCKET_REPORTS=fluxer-reports
AWS_S3_BUCKET_HARVESTS=fluxer-harvests
R2_S3_ENDPOINT=http://minio:9000
R2_ACCESS_KEY_ID=minioadmin
R2_SECRET_ACCESS_KEY=minioadmin
METRICS_MODE=noop
CLICKHOUSE_URL=http://clickhouse:8123
CLICKHOUSE_DATABASE=fluxer_metrics
CLICKHOUSE_USER=fluxer
CLICKHOUSE_PASSWORD=fluxer_dev
ANOMALY_DETECTION_ENABLED=true
ANOMALY_WINDOW_SIZE=100
ANOMALY_ZSCORE_THRESHOLD=3.0
ANOMALY_CHECK_INTERVAL_SECS=60
ANOMALY_COOLDOWN_SECS=300
ANOMALY_ERROR_RATE_THRESHOLD=0.05
ALERT_WEBHOOK_URL=
EMAIL_ENABLED=false
SENDGRID_FROM_EMAIL=noreply@fluxer.app
SENDGRID_FROM_NAME=Fluxer
SENDGRID_API_KEY=
SENDGRID_WEBHOOK_PUBLIC_KEY=
SMS_ENABLED=false
TWILIO_ACCOUNT_SID=
TWILIO_AUTH_TOKEN=
TWILIO_VERIFY_SERVICE_SID=
CAPTCHA_ENABLED=true
CAPTCHA_PRIMARY_PROVIDER=turnstile
HCAPTCHA_SITE_KEY=10000000-ffff-ffff-ffff-000000000001
HCAPTCHA_PUBLIC_SITE_KEY=10000000-ffff-ffff-ffff-000000000001
HCAPTCHA_SECRET_KEY=0x0000000000000000000000000000000000000000
TURNSTILE_SITE_KEY=1x00000000000000000000AA
TURNSTILE_PUBLIC_SITE_KEY=1x00000000000000000000AA
TURNSTILE_SECRET_KEY=1x0000000000000000000000000000000AA
SEARCH_ENABLED=true
MEILISEARCH_URL=http://meilisearch:7700
MEILISEARCH_API_KEY=masterKey
STRIPE_ENABLED=false
STRIPE_SECRET_KEY=
STRIPE_WEBHOOK_SECRET=
STRIPE_PRICE_ID_MONTHLY_USD=
STRIPE_PRICE_ID_MONTHLY_EUR=
STRIPE_PRICE_ID_YEARLY_USD=
STRIPE_PRICE_ID_YEARLY_EUR=
STRIPE_PRICE_ID_VISIONARY_USD=
STRIPE_PRICE_ID_VISIONARY_EUR=
STRIPE_PRICE_ID_GIFT_VISIONARY_USD=
STRIPE_PRICE_ID_GIFT_VISIONARY_EUR=
STRIPE_PRICE_ID_GIFT_1_MONTH_USD=
STRIPE_PRICE_ID_GIFT_1_MONTH_EUR=
STRIPE_PRICE_ID_GIFT_1_YEAR_USD=
STRIPE_PRICE_ID_GIFT_1_YEAR_EUR=
CLOUDFLARE_PURGE_ENABLED=false
CLOUDFLARE_ZONE_ID=
CLOUDFLARE_API_TOKEN=
CLOUDFLARE_TUNNEL_TOKEN=
VOICE_ENABLED=true
LIVEKIT_API_KEY=
LIVEKIT_API_SECRET=
LIVEKIT_WEBHOOK_URL=http://api:8080/webhooks/livekit
LIVEKIT_AUTO_CREATE_DUMMY_DATA=true
CLAMAV_ENABLED=false
CLAMAV_HOST=clamav
CLAMAV_PORT=3310
TENOR_API_KEY=
YOUTUBE_API_KEY=
SECRET_KEY_BASE=
GATEWAY_RPC_SECRET=
GATEWAY_ADMIN_SECRET=
ERLANG_COOKIE=fluxer_dev_cookie
MEDIA_PROXY_SECRET_KEY=
SELF_HOSTED=false
AUTO_JOIN_INVITE_CODE=
FLUXER_VISIONARIES_GUILD_ID=
FLUXER_OPERATORS_GUILD_ID=
GIT_SHA=dev
BUILD_TIMESTAMP=


@@ -1,59 +1,51 @@
{
auto_https off
admin off
}
:48763 {
handle /_caddy_health {
respond "OK" 200
}
@gateway path /gateway /gateway/*
handle @gateway {
uri strip_prefix /gateway
reverse_proxy 127.0.0.1:49107
}
@marketing path /marketing /marketing/*
handle @marketing {
uri strip_prefix /marketing
reverse_proxy 127.0.0.1:49531
}
@server path /admin /admin/* /api /api/* /s3 /s3/* /queue /queue/* /media /media/* /_health /_ready /_live /.well-known/fluxer
handle @server {
reverse_proxy 127.0.0.1:49319
}
@livekit path /livekit /livekit/*
handle @livekit {
uri strip_prefix /livekit
reverse_proxy 127.0.0.1:7880
}
redir /mailpit /mailpit/
handle_path /mailpit/* {
rewrite * /mailpit{path}
reverse_proxy 127.0.0.1:49667
}
handle {
reverse_proxy 127.0.0.1:49427 {
header_up Connection {http.request.header.Connection}
header_up Upgrade {http.request.header.Upgrade}
}
}
log {
output stdout
format console
}
}


@@ -1,160 +0,0 @@
services:
postgres:
image: postgres:17
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: fluxer
volumes:
- postgres_data:/var/lib/postgresql/data
networks:
- fluxer-shared
restart: on-failure
cassandra:
image: scylladb/scylla:latest
command: --smp 1 --memory 512M --overprovisioned 1 --developer-mode 1 --api-address 0.0.0.0
ports:
- '9042:9042'
volumes:
- scylla_data:/var/lib/scylla
networks:
- fluxer-shared
restart: on-failure
healthcheck:
test: ['CMD-SHELL', 'cqlsh -e "describe cluster"']
interval: 30s
timeout: 10s
retries: 5
start_period: 90s
redis:
image: valkey/valkey:latest
volumes:
- redis_data:/data
command: valkey-server --save 60 1 --loglevel warning
networks:
- fluxer-shared
restart: on-failure
minio:
image: minio/minio
command: server /data --console-address ":9001"
environment:
MINIO_ROOT_USER: minioadmin
MINIO_ROOT_PASSWORD: minioadmin
volumes:
- minio_data:/data
networks:
- fluxer-shared
restart: on-failure
healthcheck:
test: ['CMD', 'mc', 'ready', 'local']
interval: 5s
timeout: 5s
retries: 5
minio-setup:
image: minio/mc
depends_on:
minio:
condition: service_healthy
entrypoint: >
/bin/sh -c "
mc alias set minio http://minio:9000 minioadmin minioadmin;
mc mb --ignore-existing minio/fluxer-metrics;
mc mb --ignore-existing minio/fluxer-uploads;
exit 0;
"
networks:
- fluxer-shared
restart: 'no'
clamav:
image: clamav/clamav:latest
volumes:
- clamav_data:/var/lib/clamav
environment:
CLAMAV_NO_FRESHCLAMD: 'false'
CLAMAV_NO_CLAMD: 'false'
CLAMAV_NO_MILTERD: 'true'
networks:
- fluxer-shared
restart: on-failure
healthcheck:
test: ['CMD', '/usr/local/bin/clamdcheck.sh']
interval: 30s
timeout: 10s
retries: 5
start_period: 300s
meilisearch:
image: getmeili/meilisearch:v1.25.0
volumes:
- meilisearch_data:/meili_data
environment:
MEILI_ENV: development
MEILI_MASTER_KEY: masterKey
networks:
- fluxer-shared
restart: on-failure
livekit:
image: livekit/livekit-server:latest
command: --config /etc/livekit.yaml --dev
env_file:
- ./.env
volumes:
- ./livekit.yaml:/etc/livekit.yaml:ro
ports:
- '7880:7880'
- '7882:7882/udp'
- '7999:7999/udp'
networks:
- fluxer-shared
restart: on-failure
clickhouse:
image: clickhouse/clickhouse-server:24.8
hostname: clickhouse
profiles:
- clickhouse
environment:
- CLICKHOUSE_DB=fluxer_metrics
- CLICKHOUSE_USER=fluxer
- CLICKHOUSE_PASSWORD=fluxer_dev
- CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT=1
volumes:
- clickhouse_data:/var/lib/clickhouse
- clickhouse_logs:/var/log/clickhouse-server
networks:
- fluxer-shared
ports:
- '8123:8123'
- '9000:9000'
restart: on-failure
healthcheck:
test: ['CMD', 'clickhouse-client', '--query', 'SELECT 1']
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
ulimits:
nofile:
soft: 262144
hard: 262144
networks:
fluxer-shared:
name: fluxer-shared
external: true
volumes:
postgres_data:
scylla_data:
redis_data:
minio_data:
clamav_data:
meilisearch_data:
clickhouse_data:
clickhouse_logs:
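Several services above gate dependents on healthchecks (e.g. `minio-setup` and `cassandra-migrate` wait on `service_healthy`). The worst-case wait before Compose marks a container unhealthy is roughly `start_period + interval * retries`. A quick sketch of that arithmetic for the settings above (a simplification — Compose's exact probe scheduling differs slightly):

```python
def worst_case_seconds(start_period: int, interval: int, retries: int) -> int:
    # After start_period, the probe runs every `interval` seconds and must
    # fail `retries` consecutive times before the container is unhealthy.
    return start_period + interval * retries

# ClickHouse: start_period 30s, interval 10s, retries 5
print(worst_case_seconds(30, 10, 5))  # 80
# ScyllaDB:   start_period 90s, interval 30s, retries 5
print(worst_case_seconds(90, 30, 5))  # 240
```

This is worth keeping in mind when a dependent service appears to hang on startup: it may simply be inside the upstream's `start_period` window.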


@@ -1,385 +0,0 @@
services:
caddy:
image: caddy:2
ports:
- '8088:8088'
volumes:
- ./Caddyfile.dev:/etc/caddy/Caddyfile:ro
- ../fluxer_app/dist:/app/dist:ro
networks:
- fluxer-shared
extra_hosts:
- 'host.docker.internal:host-gateway'
restart: on-failure
cloudflared:
image: cloudflare/cloudflared:latest
command: tunnel --no-autoupdate run --token ${CLOUDFLARE_TUNNEL_TOKEN}
env_file:
- ./.env
networks:
- fluxer-shared
restart: on-failure
api:
image: node:24-bookworm-slim
working_dir: /workspace
command: bash -lc "corepack enable pnpm && CI=true pnpm install && npx tsx watch --clear-screen=false src/App.ts"
env_file:
- ./.env
environment:
- CI=true
- VAPID_PUBLIC_KEY=BJHAPp7Xg4oeN_D6-EVu0D-bDyPDwFFJiLn7CzkUjUvaG_F-keQGpA_-RiNugCosTPhhdvdrn4mEOh-_1Bt35V8
- FLUXER_METRICS_HOST=metrics:8080
volumes:
- ../fluxer_api:/workspace
- api_node_modules:/workspace/node_modules
networks:
- fluxer-shared
restart: on-failure
worker:
image: node:24-bookworm-slim
working_dir: /workspace
command: bash -lc "corepack enable pnpm && CI=true pnpm install && npm run dev:worker"
env_file:
- ./.env
environment:
- CI=true
- FLUXER_METRICS_HOST=metrics:8080
volumes:
- ../fluxer_api:/workspace
- api_node_modules:/workspace/node_modules
networks:
- fluxer-shared
restart: on-failure
depends_on:
- postgres
- redis
- cassandra
media:
build:
context: ../fluxer_media_proxy
dockerfile: Dockerfile
target: build
working_dir: /workspace
command: >
bash -lc "
corepack enable pnpm &&
CI=true pnpm install &&
pnpm dev
"
user: root
env_file:
- ./.env
environment:
- CI=true
- NODE_ENV=development
- FLUXER_METRICS_HOST=metrics:8080
volumes:
- ../fluxer_media_proxy:/workspace
- media_node_modules:/workspace/node_modules
networks:
- fluxer-shared
restart: on-failure
admin:
build:
context: ../fluxer_admin
dockerfile: Dockerfile.dev
working_dir: /workspace
env_file:
- ./.env
environment:
- PORT=8080
- APP_MODE=admin
- FLUXER_METRICS_HOST=metrics:8080
volumes:
- admin_build:/workspace/build
networks:
- fluxer-shared
restart: on-failure
develop:
watch:
- action: rebuild
path: ../fluxer_admin/src
- action: rebuild
path: ../fluxer_admin/tailwind.css
marketing:
build:
context: ../fluxer_marketing
dockerfile: Dockerfile.dev
working_dir: /workspace
env_file:
- ./.env
environment:
- PORT=8080
- FLUXER_METRICS_HOST=metrics:8080
volumes:
- marketing_build:/workspace/build
networks:
- fluxer-shared
restart: on-failure
develop:
watch:
- action: rebuild
path: ../fluxer_marketing/src
- action: rebuild
path: ../fluxer_marketing/tailwind.css
docs:
image: node:24-bookworm-slim
working_dir: /workspace
command: bash -lc "corepack enable pnpm && CI=true pnpm install && pnpm dev"
env_file:
- ./.env
environment:
- CI=true
- NODE_ENV=development
volumes:
- ../fluxer_docs:/workspace
- docs_node_modules:/workspace/node_modules
networks:
- fluxer-shared
restart: on-failure
gateway:
image: erlang:28-slim
working_dir: /workspace
command: bash -c "apt-get update && apt-get install -y --no-install-recommends build-essential linux-libc-dev curl ca-certificates gettext-base git && curl -fsSL https://github.com/erlang/rebar3/releases/download/3.24.0/rebar3 -o /usr/local/bin/rebar3 && chmod +x /usr/local/bin/rebar3 && rebar3 compile && exec ./docker-entrypoint.sh"
hostname: gateway
env_file:
- ./.env
environment:
- RELEASE_NODE=fluxer_gateway@gateway
- LOGGER_LEVEL=debug
- CLUSTER_NAME=fluxer_gateway
- CLUSTER_DISCOVERY_DNS=gateway
- NODE_COOKIE=fluxer_dev_cookie
- VAPID_PUBLIC_KEY=BJHAPp7Xg4oeN_D6-EVu0D-bDyPDwFFJiLn7CzkUjUvaG_F-keQGpA_-RiNugCosTPhhdvdrn4mEOh-_1Bt35V8
- VAPID_PRIVATE_KEY=Ze8J4aSmwV5B77zz9NzTU_IdyFyR1hMiKaYF2G61Y-E
- VAPID_EMAIL=support@fluxer.app
- FLUXER_METRICS_HOST=metrics:8080
volumes:
- ../fluxer_gateway:/workspace
- gateway_build:/workspace/_build
networks:
- fluxer-shared
restart: on-failure
postgres:
image: postgres:17
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: fluxer
volumes:
- postgres_data:/var/lib/postgresql/data
networks:
- fluxer-shared
restart: on-failure
cassandra:
image: scylladb/scylla:latest
command: --smp 1 --memory 512M --overprovisioned 1 --developer-mode 1 --api-address 0.0.0.0
ports:
- '9042:9042'
volumes:
- scylla_data:/var/lib/scylla
networks:
- fluxer-shared
restart: on-failure
healthcheck:
test: ['CMD-SHELL', 'cqlsh -e "describe cluster"']
interval: 30s
timeout: 10s
retries: 5
start_period: 90s
redis:
image: valkey/valkey:latest
volumes:
- redis_data:/data
command: valkey-server --save 60 1 --loglevel warning
networks:
- fluxer-shared
restart: on-failure
minio:
image: minio/minio
command: server /data --console-address ":9001"
environment:
MINIO_ROOT_USER: minioadmin
MINIO_ROOT_PASSWORD: minioadmin
volumes:
- minio_data:/data
networks:
- fluxer-shared
restart: on-failure
healthcheck:
test: ['CMD', 'mc', 'ready', 'local']
interval: 5s
timeout: 5s
retries: 5
minio-setup:
image: minio/mc
depends_on:
minio:
condition: service_healthy
entrypoint: >
/bin/sh -c "
mc alias set minio http://minio:9000 minioadmin minioadmin;
mc mb --ignore-existing minio/fluxer-metrics;
mc mb --ignore-existing minio/fluxer-uploads;
exit 0;
"
networks:
- fluxer-shared
restart: 'no'
clamav:
image: clamav/clamav:latest
volumes:
- clamav_data:/var/lib/clamav
environment:
CLAMAV_NO_FRESHCLAMD: 'false'
CLAMAV_NO_CLAMD: 'false'
CLAMAV_NO_MILTERD: 'true'
networks:
- fluxer-shared
restart: on-failure
healthcheck:
test: ['CMD', '/usr/local/bin/clamdcheck.sh']
interval: 30s
timeout: 10s
retries: 5
start_period: 300s
meilisearch:
image: getmeili/meilisearch:v1.25.0
volumes:
- meilisearch_data:/meili_data
environment:
MEILI_ENV: development
MEILI_MASTER_KEY: masterKey
networks:
- fluxer-shared
restart: on-failure
clickhouse:
image: clickhouse/clickhouse-server:24.8
hostname: clickhouse
profiles:
- clickhouse
environment:
- CLICKHOUSE_DB=fluxer_metrics
- CLICKHOUSE_USER=fluxer
- CLICKHOUSE_PASSWORD=fluxer_dev
- CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT=1
volumes:
- clickhouse_data:/var/lib/clickhouse
- clickhouse_logs:/var/log/clickhouse-server
networks:
- fluxer-shared
ports:
- '8123:8123'
- '9000:9000'
restart: on-failure
healthcheck:
test: ['CMD', 'clickhouse-client', '--query', 'SELECT 1']
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
ulimits:
nofile:
soft: 262144
hard: 262144
metrics:
build:
context: ../fluxer_metrics
dockerfile: Dockerfile
env_file:
- ./.env
environment:
- METRICS_PORT=8080
- METRICS_MODE=${METRICS_MODE:-noop}
- CLICKHOUSE_URL=http://clickhouse:8123
- CLICKHOUSE_DATABASE=fluxer_metrics
- CLICKHOUSE_USER=fluxer
- CLICKHOUSE_PASSWORD=fluxer_dev
- ANOMALY_DETECTION_ENABLED=true
- FLUXER_ADMIN_ENDPOINT=${FLUXER_ADMIN_ENDPOINT:-}
networks:
- fluxer-shared
restart: on-failure
metrics-clickhouse:
extends:
service: metrics
profiles:
- clickhouse
environment:
- METRICS_MODE=clickhouse
depends_on:
clickhouse:
condition: service_healthy
cassandra-migrate:
image: debian:bookworm-slim
command:
[
'bash',
'-lc',
'apt-get update && apt-get install -y dnsutils && sleep 30 && /cassandra-migrate --host cassandra --username cassandra --password cassandra up',
]
working_dir: /workspace
volumes:
- ../scripts/cassandra-migrate/target/release/cassandra-migrate:/cassandra-migrate
- ../fluxer_devops/cassandra/migrations:/workspace/fluxer_devops/cassandra/migrations
networks:
- fluxer-shared
depends_on:
cassandra:
condition: service_healthy
restart: 'no'
livekit:
image: livekit/livekit-server:latest
command: --config /etc/livekit.yaml --dev
env_file:
- ./.env
volumes:
- ./livekit.yaml:/etc/livekit.yaml:ro
ports:
- '7880:7880'
- '7882:7882/udp'
- '7999:7999/udp'
networks:
- fluxer-shared
restart: on-failure
networks:
fluxer-shared:
name: fluxer-shared
external: true
volumes:
postgres_data:
scylla_data:
redis_data:
minio_data:
clamav_data:
meilisearch_data:
clickhouse_data:
clickhouse_logs:
api_node_modules:
media_node_modules:
admin_build:
marketing_build:
gateway_build:
docs_node_modules:

dev/livekit.template.yaml (new file, 28 lines)

@@ -0,0 +1,28 @@
port: 7880
keys:
'{{API_KEY}}': '{{API_SECRET}}'
rtc:
tcp_port: 7881
port_range_start: 50000
port_range_end: 50100
use_external_ip: false
node_ip: '{{NODE_IP}}'
turn:
enabled: true
domain: '{{TURN_DOMAIN}}'
udp_port: 3478
webhook:
api_key: '{{API_KEY}}'
urls:
- '{{WEBHOOK_URL}}'
room:
auto_create: true
max_participants: 100
empty_timeout: 300
development: true
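`dev/livekit.template.yaml` above is a template: the `{{NAME}}` markers are presumably substituted with concrete values by a bootstrap step before LiveKit reads the file (the actual rendering script is not shown in this diff). A minimal Python sketch of such a substitution, with hypothetical values:

```python
import re

template = "keys:\n  '{{API_KEY}}': '{{API_SECRET}}'\nrtc:\n  node_ip: '{{NODE_IP}}'\n"

def render(tmpl: str, values: dict) -> str:
    # Replace every {{NAME}} marker with its configured value; a missing
    # key raises KeyError rather than leaving a marker behind silently.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values[m.group(1)], tmpl)

rendered = render(template, {
    "API_KEY": "devkey",        # hypothetical dev credentials
    "API_SECRET": "secret",
    "NODE_IP": "127.0.0.1",
})
print(rendered)
```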

devenv.lock (new file, 330 lines)

@@ -0,0 +1,330 @@
{
"nodes": {
"cachix": {
"inputs": {
"devenv": [
"devenv"
],
"flake-compat": [
"devenv",
"flake-compat"
],
"git-hooks": [
"devenv",
"git-hooks"
],
"nixpkgs": [
"devenv",
"nixpkgs"
]
},
"locked": {
"lastModified": 1767714506,
"owner": "cachix",
"repo": "cachix",
"rev": "894c649f0daaa38bbcfb21de64be47dfa7cd0ec9",
"type": "github"
},
"original": {
"owner": "cachix",
"ref": "latest",
"repo": "cachix",
"type": "github"
}
},
"devenv": {
"inputs": {
"cachix": "cachix",
"flake-compat": "flake-compat",
"flake-parts": "flake-parts",
"git-hooks": "git-hooks",
"nix": "nix",
"nixpkgs": "nixpkgs"
},
"locked": {
"lastModified": 1764262844,
"owner": "cachix",
"repo": "devenv",
"rev": "42246161fa3bf7cd18f8334d08c73d6aaa8762d3",
"type": "github"
},
"original": {
"owner": "cachix",
"ref": "v1.11.2",
"repo": "devenv",
"type": "github"
}
},
"flake-compat": {
"flake": false,
"locked": {
"lastModified": 1767039857,
"owner": "edolstra",
"repo": "flake-compat",
"rev": "5edf11c44bc78a0d334f6334cdaf7d60d732daab",
"type": "github"
},
"original": {
"owner": "edolstra",
"repo": "flake-compat",
"type": "github"
}
},
"flake-compat_2": {
"flake": false,
"locked": {
"lastModified": 1767039857,
"owner": "NixOS",
"repo": "flake-compat",
"rev": "5edf11c44bc78a0d334f6334cdaf7d60d732daab",
"type": "github"
},
"original": {
"owner": "NixOS",
"repo": "flake-compat",
"type": "github"
}
},
"flake-parts": {
"inputs": {
"nixpkgs-lib": [
"devenv",
"nixpkgs"
]
},
"locked": {
"lastModified": 1769996383,
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "57928607ea566b5db3ad13af0e57e921e6b12381",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "flake-parts",
"type": "github"
}
},
"git-hooks": {
"inputs": {
"flake-compat": [
"devenv",
"flake-compat"
],
"gitignore": "gitignore",
"nixpkgs": [
"devenv",
"nixpkgs"
]
},
"locked": {
"lastModified": 1760663237,
"owner": "cachix",
"repo": "git-hooks.nix",
"rev": "ca5b894d3e3e151ffc1db040b6ce4dcc75d31c37",
"type": "github"
},
"original": {
"owner": "cachix",
"repo": "git-hooks.nix",
"type": "github"
}
},
"git-hooks_2": {
"inputs": {
"flake-compat": "flake-compat_2",
"gitignore": "gitignore_2",
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1769939035,
"owner": "cachix",
"repo": "git-hooks.nix",
"rev": "a8ca480175326551d6c4121498316261cbb5b260",
"type": "github"
},
"original": {
"owner": "cachix",
"repo": "git-hooks.nix",
"type": "github"
}
},
"gitignore": {
"inputs": {
"nixpkgs": [
"devenv",
"git-hooks",
"nixpkgs"
]
},
"locked": {
"lastModified": 1709087332,
"owner": "hercules-ci",
"repo": "gitignore.nix",
"rev": "637db329424fd7e46cf4185293b9cc8c88c95394",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "gitignore.nix",
"type": "github"
}
},
"gitignore_2": {
"inputs": {
"nixpkgs": [
"git-hooks",
"nixpkgs"
]
},
"locked": {
"lastModified": 1762808025,
"owner": "hercules-ci",
"repo": "gitignore.nix",
"rev": "cb5e3fdca1de58ccbc3ef53de65bd372b48f567c",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "gitignore.nix",
"type": "github"
}
},
"nix": {
"inputs": {
"flake-compat": [
"devenv",
"flake-compat"
],
"flake-parts": [
"devenv",
"flake-parts"
],
"git-hooks-nix": [
"devenv",
"git-hooks"
],
"nixpkgs": [
"devenv",
"nixpkgs"
],
"nixpkgs-23-11": [
"devenv"
],
"nixpkgs-regression": [
"devenv"
]
},
"locked": {
"lastModified": 1761648602,
"owner": "cachix",
"repo": "nix",
"rev": "3e5644da6830ef65f0a2f7ec22830c46285bfff6",
"type": "github"
},
"original": {
"owner": "cachix",
"ref": "devenv-2.30.6",
"repo": "nix",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1761313199,
"owner": "cachix",
"repo": "devenv-nixpkgs",
"rev": "d1c30452ebecfc55185ae6d1c983c09da0c274ff",
"type": "github"
},
"original": {
"owner": "cachix",
"ref": "rolling",
"repo": "devenv-nixpkgs",
"type": "github"
}
},
"nixpkgs-src": {
"flake": false,
"locked": {
"lastModified": 1769922788,
"narHash": "sha256-H3AfG4ObMDTkTJYkd8cz1/RbY9LatN5Mk4UF48VuSXc=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "207d15f1a6603226e1e223dc79ac29c7846da32e",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs_2": {
"inputs": {
"nixpkgs-src": "nixpkgs-src"
},
"locked": {
"lastModified": 1770434727,
"owner": "cachix",
"repo": "devenv-nixpkgs",
"rev": "8430f16a39c27bdeef236f1eeb56f0b51b33d348",
"type": "github"
},
"original": {
"owner": "cachix",
"ref": "rolling",
"repo": "devenv-nixpkgs",
"type": "github"
}
},
"nixpkgs_3": {
"locked": {
"lastModified": 1770537093,
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "fef9403a3e4d31b0a23f0bacebbec52c248fbb51",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"devenv": "devenv",
"git-hooks": "git-hooks_2",
"nixpkgs": "nixpkgs_2",
"pre-commit-hooks": [
"git-hooks"
],
"rust-overlay": "rust-overlay"
}
},
"rust-overlay": {
"inputs": {
"nixpkgs": "nixpkgs_3"
},
"locked": {
"lastModified": 1770520253,
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "ebb8a141f60bb0ec33836333e0ca7928a072217f",
"type": "github"
},
"original": {
"owner": "oxalica",
"repo": "rust-overlay",
"type": "github"
}
}
},
"root": "root",
"version": 7
}

devenv.nix (new file, 254 lines)

@@ -0,0 +1,254 @@
{ pkgs, config, lib, ... }:
{
imports = lib.optional (builtins.pathExists ./devenv.local.nix) ./devenv.local.nix;
env = {
FLUXER_CONFIG = "${config.git.root}/config/config.json";
FLUXER_DATABASE = "sqlite";
PC_DISABLE_TUI = "1";
};
dotenv.enable = false;
cachix.pull = [ "devenv" ];
process.manager.implementation = "process-compose";
process.managers.process-compose = {
port = 8090;
unixSocket.enable = true;
settings = {
is_tui_disabled = true;
log_level = "info";
log_configuration = {
flush_each_line = true;
};
processes = {
caddy = {
command = lib.mkForce "exec ${config.git.root}/scripts/dev_process_entry.sh caddy caddy run --config ${config.git.root}/dev/Caddyfile.dev --adapter caddyfile";
log_location = "${config.git.root}/dev/logs/caddy.log";
availability = {
restart = "always";
};
};
css_watch = {
command = lib.mkForce "exec ${config.git.root}/scripts/dev_process_entry.sh css_watch ${config.git.root}/scripts/dev_css_watch.sh";
log_location = "${config.git.root}/dev/logs/css_watch.log";
availability = {
restart = "always";
};
};
fluxer_app = {
command = lib.mkForce "exec ${config.git.root}/scripts/dev_process_entry.sh fluxer_app env FORCE_COLOR=1 FLUXER_APP_DEV_PORT=49427 ${config.git.root}/scripts/dev_fluxer_app.sh";
log_location = "${config.git.root}/dev/logs/fluxer_app.log";
availability = {
restart = "always";
};
};
fluxer_gateway = {
command = lib.mkForce "exec ${config.git.root}/scripts/dev_process_entry.sh fluxer_gateway env FLUXER_GATEWAY_NO_SHELL=1 ${config.git.root}/scripts/dev_gateway.sh";
log_location = "${config.git.root}/dev/logs/fluxer_gateway.log";
log_configuration = {
flush_each_line = true;
};
availability = {
restart = "always";
};
};
fluxer_server = {
command = lib.mkForce "exec ${config.git.root}/scripts/dev_process_entry.sh fluxer_server pnpm --filter fluxer_server dev";
log_location = "${config.git.root}/dev/logs/fluxer_server.log";
availability = {
restart = "always";
};
};
livekit = {
command = lib.mkForce "exec ${config.git.root}/scripts/dev_process_entry.sh livekit livekit-server --config ${config.git.root}/dev/livekit.yaml";
log_location = "${config.git.root}/dev/logs/livekit.log";
availability = {
restart = "always";
};
};
mailpit = {
command = lib.mkForce "exec ${config.git.root}/scripts/dev_process_entry.sh mailpit mailpit --listen 127.0.0.1:49667 --smtp 127.0.0.1:49621 --webroot /mailpit/";
log_location = "${config.git.root}/dev/logs/mailpit.log";
availability = {
restart = "always";
};
};
meilisearch = {
command = lib.mkForce "MEILI_NO_ANALYTICS=true exec ${config.git.root}/scripts/dev_process_entry.sh meilisearch meilisearch --env development --master-key \"$(cat ${config.git.root}/dev/meilisearch_master_key 2>/dev/null || true)\" --db-path ${config.git.root}/dev/data/meilisearch --http-addr 127.0.0.1:7700";
log_location = "${config.git.root}/dev/logs/meilisearch.log";
availability = {
restart = "always";
};
};
valkey = {
command = lib.mkForce "exec ${config.git.root}/scripts/dev_process_entry.sh valkey valkey-server --bind 127.0.0.1 --port 6379";
log_location = "${config.git.root}/dev/logs/valkey.log";
availability = {
restart = "always";
};
};
marketing_dev = {
command = lib.mkForce "exec ${config.git.root}/scripts/dev_process_entry.sh marketing_dev env FORCE_COLOR=1 pnpm --filter fluxer_marketing dev";
log_location = "${config.git.root}/dev/logs/marketing_dev.log";
availability = {
restart = "always";
};
};
nats_core = {
command = lib.mkForce "exec ${config.git.root}/scripts/dev_process_entry.sh nats_core nats-server -p 4222 -a 127.0.0.1";
log_location = "${config.git.root}/dev/logs/nats_core.log";
availability = {
restart = "always";
};
};
nats_jetstream = {
command = lib.mkForce "exec ${config.git.root}/scripts/dev_process_entry.sh nats_jetstream nats-server -p 4223 -js -sd ${config.git.root}/dev/data/nats_jetstream -a 127.0.0.1";
log_location = "${config.git.root}/dev/logs/nats_jetstream.log";
availability = {
restart = "always";
};
};
};
};
};
packages = with pkgs; [
nodejs_24
pnpm
erlang_28
rebar3
valkey
meilisearch
nats-server
ffmpeg
exiftool
caddy
livekit
mailpit
go_1_24
(rust-bin.stable."1.93.0".default.override {
targets = [ "wasm32-unknown-unknown" ];
})
jq
gettext
lsof
iproute2
python3
pkg-config
gcc
gnumake
sqlite
openssl
curl
uv
];
tasks."fluxer:bootstrap" = {
exec = "${config.git.root}/scripts/dev_bootstrap.sh";
before = [
"devenv:processes:meilisearch"
"devenv:processes:fluxer_server"
"devenv:processes:fluxer_app"
"devenv:processes:marketing_dev"
"devenv:processes:css_watch"
"devenv:processes:fluxer_gateway"
"devenv:processes:livekit"
"devenv:processes:mailpit"
"devenv:processes:valkey"
"devenv:processes:caddy"
"devenv:processes:nats_core"
"devenv:processes:nats_jetstream"
];
};
tasks."cassandra:mig:create" = {
exec = ''
name="$(echo "$DEVENV_TASK_INPUT" | jq -r '.name // empty')"
if [ -z "$name" ]; then
echo "Missing --input name" >&2
exit 1
fi
cd "${config.git.root}/fluxer_api"
pnpm tsx scripts/CassandraMigrate.tsx create "$name"
'';
};
tasks."cassandra:mig:check" = {
exec = ''
cd "${config.git.root}/fluxer_api"
pnpm tsx scripts/CassandraMigrate.tsx check
'';
};
tasks."cassandra:mig:status" = {
exec = ''
host="$(echo "$DEVENV_TASK_INPUT" | jq -r '.host // "localhost"')"
user="$(echo "$DEVENV_TASK_INPUT" | jq -r '.user // "cassandra"')"
pass="$(echo "$DEVENV_TASK_INPUT" | jq -r '.pass // "cassandra"')"
cd "${config.git.root}/fluxer_api"
pnpm tsx scripts/CassandraMigrate.tsx --host "$host" --username "$user" --password "$pass" status
'';
};
tasks."cassandra:mig:up" = {
exec = ''
host="$(echo "$DEVENV_TASK_INPUT" | jq -r '.host // "localhost"')"
user="$(echo "$DEVENV_TASK_INPUT" | jq -r '.user // "cassandra"')"
pass="$(echo "$DEVENV_TASK_INPUT" | jq -r '.pass // "cassandra"')"
cd "${config.git.root}/fluxer_api"
pnpm tsx scripts/CassandraMigrate.tsx --host "$host" --username "$user" --password "$pass" up
'';
};
tasks."licence:check" = {
exec = ''
cd "${config.git.root}/fluxer_api"
pnpm tsx scripts/LicenseEnforcer.tsx
'';
};
tasks."ci:py:sync" = {
exec = ''
cd "${config.git.root}/scripts/ci"
uv sync --dev
'';
};
tasks."ci:py:test" = {
exec = ''
cd "${config.git.root}/scripts/ci"
uv run pytest
'';
};
processes = {
fluxer_server.exec = "cd ${config.git.root} && pnpm --filter fluxer_server dev";
fluxer_app.exec = "cd ${config.git.root} && FORCE_COLOR=1 FLUXER_APP_DEV_PORT=49427 pnpm --filter fluxer_app dev";
marketing_dev.exec = "cd ${config.git.root} && FORCE_COLOR=1 pnpm --filter fluxer_marketing dev";
css_watch.exec = "cd ${config.git.root} && ${config.git.root}/scripts/dev_css_watch.sh";
fluxer_gateway.exec = "cd ${config.git.root} && ${config.git.root}/scripts/dev_gateway.sh";
meilisearch.exec = ''
MEILI_NO_ANALYTICS=true exec meilisearch \
--env development \
--master-key "$(cat ${config.git.root}/dev/meilisearch_master_key 2>/dev/null || true)" \
--db-path ${config.git.root}/dev/data/meilisearch \
--http-addr 127.0.0.1:7700
'';
livekit.exec = ''
exec livekit-server --config ${config.git.root}/dev/livekit.yaml
'';
mailpit.exec = ''
exec mailpit --listen 127.0.0.1:49667 --smtp 127.0.0.1:49621 --webroot /mailpit/
'';
valkey.exec = "exec valkey-server --bind 127.0.0.1 --port 6379";
caddy.exec = ''
exec caddy run --config ${config.git.root}/dev/Caddyfile.dev --adapter caddyfile
'';
nats_core.exec = "exec nats-server -p 4222 -a 127.0.0.1";
nats_jetstream.exec = ''
exec nats-server -p 4223 -js -sd ${config.git.root}/dev/data/nats_jetstream -a 127.0.0.1
'';
};
}
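The `cassandra:mig:*` tasks above read `--input` arguments from the `DEVENV_TASK_INPUT` JSON and fall back to defaults via jq's `//` operator (e.g. `.host // "localhost"`). For readers less familiar with jq, the same lookup in Python (a behavioral sketch, not part of the repo; jq's `//` falls back on `null` or `false`, which this approximates for string inputs):

```python
import json

def input_or_default(raw: str, key: str, default: str) -> str:
    # jq's `.key // "default"` yields the default when the key is
    # missing or null; mirror that for a JSON object of strings.
    value = json.loads(raw or "{}").get(key)
    return default if value is None else value

print(input_or_default('{"host": "cassandra"}', "host", "localhost"))  # cassandra
print(input_or_default('{}', "host", "localhost"))                     # localhost
```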

devenv.yaml (new file, 10 lines)

@@ -0,0 +1,10 @@
# yaml-language-server: $schema=https://devenv.sh/devenv.schema.json
inputs:
devenv:
url: github:cachix/devenv/v1.11.2
nixpkgs:
url: github:cachix/devenv-nixpkgs/rolling
rust-overlay:
url: github:oxalica/rust-overlay
overlays:
- default

flake.lock (generated file, 96 lines)

@@ -0,0 +1,96 @@
{
"nodes": {
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1770115704,
"narHash": "sha256-KHFT9UWOF2yRPlAnSXQJh6uVcgNcWlFqqiAZ7OVlHNc=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "e6eae2ee2110f3d31110d5c222cd395303343b08",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs_2": {
"locked": {
"lastModified": 1744536153,
"narHash": "sha256-awS2zRgF4uTwrOKwwiJcByDzDOdo3Q1rPZbiHQg/N38=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "18dd725c29603f582cf1900e0d25f9f1063dbf11",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"flake-utils": "flake-utils",
"nixpkgs": "nixpkgs",
"rust-overlay": "rust-overlay"
}
},
"rust-overlay": {
"inputs": {
"nixpkgs": "nixpkgs_2"
},
"locked": {
"lastModified": 1770088046,
"narHash": "sha256-4hfYDnUTvL1qSSZEA4CEThxfz+KlwSFQ30Z9jgDguO0=",
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "71f9daa4e05e49c434d08627e755495ae222bc34",
"type": "github"
},
"original": {
"owner": "oxalica",
"repo": "rust-overlay",
"type": "github"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",
"version": 7
}


@@ -1,48 +1,85 @@
ARG BUILD_TIMESTAMP=0
FROM erlang:27.1.1.0-alpine AS builder
COPY --from=ghcr.io/gleam-lang/gleam:nightly-erlang /bin/gleam /bin/gleam
RUN apk add --no-cache git curl
WORKDIR /app
COPY gleam.toml manifest.toml ./
COPY src ./src
COPY priv ./priv
COPY tailwind.css ./
RUN gleam deps download
RUN gleam export erlang-shipment
ARG TAILWIND_VERSION=v4.1.17
RUN ARCH=$(uname -m) && \
if [ "$ARCH" = "x86_64" ]; then \
TAILWIND_ARCH="x64"; \
elif [ "$ARCH" = "aarch64" ]; then \
TAILWIND_ARCH="arm64"; \
else \
TAILWIND_ARCH="x64"; \
fi && \
echo "Downloading Tailwind CSS $TAILWIND_VERSION for Alpine Linux: linux-$TAILWIND_ARCH-musl" && \
curl -sSLf -o /tmp/tailwindcss "https://github.com/tailwindlabs/tailwindcss/releases/download/${TAILWIND_VERSION}/tailwindcss-linux-${TAILWIND_ARCH}-musl" && \
chmod +x /tmp/tailwindcss && \
/tmp/tailwindcss -i ./tailwind.css -o ./priv/static/app.css --minify
FROM erlang:27.1.1.0-alpine
ARG BUILD_SHA
ARG BUILD_NUMBER
ARG BUILD_TIMESTAMP
ARG RELEASE_CHANNEL=nightly
RUN apk add --no-cache openssl ncurses-libs curl
FROM node:24-bookworm-slim AS base
WORKDIR /app
WORKDIR /usr/src/app
COPY --from=builder /app/build/erlang-shipment /app
COPY --from=builder /app/priv ./priv
RUN corepack enable && corepack prepare pnpm@10.26.0 --activate
FROM base AS deps
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY patches/ ./patches/
COPY packages/ ./packages/
COPY fluxer_admin/package.json ./fluxer_admin/
RUN pnpm install --frozen-lockfile
FROM deps AS build
COPY tsconfigs /usr/src/app/tsconfigs
COPY fluxer_admin/tsconfig.json ./fluxer_admin/
COPY fluxer_admin/src ./fluxer_admin/src
COPY fluxer_admin/public ./fluxer_admin/public
WORKDIR /usr/src/app/fluxer_admin
RUN pnpm --filter @fluxer/config generate
RUN pnpm build:css
FROM base AS prod-deps
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY patches/ ./patches/
COPY packages/ ./packages/
COPY fluxer_admin/package.json ./fluxer_admin/
RUN pnpm install --frozen-lockfile --prod
COPY --from=build /usr/src/app/packages/admin/public /usr/src/app/packages/admin/public
FROM node:24-bookworm-slim
ARG BUILD_SHA
ARG BUILD_NUMBER
ARG BUILD_TIMESTAMP
ARG RELEASE_CHANNEL
WORKDIR /usr/src/app/fluxer_admin
RUN apt-get update && apt-get install -y --no-install-recommends \
curl && \
rm -rf /var/lib/apt/lists/*
RUN corepack enable && corepack prepare pnpm@10.26.0 --activate
COPY --from=prod-deps /usr/src/app/node_modules /usr/src/app/node_modules
COPY --from=prod-deps /usr/src/app/fluxer_admin/node_modules ./node_modules
COPY --from=prod-deps /usr/src/app/packages /usr/src/app/packages
COPY --from=build /usr/src/app/packages/config/src/ConfigSchema.json /usr/src/app/packages/config/src/ConfigSchema.json
COPY --from=build /usr/src/app/packages/config/src/MasterZodSchema.generated.tsx /usr/src/app/packages/config/src/MasterZodSchema.generated.tsx
COPY tsconfigs /usr/src/app/tsconfigs
COPY --from=build /usr/src/app/fluxer_admin/tsconfig.json ./tsconfig.json
COPY --from=build /usr/src/app/fluxer_admin/src ./src
COPY --from=build /usr/src/app/fluxer_admin/public ./public
COPY fluxer_admin/package.json ./
RUN mkdir -p /usr/src/app/.cache/corepack && \
chown -R nobody:nogroup /usr/src/app
ENV HOME=/usr/src/app
ENV COREPACK_HOME=/usr/src/app/.cache/corepack
ENV NODE_ENV=production
ENV FLUXER_ADMIN_PORT=8080
ENV BUILD_SHA=${BUILD_SHA}
ENV BUILD_NUMBER=${BUILD_NUMBER}
ENV BUILD_TIMESTAMP=${BUILD_TIMESTAMP}
ENV RELEASE_CHANNEL=${RELEASE_CHANNEL}
USER nobody
EXPOSE 8080
ENV PORT=8080
ENV BUILD_TIMESTAMP=${BUILD_TIMESTAMP}
CMD ["/app/entrypoint.sh", "run"]
CMD ["pnpm", "start"]


@@ -1,21 +0,0 @@
FROM ghcr.io/gleam-lang/gleam:v1.13.0-erlang-alpine
WORKDIR /workspace
# Install dependencies
RUN apk add --no-cache curl
# Download gleam dependencies
COPY gleam.toml manifest.toml* ./
RUN gleam deps download
# Copy source code
COPY . .
# Download and setup tailwindcss, then build CSS
RUN mkdir -p build/bin && \
curl -sLo build/bin/tailwindcss https://github.com/tailwindlabs/tailwindcss/releases/download/v4.1.17/tailwindcss-linux-x64-musl && \
chmod +x build/bin/tailwindcss && \
build/bin/tailwindcss -i ./tailwind.css -o ./priv/static/app.css
CMD ["gleam", "run"]


@@ -1,21 +0,0 @@
name = "fluxer_admin"
version = "1.0.0"
[dependencies]
gleam_stdlib = ">= 0.63.2 and < 1.0.0"
gleam_http = ">= 4.2.0 and < 5.0.0"
gleam_erlang = ">= 1.0.0 and < 2.0.0"
gleam_json = ">= 3.0.0 and < 4.0.0"
gleam_httpc = ">= 5.0.0 and < 6.0.0"
wisp = ">= 2.0.0 and < 3.0.0"
mist = ">= 5.0.0 and < 6.0.0"
lustre = ">= 5.3.0 and < 6.0.0"
dot_env = ">= 1.2.0 and < 2.0.0"
birl = ">= 1.8.0 and < 2.0.0"
logging = ">= 1.3.0 and < 2.0.0"
gleam_crypto = ">= 1.5.1 and < 2.0.0"
envoy = ">= 1.0.2 and < 2.0.0"
[dev-dependencies]
gleeunit = ">= 1.6.1 and < 2.0.0"
glailglind = ">= 2.2.0 and < 3.0.0"


@@ -1,34 +0,0 @@
default:
@just --list
build:
gleam build
run:
just css && gleam run
test:
gleam test
css:
./build/bin/tailwindcss -i ./tailwind.css -o ./priv/static/app.css
css-watch:
./build/bin/tailwindcss -i ./tailwind.css -o ./priv/static/app.css --watch
clean:
rm -rf build/
rm -rf priv/static/app.css
deps:
gleam deps download
format:
gleam format
check: format build test
install-tailwind:
gleam run -m tailwind/install
setup: deps install-tailwind css


@@ -1,55 +0,0 @@
# This file was generated by Gleam
# You typically do not need to edit this file
packages = [
{ name = "birl", version = "1.8.0", build_tools = ["gleam"], requirements = ["gleam_regexp", "gleam_stdlib", "ranger"], otp_app = "birl", source = "hex", outer_checksum = "2AC7BA26F998E3DFADDB657148BD5DDFE966958AD4D6D6957DD0D22E5B56C400" },
{ name = "directories", version = "1.2.0", build_tools = ["gleam"], requirements = ["envoy", "gleam_stdlib", "platform", "simplifile"], otp_app = "directories", source = "hex", outer_checksum = "D13090CFCDF6759B87217E8DDD73A75903A700148A82C1D33799F333E249BF9E" },
{ name = "dot_env", version = "1.2.0", build_tools = ["gleam"], requirements = ["gleam_stdlib", "simplifile"], otp_app = "dot_env", source = "hex", outer_checksum = "F2B4815F1B5AF8F20A6EADBB393E715C4C35203EBD5BE8200F766EA83A0B18DE" },
{ name = "envoy", version = "1.0.2", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "envoy", source = "hex", outer_checksum = "95FD059345AA982E89A0B6E2A3BF1CF43E17A7048DCD85B5B65D3B9E4E39D359" },
{ name = "exception", version = "2.1.0", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "exception", source = "hex", outer_checksum = "329D269D5C2A314F7364BD2711372B6F2C58FA6F39981572E5CA68624D291F8C" },
{ name = "filepath", version = "1.1.2", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "filepath", source = "hex", outer_checksum = "B06A9AF0BF10E51401D64B98E4B627F1D2E48C154967DA7AF4D0914780A6D40A" },
{ name = "glailglind", version = "2.2.0", build_tools = ["gleam"], requirements = ["gleam_erlang", "gleam_http", "gleam_httpc", "gleam_stdlib", "shellout", "simplifile", "tom"], otp_app = "glailglind", source = "hex", outer_checksum = "B0306F2C0A03A5A03633FC2BDF2D52B1E76FCAED656FB3F5EBCB7C31770E2524" },
{ name = "gleam_crypto", version = "1.5.1", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "gleam_crypto", source = "hex", outer_checksum = "50774BAFFF1144E7872814C566C5D653D83A3EBF23ACC3156B757A1B6819086E" },
{ name = "gleam_erlang", version = "1.3.0", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "gleam_erlang", source = "hex", outer_checksum = "1124AD3AA21143E5AF0FC5CF3D9529F6DB8CA03E43A55711B60B6B7B3874375C" },
{ name = "gleam_http", version = "4.3.0", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "gleam_http", source = "hex", outer_checksum = "82EA6A717C842456188C190AFB372665EA56CE13D8559BF3B1DD9E40F619EE0C" },
{ name = "gleam_httpc", version = "5.0.0", build_tools = ["gleam"], requirements = ["gleam_erlang", "gleam_http", "gleam_stdlib"], otp_app = "gleam_httpc", source = "hex", outer_checksum = "C545172618D07811494E97AAA4A0FB34DA6F6D0061FDC8041C2F8E3BE2B2E48F" },
{ name = "gleam_json", version = "3.0.2", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "gleam_json", source = "hex", outer_checksum = "874FA3C3BB6E22DD2BB111966BD40B3759E9094E05257899A7C08F5DE77EC049" },
{ name = "gleam_otp", version = "1.2.0", build_tools = ["gleam"], requirements = ["gleam_erlang", "gleam_stdlib"], otp_app = "gleam_otp", source = "hex", outer_checksum = "BA6A294E295E428EC1562DC1C11EA7530DCB981E8359134BEABC8493B7B2258E" },
{ name = "gleam_regexp", version = "1.1.1", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "gleam_regexp", source = "hex", outer_checksum = "9C215C6CA84A5B35BB934A9B61A9A306EC743153BE2B0425A0D032E477B062A9" },
{ name = "gleam_stdlib", version = "0.65.0", build_tools = ["gleam"], requirements = [], otp_app = "gleam_stdlib", source = "hex", outer_checksum = "7C69C71D8C493AE11A5184828A77110EB05A7786EBF8B25B36A72F879C3EE107" },
{ name = "gleam_time", version = "1.4.0", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "gleam_time", source = "hex", outer_checksum = "DCDDC040CE97DA3D2A925CDBBA08D8A78681139745754A83998641C8A3F6587E" },
{ name = "gleam_yielder", version = "1.1.0", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "gleam_yielder", source = "hex", outer_checksum = "8E4E4ECFA7982859F430C57F549200C7749823C106759F4A19A78AEA6687717A" },
{ name = "gleeunit", version = "1.6.1", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "gleeunit", source = "hex", outer_checksum = "FDC68A8C492B1E9B429249062CD9BAC9B5538C6FBF584817205D0998C42E1DAC" },
{ name = "glisten", version = "8.0.1", build_tools = ["gleam"], requirements = ["gleam_erlang", "gleam_otp", "gleam_stdlib", "logging", "telemetry"], otp_app = "glisten", source = "hex", outer_checksum = "534BB27C71FB9E506345A767C0D76B17A9E9199934340C975DC003C710E3692D" },
{ name = "gramps", version = "6.0.0", build_tools = ["gleam"], requirements = ["gleam_crypto", "gleam_erlang", "gleam_http", "gleam_stdlib"], otp_app = "gramps", source = "hex", outer_checksum = "8B7195978FBFD30B43DF791A8A272041B81E45D245314D7A41FC57237AA882A0" },
{ name = "houdini", version = "1.2.0", build_tools = ["gleam"], requirements = [], otp_app = "houdini", source = "hex", outer_checksum = "5DB1053F1AF828049C2B206D4403C18970ABEF5C18671CA3C2D2ED0DD64F6385" },
{ name = "hpack_erl", version = "0.3.0", build_tools = ["rebar3"], requirements = [], otp_app = "hpack", source = "hex", outer_checksum = "D6137D7079169D8C485C6962DFE261AF5B9EF60FBC557344511C1E65E3D95FB0" },
{ name = "logging", version = "1.3.0", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "logging", source = "hex", outer_checksum = "1098FBF10B54B44C2C7FDF0B01C1253CAFACDACABEFB4B0D027803246753E06D" },
{ name = "lustre", version = "5.3.5", build_tools = ["gleam"], requirements = ["gleam_erlang", "gleam_json", "gleam_otp", "gleam_stdlib", "houdini"], otp_app = "lustre", source = "hex", outer_checksum = "5CBB5DD2849D8316A2101792FC35AEB58CE4B151451044A9C2A2A70A2F7FCEB8" },
{ name = "marceau", version = "1.3.0", build_tools = ["gleam"], requirements = [], otp_app = "marceau", source = "hex", outer_checksum = "2D1C27504BEF45005F5DFB18591F8610FB4BFA91744878210BDC464412EC44E9" },
{ name = "mist", version = "5.0.3", build_tools = ["gleam"], requirements = ["exception", "gleam_erlang", "gleam_http", "gleam_otp", "gleam_stdlib", "gleam_yielder", "glisten", "gramps", "hpack_erl", "logging"], otp_app = "mist", source = "hex", outer_checksum = "7C4BE717A81305323C47C8A591E6B9BA4AC7F56354BF70B4D3DF08CC01192668" },
{ name = "platform", version = "1.0.0", build_tools = ["gleam"], requirements = [], otp_app = "platform", source = "hex", outer_checksum = "8339420A95AD89AAC0F82F4C3DB8DD401041742D6C3F46132A8739F6AEB75391" },
{ name = "ranger", version = "1.4.0", build_tools = ["gleam"], requirements = ["gleam_stdlib", "gleam_yielder"], otp_app = "ranger", source = "hex", outer_checksum = "C8988E8F8CDBD3E7F4D8F2E663EF76490390899C2B2885A6432E942495B3E854" },
{ name = "shellout", version = "1.7.0", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "shellout", source = "hex", outer_checksum = "1BDC03438FEB97A6AF3E396F4ABEB32BECF20DF2452EC9A8C0ACEB7BDDF70B14" },
{ name = "simplifile", version = "2.3.0", build_tools = ["gleam"], requirements = ["filepath", "gleam_stdlib"], otp_app = "simplifile", source = "hex", outer_checksum = "0A868DAC6063D9E983477981839810DC2E553285AB4588B87E3E9C96A7FB4CB4" },
{ name = "telemetry", version = "1.3.0", build_tools = ["rebar3"], requirements = [], otp_app = "telemetry", source = "hex", outer_checksum = "7015FC8919DBE63764F4B4B87A95B7C0996BD539E0D499BE6EC9D7F3875B79E6" },
{ name = "tom", version = "2.0.0", build_tools = ["gleam"], requirements = ["gleam_stdlib", "gleam_time"], otp_app = "tom", source = "hex", outer_checksum = "74D0C5A3761F7A7D06994755D4D5AD854122EF8E9F9F76A3E7547606D8C77091" },
{ name = "wisp", version = "2.1.0", build_tools = ["gleam"], requirements = ["directories", "exception", "filepath", "gleam_crypto", "gleam_erlang", "gleam_http", "gleam_json", "gleam_stdlib", "houdini", "logging", "marceau", "mist", "simplifile"], otp_app = "wisp", source = "hex", outer_checksum = "362BDDD11BF48EB38CDE51A73BC7D1B89581B395CA998E3F23F11EC026151C54" },
]
[requirements]
birl = { version = ">= 1.8.0 and < 2.0.0" }
dot_env = { version = ">= 1.2.0 and < 2.0.0" }
glailglind = { version = ">= 2.2.0 and < 3.0.0" }
gleam_erlang = { version = ">= 1.0.0 and < 2.0.0" }
gleam_http = { version = ">= 4.2.0 and < 5.0.0" }
gleam_httpc = { version = ">= 5.0.0 and < 6.0.0" }
gleam_json = { version = ">= 3.0.0 and < 4.0.0" }
gleam_stdlib = { version = ">= 0.63.2 and < 1.0.0" }
gleeunit = { version = ">= 1.6.1 and < 2.0.0" }
logging = { version = ">= 1.3.0 and < 2.0.0" }
lustre = { version = ">= 5.3.0 and < 6.0.0" }
mist = { version = ">= 5.0.0 and < 6.0.0" }
wisp = { version = ">= 2.0.0 and < 3.0.0" }
gleam_crypto = { version = ">= 1.5.1 and < 2.0.0" }
envoy = { version = ">= 1.0.2 and < 2.0.0" }

fluxer_admin/package.json

@@ -0,0 +1,27 @@
{
  "name": "fluxer_admin",
  "private": true,
  "type": "module",
  "scripts": {
    "build:css": "pnpm --filter @fluxer/admin build:css",
    "build:css:watch": "pnpm --filter @fluxer/admin build:css:watch",
    "dev": "tsx watch --clear-screen=false src/index.tsx",
    "start": "tsx src/index.tsx",
    "typecheck": "tsgo --noEmit"
  },
  "dependencies": {
    "@fluxer/admin": "workspace:*",
    "@fluxer/config": "workspace:*",
    "@fluxer/constants": "workspace:*",
    "@fluxer/hono": "workspace:*",
    "@fluxer/initialization": "workspace:*",
    "@fluxer/logger": "workspace:*",
    "tsx": "catalog:"
  },
  "devDependencies": {
    "@types/node": "catalog:",
    "@typescript/native-preview": "catalog:",
    "tailwindcss": "catalog:"
  },
  "packageManager": "pnpm@10.29.3"
}


@@ -0,0 +1,51 @@
/*
* Copyright (C) 2026 Fluxer Contributors
*
* This file is part of Fluxer.
*
* Fluxer is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Fluxer is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
*/
import {loadConfig} from '@fluxer/config/src/ConfigLoader';
import {
extractBaseServiceConfig,
extractBuildInfoConfig,
extractKVClientConfig,
extractRateLimit,
} from '@fluxer/config/src/ServiceConfigSlices';
import {ADMIN_OAUTH2_APPLICATION_ID} from '@fluxer/constants/src/Core';
const master = await loadConfig();
const adminOAuthRedirectUri = `${master.endpoints.admin}/oauth2_callback`;
export const Config = {
...extractBaseServiceConfig(master),
...extractKVClientConfig(master),
...extractBuildInfoConfig(),
secretKeyBase: master.services.admin.secret_key_base,
apiEndpoint: master.endpoints.api,
mediaEndpoint: master.endpoints.media,
staticCdnEndpoint: master.endpoints.static_cdn,
adminEndpoint: master.endpoints.admin,
webAppEndpoint: master.endpoints.app,
oauthClientId: ADMIN_OAUTH2_APPLICATION_ID.toString(),
oauthClientSecret: master.services.admin.oauth_client_secret,
oauthRedirectUri: adminOAuthRedirectUri,
port: master.services.admin.port,
basePath: master.services.admin.base_path,
selfHosted: master.instance.self_hosted,
rateLimit: extractRateLimit(master.services.admin.rate_limit),
};
export type Config = typeof Config;


@@ -0,0 +1,26 @@
/*
* Copyright (C) 2026 Fluxer Contributors
*
* This file is part of Fluxer.
*
* Fluxer is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Fluxer is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
*/
import {Config} from '@app/Config';
import {createServiceInstrumentation} from '@fluxer/initialization/src/CreateServiceInstrumentation';
export const shutdownInstrumentation = createServiceInstrumentation({
serviceName: 'fluxer-admin',
config: Config,
});


@@ -0,0 +1,23 @@
/*
* Copyright (C) 2026 Fluxer Contributors
*
* This file is part of Fluxer.
*
* Fluxer is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Fluxer is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
*/
import {createLogger, type Logger as FluxerLogger} from '@fluxer/logger/src/Logger';
export const Logger = createLogger('fluxer-admin');
export type Logger = FluxerLogger;


@@ -1,72 +0,0 @@
//// Copyright (C) 2026 Fluxer Contributors
////
//// This file is part of Fluxer.
////
//// Fluxer is free software: you can redistribute it and/or modify
//// it under the terms of the GNU Affero General Public License as published by
//// the Free Software Foundation, either version 3 of the License, or
//// (at your option) any later version.
////
//// Fluxer is distributed in the hope that it will be useful,
//// but WITHOUT ANY WARRANTY; without even the implied warranty of
//// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
//// GNU Affero General Public License for more details.
////
//// You should have received a copy of the GNU Affero General Public License
//// along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
import fluxer_admin/config
import fluxer_admin/middleware/cache_middleware
import fluxer_admin/router
import fluxer_admin/web.{type Context, Context, normalize_base_path}
import gleam/erlang/process
import mist
import wisp
import wisp/wisp_mist
pub fn main() {
wisp.configure_logger()
let assert Ok(cfg) = config.load_config()
let base_path = normalize_base_path(cfg.base_path)
let ctx =
Context(
api_endpoint: cfg.api_endpoint,
oauth_client_id: cfg.oauth_client_id,
oauth_client_secret: cfg.oauth_client_secret,
oauth_redirect_uri: cfg.oauth_redirect_uri,
secret_key_base: cfg.secret_key_base,
static_directory: "priv/static",
media_endpoint: cfg.media_endpoint,
cdn_endpoint: cfg.cdn_endpoint,
asset_version: cfg.build_timestamp,
base_path: base_path,
app_endpoint: cfg.admin_endpoint,
web_app_endpoint: cfg.web_app_endpoint,
metrics_endpoint: cfg.metrics_endpoint,
)
let assert Ok(_) =
wisp_mist.handler(handle_request(_, ctx), cfg.secret_key_base)
|> mist.new
|> mist.bind("0.0.0.0")
|> mist.port(cfg.port)
|> mist.start
process.sleep_forever()
}
fn handle_request(req: wisp.Request, ctx: Context) -> wisp.Response {
let static_dir = ctx.static_directory
case wisp.path_segments(req) {
["static", ..] -> {
use <- wisp.serve_static(req, under: "/static", from: static_dir)
router.handle_request(req, ctx)
}
_ -> router.handle_request(req, ctx)
}
|> cache_middleware.add_cache_headers
}


@@ -1,24 +0,0 @@
//// Copyright (C) 2026 Fluxer Contributors
////
//// This file is part of Fluxer.
////
//// Fluxer is free software: you can redistribute it and/or modify
//// it under the terms of the GNU Affero General Public License as published by
//// the Free Software Foundation, either version 3 of the License, or
//// (at your option) any later version.
////
//// Fluxer is distributed in the hope that it will be useful,
//// but WITHOUT ANY WARRANTY; without even the implied warranty of
//// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
//// GNU Affero General Public License for more details.
////
//// You should have received a copy of the GNU Affero General Public License
//// along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
import fluxer_admin/constants
import gleam/list
pub fn has_permission(admin_acls: List(String), required_acl: String) -> Bool {
list.contains(admin_acls, required_acl)
|| list.contains(admin_acls, constants.acl_wildcard)
}


@@ -1,264 +0,0 @@
//// Copyright (C) 2026 Fluxer Contributors
////
//// This file is part of Fluxer.
////
//// Fluxer is free software: you can redistribute it and/or modify
//// it under the terms of the GNU Affero General Public License as published by
//// the Free Software Foundation, either version 3 of the License, or
//// (at your option) any later version.
////
//// Fluxer is distributed in the hope that it will be useful,
//// but WITHOUT ANY WARRANTY; without even the implied warranty of
//// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
//// GNU Affero General Public License for more details.
////
//// You should have received a copy of the GNU Affero General Public License
//// along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
import fluxer_admin/api/common.{
type ApiError, Forbidden, NetworkError, NotFound, ServerError, Unauthorized,
admin_post_with_audit,
}
import fluxer_admin/web.{type Context, type Session}
import gleam/dynamic/decode
import gleam/http
import gleam/http/request
import gleam/httpc
import gleam/json
import gleam/list
import gleam/option.{type Option}
pub type Archive {
Archive(
archive_id: String,
subject_type: String,
subject_id: String,
requested_by: String,
requested_at: String,
started_at: Option(String),
completed_at: Option(String),
failed_at: Option(String),
file_size: Option(String),
progress_percent: Int,
progress_step: Option(String),
error_message: Option(String),
download_url_expires_at: Option(String),
expires_at: Option(String),
)
}
pub type ListArchivesResponse {
ListArchivesResponse(archives: List(Archive))
}
pub fn trigger_user_archive(
ctx: Context,
session: Session,
user_id: String,
audit_log_reason: Option(String),
) -> Result(Nil, ApiError) {
admin_post_with_audit(
ctx,
session,
"/admin/archives/user",
[#("user_id", json.string(user_id))],
audit_log_reason,
)
}
pub fn trigger_guild_archive(
ctx: Context,
session: Session,
guild_id: String,
audit_log_reason: Option(String),
) -> Result(Nil, ApiError) {
admin_post_with_audit(
ctx,
session,
"/admin/archives/guild",
[#("guild_id", json.string(guild_id))],
audit_log_reason,
)
}
fn archive_decoder() {
use archive_id <- decode.field("archive_id", decode.string)
use subject_type <- decode.field("subject_type", decode.string)
use subject_id <- decode.field("subject_id", decode.string)
use requested_by <- decode.field("requested_by", decode.string)
use requested_at <- decode.field("requested_at", decode.string)
use started_at <- decode.optional_field(
"started_at",
option.None,
decode.optional(decode.string),
)
use completed_at <- decode.optional_field(
"completed_at",
option.None,
decode.optional(decode.string),
)
use failed_at <- decode.optional_field(
"failed_at",
option.None,
decode.optional(decode.string),
)
use file_size <- decode.optional_field(
"file_size",
option.None,
decode.optional(decode.string),
)
use progress_percent <- decode.field("progress_percent", decode.int)
use progress_step <- decode.optional_field(
"progress_step",
option.None,
decode.optional(decode.string),
)
use error_message <- decode.optional_field(
"error_message",
option.None,
decode.optional(decode.string),
)
use download_url_expires_at <- decode.optional_field(
"download_url_expires_at",
option.None,
decode.optional(decode.string),
)
use expires_at <- decode.optional_field(
"expires_at",
option.None,
decode.optional(decode.string),
)
decode.success(Archive(
archive_id: archive_id,
subject_type: subject_type,
subject_id: subject_id,
requested_by: requested_by,
requested_at: requested_at,
started_at: started_at,
completed_at: completed_at,
failed_at: failed_at,
file_size: file_size,
progress_percent: progress_percent,
progress_step: progress_step,
error_message: error_message,
download_url_expires_at: download_url_expires_at,
expires_at: expires_at,
))
}
pub fn list_archives(
ctx: Context,
session: Session,
subject_type: String,
subject_id: Option(String),
include_expired: Bool,
) -> Result(ListArchivesResponse, ApiError) {
let fields = [
#("subject_type", json.string(subject_type)),
#("include_expired", json.bool(include_expired)),
]
let fields = case subject_id {
option.Some(id) -> fields |> list.append([#("subject_id", json.string(id))])
option.None -> fields
}
let url = ctx.api_endpoint <> "/admin/archives/list"
let body = json.object(fields) |> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let decoder = {
use archives <- decode.field("archives", decode.list(archive_decoder()))
decode.success(ListArchivesResponse(archives: archives))
}
case json.parse(resp.body, decoder) {
Ok(result) -> Ok(result)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
pub fn get_archive_download_url(
ctx: Context,
session: Session,
subject_type: String,
subject_id: String,
archive_id: String,
) -> Result(#(String, String), ApiError) {
let url =
ctx.api_endpoint
<> "/admin/archives/"
<> subject_type
<> "/"
<> subject_id
<> "/"
<> archive_id
<> "/download"
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Get)
|> request.set_header("authorization", "Bearer " <> session.access_token)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let decoder = {
use download_url <- decode.field("downloadUrl", decode.string)
use expires_at <- decode.field("expiresAt", decode.string)
decode.success(#(download_url, expires_at))
}
case json.parse(resp.body, decoder) {
Ok(result) -> Ok(result)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}


@@ -1,128 +0,0 @@
//// Copyright (C) 2026 Fluxer Contributors
////
//// This file is part of Fluxer.
////
//// Fluxer is free software: you can redistribute it and/or modify
//// it under the terms of the GNU Affero General Public License as published by
//// the Free Software Foundation, either version 3 of the License, or
//// (at your option) any later version.
////
//// Fluxer is distributed in the hope that it will be useful,
//// but WITHOUT ANY WARRANTY; without even the implied warranty of
//// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
//// GNU Affero General Public License for more details.
////
//// You should have received a copy of the GNU Affero General Public License
//// along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
import fluxer_admin/api/common.{
type ApiError, Forbidden, NetworkError, NotFound, ServerError, Unauthorized,
}
import fluxer_admin/web
import gleam/dynamic/decode
import gleam/http
import gleam/http/request
import gleam/httpc
import gleam/json
import gleam/option
pub type AssetPurgeResult {
AssetPurgeResult(
id: String,
asset_type: String,
found_in_db: Bool,
guild_id: option.Option(String),
)
}
pub type AssetPurgeError {
AssetPurgeError(id: String, error: String)
}
pub type AssetPurgeResponse {
AssetPurgeResponse(
processed: List(AssetPurgeResult),
errors: List(AssetPurgeError),
)
}
pub fn purge_assets(
ctx: web.Context,
session: web.Session,
ids: List(String),
audit_log_reason: option.Option(String),
) -> Result(AssetPurgeResponse, ApiError) {
let url = ctx.api_endpoint <> "/admin/assets/purge"
let body =
json.object([#("ids", json.array(ids, json.string))]) |> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
let req = case audit_log_reason {
option.Some(reason) -> request.set_header(req, "x-audit-log-reason", reason)
option.None -> req
}
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let result_decoder = {
use processed <- decode.field(
"processed",
decode.list({
use id <- decode.field("id", decode.string)
use asset_type <- decode.field("asset_type", decode.string)
use found_in_db <- decode.field("found_in_db", decode.bool)
use guild_id <- decode.field(
"guild_id",
decode.optional(decode.string),
)
decode.success(AssetPurgeResult(
id: id,
asset_type: asset_type,
found_in_db: found_in_db,
guild_id: guild_id,
))
}),
)
use errors <- decode.field(
"errors",
decode.list({
use id <- decode.field("id", decode.string)
use error <- decode.field("error", decode.string)
decode.success(AssetPurgeError(id: id, error: error))
}),
)
decode.success(AssetPurgeResponse(processed: processed, errors: errors))
}
case json.parse(resp.body, result_decoder) {
Ok(response) -> Ok(response)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}


@@ -1,169 +0,0 @@
//// Copyright (C) 2026 Fluxer Contributors
////
//// This file is part of Fluxer.
////
//// Fluxer is free software: you can redistribute it and/or modify
//// it under the terms of the GNU Affero General Public License as published by
//// the Free Software Foundation, either version 3 of the License, or
//// (at your option) any later version.
////
//// Fluxer is distributed in the hope that it will be useful,
//// but WITHOUT ANY WARRANTY; without even the implied warranty of
//// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
//// GNU Affero General Public License for more details.
////
//// You should have received a copy of the GNU Affero General Public License
//// along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
import fluxer_admin/api/common.{
type ApiError, Forbidden, NetworkError, NotFound, ServerError, Unauthorized,
}
import fluxer_admin/web
import gleam/dict
import gleam/dynamic/decode
import gleam/http
import gleam/http/request
import gleam/httpc
import gleam/json
import gleam/option
pub type AuditLog {
AuditLog(
log_id: String,
admin_user_id: String,
target_type: String,
target_id: String,
action: String,
audit_log_reason: option.Option(String),
metadata: List(#(String, String)),
created_at: String,
)
}
pub type ListAuditLogsResponse {
ListAuditLogsResponse(logs: List(AuditLog), total: Int)
}
pub fn search_audit_logs(
ctx: web.Context,
session: web.Session,
query: option.Option(String),
admin_user_id_filter: option.Option(String),
target_type: option.Option(String),
target_id: option.Option(String),
action: option.Option(String),
limit: Int,
offset: Int,
) -> Result(ListAuditLogsResponse, ApiError) {
let url = ctx.api_endpoint <> "/admin/audit-logs/search"
let mut_fields = [#("limit", json.int(limit)), #("offset", json.int(offset))]
let mut_fields = case query {
option.Some(q) if q != "" -> [#("query", json.string(q)), ..mut_fields]
_ -> mut_fields
}
let mut_fields = case admin_user_id_filter {
option.Some(id) if id != "" -> [
#("admin_user_id", json.string(id)),
..mut_fields
]
_ -> mut_fields
}
let mut_fields = case target_type {
option.Some(tt) if tt != "" -> [
#("target_type", json.string(tt)),
..mut_fields
]
_ -> mut_fields
}
let mut_fields = case target_id {
option.Some(tid) if tid != "" -> [
#("target_id", json.string(tid)),
..mut_fields
]
_ -> mut_fields
}
let mut_fields = case action {
option.Some(act) if act != "" -> [
#("action", json.string(act)),
..mut_fields
]
_ -> mut_fields
}
let body = json.object(mut_fields) |> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let audit_log_decoder = {
use log_id <- decode.field("log_id", decode.string)
use admin_user_id <- decode.field("admin_user_id", decode.string)
use target_type_val <- decode.field("target_type", decode.string)
use target_id_val <- decode.field("target_id", decode.string)
use action <- decode.field("action", decode.string)
use audit_log_reason <- decode.field(
"audit_log_reason",
decode.optional(decode.string),
)
use metadata <- decode.field(
"metadata",
decode.dict(decode.string, decode.string),
)
use created_at <- decode.field("created_at", decode.string)
let metadata_list =
metadata
|> dict.to_list
decode.success(AuditLog(
log_id: log_id,
admin_user_id: admin_user_id,
target_type: target_type_val,
target_id: target_id_val,
action: action,
audit_log_reason: audit_log_reason,
metadata: metadata_list,
created_at: created_at,
))
}
let decoder = {
use logs <- decode.field("logs", decode.list(audit_log_decoder))
use total <- decode.field("total", decode.int)
decode.success(ListAuditLogsResponse(logs: logs, total: total))
}
case json.parse(resp.body, decoder) {
Ok(response) -> Ok(response)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}


@@ -1,249 +0,0 @@
//// Copyright (C) 2026 Fluxer Contributors
////
//// This file is part of Fluxer.
////
//// Fluxer is free software: you can redistribute it and/or modify
//// it under the terms of the GNU Affero General Public License as published by
//// the Free Software Foundation, either version 3 of the License, or
//// (at your option) any later version.
////
//// Fluxer is distributed in the hope that it will be useful,
//// but WITHOUT ANY WARRANTY; without even the implied warranty of
//// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
//// GNU Affero General Public License for more details.
////
//// You should have received a copy of the GNU Affero General Public License
//// along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
import fluxer_admin/api/common.{
type ApiError, Forbidden, NetworkError, NotFound, ServerError, Unauthorized,
admin_post_simple, admin_post_with_audit,
}
import fluxer_admin/web
import gleam/dynamic/decode
import gleam/http
import gleam/http/request
import gleam/httpc
import gleam/json
import gleam/option
pub type CheckBanResponse {
CheckBanResponse(banned: Bool)
}
pub fn ban_email(
ctx: web.Context,
session: web.Session,
email: String,
audit_log_reason: option.Option(String),
) -> Result(Nil, ApiError) {
admin_post_with_audit(
ctx,
session,
"/admin/bans/email/add",
[#("email", json.string(email))],
audit_log_reason,
)
}
pub fn unban_email(
ctx: web.Context,
session: web.Session,
email: String,
audit_log_reason: option.Option(String),
) -> Result(Nil, ApiError) {
admin_post_with_audit(
ctx,
session,
"/admin/bans/email/remove",
[#("email", json.string(email))],
audit_log_reason,
)
}
pub fn check_email_ban(
ctx: web.Context,
session: web.Session,
email: String,
) -> Result(CheckBanResponse, ApiError) {
let url = ctx.api_endpoint <> "/admin/bans/email/check"
let body = json.object([#("email", json.string(email))]) |> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let decoder = {
use banned <- decode.field("banned", decode.bool)
decode.success(CheckBanResponse(banned: banned))
}
case json.parse(resp.body, decoder) {
Ok(response) -> Ok(response)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
pub fn ban_ip(
ctx: web.Context,
session: web.Session,
ip: String,
) -> Result(Nil, ApiError) {
admin_post_simple(ctx, session, "/admin/bans/ip/add", [
#("ip", json.string(ip)),
])
}
pub fn unban_ip(
ctx: web.Context,
session: web.Session,
ip: String,
) -> Result(Nil, ApiError) {
admin_post_simple(ctx, session, "/admin/bans/ip/remove", [
#("ip", json.string(ip)),
])
}
pub fn check_ip_ban(
ctx: web.Context,
session: web.Session,
ip: String,
) -> Result(CheckBanResponse, ApiError) {
let url = ctx.api_endpoint <> "/admin/bans/ip/check"
let body = json.object([#("ip", json.string(ip))]) |> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let decoder = {
use banned <- decode.field("banned", decode.bool)
decode.success(CheckBanResponse(banned: banned))
}
case json.parse(resp.body, decoder) {
Ok(response) -> Ok(response)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
pub fn ban_phone(
ctx: web.Context,
session: web.Session,
phone: String,
) -> Result(Nil, ApiError) {
admin_post_simple(ctx, session, "/admin/bans/phone/add", [
#("phone", json.string(phone)),
])
}
pub fn unban_phone(
ctx: web.Context,
session: web.Session,
phone: String,
) -> Result(Nil, ApiError) {
admin_post_simple(ctx, session, "/admin/bans/phone/remove", [
#("phone", json.string(phone)),
])
}
pub fn check_phone_ban(
ctx: web.Context,
session: web.Session,
phone: String,
) -> Result(CheckBanResponse, ApiError) {
let url = ctx.api_endpoint <> "/admin/bans/phone/check"
let body = json.object([#("phone", json.string(phone))]) |> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let decoder = {
use banned <- decode.field("banned", decode.bool)
decode.success(CheckBanResponse(banned: banned))
}
case json.parse(resp.body, decoder) {
Ok(response) -> Ok(response)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}


@@ -1,332 +0,0 @@
//// Copyright (C) 2026 Fluxer Contributors
////
//// This file is part of Fluxer.
////
//// Fluxer is free software: you can redistribute it and/or modify
//// it under the terms of the GNU Affero General Public License as published by
//// the Free Software Foundation, either version 3 of the License, or
//// (at your option) any later version.
////
//// Fluxer is distributed in the hope that it will be useful,
//// but WITHOUT ANY WARRANTY; without even the implied warranty of
//// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
//// GNU Affero General Public License for more details.
////
//// You should have received a copy of the GNU Affero General Public License
//// along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
import fluxer_admin/api/common.{
type ApiError, Forbidden, NetworkError, NotFound, ServerError, Unauthorized,
}
import fluxer_admin/web
import gleam/dynamic/decode
import gleam/http
import gleam/http/request
import gleam/httpc
import gleam/json
import gleam/option
pub type BulkOperationError {
BulkOperationError(id: String, error: String)
}
pub type BulkOperationResponse {
BulkOperationResponse(
successful: List(String),
failed: List(BulkOperationError),
)
}
pub fn bulk_update_user_flags(
ctx: web.Context,
session: web.Session,
user_ids: List(String),
add_flags: List(String),
remove_flags: List(String),
audit_log_reason: option.Option(String),
) -> Result(BulkOperationResponse, ApiError) {
let url = ctx.api_endpoint <> "/admin/users/bulk-update-flags"
let body =
json.object([
#("user_ids", json.array(user_ids, json.string)),
#("add_flags", json.array(add_flags, json.string)),
#("remove_flags", json.array(remove_flags, json.string)),
])
|> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
let req = case audit_log_reason {
option.Some(reason) -> request.set_header(req, "x-audit-log-reason", reason)
option.None -> req
}
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let error_decoder = {
use id <- decode.field("id", decode.string)
use error <- decode.field("error", decode.string)
decode.success(BulkOperationError(id: id, error: error))
}
let decoder = {
use successful <- decode.field("successful", decode.list(decode.string))
use failed <- decode.field("failed", decode.list(error_decoder))
decode.success(BulkOperationResponse(
successful: successful,
failed: failed,
))
}
case json.parse(resp.body, decoder) {
Ok(response) -> Ok(response)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
pub fn bulk_update_guild_features(
ctx: web.Context,
session: web.Session,
guild_ids: List(String),
add_features: List(String),
remove_features: List(String),
audit_log_reason: option.Option(String),
) -> Result(BulkOperationResponse, ApiError) {
let url = ctx.api_endpoint <> "/admin/guilds/bulk-update-features"
let body =
json.object([
#("guild_ids", json.array(guild_ids, json.string)),
#("add_features", json.array(add_features, json.string)),
#("remove_features", json.array(remove_features, json.string)),
])
|> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
let req = case audit_log_reason {
option.Some(reason) -> request.set_header(req, "x-audit-log-reason", reason)
option.None -> req
}
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let error_decoder = {
use id <- decode.field("id", decode.string)
use error <- decode.field("error", decode.string)
decode.success(BulkOperationError(id: id, error: error))
}
let decoder = {
use successful <- decode.field("successful", decode.list(decode.string))
use failed <- decode.field("failed", decode.list(error_decoder))
decode.success(BulkOperationResponse(
successful: successful,
failed: failed,
))
}
case json.parse(resp.body, decoder) {
Ok(response) -> Ok(response)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
pub fn bulk_add_guild_members(
ctx: web.Context,
session: web.Session,
guild_id: String,
user_ids: List(String),
audit_log_reason: option.Option(String),
) -> Result(BulkOperationResponse, ApiError) {
let url = ctx.api_endpoint <> "/admin/bulk/add-guild-members"
let body =
json.object([
#("guild_id", json.string(guild_id)),
#("user_ids", json.array(user_ids, json.string)),
])
|> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
let req = case audit_log_reason {
option.Some(reason) -> request.set_header(req, "x-audit-log-reason", reason)
option.None -> req
}
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let error_decoder = {
use id <- decode.field("id", decode.string)
use error <- decode.field("error", decode.string)
decode.success(BulkOperationError(id: id, error: error))
}
let decoder = {
use successful <- decode.field("successful", decode.list(decode.string))
use failed <- decode.field("failed", decode.list(error_decoder))
decode.success(BulkOperationResponse(
successful: successful,
failed: failed,
))
}
case json.parse(resp.body, decoder) {
Ok(response) -> Ok(response)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
pub fn bulk_schedule_user_deletion(
ctx: web.Context,
session: web.Session,
user_ids: List(String),
reason_code: Int,
public_reason: option.Option(String),
days_until_deletion: Int,
audit_log_reason: option.Option(String),
) -> Result(BulkOperationResponse, ApiError) {
let url = ctx.api_endpoint <> "/admin/bulk/schedule-user-deletion"
let fields = [
#("user_ids", json.array(user_ids, json.string)),
#("reason_code", json.int(reason_code)),
#("days_until_deletion", json.int(days_until_deletion)),
]
let fields = case public_reason {
option.Some(r) -> [#("public_reason", json.string(r)), ..fields]
option.None -> fields
}
let body = json.object(fields) |> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
let req = case audit_log_reason {
option.Some(reason) -> request.set_header(req, "x-audit-log-reason", reason)
option.None -> req
}
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let error_decoder = {
use id <- decode.field("id", decode.string)
use error <- decode.field("error", decode.string)
decode.success(BulkOperationError(id: id, error: error))
}
let decoder = {
use successful <- decode.field("successful", decode.list(decode.string))
use failed <- decode.field("failed", decode.list(error_decoder))
decode.success(BulkOperationResponse(
successful: successful,
failed: failed,
))
}
case json.parse(resp.body, decoder) {
Ok(response) -> Ok(response)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}


@@ -1,124 +0,0 @@
//// Copyright (C) 2026 Fluxer Contributors
////
//// This file is part of Fluxer.
////
//// Fluxer is free software: you can redistribute it and/or modify
//// it under the terms of the GNU Affero General Public License as published by
//// the Free Software Foundation, either version 3 of the License, or
//// (at your option) any later version.
////
//// Fluxer is distributed in the hope that it will be useful,
//// but WITHOUT ANY WARRANTY; without even the implied warranty of
//// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
//// GNU Affero General Public License for more details.
////
//// You should have received a copy of the GNU Affero General Public License
//// along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
import fluxer_admin/api/common.{
type ApiError, Forbidden, NetworkError, NotFound, ServerError, Unauthorized,
}
import fluxer_admin/web
import gleam/dynamic/decode
import gleam/http
import gleam/http/request
import gleam/httpc
import gleam/json
fn parse_codes(body: String) -> Result(List(String), ApiError) {
let decoder = {
use codes <- decode.field("codes", decode.list(decode.string))
decode.success(codes)
}
case json.parse(body, decoder) {
Ok(codes) -> Ok(codes)
Error(_) -> Error(ServerError)
}
}
pub fn generate_beta_codes(
ctx: web.Context,
session: web.Session,
count: Int,
) -> Result(List(String), ApiError) {
let url = ctx.api_endpoint <> "/admin/codes/beta"
let body = json.object([#("count", json.int(count))]) |> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) ->
case resp.status {
200 -> parse_codes(resp.body)
401 -> Error(Unauthorized)
403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
404 -> Error(NotFound)
_ -> Error(ServerError)
}
Error(_) -> Error(NetworkError)
}
}
pub fn generate_gift_codes(
ctx: web.Context,
session: web.Session,
count: Int,
product_type: String,
) -> Result(List(String), ApiError) {
let url = ctx.api_endpoint <> "/admin/codes/gift"
let body =
json.object([
#("count", json.int(count)),
#("product_type", json.string(product_type)),
])
|> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) ->
case resp.status {
200 -> parse_codes(resp.body)
401 -> Error(Unauthorized)
403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
404 -> Error(NotFound)
_ -> Error(ServerError)
}
Error(_) -> Error(NetworkError)
}
}


@@ -1,240 +0,0 @@
//// Copyright (C) 2026 Fluxer Contributors
////
//// This file is part of Fluxer.
////
//// Fluxer is free software: you can redistribute it and/or modify
//// it under the terms of the GNU Affero General Public License as published by
//// the Free Software Foundation, either version 3 of the License, or
//// (at your option) any later version.
////
//// Fluxer is distributed in the hope that it will be useful,
//// but WITHOUT ANY WARRANTY; without even the implied warranty of
//// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
//// GNU Affero General Public License for more details.
////
//// You should have received a copy of the GNU Affero General Public License
//// along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
import fluxer_admin/web
import gleam/dynamic/decode
import gleam/http
import gleam/http/request
import gleam/httpc
import gleam/json
import gleam/option
pub type UserLookupResult {
UserLookupResult(
id: String,
username: String,
discriminator: Int,
global_name: option.Option(String),
bot: Bool,
system: Bool,
flags: String,
avatar: option.Option(String),
banner: option.Option(String),
bio: option.Option(String),
pronouns: option.Option(String),
accent_color: option.Option(Int),
email: option.Option(String),
email_verified: Bool,
email_bounced: Bool,
phone: option.Option(String),
date_of_birth: option.Option(String),
locale: option.Option(String),
premium_type: option.Option(Int),
premium_since: option.Option(String),
premium_until: option.Option(String),
suspicious_activity_flags: Int,
temp_banned_until: option.Option(String),
pending_deletion_at: option.Option(String),
pending_bulk_message_deletion_at: option.Option(String),
deletion_reason_code: option.Option(Int),
deletion_public_reason: option.Option(String),
acls: List(String),
has_totp: Bool,
authenticator_types: List(Int),
last_active_at: option.Option(String),
last_active_ip: option.Option(String),
last_active_ip_reverse: option.Option(String),
last_active_location: option.Option(String),
)
}
pub type ApiError {
Unauthorized
Forbidden(message: String)
NotFound
ServerError
NetworkError
}
pub fn admin_post_simple(
ctx: web.Context,
session: web.Session,
path: String,
fields: List(#(String, json.Json)),
) -> Result(Nil, ApiError) {
admin_post_with_audit(ctx, session, path, fields, option.None)
}
pub fn admin_post_with_audit(
ctx: web.Context,
session: web.Session,
path: String,
fields: List(#(String, json.Json)),
audit_log_reason: option.Option(String),
) -> Result(Nil, ApiError) {
let url = ctx.api_endpoint <> path
let body = json.object(fields) |> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
let req = case audit_log_reason {
option.Some(reason) -> request.set_header(req, "x-audit-log-reason", reason)
option.None -> req
}
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> Ok(Nil)
Ok(resp) if resp.status == 204 -> Ok(Nil)
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
pub fn user_lookup_decoder() {
use id <- decode.field("id", decode.string)
use username <- decode.field("username", decode.string)
use discriminator <- decode.field("discriminator", decode.int)
use global_name <- decode.field("global_name", decode.optional(decode.string))
use bot <- decode.field("bot", decode.bool)
use system <- decode.field("system", decode.bool)
use flags <- decode.field("flags", decode.string)
use avatar <- decode.field("avatar", decode.optional(decode.string))
use banner <- decode.field("banner", decode.optional(decode.string))
use bio <- decode.field("bio", decode.optional(decode.string))
use pronouns <- decode.field("pronouns", decode.optional(decode.string))
use accent_color <- decode.field("accent_color", decode.optional(decode.int))
use email <- decode.field("email", decode.optional(decode.string))
use email_verified <- decode.field("email_verified", decode.bool)
use email_bounced <- decode.field("email_bounced", decode.bool)
use phone <- decode.field("phone", decode.optional(decode.string))
use date_of_birth <- decode.field(
"date_of_birth",
decode.optional(decode.string),
)
use locale <- decode.field("locale", decode.optional(decode.string))
use premium_type <- decode.field("premium_type", decode.optional(decode.int))
use premium_since <- decode.field(
"premium_since",
decode.optional(decode.string),
)
use premium_until <- decode.field(
"premium_until",
decode.optional(decode.string),
)
use suspicious_activity_flags <- decode.field(
"suspicious_activity_flags",
decode.int,
)
use temp_banned_until <- decode.field(
"temp_banned_until",
decode.optional(decode.string),
)
use pending_deletion_at <- decode.field(
"pending_deletion_at",
decode.optional(decode.string),
)
use pending_bulk_message_deletion_at <- decode.field(
"pending_bulk_message_deletion_at",
decode.optional(decode.string),
)
use deletion_reason_code <- decode.field(
"deletion_reason_code",
decode.optional(decode.int),
)
use deletion_public_reason <- decode.field(
"deletion_public_reason",
decode.optional(decode.string),
)
use acls <- decode.field("acls", decode.list(decode.string))
use has_totp <- decode.field("has_totp", decode.bool)
use authenticator_types <- decode.field(
"authenticator_types",
decode.list(decode.int),
)
use last_active_at <- decode.field(
"last_active_at",
decode.optional(decode.string),
)
use last_active_ip <- decode.field(
"last_active_ip",
decode.optional(decode.string),
)
use last_active_ip_reverse <- decode.field(
"last_active_ip_reverse",
decode.optional(decode.string),
)
use last_active_location <- decode.field(
"last_active_location",
decode.optional(decode.string),
)
decode.success(UserLookupResult(
id: id,
username: username,
discriminator: discriminator,
global_name: global_name,
bot: bot,
system: system,
flags: flags,
avatar: avatar,
banner: banner,
bio: bio,
pronouns: pronouns,
accent_color: accent_color,
email: email,
email_verified: email_verified,
email_bounced: email_bounced,
phone: phone,
date_of_birth: date_of_birth,
locale: locale,
premium_type: premium_type,
premium_since: premium_since,
premium_until: premium_until,
suspicious_activity_flags: suspicious_activity_flags,
temp_banned_until: temp_banned_until,
pending_deletion_at: pending_deletion_at,
pending_bulk_message_deletion_at: pending_bulk_message_deletion_at,
deletion_reason_code: deletion_reason_code,
deletion_public_reason: deletion_public_reason,
acls: acls,
has_totp: has_totp,
authenticator_types: authenticator_types,
last_active_at: last_active_at,
last_active_ip: last_active_ip,
last_active_ip_reverse: last_active_ip_reverse,
last_active_location: last_active_location,
))
}


@@ -1,109 +0,0 @@
//// Copyright (C) 2026 Fluxer Contributors
////
//// This file is part of Fluxer.
////
//// Fluxer is free software: you can redistribute it and/or modify
//// it under the terms of the GNU Affero General Public License as published by
//// the Free Software Foundation, either version 3 of the License, or
//// (at your option) any later version.
////
//// Fluxer is distributed in the hope that it will be useful,
//// but WITHOUT ANY WARRANTY; without even the implied warranty of
//// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
//// GNU Affero General Public License for more details.
////
//// You should have received a copy of the GNU Affero General Public License
//// along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
import fluxer_admin/api/common.{
type ApiError, Forbidden, NetworkError, ServerError, Unauthorized,
}
import fluxer_admin/web.{type Context, type Session}
import gleam/dict
import gleam/dynamic/decode
import gleam/http
import gleam/http/request
import gleam/httpc
import gleam/json
import gleam/list
import gleam/string
pub type FeatureFlagConfig {
FeatureFlagConfig(guild_ids: List(String))
}
pub fn get_feature_flags(
ctx: Context,
session: Session,
) -> Result(List(#(String, FeatureFlagConfig)), ApiError) {
let url = ctx.api_endpoint <> "/admin/feature-flags/get"
let body = json.object([]) |> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let decoder = {
use feature_flags <- decode.field(
"feature_flags",
decode.dict(decode.string, decode.list(decode.string)),
)
decode.success(feature_flags)
}
case json.parse(resp.body, decoder) {
Ok(flags_dict) -> {
let entries =
dict.to_list(flags_dict)
|> list.map(fn(entry) {
let #(flag, guild_ids) = entry
#(flag, FeatureFlagConfig(guild_ids:))
})
Ok(entries)
}
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> Error(Forbidden("Access denied"))
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
pub fn update_feature_flag(
ctx: Context,
session: Session,
flag_id: String,
guild_ids: List(String),
) -> Result(FeatureFlagConfig, ApiError) {
let url = ctx.api_endpoint <> "/admin/feature-flags/update"
let guild_ids_str = string.join(guild_ids, ",")
let body =
json.object([
#("flag", json.string(flag_id)),
#("guild_ids", json.string(guild_ids_str)),
])
|> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> Ok(FeatureFlagConfig(guild_ids:))
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> Error(Forbidden("Access denied"))
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}


@@ -1,182 +0,0 @@
//// Copyright (C) 2026 Fluxer Contributors
////
//// This file is part of Fluxer.
////
//// Fluxer is free software: you can redistribute it and/or modify
//// it under the terms of the GNU Affero General Public License as published by
//// the Free Software Foundation, either version 3 of the License, or
//// (at your option) any later version.
////
//// Fluxer is distributed in the hope that it will be useful,
//// but WITHOUT ANY WARRANTY; without even the implied warranty of
//// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
//// GNU Affero General Public License for more details.
////
//// You should have received a copy of the GNU Affero General Public License
//// along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
import fluxer_admin/api/common.{
type ApiError, Forbidden, NetworkError, NotFound, ServerError, Unauthorized,
}
import fluxer_admin/web
import gleam/dynamic/decode
import gleam/http
import gleam/http/request
import gleam/httpc
import gleam/json
pub type GuildEmojiAsset {
GuildEmojiAsset(
id: String,
name: String,
animated: Bool,
creator_id: String,
media_url: String,
)
}
pub type ListGuildEmojisResponse {
ListGuildEmojisResponse(guild_id: String, emojis: List(GuildEmojiAsset))
}
pub type GuildStickerAsset {
GuildStickerAsset(
id: String,
name: String,
format_type: Int,
creator_id: String,
media_url: String,
)
}
pub type ListGuildStickersResponse {
ListGuildStickersResponse(guild_id: String, stickers: List(GuildStickerAsset))
}
pub fn list_guild_emojis(
ctx: web.Context,
session: web.Session,
guild_id: String,
) -> Result(ListGuildEmojisResponse, ApiError) {
let url = ctx.api_endpoint <> "/admin/guilds/" <> guild_id <> "/emojis"
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Get)
|> request.set_header("authorization", "Bearer " <> session.access_token)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let emoji_decoder = {
use id <- decode.field("id", decode.string)
use name <- decode.field("name", decode.string)
use animated <- decode.field("animated", decode.bool)
use creator_id <- decode.field("creator_id", decode.string)
use media_url <- decode.field("media_url", decode.string)
decode.success(GuildEmojiAsset(
id: id,
name: name,
animated: animated,
creator_id: creator_id,
media_url: media_url,
))
}
let decoder = {
use guild_id <- decode.field("guild_id", decode.string)
use emojis <- decode.field("emojis", decode.list(emoji_decoder))
decode.success(ListGuildEmojisResponse(
guild_id: guild_id,
emojis: emojis,
))
}
case json.parse(resp.body, decoder) {
Ok(result) -> Ok(result)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
pub fn list_guild_stickers(
ctx: web.Context,
session: web.Session,
guild_id: String,
) -> Result(ListGuildStickersResponse, ApiError) {
let url = ctx.api_endpoint <> "/admin/guilds/" <> guild_id <> "/stickers"
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Get)
|> request.set_header("authorization", "Bearer " <> session.access_token)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let sticker_decoder = {
use id <- decode.field("id", decode.string)
use name <- decode.field("name", decode.string)
use format_type <- decode.field("format_type", decode.int)
use creator_id <- decode.field("creator_id", decode.string)
use media_url <- decode.field("media_url", decode.string)
decode.success(GuildStickerAsset(
id: id,
name: name,
format_type: format_type,
creator_id: creator_id,
media_url: media_url,
))
}
let decoder = {
use guild_id <- decode.field("guild_id", decode.string)
use stickers <- decode.field("stickers", decode.list(sticker_decoder))
decode.success(ListGuildStickersResponse(
guild_id: guild_id,
stickers: stickers,
))
}
case json.parse(resp.body, decoder) {
Ok(result) -> Ok(result)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}


@@ -1,529 +0,0 @@
//// Copyright (C) 2026 Fluxer Contributors
////
//// This file is part of Fluxer.
////
//// Fluxer is free software: you can redistribute it and/or modify
//// it under the terms of the GNU Affero General Public License as published by
//// the Free Software Foundation, either version 3 of the License, or
//// (at your option) any later version.
////
//// Fluxer is distributed in the hope that it will be useful,
//// but WITHOUT ANY WARRANTY; without even the implied warranty of
//// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
//// GNU Affero General Public License for more details.
////
//// You should have received a copy of the GNU Affero General Public License
//// along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
import fluxer_admin/api/common.{
type ApiError, Forbidden, NetworkError, NotFound, ServerError, Unauthorized,
admin_post_simple,
}
import fluxer_admin/web
import gleam/dynamic/decode
import gleam/http
import gleam/http/request
import gleam/httpc
import gleam/json
import gleam/option
pub type GuildChannel {
GuildChannel(
id: String,
name: String,
type_: Int,
position: Int,
parent_id: option.Option(String),
)
}
pub type GuildRole {
GuildRole(
id: String,
name: String,
color: Int,
position: Int,
permissions: String,
hoist: Bool,
mentionable: Bool,
)
}
pub type GuildMember {
GuildMember(
user: GuildMemberUser,
nick: option.Option(String),
avatar: option.Option(String),
roles: List(String),
joined_at: String,
premium_since: option.Option(String),
deaf: Bool,
mute: Bool,
flags: Int,
pending: Bool,
communication_disabled_until: option.Option(String),
)
}
pub type GuildMemberUser {
GuildMemberUser(
id: String,
username: String,
discriminator: String,
avatar: option.Option(String),
bot: Bool,
system: Bool,
public_flags: Int,
)
}
pub type ListGuildMembersResponse {
ListGuildMembersResponse(
members: List(GuildMember),
total: Int,
limit: Int,
offset: Int,
)
}
pub type GuildLookupResult {
GuildLookupResult(
id: String,
owner_id: String,
name: String,
vanity_url_code: option.Option(String),
icon: option.Option(String),
banner: option.Option(String),
splash: option.Option(String),
features: List(String),
verification_level: Int,
mfa_level: Int,
nsfw_level: Int,
explicit_content_filter: Int,
default_message_notifications: Int,
afk_channel_id: option.Option(String),
afk_timeout: Int,
system_channel_id: option.Option(String),
system_channel_flags: Int,
rules_channel_id: option.Option(String),
disabled_operations: Int,
member_count: Int,
channels: List(GuildChannel),
roles: List(GuildRole),
)
}
pub type GuildSearchResult {
GuildSearchResult(
id: String,
owner_id: String,
name: String,
features: List(String),
icon: option.Option(String),
banner: option.Option(String),
member_count: Int,
)
}
pub type SearchGuildsResponse {
SearchGuildsResponse(guilds: List(GuildSearchResult), total: Int)
}
pub fn lookup_guild(
ctx: web.Context,
session: web.Session,
guild_id: String,
) -> Result(option.Option(GuildLookupResult), ApiError) {
let url = ctx.api_endpoint <> "/admin/guilds/lookup"
let body =
json.object([#("guild_id", json.string(guild_id))]) |> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let channel_decoder = {
use id <- decode.field("id", decode.string)
use name <- decode.field("name", decode.string)
use type_ <- decode.field("type", decode.int)
use position <- decode.field("position", decode.int)
use parent_id <- decode.field(
"parent_id",
decode.optional(decode.string),
)
decode.success(GuildChannel(
id: id,
name: name,
type_: type_,
position: position,
parent_id: parent_id,
))
}
let role_decoder = {
use id <- decode.field("id", decode.string)
use name <- decode.field("name", decode.string)
use color <- decode.field("color", decode.int)
use position <- decode.field("position", decode.int)
use permissions <- decode.field("permissions", decode.string)
use hoist <- decode.field("hoist", decode.bool)
use mentionable <- decode.field("mentionable", decode.bool)
decode.success(GuildRole(
id: id,
name: name,
color: color,
position: position,
permissions: permissions,
hoist: hoist,
mentionable: mentionable,
))
}
let guild_decoder = {
use id <- decode.field("id", decode.string)
use owner_id <- decode.field("owner_id", decode.string)
use name <- decode.field("name", decode.string)
use vanity_url_code <- decode.field(
"vanity_url_code",
decode.optional(decode.string),
)
use icon <- decode.field("icon", decode.optional(decode.string))
use banner <- decode.field("banner", decode.optional(decode.string))
use splash <- decode.field("splash", decode.optional(decode.string))
use features <- decode.field("features", decode.list(decode.string))
use verification_level <- decode.field("verification_level", decode.int)
use mfa_level <- decode.field("mfa_level", decode.int)
use nsfw_level <- decode.field("nsfw_level", decode.int)
use explicit_content_filter <- decode.field(
"explicit_content_filter",
decode.int,
)
use default_message_notifications <- decode.field(
"default_message_notifications",
decode.int,
)
use afk_channel_id <- decode.field(
"afk_channel_id",
decode.optional(decode.string),
)
use afk_timeout <- decode.field("afk_timeout", decode.int)
use system_channel_id <- decode.field(
"system_channel_id",
decode.optional(decode.string),
)
use system_channel_flags <- decode.field(
"system_channel_flags",
decode.int,
)
use rules_channel_id <- decode.field(
"rules_channel_id",
decode.optional(decode.string),
)
use disabled_operations <- decode.field(
"disabled_operations",
decode.int,
)
use member_count <- decode.field("member_count", decode.int)
use channels <- decode.field("channels", decode.list(channel_decoder))
use roles <- decode.field("roles", decode.list(role_decoder))
decode.success(GuildLookupResult(
id: id,
owner_id: owner_id,
name: name,
vanity_url_code: vanity_url_code,
icon: icon,
banner: banner,
splash: splash,
features: features,
verification_level: verification_level,
mfa_level: mfa_level,
nsfw_level: nsfw_level,
explicit_content_filter: explicit_content_filter,
default_message_notifications: default_message_notifications,
afk_channel_id: afk_channel_id,
afk_timeout: afk_timeout,
system_channel_id: system_channel_id,
system_channel_flags: system_channel_flags,
rules_channel_id: rules_channel_id,
disabled_operations: disabled_operations,
member_count: member_count,
channels: channels,
roles: roles,
))
}
let decoder = {
use guild <- decode.field("guild", decode.optional(guild_decoder))
decode.success(guild)
}
case json.parse(resp.body, decoder) {
Ok(result) -> Ok(result)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
pub fn clear_guild_fields(
ctx: web.Context,
session: web.Session,
guild_id: String,
fields: List(String),
) -> Result(Nil, ApiError) {
admin_post_simple(ctx, session, "/admin/guilds/clear-fields", [
#("guild_id", json.string(guild_id)),
#("fields", json.array(fields, json.string)),
])
}
pub fn update_guild_features(
ctx: web.Context,
session: web.Session,
guild_id: String,
add_features: List(String),
remove_features: List(String),
) -> Result(Nil, ApiError) {
admin_post_simple(ctx, session, "/admin/guilds/update-features", [
#("guild_id", json.string(guild_id)),
#("add_features", json.array(add_features, json.string)),
#("remove_features", json.array(remove_features, json.string)),
])
}
pub fn update_guild_settings(
ctx: web.Context,
session: web.Session,
guild_id: String,
verification_level: option.Option(Int),
mfa_level: option.Option(Int),
nsfw_level: option.Option(Int),
explicit_content_filter: option.Option(Int),
default_message_notifications: option.Option(Int),
disabled_operations: option.Option(Int),
) -> Result(Nil, ApiError) {
let mut_fields = [#("guild_id", json.string(guild_id))]
let mut_fields = case verification_level {
option.Some(vl) -> [#("verification_level", json.int(vl)), ..mut_fields]
option.None -> mut_fields
}
let mut_fields = case mfa_level {
option.Some(ml) -> [#("mfa_level", json.int(ml)), ..mut_fields]
option.None -> mut_fields
}
let mut_fields = case nsfw_level {
option.Some(nl) -> [#("nsfw_level", json.int(nl)), ..mut_fields]
option.None -> mut_fields
}
let mut_fields = case explicit_content_filter {
option.Some(ecf) -> [
#("explicit_content_filter", json.int(ecf)),
..mut_fields
]
option.None -> mut_fields
}
let mut_fields = case default_message_notifications {
option.Some(dmn) -> [
#("default_message_notifications", json.int(dmn)),
..mut_fields
]
option.None -> mut_fields
}
let mut_fields = case disabled_operations {
option.Some(dops) -> [
#("disabled_operations", json.int(dops)),
..mut_fields
]
option.None -> mut_fields
}
admin_post_simple(ctx, session, "/admin/guilds/update-settings", mut_fields)
}
pub fn update_guild_name(
ctx: web.Context,
session: web.Session,
guild_id: String,
name: String,
) -> Result(Nil, ApiError) {
admin_post_simple(ctx, session, "/admin/guilds/update-name", [
#("guild_id", json.string(guild_id)),
#("name", json.string(name)),
])
}
pub fn update_guild_vanity(
ctx: web.Context,
session: web.Session,
guild_id: String,
vanity_url_code: option.Option(String),
) -> Result(Nil, ApiError) {
let fields = [#("guild_id", json.string(guild_id))]
let fields = case vanity_url_code {
option.Some(code) -> [#("vanity_url_code", json.string(code)), ..fields]
option.None -> fields
}
admin_post_simple(ctx, session, "/admin/guilds/update-vanity", fields)
}
pub fn transfer_guild_ownership(
ctx: web.Context,
session: web.Session,
guild_id: String,
new_owner_id: String,
) -> Result(Nil, ApiError) {
admin_post_simple(ctx, session, "/admin/guilds/transfer-ownership", [
#("guild_id", json.string(guild_id)),
#("new_owner_id", json.string(new_owner_id)),
])
}
pub fn reload_guild(
ctx: web.Context,
session: web.Session,
guild_id: String,
) -> Result(Nil, ApiError) {
admin_post_simple(ctx, session, "/admin/guilds/reload", [
#("guild_id", json.string(guild_id)),
])
}
pub fn shutdown_guild(
ctx: web.Context,
session: web.Session,
guild_id: String,
) -> Result(Nil, ApiError) {
admin_post_simple(ctx, session, "/admin/guilds/shutdown", [
#("guild_id", json.string(guild_id)),
])
}
pub fn delete_guild(
ctx: web.Context,
session: web.Session,
guild_id: String,
) -> Result(Nil, ApiError) {
admin_post_simple(ctx, session, "/admin/guilds/delete", [
#("guild_id", json.string(guild_id)),
])
}
pub fn force_add_user_to_guild(
ctx: web.Context,
session: web.Session,
user_id: String,
guild_id: String,
) -> Result(Nil, ApiError) {
admin_post_simple(ctx, session, "/admin/guilds/force-add-user", [
#("user_id", json.string(user_id)),
#("guild_id", json.string(guild_id)),
])
}
pub fn search_guilds(
ctx: web.Context,
session: web.Session,
query: String,
limit: Int,
offset: Int,
) -> Result(SearchGuildsResponse, ApiError) {
let url = ctx.api_endpoint <> "/admin/guilds/search"
let body =
json.object([
#("query", json.string(query)),
#("limit", json.int(limit)),
#("offset", json.int(offset)),
])
|> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let guild_decoder = {
use id <- decode.field("id", decode.string)
use owner_id <- decode.optional_field("owner_id", "", decode.string)
use name <- decode.field("name", decode.string)
use features <- decode.field("features", decode.list(decode.string))
use icon <- decode.optional_field(
"icon",
option.None,
decode.optional(decode.string),
)
use banner <- decode.optional_field(
"banner",
option.None,
decode.optional(decode.string),
)
use member_count <- decode.optional_field("member_count", 0, decode.int)
decode.success(GuildSearchResult(
id: id,
owner_id: owner_id,
name: name,
features: features,
icon: icon,
banner: banner,
member_count: member_count,
))
}
let decoder = {
use guilds <- decode.field("guilds", decode.list(guild_decoder))
use total <- decode.field("total", decode.int)
decode.success(SearchGuildsResponse(guilds: guilds, total: total))
}
case json.parse(resp.body, decoder) {
Ok(result) -> Ok(result)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
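For reference, a minimal caller sketch for `search_guilds` above, written as a hypothetical handler in the same module (the function name `handle_search`, the query string, and the `gleam/int`/`gleam/io` imports are assumptions; `ctx` and `session` come from `fluxer_admin/web` as in the rest of this file):

```gleam
import gleam/int
import gleam/io

// Hypothetical handler snippet: request the first page of 25 guilds
// matching "gaming" and report the outcome of each error branch.
pub fn handle_search(ctx: web.Context, session: web.Session) -> Nil {
  case search_guilds(ctx, session, "gaming", 25, 0) {
    Ok(resp) ->
      io.println("Matched " <> int.to_string(resp.total) <> " guild(s)")
    Error(Unauthorized) -> io.println("Session expired; sign in again")
    Error(Forbidden(message)) -> io.println(message)
    Error(_) -> io.println("Search request failed")
  }
}
```

Note that `Unauthorized` and `Forbidden` are already imported unqualified from `fluxer_admin/api/common` in this module, so the branches match the error values the function returns.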


@@ -1,191 +0,0 @@
//// Copyright (C) 2026 Fluxer Contributors
////
//// This file is part of Fluxer.
////
//// Fluxer is free software: you can redistribute it and/or modify
//// it under the terms of the GNU Affero General Public License as published by
//// the Free Software Foundation, either version 3 of the License, or
//// (at your option) any later version.
////
//// Fluxer is distributed in the hope that it will be useful,
//// but WITHOUT ANY WARRANTY; without even the implied warranty of
//// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
//// GNU Affero General Public License for more details.
////
//// You should have received a copy of the GNU Affero General Public License
//// along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
import fluxer_admin/api/common.{
type ApiError, Forbidden, NetworkError, NotFound, ServerError, Unauthorized,
}
import fluxer_admin/web
import gleam/dynamic/decode
import gleam/http
import gleam/http/request
import gleam/httpc
import gleam/json
import gleam/option
pub type GuildMember {
GuildMember(
user: GuildMemberUser,
nick: option.Option(String),
avatar: option.Option(String),
roles: List(String),
joined_at: String,
premium_since: option.Option(String),
deaf: Bool,
mute: Bool,
flags: Int,
pending: Bool,
communication_disabled_until: option.Option(String),
)
}
pub type GuildMemberUser {
GuildMemberUser(
id: String,
username: String,
discriminator: String,
avatar: option.Option(String),
bot: Bool,
system: Bool,
public_flags: Int,
)
}
pub type ListGuildMembersResponse {
ListGuildMembersResponse(
members: List(GuildMember),
total: Int,
limit: Int,
offset: Int,
)
}
pub fn list_guild_members(
ctx: web.Context,
session: web.Session,
guild_id: String,
limit: Int,
offset: Int,
) -> Result(ListGuildMembersResponse, ApiError) {
let url = ctx.api_endpoint <> "/admin/guilds/list-members"
let body =
json.object([
#("guild_id", json.string(guild_id)),
#("limit", json.int(limit)),
#("offset", json.int(offset)),
])
|> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let user_decoder = {
use id <- decode.field("id", decode.string)
use username <- decode.field("username", decode.string)
use discriminator <- decode.field("discriminator", decode.string)
use avatar <- decode.field("avatar", decode.optional(decode.string))
use bot <- decode.optional_field("bot", False, decode.bool)
use system <- decode.optional_field("system", False, decode.bool)
use public_flags <- decode.optional_field("public_flags", 0, decode.int)
decode.success(GuildMemberUser(
id: id,
username: username,
discriminator: discriminator,
avatar: avatar,
bot: bot,
system: system,
public_flags: public_flags,
))
}
let member_decoder = {
use user <- decode.field("user", user_decoder)
use nick <- decode.optional_field(
"nick",
option.None,
decode.optional(decode.string),
)
use avatar <- decode.optional_field(
"avatar",
option.None,
decode.optional(decode.string),
)
use roles <- decode.field("roles", decode.list(decode.string))
use joined_at <- decode.field("joined_at", decode.string)
use premium_since <- decode.optional_field(
"premium_since",
option.None,
decode.optional(decode.string),
)
use deaf <- decode.optional_field("deaf", False, decode.bool)
use mute <- decode.optional_field("mute", False, decode.bool)
use flags <- decode.optional_field("flags", 0, decode.int)
use pending <- decode.optional_field("pending", False, decode.bool)
use communication_disabled_until <- decode.optional_field(
"communication_disabled_until",
option.None,
decode.optional(decode.string),
)
decode.success(GuildMember(
user: user,
nick: nick,
avatar: avatar,
roles: roles,
joined_at: joined_at,
premium_since: premium_since,
deaf: deaf,
mute: mute,
flags: flags,
pending: pending,
communication_disabled_until: communication_disabled_until,
))
}
let decoder = {
use members <- decode.field("members", decode.list(member_decoder))
use total <- decode.field("total", decode.int)
use limit <- decode.field("limit", decode.int)
use offset <- decode.field("offset", decode.int)
decode.success(ListGuildMembersResponse(
members: members,
total: total,
limit: limit,
offset: offset,
))
}
case json.parse(resp.body, decoder) {
Ok(result) -> Ok(result)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}


@@ -1,259 +0,0 @@
//// Copyright (C) 2026 Fluxer Contributors
////
//// This file is part of Fluxer.
////
//// Fluxer is free software: you can redistribute it and/or modify
//// it under the terms of the GNU Affero General Public License as published by
//// the Free Software Foundation, either version 3 of the License, or
//// (at your option) any later version.
////
//// Fluxer is distributed in the hope that it will be useful,
//// but WITHOUT ANY WARRANTY; without even the implied warranty of
//// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
//// GNU Affero General Public License for more details.
////
//// You should have received a copy of the GNU Affero General Public License
//// along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
import fluxer_admin/api/common.{
type ApiError, Forbidden, NetworkError, ServerError, Unauthorized,
}
import fluxer_admin/web.{type Context, type Session}
import gleam/dynamic/decode
import gleam/http
import gleam/http/request
import gleam/httpc
import gleam/json
import gleam/option
pub type InstanceConfig {
InstanceConfig(
manual_review_enabled: Bool,
manual_review_schedule_enabled: Bool,
manual_review_schedule_start_hour_utc: Int,
manual_review_schedule_end_hour_utc: Int,
manual_review_active_now: Bool,
registration_alerts_webhook_url: String,
system_alerts_webhook_url: String,
)
}
fn instance_config_decoder() {
use manual_review_enabled <- decode.field(
"manual_review_enabled",
decode.bool,
)
use manual_review_schedule_enabled <- decode.field(
"manual_review_schedule_enabled",
decode.bool,
)
use manual_review_schedule_start_hour_utc <- decode.field(
"manual_review_schedule_start_hour_utc",
decode.int,
)
use manual_review_schedule_end_hour_utc <- decode.field(
"manual_review_schedule_end_hour_utc",
decode.int,
)
use manual_review_active_now <- decode.field(
"manual_review_active_now",
decode.bool,
)
use registration_alerts_webhook_url <- decode.field(
"registration_alerts_webhook_url",
decode.optional(decode.string),
)
use system_alerts_webhook_url <- decode.field(
"system_alerts_webhook_url",
decode.optional(decode.string),
)
decode.success(InstanceConfig(
manual_review_enabled:,
manual_review_schedule_enabled:,
manual_review_schedule_start_hour_utc:,
manual_review_schedule_end_hour_utc:,
manual_review_active_now:,
registration_alerts_webhook_url: option.unwrap(
registration_alerts_webhook_url,
"",
),
system_alerts_webhook_url: option.unwrap(system_alerts_webhook_url, ""),
))
}
pub type SnowflakeReservation {
SnowflakeReservation(
email: String,
snowflake: String,
updated_at: option.Option(String),
)
}
fn snowflake_reservation_decoder() {
use email <- decode.field("email", decode.string)
use snowflake <- decode.field("snowflake", decode.string)
use updated_at <- decode.field("updated_at", decode.optional(decode.string))
decode.success(SnowflakeReservation(
email:,
snowflake:,
updated_at:,
))
}
pub fn get_instance_config(
ctx: Context,
session: Session,
) -> Result(InstanceConfig, ApiError) {
let url = ctx.api_endpoint <> "/admin/instance-config/get"
let body = json.object([]) |> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
case json.parse(resp.body, instance_config_decoder()) {
Ok(config) -> Ok(config)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> Error(Forbidden("Access denied"))
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
pub fn update_instance_config(
ctx: Context,
session: Session,
manual_review_enabled: Bool,
manual_review_schedule_enabled: Bool,
manual_review_schedule_start_hour_utc: Int,
manual_review_schedule_end_hour_utc: Int,
registration_alerts_webhook_url: String,
system_alerts_webhook_url: String,
) -> Result(InstanceConfig, ApiError) {
let url = ctx.api_endpoint <> "/admin/instance-config/update"
let registration_webhook_json = case registration_alerts_webhook_url {
"" -> json.null()
url -> json.string(url)
}
let system_webhook_json = case system_alerts_webhook_url {
"" -> json.null()
url -> json.string(url)
}
let body =
json.object([
#("manual_review_enabled", json.bool(manual_review_enabled)),
#(
"manual_review_schedule_enabled",
json.bool(manual_review_schedule_enabled),
),
#(
"manual_review_schedule_start_hour_utc",
json.int(manual_review_schedule_start_hour_utc),
),
#(
"manual_review_schedule_end_hour_utc",
json.int(manual_review_schedule_end_hour_utc),
),
#("registration_alerts_webhook_url", registration_webhook_json),
#("system_alerts_webhook_url", system_webhook_json),
])
|> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
case json.parse(resp.body, instance_config_decoder()) {
Ok(config) -> Ok(config)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> Error(Forbidden("Access denied"))
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
pub fn list_snowflake_reservations(
ctx: Context,
session: Session,
) -> Result(List(SnowflakeReservation), ApiError) {
let url = ctx.api_endpoint <> "/admin/snowflake-reservations/list"
let body = json.object([]) |> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let decoder = {
use reservations <- decode.field(
"reservations",
decode.list(snowflake_reservation_decoder()),
)
decode.success(reservations)
}
case json.parse(resp.body, decoder) {
Ok(reservations) -> Ok(reservations)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> Error(Forbidden("Access denied"))
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
pub fn add_snowflake_reservation(
ctx: Context,
session: Session,
email: String,
snowflake: String,
) -> Result(Nil, ApiError) {
let fields = [
#("email", json.string(email)),
#("snowflake", json.string(snowflake)),
]
common.admin_post_simple(
ctx,
session,
"/admin/snowflake-reservations/add",
fields,
)
}
pub fn delete_snowflake_reservation(
ctx: Context,
session: Session,
email: String,
) -> Result(Nil, ApiError) {
let fields = [#("email", json.string(email))]
common.admin_post_simple(
ctx,
session,
"/admin/snowflake-reservations/delete",
fields,
)
}
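A hedged sketch of how the two reservation helpers above might be chained by a caller in the same module (the wrapper name `reserve_and_verify` and the extra `gleam/list`/`gleam/result` imports are assumptions, not part of the original file):

```gleam
import gleam/list
import gleam/result

// Hypothetical: reserve a snowflake for an email address, then
// re-fetch the listing and confirm the reservation is present.
pub fn reserve_and_verify(
  ctx: Context,
  session: Session,
  email: String,
  snowflake: String,
) -> Result(Bool, ApiError) {
  use _ <- result.try(add_snowflake_reservation(ctx, session, email, snowflake))
  use reservations <- result.try(list_snowflake_reservations(ctx, session))
  Ok(list.any(reservations, fn(r) { r.email == email }))
}
```

Because both helpers return `Result(_, ApiError)`, `result.try` short-circuits on the first `Error`, so a failed add never reaches the list call.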


@@ -1,508 +0,0 @@
//// Copyright (C) 2026 Fluxer Contributors
////
//// This file is part of Fluxer.
////
//// Fluxer is free software: you can redistribute it and/or modify
//// it under the terms of the GNU Affero General Public License as published by
//// the Free Software Foundation, either version 3 of the License, or
//// (at your option) any later version.
////
//// Fluxer is distributed in the hope that it will be useful,
//// but WITHOUT ANY WARRANTY; without even the implied warranty of
//// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
//// GNU Affero General Public License for more details.
////
//// You should have received a copy of the GNU Affero General Public License
//// along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
import fluxer_admin/api/common.{
type ApiError, Forbidden, NetworkError, NotFound, ServerError, Unauthorized,
admin_post_with_audit,
}
import fluxer_admin/web
import gleam/dynamic/decode
import gleam/http
import gleam/http/request
import gleam/httpc
import gleam/json
import gleam/option
pub type MessageAttachment {
MessageAttachment(filename: String, url: String)
}
pub type Message {
Message(
id: String,
channel_id: String,
author_id: String,
author_username: String,
content: String,
timestamp: String,
attachments: List(MessageAttachment),
)
}
pub type LookupMessageResponse {
LookupMessageResponse(messages: List(Message), message_id: String)
}
pub type MessageShredResponse {
MessageShredResponse(job_id: String, requested: option.Option(Int))
}
pub type DeleteAllUserMessagesResponse {
DeleteAllUserMessagesResponse(
dry_run: Bool,
channel_count: Int,
message_count: Int,
job_id: option.Option(String),
)
}
pub type MessageShredStatus {
MessageShredStatus(
status: String,
requested: option.Option(Int),
total: option.Option(Int),
processed: option.Option(Int),
skipped: option.Option(Int),
started_at: option.Option(String),
completed_at: option.Option(String),
failed_at: option.Option(String),
error: option.Option(String),
)
}
pub fn delete_message(
ctx: web.Context,
session: web.Session,
channel_id: String,
message_id: String,
audit_log_reason: option.Option(String),
) -> Result(Nil, ApiError) {
let fields = [
#("channel_id", json.string(channel_id)),
#("message_id", json.string(message_id)),
]
admin_post_with_audit(
ctx,
session,
"/admin/messages/delete",
fields,
audit_log_reason,
)
}
pub fn lookup_message(
ctx: web.Context,
session: web.Session,
channel_id: String,
message_id: String,
context_limit: Int,
) -> Result(LookupMessageResponse, ApiError) {
let url = ctx.api_endpoint <> "/admin/messages/lookup"
let body =
json.object([
#("channel_id", json.string(channel_id)),
#("message_id", json.string(message_id)),
#("context_limit", json.int(context_limit)),
])
|> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let attachment_decoder = {
use filename <- decode.field("filename", decode.string)
use url <- decode.field("url", decode.string)
decode.success(MessageAttachment(filename: filename, url: url))
}
let message_decoder = {
use id <- decode.field("id", decode.string)
use channel_id <- decode.field("channel_id", decode.string)
use author_id <- decode.field("author_id", decode.string)
use author_username <- decode.field("author_username", decode.string)
use content <- decode.field("content", decode.string)
use timestamp <- decode.field("timestamp", decode.string)
use attachments <- decode.optional_field(
"attachments",
[],
decode.list(attachment_decoder),
)
decode.success(Message(
id: id,
channel_id: channel_id,
author_id: author_id,
author_username: author_username,
content: content,
timestamp: timestamp,
attachments: attachments,
))
}
let decoder = {
use messages <- decode.field("messages", decode.list(message_decoder))
use message_id <- decode.field("message_id", decode.string)
decode.success(LookupMessageResponse(
messages: messages,
message_id: message_id,
))
}
case json.parse(resp.body, decoder) {
Ok(result) -> Ok(result)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
pub fn queue_message_shred(
ctx: web.Context,
session: web.Session,
user_id: String,
entries: json.Json,
) -> Result(MessageShredResponse, ApiError) {
let url = ctx.api_endpoint <> "/admin/messages/shred"
let body =
json.object([
#("user_id", json.string(user_id)),
#("entries", entries),
])
|> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let decoder = {
use job_id <- decode.field("job_id", decode.string)
use requested <- decode.optional_field(
"requested",
option.None,
decode.optional(decode.int),
)
decode.success(MessageShredResponse(
job_id: job_id,
requested: requested,
))
}
case json.parse(resp.body, decoder) {
Ok(result) -> Ok(result)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
pub fn delete_all_user_messages(
ctx: web.Context,
session: web.Session,
user_id: String,
dry_run: Bool,
) -> Result(DeleteAllUserMessagesResponse, ApiError) {
let url = ctx.api_endpoint <> "/admin/messages/delete-all"
let body =
json.object([
#("user_id", json.string(user_id)),
#("dry_run", json.bool(dry_run)),
])
|> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let decoder = {
use dry_run <- decode.field("dry_run", decode.bool)
use channel_count <- decode.field("channel_count", decode.int)
use message_count <- decode.field("message_count", decode.int)
use job_id <- decode.optional_field(
"job_id",
option.None,
decode.optional(decode.string),
)
decode.success(DeleteAllUserMessagesResponse(
dry_run: dry_run,
channel_count: channel_count,
message_count: message_count,
job_id: job_id,
))
}
case json.parse(resp.body, decoder) {
Ok(result) -> Ok(result)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
pub fn get_message_shred_status(
ctx: web.Context,
session: web.Session,
job_id: String,
) -> Result(MessageShredStatus, ApiError) {
let url = ctx.api_endpoint <> "/admin/messages/shred-status"
let body =
json.object([#("job_id", json.string(job_id))])
|> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let decoder = {
use status <- decode.field("status", decode.string)
use requested <- decode.optional_field(
"requested",
option.None,
decode.optional(decode.int),
)
use total <- decode.optional_field(
"total",
option.None,
decode.optional(decode.int),
)
use processed <- decode.optional_field(
"processed",
option.None,
decode.optional(decode.int),
)
use skipped <- decode.optional_field(
"skipped",
option.None,
decode.optional(decode.int),
)
use started_at <- decode.optional_field(
"started_at",
option.None,
decode.optional(decode.string),
)
use completed_at <- decode.optional_field(
"completed_at",
option.None,
decode.optional(decode.string),
)
use failed_at <- decode.optional_field(
"failed_at",
option.None,
decode.optional(decode.string),
)
use error <- decode.optional_field(
"error",
option.None,
decode.optional(decode.string),
)
decode.success(MessageShredStatus(
status: status,
requested: requested,
total: total,
processed: processed,
skipped: skipped,
started_at: started_at,
completed_at: completed_at,
failed_at: failed_at,
error: error,
))
}
case json.parse(resp.body, decoder) {
Ok(result) -> Ok(result)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
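// Usage sketch (not part of the original module): the shred endpoints above
// compose as queue-then-poll. `run_shred` and `poll_shred` are hypothetical
// helpers shown for illustration; a real caller would poll on a timer rather
// than recurse immediately.
//
// fn run_shred(ctx: web.Context, session: web.Session, user_id: String, entries: json.Json) {
//   case queue_message_shred(ctx, session, user_id, entries) {
//     Ok(queued) -> poll_shred(ctx, session, queued.job_id)
//     Error(err) -> Error(err)
//   }
// }
//
// fn poll_shred(ctx: web.Context, session: web.Session, job_id: String) {
//   case get_message_shred_status(ctx, session, job_id) {
//     Ok(status) ->
//       case status.status {
//         "completed" | "failed" -> Ok(status)
//         _ -> poll_shred(ctx, session, job_id)
//       }
//     Error(err) -> Error(err)
//   }
// }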
pub fn lookup_message_by_attachment(
ctx: web.Context,
session: web.Session,
channel_id: String,
attachment_id: String,
filename: String,
context_limit: Int,
) -> Result(LookupMessageResponse, ApiError) {
let url = ctx.api_endpoint <> "/admin/messages/lookup-by-attachment"
let body =
json.object([
#("channel_id", json.string(channel_id)),
#("attachment_id", json.string(attachment_id)),
#("filename", json.string(filename)),
#("context_limit", json.int(context_limit)),
])
|> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let attachment_decoder = {
use filename <- decode.field("filename", decode.string)
use url <- decode.field("url", decode.string)
decode.success(MessageAttachment(filename: filename, url: url))
}
let message_decoder = {
use id <- decode.field("id", decode.string)
use channel_id <- decode.field("channel_id", decode.string)
use author_id <- decode.field("author_id", decode.string)
use author_username <- decode.field("author_username", decode.string)
use content <- decode.field("content", decode.string)
use timestamp <- decode.field("timestamp", decode.string)
use attachments <- decode.optional_field(
"attachments",
[],
decode.list(attachment_decoder),
)
decode.success(Message(
id: id,
channel_id: channel_id,
author_id: author_id,
author_username: author_username,
content: content,
timestamp: timestamp,
attachments: attachments,
))
}
let decoder = {
use messages <- decode.field("messages", decode.list(message_decoder))
use message_id <- decode.field("message_id", decode.string)
decode.success(LookupMessageResponse(
messages: messages,
message_id: message_id,
))
}
case json.parse(resp.body, decoder) {
Ok(result) -> Ok(result)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}


@@ -1,264 +0,0 @@
//// Copyright (C) 2026 Fluxer Contributors
////
//// This file is part of Fluxer.
////
//// Fluxer is free software: you can redistribute it and/or modify
//// it under the terms of the GNU Affero General Public License as published by
//// the Free Software Foundation, either version 3 of the License, or
//// (at your option) any later version.
////
//// Fluxer is distributed in the hope that it will be useful,
//// but WITHOUT ANY WARRANTY; without even the implied warranty of
//// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
//// GNU Affero General Public License for more details.
////
//// You should have received a copy of the GNU Affero General Public License
//// along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
import fluxer_admin/api/common.{
type ApiError, NetworkError, NotFound, ServerError,
}
import fluxer_admin/web.{type Context}
import gleam/dynamic/decode
import gleam/http
import gleam/http/request
import gleam/httpc
import gleam/int
import gleam/json
import gleam/option.{type Option, None, Some}
pub type DataPoint {
DataPoint(timestamp: Int, value: Float)
}
pub type QueryResponse {
QueryResponse(metric: String, data: List(DataPoint))
}
pub type TopEntry {
TopEntry(label: String, value: Float)
}
pub type AggregateResponse {
AggregateResponse(
metric: String,
total: Float,
breakdown: option.Option(List(TopEntry)),
)
}
pub type TopQueryResponse {
TopQueryResponse(metric: String, entries: List(TopEntry))
}
pub type CrashEvent {
CrashEvent(
id: String,
timestamp: Int,
guild_id: String,
stacktrace: String,
notified: Bool,
)
}
pub type CrashesResponse {
CrashesResponse(crashes: List(CrashEvent))
}
pub fn query_metrics(
ctx: Context,
metric: String,
start: Option(String),
end: Option(String),
) -> Result(QueryResponse, ApiError) {
case ctx.metrics_endpoint {
None -> Error(NotFound)
Some(endpoint) -> {
let query_params = case start, end {
Some(s), Some(e) ->
"?metric=" <> metric <> "&start=" <> s <> "&end=" <> e
Some(s), None -> "?metric=" <> metric <> "&start=" <> s
None, Some(e) -> "?metric=" <> metric <> "&end=" <> e
None, None -> "?metric=" <> metric
}
let url = endpoint <> "/query" <> query_params
let assert Ok(req) = request.to(url)
let req = req |> request.set_method(http.Get)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let data_point_decoder = {
use timestamp <- decode.field("timestamp", decode.int)
use value <- decode.field("value", decode.float)
decode.success(DataPoint(timestamp: timestamp, value: value))
}
let decoder = {
use metric_name <- decode.field("metric", decode.string)
use data <- decode.field("data", decode.list(data_point_decoder))
decode.success(QueryResponse(metric: metric_name, data: data))
}
case json.parse(resp.body, decoder) {
Ok(result) -> Ok(result)
Error(_) -> Error(ServerError)
}
}
Ok(_) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
}
}
pub fn query_aggregate(
ctx: Context,
metric: String,
) -> Result(AggregateResponse, ApiError) {
query_aggregate_grouped(ctx, metric, option.None)
}
fn top_entry_decoder() -> decode.Decoder(TopEntry) {
{
use label <- decode.field("label", decode.string)
use value <- decode.field("value", decode.float)
decode.success(TopEntry(label: label, value: value))
}
}
pub fn query_aggregate_grouped(
ctx: Context,
metric: String,
group_by: option.Option(String),
) -> Result(AggregateResponse, ApiError) {
case ctx.metrics_endpoint {
None -> Error(NotFound)
Some(endpoint) -> {
let query_params = case group_by {
option.Some(group) -> "?metric=" <> metric <> "&group_by=" <> group
option.None -> "?metric=" <> metric
}
let url = endpoint <> "/query/aggregate" <> query_params
let assert Ok(req) = request.to(url)
let req = req |> request.set_method(http.Get)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let decoder = {
use metric_name <- decode.field("metric", decode.string)
use total <- decode.field("total", decode.float)
use breakdown <- decode.optional_field(
"breakdown",
option.None,
decode.list(top_entry_decoder()) |> decode.map(option.Some),
)
decode.success(AggregateResponse(
metric: metric_name,
total: total,
breakdown: breakdown,
))
}
case json.parse(resp.body, decoder) {
Ok(result) -> Ok(result)
Error(_) -> Error(ServerError)
}
}
Ok(_) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
}
}
pub fn query_top(
ctx: Context,
metric: String,
limit: Int,
) -> Result(TopQueryResponse, ApiError) {
case ctx.metrics_endpoint {
None -> Error(NotFound)
Some(endpoint) -> {
let url =
endpoint
<> "/query/top?metric="
<> metric
<> "&limit="
<> int.to_string(limit)
let assert Ok(req) = request.to(url)
let req = req |> request.set_method(http.Get)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let decoder = {
use metric_name <- decode.field("metric", decode.string)
use entries <- decode.field(
"entries",
decode.list(top_entry_decoder()),
)
decode.success(TopQueryResponse(
metric: metric_name,
entries: entries,
))
}
case json.parse(resp.body, decoder) {
Ok(result) -> Ok(result)
Error(_) -> Error(ServerError)
}
}
Ok(_) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
}
}
pub fn query_crashes(
ctx: Context,
limit: Int,
) -> Result(CrashesResponse, ApiError) {
case ctx.metrics_endpoint {
None -> Error(NotFound)
Some(endpoint) -> {
let url = endpoint <> "/query/crashes?limit=" <> int.to_string(limit)
let assert Ok(req) = request.to(url)
let req = req |> request.set_method(http.Get)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let crash_decoder = {
use id <- decode.field("id", decode.string)
use timestamp <- decode.field("timestamp", decode.int)
use guild_id <- decode.field("guild_id", decode.string)
use stacktrace <- decode.field("stacktrace", decode.string)
use notified <- decode.field("notified", decode.bool)
decode.success(CrashEvent(
id: id,
timestamp: timestamp,
guild_id: guild_id,
stacktrace: stacktrace,
notified: notified,
))
}
let decoder = {
use crashes <- decode.field("crashes", decode.list(crash_decoder))
decode.success(CrashesResponse(crashes: crashes))
}
case json.parse(resp.body, decoder) {
Ok(result) -> Ok(result)
Error(_) -> Error(ServerError)
}
}
Ok(_) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
}
}
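// Usage sketch (not part of the original module): combining the query helpers
// above for a single dashboard panel. `load_metric_panel` is a hypothetical
// caller; note that every helper returns Error(NotFound) when no
// metrics_endpoint is configured on the context.
//
// fn load_metric_panel(ctx: Context, metric: String) {
//   let series = query_metrics(ctx, metric, option.None, option.None)
//   let total = query_aggregate(ctx, metric)
//   #(series, total)
// }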

File diff suppressed because it is too large.


@@ -1,728 +0,0 @@
//// Copyright (C) 2026 Fluxer Contributors
////
//// This file is part of Fluxer.
////
//// Fluxer is free software: you can redistribute it and/or modify
//// it under the terms of the GNU Affero General Public License as published by
//// the Free Software Foundation, either version 3 of the License, or
//// (at your option) any later version.
////
//// Fluxer is distributed in the hope that it will be useful,
//// but WITHOUT ANY WARRANTY; without even the implied warranty of
//// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
//// GNU Affero General Public License for more details.
////
//// You should have received a copy of the GNU Affero General Public License
//// along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
import fluxer_admin/api/common.{
type ApiError, Forbidden, NetworkError, NotFound, ServerError, Unauthorized,
admin_post_with_audit,
}
import fluxer_admin/api/messages.{type Message, Message, MessageAttachment}
import fluxer_admin/web
import gleam/dynamic/decode
import gleam/http
import gleam/http/request
import gleam/httpc
import gleam/int
import gleam/io
import gleam/json
import gleam/option
import gleam/string
pub type Report {
Report(
report_id: String,
reporter_id: option.Option(String),
reporter_tag: option.Option(String),
reporter_username: option.Option(String),
reporter_discriminator: option.Option(String),
reporter_email: option.Option(String),
reporter_full_legal_name: option.Option(String),
reporter_country_of_residence: option.Option(String),
reported_at: String,
status: Int,
report_type: Int,
category: String,
additional_info: option.Option(String),
reported_user_id: option.Option(String),
reported_user_tag: option.Option(String),
reported_user_username: option.Option(String),
reported_user_discriminator: option.Option(String),
reported_user_avatar_hash: option.Option(String),
reported_guild_id: option.Option(String),
reported_guild_name: option.Option(String),
reported_guild_icon_hash: option.Option(String),
reported_message_id: option.Option(String),
reported_channel_id: option.Option(String),
reported_channel_name: option.Option(String),
reported_guild_invite_code: option.Option(String),
resolved_at: option.Option(String),
resolved_by_admin_id: option.Option(String),
public_comment: option.Option(String),
message_context: List(Message),
)
}
pub type ListReportsResponse {
ListReportsResponse(reports: List(Report))
}
pub type SearchReportResult {
SearchReportResult(
report_id: String,
reporter_id: option.Option(String),
reporter_tag: option.Option(String),
reporter_username: option.Option(String),
reporter_discriminator: option.Option(String),
reporter_email: option.Option(String),
reporter_full_legal_name: option.Option(String),
reporter_country_of_residence: option.Option(String),
reported_at: String,
status: Int,
report_type: Int,
category: String,
additional_info: option.Option(String),
reported_user_id: option.Option(String),
reported_user_tag: option.Option(String),
reported_user_username: option.Option(String),
reported_user_discriminator: option.Option(String),
reported_user_avatar_hash: option.Option(String),
reported_guild_id: option.Option(String),
reported_guild_name: option.Option(String),
reported_guild_invite_code: option.Option(String),
)
}
pub type SearchReportsResponse {
SearchReportsResponse(
reports: List(SearchReportResult),
total: Int,
offset: Int,
limit: Int,
)
}
pub fn list_reports(
ctx: web.Context,
session: web.Session,
status: Int,
limit: Int,
offset: option.Option(Int),
) -> Result(ListReportsResponse, ApiError) {
let url = ctx.api_endpoint <> "/admin/reports/list"
let mut_fields = [#("status", json.int(status)), #("limit", json.int(limit))]
let mut_fields = case offset {
option.Some(o) -> [#("offset", json.int(o)), ..mut_fields]
option.None -> mut_fields
}
let body = json.object(mut_fields) |> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let report_decoder = {
use report_id <- decode.field("report_id", decode.string)
use reporter_id <- decode.field(
"reporter_id",
decode.optional(decode.string),
)
use reporter_tag <- decode.field(
"reporter_tag",
decode.optional(decode.string),
)
use reporter_username <- decode.field(
"reporter_username",
decode.optional(decode.string),
)
use reporter_discriminator <- decode.field(
"reporter_discriminator",
decode.optional(decode.string),
)
use reporter_email <- decode.field(
"reporter_email",
decode.optional(decode.string),
)
use reporter_full_legal_name <- decode.field(
"reporter_full_legal_name",
decode.optional(decode.string),
)
use reporter_country_of_residence <- decode.field(
"reporter_country_of_residence",
decode.optional(decode.string),
)
use reported_at <- decode.field("reported_at", decode.string)
use status_val <- decode.field("status", decode.int)
use report_type <- decode.field("report_type", decode.int)
use category <- decode.field("category", decode.string)
use additional_info <- decode.field(
"additional_info",
decode.optional(decode.string),
)
use reported_user_id <- decode.field(
"reported_user_id",
decode.optional(decode.string),
)
use reported_user_tag <- decode.field(
"reported_user_tag",
decode.optional(decode.string),
)
use reported_user_username <- decode.field(
"reported_user_username",
decode.optional(decode.string),
)
use reported_user_discriminator <- decode.field(
"reported_user_discriminator",
decode.optional(decode.string),
)
use reported_user_avatar_hash <- decode.field(
"reported_user_avatar_hash",
decode.optional(decode.string),
)
use reported_guild_id <- decode.field(
"reported_guild_id",
decode.optional(decode.string),
)
use reported_guild_name <- decode.field(
"reported_guild_name",
decode.optional(decode.string),
)
let reported_guild_icon_hash = option.None
use reported_guild_invite_code <- decode.field(
"reported_guild_invite_code",
decode.optional(decode.string),
)
use reported_message_id <- decode.field(
"reported_message_id",
decode.optional(decode.string),
)
use reported_channel_id <- decode.field(
"reported_channel_id",
decode.optional(decode.string),
)
use reported_channel_name <- decode.field(
"reported_channel_name",
decode.optional(decode.string),
)
use resolved_at <- decode.field(
"resolved_at",
decode.optional(decode.string),
)
use resolved_by_admin_id <- decode.field(
"resolved_by_admin_id",
decode.optional(decode.string),
)
use public_comment <- decode.field(
"public_comment",
decode.optional(decode.string),
)
decode.success(
Report(
report_id: report_id,
reporter_id: reporter_id,
reporter_tag: reporter_tag,
reporter_username: reporter_username,
reporter_discriminator: reporter_discriminator,
reporter_email: reporter_email,
reporter_full_legal_name: reporter_full_legal_name,
reporter_country_of_residence: reporter_country_of_residence,
reported_at: reported_at,
status: status_val,
report_type: report_type,
category: category,
additional_info: additional_info,
reported_user_id: reported_user_id,
reported_user_tag: reported_user_tag,
reported_user_username: reported_user_username,
reported_user_discriminator: reported_user_discriminator,
reported_user_avatar_hash: reported_user_avatar_hash,
reported_guild_id: reported_guild_id,
reported_guild_name: reported_guild_name,
reported_guild_icon_hash: reported_guild_icon_hash,
reported_message_id: reported_message_id,
reported_channel_id: reported_channel_id,
reported_channel_name: reported_channel_name,
reported_guild_invite_code: reported_guild_invite_code,
resolved_at: resolved_at,
resolved_by_admin_id: resolved_by_admin_id,
public_comment: public_comment,
message_context: [],
),
)
}
let decoder = {
use reports <- decode.field("reports", decode.list(report_decoder))
decode.success(ListReportsResponse(reports: reports))
}
case json.parse(resp.body, decoder) {
Ok(response) -> Ok(response)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
pub fn resolve_report(
ctx: web.Context,
session: web.Session,
report_id: String,
public_comment: option.Option(String),
audit_log_reason: option.Option(String),
) -> Result(Nil, ApiError) {
let fields = [#("report_id", json.string(report_id))]
let fields = case public_comment {
option.Some(comment) -> [
#("public_comment", json.string(comment)),
..fields
]
option.None -> fields
}
admin_post_with_audit(
ctx,
session,
"/admin/reports/resolve",
fields,
audit_log_reason,
)
}
pub fn search_reports(
ctx: web.Context,
session: web.Session,
query: option.Option(String),
status_filter: option.Option(Int),
type_filter: option.Option(Int),
category_filter: option.Option(String),
limit: Int,
offset: Int,
) -> Result(SearchReportsResponse, ApiError) {
let mut_fields = [#("limit", json.int(limit)), #("offset", json.int(offset))]
let mut_fields = case query {
option.Some(q) if q != "" -> [#("query", json.string(q)), ..mut_fields]
_ -> mut_fields
}
let mut_fields = case status_filter {
option.Some(s) -> [#("status", json.int(s)), ..mut_fields]
option.None -> mut_fields
}
let mut_fields = case type_filter {
option.Some(t) -> [#("report_type", json.int(t)), ..mut_fields]
option.None -> mut_fields
}
let mut_fields = case category_filter {
option.Some(c) if c != "" -> [#("category", json.string(c)), ..mut_fields]
_ -> mut_fields
}
let url = ctx.api_endpoint <> "/admin/reports/search"
let body = json.object(mut_fields) |> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let report_decoder = {
use report_id <- decode.field("report_id", decode.string)
use reporter_id <- decode.field(
"reporter_id",
decode.optional(decode.string),
)
use reporter_tag <- decode.field(
"reporter_tag",
decode.optional(decode.string),
)
use reporter_username <- decode.field(
"reporter_username",
decode.optional(decode.string),
)
use reporter_discriminator <- decode.field(
"reporter_discriminator",
decode.optional(decode.string),
)
use reporter_email <- decode.field(
"reporter_email",
decode.optional(decode.string),
)
use reporter_full_legal_name <- decode.field(
"reporter_full_legal_name",
decode.optional(decode.string),
)
use reporter_country_of_residence <- decode.field(
"reporter_country_of_residence",
decode.optional(decode.string),
)
use reported_at <- decode.field("reported_at", decode.string)
use status_val <- decode.field("status", decode.int)
use report_type <- decode.field("report_type", decode.int)
use category <- decode.field("category", decode.string)
use additional_info <- decode.field(
"additional_info",
decode.optional(decode.string),
)
use reported_user_id <- decode.field(
"reported_user_id",
decode.optional(decode.string),
)
use reported_user_tag <- decode.field(
"reported_user_tag",
decode.optional(decode.string),
)
use reported_user_username <- decode.field(
"reported_user_username",
decode.optional(decode.string),
)
use reported_user_discriminator <- decode.field(
"reported_user_discriminator",
decode.optional(decode.string),
)
use reported_user_avatar_hash <- decode.field(
"reported_user_avatar_hash",
decode.optional(decode.string),
)
use reported_guild_id <- decode.field(
"reported_guild_id",
decode.optional(decode.string),
)
use reported_guild_name <- decode.field(
"reported_guild_name",
decode.optional(decode.string),
)
use reported_guild_invite_code <- decode.field(
"reported_guild_invite_code",
decode.optional(decode.string),
)
decode.success(SearchReportResult(
report_id: report_id,
reporter_id: reporter_id,
reporter_tag: reporter_tag,
reporter_username: reporter_username,
reporter_discriminator: reporter_discriminator,
reporter_email: reporter_email,
reporter_full_legal_name: reporter_full_legal_name,
reporter_country_of_residence: reporter_country_of_residence,
reported_at: reported_at,
status: status_val,
report_type: report_type,
category: category,
additional_info: additional_info,
reported_user_id: reported_user_id,
reported_user_tag: reported_user_tag,
reported_user_username: reported_user_username,
reported_user_discriminator: reported_user_discriminator,
reported_user_avatar_hash: reported_user_avatar_hash,
reported_guild_id: reported_guild_id,
reported_guild_name: reported_guild_name,
reported_guild_invite_code: reported_guild_invite_code,
))
}
let decoder = {
use reports <- decode.field("reports", decode.list(report_decoder))
use total <- decode.field("total", decode.int)
use offset_val <- decode.field("offset", decode.int)
use limit_val <- decode.field("limit", decode.int)
decode.success(SearchReportsResponse(
reports: reports,
total: total,
offset: offset_val,
limit: limit_val,
))
}
case json.parse(resp.body, decoder) {
Ok(result) -> Ok(result)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
pub fn get_report_detail(
ctx: web.Context,
session: web.Session,
report_id: String,
) -> Result(Report, ApiError) {
let url = ctx.api_endpoint <> "/admin/reports/" <> report_id
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Get)
|> request.set_header("authorization", "Bearer " <> session.access_token)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let attachment_decoder = {
use filename <- decode.field("filename", decode.string)
use url <- decode.field("url", decode.string)
decode.success(MessageAttachment(filename: filename, url: url))
}
let context_message_decoder = {
use id <- decode.field("id", decode.string)
use channel_id <- decode.optional_field("channel_id", "", decode.string)
use author_id <- decode.optional_field("author_id", "", decode.string)
use author_username <- decode.optional_field(
"author_username",
"",
decode.string,
)
use content <- decode.optional_field("content", "", decode.string)
use timestamp <- decode.optional_field("timestamp", "", decode.string)
use attachments <- decode.optional_field(
"attachments",
[],
decode.list(attachment_decoder),
)
decode.success(Message(
id: id,
channel_id: channel_id,
author_id: author_id,
author_username: author_username,
content: content,
timestamp: timestamp,
attachments: attachments,
))
}
let report_decoder = {
use report_id <- decode.field("report_id", decode.string)
use reporter_id <- decode.field(
"reporter_id",
decode.optional(decode.string),
)
use reporter_tag <- decode.field(
"reporter_tag",
decode.optional(decode.string),
)
use reporter_username <- decode.field(
"reporter_username",
decode.optional(decode.string),
)
use reporter_discriminator <- decode.field(
"reporter_discriminator",
decode.optional(decode.string),
)
use reporter_email <- decode.field(
"reporter_email",
decode.optional(decode.string),
)
use reporter_full_legal_name <- decode.field(
"reporter_full_legal_name",
decode.optional(decode.string),
)
use reporter_country_of_residence <- decode.field(
"reporter_country_of_residence",
decode.optional(decode.string),
)
use reported_at <- decode.field("reported_at", decode.string)
use status_val <- decode.field("status", decode.int)
use report_type <- decode.field("report_type", decode.int)
use category <- decode.field("category", decode.string)
use additional_info <- decode.field(
"additional_info",
decode.optional(decode.string),
)
use reported_user_id <- decode.field(
"reported_user_id",
decode.optional(decode.string),
)
use reported_user_tag <- decode.field(
"reported_user_tag",
decode.optional(decode.string),
)
use reported_user_username <- decode.field(
"reported_user_username",
decode.optional(decode.string),
)
use reported_user_discriminator <- decode.field(
"reported_user_discriminator",
decode.optional(decode.string),
)
use reported_user_avatar_hash <- decode.field(
"reported_user_avatar_hash",
decode.optional(decode.string),
)
use reported_guild_id <- decode.field(
"reported_guild_id",
decode.optional(decode.string),
)
use reported_guild_name <- decode.field(
"reported_guild_name",
decode.optional(decode.string),
)
use reported_guild_icon_hash <- decode.optional_field(
"reported_guild_icon_hash",
option.None,
decode.optional(decode.string),
)
use reported_guild_invite_code <- decode.field(
"reported_guild_invite_code",
decode.optional(decode.string),
)
use reported_message_id <- decode.field(
"reported_message_id",
decode.optional(decode.string),
)
use reported_channel_id <- decode.field(
"reported_channel_id",
decode.optional(decode.string),
)
use reported_channel_name <- decode.field(
"reported_channel_name",
decode.optional(decode.string),
)
use resolved_at <- decode.field(
"resolved_at",
decode.optional(decode.string),
)
use resolved_by_admin_id <- decode.field(
"resolved_by_admin_id",
decode.optional(decode.string),
)
use public_comment <- decode.field(
"public_comment",
decode.optional(decode.string),
)
use message_context <- decode.optional_field(
"message_context",
[],
decode.list(context_message_decoder),
)
decode.success(Report(
report_id: report_id,
reporter_id: reporter_id,
reporter_tag: reporter_tag,
reporter_username: reporter_username,
reporter_discriminator: reporter_discriminator,
reporter_email: reporter_email,
reporter_full_legal_name: reporter_full_legal_name,
reporter_country_of_residence: reporter_country_of_residence,
reported_at: reported_at,
status: status_val,
report_type: report_type,
category: category,
additional_info: additional_info,
reported_user_id: reported_user_id,
reported_user_tag: reported_user_tag,
reported_user_username: reported_user_username,
reported_user_discriminator: reported_user_discriminator,
reported_user_avatar_hash: reported_user_avatar_hash,
reported_guild_id: reported_guild_id,
reported_guild_name: reported_guild_name,
reported_guild_icon_hash: reported_guild_icon_hash,
reported_message_id: reported_message_id,
reported_channel_id: reported_channel_id,
reported_channel_name: reported_channel_name,
reported_guild_invite_code: reported_guild_invite_code,
resolved_at: resolved_at,
resolved_by_admin_id: resolved_by_admin_id,
public_comment: public_comment,
message_context: message_context,
))
}
case json.parse(resp.body, report_decoder) {
Ok(result) -> Ok(result)
Error(err) -> {
io.println(
"reports.get_report_detail decode failed: "
<> string.inspect(err)
<> " body="
<> string.slice(resp.body, 0, 4000),
)
Error(ServerError)
}
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(resp) -> {
io.println(
"reports.get_report_detail unexpected status "
<> int.to_string(resp.status)
<> " body="
<> string.slice(resp.body, 0, 1000),
)
Error(ServerError)
}
Error(err) -> {
io.println(
"reports.get_report_detail network error: " <> string.inspect(err),
)
Error(NetworkError)
}
}
}


@@ -1,202 +0,0 @@
//// Copyright (C) 2026 Fluxer Contributors
////
//// This file is part of Fluxer.
////
//// Fluxer is free software: you can redistribute it and/or modify
//// it under the terms of the GNU Affero General Public License as published by
//// the Free Software Foundation, either version 3 of the License, or
//// (at your option) any later version.
////
//// Fluxer is distributed in the hope that it will be useful,
//// but WITHOUT ANY WARRANTY; without even the implied warranty of
//// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
//// GNU Affero General Public License for more details.
////
//// You should have received a copy of the GNU Affero General Public License
//// along with Fluxer. If not, see <https://www.gnu.org/licenses/>.
import fluxer_admin/api/common.{
type ApiError, Forbidden, NetworkError, NotFound, ServerError, Unauthorized,
}
import fluxer_admin/web.{type Context, type Session}
import gleam/dynamic/decode
import gleam/http
import gleam/http/request
import gleam/httpc
import gleam/json
import gleam/option
pub type RefreshSearchIndexResponse {
RefreshSearchIndexResponse(job_id: String)
}
pub type IndexRefreshStatus {
IndexRefreshStatus(
status: String,
total: option.Option(Int),
indexed: option.Option(Int),
started_at: option.Option(String),
completed_at: option.Option(String),
error: option.Option(String),
)
}
pub fn refresh_search_index(
ctx: Context,
session: Session,
index_type: String,
audit_log_reason: option.Option(String),
) -> Result(RefreshSearchIndexResponse, ApiError) {
refresh_search_index_with_guild(
ctx,
session,
index_type,
option.None,
audit_log_reason,
)
}
pub fn refresh_search_index_with_guild(
ctx: Context,
session: Session,
index_type: String,
guild_id: option.Option(String),
audit_log_reason: option.Option(String),
) -> Result(RefreshSearchIndexResponse, ApiError) {
let fields = case guild_id {
option.Some(id) -> [
#("index_type", json.string(index_type)),
#("guild_id", json.string(id)),
]
option.None -> [#("index_type", json.string(index_type))]
}
let url = ctx.api_endpoint <> "/admin/search/refresh-index"
let body = json.object(fields) |> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
let req = case audit_log_reason {
option.Some(reason) -> request.set_header(req, "x-audit-log-reason", reason)
option.None -> req
}
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let decoder = {
use job_id <- decode.field("job_id", decode.string)
decode.success(RefreshSearchIndexResponse(job_id: job_id))
}
case json.parse(resp.body, decoder) {
Ok(result) -> Ok(result)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
pub fn get_index_refresh_status(
ctx: Context,
session: Session,
job_id: String,
) -> Result(IndexRefreshStatus, ApiError) {
let fields = [#("job_id", json.string(job_id))]
let url = ctx.api_endpoint <> "/admin/search/refresh-status"
let body = json.object(fields) |> json.to_string
let assert Ok(req) = request.to(url)
let req =
req
|> request.set_method(http.Post)
|> request.set_header("authorization", "Bearer " <> session.access_token)
|> request.set_header("content-type", "application/json")
|> request.set_body(body)
case httpc.send(req) {
Ok(resp) if resp.status == 200 -> {
let decoder = {
use status <- decode.field("status", decode.string)
use total <- decode.optional_field(
"total",
option.None,
decode.optional(decode.int),
)
use indexed <- decode.optional_field(
"indexed",
option.None,
decode.optional(decode.int),
)
use started_at <- decode.optional_field(
"started_at",
option.None,
decode.optional(decode.string),
)
use completed_at <- decode.optional_field(
"completed_at",
option.None,
decode.optional(decode.string),
)
use error <- decode.optional_field(
"error",
option.None,
decode.optional(decode.string),
)
decode.success(IndexRefreshStatus(
status: status,
total: total,
indexed: indexed,
started_at: started_at,
completed_at: completed_at,
error: error,
))
}
case json.parse(resp.body, decoder) {
Ok(result) -> Ok(result)
Error(_) -> Error(ServerError)
}
}
Ok(resp) if resp.status == 401 -> Error(Unauthorized)
Ok(resp) if resp.status == 403 -> {
let message_decoder = {
use message <- decode.field("message", decode.string)
decode.success(message)
}
let message = case json.parse(resp.body, message_decoder) {
Ok(msg) -> msg
Error(_) ->
"Missing required permissions. Contact an administrator to request access."
}
Error(Forbidden(message))
}
Ok(resp) if resp.status == 404 -> Error(NotFound)
Ok(_resp) -> Error(ServerError)
Error(_) -> Error(NetworkError)
}
}
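
The two functions in the deleted module above form a start-then-poll pair: `refresh_search_index` returns a `job_id`, and `get_index_refresh_status` reports progress for that job. A minimal sketch of how a caller might combine them, assuming `gleam/result` and the `process.sleep` function from `gleam_erlang`; the function names `refresh_and_wait`/`poll` and the retry budget are illustrative, not part of the original module:

```gleam
import gleam/erlang/process
import gleam/option
import gleam/result

// Hypothetical helper: kick off an index refresh, then poll until the
// job reports a terminal status or the attempt budget runs out.
pub fn refresh_and_wait(
  ctx: Context,
  session: Session,
  index_type: String,
) -> Result(IndexRefreshStatus, ApiError) {
  use resp <- result.try(refresh_search_index(
    ctx,
    session,
    index_type,
    option.None,
  ))
  poll(ctx, session, resp.job_id, 30)
}

fn poll(
  ctx: Context,
  session: Session,
  job_id: String,
  attempts_left: Int,
) -> Result(IndexRefreshStatus, ApiError) {
  use status <- result.try(get_index_refresh_status(ctx, session, job_id))
  case status.status, attempts_left {
    // Terminal states (assumed status strings) or exhausted budget:
    // return the last observed status to the caller.
    "completed", _ | "failed", _ -> Ok(status)
    _, 0 -> Ok(status)
    _, n -> {
      process.sleep(1000)
      poll(ctx, session, job_id, n - 1)
    }
  }
}
```

Because both API calls already map HTTP failures into `ApiError`, the helper only has to thread `Result` through `result.try` and decide when to stop polling.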
