feat: easy to self host #96
Conversation
Walkthrough

CI now validates, builds, and publishes API and Web Docker images separately; the web app reads runtime environment via an Nginx entrypoint and a new getEnv() accessor replacing Vite compile-time env usages; self-host docs were expanded and an interactive installer script was added.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
8 issues found across 14 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="apps/web/docker-entrypoint.sh">
<violation number="1" location="apps/web/docker-entrypoint.sh:4">
P1: Escape environment values before embedding them in `env-config.js`; raw interpolation can generate invalid JavaScript when a value contains quotes, backslashes, or newlines.</violation>
</file>
<file name="apps/docs/get-started/self-host.mdx">
<violation number="1" location="apps/docs/get-started/self-host.mdx:84">
P1: The GitHub App webhook URL points at the OAuth callback route instead of the webhook handler, so GitHub event deliveries will 404.</violation>
<violation number="2" location="apps/docs/get-started/self-host.mdx:305">
P1: This Caddy rule won't serve the documented `API_ENDPOINT=https://.../api`: it neither matches `/api` itself nor strips the `/api` prefix before proxying.</violation>
</file>
<file name="apps/web/Dockerfile">
<violation number="1" location="apps/web/Dockerfile:24">
P2: This overwrites nginx:alpine’s built-in `/docker-entrypoint.sh`, so the image’s standard `/docker-entrypoint.d` initialization hooks no longer run. That can silently disable base-image features such as template envsubst and worker-process tuning in self-hosted deployments.</violation>
</file>
<file name="apps/web/src/env.ts">
<violation number="1" location="apps/web/src/env.ts:19">
P1: Empty runtime env values currently override the Vite fallback, so unset container variables resolve to `""` instead of the baked `VITE_*` value.</violation>
</file>
<file name=".github/workflows/docker-publish.yml">
<violation number="1" location=".github/workflows/docker-publish.yml:123">
P2: `validate-web` points the app at an API that does not exist in that job, so this new check only validates static file serving and can miss broken backend wiring.</violation>
</file>
<file name="bin/deploy">
<violation number="1" location="bin/deploy:17">
P2: `eval` with user-provided input will execute shell metacharacters (backticks, `$(...)`, unmatched quotes) that may appear in GitHub secrets or PEM keys, causing confusing failures or unintended command execution. Use `printf -v` for safe variable assignment:
```bash
printf -v "$var_name" '%s' "${value:-$default}"
```</violation>
<violation number="2" location="bin/deploy:46">
P1: `\n` inside bash double-quotes is a literal two-character sequence (backslash + n), not a real newline. The assembled PEM key will contain `\n` literals, which most PEM parsers reject, causing GitHub API authentication to fail at runtime with a cryptic key-parsing error. Use `$'\n'` or `printf` to insert actual newlines:
```bash
content="${content}${line}"$'\n'
```</violation>
</file>
Actionable comments posted: 9
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/docker-publish.yml:
- Around line 128-139: The health check is probing `/`, which only verifies nginx
is responding but not that runtime config (`/env-config.js`) was generated; update
the step named "Wait for Web to be ready" to curl
http://localhost:8080/env-config.js (or
http://localhost:8080/env-config.js?nocache) instead of /, so the loop confirms
the runtime config file is present and returns 200 before exiting; keep the same
retry logic and still print docker logs for the container name sweetr-web on
failure.
In `@apps/docs/get-started/self-host.mdx`:
- Around line 171-172: The docker-compose snippet is exposing the entire .env to
the web/static service; update the compose example so the "web" (or static
nginx) service does not use env_file: .env but instead only receives necessary
frontend/runtime variables (e.g., create and reference a separate .env.frontend
or explicit environment: entries) while keeping backend-sensitive values like
DATABASE_URL, JWT_SECRET, GitHub client secrets, and GitHub app private key only
assigned to the backend/service that needs them; change occurrences around the
"web" service and the other compose service blocks (including the similar block
later) to use a minimal frontend-only env source.
- Around line 157-158: The compose example publishes service ports to the host
and conflicts with the Caddy reverse-proxy and public interface—update the
docker-compose services (notably the web and postgres services referenced in the
diff) to avoid binding backend ports to the host: remove host:container port
mappings (e.g., replace "8000:8000" with just "8000" or use expose: - "8000") so
the web service listens on the container network only and let Caddy proxy to it,
and ensure Postgres is not published publicly by removing the "5432:5432" host
mapping (or bind it to localhost only like "127.0.0.1:5432:5432" if needed for
local tooling); apply the same adjustments at the other occurrences called out
(lines ~172-173, 187-188, 303-311) so backend services remain internal and Caddy
manages 80/443 ingress.
- Around line 46-50: The current example sets SWEETR_TAG only on the curl side
of the pipeline so the installer (bash) doesn't receive it; update the example
to both fetch the script from the same git tag and pass the variable to the
shell interpreter (for example, download from
https://raw.githubusercontent.com/sweetr-dev/sweetr.dev/1.2.0/bin/deploy and
then run it with the environment applied to bash), e.g. fetch the tagged deploy
script and execute it with SWEETR_TAG=1.2.0 applied to the bash process
(alternatively use a temporary file or process substitution so SWEETR_TAG is set
for the shell running the installer).
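For illustration, a sketch of the corrected invocation (the tag `1.2.0` is the example version used above); the assignment must prefix the `bash` process, not the `curl` one:
```bash
# Fetch the script at the same tag and set SWEETR_TAG on the bash process
# that actually runs the installer; a variable on the curl side of a pipe
# never reaches the downstream shell.
curl -fsSL https://raw.githubusercontent.com/sweetr-dev/sweetr.dev/1.2.0/bin/deploy \
  | SWEETR_TAG=1.2.0 bash
```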
In `@apps/web/docker-entrypoint.sh`:
- Around line 1-11: The entrypoint currently embeds raw environment variables
into env-config.js (in docker-entrypoint.sh) which allows injection; modify the
Dockerfile to install jq so you can JSON-escape values, then change
docker-entrypoint.sh to JSON-encode each variable (e.g., API_ENDPOINT,
AUTH_COOKIE_DOMAIN, GITHUB_APP, SENTRY_DSN, APP_ENV) before writing
window.__SWEETR_ENV__ to /usr/share/nginx/html/env-config.js so special
characters/newlines/quotes are safely escaped; ensure you handle empty defaults
the same way and keep the final exec nginx -g 'daemon off;' behavior.
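A minimal sketch of such an entrypoint, assuming `jq` is available in the image and using the variable names listed above:
```bash
#!/bin/sh
set -eu

# JSON-encode a raw string so quotes, backslashes, and newlines are safe.
json_escape() {
  printf '%s' "$1" | jq -Rs .
}

cat > /usr/share/nginx/html/env-config.js <<EOF
window.__SWEETR_ENV__ = {
  API_ENDPOINT: $(json_escape "${API_ENDPOINT:-}"),
  AUTH_COOKIE_DOMAIN: $(json_escape "${AUTH_COOKIE_DOMAIN:-}"),
  GITHUB_APP: $(json_escape "${GITHUB_APP:-}"),
  SENTRY_DSN: $(json_escape "${SENTRY_DSN:-}"),
  APP_ENV: $(json_escape "${APP_ENV:-}")
};
EOF

exec nginx -g 'daemon off;'
```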
In `@bin/deploy`:
- Around line 163-172: The generate_files function currently always regenerates
JWT_SECRET, DB_PASSWORD and POSTGRES_PASSWORD which breaks existing deployments;
modify generate_files (and the other similar blocks referenced) to check for an
existing sweetr directory and existing secret files/ENV entries before
overwriting: if sweetr/ exists (or files like .env, jwt_secret, or the persisted
DB credentials) then read and reuse the existing JWT_SECRET, POSTGRES_PASSWORD
and DB_PASSWORD instead of regenerating them; only generate new values when no
prior values/files are present, and ensure the values written match what
init-app-db.sh expects so DATABASE_URL and Postgres credentials remain in sync.
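A sketch of a reuse-or-generate helper along those lines; the `sweetr/.env` path and the helper name are illustrative:
```bash
# Reuse a secret persisted in sweetr/.env, generating a new one only
# when no prior value exists.
get_or_create_secret() {
  local name="$1" env_file="sweetr/.env" value=""
  if [ -f "$env_file" ]; then
    # Take the value after the first '=' on the matching line, if any.
    value=$(grep -E "^${name}=" "$env_file" | head -n1 | cut -d= -f2-)
  fi
  [ -n "$value" ] || value=$(openssl rand -hex 32)
  printf -v "$name" '%s' "$value"
}

get_or_create_secret JWT_SECRET
get_or_create_secret POSTGRES_PASSWORD
get_or_create_secret DB_PASSWORD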
- Around line 240-241: The published port mappings (e.g., the ports: -
"8000:8000" entry) expose services to the host and conflict with a same-host
reverse proxy; update the Compose service port definitions for the web service
(and any DB/Postgres service) to only bind what the proxy/admin flow requires by
either removing host publishing or binding to localhost (127.0.0.1) instead of
0.0.0.0; specifically change the web service's ports mapping (and the
postgres/db service's "5432:5432" mapping) to use internal-only exposure or host
binding "127.0.0.1:PORT:PORT" so the reverse-proxy can bind 80/443 and the
database is not reachable outside the compose network (apply the same change to
the other occurrences noted).
- Around line 174-204: The deploy script currently writes all secrets into one
.env and then supplies it to services via env_file; change it to write two env
files: a full-secret backend .env containing sensitive entries (POSTGRES_*,
DB_PASSWORD, DATABASE_URL, JWT_SECRET, GITHUB_CLIENT_SECRET,
GITHUB_APP_PRIVATE_KEY, etc.) and a minimal .env.frontend containing only
frontend runtime keys (FRONTEND_URL, PORT, SWEETR_TAG, DOMAIN as needed). Update
the compose/service env_file references so the web/backend service uses the full
.env while the nginx/frontend service uses only .env.frontend (and remove any
use of the full .env for nginx); ensure the script writes both files where the
docker compose expects them (also apply the same change around the other
env_file usage referenced in the later block).
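A sketch of writing the minimal frontend file, with an illustrative key list taken from the suggestion above:
```bash
# Write a frontend-only env file next to the full .env so the nginx
# service never sees backend secrets; key names are illustrative.
cat > sweetr/.env.frontend <<EOF
FRONTEND_URL=${FRONTEND_URL}
PORT=${PORT}
SWEETR_TAG=${SWEETR_TAG}
DOMAIN=${DOMAIN}
EOF
```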
- Around line 13-25: The prompt function uses eval to assign to an indirectly
named variable (var_name) which is unsafe; replace eval "$var_name=..." with
safe indirect assignment using printf -v to set the variable by name (e.g., use
printf -v "$var_name" "%s" "${value:-$default}" and printf -v "$var_name" "%s"
"$value" in the required-field branch), and apply the same change to the other
prompt usages referenced in the diff (the blocks at lines ~28-38 and ~40-49) so
no eval is used anywhere for prompt assignment.
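A sketch of the `prompt` helper rewritten without `eval`, following the `printf -v` suggestion:
```bash
# printf -v assigns to the variable named by $var_name without
# re-parsing the value as shell code, unlike eval.
prompt() {
  local message="$1" var_name="$2" default="${3:-}"
  local value
  read -rp "$message " value
  printf -v "$var_name" '%s' "${value:-$default}"
}
```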
📒 Files selected for processing (14)
- .github/workflows/docker-publish.yml
- apps/docs/get-started/self-host.mdx
- apps/web/.env.example
- apps/web/Dockerfile
- apps/web/docker-entrypoint.sh
- apps/web/index.html
- apps/web/src/api/clients/graphql-client.ts
- apps/web/src/app/settings/billing/components/pricing/card-cloud.tsx
- apps/web/src/env.d.ts
- apps/web/src/env.ts
- apps/web/src/providers/auth.provider.ts
- apps/web/src/providers/github.provider.ts
- apps/web/src/providers/sentry.provider.ts
- bin/deploy
💤 Files with no reviewable changes (1)
- apps/web/src/env.d.ts
Actionable comments posted: 3
🧹 Nitpick comments (1)
.github/workflows/docker-publish.yml (1)
128-140: Consider validating `/env-config.js` content, not just existence.

The health check verifies the file is served but doesn't confirm the runtime config was correctly generated. If the entrypoint script fails silently, the file might be empty or malformed.

Optional: Add content validation

```diff
      - name: Wait for Web to be ready
        run: |
          for i in $(seq 1 15); do
-           if curl -sf http://localhost:8080/env-config.js > /dev/null 2>&1; then
+           if curl -sf http://localhost:8080/env-config.js 2>&1 | grep -q 'window.__SWEETR_ENV__'; then
              echo "Web is healthy"
              exit 0
            fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/docker-publish.yml around lines 128 - 140, The current health check in the "Wait for Web to be ready" step only confirms /env-config.js is served; update it to validate the file's content (not just existence) by fetching /env-config.js and verifying it's non-empty and contains the expected runtime keys/patterns (for example check for a JSON payload or expected token like "window.__ENV__" / known keys such as API_URL), and fail with container logs from sweetr-web if validation fails; place the validation logic into the same step so it exits 0 only when content is valid, otherwise print "Web failed to start. Container logs:" and run docker logs sweetr-web then exit 1.
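Putting the retry loop and the content check together, a sketch of the complete step script (retry count, sleep interval, and the `sweetr-web` container name follow this thread):
```bash
# Probe /env-config.js and only pass once the runtime marker is present.
for i in $(seq 1 15); do
  if curl -sf http://localhost:8080/env-config.js | grep -q 'window.__SWEETR_ENV__'; then
    echo "Web is healthy"
    exit 0
  fi
  sleep 2
done
echo "Web failed to start. Container logs:"
docker logs sweetr-web
exit 1
```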
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bin/deploy`:
- Around line 328-351: The health check endpoint
(apps/api/src/app/health/health.router.ts) returns status immediately so
bin/deploy can report success before DB/migrations are ready; update the health
logic to verify downstream readiness (e.g., attempt a simple DB query or check
migration status via the TypeORM/Prisma client used in the API, or call a real
authenticated/internal endpoint like the GraphQL schema or a test query) and
only return { status: "ok" } when those checks succeed, or alternatively keep
the health endpoint as-is but modify the deploy script loop to perform a
stronger readiness probe (e.g., curl the GraphQL/schema endpoint or an
authenticated ping, or add a short fixed delay after /health returns) so the
deploy waits for migrations to complete before signaling success.
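As a sketch of the second option, a stronger host-side probe; both URL paths are assumptions about the API's routing:
```bash
# After /health responds, also confirm the GraphQL endpoint answers a
# trivial query, so migrations have finished before declaring success.
curl -sf http://localhost/api/health > /dev/null &&
  curl -sf -X POST http://localhost/api/graphql \
    -H 'content-type: application/json' \
    -d '{"query":"{ __typename }"}' > /dev/null
```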
- Around line 40-49: The prompt_multiline function appends the two-character
sequence "\n" instead of real newlines causing PEM keys to be corrupted; change
the accumulation so each read line is followed by an actual newline (e.g. use
content+=$line$'\n' or append with printf '%s\n' "$line" into content) and keep
the final printf -v "$var_name" '%s' "$content" to set the variable; also note
you should handle storing multiline secrets in .env safely (e.g., base64 or
separate file) after fixing the newline handling in prompt_multiline.
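A sketch of the corrected function, assuming input ends at the first empty line as described:
```bash
prompt_multiline() {
  local message="$1" var_name="$2"
  local content="" line
  echo "$message (finish with an empty line):"
  while IFS= read -r line; do
    [ -z "$line" ] && break
    content+="$line"$'\n'   # $'\n' is a real newline, not backslash-n
  done
  printf -v "$var_name" '%s' "$content"
}
```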
- Around line 64-72: The Linux-only memory check using /proc/meminfo causes
false low-RAM warnings on macOS; update the logic around mem_kb/mem_gb to detect
the platform (uname -s or similar) and on Darwin use sysctl -n hw.memsize
(divide to KB) to compute mem_kb, falling back to /proc/meminfo for Linux and to
"0" only if both fail; keep the existing mem_gb calculation and the subsequent
warn/read logic (refer to the variables mem_kb and mem_gb and the if block that
calls warn and read -rp) so macOS reports actual RAM instead of 0.
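A sketch of the platform-aware detection feeding the existing `mem_kb`/`mem_gb` logic:
```bash
# Detect total RAM in KB: sysctl on macOS, /proc/meminfo on Linux,
# falling back to 0 only if both fail.
case "$(uname -s)" in
  Darwin) mem_kb=$(( $(sysctl -n hw.memsize 2>/dev/null || echo 0) / 1024 )) ;;
  Linux)  mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo 2>/dev/null || echo 0) ;;
  *)      mem_kb=0 ;;
esac
mem_gb=$(( mem_kb / 1024 / 1024 ))
```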
---
Nitpick comments:
In @.github/workflows/docker-publish.yml:
- Around line 128-140: The current health check in the "Wait for Web to be
ready" step only confirms /env-config.js is served; update it to validate the
file's content (not just existence) by fetching /env-config.js and verifying
it's non-empty and contains the expected runtime keys/patterns (for example
check for a JSON payload or expected token like "window.__ENV__" / known keys
such as API_URL), and fail with container logs from sweetr-web if validation
fails; place the validation logic into the same step so it exits 0 only when
content is valid, otherwise print "Web failed to start. Container logs:" and run
docker logs sweetr-web then exit 1.
📒 Files selected for processing (5)
- .github/workflows/docker-publish.yml
- apps/docs/get-started/self-host.mdx
- apps/web/Dockerfile
- apps/web/docker-entrypoint.sh
- bin/deploy
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/web/docker-entrypoint.sh
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/docs/get-started/self-host.mdx`:
- Around line 283-287: Add the missing SLACK_INSTALL_NOTIFICATION_WEBHOOK_URL
env variable to the Slack section of the .env example (near SLACK_CLIENT_ID,
SLACK_CLIENT_SECRET, SLACK_WEBHOOK_SECRET) and also add the same variable to the
API env reference table later in the document (the block around the other Slack
variables referenced at 343-346) so the docs match the API env validation;
include a short description like "Webhook URL for install notifications" and
mark it as optional if applicable.
- Around line 23-27: Add "curl" to the documented prerequisites by updating the
"**Software requirements:**" bullet list: insert a new bullet "curl" (or "curl
(for installer download and health checks)") alongside "Docker 24+ and Docker
Compose v2+" and "Ports 80 and 443 open", and also update the duplicate
requirements block later in the doc (the second software requirements section)
to include the same "curl" bullet so both locations match.
- Around line 271-276: The .env example uses GITHUB_APP_PRIVATE_KEY but doesn't
explain that it must be a PEM with embedded newlines; update the docs around the
GITHUB_APP_PRIVATE_KEY placeholder to instruct users to either escape newlines
(\\n) when placing the key on a single line or to load the PEM from a file and
reference its path, and show the required multiline/escaped format for the
GitHub private key so self-hosted deployments won't fail at runtime; mention the
GITHUB_APP_PRIVATE_KEY symbol explicitly so maintainers can locate and update
that block.
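One way to document this is a base64 round-trip, which keeps the PEM byte-exact in a single-line value; the variable and file names here are illustrative:
```bash
# Encode once when writing .env (-w0 is GNU coreutils; plain `base64`
# on macOS already emits a single line).
printf 'GITHUB_APP_PRIVATE_KEY_B64=%s\n' \
  "$(base64 -w0 < github-app.private-key.pem)" >> .env

# Decode at runtime before use:
printf '%s' "$GITHUB_APP_PRIVATE_KEY_B64" | base64 -d > /run/github-app.pem
```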
📒 Files selected for processing (5)
- apps/docs/get-started/self-host.mdx
- apps/docs/snippets/github-app-button.mdx
- apps/web/Dockerfile
- apps/web/docker-entrypoint.sh
- bin/deploy
🚧 Files skipped from review as they are similar to previous changes (2)
- bin/deploy
- apps/web/Dockerfile
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/api/Dockerfile`:
- Line 26: The Docker CMD needs to URL-encode POSTGRES_PASSWORD before embedding
it into DATABASE_URL and ensure DATABASE_URL contains an '@' before attempting
the substring rewrite; update the shell snippet that sets DATABASE_URL (the
command using
DATABASE_URL="postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${DATABASE_URL#*@}"
and invoking npm run prisma:migrate:production) to first check for an '@' in
DATABASE_URL and to percent-encode POSTGRES_PASSWORD (e.g., via a small encoding
routine using printf and od/awk or node -e) before interpolating so reserved
characters in POSTGRES_PASSWORD don't break the Prisma URI parsing.
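A sketch of that encoding step, using `node` (already present in the API image) for the percent-encoding; the `@` guard mirrors the prompt above:
```bash
# Percent-encode the password before splicing it into the URI so
# reserved characters don't break Prisma's URL parsing.
encoded_pw=$(node -e 'console.log(encodeURIComponent(process.argv[1]))' "$POSTGRES_PASSWORD")
case "$DATABASE_URL" in
  *@*) DATABASE_URL="postgres://${POSTGRES_USER}:${encoded_pw}@${DATABASE_URL#*@}" ;;
esac
```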
📒 Files selected for processing (2)
- apps/api/Dockerfile
- bin/deploy
🚧 Files skipped from review as they are similar to previous changes (1)
- bin/deploy
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 8
♻️ Duplicate comments (4)
bin/deploy (1)
67-76: ⚠️ Potential issue | 🟠 Major

Don't serialize the GitHub PEM through a quoted `.env` string.

Lines 67-76 collect the private key as an escaped text blob, and lines 300-305 persist it into `env_file`. That leaves newline reconstruction up to Compose parsing, which is the brittle part here: if the PEM does not round-trip exactly, GitHub auth fails. Writing the PEM to its own mounted file, or encoding/decoding it explicitly, is much safer.

Also applies to: 300-305
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/deploy` around lines 67 - 76, The prompt_multiline function currently collects the PEM into a single escaped string and later writes it into env_file, which can corrupt newlines; instead, change the flow so prompt_multiline returns/raw-saves the PEM to a file (e.g. prompt_multiline/GITHUB_PEM_FILE path) or base64-encodes the content and decode when used; specifically modify prompt_multiline (and the code that writes to env_file) to avoid embedding the PEM as a quoted env value—either write the exact PEM bytes to a dedicated file and store its path in env_file (reference the prompt_multiline function and the env_file write code around where GITHUB_PEM is persisted) or perform explicit base64 encode/decode to guarantee exact round-trip.

apps/docs/get-started/self-host.mdx (2)
288-293: ⚠️ Potential issue | 🟠 Major

Spell out the `.env` format for `GITHUB_APP_PRIVATE_KEY`.

Lines 288-293 still read like a single-line placeholder, and line 333 only says "PEM format". Self-hosted setups need explicit newline-handling guidance here; otherwise a valid GitHub key is easy to paste into `.env` in a form the app cannot parse.

Suggested doc update

```diff
- GITHUB_APP_PRIVATE_KEY=<your-private-key>
+ # Keep the PEM on one line in `.env` with escaped newlines.
+ GITHUB_APP_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
@@
-| `GITHUB_APP_PRIVATE_KEY` | Yes | — | GitHub App private key (PEM format) |
+| `GITHUB_APP_PRIVATE_KEY` | Yes | — | GitHub App private key as newline-escaped PEM text in `.env` |
```

Also applies to: 333-334
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/docs/get-started/self-host.mdx` around lines 288 - 293, The GITHUB_APP_PRIVATE_KEY placeholder currently reads like a single-line value and doesn't explain how to preserve PEM newlines in a .env; update the text near GITHUB_APP_PRIVATE_KEY (lines shown) and the note around “PEM format” (lines 333-334) to explicitly state that the private key is a multi-line PEM and must either have its newlines escaped (use literal “\n” sequences) or be stored via a multiline/env-file-safe method (e.g., dotenv multi-line or base64 encoding) before being loaded by the app, and give a short, clear example of both approaches so users know how to paste the GitHub app private key into their .env correctly.
23-27: ⚠️ Potential issue | 🟡 Minor

Document `curl` as a prerequisite.

Lines 25-27 still omit `curl`, but the quick-start command on line 35 and the installer's readiness probe both depend on it. Users can follow this doc exactly and still fail on a minimal host.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/docs/get-started/self-host.mdx` around lines 23 - 27, Add "curl" to the Software requirements list because the quick-start command and the installer's readiness probe depend on it; update the bullet list to include "curl (CLI for transferring data)" or similar, and ensure any referenced quick-start script or installer README (the quick-start command and the installer's readiness probe) mention that curl must be installed on the host before running the steps.

apps/api/Dockerfile (1)
30-30: ⚠️ Potential issue | 🟠 Major

Avoid shell-form `CMD` for the startup chain.

Line 30 uses `/bin/sh` as PID 1 without `exec` to replace it, which creates a signal-forwarding risk during graceful shutdown. Replace with an exec wrapper or entrypoint script.

Recommended fix

```diff
-CMD npm run prisma:migrate:production --workspace api && npm run start
+CMD ["sh", "-c", "npm run prisma:migrate:production --workspace api && exec npm run start"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/Dockerfile` at line 30, The Dockerfile uses a shell-form CMD ("CMD npm run prisma:migrate:production --workspace api && npm run start") which runs /bin/sh as PID 1 and prevents proper signal forwarding; change to an exec-form CMD (JSON array) that runs the migration and server with an exec wrapper or convert the chain into an entrypoint script (e.g., create an entrypoint script that runs "npm run prisma:migrate:production --workspace api" then exec "npm run start") and update CMD to call that script (or use tini as the ENTRYPOINT) so PID 1 is your process and signals are forwarded correctly.
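A sketch of the entrypoint-script variant, which lets the Node process replace the shell as PID 1:
```bash
#!/bin/sh
# Run migrations, then exec the server so it becomes PID 1 and
# receives SIGTERM directly during graceful shutdown.
set -e
npm run prisma:migrate:production --workspace api
exec npm run start
```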
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/docs/get-started/self-host.mdx`:
- Around line 393-405: The docs currently show two snippets both named
SENTRY_DSN which is misleading because the runtime wiring uses a single
SENTRY_DSN (the second value wins) and the stack does not support separate
API/web DSNs; update the copy so it accurately reflects this: either remove the
duplicate snippet and state that both API and web read the same SENTRY_DSN
environment variable, or explicitly document that separate DSNs are not
supported, referencing the SENTRY_DSN env var and the examples that use it so
readers aren’t led to expect independent API and frontend DSNs.
- Around line 275-280: Replace usage of the bootstrap superuser "postgres" for
application connections by creating a dedicated app role and updating the
environment variables: create a limited-privilege role (e.g., SWEETR_APP_USER)
and a corresponding password, ensure the role and database (POSTGRES_DB /
SWEETR) exist and grant only the necessary privileges, then change
POSTGRES_USER/POSTGRES_PASSWORD and DATABASE_URL to reference that app role
instead of the bootstrap "postgres" user so the bootstrap account remains
administrative only.
- Around line 241-244: The Dragonfly service in the Docker compose snippet
(symbols: dragonfly, image, container_name, command) is missing the tested Lua
flag; update the command array for the dragonfly service to include
"--default_lua_flags=allow-undeclared-keys" alongside the existing flags (e.g.,
"--cluster_mode=emulated", "--lock_on_hashtags", "--proactor_threads=2") so the
self-host documentation matches the configuration used in the
.github/workflows/docker-publish.yml validate-api step.
In `@bin/deploy`:
- Around line 192-195: The deploy script always mints a new
GITHUB_WEBHOOK_SECRET (WEBHOOK_SECRET) on every run, breaking existing GitHub
webhooks; change the logic in bin/deploy around the
WEBHOOK_SECRET/GITHUB_WEBHOOK_SECRET assignment so it only generates a new
secret when none exists (check if GITHUB_WEBHOOK_SECRET is already set or if the
installer’s persisted secrets file used by generate_files() contains a webhook
secret) and otherwise reuse the existing value; keep the export
GITHUB_WEBHOOK_SECRET behavior but guard the openssl rand -hex 32 call with an
if test that preserves the current secret.
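A sketch of the guarded generation:
```bash
# Mint the webhook secret only when no value is already set or
# persisted, so existing GitHub webhooks keep working across re-runs.
if [ -z "${GITHUB_WEBHOOK_SECRET:-}" ]; then
  GITHUB_WEBHOOK_SECRET=$(openssl rand -hex 32)
fi
export GITHUB_WEBHOOK_SECRET
```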
- Around line 457-462: The health probe currently curls
http://localhost/api/health (inside the info/spin loop) which fails when Caddy
serves the API under ${DOMAIN} in non-local installs; update the probe to be
mode-aware: if running local keep using http://localhost/api/health, otherwise
target the configured host (e.g. https://${DOMAIN}/api/health or a HEALTH_URL
derived from ${DOMAIN}) so the curl in the for-loop uses the correct vhost;
adjust the conditional that runs curl (the block containing info "Waiting for
API..." and the spin array/for loop) to choose the URL based on the install mode
or presence of ${DOMAIN}.
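A sketch of a mode-aware probe URL; `DEPLOY_MODE` is an illustrative name for however the script tracks local vs. domain installs:
```bash
# Choose the health URL by install mode, falling back to localhost
# when no domain is configured.
if [ "${DEPLOY_MODE:-local}" = "local" ] || [ -z "${DOMAIN:-}" ]; then
  health_url="http://localhost/api/health"
else
  health_url="https://${DOMAIN}/api/health"
fi
curl -sf "$health_url" > /dev/null
```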
- Around line 81-95: The check_requirements() function currently validates
Docker and Docker Compose but omits curl and openssl which are required later
(e.g., secret generation and start_stack()); update check_requirements() to also
verify both curl and openssl are present (using command -v or equivalent) and
call error with a clear install message and link if missing so these binaries
are validated before any interactive flow or start_stack() execution. Ensure the
checks reference the same success/error logging pattern used for Docker (e.g.,
info/error/success) and run early in check_requirements() so missing
curl/openssl are caught upfront.
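A sketch of the extended check, reusing the script's `error` helper (assumed to print a message and exit):
```bash
# Fail fast on missing binaries before any interactive flow runs.
for bin in docker curl openssl; do
  command -v "$bin" > /dev/null 2>&1 \
    || error "$bin is required but not installed. Install it and re-run."
done
```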
- Around line 415-418: The Dragonfly service command in the docker-compose entry
(service name "dragonfly" / the command array) is missing the
--default_lua_flags=allow-undeclared-keys flag; update the command array for the
dragonfly container (the command assigned to the dragonfly service) to include
"--default_lua_flags=allow-undeclared-keys" alongside the existing flags so the
deployed compose matches the validated configuration.
- Around line 288-292: The deploy script is hardcoding the superuser
(POSTGRES_USER=postgres) and composing DATABASE_URL with the postgres user;
update it to use the restricted app user variables created in
apps/api/init-app-db.sh by replacing POSTGRES_USER and DATABASE_URL to reference
${DB_USERNAME} and ${DB_PASSWORD} instead of the literal "postgres" so the app
connects with the least-privileged user; ensure the DATABASE_URL follows the
pattern postgres://\${DB_USERNAME}:\${DB_PASSWORD}@postgres:5432/\${POSTGRES_DB}
and that POSTGRES_USER is set to ${DB_USERNAME} so docker/compose pick up the
correct credentials.
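A sketch of the resulting env entries, assuming `DB_USERNAME`/`DB_PASSWORD` come from `init-app-db.sh` as described:
```bash
# Connect the app with the restricted role rather than the postgres
# bootstrap superuser.
POSTGRES_USER="${DB_USERNAME}"
DATABASE_URL="postgres://${DB_USERNAME}:${DB_PASSWORD}@postgres:5432/${POSTGRES_DB}"
```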
---
Duplicate comments:
In `@apps/api/Dockerfile`:
- Line 30: The Dockerfile uses a shell-form CMD ("CMD npm run
prisma:migrate:production --workspace api && npm run start") which runs /bin/sh
as PID 1 and prevents proper signal forwarding; change to an exec-form CMD (JSON
array) that runs the migration and server with an exec wrapper or convert the
chain into an entrypoint script (e.g., create an entrypoint script that runs
"npm run prisma:migrate:production --workspace api" then exec "npm run start")
and update CMD to call that script (or use tini as the ENTRYPOINT) so PID 1 is
your process and signals are forwarded correctly.
In `@apps/docs/get-started/self-host.mdx`:
- Around line 288-293: The GITHUB_APP_PRIVATE_KEY placeholder currently reads
like a single-line value and doesn't explain how to preserve PEM newlines in a
.env; update the text near GITHUB_APP_PRIVATE_KEY (lines shown) and the note
around “PEM format” (lines 333-334) to explicitly state that the private key is
a multi-line PEM and must either have its newlines escaped (use literal “\n”
sequences) or be stored via a multiline/env-file-safe method (e.g., dotenv
multi-line or base64 encoding) before being loaded by the app, and give a short,
clear example of both approaches so users know how to paste the GitHub app
private key into their .env correctly.
- Around line 23-27: Add "curl" to the Software requirements list because the
quick-start command and the installer's readiness probe depend on it; update the
bullet list to include "curl (CLI for transferring data)" or similar, and ensure
any referenced quick-start script or installer README (the quick-start command
and the installer's readiness probe) mention that curl must be installed on the
host before running the steps.
In `@bin/deploy`:
- Around line 67-76: The prompt_multiline function currently collects the PEM
into a single escaped string and later writes it into env_file, which can
corrupt newlines; instead, change the flow so prompt_multiline returns/raw-saves
the PEM to a file (e.g. prompt_multiline/GITHUB_PEM_FILE path) or base64-encodes
the content and decode when used; specifically modify prompt_multiline (and the
code that writes to env_file) to avoid embedding the PEM as a quoted env
value—either write the exact PEM bytes to a dedicated file and store its path in
env_file (reference the prompt_multiline function and the env_file write code
around where GITHUB_PEM is persisted) or perform explicit base64 encode/decode
to guarantee exact round-trip.
📒 Files selected for processing (6)
- .github/workflows/docker-publish.yml
- apps/api/Dockerfile
- apps/docs/get-started/self-host.mdx
- apps/docs/snippets/github-app-button.mdx
- apps/web/Dockerfile
- bin/deploy
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/docs/snippets/github-app-button.mdx
Actionable comments posted: 1
♻️ Duplicate comments (1)
.github/workflows/docker-publish.yml (1)
129-133: ⚠️ Potential issue | 🟡 Minor

Harden Web readiness check to validate runtime env payload, not only 200 response.

Line 132 currently passes on any successful `/env-config.js` response. It can still green-light a broken runtime config file.

Suggested fix

```diff
      - name: Wait for Web to be ready
        run: |
          for i in $(seq 1 15); do
-           if curl -sf http://localhost:8080/env-config.js > /dev/null 2>&1; then
+           if curl -sf http://localhost:8080/env-config.js | grep -q 'window.__SWEETR_ENV__'; then
              echo "Web is healthy"
              exit 0
            fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/docker-publish.yml around lines 129 - 133, The readiness loop named "Wait for Web to be ready" currently only checks that /env-config.js returns HTTP 200; change it to fetch the body and validate the runtime env payload content (not just status). Update the curl check for /env-config.js to also verify the response contains expected tokens (for example the runtime assignment prefix like "window." or a specific variable name such as "VITE_API_URL" or "API_URL") and that the payload is not an HTML error page (e.g., starts with "window" or contains the variable), and only echo success/exit when that content validation passes; keep the existing retry loop and failure behavior otherwise.
🧹 Nitpick comments (1)
.github/workflows/docker-publish.yml (1)
64-65: Use separate Buildx GHA cache scopes for API and Web images.

Both images currently share the same cache namespace, which can reduce cache efficiency and cause cross-image cache churn.

Suggested fix

```diff
      - name: Build API image
        uses: docker/build-push-action@v6
        with:
@@
-         cache-from: type=gha
-         cache-to: type=gha,mode=max
+         cache-from: type=gha,scope=api-image
+         cache-to: type=gha,mode=max,scope=api-image
@@
      - name: Build Web image
        uses: docker/build-push-action@v6
        with:
@@
-         cache-from: type=gha
-         cache-to: type=gha,mode=max
+         cache-from: type=gha,scope=web-image
+         cache-to: type=gha,mode=max,scope=web-image
@@
      - name: Build and push API image
        uses: docker/build-push-action@v6
        with:
@@
-         cache-from: type=gha
-         cache-to: type=gha,mode=max
+         cache-from: type=gha,scope=api-image
+         cache-to: type=gha,mode=max,scope=api-image
@@
      - name: Build and push Web image
        uses: docker/build-push-action@v6
        with:
@@
-         cache-from: type=gha
-         cache-to: type=gha,mode=max
+         cache-from: type=gha,scope=web-image
+         cache-to: type=gha,mode=max,scope=web-image
```

Also applies to: 118-119, 171-172, 183-184
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/docker-publish.yml around lines 64 - 65, Both image build steps currently share the same Buildx GHA cache settings; update the cache-from and cache-to entries for each image build so they use distinct scopes/names (e.g., use scope=api-build or scope=web-build) to avoid cache churn between the API and Web images. Locate the cache lines (cache-from: type=gha and cache-to: type=gha,mode=max) in the API build step (e.g., the "Build and push API image" step) and the Web build step (e.g., the "Build and push Web image") and change them to include unique scope values (cache-from: type=gha,scope=api-build and cache-to: type=gha,mode=max,scope=api-build for API; use scope=web-build for Web), doing the same for the other duplicated occurrences so each image uses its own cache scope.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bin/deploy`:
- Around line 462-486: The health check uses curl http://localhost:8000/health
from the host but the generated docker-compose api service does not expose port
8000, so the probe will always fail; fix by either adding a host port mapping
for the api service in the docker-compose template (add a ports entry mapping
8000:8000 under the api service in the docker-compose.yml generation) or change
the health probe to run the check inside the container (use docker-compose exec
or docker exec into the api container and curl http://localhost:8000/health) and
ensure the script’s curl invocation (the health check command) references the
container execution method you choose.
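A sketch of the in-container variant; `curl` exists in the API container per its own healthcheck:
```bash
# Probe the API from inside the container so no host port mapping is
# needed; container name per this thread: sweetr-api.
docker exec sweetr-api curl -sf http://localhost:8000/health > /dev/null
```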
---
Duplicate comments:
In @.github/workflows/docker-publish.yml:
- Around line 129-133: The readiness loop named "Wait for Web to be ready"
currently only checks that /env-config.js returns HTTP 200; change it to fetch
the body and validate the runtime env payload content (not just status). Update
the curl check for /env-config.js to also verify the response contains expected
tokens (for example the runtime assignment prefix like "window." or a specific
variable name such as "VITE_API_URL" or "API_URL") and that the payload is not
an HTML error page (e.g., starts with "window" or contains the variable), and
only echo success/exit when that content validation passes; keep the existing
retry loop and failure behavior otherwise.
---
Nitpick comments:
In @.github/workflows/docker-publish.yml:
- Around line 64-65: Both image build steps currently share the same Buildx GHA
cache settings; update the cache-from and cache-to entries for each image build
so they use distinct scopes/names (e.g., use scope=api-build or scope=web-build)
to avoid cache churn between the API and Web images. Locate the cache lines
(cache-from: type=gha and cache-to: type=gha,mode=max) in the API build step
(e.g., the "Build and push API image" step) and the Web build step (e.g., the
"Build and push Web image") and change them to include unique scope values
(cache-from: type=gha,scope=api-build and cache-to:
type=gha,mode=max,scope=api-build for API; use scope=web-build for Web), doing
the same for the other duplicated occurrences so each image uses its own cache
scope.
📒 Files selected for processing (3)
- .github/workflows/docker-publish.yml
- apps/web/src/providers/sentry.provider.ts
- bin/deploy
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/web/src/providers/sentry.provider.ts
Greptile Summary
This PR introduces a self-hosting path for Sweetr, adding a Bash deploy script (`bin/deploy`), Docker images for both the API and web frontend, and comprehensive documentation. The web frontend gains runtime environment injection via `env-config.js` (generated at container start by `docker-entrypoint.sh` using `jq` for safe JSON escaping), allowing self-hosters to configure the app without a rebuild.

Key changes:

- `bin/deploy`: Interactive installer that generates `docker-compose.yml`, `Caddyfile`, and `.env`, then starts the stack. Correctly preserves `JWT_SECRET`, `DB_PASSWORD`, and `GITHUB_WEBHOOK_SECRET` across re-runs, but GitHub App credentials and Slack credentials are re-prompted on every execution.
- Runtime config: `window.__SWEETR_ENV__` is injected at container start via `docker-entrypoint.sh`; `getEnv()` reads from it first with a fallback to `import.meta.env` for Vite dev mode.
- CI: validation is split into `validate-api` and `validate-web`, and a `publish` job builds and pushes both images to Docker Hub.
- `apps/web/public/env-config.js`: Empty stub added to prevent 404s during local Vite development.

Confidence Score: 3/5

- `bin/deploy` does not preserve GitHub App credentials (or Slack credentials) on re-runs — every re-execution forces the user to manually re-paste all five GitHub fields, including the multi-line PEM private key. This doesn't break a fresh install, but makes any follow-up operation (changing domain, adding Slack, upgrading) error-prone.
- Pay closest attention to `bin/deploy` — specifically the `collect_github_app` and `collect_slack_app` functions, which always prompt fresh without checking for existing values in `sweetr/.env`.

Important Files Changed:

- `.github/workflows/docker-publish.yml`: Splits the `validate` job into `validate-api` and `validate-web`, adds a `publish` job that builds and pushes both images, and adds Dragonfly env vars required for cluster-emulated mode. The `steps:` key is correctly placed, and both validate jobs follow sound patterns.
- `apps/web/Dockerfile`: Serves the build with `nginx:alpine`. Installs `jq` for JSON-safe env var escaping in the entrypoint, and copies the entrypoint script to `/docker-entrypoint.d/` so nginx's own entrypoint runs it before starting nginx. Clean and correct.
- `apps/web/docker-entrypoint.sh`: Generates `env-config.js` at container start using `jq` for safe JSON escaping of each env var. Addresses the previously noted XSS/broken-syntax concern. No new issues.
- `apps/web/src/env.ts`: Adds `getEnv()` with a `window.__SWEETR_ENV__` → `import.meta.env` fallback chain using `??`. The `??` operator correctly skips the fallback when `window.__SWEETR_ENV__` is `undefined` (Vite dev), and in Docker the runtime values are authoritative.
- `bin/deploy`: Generates `.env`, `Caddyfile`, and `docker-compose.yml`, then starts the stack. `JWT_SECRET`, `DB_PASSWORD`, and `GITHUB_WEBHOOK_SECRET` are correctly preserved on re-runs, but all five GitHub App credentials (and Slack credentials) are re-prompted on every execution — this is a meaningful usability and reliability concern for re-deployments.

Sequence Diagram
```mermaid
sequenceDiagram
    participant User
    participant deploy as bin/deploy
    participant FS as sweetr/.env + files
    participant DC as Docker Compose
    participant Caddy
    participant Web as sweetr-web (nginx)
    participant API as sweetr-api (Node)
    participant PG as sweetr-postgres
    participant DF as sweetr-dragonfly

    User->>deploy: curl ... | bash
    deploy->>deploy: check_requirements (docker, compose)
    deploy->>deploy: collect_deploy_mode + collect_domain
    deploy->>deploy: collect_github_app (prompt credentials)
    deploy->>deploy: collect_slack_app (optional)
    deploy->>FS: generate .env, Caddyfile, docker-compose.yml
    deploy->>DC: docker compose pull && up -d
    DC->>PG: start (healthcheck: pg_isready)
    DC->>DF: start (healthcheck: redis-cli ping)
    DC->>API: start (waits for PG + DF healthy)
    API->>PG: prisma migrate deploy
    API->>API: npm run start (port 8000)
    Note over API: healthcheck: curl localhost:8000/health
    DC->>Web: start
    Note over Web: docker-entrypoint.d/00-env-config.sh<br/>generates env-config.js via jq
    Web->>Web: nginx serves SPA on :80
    DC->>Caddy: start (depends on api + web)
    Caddy->>Caddy: TLS via Let's Encrypt (prod)
    deploy->>DC: docker inspect sweetr-api health (poll)
    DC-->>deploy: healthy
    deploy-->>User: ✓ Sweetr is up — https://domain
```

Comments Outside Diff (4)
apps/web/src/providers/sentry.provider.ts, lines 20-24:

Hardcoded `tracePropagationTargets` breaks distributed tracing for self-hosted deployments. `tracePropagationTargets` only lists `localhost`, `api.sweetr.local`, and `api.sweetr.dev`. For any self-hosted instance, the API lives on the user's custom domain (e.g., https://mycompany.example.com/api). Sentry will not propagate trace headers to API requests from self-hosted frontends, making distributed tracing silently non-functional for all self-hosted users who enable Sentry. Since this PR introduces self-hosting, the `initSentry()` call already has access to the configured `API_ENDPOINT` via `getEnv`; the target list should include it.

apps/web/src/providers/sentry.provider.ts, lines 20-24:

Hardcoded `tracePropagationTargets` breaks distributed tracing for self-hosted instances. `tracePropagationTargets` is hardcoded to `localhost`, `api.sweetr.local`, and `api.sweetr.dev`. For any self-hosted instance using a custom domain (e.g., sweetr.example.com), Sentry will not inject trace headers into API requests because the API endpoint doesn't match any of these patterns. Distributed traces between the web frontend and the API will be broken for all self-hosted users. The `API_ENDPOINT` env var is already available via `getEnv`; consider deriving the target dynamically.

apps/web/src/providers/sentry.provider.ts, lines 20-24:

Hardcoded `tracePropagationTargets` breaks Sentry distributed tracing for self-hosted instances. `tracePropagationTargets` is hardcoded to `api.sweetr.local` and `api.sweetr.dev`. For a self-hosted instance running at, say, sweetr.example.com, all API requests go to https://sweetr.example.com/api, which doesn't match any of these patterns. As a result, Sentry won't inject the `sentry-trace` and `baggage` headers on outgoing API requests, and distributed traces won't be linked end-to-end for self-hosted users even when they configure their own `SENTRY_DSN`. Consider including the runtime `API_ENDPOINT` as a target.

apps/web/src/providers/sentry.provider.ts, lines 20-24:

`tracePropagationTargets` hardcodes Sweetr cloud domains. The `tracePropagationTargets` list only includes `localhost`, `api.sweetr.local`, and `api.sweetr.dev`. For self-hosted deployments — the entire purpose of this PR — the API endpoint will be some arbitrary domain (e.g. https://sweetr.mycompany.com/api). No Sentry trace headers will be forwarded to those requests, breaking distributed tracing for every self-hoster who configures a `SENTRY_DSN`. Consider building the pattern dynamically from the runtime `API_ENDPOINT` value.

Last reviewed commit: "fix: healthcheck"