Deploy guide

Host S1 yourself.

One docker compose up and S1 is running on your box. No SaaS in the middle, no egress bills, no metered viewers. Pull the latest build whenever you want; your cameras, users, and share links stay put.

Step 0

What you'll need

A quiet 15 minutes, one machine, and line of sight to your cameras.

  • A host with Docker installed. Linux, macOS, or Windows with WSL2. A modest VM, a Raspberry Pi 4/5, or a spare desktop all work. For transcoded streams (HEVC cameras, MJPEG sources), plan ~1 CPU core per active viewer.
  • Network reachability to your cameras. The machine running S1 must be able to open RTSP to each camera's IP and port (typically 554, sometimes 8554). Private LAN is fine.
  • A free TCP port on the host. Default is 3000; you can remap it.
  • Optional — a domain and reverse proxy (Caddy, nginx, Cloudflare Tunnel) if you want HTTPS and a nice URL. For a LAN-only test box, skip this.
Why Docker? It ships the right ffmpeg build, the right Node version, and a persistent volume for the embedded SQLite database in one tagged artifact. Updating means pull, not apt-get.
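Before installing anything, you can sanity-check camera reachability from the host. A minimal sketch using bash's built-in /dev/tcp (the camera address below is a placeholder; substitute your own):

```shell
#!/usr/bin/env bash
# Sketch: confirm the host can open a TCP connection to a camera's RTSP port.
# Uses bash's /dev/tcp pseudo-device, so no extra tools are needed.
check_rtsp_port() {
  local host=$1 port=${2:-554}
  if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port reachable"
  else
    echo "$host:$port unreachable"
  fi
}

# Hypothetical camera IP; replace with yours.
check_rtsp_port 192.168.1.50 554
```

A "reachable" result only proves the port is open; a wrong path or bad credentials will still fail later, which is what the Test URL probe in Step 4 is for.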
Express path

Easy button — bootstrap script

Three commands on a fresh Ubuntu/Debian box. Installs Docker, adds you to the docker group, writes .env with a fresh secret, and launches. Skip to Step 1 if you'd rather walk through it by hand.

sudo apt-get install -y git
git clone https://github.com/networkyoda/s1.git s1 && cd s1
./scripts/bootstrap-ubuntu.sh

The script prints the URL to browse to when it's done. It's re-runnable — each step is a no-op if already done.

Prefer one command at a time? The seven-command manual version:

# Docker — one line, official script
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker $USER && newgrp docker

# Get S1 and launch
sudo apt-get install -y git
git clone https://github.com/networkyoda/s1.git s1 && cd s1
cp .env.example .env
sed -i "s|replace-me-with-a-long-random-string|$(openssl rand -hex 32)|" .env
docker compose up -d
Tradeoff (both paths). get.docker.com is maintained by Docker Inc., but it runs as root and adds an apt repo without letting you review first. Docker's own docs call it "not recommended for production." For a test box the convenience is worth it; for a hardened production host, use the manual apt-repo sequence from Docker's official docs and then follow Steps 1–4 below.
Step 1

Clone the repo

All the configuration you need is checked in — docker-compose.yml, the Dockerfile, and a template env file.

git clone https://github.com/networkyoda/s1.git s1
cd s1
Step 2

Create your .env

One required secret (the auth signing key), two optional knobs.

cp .env.example .env
# Generate a random signing key and paste it in as JWT_SECRET:
openssl rand -hex 32

Open .env in your editor. The shipped values are safe defaults except for JWT_SECRET — change that to the output of the openssl command above so auth tokens are signed with a key only you know.

Don't commit your .env. It's in .gitignore already, but double-check before you push a fork anywhere public.
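As a quick sanity check (a sketch, assuming you pasted in the 64-character hex output of openssl rand -hex 32), confirm the placeholder is actually gone:

```shell
#!/usr/bin/env bash
# Sketch: verify JWT_SECRET was replaced with a 64-char hex key.
check_secret() {
  if grep -qE '^JWT_SECRET=[0-9a-f]{64}$' "${1:-.env}" 2>/dev/null; then
    echo "JWT_SECRET looks good"
  else
    echo "JWT_SECRET still needs changing"
  fi
}

check_secret .env
```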
Step 3

Launch it

Compose pulls the image, starts the container, and mounts a named volume for your SQLite database.

docker compose up -d
docker compose logs -f

You should see s1 listening on 0.0.0.0:3000 within a second or two. Ctrl-C out of the log tail whenever you're satisfied.

First pull failing? The image lives at ghcr.io/networkyoda/s1:latest and GHCR defaults to private. Either flip the package to public in the repo's Packages tab, or docker login ghcr.io with a GitHub PAT that has read:packages. While you're waiting, docker compose up -d --build builds the image locally from the Dockerfile in the repo.
Step 4

Register and add a camera

The first user you register becomes the admin of that instance.

  1. Browse to http://<host>:3000 — on the same machine, http://localhost:3000.
  2. Hit Create one on the sign-in card. Pick a username and a password. No email, no verification flow.
  3. On the dashboard, click + Add stream. Paste your camera's full RTSP URL, including any user:pass@ credentials.
  4. Open Edit → Test URL if the stream doesn't start within a few seconds. The probe reports codec, resolution, frame rate, or a readable error if the URL is wrong.
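RTSP URL formats vary by vendor; a typical shape (hypothetical IP, credentials, and path; your camera's manual has the real path) looks like:

```
rtsp://admin:secret@192.168.1.50:554/stream1
```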

Share/embed codes are generated from the per-stream Share menu. The embed is tap-to-play by design so an <iframe> on a page nobody reads doesn't pull bandwidth from your cameras.
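The exact snippet comes from the Share menu itself; as a rough illustration only (the URL path and share-code placeholder here are hypothetical), the embed is an iframe along these lines:

```
<iframe src="http://<host>:3000/embed/<share-code>" width="640" height="360" allowfullscreen></iframe>
```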

Keeping up to date

Upgrading

GitHub Actions publishes a fresh image on every push to main. Upgrading to the latest means about ten seconds of downtime.

cd /path/to/s1
git pull
docker compose down
docker compose pull
docker compose up -d

Your s1-data volume — the one holding the SQLite database with users, streams, and share links — survives down and is re-attached to the new container. A clean wipe (if you ever want one) is docker compose down -v.

Reference

Configuration

Every knob is an environment variable. See src/config.js for the source of truth.

Variable        Default               Notes
JWT_SECRET      dev-secret-change-me  Required. Any long random string; keep it stable across restarts.
PORT            3000                  Host port mapped to the container.
TRUST_PROXY     1                     false for direct LAN access, a hop count (or true) when behind a reverse proxy.
HOST            0.0.0.0               Bind address inside the container. Rarely needs changing.
DB_PATH         /data/rtspme.db       SQLite file path inside the container. The named volume lands at /data.
FFMPEG_PATH     ffmpeg                Only set if you've got a custom ffmpeg build on PATH.
MARKETING_HOME  false                 Leave off for self-hosting — / serves the login screen. The hosted product sets this to true to surface the public landing page at /.
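Putting a few of these together, a plausible .env for a box behind a reverse proxy (values are illustrative; only the variable names come from the table):

```
JWT_SECRET=<output of: openssl rand -hex 32>
PORT=8080            # serve on host port 8080 instead of 3000
TRUST_PROXY=1        # one reverse-proxy hop (e.g. Caddy) in front
```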
When things misbehave

Troubleshooting

The dashboard's per-stream Logs button is the first place to look; it tails the last few hundred ffmpeg lines per stream.

  • Stream never leaves "Connecting" — usually the RTSP URL is wrong or unreachable. Open Edit → Test URL; the probe will say Connection refused, 401 Unauthorized, or similar.
  • HEVC camera plays on some browsers but not others — S1 auto-transcodes HEVC to H.264 so every browser works. If transcoding isn't happening, check Logs for "source video codec is hevc — using transcode" on session start.
  • CPU pegged — transcoded streams are the cost driver. Reduce concurrent viewers, or route high-resolution HEVC cameras to a beefier host.
  • Healthcheck keeps failing — the container defines a healthcheck that hits /healthz. If docker ps shows unhealthy, pull container logs with docker compose logs s1.
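When eyeballing logs, a tiny filter can help surface the usual suspects. A sketch (the error patterns are illustrative, not an exhaustive list of what S1 or ffmpeg emits):

```shell
#!/usr/bin/env bash
# Sketch: filter a log stream for common failure lines.
flag_errors() {
  grep -iE 'error|unauthorized|connection refused|unhealthy' || echo "no obvious errors"
}

# Usage against a live instance:
#   docker compose logs --tail=200 s1 | flag_errors
```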