tsbridge acts as a tsnet-powered reverse proxy, letting you expose multiple backend services on your Tailnet from a single process. It's designed for homelabs and development environments where you want the magic of Tailscale without the hassle of running a separate sidecar for every service.
Inspired by Traefik, tsbridge can be configured with a simple TOML file or by watching Docker for container labels.
I got tired of spinning up a new tsnsrv instance for every service I wanted to expose on my Tailnet. Each one needs its own systemd service and configuration. With tsbridge, you configure once and add services as needed - either through a config file or by just adding labels to your Docker containers.
Grab a binary from releases or:

```bash
go install github.com/jtdowney/tsbridge/cmd/tsbridge@latest
```

- Get OAuth credentials from https://login.tailscale.com/admin/settings/oauth
- Create `tsbridge.toml`:

```toml
[tailscale]
oauth_client_id_env = "TS_OAUTH_CLIENT_ID"
oauth_client_secret_env = "TS_OAUTH_CLIENT_SECRET"

[[services]]
name = "api"
backend_addr = "127.0.0.1:8080"

[[services]]
name = "web"
backend_addr = "unix:///var/run/web.sock"
```

- Run it:

```bash
export TS_OAUTH_CLIENT_ID=your-id
export TS_OAUTH_CLIENT_SECRET=your-secret
tsbridge -config tsbridge.toml
```

tsbridge will now be available on your tailnet. Thanks to MagicDNS, you can reach your services at https://api.<tailnet>.ts.net and https://web.<tailnet>.ts.net.
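To sanity-check the new nodes from another device on your tailnet, a plain HTTPS request should work; the hostname here is illustrative:

```bash
# Replace <tailnet> with your tailnet name.
curl https://api.<tailnet>.ts.net/
```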
tsbridge is configured via `tsbridge.toml`. See docs/quickstart.md to get started quickly, or docs/configuration-reference.md for all options.
Here are a few common settings (a config sketch follows the list):

- `whois_enabled`: Set to `true` to add `Tailscale-User-*` identity headers to upstream requests
- `write_timeout`: Defaults to `30s`. Set to `"0s"` to support long-running connections like Server-Sent Events (SSE)
- `metrics_addr`: Expose a Prometheus metrics endpoint (e.g., `":9090"`) - see docs/metrics.md for available metrics (secure this endpoint in production)
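A sketch of how these might appear in `tsbridge.toml`; whether each key belongs at the top level or inside a `[[services]]` table is an assumption here, so verify placement against docs/configuration-reference.md:

```toml
# Sketch only - verify key placement against docs/configuration-reference.md.
[[services]]
name = "events"
backend_addr = "127.0.0.1:9000"
whois_enabled = true   # assumed per-service: adds Tailscale-User-* headers
write_timeout = "0s"   # assumed per-service: allow long-lived SSE streams
# metrics_addr (e.g. ":9090") is configured separately; see docs/metrics.md.
```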
Security Note: tsbridge is intended for homelabs and development environments. It hasn't been hardened or battle-tested for production workloads. See THREAT_MODEL.md for details.
For enhanced security using tag ownership, see Tag Ownership and OAuth Security.
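For context, tag ownership is declared in your tailnet policy file (ACL). A minimal sketch, with an illustrative tag name that should match whatever your OAuth client is allowed to assign:

```jsonc
// Tailnet policy file sketch - tag name is illustrative.
{
  "tagOwners": {
    "tag:server": ["autogroup:admin"]
  }
}
```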
```bash
docker run -v /path/to/config:/config \
  -e TS_OAUTH_CLIENT_ID=... \
  -e TS_OAUTH_CLIENT_SECRET=... \
  ghcr.io/jtdowney/tsbridge:latest -config /config/tsbridge.toml
```

tsbridge can watch Docker and automatically expose containers based on their labels:
```yaml
# docker-compose.yml
services:
  tsbridge:
    image: ghcr.io/jtdowney/tsbridge:latest
    command: ["--provider", "docker"]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # Required for label discovery
      - tsbridge-state:/var/lib/tsbridge
    environment:
      - TS_OAUTH_CLIENT_ID=${TS_OAUTH_CLIENT_ID}
      - TS_OAUTH_CLIENT_SECRET=${TS_OAUTH_CLIENT_SECRET}
    labels:
      - "tsbridge.tailscale.oauth_client_id_env=TS_OAUTH_CLIENT_ID"
      - "tsbridge.tailscale.oauth_client_secret_env=TS_OAUTH_CLIENT_SECRET"
      - "tsbridge.tailscale.state_dir=/var/lib/tsbridge"
      - "tsbridge.tailscale.default_tags=tag:server" # Must match or be owned by your OAuth client's tag

  whoami:
    image: traefik/whoami
    labels:
      - "tsbridge.enabled=true"
      - "tsbridge.service.name=whoami"
      - "tsbridge.service.port=80"

volumes:
  tsbridge-state:
```

See docs/docker-labels.md for the full label reference.
Note: The `default_tags` must match or be owned by your OAuth client's tag. Individual services can override this with their own `tags` label (sketched below). See Tag Ownership and OAuth Security for setup details.
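For example, a compose service could pin its own tag like this; the exact label key is an assumption here, so confirm it against docs/docker-labels.md:

```yaml
  # Hypothetical compose service showing a per-service tag override.
  internal-app:
    image: traefik/whoami
    labels:
      - "tsbridge.enabled=true"
      - "tsbridge.service.name=internal-app"
      - "tsbridge.service.port=80"
      - "tsbridge.service.tags=tag:internal" # assumed label key; tag must be owned by your OAuth client
```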
tsbridge works with Headscale, but you must use auth keys instead of OAuth, and TLS must be disabled (until headscale issue #2137 is implemented):

```toml
[tailscale]
auth_key_env = "TS_AUTH_KEY"
control_url = "https://headscale.example.com"

[[services]]
name = "api"
backend_addr = "127.0.0.1:8080"
tls_mode = "off" # Required for Headscale
```

See example/headscale/ for a complete setup.
```bash
make build # Build binary
make test  # Run tests
make lint  # Run linters
```

When `metrics_addr` is configured, tsbridge exposes Prometheus metrics at http://<metrics_addr> (see the scrape sketch after the list):
- Request counts and latencies
- Error rates
- Active connections
- Service lifecycle events
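If you collect these with Prometheus, a minimal scrape job might look like this; the target host and port are illustrative:

```yaml
# prometheus.yml sketch - target host/port are illustrative.
scrape_configs:
  - job_name: "tsbridge"
    static_configs:
      - targets: ["tsbridge-host:9090"]
```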
Each service runs its own tsnet instance with isolated state, enabling independent lifecycle management and per-service configuration.
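Conceptually, this is close to creating one `tsnet.Server` per service, each with its own hostname and state directory. A simplified sketch using the public tsnet API (not tsbridge's actual code; names and paths are illustrative):

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"

	"tailscale.com/tsnet"
)

func main() {
	// One tsnet.Server per service, with an isolated state directory.
	backend, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}

	srv := &tsnet.Server{
		Hostname: "api",                   // becomes api.<tailnet>.ts.net
		Dir:      "/var/lib/tsbridge/api", // per-service state
	}
	defer srv.Close()

	// Listen on the tailnet with Tailscale-provisioned TLS.
	ln, err := srv.ListenTLS("tcp", ":443")
	if err != nil {
		log.Fatal(err)
	}

	// Reverse-proxy tailnet requests to the local backend.
	log.Fatal(http.Serve(ln, httputil.NewSingleHostReverseProxy(backend)))
}
```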
MIT License - see LICENSE file for details.
