This set of modules contains a web server and the client for the Observe tool.
The backend is written with http4s; it exposes a REST API and can serve static files for the UI. It is intended to run in the same process as the observe-server.
This project uses an extra plugin:
- sbt-revolver: This plugin allows restarting the web server and triggering a recompilation when the source code changes.
Go to the JVM project and start the server in restart mode:

```
project observe_web_server
~reStart
```
Now every time a file in the server code changes, the sources will be recompiled and the server will restart.
By default the REST backend will run on port 7070.
It can be stopped by executing

```
reStop
```

from within the project.
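To check that the server came up, you can hit that port directly. A minimal sketch, assuming plain HTTP on the root path during development:

```sh
# Expect an HTTP response; the exact status and body depend on the configured routes.
curl -i http://localhost:7070/
```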
This module contains a web-based observe client: a single-page application (SPA) that communicates with the backend using Ajax-style calls and websockets.
Deployment is done via a Docker image.
When a PR is merged into `main`, CI automatically builds a Docker image called `noirlab/gpp-obs`. This image contains the configurations for all six deployments: each of the sites (GN and GS) combined with each of our three environments (Heroku, staging, and production).
The image is automatically pushed to Heroku and to the `nlsoftware` account in Dockerhub, and both Heroku applications are released after the push. The image is not released automatically in the staging and production environments.
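On those servers the image is updated explicitly, normally via the scripts described below, which boils down to pulling the image from Dockerhub. A minimal sketch, assuming the default `latest` tag:

```sh
docker pull noirlab/gpp-obs:latest
```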
There are several shell scripts in `deploy/scripts` that should be copied to the `~/observe` directory on each server. Then the correct site needs to be configured in `config.sh`. These scripts are:
- `update.sh` (will automatically stop and restart a running server)
- `start.sh`
- `stop.sh`
A Dockerhub access token has to be configured in `config.sh` too. Ideally, a read-only token should be used.
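As an illustration, a minimal sketch of what `config.sh` might contain; the variable names here are assumptions, not the actual script contents:

```sh
# Hypothetical variable names; check the actual scripts in deploy/scripts.
SITE="GS"                            # GN or GS
DOCKERHUB_TOKEN="<read-only token>"  # access token for pulling the image
```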
The base image contains a basic configuration called `base.conf`. This same file is used during development.
Each server has a `site.conf` providing overrides. The `site.conf` for each server is under `deploy/confs`; this is where they should be edited, making a new release when they change.
The only things not included in `site.conf` are the secrets: namely, the SSO service token and the passphrases needed for TLS in staging and production. For Heroku, the SSO service token needs to be provided as an environment variable. For staging and production, the SSO service token and TLS passphrases need to be provided in a local file, `~/observe/conf/secrets.conf`; the whole `~/observe/conf` directory is bind mounted into the container by `start.sh`.
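As an illustration of that mount, a sketch of the kind of `docker run` invocation `start.sh` performs; the container name, internal config path, and port mapping are assumptions:

```sh
# Hypothetical invocation; the real start.sh may differ in names, paths, and ports.
docker run -d \
  --name observe \
  -p 443:443 \
  -v "$HOME/observe/conf:/opt/observe/conf" \
  noirlab/gpp-obs
```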
A typical `secrets.conf` will look like this:
```
lucuma-sso {
  service-token = "<INSERT TOKEN HERE>"
}

web-server {
  tls {
    key-store-pwd = "<INSERT PASSPHRASE HERE>"
    cert-pwd = "<INSERT PASSPHRASE HERE>"
  }
}
```
Furthermore, observe will also read `~/observe/conf/systems.conf`, which should be used to provide frequently changing overrides; the more permanent configuration belongs in `site.conf`. In particular, any system that is to be set to `full` should be set there, in `site.conf`; otherwise it will remain `simulated`.
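For example, a hedged sketch of such a systems override; the key names are illustrative assumptions, so check `base.conf` for the actual structure:

```
# Hypothetical keys; consult base.conf for the real structure.
systems {
  tcs  = full       # control the real TCS
  gcal = simulated  # keep GCAL simulated
}
```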
NOTE:
- To generate a service token, see the lucuma-sso documentation.