mpsky: solar system ephemerides computation service for PP #4609


Open
mjuric wants to merge 2 commits into main

Conversation

mjuric (Collaborator) commented Apr 28, 2025

This PR adds an mpsky application, deploying a containerized version of MPSky that can be accessed by PP pipelines at nighttime.

This service is meant to be used purely internally (by the PP pipeline), is read-only, and doesn't come with authentication. On KTL's recommendation, to make it visible internally (and inaccessible externally) we've dropped the Ingress and changed the service type to a LoadBalancer instead, using an IP from the sdf-services pool (see the Slack discussion linked in the commit message below).
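For reference, a minimal sketch of what a Service along these lines could look like (illustrative only; the selector labels and port are assumptions drawn from elsewhere in this PR, and the real template may arrange the MetalLB annotation and the pinned IP differently):

apiVersion: v1
kind: Service
metadata:
  name: "mpsky"
  # Alternative: let MetalLB choose any free address from the internal pool:
  # annotations:
  #   metallb.universe.tf/address-pool: sdf-services
spec:
  type: LoadBalancer
  # Pin an internally routable address taken from the sdf-services pool.
  loadBalancerIP: 172.24.5.245
  selector:
    app.kubernetes.io/name: "mpsky"
  ports:
    - protocol: "TCP"
      port: 8080
      targetPort: 8080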

This is an initial merge, with more development expected to occur over the next few days; that's why the image tag is currently set to 'latest'.

mjuric added 2 commits April 27, 2025 18:48
The image has now been pointed to ghcr.io/mjuric/mpsky-daily, and the
starting command has been updated accordingly.  At the moment, the image
hardcodes some of the input files, which will soon be moved to a bucket.
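As a rough sketch of what the corresponding image settings in the chart's values.yaml might look like (the field names follow common Helm conventions and are assumptions, not taken from this PR):

image:
  # Daily-built MPSky image; the tag stays at 'latest' while development continues.
  repository: "ghcr.io/mjuric/mpsky-daily"
  tag: "latest"
  # 'Always' is assumed here because the 'latest' tag is mutable.
  pullPolicy: "Always"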

This service needs to be accessible only internally, from within USDF.  On
KTL's recommendation, we've therefore dropped the Ingress and changed the
service type to a LoadBalancer instead, using an IP from the sdf-services pool.

See discussion at:
https://rubin-obs.slack.com/archives/C07Q45N7KHV/p1742520202629099
@mjuric changed the title from "WIP: mpsky: solar system ephemerides computation service for PP" to "mpsky: solar system ephemerides computation service for PP" on Apr 28, 2025
@mjuric mjuric requested review from rra, ktlim and pav511 and removed request for ktlim April 28, 2025 01:54
@jonathansick jonathansick self-requested a review May 27, 2025 13:28
Comment on lines +7 to +9
annotations:
# metallb.universe.tf/address-pool: sdf-services
loadBalancerIP: 172.24.5.245

This IP is being baked into the service template, and therefore will be the same for all Phalanx environments where mpsky is deployed (usdf-rsp-dev, usdf-rsp, etc.). I think this may need to be different for each deployment. To do this, add a value to values.yaml that will get overridden for each environment (i.e., in values-usdfdev.yaml). To handle the awkward initial deployment, you could do something like set the default loadBalancerIP to "none" and do a conditional check to use the sdf-services annotation instead, like this (I haven't checked this syntax, so verify it first):

Suggested change (replacing the three template lines quoted above):

annotations:
{{- if eq .Values.loadBalancerIP "none" }}
metallb.universe.tf/address-pool: sdf-services
{{- else }}
loadBalancerIP: {{ .Values.loadBalancerIP | quote }}
{{- end }}

Lastly, add a comment here that captures the Slack discussion about how this IP address is determined.
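For example, the override could be wired up roughly like this (a sketch only; which IP belongs to which environment, and the exact environment file name, would need to be confirmed against that discussion):

# values.yaml (chart default; keeps the template environment-agnostic)
loadBalancerIP: "none"

# values-usdfdev.yaml (per-environment override)
loadBalancerIP: "172.24.5.245"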

Comment on lines +1 to +21
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: "mpsky"
spec:
  podSelector:
    matchLabels:
      {{- include "mpsky.selectorLabels" . | nindent 6 }}
  policyTypes:
    - "Ingress"
  ingress:
    # Allow inbound access from pods (in any namespace) labeled
    # gafaelfawr.lsst.io/ingress: true.
    - from:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              gafaelfawr.lsst.io/ingress: "true"
      ports:
        - protocol: "TCP"
          port: 8080

Is the NetworkPolicy here necessary, given that gafaelfawr and ingress are out of the picture?

@athornton (Member) commented:

So when you say "used internally" you mean "accessible only to things running inside the USDF network, but not necessarily on this K8s cluster"?
