[[!meta title="Deploying Kubernetes Services"]]

## Overview
Are you looking to deploy a new service to the OCF Kubernetes cluster or port
an existing service from [[Marathon|doc staff/backend/mesos#h2_marathon]]? This
document will cover the steps required to do so. Note that this is not a
substitute for a Kubernetes or Docker tutorial (there are plenty of resources
online for those), but a guide to getting your service running on the OCF.

## Getting started
This `HOWTO` will focus on one of the OCF's simplest services:
[templates][templates]. Templates is a service used internally by OCF staff
that serves 'copy-pasteable' email templates. You'll use [[git|doc
staff/backend/git]] to `clone` your repo, so let's start by grabbing the
templates repo.

```
git clone git@github.com:ocf/templates.git
```

In the root of your project create a `kubernetes` folder. This is where all
your Kubernetes configuration files will live. Templates, a relatively simple
service, is a single `nginx` server serving static content. Because the
application is self-contained, we only need to create one file,
`kubernetes/templates.yaml`, as sketched below.

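A minimal sketch of that setup, assuming you are working from the freshly
cloned repo:

```
cd templates
mkdir kubernetes
# the Service, Deployment, and Ingress definitions below all go in this file
touch kubernetes/templates.yaml
```
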
## Service

Since templates is a web service, we will first create a `Service` object. The
first step in making your Kubernetes service internet-facing is to make your
application accessible within the Kubernetes cluster. In most cases you can
simply fill in this template.

```
apiVersion: v1
kind: Service
metadata:
  name: <myapp>-service
spec:
  selector:
    app: <myapp>
  ports:
    - port: 80
      targetPort: <docker-port>
```

The `name` field under the `metadata` resource is the name Kubernetes uses to
identify your `Service` object when, for example, you run `kubectl get
services`. The `selector` resource is the label you will use to bind `Pods` to
this `Service` object. Fill in `targetPort` with the port that your
application uses _inside_ of the Docker container. In the case of templates we
bind to port `8000`. Here is the `Service` configuration for templates with all
the fields filled in.

```
apiVersion: v1
kind: Service
metadata:
  name: templates-service
spec:
  selector:
    app: templates
  ports:
    - port: 80
      targetPort: 8000
```

## Creating the deployment

Great! Now let's move on to creating our Pods! To do this we'll create a
`Deployment` object. Deployments can become complicated with
application-specific configuration, but the simplicity of Templates elucidates
the bare-bones requirements for any Deployment.

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <myapp>-deployment
  labels:
    app: <myapp>
spec:
  replicas: <#pods>
  selector:
    matchLabels:
      app: <myapp>
  template:
    metadata:
      labels:
        app: <myapp>
    spec:
      containers:
        - name: <container-name>
          image: "docker.ocf.berkeley.edu/<your-repo-name>:<%= version %>"
          resources:
            limits:
              memory: <#Mi>
              cpu: <cpus-in-millicores>m
          ports:
            - containerPort: <docker-port>
```

This section can be a bit daunting, but we'll go through it step-by-step. Fill
in `<myapp>` and `<docker-port>` with the same values you used in your
`Service`. This will ensure your Pods are bound to the `Service` we previously
created. `replicas` is the number of instances we want. Because Templates is
used internally by OCF staff, we aren't super concerned with uptime and create
only 1 instance. For a service like `ocfweb`, where uptime is crucial, we would
opt for 3 instances to handle failover.

The `containers` resource is where Kubernetes looks to obtain Docker images
to deploy. For production services this will _always_ be the OCF Docker
server: `docker.ocf.berkeley.edu`. `<your-repo-name>` is the name of the
repository on the OCF GitHub, and `<%= version %>` will be filled in
automatically by [[Jenkins|doc staff/backend/jenkins]]. For testing, it is
recommended you push your image to [DockerHub][dockerhub] or to
`docker.ocf.berkeley.edu` (talk to a root staffer in the latter case) and use
a hardcoded image name.

Lastly, we set our resource limits. Templates is a low-resource service, so
we'll give it 128 megabytes of memory and `50/1000` of a CPU core (Kubernetes
uses millicores for CPU units, so 1 core = 1000m). Do note that every instance
of the application gets these resources, so with _N_ instances you are using
_N * limits_.

WARNING: On a low-resource development cluster, asking for too much CPU or RAM
can put your application in an infinite `Pending` loop since the cluster will
never have enough resources to schedule your service (yes, this has happened
to us).

With all the fields filled in, we have this Deployment object for Templates.

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: templates-deployment
  labels:
    app: templates
spec:
  replicas: 1
  selector:
    matchLabels:
      app: templates
  template:
    metadata:
      labels:
        app: templates
    spec:
      containers:
        - name: templates-static-content
          image: "docker.ocf.berkeley.edu/templates:<%= version %>"
          resources:
            limits:
              memory: 128Mi
              cpu: 50m
          ports:
            - containerPort: 8000
```

## Ingress

The last object we need to create for the Templates service is an `Ingress`.
We want to expose our service to the world with the fully-qualified domain
name `templates.ocf.berkeley.edu`. Ingress objects, like Service objects, look
similar for most services.

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: virtual-host-ingress
spec:
  rules:
    - host: <myapp>.ocf.berkeley.edu
      http:
        paths:
          - backend:
              serviceName: <myapp>-service
              servicePort: 80
```

Note that `serviceName` _must_ match the name used in the `Service` object.
Now that we have an Ingress, all requests with the `Host` header
`templates.ocf.berkeley.edu` will be directed to a Templates Pod! The
filled-in Ingress for templates is sketched below.

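For reference, here is the Ingress for templates with the placeholders filled
in (the `metadata` name is carried over from the template above):

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: virtual-host-ingress
spec:
  rules:
    - host: templates.ocf.berkeley.edu
      http:
        paths:
          - backend:
              serviceName: templates-service
              servicePort: 80
```
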
## Deployment extras

### OCF DNS

If your application at any point uses OCF-specific DNS, like using the
hostname `mysql` as opposed to `mysql.ocf.berkeley.edu` to access `MariaDB`,
then you need to add this under the Pod `spec` in your Deployment (that is,
under `spec.template.spec`).

```
dnsPolicy: ClusterFirst
dnsConfig:
  searches:
    - "ocf.berkeley.edu"
```
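To show where this lands, here is a sketch using the templates Deployment from
earlier (templates itself does not actually need OCF-specific DNS; this is
just for illustration):

```
spec:
  template:
    spec:
      dnsPolicy: ClusterFirst
      dnsConfig:
        searches:
          - "ocf.berkeley.edu"
      containers:
        - name: templates-static-content
          # image, resources, and ports as shown above
```
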

### NFS

If your application does not need access to the filesystem, you can skip this
section. If your application needs to keep state, explore `MariaDB` as a much
simpler option before making use of `NFS`.

For Kubernetes to access the filesystem we need two objects: a
`PersistentVolume` and a `PersistentVolumeClaim`. The former maps a filesystem
to the cluster, and the latter is how a service asks to access that
filesystem. You will need to create the `PersistentVolume` in [Puppet][puppet]
as `<myapp>-nfs-pv.yaml`. In this example we'll create 30 gigabytes of
readable and writable storage.

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <myapp>-nfs-pv
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /opt/homes/services/<myapp>
    server: filehost.ocf.berkeley.edu
    readOnly: false
```

That's all you need to add to Puppet. Now you need to add the
`PersistentVolumeClaim` object to your service. Here we will claim all 30
gigabytes of the volume we added in Puppet.

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <myapp>-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 30Gi
  volumeName: "<myapp>-nfs-pv"
```

In our Deployment we add a `volumes` sequence under the Pod `spec`
(`spec.template.spec`). The `claimName` must match the name of the
`PersistentVolumeClaim` you just created.

```
volumes:
  - name: <myapp-data>
    persistentVolumeClaim:
      claimName: <myapp>-pvc
```

Now we've set up the volume claim. Finally, we need to tell Kubernetes to
mount this `PVC` into our Docker container. Under the container's entry in
the `containers` resource, add:

```
volumeMounts:
  - mountPath: /target/path/in/my/container
    name: <myapp-data>
```
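Putting the NFS pieces together, the relevant parts of the Deployment's Pod
template would look roughly like this (a sketch only; templates itself does
not use NFS):

```
spec:
  template:
    spec:
      containers:
        - name: <container-name>
          # image, resources, and ports as before
          volumeMounts:
            - mountPath: /target/path/in/my/container
              name: <myapp-data>
      volumes:
        - name: <myapp-data>
          persistentVolumeClaim:
            claimName: <myapp>-pvc
```
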

## Wrapping up

Now we have all the necessary configuration to deploy our service. To see if
everything works, we will deploy the service manually. On `supernova`, first
run `kinit`. This will obtain a [[kerberos|doc staff/backend/kerberos]] ticket
giving us access to the Kubernetes cluster. Now run

```
kubectl create namespace <myapp>
kubectl apply -n <myapp> -f <myapp>.yaml
```

You can run `kubectl -n <myapp> get all` to watch Kubernetes create your
`Service` and `Deployment` objects.

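If the Pods don't come up as expected, a few standard `kubectl` commands are
useful for debugging (`<pod>` is whatever Pod name `get pods` shows):

```
kubectl -n <myapp> get pods              # are the Pods Running, Pending, or crashing?
kubectl -n <myapp> describe pod <pod>    # shows events, e.g. why a Pod is stuck Pending
kubectl -n <myapp> logs <pod>            # container logs
```
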
### Production Services: Setting up DNS

If you are testing your deployment, use
`<myapp>.dev-kubernetes.ocf.berkeley.edu` as your Ingress host and that will
work immediately. When you deploy your service to production, make sure to
follow the instructions below.

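For example, a test deployment of templates would only change the `host` in
the Ingress rule:

```
rules:
  - host: templates.dev-kubernetes.ocf.berkeley.edu
```
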
The final step to make your service live is to create a DNS entry for your
Kubernetes service. You will need to clone the OCF dns repo.

```
git clone git@github.com:ocf/dns.git
```

Since we are adding DNS for a Kubernetes service, we run `ldapvi
cn=lb-kubernetes`. Add a `dnsCname` entry for your application. Run `make` and
commit your changes to GitHub.

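In shell form, these steps look roughly like this (the commit message is only
an example):

```
cd dns
ldapvi cn=lb-kubernetes                    # add a dnsCname entry for <myapp>
make
git commit -a -m "Add CNAME for <myapp>"   # example commit message
git push
```
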

Once the DNS propagates and Puppet runs on all the Kubernetes masters (wait
about 30 minutes), your service will be accessible, with TLS, at
`<myapp>.ocf.berkeley.edu`. Congratulations!

[templates]: https://templates.ocf.berkeley.edu
[dockerhub]: https://hub.docker.com
[puppet]: https://github.com/ocf/puppet/tree/master/modules/ocf_kubernetes/files/persistent-volume-nfs
