---
marp: true
theme: gitops
---
```yaml
apiVersion: v1
kind: Developer
metadata:
  name: Kristoffer-Andre Kalliainen
  labels:
    drinks: coffee
spec:
  linkedin: https://www.linkedin.com/in/kalliainen/
  github: https://github.com/181192
  companyRef:
    apiVersion: v1
    kind: Company
    name: Stacc AS
```
A way of implementing Continuous Deployment for cloud native applications. It focuses on a developer-centric experience when operating infrastructure.
- Deploy faster, more often
- Easy and fast error recovery (`git revert`)
- Easier credential management
- Self-documenting deployments
Environment Configuration as Git repository

There are at least two repositories: the application repository and the environment configuration repository. The application repository contains the source code of the application and the deployment manifests to deploy the application. The environment configuration repository contains all deployment manifests of the currently desired infrastructure of a deployment environment. It describes what applications and infrastructural services (message broker, service mesh, monitoring tool, …) should run with what configuration and version in the deployment environment.
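As an illustration, such an environment configuration repository might be laid out with one directory per environment (all directory and file names here are hypothetical):

```
environments/
├── dev/
│   ├── nginx-ingress.yaml   # infrastructural services
│   └── podinfo.yaml         # application releases
└── prod/
    ├── nginx-ingress.yaml
    └── podinfo.yaml
```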
GitOps doesn’t provide a solution for propagating changes from one stage to the next. We recommend using only a single environment and avoiding stage propagation altogether. But if you need multiple stages (e.g., DEV, QA, PROD, etc.) with an environment for each, you need to handle the propagation outside of the GitOps scope, for example with a CI/CD pipeline.
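A minimal sketch of such an out-of-band promotion step, assuming a per-environment directory layout (all names hypothetical); a CI/CD pipeline would run something like this and push the result:

```shell
set -e
# Stand-in for a checked-out environment configuration repository.
repo=$(mktemp -d)
mkdir -p "$repo/environments/dev" "$repo/environments/qa"
echo "image: app:1.2.3" > "$repo/environments/dev/release.yaml"

# Promote DEV to QA by copying the manifest into the QA directory.
cp "$repo/environments/dev/release.yaml" "$repo/environments/qa/release.yaml"

# In a real pipeline this would be followed by:
#   git add environments/qa && git commit -m "promote dev -> qa" && git push
```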
```yaml
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: nginx-ingress
  namespace: ingress-nginx
  annotations:
    fluxcd.io/ignore: "false"
spec:
  releaseName: nginx-ingress
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: nginx-ingress
    version: 1.25.0
  values:
    controller:
      service:
        type: LoadBalancer
      metrics:
        enabled: true
```
```shell
git add -A && \
  git commit -m "install ingress" && \
  git push origin master && \
  fluxctl sync
```
```yaml
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  annotations:
    fluxcd.io/automated: "true"
    fluxcd.io/tag.chart-image: semver:~3.0
```
For advanced deployment patterns like Canary releases, A/B testing and Blue/Green deployments, Flux can be used together with Flagger and a service mesh of your choice.
```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: prod
  annotations:
    fluxcd.io/ignore: "false"
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  canaryAnalysis:
    interval: 10s
    stepWeight: 5
    threshold: 5
    metrics:
    - name: request-success-rate
      threshold: 99
      interval: 1m
    - name: request-duration
      threshold: 500
      interval: 1m
    webhooks:
    - name: load-test
      url: http://load-tester.prod/
      metadata:
        cmd: "hey -z 2m -q 10 -c 2 http://podinfo:9898/"
```
The Cluster API is a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes to manage the lifecycle of a Kubernetes cluster.
- To manage the lifecycle (create, scale, upgrade, destroy) of Kubernetes-conformant clusters using a declarative API.
- To work in different environments, both on-premises and in the cloud.
- To define common operations, provide a default implementation, and provide the ability to swap out implementations for alternative ones.
- To reuse and integrate existing ecosystem components rather than duplicating their functionality (e.g. node-problem-detector, cluster autoscaler, SIG-Multi-cluster).
- To provide a transition path for Kubernetes lifecycle products to adopt Cluster API incrementally. Specifically, existing cluster lifecycle management tools should be able to adopt Cluster API in a staged manner, over the course of multiple releases, or even by adopting only a subset of Cluster API.
```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: ${CLUSTER_NAME}
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AzureCluster
    name: ${CLUSTER_NAME}
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AzureCluster
metadata:
  name: ${CLUSTER_NAME}
spec:
  resourceGroup: "${AZURE_RESOURCE_GROUP}"
  location: "${AZURE_LOCATION}"
  networkSpec:
    vnet:
      name: "${VNET_NAME}"
```
```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Machine
metadata:
  name: ${CLUSTER_NAME}-controlplane-0
  labels:
    cluster.x-k8s.io/control-plane: "true"
spec:
  version: ${KUBERNETES_VERSION}
  clusterName: ${CLUSTER_NAME}
  bootstrap:
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
      kind: KubeadmConfig
      name: ${CLUSTER_NAME}-controlplane-0
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AzureMachine
    name: ${CLUSTER_NAME}-controlplane-0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AzureMachine
metadata:
  name: ${CLUSTER_NAME}-controlplane-0
spec:
  location: ${AZURE_LOCATION}
  vmSize: ${CONTROL_PLANE_MACHINE_TYPE}
  osDisk:
    osType: "Linux"
    diskSizeGB: 30
    managedDisk:
      storageAccountType: "Premium_LRS"
  sshPublicKey: ${SSH_PUBLIC_KEY}
```
```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfig
metadata:
  name: ${CLUSTER_NAME}-controlplane-0
spec:
  initConfiguration:
    nodeRegistration:
      name: '{{ ds.meta_data["local_hostname"] }}'
      kubeletExtraArgs:
        cloud-provider: azure
        cloud-config: /etc/kubernetes/azure.json
  clusterConfiguration:
    apiServer:
      timeoutForControlPlane: 20m
      extraArgs:
        cloud-provider: azure
        cloud-config: /etc/kubernetes/azure.json
      extraVolumes:
      - hostPath: /etc/kubernetes/azure.json
        mountPath: /etc/kubernetes/azure.json
        name: cloud-config
        readOnly: true
    controllerManager:
      extraArgs:
        cloud-provider: azure
        cloud-config: /etc/kubernetes/azure.json
        allocate-node-cidrs: "false"
      extraVolumes:
      - hostPath: /etc/kubernetes/azure.json
        mountPath: /etc/kubernetes/azure.json
        name: cloud-config
        readOnly: true
  files:
  - path: /etc/kubernetes/azure.json
    owner: "root:root"
    permissions: "0644"
    content: |
      {
        "cloud": "AzurePublicCloud",
        "tenantId": "${AZURE_TENANT_ID}",
        "subscriptionId": "${AZURE_SUBSCRIPTION_ID}",
        "aadClientId": "${AZURE_CLIENT_ID}",
        "aadClientSecret": "${AZURE_CLIENT_SECRET}",
        "resourceGroup": "${AZURE_RESOURCE_GROUP}",
        "securityGroupName": "${CLUSTER_NAME}-controlplane-nsg",
        "location": "${AZURE_LOCATION}",
        "vmType": "standard",
        "vnetName": "${CLUSTER_NAME}",
        "vnetResourceGroup": "${CLUSTER_NAME}",
        "subnetName": "${CLUSTER_NAME}-controlplane-subnet",
        "routeTableName": "${CLUSTER_NAME}-node-routetable",
        "userAssignedID": "${CLUSTER_NAME}",
        "loadBalancerSku": "standard",
        "maximumLoadBalancerRuleCount": 250,
        "useManagedIdentityExtension": false,
        "useInstanceMetadata": true
      }
```
```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: ${CLUSTER_NAME}-md-0
  labels:
    nodepool: nodepool-0
spec:
  replicas: 2
  clusterName: ${CLUSTER_NAME}
  selector:
    matchLabels:
      nodepool: nodepool-0
  template:
    metadata:
      labels:
        nodepool: nodepool-0
    spec:
      version: ${KUBERNETES_VERSION}
      clusterName: ${CLUSTER_NAME}
      bootstrap:
        configRef:
          name: ${CLUSTER_NAME}-md-0
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: KubeadmConfigTemplate
      infrastructureRef:
        name: ${CLUSTER_NAME}-md-0
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: AzureMachineTemplate
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AzureMachineTemplate
metadata:
  name: ${CLUSTER_NAME}-md-0
spec:
  template:
    spec:
      location: ${AZURE_LOCATION}
      vmSize: ${NODE_MACHINE_TYPE}
      osDisk:
        osType: "Linux"
        diskSizeGB: 30
        managedDisk:
          storageAccountType: "Premium_LRS"
      sshPublicKey: ${SSH_PUBLIC_KEY}
```
```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
metadata:
  name: ${CLUSTER_NAME}-md-0
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          name: '{{ ds.meta_data["local_hostname"] }}'
          kubeletExtraArgs:
            cloud-provider: azure
            cloud-config: /etc/kubernetes/azure.json
      files:
      - path: /etc/kubernetes/azure.json
        owner: "root:root"
        permissions: "0644"
        content: |
          {
            "cloud": "AzurePublicCloud",
            "tenantId": "${AZURE_TENANT_ID}",
            "subscriptionId": "${AZURE_SUBSCRIPTION_ID}",
            "aadClientId": "${AZURE_CLIENT_ID}",
            "aadClientSecret": "${AZURE_CLIENT_SECRET}",
            "resourceGroup": "${CLUSTER_NAME}",
            "securityGroupName": "${CLUSTER_NAME}-node-nsg",
            "location": "${AZURE_LOCATION}",
            "vmType": "standard",
            "vnetName": "${CLUSTER_NAME}",
            "vnetResourceGroup": "${CLUSTER_NAME}",
            "subnetName": "${CLUSTER_NAME}-node-subnet",
            "routeTableName": "${CLUSTER_NAME}-node-routetable",
            "loadBalancerSku": "standard",
            "maximumLoadBalancerRuleCount": 250,
            "useManagedIdentityExtension": false,
            "useInstanceMetadata": true
          }
```
Service Catalog is an extension API that enables applications running in Kubernetes clusters to easily use external managed software offerings, such as a datastore service offered by a cloud provider.
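As an illustrative sketch of what consuming such an offering looks like, a `ServiceInstance` provisions the external service and a `ServiceBinding` surfaces its credentials as a Secret; the class, plan, and resource names below are hypothetical:

```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-database
  namespace: prod
spec:
  clusterServiceClassExternalName: azure-postgresql  # hypothetical class from the broker's catalog
  clusterServicePlanExternalName: basic              # hypothetical plan
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: my-database-binding
  namespace: prod
spec:
  instanceRef:
    name: my-database
  secretName: my-database-credentials  # credentials are written into this Secret
```

Pods in the `prod` namespace can then mount or reference `my-database-credentials` without ever handling the provider's credentials directly.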