Talking with Kubernetes
Setup communication
Binaries
kubectl is the command used to interact with Kubernetes; on top of it, helm lets you apply templated recipes (charts) to Kubernetes. While only kubectl is needed to check your quota and get the logs of a container, helm is needed to deploy your app.
You always need to either be within the Institut Pasteur network, or have the VPN enabled.
Links:
Install kubectl : https://kubernetes.io/fr/docs/tasks/tools/install-kubectl/
Install helm : https://helm.sh/docs/intro/install/
Configuring “contexts” to access clusters
Dev cluster
Connect to https://kubeconfig.dev.pasteur.cloud/, and apply the cluster config context. Although you can now connect to the Kubernetes cluster k8sdev-01, you would still have to specify the namespace to work in at each command. To avoid this, create a context for your specific dev namespace.
NAMESPACE="my-project-dev"
SHORT_LOGIN="jdoe"
kubectl config set-context ${NAMESPACE} --cluster=k8sdev-01 --user=${SHORT_LOGIN}@k8sdev-01 --namespace ${NAMESPACE}
kubectl config use-context ${NAMESPACE}
kubectl get quota # test that connection works
Prod cluster (if you have a namespace in prod)
Same procedure but with https://console-k8sprod-02.pasteur.cloud/
NAMESPACE="my-project-prod"
SHORT_LOGIN="jdoe"
kubectl config set-context ${NAMESPACE} --cluster=k8sprod-02 --user=${SHORT_LOGIN}@k8sprod-02 --namespace ${NAMESPACE}
kubectl config use-context ${NAMESPACE}
kubectl get quota # test that connection works
Accessing quota usage
Here is the command asking for quota usage on the namespace rshiny-dev:
kubectl config use-context rshiny-dev
kubectl get quota # test that connection works
The result is:
NAME AGE REQUEST LIMIT
project-quotas 91d isilon.storageclass.storage.k8s.io/requests.storage: 0/50Gi, pure-block.storageclass.storage.k8s.io/requests.storage: 0/0, requests.cpu: 500m/2, requests.ephemeral-storage: 512Mi/5Gi, requests.memory: 512Mi/5Gi limits.cpu: 1/2, limits.ephemeral-storage: 512Mi/5Gi, limits.memory: 1Gi/5Gi
Which should be read as:
| Kind | Requests | Limits | Quota |
|---|---|---|---|
| CPU | 0.5 | 1 | 2 |
| RAM | 512Mi | 1Gi | 5Gi |
We request half a CPU, but indicate that our app can use up to 1 CPU at usage peaks. The quota also shows that we could go up to 2 CPUs in total.
If the Nodowntime option is enabled in your helm settings, all your requests and limits must be at most half your quota, so that two copies of the app can run side by side during a rolling update. See Nodowntime for more details.
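Kubernetes CPU quantities such as 500m are millicores, i.e. thousandths of a core. Here is a small shell sketch converting such a quantity to cores (cpu_to_cores is a hypothetical helper name, not part of kubectl):

```shell
# Hypothetical helper: convert a Kubernetes CPU quantity to cores.
# "500m" means 500 millicores, i.e. 0.5 core; plain numbers are cores already.
cpu_to_cores() {
  local quantity="$1"
  if [ "${quantity%m}" != "${quantity}" ]; then
    # Trailing "m": strip it and divide by 1000
    awk -v v="${quantity%m}" 'BEGIN { printf "%g\n", v / 1000 }'
  else
    printf '%s\n' "${quantity}"
  fi
}

cpu_to_cores 500m   # 0.5
cpu_to_cores 2      # 2
```

With this reading, the quota output above (requests.cpu: 500m/2) means half a core requested out of an allowance of two cores.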
Reading the logs
Reading the logs can be done in one click with the CI task documented here, but that way you cannot follow the logs. We describe here how to do it from the CLI with kubectl.
The following uses the appropriate namespace, and then gets the name of the pod, which is roughly the name of the Docker container running your app.
kubectl config use-context my-project-dev
kubectl get po
NAME READY STATUS RESTARTS AGE
base-python-chart-shiny-server-6789964785-679w8 1/1 Running 0 24m
This is the output; you can see that the pod is base-python-chart-shiny-server-6789964785-679w8. While the beginning of the name is constant, the last part is random and changes at every startup.
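The name's structure (Deployment name, then ReplicaSet hash, then random pod suffix) can be taken apart with plain shell parameter expansion; a quick sketch, using the pod name from the output above:

```shell
POD="base-python-chart-shiny-server-6789964785-679w8"

# Drop the last dash-separated field (the random pod suffix)...
REPLICASET="${POD%-*}"           # base-python-chart-shiny-server-6789964785
# ...then the ReplicaSet hash, leaving the stable Deployment name
DEPLOYMENT="${REPLICASET%-*}"    # base-python-chart-shiny-server

echo "${DEPLOYMENT}"
```

Only the Deployment prefix is stable across restarts, which is why the logs command below has to look the full pod name up each time.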
From this pod, you can then follow the logs, and thus see errors printed while testing your app in a browser.
kubectl logs base-python-chart-shiny-server-6789964785-679w8 -f
Note
All this can be done in one set of shell commands:
kubectl config use-context my-project-dev
kubectl logs $(kubectl get po --output=jsonpath='{.items[0].metadata.name}') -f
The do_helm script
The script do_helm.sh is present in both example projects; its purpose is to mimic what happens in the CI, and deploy the application to Kubernetes.
The script
Note
Before anything else, install the helm dependencies, the package we provide to deploy a Shiny app in Kubernetes.
helm dependency update ./chart/
 1  #!/usr/bin/env bash
 2
 3  touch tokens.sh
 4  source ./tokens.sh # put `export SECRET_KEY="..."` in this file
 5
 6  NAMESPACE="rshiny-dev"
 7  CI_PROJECT_NAMESPACE="hub"
 8  CI_PROJECT_NAME="shiny-k8s-example"
 9  CI_REGISTRY="registry-gitlab.pasteur.fr"
10  CI_REGISTRY_IMAGE="${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}"
11  CI_COMMIT_SHA=$(git log --format="%H" -n 1)
12  # CI_COMMIT_SHA="63a65791c93197d280f302a67983756f91b9a1db"
13  CI_COMMIT_REF_SLUG=$(git branch --show-current)
14  INGRESS_CLASS="internal"
15  PUBLIC_URL="${CI_PROJECT_NAME}-${CI_COMMIT_REF_SLUG}.dev.pasteur.cloud"
16  IMAGE="${CI_REGISTRY_IMAGE}/${CI_COMMIT_REF_SLUG}:${CI_COMMIT_SHA::8}"
17  CHART_LOCATION="chart"
18
19  export ACTION="upgrade --install"
20  export ACTION="template --debug"
21
22  helm ${ACTION} --namespace=${NAMESPACE} \
23      --render-subchart-notes \
24      --set shiny-server.ingress.className=${INGRESS_CLASS} \
25      --set shiny-server.ingress.hostname=${PUBLIC_URL} \
26      --set shiny-server.imageFullNameAndTag=${IMAGE} \
27      --set shiny-server.registry.username=${DEPLOY_USER} \
28      --set shiny-server.registry.password=${DEPLOY_TOKEN} \
29      --set shiny-server.registry.host=${CI_REGISTRY} \
30      ${CI_COMMIT_REF_SLUG}-${CHART_LOCATION} ./${CHART_LOCATION}/
If you use the script as is, it prints to the output the YAML template that helm would apply to deploy your application to Kubernetes. You should first adapt the variables to your settings (lines 6-8).
Commenting out line 20 makes the action upgrade --install, so running the script will actually deploy the application.
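The two export ACTION lines (lines 19-20) rely on plain shell assignment semantics: the second assignment overwrites the first, so whichever line is left uncommented last wins. A minimal illustration:

```shell
export ACTION="upgrade --install"
export ACTION="template --debug"   # overrides the line above

echo "${ACTION}"   # template --debug
```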
Here is an example output:
./do_helm.sh
Release "base-r-chart" does not exist. Installing it now.
NAME: base-r-chart
LAST DEPLOYED: Thu Jul 13 16:19:39 2023
NAMESPACE: rshiny-dev
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The application will be up shortly at:
https://shiny-k8s-example-base-r.dev.pasteur.cloud
Settings:
- Image used: registry-gitlab.pasteur.fr/hub/shiny-k8s-example/base-r:83216202
- Ressources: 250m and 256Mi, limits are 500m and 512Mi
- Autoscaling is enabled
- Nodowntime is enabled (keep in mind that quota must thus be twice your resources.limits)
Deploying a specific version (i.e. a specific commit)
If you want to deploy a specific version of your app, i.e. a specific commit, uncomment line 12 and update it with the appropriate COMMIT_SHA.
This image must be in the registry, otherwise the deployment will never succeed. To make sure the image is in the registry, you can check it with this command:
docker manifest inspect ${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}/${CI_COMMIT_REF_SLUG}:${CI_COMMIT_SHA::8}
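The tag in that reference is the first 8 characters of the commit SHA, produced by bash's ${CI_COMMIT_SHA::8} substring expansion (bash-specific, not POSIX sh). A sketch of how the full reference is assembled, reusing the example values from the script; the branch name main is an assumed example:

```shell
# Example values taken from the do_helm.sh script above;
# the branch name "main" is an assumed example.
CI_REGISTRY="registry-gitlab.pasteur.fr"
CI_PROJECT_NAMESPACE="hub"
CI_PROJECT_NAME="shiny-k8s-example"
CI_COMMIT_REF_SLUG="main"
CI_COMMIT_SHA="63a65791c93197d280f302a67983756f91b9a1db"

# ${CI_COMMIT_SHA::8} keeps only the first 8 characters of the SHA (bash)
IMAGE="${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}/${CI_COMMIT_REF_SLUG}:${CI_COMMIT_SHA::8}"

echo "${IMAGE}"
# registry-gitlab.pasteur.fr/hub/shiny-k8s-example/main:63a65791
```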
If it is absent, you have to re-run the whole pipeline associated with this commit.
With a private project/registry
If your registry is private, you have to provide the credentials you
previously defined, in the file tokens.sh:
export DEPLOY_USER=uuu
export DEPLOY_TOKEN=ttt