2. Day 2
2.6. Installing Minikube
brew install minikube
brew install kubectl

minikube start
minikube addons list
minikube addons enable dashboard
minikube addons enable metrics-server
kubectl get nodes
minikube dashboard
apiVersion: v1
kind: Namespace
metadata:
  name: demo-namespace
cd k8s
kubectl apply -f namespace.yaml
namespace/demo-namespace created
source <(kubectl completion zsh)  # set up autocomplete in zsh into the current shell
echo '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc  # add autocomplete permanently to your zsh shell
kubectl config set-context --current --namespace=demo-namespace
kubectl get namespaces
kubectl get pods
- Do not build a new Docker image for every little thing
- Always use a standard image (e.g. for postgres) and configure it
2.6.1. ConfigMaps
- Key-value maps for configuring these images
k8s/postgres.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-setup
data:
  setup.sql: |
    DROP database if exists demo;
    DROP user if exists demo;
    CREATE USER demo WITH
      LOGIN
      NOSUPERUSER
      NOCREATEDB
      NOCREATEROLE
      INHERIT
      NOREPLICATION
      CONNECTION LIMIT -1
      PASSWORD 'demo';
    CREATE DATABASE demo
      WITH
      OWNER = demo
      ENCODING = 'UTF8'
      CONNECTION LIMIT = -1;
  allow-all.sh: |
    echo "allow all hosts..."
    echo "host all all 0.0.0.0/0 md5" >> /var/lib/postgresql/data/pg_hba.conf
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql-data
  annotations:
    nfs.io/storage-path: "postgresql-data"
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: Secret
metadata:
  name: postgres-admin
type: kubernetes.io/basic-auth
stringData:
  username: demo
  password: demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    component: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      containers:
        - name: postgres
          image: postgres:14
          ports:
            - containerPort: 5432
              protocol: TCP
              name: postgres
          readinessProbe:
            tcpSocket:
              port: 5432
            initialDelaySeconds: 20
            periodSeconds: 30
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
            - name: setup-scripts
              mountPath: /docker-entrypoint-initdb.d/setup.sql
              subPath: setup.sql
              readOnly: true
            - name: allowall
              mountPath: /docker-entrypoint-initdb.d/allow-all.sh
              subPath: allow-all.sh
              readOnly: true
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-admin
                  key: password
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: postgresql-data
        - name: setup-scripts
          configMap:
            name: postgres-setup
            items:
              - key: setup.sql
                path: setup.sql
        - name: allowall
          configMap:
            name: postgres-setup
            items:
              - key: allow-all.sh
                path: allow-all.sh
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP
  selector:
    component: postgres
kubectl apply -f postgres.yaml
To access the DB, use port forwarding.
- Find out the pod's name:
kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
postgres-5468d5c66c-78lcv   1/1     Running   0          12m
kubectl port-forward postgres-5468d5c66c-78lcv 5432:5432
- The terminal stays busy; the port-forwarding process keeps running in the foreground
- Open a new terminal and check:
netstat -ant | grep 5432
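Beyond netstat, the forwarded connection can be verified end to end with a psql client, assuming psql is installed locally; user, password, and database demo are taken from the ConfigMap and Secret above:
PGPASSWORD=demo psql -h localhost -p 5432 -U demo -d demo -c '\conninfo'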


# datasource configuration
quarkus.datasource.db-kind = postgresql
quarkus.datasource.username = demo
quarkus.datasource.password = demo
%dev.quarkus.datasource.jdbc.url = jdbc:postgresql://localhost:5432/demo
%prod.quarkus.datasource.jdbc.url = jdbc:postgresql://postgres:5432/demo
# drop and create the database at startup (use `update` to only update the schema)
quarkus.hibernate-orm.database.generation=drop-and-create
quarkus.hibernate-orm.log.sql=true
%dev.quarkus.hibernate-orm.sql-load-script=db/import.sql
- For development we always use "latest" and "imagePullPolicy: Always"
- Only in production is a version assigned and the imagePullPolicy removed (see the sketch below)
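A sketch of what assigning a version could look like, reusing the Quarkus build command from section 2.9; the tag 1.0.0 is an assumption:
./mvnw clean package -DskipTests -Dquarkus.container-image.push=true -Dquarkus.container-image.tag=1.0.0
The Deployment would then reference ghcr.io/htl-leonding/vehicle:1.0.0, and the imagePullPolicy: Always line can be dropped.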
2.7. Pushing the Docker image to ghcr.io
2.7.1. Creating a secret
- github.com - Settings - Developer Settings - Personal Access Tokens - Tokens (classic)
- Copy the generated token into an editor
2.7.2. Logging in to ghcr.io with the Docker CLI
docker login ghcr.io
Username: htl-leonding
Password: <paste token>
Login Succeeded
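Alternatively, the token can be piped in non-interactively; CR_PAT is an assumed environment variable holding the personal access token:
echo $CR_PAT | docker login ghcr.io -u htl-leonding --password-stdin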
2.7.4. Pushing the Docker image to ghcr.io
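If the image only exists locally under another name, it presumably has to be tagged for the registry first; the local name busybox is an assumption:
docker tag busybox ghcr.io/htl-leonding/my-busybox:latest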
docker push ghcr.io/htl-leonding/my-busybox:latest
The push refers to repository [ghcr.io/htl-leonding/my-busybox]
5f5f687a05d8: Pushed
latest: digest: sha256:afebab8e3d8cbef70c0632b5a7aa5c003f253d4f4f1ca47fe6b094ef7fe0cd07 size: 528
2.7.5. Checking whether the image is in ghcr.io
https://github.com/htl-leonding?tab=packages
Set the package to public
2.7.6. Deleting the Docker image locally
docker image rm ghcr.io/htl-leonding/my-busybox
Untagged: ghcr.io/htl-leonding/my-busybox:latest
Untagged: ghcr.io/htl-leonding/my-busybox@sha256:afebab8e3d8cbef70c0632b5a7aa5c003f253d4f4f1ca47fe6b094ef7fe0cd07
2.7.7. Pulling the Docker image from ghcr.io
docker pull ghcr.io/htl-leonding/my-busybox:latest
latest: Pulling from htl-leonding/my-busybox
814c8b675ca3: Pull complete
Digest: sha256:afebab8e3d8cbef70c0632b5a7aa5c003f253d4f4f1ca47fe6b094ef7fe0cd07
Status: Downloaded newer image for ghcr.io/htl-leonding/my-busybox:latest
ghcr.io/htl-leonding/my-busybox:latest
2.8. Deploying an existing image to Minikube
- We need a YAML file: postgres.yaml (identical to the k8s/postgres.yaml listed in section 2.6.1)
cd k8s
kubectl apply -f postgres.yaml
- Check in the Minikube dashboard whether the deployment was successful (a kubectl alternative is sketched below)
minikube dashboard
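Instead of the dashboard, the same check can be done on the command line; the namespace was already set as the current context in section 2.6:
kubectl get pods,svc,pvc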
2.9. Deploying a Quarkus app to Minikube
- Compile and push to the Docker registry (ghcr.io)
./mvnw clean package -DskipTests -Dquarkus.container-image.push=true
[INFO] [io.quarkus.container.image.jib.deployment.JibProcessor] Pushed container image ghcr.io/htl-leonding/vehicle (sha256:90f8f348139fb81830d0eef70b8aad9cf8da545f14ed4dd6b191ce0511713116)
[INFO] [io.quarkus.deployment.QuarkusAugmentor] Quarkus augmentation completed in 11371ms
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  13.944 s
[INFO] Finished at: 2023-03-17T08:25:35+01:00
[INFO] ------------------------------------------------------------------------
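Pushing during the build assumes the Jib container-image extension is part of the project (the log above shows the JibProcessor); if it is missing, it can presumably be added the same way as the health extension in section 2.10:
./mvnw quarkus:add-extension -Dextensions='container-image-jib'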
- Set the package to public in its package settings
Create appsrv.yaml:
# Quarkus Application Server
apiVersion: apps/v1
kind: Deployment
metadata:
  name: appsrv
spec:
  replicas: 1
  selector:
    matchLabels:
      app: appsrv
  template:
    metadata:
      labels:
        app: appsrv
    spec:
      containers:
        - name: appsrv
          image: ghcr.io/htl-leonding/vehicle:latest
          # remove this when stable. Currently we do not take care of version numbers
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          #startupProbe:
          #  httpGet:
          #    path: /api/q/health
          #    port: 8080
          #  timeoutSeconds: 5
          #  initialDelaySeconds: 15
          #readinessProbe:
          #  tcpSocket:
          #    port: 8080
          #  initialDelaySeconds: 5
          #  periodSeconds: 10
          #livenessProbe:
          #  httpGet:
          #    path: /api/q/health
          #    port: 8080
          #  timeoutSeconds: 5
          #  initialDelaySeconds: 60
          #  periodSeconds: 120
---
apiVersion: v1
kind: Service
metadata:
  name: appsrv
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: appsrv
kubectl apply -f appsrv.yaml
2.9.1. Port Forwarding
kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
appsrv-8545fd6488-qckbw     1/1     Running   0          19m
postgres-5468d5c66c-cxq7n   1/1     Running   0          34m
kubectl port-forward postgres-5468d5c66c-cxq7n 5432:5432
- The terminal stays busy; the port-forwarding process keeps running
- Alternative: use the shell script port-forward.sh (a sketch follows below)
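The actual port-forward.sh is not listed in these notes; a minimal sketch, assuming it simply forwards the postgres and appsrv deployments defined above, could look like this:
#!/usr/bin/env bash
# forward the database and the application server in the background
kubectl port-forward deployment/postgres 5432:5432 &
kubectl port-forward deployment/appsrv 8080:8080 &
# keep the script alive until the forwards are stopped (Ctrl+C)
wait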
2.10. Adding health checks to the Quarkus app
./mvnw quarkus:add-extension -Dextensions='smallrye-health'
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-health</artifactId>
</dependency>
Test locally first, then deploy to the cloud, i.e. we use the DB from Minikube (don't forget the port-forward) and start the Quarkus app locally.
./mvnw clean quarkus:dev
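Once the app is running in dev mode, the health endpoint can be checked with curl; the /api prefix is an assumption taken from the (commented-out) probe paths in appsrv.yaml:
curl http://localhost:8080/api/q/health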
2.11. Redeploying
- Recompile first
- Delete the deployment (a smoother transition with a rollout would be better; see the sketch below)
kubectl delete -f appsrv.yaml
kubectl apply -f appsrv.yaml
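A sketch of the gentler rollout variant mentioned above (it works here because imagePullPolicy: Always pulls the latest image again when the pods are recreated):
kubectl rollout restart deployment appsrv
kubectl rollout status deployment appsrv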
2.13. Installing a reverse proxy
- Here, too, we use an existing nginx image
- This could become a problem when deploying to a commercial cloud, since a ReadWriteMany volume claim is used
- That can get expensive
- kubectl apply -f nginx.yaml
- Port-forward 4200:80 to the nginx pod (see the command sketch below)
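A sketch of that port-forward, assuming the Deployment behind the nginx pod is named nginx (matching the pod name shown below):
kubectl port-forward deployment/nginx 4200:80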
2.13.1. Accessing Quarkus via nginx
http://localhost:4200/api/q/health
http://localhost:4200/api/vehicles
2.13.2. Opening a shell on the nginx pod
kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
appsrv-5f65b54df-w6rtx      1/1     Running   0          27m
nginx-f88cd74d5-bgcqx       1/1     Running   0          6m36s
postgres-5468d5c66c-cxq7n   1/1     Running   0          101m
kubectl exec -it nginx-f88cd74d5-bgcqx -- bash
- For security reasons, the volume mount was made read-only
- Therefore we apply a busybox job
kubectl apply -f busybox-job.yaml
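The busybox-job.yaml itself is not listed in these notes. To check whether the job ran through, something like the following can be used; the job name busybox-job is an assumption derived from the file name:
kubectl get jobs
kubectl logs job/busybox-job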


2.14. Creating a Linux executable without GraalVM installed
./mvnw install -Dnative -DskipTests -Dquarkus.native.container-build=true
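The resulting Linux binary is typically packaged into an image with the Dockerfile that Quarkus generates into the project; a sketch, assuming the standard src/main/docker/Dockerfile.native exists:
docker build -f src/main/docker/Dockerfile.native -t ghcr.io/htl-leonding/vehicle:latest .
docker push ghcr.io/htl-leonding/vehicle:latest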
2.15. Scaling the number of pods
- In the YAML files of appsrv and nginx, set the value of replicas to e.g. 3
- Check in the dashboard under "Replica Sets" (a kubectl alternative is sketched below)
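The same scaling can also be done ad hoc without editing the YAML files (the next kubectl apply would reset it to the value in the manifests); the deployment names appsrv and nginx are taken from the manifests above:
kubectl scale deployment appsrv --replicas=3
kubectl scale deployment nginx --replicas=3
kubectl get rs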