Setting up a Kubernetes Single Node Cluster

The goal of this article is to document the steps needed to set up a Kubernetes system with the following requirements:

  • Runs on a single Netcup vServer (no high availability)
  • Can be integrated with Gitlab AutoDevops
  • IPv4/IPv6 dual-stack capable
  • Secured with Kubernetes network policies
  • Uses the local disk as persistent storage
  • Can later be extended to multiple nodes

Base image

  • Ubuntu 18.04 LTS Docker image from Netcup, small partition, or
  • Ubuntu 20.04 LTS Minimal image from Netcup, small partition


Hardening the system

As the preconfigured root user via SSH:

adduser jo
adduser jo sudo
# only needed on 20.04 (the 18.04 image ships with Docker preinstalled)
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
apt install docker-ce docker-ce-cli containerd.io
# add the new user to the docker group
adduser jo docker

As the new user jo via SSH:

ssh-keygen
echo "ssh-rsa AAAAB...3w== ssh-jowi-privat-aes" >> ~/.ssh/authorized_keys

sudo passwd -l root
sudo ufw allow OpenSSH
sudo ufw allow out http
sudo ufw allow out https
# allow dns queries
sudo ufw allow out 53/udp
# allow ntp systemd time syncing
sudo ufw allow out 123
sudo ufw default deny outgoing
sudo ufw default deny incoming
sudo ufw enable

Move SSH to a different port to make room for the Gitlab SSH port:

sudo ufw allow 2222
sudo sed -i -e 's/#Port 22/Port 2222/' /etc/ssh/sshd_config
sudo systemctl restart ssh
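
From now on, log in via the new port (the server name below is a placeholder):

ssh -p 2222 jo@<server>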

Partitioning

sudo fdisk /dev/sda
# n enter enter enter w
sudo mke2fs -t ext4 /dev/sda4
echo "/dev/sda4 /srv ext4 defaults 0 2" | sudo tee -a /etc/fstab
sudo mount /srv

Enable automatic updates

sudo sed -i 's/\/\/Unattended-Upgrade::Automatic-Reboot/Unattended-Upgrade::Automatic-Reboot/g' /etc/apt/apt.conf.d/50unattended-upgrades
cat <<EOF | sudo tee -a /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";
EOF

Setting up the cluster

Based on the official documentation. In addition, IPv6 forwarding is enabled (the net.ipv6.conf.all.forwarding line below).

Preparations

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.forwarding = 1
EOF
sudo sysctl --system

cat <<EOF | sudo tee /etc/default/kubelet
KUBELET_EXTRA_ARGS=--feature-gates="IPv6DualStack=true"
EOF

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo mv /var/lib/docker /srv/docker
sudo ln -s /srv/docker /var/lib/docker
sudo systemctl enable docker
sudo systemctl restart docker
sudo ufw allow 6443
sudo ufw allow out to 172.18.0.0/24
sudo ufw allow out to 172.18.1.0/24
sudo ufw allow out to fc00::/64
sudo ufw allow out to fc01::/110
sudo ufw allow in from 172.18.0.0/24
sudo ufw allow in from 172.18.1.0/24
sudo ufw allow in from fc00::/64
sudo ufw allow in from fc01::/110

Package installation

At the time of writing, version 1.18.2 is installed.

sudo apt-get update && sudo apt-get upgrade
sudo apt-get install -y apt-transport-https curl mc ipvsadm
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl

sudo snap install helm --classic
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
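
Since unattended upgrades are enabled above, it can make sense to hold the Kubernetes packages so they are only upgraded deliberately (as the official kubeadm install guide suggests); a minimal sketch:

sudo apt-mark hold kubelet kubeadm kubectl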

Bash tab completion, following https://kubernetes.io/de/docs/tasks/tools/install-kubectl/:

echo 'source <(kubectl completion bash)' >>~/.bashrc

Kubeadm Setup

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
featureGates:
  IPv6DualStack: true
kind: ClusterConfiguration
kubernetesVersion: 1.18.1
networking:
  serviceSubnet: "172.18.1.0/24,fc01::/110"
  podSubnet: "172.18.0.0/24,fc00::/64"
  dnsDomain: "cluster.local"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF
sudo kubeadm init --config kubeadm-config.yaml
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl taint nodes --all node-role.kubernetes.io/master-

# edit /etc/kubernetes/manifests/kube-apiserver.yaml
# add ServerSideApply=false to the --feature-gates parameter

Calico Networking Setup

See also https://gitlab.com/gitlab-org/gitlab-runner/-/issues/3705 on raising the MTU to 1500, since connectivity problems can occur otherwise.

curl https://docs.projectcalico.org/manifests/calico.yaml -o calico.yaml
cat <<"EOF" | patch
--- calico.yaml.sav 2020-05-08 10:47:50.400000000 +0200
+++ calico.yaml 2020-05-08 10:49:34.700000000 +0200
@@ -14,7 +14,7 @@ data:
   # Configure the MTU to use for workload interfaces and the
   # tunnels. For IPIP, set to your network MTU - 20; for VXLAN
   # set to your network MTU - 50.
-  veth_mtu: "1440"
+  veth_mtu: "1500"
 
   # The CNI network configuration to install on each node. The special
   # values in this config will be automatically populated.
@@ -30,8 +30,13 @@ data:
           "nodename": "__KUBERNETES_NODE_NAME__",
           "mtu": __CNI_MTU__,
           "ipam": {
+              "assign_ipv4": "true",
+              "assign_ipv6": "true",
               "type": "calico-ipam"
           },
+          "container_settings": {
+              "allow_ip_forwarding": true
+          },
           "policy": {
               "type": "k8s"
           },
@@ -671,6 +676,8 @@ spec:
             # no effect. This should fall within `--cluster-cidr`.
             # - name: CALICO_IPV4POOL_CIDR
             #   value: "192.168.0.0/16"
+            - name: CALICO_IPV4POOL_CIDR
+              value: "172.18.0.0/24"
             # Disable file logging so `kubectl logs` works.
             - name: CALICO_DISABLE_FILE_LOGGING
               value: "true"
@@ -685,6 +692,14 @@ spec:
               value: "info"
             - name: FELIX_HEALTHENABLED
               value: "true"
+            - name: IP6
+              value: "autodetect"
+            - name: CALICO_IPV6POOL_CIDR
+              value: "fc00::/64"
+            - name: FELIX_IPV6SUPPORT
+              value: "true"
+            - name: CALICO_IPV6POOL_NAT_OUTGOING
+              value: "true"
           securityContext:
             privileged: true
           resources:
EOF
kubectl apply -f calico.yaml
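
To check that the CNI came up, something along these lines can be used; the node should report Ready and the calico-node and coredns pods should be Running:

kubectl get nodes -o wide
kubectl -n kube-system get pods -o wide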

Loadbalancer Setup

Based on https://metallb.universe.tf/installation/

kubectl create namespace kube-lb
cat <<EOF > metallb-values.yaml
configInline:
  peers:
  address-pools:
  - name: default4
    protocol: layer2
    addresses:
    - 152.89.xxx.xxx/32
    - 2a03:4000:39:xxxx:xxxx:xxxx:xxxx:xxxx/128
controller:
  image:
    tag: v0.9.3
speaker:
  image:
    tag: v0.9.3
EOF
helm upgrade -i metallb -n kube-lb stable/metallb -f metallb-values.yaml

Hostpath Provider

sudo mkdir -p /srv/k8s-storage
cat <<EOF > hostpath-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostpath-provisioner
  labels:
    k8s-app: hostpath-provisioner
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      k8s-app: hostpath-provisioner
  template:
    metadata:
      labels:
        k8s-app: hostpath-provisioner
    spec:
      serviceAccountName: k8s-hostpath
      containers:
        - name: hostpath-provisioner
          image: cdkbot/hostpath-provisioner-amd64:1.0.0
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: PV_DIR
              value: /srv/k8s-storage
            #- name: PV_RECLAIM_POLICY
            #  value: Retain
          volumeMounts:
            - name: pv-volume
              mountPath: /srv/k8s-storage
      volumes:
        - name: pv-volume
          hostPath:
            path: /srv/k8s-storage
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: k8s-hostpath
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: microk8s.io/hostpath
#reclaimPolicy: Retain //is ignored (see above)
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-hostpath
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8s-hostpath
rules:
- apiGroups: [""]
  resources:
    - persistentvolumeclaims
  verbs:
    - list
    - get
    - watch
    - update
- apiGroups: [""]
  resources:
    - persistentvolumes
  verbs:
    - list
    - get
    - update
    - watch
    - create
    - delete
- apiGroups: [""]
  resources:
    - events
  verbs:
    - create
    - list
    - patch
- apiGroups: ["storage.k8s.io"]
  resources:
    - storageclasses
  verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-hostpath
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: k8s-hostpath
subjects:
- kind: ServiceAccount
  name: k8s-hostpath
  namespace: kube-system
EOF
kubectl apply -f hostpath-provisioner.yaml
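
To verify that dynamic provisioning works, a quick test claim against the new default StorageClass can be created and removed again (the claim name test-claim is just an example):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: k8s-hostpath
EOF
kubectl get pvc test-claim
kubectl delete pvc test-claim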

NGINX Ingress

Based on https://github.com/helm/charts/tree/master/stable/nginx-ingress

sudo ufw allow in 22
sudo ufw allow in http
sudo ufw allow in https
curl https://raw.githubusercontent.com/helm/charts/master/stable/nginx-ingress/values.yaml -o nginx-ingress-values.yaml
patch <<EOF
--- nginx-ingress-values.yaml.sav 2020-04-20 09:08:30.152000000 +0200
+++ nginx-ingress-values.yaml 2020-04-20 09:11:40.840000000 +0200
@@ -264,7 +264,7 @@ controller:
     ## Set external traffic policy to: "Local" to preserve source IP on
     ## providers supporting it
     ## Ref: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
-    externalTrafficPolicy: ""
+    externalTrafficPolicy: "Local"
 
     # Must be either "None" or "ClientIP" if set. Kubernetes will default to "None".
     # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
@@ -567,6 +567,7 @@ imagePullSecrets: []
 ##
 tcp: {}
 #  8080: "default/example-tcp-svc:9000"
+22: "gitlab/gitlab-gitlab-shell:22"
 
 # UDP service key:value pairs
EOF
kubectl create namespace "kube-ingress"
helm upgrade -i -n kube-ingress nginx-ingress stable/nginx-ingress -f nginx-ingress-values.yaml
cat <<EOF > nginx-ingress-service-v6.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller-v6
  namespace: kube-ingress
spec:
  externalTrafficPolicy: Local
  ipFamily: IPv6
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  - name: git
    port: 22
    protocol: TCP
    targetPort: 22
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  sessionAffinity: None
  type: LoadBalancer
EOF
kubectl apply -f nginx-ingress-service-v6.yaml
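
Both LoadBalancer services (the chart's IPv4 service and the IPv6 service above) should now show the external addresses configured in MetalLB:

kubectl -n kube-ingress get svc -o wide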

Certmanager

Automatic certificate issuance via LetsEncrypt

Based on https://cert-manager.io/docs/tutorials/acme/ingress/

Note: a different namespace (e.g. kube-cert-manager) unfortunately leads to problems; the CRDs would have to be modified for that.

curl https://raw.githubusercontent.com/jetstack/cert-manager/master/deploy/charts/cert-manager/values.yaml -o cert-manager-values.yaml
patch <<EOF
--- cert-manager-values.yaml.sav 2020-04-21 15:20:14.080000000 +0200
+++ cert-manager-values.yaml 2020-04-21 15:22:26.104000000 +0200
@@ -115,10 +115,10 @@ podLabels: {}
 
 nodeSelector: {}
 
-ingressShim: {}
-  # defaultIssuerName: ""
-  # defaultIssuerKind: ""
-  # defaultIssuerGroup: ""
+ingressShim:
+  defaultIssuerName: "letsencrypt-prod"
+  defaultIssuerKind: "ClusterIssuer"
+  defaultIssuerGroup: "cert-manager.io"
 
 prometheus:
   enabled: true
EOF
helm repo add jetstack https://charts.jetstack.io
# add crds
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.14.2/cert-manager.crds.yaml
# install chart
kubectl create namespace cert-manager
helm upgrade -i -n cert-manager cert-manager jetstack/cert-manager -f cert-manager-values.yaml
# add issuer
cat <<EOF > cert-manager-issuer.yaml
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: xxx@xxx
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
kubectl apply -f cert-manager-issuer.yaml

Usage: an Ingress must carry the following annotations to be routed by the reverse proxy and to receive a TLS certificate from the defaultIssuer:

metadata:
  name: kuard
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
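
For context, a complete Ingress with these annotations might look roughly like this (host, TLS secret name, and the kuard service/port are placeholders for your own application):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kuard
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - kuard.mydomain.com
    secretName: kuard.mydomain.com-tls
  rules:
  - host: kuard.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kuard
          servicePort: 80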

Kubernetes Dashboard

Based on:

curl https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml -o kube-dashboard.yaml
sed -i -e 's/namespace: kubernetes-dashboard/namespace: kube-dashboard/g' kube-dashboard.yaml
cat <<EOF | patch
--- kube-dashboard.yaml.sav 2020-04-17 20:06:59.396000000 +0200
+++ kube-dashboard.yaml 2020-04-23 07:44:28.536000000 +0200
@@ -15,7 +15,7 @@
 apiVersion: v1
 kind: Namespace
 metadata:
-  name: kubernetes-dashboard
+  name: kube-dashboard
 
 ---
 
@@ -194,7 +194,8 @@ spec:
               protocol: TCP
           args:
             - --auto-generate-certificates
-            - --namespace=kubernetes-dashboard
+            - --namespace=kube-dashboard
+            - --token-ttl=43200
           # Uncomment the following line to manually specify Kubernetes API server Host
           # If not specified, Dashboard will attempt to auto discover the API server and connect
           # to it. Uncomment only if the default does not work.
EOF
cat <<EOF >> kube-dashboard.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-dashboard
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/use-port-in-redirects: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: dashboard
  namespace: kube-dashboard
spec:
  tls:
  - hosts:
    - dashboard.mydomain.com
    secretName: dashboard.mydomain.com-tls
  rules:
  - host: dashboard.mydomain.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 8443
        path: /
EOF
kubectl apply -f kube-dashboard.yaml

# Get token for login
kubectl -n kube-dashboard describe secret \
$(kubectl -n kube-dashboard get secret | grep admin-user | awk '{print $1}')

Gitlab

Installation

Using the standard Helm chart, taking into account the following modifications: https://docs.gitlab.com/charts/advanced/external-nginx/index.html and https://docs.gitlab.com/charts/installation/tls.html

helm repo add gitlab https://charts.gitlab.io/
kubectl create namespace gitlab
kubectl create secret generic gitlab-smtp-creds --from-literal=password='xxx' -n gitlab
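
The chart install itself is not shown above. A minimal sketch of what a values file and the install command could look like for this setup (external nginx-ingress, certificates via the tls-acme annotation, SMTP secret from above); the domain and SMTP settings are placeholders and the keys should be checked against the docs linked above:

cat <<EOF > gitlab-values.yaml
global:
  hosts:
    domain: mydomain.com
  ingress:
    class: nginx
    configureCertmanager: false
    annotations:
      kubernetes.io/tls-acme: true
  smtp:
    enabled: true
    address: smtp.mydomain.com
    port: 587
    user: gitlab@mydomain.com
    password:
      secret: gitlab-smtp-creds
    authentication: login
nginx-ingress:
  enabled: false
certmanager:
  install: false
EOF
helm upgrade -i gitlab gitlab/gitlab -n gitlab -f gitlab-values.yaml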

Gitlab Integration

Add the cluster in Gitlab (do not let Gitlab manage it):

cat <<EOF > gitlab-admin-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: gitlab-admin
  namespace: kube-system
EOF
kubectl apply -f gitlab-admin-service-account.yaml

Provide the following information during setup:

# API URL
kubectl cluster-info | grep 'Kubernetes master' | awk '/http/ {print $NF}'
# CA Certificate
kubectl get secret $(kubectl get secrets | grep token | cut -d " " -f 1) \
-o jsonpath="{['data']['ca\.crt']}" | base64 --decode
# Token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret |\
grep gitlab-admin | awk '{print $1}')

Adapting existing Docker projects for Kubernetes

cd repo
cat <<EOF > .gitlab/auto-deploy-values.yaml
deploymentApiVersion: apps/v1
EOF
  • Configure Auto-DevOps with a dedicated namespace and domain:
# derived from https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml

include:
- template: Auto-DevOps.gitlab-ci.yml

variables:
  CODE_QUALITY_DISABLED: "1"
  POSTGRES_ENABLED: "false"
  TEST_DISABLED: "1"
  PERFORMANCE_DISABLED: "1"
  ADDITIONAL_HOSTS: "app2.joachim-wilke.de"

production:
  environment:
    name: production
    url: https://app.joachim-wilke.de
    kubernetes:
      namespace: jowi-app
  • If needed, derive your own chart from the official chart (using git subtree):
git remote add -f autodeploy https://gitlab.com/gitlab-org/charts/auto-deploy-app.git
git subtree add --prefix chart autodeploy master --squash

# On each update:
git fetch autodeploy master
git subtree pull --prefix chart autodeploy master --squash
# allow no egress traffic, ingress only from ingress proxy
networkPolicy:
  enabled: true
  spec:
    podSelector: {}
    policyTypes:
    - Ingress
    - Egress
    ingress:
    - from:
      - namespaceSelector: {}
      - podSelector:
          matchLabels:
            app: "nginx-ingress"

Cheat-Sheet

Firewall

  • Get Status: sudo ufw status verbose
  • List application profiles: sudo ufw app list

Kubernetes

  • Reset Kubeadm: kubeadm reset && iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X && ipvsadm -C
  • Get Node Status: kubectl get nodes -o wide
  • Get Status: watch kubectl get all -A

Open items

  • Use a Helm chart for the dashboard as well
  • Does the iptables proxy mode offer advantages over ipvs? Currently, iptables is not yet supported by Calico in combination with dual-stack.
  • Limit the number of replicas of the CoreDNS pod to 1 (the default is 2); see the one-liner after this list
  • Is it sensible (and technically feasible at Netcup) to use globally routable IPv6 addresses in the pods? See also https://forum.netcup.de/administration-eines-server-vserver/vserver-server-kvm-server/9577-ipv6-addressvergabe-an-docker-container/
  • What is behind the syslog messages: systemd[1551]: Failed to canonicalize path permission denied?
  • Customize the NGINX Ingress default backend (custom error page)
  • Deploy Certmanager in the kube-cert-manager namespace
  • Pass all security checks at https://en.internet.nl/
  • Pass all kube-bench checks: kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/master/job-master.yaml
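
For the CoreDNS item, scaling down is a one-liner (note that kubeadm upgrades may scale the deployment back up):

kubectl -n kube-system scale deployment coredns --replicas=1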