Commit 926fefc9 authored by Tim Kreuzer
# Kubernetes as a Service administration
This repo is used to create clusters on [JSC-Cloud](https://cloud.jsc.fz-juelich.de) and deploy software on them.
## Create Cluster
Requirements:
- OpenStack CLI (`pip install python-openstackclient`)
- application credentials for `jsc-cloud-team` project
- application credentials for `<user>` project
Create the OpenStack environment in the user's project:
- `git clone --single-branch --branch main git@gitlab.jsc.fz-juelich.de:kaas/fleet-deployments.git fleet_deployments/managed_clusters`
- `cd fleet_deployments/managed_clusters`
- Store `jsc-cloud-team` credentials in `managed_clusters/management_credentials.sh`
- Store `<user>` credentials in `managed_clusters/<NAME>_credentials.sh` (`<NAME>` must match the name given in `create.sh`)
- Update `create.sh`: fill in the name, project ID, and subnet CIDR
- `/bin/bash create.sh`
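As a minimal sketch with assumed example values (all IDs and names below are hypothetical), the credential files are plain shell scripts exporting OpenStack application-credential variables, and the customization block at the top of `create.sh` only needs three values:

```shell
# Hypothetical contents of managed_clusters/management_credentials.sh
# (<NAME>_credentials.sh looks the same, with the user project's credential):
export OS_AUTH_TYPE=v3applicationcredential
export OS_AUTH_URL=https://cloud.jsc.fz-juelich.de:5000/v3
export OS_APPLICATION_CREDENTIAL_ID="0123456789abcdef"   # hypothetical ID
export OS_APPLICATION_CREDENTIAL_SECRET="changeme"       # hypothetical secret

# Hypothetical customization block at the top of create.sh:
NAME="demo-cluster"
PROJECT_ID="0123456789abcdef0123456789abcdef"
SUBNET_CIDR="10.0.7.0/24"   # must be unique; one 10.0.x.0/24 per cluster
# Quick sanity check of the CIDR scheme before running the script:
[[ $SUBNET_CIDR =~ ^10\.0\.[0-9]+\.0/24$ ]] && echo "CIDR ok"
```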
Create NodeTemplate / RKE Template:
- Browse to https://zam12142.zam.kfa-juelich.de , log in
- Open sidebar (click top left) -> Cluster Management
- RKE1 Configuration (sidebar) -> Node Templates
- Add Template (top right), choose OpenStack
- Create 2 Node Templates (main + worker template, see <NAME>/userdata_[main|worker].yaml for values)
- **IMPORTANT: At the end of the node template creation, `Engine Options` -> `Docker Install URL` must be "None"!**
- RKE1 Configuration (sidebar) -> RKE Templates
- Add template (top right), name should be equal to cluster name, revision can be v1
- Click "Edit as YAML" on the right side, copy the rke.yaml file from this repo into it.
- Replace the secrets and the subnet ID with the values printed by `create.sh` earlier
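Only three values have to change in the copied rke.yaml: the two credentials in the `[Global]` section and the subnet in `[LoadBalancer]`. `create.sh` prints all three. A sketch (the angle-bracket placeholders are hypothetical, not literal syntax):

```
[Global]
application-credential-id=<OS_APPLICATION_CREDENTIAL_ID from create.sh output>
application-credential-secret=<OS_APPLICATION_CREDENTIAL_SECRET from create.sh output>
[LoadBalancer]
subnet-id=<USER_SUBNET_ID from create.sh output>
```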
Create Cluster:
- Browse to https://zam12142.zam.kfa-juelich.de , log in
- Open sidebar (click top left) -> Cluster Management
- Create (top right), select RKE1 in the top right, click OpenStack
- Cluster Name: as chosen in `create.sh`. Create two node pools: one for main nodes (check: drain before delete, etcd, control plane) and one for worker nodes (check: drain before delete, worker). Set "Auto Replace" to 5 minutes and use the previously created node templates.
- Cluster Options: "Use an existing RKE Template and revision" -> Choose the previously created one.
- Member roles (above Cluster Options) -> Add the member as owner of this cluster. If the user does not exist yet, this can be done later.
- Labels: can be used to install default software. See the list below for available labels.
- Scroll down: Create -> Done.
How to manage the cluster (once it's created; this may take up to 10 minutes):
1. Via UI: https://zam12142.zam.kfa-juelich.de , open the sidebar (click top left), Explore Cluster -> <Name>
2. Via CLI: install kubectl and download the kubeconfig (icons top right in Explore Cluster)
How to increase/decrease the number of nodes:
- https://zam12142.zam.kfa-juelich.de , sidebar (click top left), Cluster Management, click on the cluster name, then use `+` in a node pool to add more nodes.
- When decreasing, drain the nodes first:
- `kubectl cordon <node>` (or in Explore Clusters -> <name> -> nodes)
- `kubectl drain --ignore-daemonsets --delete-emptydir-data <node>` (or in UI, same as above)
- In Cluster Management, select the node and click `Scale Down`. (Otherwise deleted nodes would be replaced.)
## Supported Labels
- kured: "true" -> Installs [Kured](https://github.com/kubereboot/kured), which reboots your nodes if necessary on a Sunday between 2am and 5am (timezone: Europe/Berlin). [more](https://gitlab.jsc.fz-juelich.de/kaas/fleet-deployments/-/tree/kured)
- cinder-csi: "true" -> Installs the [Cinder-CSI Plugin](https://github.com/kubernetes/cloud-provider-openstack/tree/release-1.26/docs/cinder-csi-plugin), which creates a storage class on the cluster backed by OpenStack Cinder volumes for persistent storage. [more](https://gitlab.jsc.fz-juelich.de/kaas/fleet-deployments/-/tree/openstack-cinder-csi)
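Both labels are evaluated by Fleet, which matches clusters against their labels when deciding where to deploy a bundle. As a sketch of the mechanism (a hedged example, not the exact resources used in this setup), a Fleet `GitRepo` can target only labelled clusters:

```yaml
# Sketch: deploy the kured branch only to clusters labelled kured: "true"
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: kured            # hypothetical resource name
  namespace: fleet-default
spec:
  repo: https://gitlab.jsc.fz-juelich.de/kaas/fleet-deployments
  branch: kured
  targets:
  - clusterSelector:
      matchLabels:
        kured: "true"
```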
## Delete cluster
- Delete the cluster in the Rancher UI
- Run `delete.sh` to revert all changes made earlier (network, security group, static routes, etc.)
defaultNamespace: kube-system
helm:
  releaseName: kured
  repo: https://kubereboot.github.io/charts
  chart: kured
  version: 5.2.0
  values:
    configuration:
      rebootDays: ["mo", "tu", "we", "th", "fr", "sa", "su"]
      startTime: "2am"
      endTime: "5am"
      timeZone: "Europe/Berlin"
      period: "10m0s"
      drainGracePeriod: "240"
    tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/controlplane
      value: "true"
    - effect: NoExecute
      key: node-role.kubernetes.io/etcd
      value: "true"
#!/bin/bash
### Customization
NAME="" # Enter an (ideally) unique name for the cluster
PROJECT_ID="" # Project ID of the user's project, in which the k8s cluster will be created
SUBNET_CIDR="" # Unique CIDR (10.0.x.0/24); each cluster needs a different subnet CIDR
###
# Set to "false" to only print the output at the end without creating anything
CREATE="true"
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
mkdir -p ${DIR}/${NAME}
# Some variables for our `jsc-cloud-team` management project
MANAGEMENT_PROJECT_ID=2092d29f72ca4f32ac416cc545986007
MANAGEMENT_ROUTER_ID=90d2a94c-3bff-4a79-88d2-00dc6626e278
MANAGEMENT_ROUTER_INTERNAL_ID=5e048465-53ed-4f24-8eec-871cf7d668d5
MANAGEMENT_NETWORK_CIDR="10.0.1.0/24"
MANAGEMENT_GATEWAY_INTERNAL="10.0.1.253"
MANAGEMENT_SECGROUP_ID=7b7de2f9-a561-4f3c-929a-fd8bc26a0d2c
# activate `<user>` project credentials
source ${DIR}/${NAME}_credentials.sh
USER_ROUTER_ID=$(openstack router show router -f value -c id)
if [[ $CREATE == "true" ]]; then
    # Create network and share it with `jsc-cloud-team`
    USER_NETWORK_ID=$(openstack network create $NAME -c id -f value)
    USER_SUBNET_ID=$(openstack subnet create --subnet-range $SUBNET_CIDR --dns-nameserver 134.94.32.3 --dns-nameserver 134.94.32.4 --dns-nameserver 134.94.32.5 --network $USER_NETWORK_ID $NAME -c id -f value)
    openstack router add subnet $USER_ROUTER_ID $USER_SUBNET_ID
    openstack network rbac create --target-project $MANAGEMENT_PROJECT_ID --action access_as_shared --type network $USER_NETWORK_ID
else
    # Get IDs
    USER_NETWORK_ID=$(openstack network show $NAME -c id -f value)
    USER_SUBNET_ID=$(openstack subnet show $NAME -c id -f value)
fi
# activate `jsc-cloud-team` project credentials
source ${DIR}/management_credentials.sh
if [[ $CREATE == "true" ]]; then
    # Add port from shared network to jsc-cloud-team's internal router
    INTERNAL_ROUTER_PORT_ID=$(openstack port create --network $USER_NETWORK_ID -f value -c id ${NAME})
    INTERNAL_ROUTER_PORT_IP=$(openstack port show $INTERNAL_ROUTER_PORT_ID -f json -c fixed_ips | jq -r '.fixed_ips[0].ip_address')
    openstack router add port $MANAGEMENT_ROUTER_INTERNAL_ID $INTERNAL_ROUTER_PORT_ID
    # Set static route for external (default) router
    openstack router set --route destination=$SUBNET_CIDR,gateway=$MANAGEMENT_GATEWAY_INTERNAL $MANAGEMENT_ROUTER_ID
    # Add security group rules to allow new cluster to reach Rancher VMs
    openstack security group rule create --dst-port 443 --remote-ip=$SUBNET_CIDR --protocol tcp --description "Rancher access for ${NAME} cluster" $MANAGEMENT_SECGROUP_ID -f value -c id
    openstack security group rule create --dst-port 111 --remote-ip=$SUBNET_CIDR --protocol tcp --description "NFS access for ${NAME} cluster" $MANAGEMENT_SECGROUP_ID -f value -c id
    openstack security group rule create --dst-port 111 --remote-ip=$SUBNET_CIDR --protocol udp --description "NFS access for ${NAME} cluster" $MANAGEMENT_SECGROUP_ID -f value -c id
    openstack security group rule create --dst-port 2049 --remote-ip=$SUBNET_CIDR --protocol tcp --description "NFS access for ${NAME} cluster" $MANAGEMENT_SECGROUP_ID -f value -c id
    openstack security group rule create --dst-port 2049 --remote-ip=$SUBNET_CIDR --protocol udp --description "NFS access for ${NAME} cluster" $MANAGEMENT_SECGROUP_ID -f value -c id
fi
# activate `<user>` project credentials
source ${DIR}/${NAME}_credentials.sh
if [[ $CREATE == "true" ]]; then
    # Set static route for <user> project router
    openstack router set --route destination=$MANAGEMENT_NETWORK_CIDR,gateway=$INTERNAL_ROUTER_PORT_IP $USER_ROUTER_ID
    # Create security group
    # More details: https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/installation-requirements/port-requirements
    USER_SEC_GROUP_ID=$(openstack security group create ${NAME} -c id -f value)
    openstack security group rule create --dst-port 22 --remote-ip=$MANAGEMENT_NETWORK_CIDR --protocol tcp --description "SSH provisioning of node by RKE" $USER_SEC_GROUP_ID -f value -c id
    openstack security group rule create --dst-port 2376 --remote-ip=$MANAGEMENT_NETWORK_CIDR --protocol tcp --description "Docker daemon TLS port used by node driver" $USER_SEC_GROUP_ID -f value -c id
    openstack security group rule create --dst-port 80 --remote-ip=$SUBNET_CIDR --protocol tcp --description "http ingress" $USER_SEC_GROUP_ID -f value -c id
    openstack security group rule create --dst-port 443 --remote-ip=$SUBNET_CIDR --protocol tcp --description "https ingress" $USER_SEC_GROUP_ID -f value -c id
    openstack security group rule create --dst-port 2379 --remote-ip=$SUBNET_CIDR --protocol tcp --description "etcd client requests" $USER_SEC_GROUP_ID -f value -c id
    openstack security group rule create --dst-port 2380 --remote-ip=$SUBNET_CIDR --protocol tcp --description "etcd peer communication" $USER_SEC_GROUP_ID -f value -c id
    openstack security group rule create --dst-port 6443 --remote-ip=$SUBNET_CIDR --protocol tcp --description "Kubernetes apiserver" $USER_SEC_GROUP_ID -f value -c id
    openstack security group rule create --dst-port 8472 --remote-ip=$SUBNET_CIDR --protocol udp --description "Canal/Flannel VXLAN overlay networking" $USER_SEC_GROUP_ID -f value -c id
    openstack security group rule create --dst-port 9099 --remote-ip=$SUBNET_CIDR --protocol tcp --description "Canal/Flannel livenessProbe/readinessProbe" $USER_SEC_GROUP_ID -f value -c id
    openstack security group rule create --dst-port 10250 --remote-ip=$SUBNET_CIDR --protocol tcp --description "Metrics server communication with all nodes" $USER_SEC_GROUP_ID -f value -c id
    openstack security group rule create --dst-port 10254 --remote-ip=$SUBNET_CIDR --protocol tcp --description "Ingress controller livenessProbe/readinessProbe" $USER_SEC_GROUP_ID -f value -c id
    openstack security group rule create --dst-port 30000:32767 --remote-ip=$SUBNET_CIDR --protocol tcp --description "NodePort port range" $USER_SEC_GROUP_ID -f value -c id
    openstack security group rule create --dst-port 30000:32767 --remote-ip=$SUBNET_CIDR --protocol udp --description "NodePort port range" $USER_SEC_GROUP_ID -f value -c id
    # Create a keypair, will be used to bootstrap VMs of the new cluster
    openstack keypair create ${NAME} > ${DIR}/${NAME}/keypair.key
    chmod 400 ${DIR}/${NAME}/keypair.key
fi
# You can use these variables to create NodeTemplates in Rancher.
# IMPORTANT: at the end of the nodetemplate, set "engineInstallUrl" to None.
# Docker will be installed during the cloud-init runcmd phase.
# You'll find the userdata files in ${DIR}/${NAME}/userdata_[main|worker].yaml
echo "--- NodeTemplate ---"
echo "applicationCredentialId: ${OS_APPLICATION_CREDENTIAL_ID}"
echo "applicationCredentialSecret: ${OS_APPLICATION_CREDENTIAL_SECRET}"
echo "authUrl: https://cloud.jsc.fz-juelich.de:5000/v3"
echo "domainId: default"
echo "flavorId: d468d3fb-18da-4bd3-94ce-9c4793cf2082 (4Cpu / 8GB)"
echo "flavorId: 05572232-73cc-4dfc-87af-b9f84d56bd33 (2Cpu / 4GB)"
echo "imageId: 1b14ce21-5bd3-4776-860f-8d77a0232d24"
echo "keypairName: ${NAME}"
echo "netId: ${USER_NETWORK_ID}"
echo "privateKeyFile:"
cat ${DIR}/${NAME}/keypair.key
echo "region: JSCCloud"
echo "secGroups: ${NAME}"
echo "sshUser: ubuntu"
echo "tenantDomainId: aaa9e797f2b94bbfab233dab6b48697a"
echo "tenantId: ${PROJECT_ID}"
echo "userDataFile: see files for main/worker in ${DIR}/${NAME}"
sed -e "s@<name>@${NAME}@g" ${DIR}/userdata_main.yaml > ${DIR}/${NAME}/userdata_main.yaml
sed -e "s@<name>@${NAME}@g" ${DIR}/userdata_worker.yaml > ${DIR}/${NAME}/userdata_worker.yaml
echo "engineInstallUrl: None"
echo "----------------------------------"
# You can use the rke.yaml file and create a RKE Template in Rancher
echo "----------------------------------"
echo "--- RkeTemplate (replace in rke.yaml line 16,17,22) ---"
echo " [Global]"
echo " auth-url=https://cloud.jsc.fz-juelich.de:5000/v3"
echo " application-credential-id=$OS_APPLICATION_CREDENTIAL_ID"
echo " application-credential-secret=$OS_APPLICATION_CREDENTIAL_SECRET"
echo " region=JSCCloud"
echo " tls-insecure=true"
echo " [LoadBalancer]"
echo " use-octavia=true"
echo " subnet-id=$USER_SUBNET_ID"
echo " floating-network-id=c2ce19a1-ad08-41fb-8dd2-4b97d78815fc"
echo " manage-security-groups=false"
echo " [BlockStorage]"
echo " bs-version=v2"
echo " ignore-volume-az=true"
echo "----------------------------------"
# ssh into the rancher-1 vm.
# Create a NFS folder for the cluster-backups
# Allow access to this directory
echo "---- Administrator ----"
echo "ssh ubuntu@134.94.198.215"
echo "sudo su"
echo "mkdir /nfs/cluster-backups/${NAME}"
echo "echo \"/nfs/cluster-backups/${NAME} ${SUBNET_CIDR}(rw,sync,no_root_squash,no_subtree_check)\" >> /etc/exports"
echo "exportfs -a"
echo "--------------------------------"
echo "---- Logs for the cluster creation (on Rancher-1 VM)----"
echo "kubectl -n cattle-system logs -f -l app=rancher"
echo "----------------------------------"
#!/bin/bash
### Customization
NAME=""
SUBNET_CIDR=""
###
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
MANAGEMENT_PROJECT_ID=2092d29f72ca4f32ac416cc545986007
MANAGEMENT_ROUTER_ID=90d2a94c-3bff-4a79-88d2-00dc6626e278
MANAGEMENT_ROUTER_INTERNAL_ID=5e048465-53ed-4f24-8eec-871cf7d668d5
MANAGEMENT_NETWORK_CIDR="10.0.1.0/24"
MANAGEMENT_GATEWAY_INTERNAL="10.0.1.253"
MANAGEMENT_SECGROUP_ID=7b7de2f9-a561-4f3c-929a-fd8bc26a0d2c
source ${DIR}/credentials.sh
USER_ROUTER_ID=$(openstack router show router -f value -c id)
USER_NETWORK_ID=$(openstack network show $NAME -f value -c id)
USER_SUBNET_ID=$(openstack network show $NAME -c subnets -f json | jq -r '.subnets[0]')
openstack keypair delete ${NAME}
rm ${DIR}/keypair.key
USER_SEC_GROUP_ID=$(openstack security group show ${NAME} -c id -f value)
openstack security group delete $USER_SEC_GROUP_ID
source ${DIR}/../management_credentials.sh
INTERNAL_ROUTER_PORT_ID=$(openstack port show -f value -c id ${NAME})
INTERNAL_ROUTER_PORT_IP=$(openstack port show $INTERNAL_ROUTER_PORT_ID -f json -c fixed_ips | jq -r '.fixed_ips[0].ip_address')
openstack router remove port $MANAGEMENT_ROUTER_INTERNAL_ID $INTERNAL_ROUTER_PORT_ID
openstack router unset --route destination=$SUBNET_CIDR,gateway=$MANAGEMENT_GATEWAY_INTERNAL $MANAGEMENT_ROUTER_ID
# Remove the Rancher/NFS access rules for this cluster's subnet
for SPEC in "443 tcp" "111 tcp" "2049 tcp" "111 udp" "2049 udp"; do
    read -r PORT PROTO <<< "$SPEC"
    RULE_ID=$(openstack security group rule list -c ID -c 'IP Range' -c 'Port Range' -c 'IP Protocol' -f value $MANAGEMENT_SECGROUP_ID | grep "${PORT}:${PORT}" | grep $PROTO | grep "$SUBNET_CIDR" | cut -d' ' -f1)
    openstack security group rule delete $RULE_ID
done
source ${DIR}/credentials.sh
openstack router unset --route destination=$MANAGEMENT_NETWORK_CIDR,gateway=$INTERNAL_ROUTER_PORT_IP $USER_ROUTER_ID
openstack router remove subnet $USER_ROUTER_ID $USER_SUBNET_ID
openstack network delete $USER_NETWORK_ID
echo "ssh Rancher-1"
echo "# Remove nfs share for cluster in /etc/exports"
echo "exportfs -a"
echo "# Remove nfs backup directory for cluster, if no longer needed"
docker_root_dir: /var/lib/docker
enable_cluster_alerting: false
enable_cluster_monitoring: false
enable_network_policy: false
local_cluster_auth_endpoint:
  enabled: true
rancher_kubernetes_engine_config:
  addon_job_timeout: 45
  addons: |-
    ---
    apiVersion: v1
    stringData:
      cloud-config: |-
        [Global]
        auth-url=https://cloud.jsc.fz-juelich.de:5000/v3
        application-credential-id=...
        application-credential-secret=...
        region=JSCCloud
        tls-insecure=true
        [LoadBalancer]
        use-octavia=true
        subnet-id=...
        floating-network-id=c2ce19a1-ad08-41fb-8dd2-4b97d78815fc
        manage-security-groups=false
        [BlockStorage]
        bs-version=v2
        ignore-volume-az=true
    kind: Secret
    metadata:
      name: cloud-config
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: cloud-controller-manager
      namespace: kube-system
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: openstack-cloud-controller-manager
      namespace: kube-system
      labels:
        k8s-app: openstack-cloud-controller-manager
    spec:
      selector:
        matchLabels:
          k8s-app: openstack-cloud-controller-manager
      updateStrategy:
        type: RollingUpdate
      template:
        metadata:
          labels:
            k8s-app: openstack-cloud-controller-manager
        spec:
          nodeSelector:
            node-role.kubernetes.io/controlplane: "true"
          securityContext:
            runAsUser: 1001
          tolerations:
          - key: node.cloudprovider.kubernetes.io/uninitialized
            value: "true"
            effect: NoSchedule
          - key: node-role.kubernetes.io/controlplane
            effect: NoSchedule
            value: "true"
          - key: node-role.kubernetes.io/etcd
            effect: NoExecute
            value: "true"
          serviceAccountName: cloud-controller-manager
          containers:
          - name: openstack-cloud-controller-manager
            image: registry.k8s.io/provider-os/openstack-cloud-controller-manager:v1.26.3
            args:
            - /bin/openstack-cloud-controller-manager
            - --v=1
            - --cluster-name=$(CLUSTER_NAME)
            - --cloud-config=$(CLOUD_CONFIG)
            - --cloud-provider=openstack
            - --use-service-account-credentials=true
            - --bind-address=127.0.0.1
            volumeMounts:
            - mountPath: /etc/kubernetes/pki
              name: k8s-certs
              readOnly: true
            - mountPath: /etc/ssl/certs
              name: ca-certs
              readOnly: true
            - mountPath: /etc/config
              name: cloud-config-volume
              readOnly: true
            resources:
              requests:
                cpu: 200m
            env:
            - name: CLOUD_CONFIG
              value: /etc/config/cloud-config
            - name: CLUSTER_NAME
              value: kubernetes
          hostNetwork: true
          volumes:
          - hostPath:
              path: /etc/kubernetes/pki
              type: DirectoryOrCreate
            name: k8s-certs
          - hostPath:
              path: /etc/ssl/certs
              type: DirectoryOrCreate
            name: ca-certs
          - name: cloud-config-volume
            secret:
              secretName: cloud-config
    ---
    apiVersion: v1
    items:
    - apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: system:cloud-node-controller
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: system:cloud-node-controller
      subjects:
      - kind: ServiceAccount
        name: cloud-node-controller
        namespace: kube-system
    - apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: system:cloud-controller-manager
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: system:cloud-controller-manager
      subjects:
      - kind: ServiceAccount
        name: cloud-controller-manager
        namespace: kube-system
    kind: List
    metadata: {}
    ---
    apiVersion: v1
    items:
    - apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: system:cloud-controller-manager
      rules:
      - apiGroups:
        - coordination.k8s.io
        resources:
        - leases
        verbs:
        - get
        - create
        - update
      - apiGroups:
        - ""
        resources:
        - events
        verbs:
        - create
        - patch
        - update
      - apiGroups:
        - ""
        resources:
        - nodes
        verbs:
        - '*'
      - apiGroups:
        - ""
        resources:
        - nodes/status
        verbs:
        - patch
      - apiGroups:
        - ""
        resources:
        - services
        verbs:
        - list
        - patch
        - update
        - watch
      - apiGroups:
        - ""
        resources:
        - services/status
        verbs:
        - patch
      - apiGroups:
        - ""
        resources:
        - serviceaccounts
        verbs:
        - create
        - get
      - apiGroups:
        - ""
        resources:
        - serviceaccounts/token
        verbs:
        - create
      - apiGroups:
        - ""
        resources:
        - persistentvolumes
        verbs:
        - '*'
      - apiGroups:
        - ""
        resources:
        - endpoints
        verbs:
        - create
        - get
        - list
        - watch
        - update
      - apiGroups:
        - ""
        resources:
        - configmaps
        verbs:
        - get
        - list
        - watch
      - apiGroups:
        - ""
        resources:
        - secrets
        verbs:
        - list
        - get
        - watch
    - apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: system:cloud-node-controller
      rules:
      - apiGroups:
        - ""
        resources:
        - nodes
        verbs:
        - '*'
      - apiGroups:
        - ""
        resources:
        - nodes/status
        verbs:
        - patch
      - apiGroups:
        - ""
        resources:
        - events
        verbs:
        - create
        - patch
        - update
    kind: List
    metadata: {}
  authentication:
    strategy: x509
  authorization: {}
  bastion_host:
    ignore_proxy_env_vars: false
    ssh_agent_auth: false
  cloud_provider:
    name: external
  dns:
    linear_autoscaler_params: {}
    node_selector: null
    nodelocal:
      node_selector: null
      update_strategy:
        rolling_update: {}
    options: null
    provider: coredns
    reversecidrs: null
    stubdomains: null
    tolerations: null
    update_strategy: {}
    upstreamnameservers:
    - 134.94.32.3
    - 134.94.32.4
    - 134.94.32.5
  enable_cri_dockerd: false
  ignore_docker_version: true
  ingress:
    default_backend: false
    default_ingress_class: true
    http_port: 0
    https_port: 0
    provider: none
  kubernetes_version: v1.26.7-rancher1-1
  monitoring:
    provider: metrics-server
    replicas: 1
  network:
    mtu: 0
    options:
      flannel_backend_type: vxlan
    plugin: canal
  restore:
    restore: false
  rotate_encryption_key: false
  services:
    etcd:
      backup_config:
        enabled: true
        interval_hours: 12
        retention: 6
        safe_timestamp: false
        timeout: 300
      creation: 12h
      extra_args:
        election-timeout: '5000'
        heartbeat-interval: '500'
      gid: 0
      retention: 72h
      snapshot: false
      uid: 0
    kube-api:
      always_pull_images: false
      pod_security_policy: false
      secrets_encryption_config:
        enabled: false
      service_node_port_range: 30000-32767
    kube-controller: {}
    kubelet:
      fail_swap_on: false
      generate_serving_certificate: false
    kubeproxy: {}
    scheduler: {}
  ssh_agent_auth: false
  upgrade_strategy:
    max_unavailable_controlplane: '1'
    max_unavailable_worker: 10%
    node_drain_input:
      delete_local_data: false
      force: false
      grace_period: -1
      ignore_daemon_sets: true
      timeout: 120
#cloud-config
package_update: false
package_upgrade: false
write_files:
- encoding: b64
  content: Ly8gQXV0b21hdGljYWxseSB1cGdyYWRlIHBhY2thZ2VzIGZyb20gdGhlc2UgKG9yaWdpbjphcmNoaXZlKSBwYWlycwovLwovLyBOb3RlIHRoYXQgaW4gVWJ1bnR1IHNlY3VyaXR5IHVwZGF0ZXMgbWF5IHB1bGwgaW4gbmV3IGRlcGVuZGVuY2llcwovLyBmcm9tIG5vbi1zZWN1cml0eSBzb3VyY2VzIChlLmcuIGNocm9taXVtKS4gQnkgYWxsb3dpbmcgdGhlIHJlbGVhc2UKLy8gcG9ja2V0IHRoZXNlIGdldCBhdXRvbWF0aWNhbGx5IHB1bGxlZCBpbi4KVW5hdHRlbmRlZC1VcGdyYWRlOjpBbGxvd2VkLU9yaWdpbnMgewoJIiR7ZGlzdHJvX2lkfToke2Rpc3Ryb19jb2RlbmFtZX0iOwoJIiR7ZGlzdHJvX2lkfToke2Rpc3Ryb19jb2RlbmFtZX0tc2VjdXJpdHkiOwoJLy8gRXh0ZW5kZWQgU2VjdXJpdHkgTWFpbnRlbmFuY2U7IGRvZXNuJ3QgbmVjZXNzYXJpbHkgZXhpc3QgZm9yCgkvLyBldmVyeSByZWxlYXNlIGFuZCB0aGlzIHN5c3RlbSBtYXkgbm90IGhhdmUgaXQgaW5zdGFsbGVkLCBidXQgaWYKCS8vIGF2YWlsYWJsZSwgdGhlIHBvbGljeSBmb3IgdXBkYXRlcyBpcyBzdWNoIHRoYXQgdW5hdHRlbmRlZC11cGdyYWRlcwoJLy8gc2hvdWxkIGFsc28gaW5zdGFsbCBmcm9tIGhlcmUgYnkgZGVmYXVsdC4KCSIke2Rpc3Ryb19pZH1FU01BcHBzOiR7ZGlzdHJvX2NvZGVuYW1lfS1hcHBzLXNlY3VyaXR5IjsKCSIke2Rpc3Ryb19pZH1FU006JHtkaXN0cm9fY29kZW5hbWV9LWluZnJhLXNlY3VyaXR5IjsKCSIke2Rpc3Ryb19pZH06JHtkaXN0cm9fY29kZW5hbWV9LXVwZGF0ZXMiOwovLwkiJHtkaXN0cm9faWR9OiR7ZGlzdHJvX2NvZGVuYW1lfS1wcm9wb3NlZCI7Ci8vCSIke2Rpc3Ryb19pZH06JHtkaXN0cm9fY29kZW5hbWV9LWJhY2twb3J0cyI7Cn07CgovLyBQeXRob24gcmVndWxhciBleHByZXNzaW9ucywgbWF0Y2hpbmcgcGFja2FnZXMgdG8gZXhjbHVkZSBmcm9tIHVwZ3JhZGluZwpVbmF0dGVuZGVkLVVwZ3JhZGU6OlBhY2thZ2UtQmxhY2tsaXN0IHsKfTsKClVuYXR0ZW5kZWQtVXBncmFkZTo6RGV2UmVsZWFzZSAiYXV0byI7Cg==
  owner: root:root
  path: /etc/apt/apt.conf.d/50unattended-upgrades
  permissions: '0644'
- encoding: b64
  content: L3Zhci9saWIvZG9ja2VyL2NvbnRhaW5lcnMvKi8qLmxvZyB7CiAgcm90YXRlIDcKICBkYWlseQogIGNvbXByZXNzCiAgbWlzc2luZ29rCiAgZGVsYXljb21wcmVzcwogIGNvcHl0cnVuY2F0ZQp9Cg==
  owner: root:root
  path: /etc/logrotate.d/docker-container
  permissions: '0644'
- encoding: b64
  content: IwojIERlZmF1bHQgc2V0dGluZ3MgZm9yIC9ldGMvaW5pdC5kL3N5c3N0YXQsIC9ldGMvY3Jvbi5kL3N5c3N0YXQKIyBhbmQgL2V0Yy9jcm9uLmRhaWx5L3N5c3N0YXQgZmlsZXMKIwoKIyBTaG91bGQgc2FkYyBjb2xsZWN0IHN5c3RlbSBhY3Rpdml0eSBpbmZvcm1hdGlvbnM/IFZhbGlkIHZhbHVlcwojIGFyZSAidHJ1ZSIgYW5kICJmYWxzZSIuIFBsZWFzZSBkbyBub3QgcHV0IG90aGVyIHZhbHVlcywgdGhleQojIHdpbGwgYmUgb3ZlcndyaXR0ZW4gYnkgZGViY29uZiEKRU5BQkxFRD0idHJ1ZSIKCg==
  owner: root:root
  path: /etc/default/sysstat
  permissions: '0644'
- encoding: b64
  content: a2VybmVsLnVucHJpdmlsZWdlZF91c2VybnNfY2xvbmU9MAo=
  owner: root:root
  path: /etc/sysctl.d/99-disable-unpriv-userns.conf
  permissions: '0644'
runcmd:
- echo "$(date) - Start node" >> /home/ubuntu/start.log
- echo "$(date) - Sleep 5 seconds, to avoid race condition" >> /home/ubuntu/start.log
- sleep 5
- echo "$(date) - Download docker" >> /home/ubuntu/start.log
- wget -O /tmp/docker.sh https://releases.rancher.com/install-docker/23.0.sh
- echo "$(date) - Download docker done" >> /home/ubuntu/start.log
- echo "$(date) - Install docker" >> /home/ubuntu/start.log
- sh /tmp/docker.sh
- usermod -aG docker ubuntu
- echo "$(date) - Install docker done" >> /home/ubuntu/start.log
- echo "$(date) - Set containerd and docker packages on hold" >> /home/ubuntu/start.log
- apt-mark hold containerd.io docker-compose-plugin docker-scan-plugin docker-ce docker-ce-cli docker-ce-rootless-extras
- echo "$(date) - Install custom packages" >> /home/ubuntu/start.log
- apt update && apt install -yq autofs jq net-tools nfs-common sudo sysstat unattended-upgrades
- echo "$(date) - Install custom packages done" >> /home/ubuntu/start.log
- echo "$(date) - Configure autofs" >> /home/ubuntu/start.log
- systemctl stop autofs
- mkdir -p /opt/rke
- echo "/opt/rke/etcd-snapshots -fstype=nfs,rw,vers=4,minorversion=2,proto=tcp,hard,nobind,rsize=32768,wsize=32768,nodiratime,fsc,timeo=100,noatime,nosuid,intr,nodev 10.0.1.124:/nfs/cluster-backups/<name>" > /etc/auto.nfs
- echo "$(date) - Enable autofs" >> /home/ubuntu/start.log
- echo "/- /etc/auto.nfs --ghost --timeout=86400" >> /etc/auto.master
- systemctl enable --now autofs
- echo "$(date) - Upgrade all packages" >> /home/ubuntu/start.log
- apt update && apt upgrade -yq
- echo "$(date) - Upgrade all packages done" >> /home/ubuntu/start.log
- echo "$(date) - Enable sysstat" >> /home/ubuntu/start.log
- systemctl enable --now sysstat
- echo "$(date) - Start script done" >> /home/ubuntu/start.log
#cloud-config
package_update: false
package_upgrade: false
write_files:
- encoding: b64
content: Ly8gQXV0b21hdGljYWxseSB1cGdyYWRlIHBhY2thZ2VzIGZyb20gdGhlc2UgKG9yaWdpbjphcmNoaXZlKSBwYWlycwovLwovLyBOb3RlIHRoYXQgaW4gVWJ1bnR1IHNlY3VyaXR5IHVwZGF0ZXMgbWF5IHB1bGwgaW4gbmV3IGRlcGVuZGVuY2llcwovLyBmcm9tIG5vbi1zZWN1cml0eSBzb3VyY2VzIChlLmcuIGNocm9taXVtKS4gQnkgYWxsb3dpbmcgdGhlIHJlbGVhc2UKLy8gcG9ja2V0IHRoZXNlIGdldCBhdXRvbWF0aWNhbGx5IHB1bGxlZCBpbi4KVW5hdHRlbmRlZC1VcGdyYWRlOjpBbGxvd2VkLU9yaWdpbnMgewoJIiR7ZGlzdHJvX2lkfToke2Rpc3Ryb19jb2RlbmFtZX0iOwoJIiR7ZGlzdHJvX2lkfToke2Rpc3Ryb19jb2RlbmFtZX0tc2VjdXJpdHkiOwoJLy8gRXh0ZW5kZWQgU2VjdXJpdHkgTWFpbnRlbmFuY2U7IGRvZXNuJ3QgbmVjZXNzYXJpbHkgZXhpc3QgZm9yCgkvLyBldmVyeSByZWxlYXNlIGFuZCB0aGlzIHN5c3RlbSBtYXkgbm90IGhhdmUgaXQgaW5zdGFsbGVkLCBidXQgaWYKCS8vIGF2YWlsYWJsZSwgdGhlIHBvbGljeSBmb3IgdXBkYXRlcyBpcyBzdWNoIHRoYXQgdW5hdHRlbmRlZC11cGdyYWRlcwoJLy8gc2hvdWxkIGFsc28gaW5zdGFsbCBmcm9tIGhlcmUgYnkgZGVmYXVsdC4KCSIke2Rpc3Ryb19pZH1FU01BcHBzOiR7ZGlzdHJvX2NvZGVuYW1lfS1hcHBzLXNlY3VyaXR5IjsKCSIke2Rpc3Ryb19pZH1FU006JHtkaXN0cm9fY29kZW5hbWV9LWluZnJhLXNlY3VyaXR5IjsKCSIke2Rpc3Ryb19pZH06JHtkaXN0cm9fY29kZW5hbWV9LXVwZGF0ZXMiOwovLwkiJHtkaXN0cm9faWR9OiR7ZGlzdHJvX2NvZGVuYW1lfS1wcm9wb3NlZCI7Ci8vCSIke2Rpc3Ryb19pZH06JHtkaXN0cm9fY29kZW5hbWV9LWJhY2twb3J0cyI7Cn07CgovLyBQeXRob24gcmVndWxhciBleHByZXNzaW9ucywgbWF0Y2hpbmcgcGFja2FnZXMgdG8gZXhjbHVkZSBmcm9tIHVwZ3JhZGluZwpVbmF0dGVuZGVkLVVwZ3JhZGU6OlBhY2thZ2UtQmxhY2tsaXN0IHsKfTsKClVuYXR0ZW5kZWQtVXBncmFkZTo6RGV2UmVsZWFzZSAiYXV0byI7Cg==
owner: root:root
path: /etc/apt/apt.conf.d/50unattended-upgrades
permissions: '0644'
- encoding: b64
content: L3Zhci9saWIvZG9ja2VyL2NvbnRhaW5lcnMvKi8qLmxvZyB7CiAgcm90YXRlIDcKICBkYWlseQogIGNvbXByZXNzCiAgbWlzc2luZ29rCiAgZGVsYXljb21wcmVzcwogIGNvcHl0cnVuY2F0ZQp9Cg==
owner: root:root
path: /etc/logrotate.d/docker-container
permissions: '0644'
- encoding: b64
content: IwojIERlZmF1bHQgc2V0dGluZ3MgZm9yIC9ldGMvaW5pdC5kL3N5c3N0YXQsIC9ldGMvY3Jvbi5kL3N5c3N0YXQKIyBhbmQgL2V0Yy9jcm9uLmRhaWx5L3N5c3N0YXQgZmlsZXMKIwoKIyBTaG91bGQgc2FkYyBjb2xsZWN0IHN5c3RlbSBhY3Rpdml0eSBpbmZvcm1hdGlvbnM/IFZhbGlkIHZhbHVlcwojIGFyZSAidHJ1ZSIgYW5kICJmYWxzZSIuIFBsZWFzZSBkbyBub3QgcHV0IG90aGVyIHZhbHVlcywgdGhleQojIHdpbGwgYmUgb3ZlcndyaXR0ZW4gYnkgZGViY29uZiEKRU5BQkxFRD0idHJ1ZSIKCg==
owner: root:root
path: /etc/default/sysstat
permissions: '0644'
- encoding: b64
content: a2VybmVsLnVucHJpdmlsZWdlZF91c2VybnNfY2xvbmU9MAo=
owner: root:root
path: /etc/sysctl.d/99-disable-unpriv-userns.conf
permissions: '0644'
runcmd:
- echo "$(date) - Start node" >> /home/ubuntu/start.log
- echo "$(date) - Sleep 5 seconds, to avoid race condition" >> /home/ubuntu/start.log
- sleep 5
- echo "$(date) - Download docker" >> /home/ubuntu/start.log
- wget -O /tmp/docker.sh https://releases.rancher.com/install-docker/23.0.sh
- echo "$(date) - Download docker done" >> /home/ubuntu/start.log
- echo "$(date) - Install docker" >> /home/ubuntu/start.log
- sh /tmp/docker.sh
- usermod -aG docker ubuntu
- echo "$(date) - Install docker done" >> /home/ubuntu/start.log
- echo "$(date) - Set containerd and docker packages on hold" >> /home/ubuntu/start.log
- apt-mark hold containerd.io docker-compose-plugin docker-scan-plugin docker-ce docker-ce-cli docker-ce-rootless-extras
- echo "$(date) - Install custom packages" >> /home/ubuntu/start.log
- apt update && apt install -yq jq net-tools nfs-common sudo sysstat unattended-upgrades
- echo "$(date) - Install custom packages done" >> /home/ubuntu/start.log
- echo "$(date) - Upgrade all packages" >> /home/ubuntu/start.log
- apt update && apt upgrade -yq
- echo "$(date) - Upgrade all packages done" >> /home/ubuntu/start.log
- echo "$(date) - Enable sysstat" >> /home/ubuntu/start.log
- systemctl enable --now sysstat
- echo "$(date) - Start script done" >> /home/ubuntu/start.log