
    Kubernetes as a Service administration

    This repo is used to create clusters on JSC-Cloud and deploy software on them.

    Supported Labels

    • kured: "true" -> Installs Kured, which reboots your nodes if necessary on Sundays between 2 am and 5 am (timezone: Europe/Berlin). more
    • cinder-csi: "true" -> Installs the Cinder CSI plugin, which creates a storage class on the cluster that uses OpenStack Cinder volumes as persistent storage. more
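
    To verify that a label took effect once the cluster is up, you can check for the deployed workloads with kubectl. A minimal sketch, assuming Kured runs as a DaemonSet in the kube-system namespace and the Cinder CSI plugin registers a storage class (the exact names may differ on your cluster):

    ```bash
    # Check that Kured's DaemonSet is running (namespace/name are assumptions)
    kubectl -n kube-system get daemonset kured

    # Check that a Cinder-backed storage class was created
    kubectl get storageclass
    ```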

    Create Cluster

    Requirements:

    • OpenStack CLI (pip install openstackclient)
    • Application credentials for the jsc-cloud-team project
    • Application credentials for the <user> project (roles: load-balancer_member, member, reader)
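
    The credential files used below are plain shell scripts that export the standard OpenStack application-credential variables. A minimal sketch of such a file (all values are placeholders; use the credentials of the respective project):

    ```bash
    # managed_clusters/<NAME>_credentials.sh -- placeholder values
    export OS_AUTH_TYPE=v3applicationcredential
    export OS_AUTH_URL=https://cloud.example.org:5000/v3  # placeholder, use the JSC-Cloud auth URL
    export OS_APPLICATION_CREDENTIAL_ID=<application-credential-id>
    export OS_APPLICATION_CREDENTIAL_SECRET=<application-credential-secret>
    ```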

    Create the OpenStack environment in the user's project:

    • git clone --single-branch --branch main git@gitlab.jsc.fz-juelich.de:kaas/fleet-deployments.git fleet_deployments/managed_clusters
    • cd fleet_deployments/managed_clusters
    • Store the jsc-cloud-team credentials in managed_clusters/management_credentials.sh
    • Store the <user> credentials in managed_clusters/<NAME>_credentials.sh (<NAME> must be equal to the name given in create.sh)
    • Update create.sh: fill in the name, project ID, and subnet CIDR
    • /bin/bash create.sh (the full sequence is sketched below)
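
    Put together, the setup might look like this (a sketch; <NAME> stands for the cluster name used in create.sh):

    ```bash
    git clone --single-branch --branch main \
      git@gitlab.jsc.fz-juelich.de:kaas/fleet-deployments.git fleet_deployments/managed_clusters
    cd fleet_deployments/managed_clusters

    # Place the credential files first (see above):
    #   management_credentials.sh  -- jsc-cloud-team project
    #   <NAME>_credentials.sh      -- <user> project

    # Edit create.sh (name, project ID, subnet CIDR), then run it:
    /bin/bash create.sh
    ```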

    Create NodeTemplate / RKETemplate

    • Browse to https://zam12142.zam.kfa-juelich.de , log in
    • Open sidebar (click top left) -> Cluster Management
    • RKE1 Configuration (sidebar) -> Node Templates
    • Add Template (top right), choose OpenStack
    • Create 2 Node Templates (main + worker template, see /userdata_[main|worker].yaml for values)
    • IMPORTANT: At the end of the node template creation, Engine Options -> Docker Install URL must be "None"!
    • RKE1 Configuration (sidebar) -> RKE Templates
    • Add template (top right); the name should equal the cluster name, and the revision can be v1
    • Click "Edit as YAML" on the right side and copy the contents of ${NAME}/rke.yaml into it.

    Create Cluster:

    • Browse to https://zam12142.zam.kfa-juelich.de , log in
    • Open sidebar (click top left) -> Cluster Management
    • Create (top right), select RKE1 in the top right, click OpenStack
    • Cluster Name: as before in create.sh. Create two node pools: one for main nodes (check: drain before delete, etcd, control plane) and one for worker nodes (check: drain before delete, worker). Set "Auto Replace" to 5 minutes. Use the previously created node templates.
    • Cluster Options: "Use an existing RKE Template and revision" -> choose the previously created one.
    • Member roles (above Cluster Options): add the member as an owner of this cluster. If the user does not exist yet, this can be done later.
    • Labels: can be used to install default software. See the list above for available labels.
    • Scroll down: Create -> Done.

    How to Manage Cluster (once it's created, which may take up to 10 minutes):

      1. Via UI: https://zam12142.zam.kfa-juelich.de , open the sidebar (click top left), Explore Cluster ->
      2. Via CLI: install kubectl and download the kubeconfig (icons top right in Explore Cluster), as sketched below
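
    Once the kubeconfig is downloaded, pointing kubectl at it is enough to work with the cluster. A minimal sketch (the file name is an assumption; Rancher typically names the download after the cluster):

    ```bash
    export KUBECONFIG=~/Downloads/<cluster-name>.yaml  # adjust path/name to your download
    kubectl get nodes                                  # main and worker nodes should report Ready
    ```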

    How to increase/decrease the number of nodes:

    • https://zam12142.zam.kfa-juelich.de , sidebar (click top left), Cluster Management, click on the cluster name, then use + on a node pool to add more nodes to it.
    • When decreasing, you should drain the nodes first (see the sketch after this list):
      • kubectl cordon <node> (or in Explore Cluster -> Nodes)
      • kubectl drain --ignore-daemonsets --delete-emptydir-data <node> (or in the UI, same as above)
      • In Cluster Management, select the node and click Scale Down. (Deleted nodes would otherwise be replaced.)
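
    The drain sequence for a single node might look like this (the node name is a placeholder):

    ```bash
    NODE=<node>                  # placeholder, the node you want to remove
    kubectl cordon "$NODE"       # stop scheduling new pods onto the node
    kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data

    # Afterwards, scale the node down in Cluster Management so that
    # Rancher does not replace it automatically.
    ```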

    Delete cluster

    • Delete the cluster in the Rancher UI
    • Use delete.sh to revert all changes made before (network, security group, static routes, etc.), as sketched below
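
    A sketch of the teardown, run from the same checkout and assuming the credential files from the setup step are still in place:

    ```bash
    cd fleet_deployments/managed_clusters
    /bin/bash delete.sh  # reverts network, security group, static routes, ...
    ```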