Upgrading Clusters on Huawei Cloud Stack

This guide explains how to upgrade Kubernetes clusters on Huawei Cloud Stack with minimal downtime, while preserving stability and data integrity.

Overview

Cluster upgrades on HCS encompass multiple components and follow a structured approach to ensure system reliability:

  • Control Plane Upgrades: Update Kubernetes control plane components and underlying infrastructure
  • Worker Node Upgrades: Upgrade worker nodes with new machine images and Kubernetes versions
  • Infrastructure Updates: Modify virtual machine specifications, storage, and network configurations

Cluster API orchestrates declarative rolling updates with built-in safety mechanisms.
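Since these updates are declarative, an upgrade is driven entirely by editing Cluster API resources. A quick way to see the objects involved (a sketch; the cpaas-system namespace comes from the procedures in this guide, and the resource kinds are standard Cluster API):

```shell
# Sketch: list the Cluster API objects that a rolling upgrade acts on.
NS="cpaas-system"
command -v kubectl >/dev/null && \
  kubectl get clusters,kubeadmcontrolplane,machinedeployments,machines -n "$NS" || true
echo "Inspecting upgrade objects in namespace: $NS"
```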

INFO

Prerequisites

Before you start, ensure:

  • The control plane is reachable
  • All nodes are healthy (Ready)

For initial deployment, see the Create Cluster guide.
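The prerequisites above can be checked with a minimal preflight script (a sketch; it assumes kubectl is configured against the target cluster):

```shell
# Sketch: verify the API server answers and that every node reports Ready.
kubectl cluster-info 2>/dev/null || true
# Count nodes whose status is not exactly "Ready" (0 means all nodes are healthy).
NOT_READY=$(kubectl get nodes --no-headers 2>/dev/null | awk '$2 != "Ready" {n++} END {print n+0}')
echo "Nodes not Ready: ${NOT_READY}"
```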

WARNING

Data Loss During Upgrades

Upgrades rely on Cluster API's rolling update mechanism. During a rolling update on HCS, data disks are detached from old VMs and reattached to new VMs, so their contents are preserved; system disks, however, are recreated with each replacement VM, and anything stored only on them is lost. Ensure that no cluster functionality or workloads depend on data stored on the system disk.

Control Plane Upgrades

Control plane upgrades update the Kubernetes API server, etcd, scheduler, and controller manager, along with the underlying VM infrastructure.

Infrastructure Image Updates

Upgrading the underlying machine images for control plane nodes provides security patches, performance improvements, and updated system components.

Procedure

  1. Create Updated Machine Template

    Copy the existing HCSMachineTemplate referenced by KubeadmControlPlane and modify the required specifications:

    kubectl get hcsmachinetemplate <current-template-name> -n cpaas-system -o yaml > new-cp-template.yaml
  2. Modify Template Specifications

    Modify the new template:

    • Set metadata.name to <new-template-name>
    • Update as needed:
      • spec.template.spec.imageName
      • spec.template.spec.flavorName
      • spec.template.spec.rootVolume.size
      • spec.template.spec.dataVolumes
  3. Deploy Updated Template

    Apply the new machine template:

    kubectl apply -f new-cp-template.yaml -n cpaas-system
  4. Update Control Plane Reference

    Modify the KubeadmControlPlane resource to reference the new template:

    kubectl patch kubeadmcontrolplane <kcp-name> -n cpaas-system --type='merge' -p='{"spec":{"machineTemplate":{"infrastructureRef":{"name":"<new-template-name>"}}}}'
  5. Monitor Rolling Update

    The control plane will automatically perform a rolling update:

    kubectl get kubeadmcontrolplane <kcp-name> -n cpaas-system -w
    kubectl get machines -n cpaas-system -l cluster.x-k8s.io/control-plane
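Putting steps 1–2 together, the new template might look like the sketch below. The apiVersion, placeholder names, and values are illustrative; only the field paths come from this procedure:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1  # check the version served by your install
kind: HCSMachineTemplate
metadata:
  name: <new-template-name>
  namespace: cpaas-system
spec:
  template:
    spec:
      imageName: <updated-image-name>   # new machine image
      flavorName: <vm-flavor>           # VM specification
      rootVolume:
        size: 100                       # system disk size (illustrative)
      dataVolumes: []                   # data disks; reattached during the rolling update
```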

Kubernetes Version Upgrades

Upgrading the Kubernetes version involves updating both the control plane software and the supporting virtual machine images.

Prerequisites

  • Verify compatibility between the target Kubernetes version and existing workloads
  • Ensure the VM template supports the target Kubernetes version. See OS Support Matrix for version mapping.
  • Review the Kubernetes upgrade path and version skew policy

Procedure

  1. Update VM Template Reference

    Update spec.template.spec.imageName to a VM image that matches the target Kubernetes version. Because machine templates are treated as immutable, create a copy of the referenced HCSMachineTemplate with the new image name (following the Infrastructure Image Updates procedure above) rather than editing the template in place.

  2. Update Control Plane Version

    Set the spec.version field in the KubeadmControlPlane resource to the target version (required). Optionally adjust related fields (for example, the rollout strategy, drain/deletion timeouts, or the referenced infrastructure template) to align with the new version and your upgrade policy.

  3. Verify Upgrade Progress

    Monitor the rolling upgrade process:

    # Check control plane status
    kubectl get kubeadmcontrolplane <kcp-name> -n cpaas-system
    
    # Monitor individual machines
    kubectl get machines -n cpaas-system -l cluster.x-k8s.io/control-plane
    
    # Verify cluster health
    kubectl get nodes
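Step 2 above can be applied as a merge patch. In this sketch, the control-plane name placeholder matches the earlier commands, and the target version is illustrative:

```shell
# Sketch: point the KubeadmControlPlane at the target Kubernetes version.
KCP_NAME="<kcp-name>"        # substitute your control plane name
TARGET_VERSION="v1.28.4"     # illustrative; pick a version allowed by the skew policy
PATCH=$(printf '{"spec":{"version":"%s"}}' "$TARGET_VERSION")
command -v kubectl >/dev/null && \
  kubectl patch kubeadmcontrolplane "$KCP_NAME" -n cpaas-system --type=merge -p "$PATCH" || true
echo "$PATCH"
```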

Worker Node Upgrades

Worker node upgrades are managed via MachineDeployment resources.

INFO

For detailed worker node procedures, see the Managing Nodes section.

Additional Resources