The Cluster API driver for Magnum enables OpenStack Magnum, a container orchestration service, to create and manage Kubernetes clusters using the Cluster API framework. The driver leverages the Cluster API to simplify the deployment and management of Kubernetes clusters within an OpenStack infrastructure.
With the Cluster API driver for Magnum, Magnum takes on the responsibility of creating and maintaining the Cluster API resources. Magnum interacts with the Cluster API controllers and reconcilers to dynamically provision and manage Kubernetes clusters.
Magnum utilizes the capabilities of the Cluster API to define the desired state of the Kubernetes clusters using familiar Cluster API resources such as Cluster, MachineDeployment, and MachineSet. These resources encapsulate important cluster configurations, including the number of control plane and worker nodes, their specifications, and other relevant attributes.
By leveraging the Cluster API driver for Magnum, Magnum translates the desired cluster specifications into Cluster API resources. It creates and manages these resources, ensuring that the Kubernetes clusters are provisioned and maintained according to the specified configurations.

Through this integration, Magnum empowers users to leverage the Cluster API's declarative and consistent approach to manage their Kubernetes clusters in an OpenStack environment. Using Magnum's container orchestration capabilities, users can easily create and scale Kubernetes deployments while benefiting from the automation and extensibility provided by the Cluster API.
Here are some helpful introduction and installation blog posts:
Openstack Magnum Cluster API, written by Satish Patel.
OpenStack Magnum Kubernetes Cluster API driver in Kolla-Ansible, written by R0K5T4R.

clusterctl
If you'd like to track the progress of a specific cluster from the clusterctl perspective, you can run the following command to find out the stack_id of the cluster and then use clusterctl describe to get the status of the cluster:
$ export CLUSTER_ID=$(openstack coe cluster show <cluster-name> -f value -c stack_id)
$ watch -cn1 'clusterctl describe cluster -n magnum-system $CLUSTER_ID --grouping=false --color'
CREATE_IN_PROGRESS
stateWith the Cluster API driver for Magnum, the cluster creation process is +performed by the Cluster API for OpenStack. Due to the logic of how the +controller managers work, the process of creating a cluster is performed in +multiple steps and if a step fails, it will keep retrying until it succeeds.
Unlike the legacy Heat driver, the Cluster API driver for Magnum does not move the state of the cluster to CREATE_FAILED if a step fails. Instead, it will keep the cluster in the CREATE_IN_PROGRESS state until the cluster is successfully created or the cluster is deleted.
If you are experiencing issues with a cluster being stuck in the CREATE_IN_PROGRESS state, you can follow the steps below to troubleshoot the issue:
Check the Cluster name from the stack_id field in Magnum:
$ openstack coe cluster show <cluster-name> -f value -c stack_id
Check if the Cluster exists in the Kubernetes cluster using the stack_id:

$ kubectl -n magnum-system get clusters <stack-id>
Note

If the cluster exists and it is in the Provisioned state, you can skip to step 3.
You will need to look up the OpenStackCluster for the Cluster:

$ kubectl -n magnum-system get openstackclusters -l cluster.x-k8s.io/cluster-name=<stack-id>
Note

If the OpenStackCluster shows true for READY, you can skip to step 4.
You will have to look at the KubeadmControlPlane for the OpenStackCluster:

$ kubectl -n magnum-system get kubeadmcontrolplanes -l cluster.x-k8s.io/cluster-name=<stack-id>
If the number of READY nodes does not match the number of REPLICAS, you will need to check whether the instances are coming up by looking at the OpenStackMachine resources for the KubeadmControlPlane:

$ kubectl -n magnum-system describe openstackmachines -l cluster.x-k8s.io/control-plane=,cluster.x-k8s.io/cluster-name=<stack-id>
From the output, you will need to look at the Status field and see if any of the conditions are False. If they are, look at the Message field to see what the error is.
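To surface only the failing conditions at a glance, a jsonpath query like the following sketch can help; the selector and resource type mirror the describe command above, though the exact output depends on your Cluster API version:

```
$ kubectl -n magnum-system get openstackmachines \
    -l cluster.x-k8s.io/control-plane=,cluster.x-k8s.io/cluster-name=<stack-id> \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{range .status.conditions[?(@.status=="False")]}{"  "}{.type}: {.message}{"\n"}{end}{end}'
```

Each machine is printed with only its False conditions and their messages, which is usually enough to spot the failing step without reading the full describe output.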
DELETE_IN_PROGRESS
stateIf you have a case where a project has been deleted from OpenStack but the +cluster was not deleted, you will not be able to delete it even as an admin +user. You will find log messages such as the following which will indicate +that the project is missing:
E0705 17:11:00.333902 1 controller.go:326] "Reconciler error" err=<providerClient authentication err: Resource not found: [POST https://cloud.atmosphere.dev/v3/auth/tokens], error message: {"error":{"code":404,"message":"Could not find project: 1dfcc1f4399948baac7a83a6607f693c.","title":"Not Found"}}
In order to work around this issue, you will need to create a new project, go into the database, and update the id of the new project to match the project_id of the cluster.
Warning

It is possible to corrupt the database if you do not know what you are doing. Please make sure you have a backup of the database before proceeding.
Create a new project in OpenStack:

$ export NEW_PROJECT_ID=$(openstack project create cleanup-project -f value -c id)
Get the existing project_id of the cluster:

$ export CURRENT_PROJECT_ID=$(openstack coe cluster show <cluster-name> -f value -c project_id)
Update the id of the project in Keystone to match the project_id of the cluster:

$ mysql -B -N -u root -p -e "update project set id='$CURRENT_PROJECT_ID' where id='$NEW_PROJECT_ID';" keystone
If you're using Atmosphere, you can run the following:
$ kubectl -n openstack exec -it sts/percona-xtradb-pxc -- mysql -hlocalhost -uroot -p$(kubectl -n openstack get secret/percona-xtradb -ojson | jq -r '.data.root' | base64 --decode) -e "update project set id='$CURRENT_PROJECT_ID' where id='$NEW_PROJECT_ID';" keystone
Verify that the project now exists under the new id:

$ openstack project show $CURRENT_PROJECT_ID
Give your current admin user access to the new project:

$ openstack role add --user $OS_USERNAME --project $CURRENT_PROJECT_ID member
Switch your session to that project:

$ export OS_PROJECT_ID=$CURRENT_PROJECT_ID
Create a new set of application credentials and update the existing cloud-config secret for the cluster:

$ export CAPI_CLUSTER_NAME=$(openstack coe cluster show <cluster-name> -f value -c stack_id)
$ export EXISTING_APPCRED_ID=$(kubectl -n magnum-system get secret/$CAPI_CLUSTER_NAME-cloud-config -ojson | jq -r '.data."clouds.yaml"' | base64 --decode | grep application_credential_id | awk '{print $2}')
$ export EXISTING_APPCRED_SECRET=$(kubectl -n magnum-system get secret/$CAPI_CLUSTER_NAME-cloud-config -ojson | jq -r '.data."clouds.yaml"' | base64 --decode | grep application_credential_secret | awk '{print $2}')
$ export NEW_APPCRED_ID=$(openstack application credential create --secret $EXISTING_APPCRED_SECRET $CAPI_CLUSTER_NAME-cleanup -f value -c id)
$ kubectl -n magnum-system patch secret/$CAPI_CLUSTER_NAME-cloud-config -p '{"data":{"clouds.yaml":"'$(kubectl -n magnum-system get secret/$CAPI_CLUSTER_NAME-cloud-config -ojson | jq -r '.data."clouds.yaml"' | base64 --decode | sed "s/$EXISTING_APPCRED_ID/$NEW_APPCRED_ID/" | base64 --wrap=0)'"}}'
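If you want to rehearse the credential-swap logic before touching a live secret, the same grep/awk/sed/base64 pipeline can be exercised on a throwaway clouds.yaml. Everything below is hypothetical local sample data; no OpenStack or Kubernetes access is involved:

```shell
# Hypothetical stand-in for the decoded clouds.yaml stored in the secret.
cat > /tmp/sample-clouds.yaml <<'EOF'
clouds:
  default:
    auth:
      application_credential_id: old-id-123
      application_credential_secret: s3cret
EOF

# Extract the current credential id the same way the procedure above does.
EXISTING_APPCRED_ID=$(grep application_credential_id /tmp/sample-clouds.yaml | awk '{print $2}')
NEW_APPCRED_ID="new-id-456"

# Swap the id and re-encode, mirroring what the kubectl patch step sends.
PATCHED=$(sed "s/$EXISTING_APPCRED_ID/$NEW_APPCRED_ID/" /tmp/sample-clouds.yaml | base64 --wrap=0)

# Decode to confirm the new id landed in the payload.
echo "$PATCHED" | base64 --decode | grep application_credential_id
```

Once the decoded payload shows the new credential id, the same substitution is safe to run against the real secret.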
At this point, the cluster should start making progress on the deletion process; you can verify this by running:
$ kubectl -n capo-system logs deploy/capo-controller-manager -f
Once the cluster is gone, you can clean up the project:
$ unset OS_PROJECT_ID
$ openstack project delete $CURRENT_PROJECT_ID
The Cluster API driver for Magnum makes use of the cluster topology feature of the Cluster API project. This allows it to delegate all of the work of building resources such as the OpenStackCluster, MachineDeployments, and everything else to the Cluster API itself, instead of the driver creating all of these resources.
In order to do this, the driver creates a ClusterClass resource called magnum-v{VERSION}, where {VERSION} is the current version of the driver, for the following reasons:
Any change requires creating a new ClusterClass, because it is an immutable resource.
Each version of the driver provides its own ClusterClass.
It's important to note that there is only one scenario where the spec.topology.class for a given Cluster will be modified: when a cluster upgrade is done. This is because there is an expectation by the user that a rolling restart operation will occur if a cluster upgrade is requested. No other action should be allowed to change the spec.topology.class of a Cluster.
For users, it's important to keep in mind that if they want to make use of a feature available in a newer ClusterClass, they can simply do an upgrade within Magnum to the same cluster template; this will force an update of the spec.topology.class, which might then naturally cause a full rollout to occur.
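As a sketch, you can inspect which ClusterClass versions exist and which class a given cluster is pinned to; the namespace and resource names follow the conventions described above:

```
$ kubectl -n magnum-system get clusterclasses
$ kubectl -n magnum-system get cluster <stack-id> -o jsonpath='{.spec.topology.class}'
```

Comparing the second command's output before and after a Magnum upgrade is a quick way to confirm that the class was actually rolled forward.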
In order to be able to test and develop the magnum-cluster-api project, you will need to have an existing Magnum deployment. You can use the following steps to test and develop the project.
./hack/stack.sh
pushd /tmp
source /opt/stack/openrc
export OS_DISTRO=ubuntu # you can change this to "flatcar" if you want to use Flatcar
for version in v1.24.16 v1.25.12 v1.26.7 v1.27.4; do \
  [[ "${OS_DISTRO}" == "ubuntu" ]] && IMAGE_NAME="ubuntu-2204-kube-${version}" || IMAGE_NAME="flatcar-kube-${version}"; \
  curl -LO https://object-storage.public.mtl1.vexxhost.net/swift/v1/a91f106f55e64246babde7402c21b87a/magnum-capi/${IMAGE_NAME}.qcow2; \
  openstack image create ${IMAGE_NAME} --disk-format=qcow2 --container-format=bare --property os_distro=${OS_DISTRO} --file=${IMAGE_NAME}.qcow2; \
  openstack coe cluster template create \
    --image $(openstack image show ${IMAGE_NAME} -c id -f value) \
    --external-network public \
    --dns-nameserver 8.8.8.8 \
    --master-lb-enabled \
    --master-flavor m1.medium \
    --flavor m1.medium \
    --network-driver calico \
    --docker-storage-driver overlay2 \
    --coe kubernetes \
    --label kube_tag=${version} \
    k8s-${version};
done;
popd
openstack coe cluster create \
  --cluster-template k8s-v1.25.12 \
  --master-count 3 \
  --node-count 2 \
  k8s-v1.25.12
Once the cluster is in the CREATE_COMPLETE state, you can interact with it:
state, you can interact with it:eval $(openstack coe cluster config k8s-v1.25.12)
+
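Once the configuration has been sourced, a quick sanity check might look like the following sketch, assuming kubectl is installed locally:

```
$ kubectl get nodes
$ kubectl get pods -A
```

All control plane and worker nodes should report Ready, and the system pods should settle into Running before you start deploying workloads.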
In an Atmosphere environment, the Magnum Cluster API driver is embedded in the Magnum conductor container. Therefore, to upgrade the driver in your Atmosphere environment, developers have to build a new Magnum container image with the desired revision. The magnum-cluster-api repository has a GitHub workflow to build and push Magnum container images which include the latest driver code, so developers can trigger this workflow to get new Magnum images with the latest code.
Once a new release is published, the image build workflow is triggered automatically and new container images are published to quay.io/vexxhost/magnum-cluster-api. There is no need to rebuild the images; you only need to update the image reference in the Atmosphere deployment code. Run the following command to update all Magnum image tags in the Atmosphere project:

earthly +pin-images
This updates the image tags in the roles/defaults/vars/main.yml file. Then you can run the ansible-playbook or poetry command to deploy or upgrade Atmosphere as normal.
If you want to apply patches that are merged into the main branch but not released yet, you can follow these instructions:
- First, build new Magnum container images by running the image workflow with Push images to Container Registry enabled at https://github.com/vexxhost/magnum-cluster-api/actions/workflows/image.yml.
- Once the workflow has finished successfully, new images will be pushed to quay.io/vexxhost/magnum-cluster-api. You can get the exact image digest from the workflow log. (Note: the workflow does not promote images, so you need the exact image digest.)
- Update the image tags in roles/defaults/vars/main.yml of the Atmosphere project: https://github.com/vexxhost/atmosphere/blob/c7c0de94112448522abb8973483da82eb5f937a8/roles/defaults/vars/main.yml#L101-L105
- Run the Atmosphere playbook again.
Alternatively, you can update the images on the fly using the kubectl CLI in your Atmosphere environment once you know the image reference, but this is not recommended:

kubectl set image sts/magnum-conductor magnum-conductor=${IMAGE_REF} magnum-conductor-init=${IMAGE_REF} -n openstack
kubectl set image deploy/magnum-api magnum-api=${IMAGE_REF} -n openstack
kubectl set image deploy/magnum-registry registry=${IMAGE_REF} -n openstack
The Cluster API driver for Magnum extends the Magnum configuration by adding the following driver-specific configuration options.

Options under this group are used for auto-scaling.
+image_repository
Image repository for the cluster auto-scaler.
Type: string
Default value: registry.k8s.io/autoscaling

v1_22_image
Image for the cluster auto-scaler for Kubernetes v1.22.
Type: string
Default value: $image_repository/cluster-autoscaler:v1.22.3

v1_23_image
Image for the cluster auto-scaler for Kubernetes v1.23.
Type: string
Default value: $image_repository/cluster-autoscaler:v1.23.1

v1_24_image
Image for the cluster auto-scaler for Kubernetes v1.24.
Type: string
Default value: $image_repository/cluster-autoscaler:v1.24.2

v1_25_image
Image for the cluster auto-scaler for Kubernetes v1.25.
Type: string
Default value: $image_repository/cluster-autoscaler:v1.25.2

v1_26_image
Image for the cluster auto-scaler for Kubernetes v1.26.
Type: string
Default value: $image_repository/cluster-autoscaler:v1.26.3

v1_27_image
Image for the cluster auto-scaler for Kubernetes v1.27.
Type: string
Default value: $image_repository/cluster-autoscaler:v1.27.2
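Putting the options above together, a magnum.conf fragment might look like the following sketch. The group name auto_scaling and the registry host are assumptions for illustration; check your deployment's actual section name before using it:

```ini
# Hypothetical driver-specific overrides in magnum.conf (section name assumed)
[auto_scaling]
image_repository = registry.example.com/autoscaling
v1_27_image = $image_repository/cluster-autoscaler:v1.27.2
```

Pointing image_repository at a local mirror is the usual reason to override these options in restricted networks.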
Options under this group are used for configuring the Manila client.

region_name
Region in the Identity service catalog to use for communication with the OpenStack service.
Type: string

endpoint_type
Type of endpoint in the Identity service catalog to use for communication with the OpenStack service.
Type: string
Default value: publicURL

api_version
Version of the Manila API to use in manilaclient.
Type: string
Default value: 3

ca_file
Optional CA cert file to use in SSL connections.
Type: string

cert_file
Optional PEM-formatted certificate chain file.
Type: string

key_file
Optional PEM-formatted file that contains the private key.
Type: string

insecure
If set, the server's certificate will not be verified.
Type: boolean
Default value: False
Options under this group are used for configuring OpenStack authentication for CAPO.

endpoint_type
Type of endpoint in the Identity service catalog to use for communication with the OpenStack service.
Type: string
Default value: publicURL

ca_file
Optional CA cert file to use in SSL connections.
Type: string

insecure
If set, the server's certificate will not be verified.
Type: boolean
Default value: False
You can use a few different methods to create a Kubernetes cluster with the Cluster API driver for Magnum; we cover several of them in this section.
Notes about deployment speed

The Cluster API driver for Magnum is designed to be fast. It is capable of deploying a Kubernetes cluster in under 5 minutes. However, there are several factors that can slow down the deployment process:

Operating system image size
The average size of the operating system image is around 4 GB. The image needs to be downloaded to each node before deploying the cluster, and the download speed depends on the network connection. The compute service caches images locally, so the initial cluster deployment is slower than subsequent deployments.

Network connectivity
When the cluster comes up, it needs to pull all the container images from the container registry. By default, it will pull all the images from the upstream registries. If you have a slow network connection, you can use a local registry to speed up the deployment process; see the Labels section for how to point your cluster at it.

Atmosphere deploys a local registry by default and includes several speed optimizations that bring the deployment time down to 5 minutes.
You can create clusters using several different methods, which all end up using the Magnum API. You can use the OpenStack CLI, the OpenStack Horizon dashboard, Terraform, Ansible, or the Magnum API directly.
The OpenStack CLI is the easiest way to create a Kubernetes cluster directly from your terminal. You can use the openstack coe cluster create command to create a Kubernetes cluster with the Cluster API driver for Magnum.

Before you get started, you'll have to make sure that the cluster templates you want to use are available in your environment. You can create them using the OpenStack CLI:
export OS_DISTRO=ubuntu # you can change this to "flatcar" if you want to use Flatcar
for version in v1.24.16 v1.25.12 v1.26.7 v1.27.4; do \
  [[ "${OS_DISTRO}" == "ubuntu" ]] && IMAGE_NAME="ubuntu-2204-kube-${version}" || IMAGE_NAME="flatcar-kube-${version}"; \
  curl -LO https://object-storage.public.mtl1.vexxhost.net/swift/v1/a91f106f55e64246babde7402c21b87a/magnum-capi/${IMAGE_NAME}.qcow2; \
  openstack image create ${IMAGE_NAME} --disk-format=qcow2 --container-format=bare --property os_distro=${OS_DISTRO} --file=${IMAGE_NAME}.qcow2; \
  openstack coe cluster template create \
    --image $(openstack image show ${IMAGE_NAME} -c id -f value) \
    --external-network public \
    --dns-nameserver 8.8.8.8 \
    --master-lb-enabled \
    --master-flavor m1.medium \
    --flavor m1.medium \
    --network-driver calico \
    --docker-storage-driver overlay2 \
    --coe kubernetes \
    --label kube_tag=${version} \
    k8s-${version};
done;
Once you've got a cluster template, you can create a cluster using the OpenStack CLI:

$ openstack coe cluster create --cluster-template <cluster-template-name> <cluster-name>
You'll be able to view the status of the deployment using the OpenStack CLI:

$ openstack coe cluster show <cluster-name>
The OpenStack Horizon dashboard is the easiest way to create a Kubernetes cluster using a simple web interface. To get started, you can review the list of current cluster templates in your environment by navigating in the left sidebar to Project > Container Infra > Cluster Templates.
In order to launch a new cluster, you will need to navigate to Project > Container Infra > Clusters and click on the Launch Cluster button.
There is a set of required fields that you will need to fill out in order to launch a cluster. The first of these relate to its basic configuration; the required fields are:
+ The name of the cluster that will be created.
Cluster Template
+ The cluster template that will be used to create the cluster.
Keypair
+ The SSH key pair that will be used to access the cluster.
In this example, we're going to create a cluster with the name of
+test-cluster
, running Kuberentes 1.27.3 so using the k8s-v1.27.3
+cluster template, and using the admin_key
SSH key pair.
The next step is deciding on the size of the cluster and selecting if auto +scaling will be enabled for the cluster. The required fields are:
+Number of Master Nodes
+ The number of master nodes that will be created in the cluster.
Flavor of Master Nodes
+ The flavor of the master nodes that will be created in the cluster.
Number of Worker Nodes
+ The number of worker nodes that will be created in the cluster.
Flavor of Worker Nodes
+ The flavor of the worker nodes that will be created in the cluster.
In addition, if you want to enable auto scaling, you will need to provide the +following information:
+Auto-scale Worker Nodes
+ Whether or not to enable auto scaling for the worker nodes.
Minimum Number of Worker Nodes + The minimum number of worker nodes that will be created in the cluster, + the auto scaler will not scale below this number even if the cluster is + under utilized.
+Maximum Number of Worker Nodes + The maximum number of worker nodes that will be created in the cluster, + the auto scaler will not scale above this number even if the cluster is + over utilized.
+In this example, we're going to create a cluster with 3 master node and 4
+worker nodes, using the m1.medium
flavor for both the master and worker
+nodes, and we will enable auto scaling with a minimum of 2 worker nodes and
+a maximum of 10 worker nodes.
The next step is managing the network configuration of the cluster. The +required fields are:
+Enable Load Balancer for Master Nodes + This is required to be enabled for the Cluster API driver for Magnum + to work properly.
+Create New Network + This will determine if a new network will be created for the cluster or if + an existing network will be used. It's useful to use an existing network + if you want to attach the cluster to an existing network with other + resources.
+Cluster API
+ This setting controls if the API will get a floating IP address assigned
+ to it. You can set this to Accessible on private network only if you
+ are using an existing network and don't want to expose the API to the
+ public internet. Otherwise, you should set it to Accessible on the public
+ internet to allow access to the API from the external network.
In this example, we're going to make sure we have the load balancer enabled +for the master nodes, we're going to create a new network for the cluster, +and we're going to make sure that the API is accessible on the public internet.
+ +For the next step, we need to decide if we want to enable auto-healing for +the cluster which automatically detects nodes that are unhealthy and +replaces them with new nodes. The required fields are:
+In this example, we're going to enable auto-healing for the cluster since it +will help keep the cluster healthy.
+ +Finally, you can override labels for the cluster in the Advanced section, +we do not recommend changing these unless you know what you're doing. Once +you're ready, you can click on the Submit button to create the cluster. +The page will show your cluster being created.
+ +If you click on the cluster, you'll be able to track the progress of the +cluster creation, more specifically in the Status Reason field, seen below:
+ +Once the cluster is created, you'll be able to see the cluster details, +including the health status as well:
+ +At this point, you should have a ready cluster and you can proceed to the +Accessing section to learn how to access the cluster.
+In order to access the Kubernetes cluster, you will have to request for a
+KUBECONFIG
file generated by the Cluster API driver for Magnum. You can do
+this using a few several ways, we cover a few of them in this section.
You can use the OpenStack CLI to request a KUBECONFIG
file for a
+Kubernetes cluster. You can do this using the openstack coe cluster config
+command:
$ openstack coe cluster config <cluster-name>
+
The Cluster API driver for Magnum supports upgrading Kubernetes clusters to any +minor release in the same series or one major release ahead. The upgrade +process is performed in-place, meaning that the existing cluster is upgraded to +the new version without creating a new cluster in a rolling fashion.
+Note
+You must have an operating system image for the new Kubernetes version +available in Glance before upgrading the cluster. See the Images +documentation for more information.
+In order to upgrade a cluster, you must have a cluster template pointing at the
+image for the new Kubernetes version and the kube_tag
label must be updated
+to point at the new Kubernetes version.
Once you have this cluster template, you can trigger an upgrade by using the +OpenStack CLI:
+$ openstack coe cluster upgrade <cluster-name> <cluster-template-name>
+
Roles can be used to show the purpose of a node group, and multiple node groups can be given the same role if they share a common purpose:
$ openstack coe nodegroup create kube test-ng --node-count 1 --role test
$ openstack coe nodegroup list kube --role test
+--------------------------------------+---------+-----------+--------------------------------------+------------+-----------------+------+
| uuid                                 | name    | flavor_id | image_id                             | node_count | status          | role |
+--------------------------------------+---------+-----------+--------------------------------------+------------+-----------------+------+
| c8acbb1f-2fa3-4d1f-b583-9a2df1e269d7 | test-ng | m1.medium | ef107f29-8f26-474e-8f5f-80d269c7d2cd | 1          | CREATE_COMPLETE | test |
+--------------------------------------+---------+-----------+--------------------------------------+------------+-----------------+------+
$ kubectl get nodes
NAME                                          STATUS   ROLES                  AGE     VERSION
kube-7kjbp-control-plane-vxtrz-nhjr2          Ready    control-plane,master   3d      v1.25.3
kube-7kjbp-default-worker-infra-hnk8x-v6cp9   Ready    worker                 2d19h   v1.25.3
kube-7kjbp-test-ng-infra-b8yux-3v6fd          Ready    test                   5m      v1.25.3
nodeSelector:
  # node-role.kubernetes.io/ROLE_NAME: ""
  node-role.kubernetes.io/test: ""
The node.cluster.x-k8s.io/nodegroup label is also available for selecting a specific node group:

nodeSelector:
  # node.cluster.x-k8s.io/nodegroup: "NODEGROUP_NAME"
  node.cluster.x-k8s.io/nodegroup: "test-ng"
Here is a helpful blog post:
Kubernetes Cluster Autoscaler with Magnum CAPI Driver, written by Satish Patel.
+The images used by the Cluster API driver for Magnum are built using the
+kubernetes-sigs/image-builder
+project. This project provides a comprehensive and flexible framework for
+constructing Kubernetes-specific images.
In order to simplify the process of building images, the Cluster API driver for
+Magnum provides a small Python utility which wraps the image-builder
project.
To build the images, run the following command:
+$ pip install magnum-cluster-api
+$ magnum-cluster-api-image-builder --version v1.26.2
+
In the example above, this command will build the images for Kubernetes version
+v1.26.2
. The --version
flag is optional and defaults to v1.26.2
.
Magnum cluster template labels are key-value pairs that are used to provide +metadata and configuration information for Kubernetes clusters created through +Magnum.
+They can be used to define characteristics such as the operating system, +networking settings, container runtime, Kubernetes version, or any other custom +attributes relevant to the cluster deployment.
+If you require your cluster to have the root filesystem on a volume, you can +specify the volume size and type using the following labels:
+boot_volume_size
The size in gigabytes of the boot volume. If you set this value, it will +enable boot from volume. +Default value: Unset
+boot_volume_type
The volume type of the boot volume. +Default value: Default volume
+etcd_volume_size
The size in gigabytes of the etcd
volume. If you set this value, it will
+create a volume for etcd
specifically and mount it on the system.
+Default value: Unset
etcd_volume_type
The volume type of the etcd
volume, this can be useful if you want to use an
+encrypted or high performance volume type.
+Default value: None
Note
+Volume labels cannot be changed once the cluster is deployed. However, you +generally do not need a large boot volume since the root filesystem is +only used for the operating system and container runtime.
+The Cluster API driver for Magnum relies on specific container images for the +deployment process.
+container_infra_prefix
The prefix of the container images to use for the cluster. +Default value: None, defaults to upstream images.
+The way containers talk to each other and the outside world is defined by the networking setup. +This setup decides how information is shared among containers inside and outside the cluster, and +is often accomplished by deploying a driver on each node.
+calico_ipv4pool
IPv4 network in CIDR format. +It refers to the IPv4 address pool used by the Calico network plugin for allocating IP addresses to pods in Kubernetes clusters. +Default value: 10.100.0.0/16.
+service_cluster_ip_range
IPv4 network in CIDR format. +Defines the range of IP addresses allocated for Kubernetes services within clusters managed by Magnum. +These IP addresses are used to expose and connect services. +Default value: 10.254.0.0/16
+audit_log_enabled
Enable audit logs for the cluster. The audit logs are stored in the
+ /var/log/kubernetes/audit/kube-apiserver-audit.log
file on the control
+ plane hosts.
Default value: false
audit_log_maxage
The number of days to retain audit logs. This is only effective if the
+ audit_log_enabled
label is set to true
.
Default value: 30
audit_log_maxbackup
The maximum number of audit log files to retain. This is only effective if
+ the audit_log_enabled
label is set to true
.
Default value: 10
audit_log_maxsize
The maximum size in megabytes of the audit log file before it gets rotated.
+ This is only effective if the audit_log_enabled
label is set to true
.
Default value: 100
cloud_provider_tag
The tag to use for the OpenStack cloud controller provider when bootstrapping + the cluster.
+Default value: Automatically detected based on kube_tag
label.
octavia_provider
The Octavia provider to configure for the load balancers created by the cluster.
+Default value: amphora
octavia_lb_algorithm
The Octavia load balancer algorithm to configure for the load balancers
+ created by the cluster (options are ROUND_ROBIN
, LEAST_CONNECTIONS
,
+ SOURCE_IP
& SOURCE_IP_PORT
).
It's important to note that the OVN provider supports only the SOURCE_IP_PORT
+ driver as part of it's limitations.
Default value (amphora
provider): ROUND_ROBIN
+ Default value (ovn
provider): SOURCE_IP_PORT
calico_tag
The version of the Calico container image to use when bootstrapping the + cluster.
+Default value: v3.24.2
cinder_csi_plugin_tag
The version of the Cinder CSI container image to use when bootstrapping the + cluster.
+Default value: Automatically detected based on kube_tag
label.
manila_csi_plugin_tag
The version of the Manila CSI container image to use when bootstrapping the + cluster.
+Default value: Automatically detected based on kube_tag
label.
manila_csi_share_network_id
Manila share network ID.
+Default value: None
api_server_cert_sans
Specify the additional Subject Alternative Names (SANs) for the Kubernetes API Server, + separated by commas.
+api_server_tls_cipher_suites
Specify the list of TLS cipher suites to use for the Kubernetes API server, + separated by commas. If not specified, the default list of cipher suites + will be used using the Mozilla SSL Configuration Generator.
+Default value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
auto_healing_enabled
Enable auto-healing for the cluster. This will automatically replace failed + nodes in the cluster with new nodes (after 5 minutes of not being ready) + and stops further remediation if more than 40% of the cluster is unhealthy.
+Default value: true
auto_scaling_enabled
Enable auto-scaling for the cluster. This will automatically scale the + cluster up and down based on the number of pods running in the cluster.
+Default value: false
kubelet_tls_cipher_suites
Specify the list of TLS cipher suites to use in communication between the + kubelet and applications, separated by commas. If not specified, the + default list of cipher suites will be used.
+Default value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
kube_tag
The version of Kubernetes to use.
+Default value: v1.25.3
master_lb_floating_ip_enabled
Attach a floating IP to the load balancer that fronts the Kubernetes API
+ servers. In order to disable this, you must be running the
+ magnum-cluster-api-proxy
service on all your Neutron network nodes.
Default value: true
oidc_issuer_url
The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it + will be used to verify the OIDC JSON Web Token (JWT).
+Default value: ``
+oidc_client_id
The client ID for the OpenID Connect client, must be set if oidc_issuer_url
+ is set.
Default value: ``
+oidc_username_claim
The OpenID claim to use as the user name.
+Default value: sub
oidc_username_prefix
If provided, all usernames will be prefixed with this value. If not provided, + username claims other than 'email' are prefixed by the issuer URL to avoid + clashes. To skip any prefixing, use the default value.
+Default value: -
oidc_groups_claim
If provided, the name of a custom OpenID Connect claim for specifying user + groups. The claim value is expected to be a string or array of strings.
+Default value: ``
+oidc_groups_prefix
If provided, all groups will be prefixed with this value to prevent conflicts + with other authentication strategies.
+Default value: ``
+fixed_subnet_cidr
The CIDR of the fixed subnet to use for the cluster.
+Default value: 10.0.0.0/24
availability_zone +dns_cluster_domain +calico_ipv4pool
+ + + + + + +