diff --git a/docs/book/src/SUMMARY.md b/docs/book/src/SUMMARY.md index d959fde4e9e4..95b0f1c32552 100644 --- a/docs/book/src/SUMMARY.md +++ b/docs/book/src/SUMMARY.md @@ -47,7 +47,6 @@ - [Diagnostics](./tasks/diagnostics.md) - [Security Guidelines](./security/index.md) - [Pod Security Standards](./security/pod-security-standards.md) - - [Infrastructure Provider Security Guidance](./security/infrastructure-provider-security-guidance.md) - [clusterctl CLI](./clusterctl/overview.md) - [clusterctl Commands](clusterctl/commands/commands.md) - [init](clusterctl/commands/init.md) @@ -64,7 +63,6 @@ - [alpha topology plan](clusterctl/commands/alpha-topology-plan.md) - [additional commands](clusterctl/commands/additional-commands.md) - [clusterctl Configuration](clusterctl/configuration.md) - - [clusterctl Provider Contract](clusterctl/provider-contract.md) - [clusterctl for Developers](clusterctl/developers.md) - [clusterctl Extensions with Plugins](clusterctl/plugins.md) - [Developer Guide](./developer/getting-started.md) @@ -87,7 +85,6 @@ - [Developing E2E tests](developer/core/e2e.md) - [Tuning controllers](./developer/core/tuning.md) - [Support multiple instances](./developer/core/support-multiple-instances.md) - - [Multi-tenancy](./developer/core/multi-tenancy.md) - [Developing providers](./developer/providers/overview.md) - [Getting started](developer/providers/getting-started/overview.md) - [Naming](developer/providers/getting-started/naming.md) @@ -97,12 +94,15 @@ - [Controllers and Reconciliation](developer/providers/getting-started/controllers-and-reconciliation.md) - [Configure the provider manifest](developer/providers/getting-started/configure-the-deployment.md) - [Building, Running, Testing](developer/providers/getting-started/building-running-and-testing.md) - - [Provider contracts](./developer/providers/contracts.md) - - [Cluster Infrastructure](./developer/providers/cluster-infrastructure.md) - - [Control Plane](./developer/providers/control-plane.md) - - 
[Machine Infrastructure](./developer/providers/machine-infrastructure.md) - - [Bootstrap](./developer/providers/bootstrap.md) - - [Version migration](./developer/providers/version-migration.md) + - [Provider contracts](developer/providers/contracts/overview.md) + - [InfraCluster](./developer/providers/contracts/infra-cluster.md) + - [InfraMachine](developer/providers/contracts/infra-machine.md) + - [BootstrapConfig](developer/providers/contracts/bootstrap-config.md) + - [ControlPlane](developer/providers/contracts/control-plane.md) + - [clusterctl](developer/providers/contracts/clusterctl.md) + - [Best practices](./developer/providers/best-practices.md) + - [Security guidelines](./developer/providers/security-guidelines.md) + - [Version migration](developer/providers/migrations/overview.md) - [v1.6 to v1.7](./developer/providers/migrations/v1.6-to-v1.7.md) - [v1.7 to v1.8](./developer/providers/migrations/v1.7-to-v1.8.md) - [v1.8 to v1.9](./developer/providers/migrations/v1.8-to-v1.9.md) diff --git a/docs/book/src/clusterctl/commands/move.md b/docs/book/src/clusterctl/commands/move.md index f8dfff4aeab9..c27dcce71b9e 100644 --- a/docs/book/src/clusterctl/commands/move.md +++ b/docs/book/src/clusterctl/commands/move.md @@ -24,7 +24,7 @@ clusterctl move --to-kubeconfig="path-to-target-kubeconfig.yaml" To move the Cluster API objects existing in the current namespace of the source management cluster; in case if you want to move the Cluster API objects defined in another namespace, you can use the `--namespace` flag. -The discovery mechanism for determining the objects to be moved is in the [provider contract](../provider-contract.md#move) +The discovery mechanism for determining the objects to be moved is in the [provider contract](../../developer/providers/contracts/clusterctl.md#move) @@ -74,7 +74,7 @@ branch to include it in the next patch release.

What about closed source providers?

Closed source provider can not be added to the pre-defined list of provider shipped with `clusterctl`, however, -those providers could be used with `clusterctl` by changing the [clusterctl configuration](configuration.md). +those providers could be used with `clusterctl` by changing the [clusterctl configuration](../../../clusterctl/configuration.md). @@ -87,7 +87,7 @@ The need to add a prefix for providers not in the kubernetes-sigs org applies to to the existing pre-defined providers, but we reserve the right to reconsider this in the future. Please note that the need to add a prefix for providers not in the kubernetes-sigs org does not apply to providers added by -changing the [clusterctl configuration](configuration.md). +changing the [clusterctl configuration](../../../clusterctl/configuration.md). @@ -148,7 +148,7 @@ for the core provider: - metadata.yaml ``` -- Use the following [`clusterctl` configuration](configuration.md): +- Use the following [`clusterctl` configuration](../../../clusterctl/configuration.md): ```yaml providers: @@ -254,7 +254,7 @@ While defining the Deployment Spec, the container that executes the controller/r For controllers only, the manager MUST support a `--namespace` flag for specifying the namespace where the controller will look for objects to reconcile; however, clusterctl will always install providers watching for all namespaces -(`--namespace=""`); for more details see [support for multiple instances](../developer/core/support-multiple-instances.md) +(`--namespace=""`); for more details see [support for multiple instances](../../core/support-multiple-instances.md) for more context. While defining Pods for Deployments, canonical names should be used for images. 
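To illustrate the two requirements above (watching all namespaces, canonical image names), the relevant portion of a provider's Deployment might look like the following sketch; the provider name, registry, and image are hypothetical:

```yaml
# Hypothetical excerpt of a provider's controller Deployment;
# only the fields relevant to the rules above are shown.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capi-foo-controller-manager
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - name: manager
        # Canonical (fully qualified) image name, as recommended above.
        image: example.com/cluster-api-provider-foo/manager:v0.1.0
        args:
        # clusterctl installs providers watching all namespaces.
        - "--namespace="
```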
diff --git a/docs/book/src/developer/providers/control-plane.md b/docs/book/src/developer/providers/contracts/control-plane.md similarity index 100% rename from docs/book/src/developer/providers/control-plane.md rename to docs/book/src/developer/providers/contracts/control-plane.md diff --git a/docs/book/src/developer/providers/contracts/infra-cluster.md b/docs/book/src/developer/providers/contracts/infra-cluster.md new file mode 100644 index 000000000000..0908e1e33e87 --- /dev/null +++ b/docs/book/src/developer/providers/contracts/infra-cluster.md @@ -0,0 +1,527 @@ +# Contract rules for InfraCluster + +Infrastructure providers SHOULD implement an InfraCluster resource. + +The goal of an InfraCluster resource is to supply whatever prerequisites (in terms of infrastructure) are necessary for running machines. +Examples might include networking, load balancers, firewall rules, and so on. + +The InfraCluster resource will be referenced by one of the Cluster API core resources, Cluster. + +The [Cluster's controller](../../core/controllers/cluster.md) will be responsible for coordinating operations of the InfraCluster, +and the interaction between the Cluster's controller and the InfraCluster resource is based on the contract +rules defined in this page. + +Once contract rules are satisfied by an InfraCluster implementation, other implementation details +can be addressed according to specific needs (Cluster API is not prescriptive). + +Nevertheless, it is always recommended to take a look at Cluster API controllers, +in-tree providers, other providers and use them as a reference implementation (unless custom solutions are required +in order to address very specific needs). + +In order to facilitate the initial design for each InfraCluster resource, a few [implementation best practices] and [infrastructure Provider Security Guidance] +are explicitly called out in dedicated pages.
+ + + +## Rules (contract version v1beta1) + +| Rule | Mandatory | Note | +|----------------------------------------------------------------------|-----------|---------------------------------------------------------------------| +| [All resources: scope] | Yes | | +| [All resources: `TypeMeta` and `ObjectMeta`field] | Yes | | +| [All resources: `APIVersion` field value] | Yes | | +| [InfraCluster, InfraClusterList resource definition] | Yes | | +| [InfraCluster: control plane endpoint] | No | Mandatory if control plane endpoint is not provided by other means. | +| [InfraCluster: failure domains] | No | | +| [InfraCluster: initialization completed] | Yes | | +| [InfraCluster: conditions] | No | | +| [InfraCluster: terminal failures] | No | | +| [InfraClusterTemplate, InfraClusterTemplateList resource definition] | No | Mandatory for ClusterClasses support | +| [Externally managed infrastructure] | No | | +| [Multi tenancy] | No | Mandatory for clusterctl CLI support | +| [Clusterctl support] | No | Mandatory for clusterctl CLI support | + +Note: +- `All resources` refers to all the provider's resources that "core" Cluster API interacts with; + in the context of this page: `InfraCluster`, `InfraClusterTemplate` and the corresponding list types + +### All resources: scope + +All resources MUST be namespace-scoped. + +### All resources: `TypeMeta` and `ObjectMeta` field + +All resources MUST have the standard Kubernetes `TypeMeta` and `ObjectMeta` fields. + +### All resources: `APIVersion` field value + +In Kubernetes, `APIVersion` is a combination of API group and version. +Special considerations MUST apply to both the API group and the version for all the resources Cluster API interacts with. + +#### All resources: API group + +The domain for Cluster API resources is `cluster.x-k8s.io`, and infrastructure providers under the Kubernetes SIGS org +generally use `infrastructure.cluster.x-k8s.io` as the API group.
+ +If your provider uses a different API group, you MUST grant full read/write RBAC permissions for resources in your API group +to the Cluster API core controllers. The canonical way to do so is via a `ClusterRole` resource with the [aggregation label] +`cluster.x-k8s.io/aggregate-to-manager: "true"`. + +The following is an example ClusterRole for a `FooCluster` resource in the `infrastructure.foo.com` API group: + +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: capi-foo-clusters + labels: + cluster.x-k8s.io/aggregate-to-manager: "true" +rules: +- apiGroups: + - infrastructure.foo.com + resources: + - fooclusters + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - infrastructure.foo.com + resources: + - fooclustertemplates + verbs: + - get + - list + - patch + - update + - watch +``` + +Note: The write permissions allow the Cluster controller to set owner references and labels on the InfraCluster resources; +write permissions are not used for general mutations of InfraCluster resources, unless specifically required (e.g. when +using ClusterClass and managed topologies). + +#### All resources: version + +The resource Version defines the stability of the API and its backward compatibility guarantees. +Examples include `v1alpha1`, `v1beta1`, `v1`, etc. and are governed by the [Kubernetes API Deprecation Policy]. + +Your provider SHOULD abide by the same policies. + +Note: The version of your provider does not need to be in sync with the version of core Cluster API resources. +Instead, prefer choosing a version that matches the stability of the provider API and its backward compatibility guarantees. + +Additionally: + +Providers MUST set the `cluster.x-k8s.io/<contract version>` label on the InfraCluster Custom Resource Definitions. + +The label is a map from a Cluster API contract version to your Custom Resource Definition versions. +The value is an underscore-delimited (_) list of versions.
Each value MUST point to an available version in your CRD Spec. + +The label allows Cluster API controllers to perform automatic conversions for object references; the controllers will pick +the last available version in the list if multiple versions are found. + +To apply the label to CRDs, it's possible to use `commonLabels` in your `kustomization.yaml` file, usually in `config/crd`: + +```yaml +commonLabels: + cluster.x-k8s.io/v1alpha2: v1alpha1 + cluster.x-k8s.io/v1alpha3: v1alpha2 + cluster.x-k8s.io/v1beta1: v1beta1 +``` + +An example of this is in the [Kubeadm Bootstrap provider](https://github.com/kubernetes-sigs/cluster-api/blob/release-1.1/controlplane/kubeadm/config/crd/kustomization.yaml). + +### InfraCluster, InfraClusterList resource definition + +You MUST define an InfraCluster resource. +The InfraCluster resource name must have the format produced by `sigs.k8s.io/cluster-api/util/contract.CalculateCRDName(Group, Kind)`. + +Note: Cluster API uses this naming convention to avoid an expensive CRD lookup operation when looking for labels from +the CRD definition of the InfraCluster resource. + +It is a generally applied convention to use names in the format `${env}Cluster`, where `${env}` is a (possibly short) name +for the environment in question. For example, `GCPCluster` is an implementation for the Google Cloud Platform, and `AWSCluster` +is one for Amazon Web Services. + +```go +// +kubebuilder:object:root=true +// +kubebuilder:resource:path=fooclusters,shortName=foocl,scope=Namespaced,categories=cluster-api +// +kubebuilder:storageversion +// +kubebuilder:subresource:status + +// FooCluster is the Schema for fooclusters. +type FooCluster struct { + metav1.TypeMeta `json:",inline"` + metav1.ObjectMeta `json:"metadata,omitempty"` + Spec FooClusterSpec `json:"spec,omitempty"` + Status FooClusterStatus `json:"status,omitempty"` +} + +type FooClusterSpec struct { + // See other rules for more details about mandatory/optional fields in InfraCluster spec.
+ // Other fields SHOULD be added based on the needs of your provider. +} + +type FooClusterStatus struct { + // See other rules for more details about mandatory/optional fields in InfraCluster status. + // Other fields SHOULD be added based on the needs of your provider. +} +``` + +For each InfraCluster resource, you MUST also add the corresponding list resource. +The list resource MUST be named as `<InfraCluster>List`. + +```go +// +kubebuilder:object:root=true + +// FooClusterList contains a list of fooclusters. +type FooClusterList struct { + metav1.TypeMeta `json:",inline"` + metav1.ListMeta `json:"metadata,omitempty"` + Items []FooCluster `json:"items"` +} +``` + +### InfraCluster: control plane endpoint + +Each Cluster needs a control plane endpoint to sit in front of control plane machines. +The control plane endpoint can be provided in three ways in Cluster API: by the users, by the control plane provider, or +by the infrastructure provider. + +In case you are developing an infrastructure provider which is responsible for providing a control plane endpoint for +each Cluster, the host and port of the generated control plane endpoint MUST surface on `spec.controlPlaneEndpoint` +in the InfraCluster resource. + +```go +type FooClusterSpec struct { + // ControlPlaneEndpoint represents the endpoint used to communicate with the control plane. + // +optional + ControlPlaneEndpoint APIEndpoint `json:"controlPlaneEndpoint"` + + // See other rules for more details about mandatory/optional fields in InfraCluster spec. + // Other fields SHOULD be added based on the needs of your provider. +} + +// APIEndpoint represents a reachable Kubernetes API endpoint. +type APIEndpoint struct { + // The hostname on which the API server is serving. + Host string `json:"host"` + + // The port on which the API server is serving.
+ Port int32 `json:"port"` +} +``` + +Once `spec.controlPlaneEndpoint` is set on the InfraCluster resource and the [InfraCluster: initialization completed] rule is satisfied, +the Cluster controller will bubble up this info into Cluster's `spec.controlPlaneEndpoint`. + +If instead you are developing an infrastructure provider which is NOT responsible for providing a control plane endpoint, +the implementer should exit reconciliation until it sees Cluster's `spec.controlPlaneEndpoint` populated. + +### InfraCluster: failure domains + +In case you are developing an infrastructure provider which has a notion of failure domains in which machines should be +placed, the list of available failure domains MUST surface on `status.failureDomains` in the InfraCluster resource. + +```go +type FooClusterStatus struct { + // FailureDomains is a list of failure domain objects synced from the infrastructure provider. + FailureDomains clusterv1.FailureDomains `json:"failureDomains,omitempty"` + + // See other rules for more details about mandatory/optional fields in InfraCluster status. + // Other fields SHOULD be added based on the needs of your provider. +} +``` + +`clusterv1.FailureDomains` is a map, defined as `map[string]FailureDomainSpec`. A unique key must be used for each `FailureDomainSpec`. +`FailureDomainSpec` is defined as: +- `controlPlane bool`: indicates if the failure domain is appropriate for running control plane instances. +- `attributes map[string]string`: arbitrary attributes for users to apply to a failure domain. + +Once `status.failureDomains` is set on the InfraCluster resource and the [InfraCluster: initialization completed] rule is satisfied, +the Cluster controller will bubble up this info into Cluster's `status.failureDomains`. + +### InfraCluster: initialization completed + +Each InfraCluster MUST report when the Cluster's infrastructure is fully provisioned (initialization) by setting +`status.ready` in the InfraCluster resource.
+ +```go +type FooClusterStatus struct { + // Ready denotes that the foo cluster infrastructure is fully provisioned. + // +optional + Ready bool `json:"ready"` + + // See other rules for more details about mandatory/optional fields in InfraCluster status. + // Other fields SHOULD be added based on the needs of your provider. +} +``` + +Once `status.ready` is set, the Cluster "core" controller will bubble up this info into Cluster's `status.infrastructureReady`; +if defined, InfraCluster's `spec.controlPlaneEndpoint` and `status.failureDomains` will also be surfaced on Cluster's +corresponding fields at the same time. + + + +### InfraCluster: conditions + +According to [Kubernetes API Conventions], Conditions provide a standard mechanism for higher-level +status reporting from a controller. + +Provider implementers SHOULD implement `status.conditions` for their InfraCluster resource. +In case conditions are implemented, the Cluster API condition type MUST be used. + +If a condition with type `Ready` exists, such condition will be mirrored in Cluster's `InfrastructureReady` condition. + +Please note that the `Ready` condition is expected to surface the status of the InfraCluster during its own entire lifecycle, +including initial provisioning, the final deletion process, and the period in between these two moments. + +See [Cluster API condition proposal] for more context. + + + +### InfraCluster: terminal failures + +Each InfraCluster SHOULD report when the Cluster enters a state that cannot be recovered (terminal failure) by +setting `status.failureReason` and `status.failureMessage` in the InfraCluster resource. + +```go +type FooClusterStatus struct { + // FailureReason will be set in the event that there is a terminal problem reconciling the FooCluster + // and will contain a succinct value suitable for machine interpretation.
+ // + // This field should not be set for transitive errors that can be fixed automatically or with manual intervention, + // but instead indicate that something is fundamentally wrong with the FooCluster and that it cannot be recovered. + // +optional + FailureReason *capierrors.ClusterStatusError `json:"failureReason,omitempty"` + + // FailureMessage will be set in the event that there is a terminal problem reconciling the FooCluster + // and will contain a more verbose string suitable for logging and human consumption. + // + // This field should not be set for transitive errors that can be fixed automatically or with manual intervention, + // but instead indicate that something is fundamentally wrong with the FooCluster and that it cannot be recovered. + // +optional + FailureMessage *string `json:"failureMessage,omitempty"` + + // See other rules for more details about mandatory/optional fields in InfraCluster status. + // Other fields SHOULD be added based on the needs of your provider. +} +``` + +Once `status.failureReason` and `status.failureMessage` are set on the InfraCluster resource, the Cluster "core" controller +will bubble up that info into the corresponding fields in Cluster's `status`. + +Please note that once failureReason/failureMessage is set in Cluster's `status`, the only way to recover is to delete and +recreate the Cluster (it is a terminal failure). + + + +### InfraClusterTemplate, InfraClusterTemplateList resource definition + +For a given InfraCluster resource, you should also add a corresponding InfraClusterTemplate resource in order to use it in ClusterClasses. +The template resource MUST be named as `<InfraCluster>Template`. + +```go +// +kubebuilder:object:root=true +// +kubebuilder:resource:path=fooclustertemplates,scope=Namespaced,categories=cluster-api +// +kubebuilder:storageversion + +// FooClusterTemplate is the Schema for the fooclustertemplates API.
+type FooClusterTemplate struct { + metav1.TypeMeta `json:",inline"` + metav1.ObjectMeta `json:"metadata,omitempty"` + + Spec FooClusterTemplateSpec `json:"spec,omitempty"` +} + +type FooClusterTemplateSpec struct { + Template FooClusterTemplateResource `json:"template"` +} + +type FooClusterTemplateResource struct { + // Standard object's metadata. + // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + // +optional + ObjectMeta clusterv1.ObjectMeta `json:"metadata,omitempty"` + Spec FooClusterSpec `json:"spec"` +} +``` + +NOTE: in this example InfraClusterTemplate's `spec.template.spec` embeds `FooClusterSpec` from InfraCluster. This might not always be +the best choice, depending on whether/how InfraCluster's spec fields apply to many clusters vs only one. + +For each InfraClusterTemplate resource, you MUST also add the corresponding list resource. +The list resource MUST be named as `<InfraClusterTemplate>List`. + +```go +// +kubebuilder:object:root=true + +// FooClusterTemplateList contains a list of FooClusterTemplates. +type FooClusterTemplateList struct { + metav1.TypeMeta `json:",inline"` + metav1.ListMeta `json:"metadata,omitempty"` + Items []FooClusterTemplate `json:"items"` +} +``` + +### Externally managed infrastructure + +In some cases, users might be required to (or choose to) manage infrastructure out of band and run CAPI on top of already +existing infrastructure. + +In order to support this use case, the InfraCluster controller SHOULD skip reconciliation of InfraCluster resources with +the `cluster.x-k8s.io/managed-by: ""` label, and not update the resource or its status in any way.
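As an example, the opt-out could look like the following manifest; the `FooCluster` kind, API group, and names are illustrative:

```yaml
apiVersion: infrastructure.foo.com/v1beta1
kind: FooCluster
metadata:
  name: my-cluster
  namespace: default
  labels:
    # Tells the InfraCluster controller to skip reconciliation of this resource.
    cluster.x-k8s.io/managed-by: ""
```

With the label in place, the external management system is expected to fulfill the applicable contract rules on the controller's behalf.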
+ +Please note that when the cluster infrastructure is externally managed, it is the responsibility of the external management system +to abide by the following contract rules: +- [InfraCluster: control plane endpoint] +- [InfraCluster: failure domains] +- [InfraCluster: initialization completed] +- [InfraCluster: terminal failures] + +See the [externally managed infrastructure proposal] for more detail about this use case. + +### Multi tenancy + +Multi tenancy in Cluster API defines the capability of an infrastructure provider to manage different credentials, +each one of them corresponding to an infrastructure tenant. + +See [infrastructure Provider Security Guidance] for considerations about cloud provider credential management. + +Please also note that Cluster API does not support running multiple instances of the same provider, which someone might +assume to be an alternative solution for implementing multi tenancy; the same applies to the clusterctl CLI. + +See [Support running multiple instances of the same provider] for more context. + +However, if you want to make it possible for users to run multiple instances of your provider, your controllers SHOULD: + +- support the `--namespace` flag. +- support the `--watch-filter` flag. + +Please read the page linked above carefully to fully understand the implications and risks related to this option. + +### Clusterctl support + +The clusterctl command is designed to work with all the providers compliant with the rules defined in the [clusterctl provider contract]. + +## Typical InfraCluster reconciliation workflow + +A cluster infrastructure provider must respond to changes to its InfraCluster resources. This process is +typically called reconciliation. The provider must watch for new, updated, and deleted resources and respond +accordingly.
+ +As a reference you can look at the following workflow to understand how the typical reconciliation workflow +is implemented in InfraCluster controllers: + +![Cluster infrastructure provider activity diagram](../../../images/cluster-infra-provider.png) + +### Normal resource + +1. If the resource is externally managed, exit the reconciliation + 1. The `ResourceIsNotExternallyManaged` predicate can be used to prevent reconciling externally managed resources +1. If the resource does not have a `Cluster` owner, exit the reconciliation + 1. The Cluster API `Cluster` reconciler populates this based on the value in the `Cluster`'s `spec.infrastructureRef` + field. +1. Add the provider-specific finalizer, if needed +1. Reconcile provider-specific cluster infrastructure + 1. If any errors are encountered, exit the reconciliation +1. If the provider created a load balancer for the control plane, record its hostname or IP in `spec.controlPlaneEndpoint` +1. Set `status.ready` to `true` +1. Set `status.failureDomains` based on available provider failure domains (optional) +1. Patch the resource to persist changes + +### Deleted resource + +1. If the resource has a `Cluster` owner + 1. Perform deletion of provider-specific cluster infrastructure + 1. If any errors are encountered, exit the reconciliation +1. Remove the provider-specific finalizer from the resource +1. 
Patch the resource to persist changes + +[All resources: Scope]: #all-resources-scope +[All resources: `TypeMeta` and `ObjectMeta`field]: #all-resources-typemeta-and-objectmeta-field +[All resources: `APIVersion` field value]: #all-resources-apiversion-field-value +[aggregation label]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles +[Kubernetes API Deprecation Policy]: https://kubernetes.io/docs/reference/using-api/deprecation-policy/ +[InfraCluster, InfraClusterList resource definition]: #infracluster-infraclusterlist-resource-definition +[InfraCluster: control plane endpoint]: #infracluster-control-plane-endpoint +[InfraCluster: failure domains]: #infracluster-failure-domains +[InfraCluster: initialization completed]: #infracluster-initialization-completed +[Improving status in CAPI resources]: https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20240916-improve-status-in-CAPI-resources.md +[InfraCluster: conditions]: #infracluster-conditions +[Kubernetes API Conventions]: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties +[Cluster API condition proposal]: https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20200506-conditions.md +[InfraCluster: terminal failures]: #infracluster-terminal-failures +[InfraClusterTemplate, InfraClusterTemplateList resource definition]: #infraclustertemplate-infraclustertemplatelist-resource-definition +[Externally managed infrastructure]: #externally-managed-infrastructure +[externally managed infrastructure proposal]: https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20210203-externally-managed-cluster-infrastructure.md +[Multi tenancy]: #multi-tenancy +[Support running multiple instances of the same provider]: ../../core/support-multiple-instances.md +[Clusterctl support]: #clusterctl-support +[clusterctl provider contract]: clusterctl.md 
+[implementation best practices]: ../best-practices.md +[infrastructure Provider Security Guidance]: ../security-guidelines.md diff --git a/docs/book/src/developer/providers/machine-infrastructure.md b/docs/book/src/developer/providers/contracts/infra-machine.md similarity index 97% rename from docs/book/src/developer/providers/machine-infrastructure.md rename to docs/book/src/developer/providers/contracts/infra-machine.md index 526d4673e126..3ec498714872 100644 --- a/docs/book/src/developer/providers/machine-infrastructure.md +++ b/docs/book/src/developer/providers/contracts/infra-machine.md @@ -114,7 +114,7 @@ accordingly. The following diagram shows the typical logic for a machine infrastructure provider: -![Machine infrastructure provider activity diagram](../../images/machine-infra-provider.png) +![Machine infrastructure provider activity diagram](../../../images/machine-infra-provider.png) ### Normal resource @@ -212,4 +212,4 @@ Note, the write permissions allow the `Machine` controller to set owner referenc ## Security Guidelines -Please refer to [Infrastructure Provider Security Guidance](../../security/infrastructure-provider-security-guidance.md). +Please refer to [Infrastructure Provider Security Guidance](../security-guidelines.md). diff --git a/docs/book/src/developer/providers/contracts/overview.md b/docs/book/src/developer/providers/contracts/overview.md new file mode 100644 index 000000000000..2b54504bdf54 --- /dev/null +++ b/docs/book/src/developer/providers/contracts/overview.md @@ -0,0 +1,36 @@ +# Provider contract + +The __Cluster API contract__ defines a set of rules a provider is expected to comply with in order to interact with Cluster API. +Those rules can be in the form of CustomResourceDefinition (CRD) fields and/or expected behaviors to be implemented. + +Different rules apply to each provider type and for each different resource that is expected to interact with "core" Cluster API. 
+ +- Infrastructure provider + - Contract rules for [InfraCluster](infra-cluster.md) resource + - Contract rules for [InfraMachine](infra-machine.md) resource + - Contract rules for InfraMachinePool resource (TODO) + +- Bootstrap provider + - Contract rules for [BootstrapConfig](bootstrap-config.md) resource + +- Control plane provider + - Contract rules for [ControlPlane](control-plane.md) resource + +- IPAM provider + - Contract rules for IPAM resource (TODO) + +- Addon Providers + - [Cluster API Add-On Orchestration](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20220712-cluster-api-addon-orchestration.md) + +- Runtime Extensions Providers + - [Experimental Feature: Runtime SDK (alpha)](https://cluster-api.sigs.k8s.io/tasks/experimental-features/runtime-sdk/) + +Additional rules must be considered for a provider to work with the [clusterctl CLI](clusterctl.md). + +## Improving and contributing to the contract + +The definition of the contract between Cluster API and providers may be changed in future versions of Cluster API. +The Cluster API maintainers welcome feedback and contributions to the contract in order to improve how it's defined, +its clarity and visibility to provider implementers and its suitability across the different kinds of Cluster API providers. +To provide feedback or open a discussion about the provider contract please [open an issue on the Cluster API](https://github.com/kubernetes-sigs/cluster-api/issues/new?assignees=&labels=&template=feature_request.md) +repo or add an item to the agenda in the [Cluster API community meeting](https://git.k8s.io/community/sig-cluster-lifecycle/README.md#cluster-api). 
diff --git a/docs/book/src/developer/providers/getting-started/initialize-repo-and-api-types.md b/docs/book/src/developer/providers/getting-started/initialize-repo-and-api-types.md index 87f4524439f0..a64a365737e2 --- a/docs/book/src/developer/providers/getting-started/initialize-repo-and-api-types.md +++ b/docs/book/src/developer/providers/getting-started/initialize-repo-and-api-types.md @@ -115,7 +115,4 @@ git commit -m "Generate Cluster and Machine resources." ### Apply further customizations -The cluster API CRDs should be further customized: - -- [Apply the contract version label to support conversions](../contracts.md#api-version-labels) (required to deploy _any_ custom resource of your provider) -- [Ensure you are compliant with the clusterctl provider contract](../../../clusterctl/provider-contract.md#components-yaml) +The Cluster API CRDs should be further customized; please refer to [provider contracts](../contracts/overview.md). diff --git a/docs/book/src/developer/providers/version-migration.md b/docs/book/src/developer/providers/migrations/overview.md similarity index 75% rename from docs/book/src/developer/providers/version-migration.md rename to docs/book/src/developer/providers/migrations/overview.md index d932d1c0fe73..83d4141e6bb6 --- a/docs/book/src/developer/providers/version-migration.md +++ b/docs/book/src/developer/providers/migrations/overview.md @@ -3,8 +3,8 @@ The following pages provide an overview of relevant changes between versions of Cluster API and their direct successors. These guides are intended to assist maintainers of other providers and consumers of the Go API in upgrading from one version of Cluster API to a subsequent version.
-- [v1.6 to v1.7](migrations/v1.6-to-v1.7.md) -- [v1.7 to v1.8](migrations/v1.7-to-v1.8.md) -- [v1.8 to v1.9](migrations/v1.7-to-v1.8.md) +- [v1.6 to v1.7](v1.6-to-v1.7.md) +- [v1.7 to v1.8](v1.7-to-v1.8.md) +- [v1.8 to v1.9](v1.8-to-v1.9.md) For older versions please refer to [Older Cluster API documentation versions](#clusterapi-documentation-versions) diff --git a/docs/book/src/security/infrastructure-provider-security-guidance.md b/docs/book/src/developer/providers/security-guidelines.md similarity index 100% rename from docs/book/src/security/infrastructure-provider-security-guidance.md rename to docs/book/src/developer/providers/security-guidelines.md diff --git a/docs/book/src/reference/glossary.md b/docs/book/src/reference/glossary.md index 93cdf63800bb..4e96f6d27995 --- a/docs/book/src/reference/glossary.md +++ b/docs/book/src/reference/glossary.md @@ -327,7 +327,7 @@ one of them corresponding to an infrastructure tenant. Please note that up until v1alpha3 this concept had a different meaning, referring to the capability to run multiple instances of the same provider, each one with its own credentials; starting from v1alpha4 we are disambiguating the two concepts. -See [Multi-tenancy](../developer/core/multi-tenancy.md) and [Support multiple instances](../developer/core/support-multiple-instances.md). +See also [Support multiple instances](../developer/core/support-multiple-instances.md). # N --- diff --git a/docs/book/src/tasks/experimental-features/cluster-class/index.md b/docs/book/src/tasks/experimental-features/cluster-class/index.md index d708451fc465..31476cfb4a2c --- a/docs/book/src/tasks/experimental-features/cluster-class/index.md +++ b/docs/book/src/tasks/experimental-features/cluster-class/index.md @@ -3,8 +3,6 @@ The ClusterClass feature introduces a new way to create clusters which reduces boilerplate and enables flexible and powerful customization of clusters.
ClusterClass is a powerful abstraction implemented on top of existing interfaces and offers a set of tools and operations to streamline cluster lifecycle management while maintaining the same underlying API. - -