2 changes: 2 additions & 0 deletions docs/book/src/developer/core/controllers/machine-pool.md
@@ -2,6 +2,8 @@

![](../../../images/cluster-admission-machinepool-controller.png)

📖 **For conceptual information about MachinePools, when to use them, and how they compare to MachineDeployments**, see the [MachinePool Guide](../../../tasks/experimental-features/machine-pools.md).

The MachinePool controller's main responsibilities are:

* Setting an OwnerReference on each MachinePool object to:
122 changes: 105 additions & 17 deletions docs/book/src/tasks/experimental-features/machine-pools.md
@@ -1,31 +1,114 @@
# Experimental Feature: MachinePool (beta)

The `MachinePool` feature provides a way to manage a set of machines by defining a common configuration, desired number of machine replicas, etc., similar to `MachineDeployment`. The difference is that for a `MachineDeployment` the `MachineSet` controller manages the lifecycle of each machine, whereas for a `MachinePool` each infrastructure provider brings its own solution for orchestrating these `Machines`.

**Feature gate name**: `MachinePool`

**Variable name to enable/disable the feature gate**: `EXP_MACHINE_POOL`

Infrastructure providers can support this feature by implementing a provider-specific `MachinePool`, such as `AzureMachinePool`.
## Table of Contents

- [Introduction](#introduction)
- [What is a MachinePool?](#what-is-a-machinepool)
- [Why MachinePool?](#why-machinepool)
- [When to use MachinePool vs MachineDeployment](#when-to-use-machinepool-vs-machinedeployment)
- [Enabling MachinePool](#enabling-machinepool)
- [MachinePool provider implementations](#machinepool-provider-implementations)
- [Additional Resources](#additional-resources)

## Introduction

Cluster API (CAPI) manages Kubernetes worker nodes primarily through Machine, MachineSet, and MachineDeployment objects. These primitives manage nodes individually (as Machines) and have served well across a wide variety of providers.

However, many infrastructure providers already offer first-class abstractions for groups of compute instances (AWS: Auto Scaling Groups (ASG), Azure: Virtual Machine Scale Sets (VMSS), or GCP: Managed Instance Groups (MIG)). These primitives natively support scaling, rolling upgrades, and health management.

MachinePool brings these provider features into Cluster API by introducing a higher-level abstraction for managing a group of machines as a single unit.

## What is a MachinePool?

A MachinePool is a Cluster API resource representing a group of worker nodes. Instead of reconciling each machine individually, CAPI delegates lifecycle management to the infrastructure provider.

- **MachinePool (core API)**: defines desired state (replicas, Kubernetes version, bootstrap template, infrastructure reference).
- **InfrastructureMachinePool (provider API)**: provides an implementation that backs a pool. A provider may offer more than one type depending on how it is managed. For example:
- `AWSMachinePool`: self-managed ASG
- `AWSManagedMachinePool`: EKS managed node group
- `AzureMachinePool`: VM Scale Set
- `AzureManagedMachinePool`: AKS managed node pool
- `GCPManagedMachinePool`: GKE managed node pool
- `OCIManagedMachinePool`: OKE managed node pool
- `ScalewayManagedMachinePool`: Scaleway Kapsule node pool
- **Bootstrap configuration**: still applies (e.g., kubeadm configs), ensuring that new nodes join the cluster with the correct setup.

The MachinePool controller coordinates between the Cluster API core and provider-specific implementations:

- Reconciles desired replicas with the infrastructure pool.
- Matches provider IDs from the infrastructure resource with Kubernetes Nodes in the workload cluster.
- Updates the MachinePool status (ready replicas, conditions, etc.).
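
To make the shape of these objects concrete, here is a minimal sketch of a MachinePool manifest. The field names follow the `v1beta1` API; the referenced `AzureMachinePool` and `KubeadmConfig` objects are illustrative placeholders:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  name: my-cluster-pool-0
  namespace: default
spec:
  clusterName: my-cluster
  replicas: 3
  template:
    spec:
      clusterName: my-cluster
      version: v1.29.0
      bootstrap:
        configRef:
          # A single bootstrap configuration is shared by all instances in the pool.
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfig
          name: my-cluster-pool-0
      infrastructureRef:
        # Provider-specific pool implementation (VMSS, ASG, MIG, ...).
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AzureMachinePool
        name: my-cluster-pool-0
```

The `infrastructureRef` points at the provider's pool resource, which in turn maps to the underlying scaling group in the cloud.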

## Why MachinePool?

### Leverage provider primitives

Most cloud providers already manage scaling, instance replacement, and health monitoring at the group level. MachinePool lets CAPI delegate lifecycle operations instead of duplicating that logic.

**Example:**
- AWS Auto Scaling Groups replace failed nodes automatically.
- Azure VM Scale Sets support rolling upgrades with configurable surge/availability strategies.

### Simplify upgrades and scaling

More details on `MachinePool` can be found at:
[MachinePool CAEP](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20190919-machinepool-api.md)
Upgrades and scaling events are managed at the pool level:
- Update Kubernetes version or bootstrap template → cloud provider handles rolling replacement.
- Scale up/down replicas → provider adjusts capacity.

For developer docs on the MachinePool controller, see [here](./../../developer/core/controllers/machine-pool.md).
This provides more predictable, cloud-native semantics compared to reconciling many individual Machine objects.
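
As a sketch of what this looks like in practice (building on the manifest above; values are illustrative), both operations are ordinary edits to the MachinePool spec:

```yaml
spec:
  replicas: 5            # scale the pool; the provider adjusts capacity
  template:
    spec:
      version: v1.30.1   # bump the version; the provider rolls instances
```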

## MachinePools vs MachineDeployments
### Autoscaling integration

Although MachinePools provide a similar feature to MachineDeployments, MachinePools do so by leveraging an InfraMachinePool which corresponds 1:1 with a resource like VMSS on Azure or Autoscaling Groups on AWS which we treat as a black box. When a MachinePool is scaled up, the InfraMachinePool scales itself up and populates its provider ID list based on the response from the infrastructure provider. On the other hand, when a MachineDeployment is scaled up, new Machines are created which then create an individual InfraMachine, which corresponds to a VM in any infrastructure provider.
MachinePool integrates with the Cluster Autoscaler in the same way that MachineDeployments do. In practice, the autoscaler treats a MachinePool as a node group, enabling scale-up and scale-down decisions based on cluster load.
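
As a sketch, assuming the management cluster runs the Cluster Autoscaler with its `clusterapi` provider, a MachinePool is opted into autoscaling via annotations on its metadata (bounds are illustrative):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  name: my-cluster-pool-0
  annotations:
    # Min/max bounds read by the Cluster Autoscaler's clusterapi provider.
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "10"
spec:
  clusterName: my-cluster
  # spec.replicas is then adjusted by the autoscaler within the bounds above.
```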

| MachinePools | MachineDeployments |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------- |
| Creates new instances through a single infrastructure resource like VMSS in Azure or Autoscaling Groups in AWS. | Creates new instances by creating new Machines, which create individual VM instances on the infra provider. |
| Set of instances is orchestrated by the infrastructure provider. | Set of instances is orchestrated by Cluster API using a MachineSet. |
| Each MachinePool corresponds 1:1 with an associated InfraMachinePool. | Each MachineDeployment includes a MachineSet, and for each replica, it creates a Machine and InfraMachine. |
| Each MachinePool requires only a single BootstrapConfig. | Each MachineDeployment uses an InfraMachineTemplate and a BootstrapConfigTemplate, and each Machine requires a unique BootstrapConfig. |
| Maintains a list of instances in the `providerIDList` field in the MachinePool spec. This list is populated based on the response from the infrastructure provider. | Maintains a list of instances through the Machine resources owned by the MachineSet. |
### Tradeoffs and limitations

While powerful, MachinePool comes with tradeoffs:

- **Infrastructure provider complexity**: requires infrastructure providers to implement and maintain an InfrastructureMachinePool type.
- **Less per-machine granularity**: you cannot configure each node individually; the pool defines a shared template.
> **Note**: While this is typically true, certain cloud providers do offer flexibility.
> **Example**: AWS allows `AWSMachinePool.spec.mixedInstancesPolicy.instancesDistribution`, while Azure allows `AzureMachinePool.spec.orchestrationMode`.
- **Complex reconciliation**: node-to-providerID matching introduces edge cases (delays, inconsistent states).
- **Draining**: The cloud resources backing a MachinePool may not natively support draining of Kubernetes worker nodes. For example, with an `AWSMachinePool`, AWS would normally terminate instances as quickly as possible. To address this, tools like `aws-node-termination-handler` combined with ASG lifecycle hooks (defined in `AWSMachinePool.spec.lifecycleHooks`) must be installed; this is not a built-in feature of the infrastructure provider (CAPA in this example).
- **Maturity**: The MachinePool API is still considered experimental/beta.

## When to use MachinePool vs MachineDeployment

Both MachineDeployment and MachinePool are valid options for managing worker nodes in Cluster API. The right choice depends on your infrastructure provider's capabilities and your operational requirements.

### Use MachinePool when:

- **Cloud provider supports scaling group primitives**: AWS Auto Scaling Groups, Azure Virtual Machine Scale Sets, GCP Managed Instance Groups, OCI Compute Instances, Scaleway Kapsule. These resources natively handle scaling, rolling upgrades, and health checks.
- **You want to leverage cloud provider-level features**: MachinePool enables direct use of cloud-native upgrade strategies (e.g., surge, maxUnavailable) and autoscaling behaviors.

### Use MachineDeployment when:

- **The provider does not support scaling groups**: Common in environments such as bare metal, vSphere, or Docker.
- **You need fine-grained per-machine control**: MachineDeployments allow unique bootstrap configurations, labels, and taints across different MachineSets.
- **You prefer maturity and portability**: MachineDeployment is stable, GA, and supported across all providers. MachinePool remains experimental in some implementations.

## Enabling MachinePool

Starting from Cluster API v1.7, MachinePool is enabled by default. No additional configuration is needed.

For Cluster API versions prior to v1.7, you need to set the `EXP_MACHINE_POOL` environment variable:

```bash
export EXP_MACHINE_POOL=true
clusterctl init
```

Or when upgrading an existing management cluster:

```bash
export EXP_MACHINE_POOL=true
clusterctl upgrade apply
```

## MachinePool provider implementations

@@ -38,3 +121,8 @@ The following Cluster API infrastructure providers have implemented support for
| [GCP](https://github.com/kubernetes-sigs/cluster-api-provider-gcp/pull/1506) | `GCPMachinePool` | In Progress |
| [OCI](https://oracle.github.io/cluster-api-provider-oci/managed/managedcluster.html) | `OCIManagedMachinePool`<br> `OCIMachinePool` | Implemented, MachinePoolMachines supported |
| [Scaleway](https://github.com/scaleway/cluster-api-provider-scaleway/blob/main/docs/scalewaymanagedmachinepool.md) | `ScalewayManagedMachinePool` | Implemented |

## Additional Resources

- **Design Document**: [MachinePool CAEP](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20190919-machinepool-api.md)
- **Developer Documentation**: [MachinePool Controller](./../../developer/core/controllers/machine-pool.md)
4 changes: 4 additions & 0 deletions docs/book/src/user/concepts.md
@@ -61,6 +61,10 @@ A MachineDeployment provides declarative updates for Machines and MachineSets.

A MachineDeployment works similarly to a core Kubernetes [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/). A MachineDeployment reconciles changes to a Machine spec by rolling out changes to 2 MachineSets, the old and the newly updated.

### MachinePool

A MachinePool is a declarative spec for a group of Machines. It is similar to a MachineDeployment, but lifecycle management of its machines is delegated to the infrastructure provider rather than handled by a MachineSet. For more information, see [MachinePool](../tasks/experimental-features/machine-pools.md).

### MachineSet

A MachineSet's purpose is to maintain a stable set of Machines running at any given time.