e2e failure: Should pivot the bootstrap cluster to a self-hosted cluster #12340

Closed
@mboersma

Description

Which jobs are flaking?

capi-e2e-release-1.8

Which tests are flaking?

capi-e2e: [It] When testing Cluster API working on self-hosted clusters using ClusterClass [ClusterClass] Should pivot the bootstrap cluster to a self-hosted cluster 

Since when has it been flaking?

6/9/2025

Testgrid link

https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/periodic-cluster-api-e2e-release-1-8/1931924002503135232

Reason for failure (if possible)

 INFO: clusterctl move --from-kubeconfig /tmp/e2e-kubeconfig2207946005 --to-kubeconfig /tmp/e2e-kind3812226240 --namespace self-hosted-xgb36n
  [FAILED] in [AfterEach] - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/clusterctl/client.go:452 @ 06/09/25 05:28:39.475
  << Timeline
  [FAILED] Failed to run clusterctl move
  Expected success, but got an error:
      <*errors.withStack | 0xc002fc0768>: 
      failed to get object graph: failed to check for provisioned infrastructure: cannot start the move operation while "/, Kind=" self-hosted-xgb36n/worker-59kx64 is still provisioning the node
      {
          error: <*errors.withMessage | 0xc0030a4420>{
              cause: <*errors.withStack | 0xc002fc0738>{
                  error: <*errors.withMessage | 0xc0030a4400>{
                      cause: <errors.aggregate | len:1, cap:1>[
                          <*errors.fundamental | 0xc0030aa210>{
                              msg: "cannot start the move operation while \"/, Kind=\" self-hosted-xgb36n/worker-59kx64 is still provisioning the node",
                              stack: [0x1f40665, 0x1f3f9d3, 0x1f3f0ec, 0x1f78e02, 0x1f78c05, 0x20b0b48, 0x2194528, 0x8b6f13, 0x8caf1b, 0x47f921],
                          },
                      ],
                      msg: "failed to check for provisioned infrastructure",
                  },
                  stack: [0x1f3f9e9, 0x1f3f0ec, 0x1f78e02, 0x1f78c05, 0x20b0b48, 0x2194528, 0x8b6f13, 0x8caf1b, 0x47f921],
              },
              msg: "failed to get object graph",
          },
          stack: [0x1f3f185, 0x1f78e02, 0x1f78c05, 0x20b0b48, 0x2194528, 0x8b6f13, 0x8caf1b, 0x47f921],
      }
  In [AfterEach] at: /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/clusterctl/client.go:452 @ 06/09/25 05:28:39.475
  Full Stack Trace
    sigs.k8s.io/cluster-api/test/framework/clusterctl.Move({0x2cfb918, 0xc0006bf950}, {{0xc000b63920, 0x2b}, {0xc000134d76, 0x31}, {0xc000d193a0, 0x1d}, {0xc000134da8, 0x17}, ...})
    	/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/clusterctl/client.go:452 +0x6fd
    sigs.k8s.io/cluster-api/test/e2e.SelfHostedSpec.func3()
    	/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/self_hosted.go:451 +0x588
------------------------------
• [161.233 seconds]
When testing MachineDeployment scale out/in Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/md_scale.go:83
  Timeline >>
  STEP: Creating a namespace for hosting the "md-scale" test spec @ 06/09/25 05:28:13.566
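
Reading the error, the clusterctl move preflight walks the object graph and refuses to start while self-hosted-xgb36n/worker-59kx64 (reported with an empty "/, Kind=" GVK) is still provisioning its node, so the AfterEach kicked off the move before the workload cluster's machines had finished coming up. As a minimal sketch of one way to harden the spec, assuming a controller-runtime client and assuming the preflight check keys off Machine.Status.NodeRef (the helper name and polling intervals below are made up, not framework code):

    // Hypothetical helper: block until every Machine in the test namespace has
    // a NodeRef (i.e. its node is provisioned) before attempting clusterctl move.
    package e2e

    import (
        "context"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
        clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    func waitForMachinesProvisioned(ctx context.Context, c client.Client, ns string) error {
        return wait.PollUntilContextTimeout(ctx, 10*time.Second, 5*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                machines := &clusterv1.MachineList{}
                if err := c.List(ctx, machines, client.InNamespace(ns)); err != nil {
                    return false, err
                }
                for _, m := range machines.Items {
                    if m.Status.NodeRef == nil {
                        // At least one node is still provisioning; keep polling.
                        return false, nil
                    }
                }
                return true, nil
            })
    }

Polling on NodeRef matches the wording of the failure ("is still provisioning the node"); whether the framework should gate the move this way, or simply retry, is up for discussion.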

Anything else we need to know?

No response

Label(s) to be applied

/kind flake

Metadata

Labels

help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/flake: Categorizes issue or PR as related to a flaky test.
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.
triage/accepted: Indicates an issue or PR is ready to be actively worked on.
