Conversation


@huiran0826 huiran0826 commented Dec 1, 2025

Test Coverage for Bug: https://issues.redhat.com/browse/OCPBUGS-63348

Test steps:

  1. Check the node capacity (see the parsing sketch after this list):
     Annotations:        cloud.network.openshift.io/egress-ipconfig:
                           [{"interface":"eni-018a493be1e8adf3f","ifaddr":{"ipv4":"10.0.0.0/18"},"capacity":{"ipv4":14,"ipv6":15}}]
                         csi.volume.kubernetes.io/nodeid: {"ebs.csi.aws.com":"i-061369b27ea9661fc"}
  2. Label it as an egress node.
  3. Configure multiple EgressIPs; the EgressIPs should be assigned successfully.
  4. Reboot the egress node.
  5. Check that the EgressIPs are assigned back successfully.
  6. Check that the capacity is unchanged.
  7. Check 'oc get cloudprivateipconfigs -o yaml'; there should be no errors.
  8. Repeat the test, replacing step 4 with killing the CNCC pod.
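
For reference, the per-node capacity checked in steps 1 and 6 comes from the cloud.network.openshift.io/egress-ipconfig annotation shown under step 1. Below is a minimal parsing sketch; the field names are taken from the sample output above, but the egressIPConfig type and the inlined annotation value are illustrative, not helpers from this PR:

  package main

  import (
      "encoding/json"
      "fmt"
  )

  // egressIPConfig mirrors one entry of the
  // cloud.network.openshift.io/egress-ipconfig node annotation.
  type egressIPConfig struct {
      Interface string `json:"interface"`
      IFAddr    struct {
          IPv4 string `json:"ipv4"`
      } `json:"ifaddr"`
      Capacity struct {
          IPv4 int `json:"ipv4"`
          IPv6 int `json:"ipv6"`
      } `json:"capacity"`
  }

  func main() {
      // Annotation value copied from the node output in step 1.
      raw := `[{"interface":"eni-018a493be1e8adf3f","ifaddr":{"ipv4":"10.0.0.0/18"},"capacity":{"ipv4":14,"ipv6":15}}]`

      var configs []egressIPConfig
      if err := json.Unmarshal([]byte(raw), &configs); err != nil {
          panic(err)
      }
      for _, c := range configs {
          fmt.Printf("interface %s: ipv4 capacity %d, ipv6 capacity %d\n",
              c.Interface, c.Capacity.IPv4, c.Capacity.IPv6)
      }
  }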

Test log:

  I1201 15:10:53.984647   49393 i18n.go:139] Couldn't find translations for C, using default
  I1201 15:10:54.167518   49393 binary.go:77] Found 8501 test specs
  I1201 15:10:54.168857   49393 binary.go:94] 1051 test specs remain, after filtering out k8s
openshift-tests v4.1.0-10414-g563a12f
  I1201 15:10:58.702142   49393 test_setup.go:125] Extended test version v4.1.0-10414-g563a12f
  I1201 15:10:58.702278   49393 test_context.go:559] Tolerating taints "node-role.kubernetes.io/control-plane" when considering if nodes are ready
  I1201 15:10:58.923052 49393 framework.go:2334] microshift-version configmap not found
  I1201 15:10:58.925650   49393 binary.go:114] Loaded test configuration: &framework.TestContextType{KubeConfig:"/tmp/kubeconfig", KubeContext:"", KubeAPIContentType:"application/vnd.kubernetes.protobuf", KubeletRootDir:"/var/lib/kubelet", KubeletConfigDropinDir:"", CertDir:"", Host:"https://api.huirwang-1201a.qe.devcluster.openshift.com:6443", BearerToken:"<redacted>", RepoRoot:"../../", ListImages:false, listTests:false, listLabels:false, ListConformanceTests:false, Provider:"aws", Tooling:"", timeouts:framework.TimeoutContext{Poll:2000000000, PodStart:300000000000, PodStartShort:120000000000, PodStartSlow:900000000000, PodDelete:300000000000, ClaimProvision:300000000000, DataSourceProvision:300000000000, ClaimProvisionShort:60000000000, ClaimBound:180000000000, PVReclaim:180000000000, PVBound:180000000000, PVCreate:180000000000, PVDelete:300000000000, PVDeleteSlow:1200000000000, SnapshotCreate:300000000000, SnapshotDelete:300000000000, SnapshotControllerMetrics:300000000000, SystemPodsStartup:600000000000, NodeSchedulable:1800000000000, SystemDaemonsetStartup:300000000000, NodeNotReady:180000000000}, CloudConfig:framework.CloudConfig{APIEndpoint:"", ProjectID:"", Zone:"us-east-2a", Zones:[]string{"us-east-2a"}, Region:"us-east-2", MultiZone:false, MultiMaster:true, Cluster:"", MasterName:"", NodeInstanceGroup:"", NumNodes:3, ClusterIPRange:"", ClusterTag:"", Network:"", ConfigFile:"", NodeTag:"", MasterTag:"", Provider:(*aws.Provider)(0x10e3379a8)}, KubectlPath:"kubectl", OutputDir:"/tmp", ReportDir:"", ReportPrefix:"", ReportCompleteGinkgo:false, ReportCompleteJUnit:false, Prefix:"e2e", MinStartupPods:-1, EtcdUpgradeStorage:"", EtcdUpgradeVersion:"", GCEUpgradeScript:"", ContainerRuntimeEndpoint:"unix:///run/containerd/containerd.sock", ContainerRuntimeProcessName:"containerd", ContainerRuntimePidFile:"/run/containerd/containerd.pid", SystemdServices:"containerd*", DumpSystemdJournal:false, ImageServiceEndpoint:"", MasterOSDistro:"custom", NodeOSDistro:"custom", NodeOSArch:"amd64", VerifyServiceAccount:true, DeleteNamespace:true, DeleteNamespaceOnFailure:true, AllowedNotReadyNodes:-1, CleanStart:false, GatherKubeSystemResourceUsageData:"false", GatherLogsSizes:false, GatherMetricsAfterTest:"false", GatherSuiteMetricsAfterTest:false, MaxNodesToGather:0, IncludeClusterAutoscalerMetrics:false, OutputPrintType:"json", CreateTestingNS:(framework.CreateTestingNSFn)(0x1060bc570), DumpLogsOnFailure:true, DisableLogDump:false, LogexporterGCSPath:"", NodeTestContextType:framework.NodeTestContextType{NodeE2E:false, NodeName:"", NodeConformance:false, PrepullImages:false, ImageDescription:"", RuntimeConfig:map[string]string(nil), SystemSpecName:"", RestartKubelet:false, ExtraEnvs:map[string]string(nil), StandaloneMode:false, CriProxyEnabled:false}, ClusterDNSDomain:"cluster.local", NodeKiller:framework.NodeKillerConfig{Enabled:false, FailureRatio:0.01, Interval:60000000000, JitterFactor:60, SimulatedDowntime:600000000000, NodeKillerStopCtx:context.Context(nil), NodeKillerStop:(func())(nil)}, IPFamily:"ipv4", NonblockingTaints:"node-role.kubernetes.io/control-plane", ProgressReportURL:"", SriovdpConfigMapFile:"", SpecSummaryOutput:"", DockerConfigFile:"", E2EDockerConfigFile:"", KubeTestRepoList:"", SnapshotControllerPodName:"", SnapshotControllerHTTPPort:0, RequireDevices:false, EnabledVolumeDrivers:[]string(nil)}
  Running Suite:  - /Users/huirwang/go/src/github.com/openshift/origin
  ====================================================================
  Random Seed: 1764573053 - will randomize all specs

  Will run 1 of 1 specs
  ------------------------------
  [sig-network][Feature:EgressIP][apigroup:operator.openshift.io] [external-targets][apigroup:user.openshift.io][apigroup:security.openshift.io] Rebooting a node/Restarting CNCC pod should not change the EgressIPs capacity
  github.com/openshift/origin/test/extended/networking/egressip.go:569
    STEP: Creating a kubernetes client @ 12/01/25 15:10:58.946
  I1201 15:10:58.947972   49393 discovery.go:214] Invalidating discovery information
  I1201 15:11:03.559062 49393 client.go:293] configPath is now "/var/folders/qz/g9wq_ps129jdx0wygt6t7dr80000gn/T/configfile2222328429"
  I1201 15:11:03.559136 49393 client.go:368] The user is now "e2e-test-egressip-zr9sr-user"
  I1201 15:11:03.559148 49393 client.go:370] Creating project "e2e-test-egressip-zr9sr"
  I1201 15:11:03.850196 49393 client.go:378] Waiting on permissions in project "e2e-test-egressip-zr9sr" ...
  I1201 15:11:04.749810 49393 client.go:407] DeploymentConfig capability is enabled, adding 'deployer' SA to the list of default SAs
  I1201 15:11:04.975550 49393 client.go:422] Waiting for ServiceAccount "default" to be provisioned...
  I1201 15:11:05.517153 49393 client.go:422] Waiting for ServiceAccount "builder" to be provisioned...
  I1201 15:11:06.057254 49393 client.go:422] Waiting for ServiceAccount "deployer" to be provisioned...
  I1201 15:11:06.595657 49393 client.go:432] Waiting for RoleBinding "system:image-pullers" to be provisioned...
  I1201 15:11:07.020401 49393 client.go:432] Waiting for RoleBinding "system:image-builders" to be provisioned...
  I1201 15:11:07.868521 49393 client.go:432] Waiting for RoleBinding "system:deployers" to be provisioned...
  I1201 15:11:09.384841 49393 client.go:465] Project "e2e-test-egressip-zr9sr" has been fully provisioned.
    STEP: Verifying that this cluster uses a network plugin that is supported for this test @ 12/01/25 15:11:09.385
    STEP: Creating a temp directory @ 12/01/25 15:11:10.311
    STEP: Getting the kubernetes clientset @ 12/01/25 15:11:10.312
    STEP: Getting the cloudnetwork clientset @ 12/01/25 15:11:10.312
    STEP: Determining the cloud infrastructure type @ 12/01/25 15:11:10.314
    STEP: Verifying that this is a supported cloud infrastructure platform @ 12/01/25 15:11:10.535
    STEP: Verifying that this is a supported version of OpenShift @ 12/01/25 15:11:10.536
    STEP: Getting all worker nodes in alphabetical order @ 12/01/25 15:11:11.635
    STEP: Determining the cloud address families @ 12/01/25 15:11:11.855
    STEP: Determining the target protocol, host and port @ 12/01/25 15:11:12.082
  I1201 15:11:12.082683 49393 egressip.go:147] Testing against: CloudType: AWS, Protocol http, TargetHost: self, TargetPort: 80
    STEP: Creating a project for the prober pod @ 12/01/25 15:11:12.082
  I1201 15:11:15.516221 49393 client.go:293] configPath is now "/var/folders/qz/g9wq_ps129jdx0wygt6t7dr80000gn/T/configfile3648316508"
  I1201 15:11:15.516321 49393 client.go:368] The user is now "e2e-test-egressip-fdb7c-user"
  I1201 15:11:15.516378 49393 client.go:370] Creating project "e2e-test-egressip-fdb7c"
  I1201 15:11:15.778509 49393 client.go:378] Waiting on permissions in project "e2e-test-egressip-fdb7c" ...
  I1201 15:11:16.667626 49393 client.go:407] DeploymentConfig capability is enabled, adding 'deployer' SA to the list of default SAs
  I1201 15:11:16.892222 49393 client.go:422] Waiting for ServiceAccount "default" to be provisioned...
  I1201 15:11:17.427957 49393 client.go:422] Waiting for ServiceAccount "builder" to be provisioned...
  I1201 15:11:17.963388 49393 client.go:422] Waiting for ServiceAccount "deployer" to be provisioned...
  I1201 15:11:18.516915 49393 client.go:432] Waiting for RoleBinding "system:image-pullers" to be provisioned...
  I1201 15:11:18.936266 49393 client.go:432] Waiting for RoleBinding "system:image-builders" to be provisioned...
  I1201 15:11:19.361594 49393 client.go:432] Waiting for RoleBinding "system:deployers" to be provisioned...
  I1201 15:11:20.248335 49393 client.go:465] Project "e2e-test-egressip-fdb7c" has been fully provisioned.
    STEP: Selecting the EgressIP nodes and a non-EgressIP node @ 12/01/25 15:11:20.248
    STEP: Setting the ingressdomain @ 12/01/25 15:11:20.248
    STEP: Setting the EgressIP nodes as EgressIP assignable @ 12/01/25 15:11:20.475
    STEP: Adding SCC privileged to the external namespace @ 12/01/25 15:11:23.342
    STEP: Determining the interface that will be used for packet sniffing @ 12/01/25 15:11:24.513
  I1201 15:11:48.947039 49393 egressip.go:342] Using interface ens5 for packet captures
    STEP: Spawning the packet sniffer pods on the EgressIP assignable hosts @ 12/01/25 15:11:48.947
    STEP: Get one Egress node @ 12/01/25 15:11:54.903
    STEP: Get capacity of one Egress node before reboot @ 12/01/25 15:11:56.123
  I1201 15:11:56.560811 49393 egressip.go:583] The capacity of node ip-10-0-32-25.us-east-2.compute.internal before reboot is {14 15 0}
    STEP: Getting a map of source nodes and potential Egress IPs for these nodes @ 12/01/25 15:11:56.56
  I1201 15:11:58.415116 49393 egressip.go:594] map[ip-10-0-32-25.us-east-2.compute.internal:[10.0.0.5 10.0.0.6 10.0.0.7 10.0.0.8 10.0.0.9 10.0.0.10 10.0.0.11 10.0.0.12 10.0.0.13 10.0.0.14 10.0.0.15 10.0.0.16 10.0.0.17 10.0.0.18] ip-10-0-46-129.us-east-2.compute.internal:[10.0.0.19 10.0.0.20 10.0.0.21 10.0.0.22 10.0.0.23 10.0.0.24 10.0.0.25 10.0.0.26 10.0.0.27 10.0.0.28 10.0.0.29 10.0.0.30 10.0.0.31 10.0.0.32]]
    STEP: Creating 14 EgressIPs objects for the Egress node @ 12/01/25 15:11:58.415
    STEP: Creating the EgressIP object @ 12/01/25 15:11:58.415
  I1201 15:11:58.415349 49393 egressip.go:692] Marshalling the desired EgressIPs into a string
  I1201 15:11:58.415437 49393 egressip.go:700] Creating the EgressIP object and writing it to disk
    STEP: Applying the EgressIP object @ 12/01/25 15:11:58.416
  I1201 15:11:58.416302 49393 egressip.go:728] Applying the EgressIP object e2e-test-egressip-zr9sr-0
  I1201 15:11:59.384190 49393 egressip.go:733] Waiting for CloudPrivateIPConfig creation for a maximum of 180 seconds
  I1201 15:11:59.602996 49393 egressip.go:745] CloudPrivateIPConfig for 10.0.0.5 not assigned.
  I1201 15:12:04.823207 49393 egressip.go:749] CloudPrivateIPConfigs for map[10.0.0.5:ip-10-0-32-25.us-east-2.compute.internal] found.
  I1201 15:12:04.823397 49393 egressip.go:754] Waiting for EgressIP addresses inside status of EgressIP CR e2e-test-egressip-zr9sr-0 for a maximum of 180 seconds
  I1201 15:12:05.045092 49393 egressip.go:768] Egress IP object e2e-test-egressip-zr9sr-0 does have all IPs for map[10.0.0.5:ip-10-0-32-25.us-east-2.compute.internal].
    STEP: Creating the EgressIP object @ 12/01/25 15:12:05.045
  I1201 15:12:05.045454 49393 egressip.go:692] Marshalling the desired EgressIPs into a string
  I1201 15:12:05.045477 49393 egressip.go:700] Creating the EgressIP object and writing it to disk
    STEP: Applying the EgressIP object @ 12/01/25 15:12:05.047
  I1201 15:12:05.047732 49393 egressip.go:728] Applying the EgressIP object e2e-test-egressip-zr9sr-1
  I1201 15:12:06.071958 49393 egressip.go:733] Waiting for CloudPrivateIPConfig creation for a maximum of 180 seconds
  I1201 15:12:06.293176 49393 egressip.go:745] CloudPrivateIPConfig for 10.0.0.6 not assigned.
  I1201 15:12:11.512949 49393 egressip.go:749] CloudPrivateIPConfigs for map[10.0.0.6:ip-10-0-32-25.us-east-2.compute.internal] found.
  I1201 15:12:11.513183 49393 egressip.go:754] Waiting for EgressIP addresses inside status of EgressIP CR e2e-test-egressip-zr9sr-1 for a maximum of 180 seconds
  I1201 15:12:11.736488 49393 egressip.go:768] Egress IP object e2e-test-egressip-zr9sr-1 does have all IPs for map[10.0.0.6:ip-10-0-32-25.us-east-2.compute.internal].
    STEP: Creating the EgressIP object @ 12/01/25 15:12:11.736
  I1201 15:12:11.737037 49393 egressip.go:692] Marshalling the desired EgressIPs into a string
  I1201 15:12:11.737073 49393 egressip.go:700] Creating the EgressIP object and writing it to disk
    STEP: Applying the EgressIP object @ 12/01/25 15:12:11.738
  I1201 15:12:11.738744 49393 egressip.go:728] Applying the EgressIP object e2e-test-egressip-zr9sr-2
  I1201 15:12:12.839978 49393 egressip.go:733] Waiting for CloudPrivateIPConfig creation for a maximum of 180 seconds
  I1201 15:12:13.391946 49393 egressip.go:745] CloudPrivateIPConfig for 10.0.0.7 not assigned.
  I1201 15:12:18.660828 49393 egressip.go:749] CloudPrivateIPConfigs for map[10.0.0.7:ip-10-0-32-25.us-east-2.compute.internal] found.
  I1201 15:12:18.661045 49393 egressip.go:754] Waiting for EgressIP addresses inside status of EgressIP CR e2e-test-egressip-zr9sr-2 for a maximum of 180 seconds
  I1201 15:12:18.882365 49393 egressip.go:768] Egress IP object e2e-test-egressip-zr9sr-2 does have all IPs for map[10.0.0.7:ip-10-0-32-25.us-east-2.compute.internal].
    STEP: Creating the EgressIP object @ 12/01/25 15:12:18.882
  I1201 15:12:18.882787 49393 egressip.go:692] Marshalling the desired EgressIPs into a string
  I1201 15:12:18.882812 49393 egressip.go:700] Creating the EgressIP object and writing it to disk
    STEP: Applying the EgressIP object @ 12/01/25 15:12:18.884
  I1201 15:12:18.884499 49393 egressip.go:728] Applying the EgressIP object e2e-test-egressip-zr9sr-3
  I1201 15:12:19.851880 49393 egressip.go:733] Waiting for CloudPrivateIPConfig creation for a maximum of 180 seconds
  I1201 15:12:20.072176 49393 egressip.go:745] CloudPrivateIPConfig for 10.0.0.8 not assigned.
  I1201 15:12:25.295302 49393 egressip.go:749] CloudPrivateIPConfigs for map[10.0.0.8:ip-10-0-32-25.us-east-2.compute.internal] found.
  I1201 15:12:25.295407 49393 egressip.go:754] Waiting for EgressIP addresses inside status of EgressIP CR e2e-test-egressip-zr9sr-3 for a maximum of 180 seconds
  I1201 15:12:25.624895 49393 egressip.go:768] Egress IP object e2e-test-egressip-zr9sr-3 does have all IPs for map[10.0.0.8:ip-10-0-32-25.us-east-2.compute.internal].
    STEP: Creating the EgressIP object @ 12/01/25 15:12:25.625
  I1201 15:12:25.625372 49393 egressip.go:692] Marshalling the desired EgressIPs into a string
  I1201 15:12:25.625407 49393 egressip.go:700] Creating the EgressIP object and writing it to disk
    STEP: Applying the EgressIP object @ 12/01/25 15:12:25.627
  I1201 15:12:25.627143 49393 egressip.go:728] Applying the EgressIP object e2e-test-egressip-zr9sr-4
  I1201 15:12:26.598471 49393 egressip.go:733] Waiting for CloudPrivateIPConfig creation for a maximum of 180 seconds
  I1201 15:12:26.818077 49393 egressip.go:745] CloudPrivateIPConfig for 10.0.0.9 not assigned.
  I1201 15:12:32.037625 49393 egressip.go:749] CloudPrivateIPConfigs for map[10.0.0.9:ip-10-0-32-25.us-east-2.compute.internal] found.
  I1201 15:12:32.037903 49393 egressip.go:754] Waiting for EgressIP addresses inside status of EgressIP CR e2e-test-egressip-zr9sr-4 for a maximum of 180 seconds
  I1201 15:12:32.261623 49393 egressip.go:768] Egress IP object e2e-test-egressip-zr9sr-4 does have all IPs for map[10.0.0.9:ip-10-0-32-25.us-east-2.compute.internal].
    STEP: Creating the EgressIP object @ 12/01/25 15:12:32.262
  I1201 15:12:32.262416 49393 egressip.go:692] Marshalling the desired EgressIPs into a string
  I1201 15:12:32.262451 49393 egressip.go:700] Creating the EgressIP object and writing it to disk
    STEP: Applying the EgressIP object @ 12/01/25 15:12:32.264
  I1201 15:12:32.264490 49393 egressip.go:728] Applying the EgressIP object e2e-test-egressip-zr9sr-5
  I1201 15:12:33.638877 49393 egressip.go:733] Waiting for CloudPrivateIPConfig creation for a maximum of 180 seconds
  I1201 15:12:34.188230 49393 egressip.go:745] CloudPrivateIPConfig for 10.0.0.10 not assigned.
  I1201 15:12:39.408783 49393 egressip.go:749] CloudPrivateIPConfigs for map[10.0.0.10:ip-10-0-32-25.us-east-2.compute.internal] found.
  I1201 15:12:39.408984 49393 egressip.go:754] Waiting for EgressIP addresses inside status of EgressIP CR e2e-test-egressip-zr9sr-5 for a maximum of 180 seconds
  I1201 15:12:39.629988 49393 egressip.go:768] Egress IP object e2e-test-egressip-zr9sr-5 does have all IPs for map[10.0.0.10:ip-10-0-32-25.us-east-2.compute.internal].
    STEP: Creating the EgressIP object @ 12/01/25 15:12:39.63
  I1201 15:12:39.630330 49393 egressip.go:692] Marshalling the desired EgressIPs into a string
  I1201 15:12:39.630349 49393 egressip.go:700] Creating the EgressIP object and writing it to disk
    STEP: Applying the EgressIP object @ 12/01/25 15:12:39.631
  I1201 15:12:39.631893 49393 egressip.go:728] Applying the EgressIP object e2e-test-egressip-zr9sr-6
  I1201 15:12:41.650198 49393 egressip.go:733] Waiting for CloudPrivateIPConfig creation for a maximum of 180 seconds
  I1201 15:12:41.870644 49393 egressip.go:745] CloudPrivateIPConfig for 10.0.0.11 not assigned.
  I1201 15:12:47.734563 49393 egressip.go:749] CloudPrivateIPConfigs for map[10.0.0.11:ip-10-0-32-25.us-east-2.compute.internal] found.
  I1201 15:12:47.734790 49393 egressip.go:754] Waiting for EgressIP addresses inside status of EgressIP CR e2e-test-egressip-zr9sr-6 for a maximum of 180 seconds
  I1201 15:12:47.957331 49393 egressip.go:768] Egress IP object e2e-test-egressip-zr9sr-6 does have all IPs for map[10.0.0.11:ip-10-0-32-25.us-east-2.compute.internal].
    STEP: Creating the EgressIP object @ 12/01/25 15:12:47.957
  I1201 15:12:47.957716 49393 egressip.go:692] Marshalling the desired EgressIPs into a string
  I1201 15:12:47.957832 49393 egressip.go:700] Creating the EgressIP object and writing it to disk
    STEP: Applying the EgressIP object @ 12/01/25 15:12:47.959
  I1201 15:12:47.959358 49393 egressip.go:728] Applying the EgressIP object e2e-test-egressip-zr9sr-7
  I1201 15:12:49.081577 49393 egressip.go:733] Waiting for CloudPrivateIPConfig creation for a maximum of 180 seconds
  I1201 15:12:49.514519 49393 egressip.go:745] CloudPrivateIPConfig for 10.0.0.12 not assigned.
  I1201 15:12:54.735708 49393 egressip.go:749] CloudPrivateIPConfigs for map[10.0.0.12:ip-10-0-32-25.us-east-2.compute.internal] found.
  I1201 15:12:54.735958 49393 egressip.go:754] Waiting for EgressIP addresses inside status of EgressIP CR e2e-test-egressip-zr9sr-7 for a maximum of 180 seconds
  I1201 15:12:54.959639 49393 egressip.go:768] Egress IP object e2e-test-egressip-zr9sr-7 does have all IPs for map[10.0.0.12:ip-10-0-32-25.us-east-2.compute.internal].
    STEP: Creating the EgressIP object @ 12/01/25 15:12:54.959
  I1201 15:12:54.960039 49393 egressip.go:692] Marshalling the desired EgressIPs into a string
  I1201 15:12:54.960074 49393 egressip.go:700] Creating the EgressIP object and writing it to disk
    STEP: Applying the EgressIP object @ 12/01/25 15:12:54.961
  I1201 15:12:54.961953 49393 egressip.go:728] Applying the EgressIP object e2e-test-egressip-zr9sr-8
  I1201 15:12:56.048342 49393 egressip.go:733] Waiting for CloudPrivateIPConfig creation for a maximum of 180 seconds
  I1201 15:12:56.268960 49393 egressip.go:745] CloudPrivateIPConfig for 10.0.0.13 not assigned.
  I1201 15:13:01.515422 49393 egressip.go:749] CloudPrivateIPConfigs for map[10.0.0.13:ip-10-0-32-25.us-east-2.compute.internal] found.
  I1201 15:13:01.515777 49393 egressip.go:754] Waiting for EgressIP addresses inside status of EgressIP CR e2e-test-egressip-zr9sr-8 for a maximum of 180 seconds
  I1201 15:13:01.771764 49393 egressip.go:768] Egress IP object e2e-test-egressip-zr9sr-8 does have all IPs for map[10.0.0.13:ip-10-0-32-25.us-east-2.compute.internal].
    STEP: Creating the EgressIP object @ 12/01/25 15:13:01.772
  I1201 15:13:01.772212 49393 egressip.go:692] Marshalling the desired EgressIPs into a string
  I1201 15:13:01.772247 49393 egressip.go:700] Creating the EgressIP object and writing it to disk
    STEP: Applying the EgressIP object @ 12/01/25 15:13:01.775
  I1201 15:13:01.775425 49393 egressip.go:728] Applying the EgressIP object e2e-test-egressip-zr9sr-9
  I1201 15:13:02.798881 49393 egressip.go:733] Waiting for CloudPrivateIPConfig creation for a maximum of 180 seconds
  I1201 15:13:03.353955 49393 egressip.go:745] CloudPrivateIPConfig for 10.0.0.14 not assigned.
  I1201 15:13:08.577566 49393 egressip.go:749] CloudPrivateIPConfigs for map[10.0.0.14:ip-10-0-32-25.us-east-2.compute.internal] found.
  I1201 15:13:08.577797 49393 egressip.go:754] Waiting for EgressIP addresses inside status of EgressIP CR e2e-test-egressip-zr9sr-9 for a maximum of 180 seconds
  I1201 15:13:08.802435 49393 egressip.go:768] Egress IP object e2e-test-egressip-zr9sr-9 does have all IPs for map[10.0.0.14:ip-10-0-32-25.us-east-2.compute.internal].
    STEP: Creating the EgressIP object @ 12/01/25 15:13:08.802
  I1201 15:13:08.802987 49393 egressip.go:692] Marshalling the desired EgressIPs into a string
  I1201 15:13:08.803024 49393 egressip.go:700] Creating the EgressIP object and writing it to disk
    STEP: Applying the EgressIP object @ 12/01/25 15:13:08.805
  I1201 15:13:08.805382 49393 egressip.go:728] Applying the EgressIP object e2e-test-egressip-zr9sr-10
  I1201 15:13:09.767746 49393 egressip.go:733] Waiting for CloudPrivateIPConfig creation for a maximum of 180 seconds
  I1201 15:13:09.987572 49393 egressip.go:745] CloudPrivateIPConfig for 10.0.0.15 not assigned.
  I1201 15:13:15.211368 49393 egressip.go:749] CloudPrivateIPConfigs for map[10.0.0.15:ip-10-0-32-25.us-east-2.compute.internal] found.
  I1201 15:13:15.211589 49393 egressip.go:754] Waiting for EgressIP addresses inside status of EgressIP CR e2e-test-egressip-zr9sr-10 for a maximum of 180 seconds
  I1201 15:13:15.435239 49393 egressip.go:768] Egress IP object e2e-test-egressip-zr9sr-10 does have all IPs for map[10.0.0.15:ip-10-0-32-25.us-east-2.compute.internal].
    STEP: Creating the EgressIP object @ 12/01/25 15:13:15.435
  I1201 15:13:15.435604 49393 egressip.go:692] Marshalling the desired EgressIPs into a string
  I1201 15:13:15.435637 49393 egressip.go:700] Creating the EgressIP object and writing it to disk
    STEP: Applying the EgressIP object @ 12/01/25 15:13:15.437
  I1201 15:13:15.437374 49393 egressip.go:728] Applying the EgressIP object e2e-test-egressip-zr9sr-11
  I1201 15:13:16.432639 49393 egressip.go:733] Waiting for CloudPrivateIPConfig creation for a maximum of 180 seconds
  I1201 15:13:16.655215 49393 egressip.go:745] CloudPrivateIPConfig for 10.0.0.16 not assigned.
  I1201 15:13:21.878690 49393 egressip.go:749] CloudPrivateIPConfigs for map[10.0.0.16:ip-10-0-32-25.us-east-2.compute.internal] found.
  I1201 15:13:21.878925 49393 egressip.go:754] Waiting for EgressIP addresses inside status of EgressIP CR e2e-test-egressip-zr9sr-11 for a maximum of 180 seconds
  I1201 15:13:22.153127 49393 egressip.go:768] Egress IP object e2e-test-egressip-zr9sr-11 does have all IPs for map[10.0.0.16:ip-10-0-32-25.us-east-2.compute.internal].
    STEP: Creating the EgressIP object @ 12/01/25 15:13:22.153
  I1201 15:13:22.153468 49393 egressip.go:692] Marshalling the desired EgressIPs into a string
  I1201 15:13:22.153493 49393 egressip.go:700] Creating the EgressIP object and writing it to disk
    STEP: Applying the EgressIP object @ 12/01/25 15:13:22.154
  I1201 15:13:22.155015 49393 egressip.go:728] Applying the EgressIP object e2e-test-egressip-zr9sr-12
  I1201 15:13:23.125358 49393 egressip.go:733] Waiting for CloudPrivateIPConfig creation for a maximum of 180 seconds
  I1201 15:13:23.343061 49393 egressip.go:745] CloudPrivateIPConfig for 10.0.0.17 not assigned.
  I1201 15:13:28.564911 49393 egressip.go:749] CloudPrivateIPConfigs for map[10.0.0.17:ip-10-0-32-25.us-east-2.compute.internal] found.
  I1201 15:13:28.565172 49393 egressip.go:754] Waiting for EgressIP addresses inside status of EgressIP CR e2e-test-egressip-zr9sr-12 for a maximum of 180 seconds
  I1201 15:13:28.788327 49393 egressip.go:768] Egress IP object e2e-test-egressip-zr9sr-12 does have all IPs for map[10.0.0.17:ip-10-0-32-25.us-east-2.compute.internal].
    STEP: Creating the EgressIP object @ 12/01/25 15:13:28.788
  I1201 15:13:28.788725 49393 egressip.go:692] Marshalling the desired EgressIPs into a string
  I1201 15:13:28.788749 49393 egressip.go:700] Creating the EgressIP object and writing it to disk
    STEP: Applying the EgressIP object @ 12/01/25 15:13:28.79
  I1201 15:13:28.790495 49393 egressip.go:728] Applying the EgressIP object e2e-test-egressip-zr9sr-13
  I1201 15:13:29.798123 49393 egressip.go:733] Waiting for CloudPrivateIPConfig creation for a maximum of 180 seconds
  I1201 15:13:30.016507 49393 egressip.go:745] CloudPrivateIPConfig for 10.0.0.18 not assigned.
  I1201 15:13:35.238017 49393 egressip.go:749] CloudPrivateIPConfigs for map[10.0.0.18:ip-10-0-32-25.us-east-2.compute.internal] found.
  I1201 15:13:35.238352 49393 egressip.go:754] Waiting for EgressIP addresses inside status of EgressIP CR e2e-test-egressip-zr9sr-13 for a maximum of 180 seconds
  I1201 15:13:35.461829 49393 egressip.go:768] Egress IP object e2e-test-egressip-zr9sr-13 does have all IPs for map[10.0.0.18:ip-10-0-32-25.us-east-2.compute.internal].
    STEP: Rebooting the node gracefully @ 12/01/25 15:13:35.462
  I1201 15:13:35.693790   49393 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
  I1201 15:13:35.914827   49393 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
    STEP: Waiting for the node to become NotReady @ 12/01/25 15:13:35.915
  I1201 15:13:35.915515 49393 wait.go:119] Waiting up to 10m0s for node ip-10-0-32-25.us-east-2.compute.internal condition Ready to be false
  I1201 15:13:36.611183 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:13:39.150109 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:13:41.591690 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:13:44.050294 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:13:46.816057 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:13:49.472453 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:13:51.912577 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:13:54.353124 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:13:56.791535 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:13:59.444843 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:02.317051 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:04.976979 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:08.052376 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:10.930526 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:13.977484 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:16.415336 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:18.857500 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:21.336830 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:23.773316 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:26.212333 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:28.652102 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:31.092668 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:33.529537 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:35.968240 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:38.411345 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:40.845614 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:43.283405 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:45.718986 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:48.152280 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:50.702219 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:53.142039 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:55.580616 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:14:58.133277 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:15:00.578326 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:15:03.018971 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:15:05.454309 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:15:08.111992 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:15:10.548813 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:15:12.986693 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:15:15.421492 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:15:17.965942 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
  I1201 15:15:20.423904 49393 resource.go:140] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status
    STEP: Waiting for the node to become Ready again after reboot @ 12/01/25 15:15:22.89
  I1201 15:15:22.891114 49393 wait.go:119] Waiting up to 15m0s for node ip-10-0-32-25.us-east-2.compute.internal condition Ready to be true
  I1201 15:15:23.329424 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:15:25.775339 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:15:28.213400 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:15:30.787020 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:15:33.340506 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:15:35.999785 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:15:38.656272 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:15:41.314360 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:15:43.755312 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:15:46.196257 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:15:48.685845 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:15:51.128172 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:15:53.927065 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:15:56.423792 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:15:59.243406 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:16:01.956677 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:16:04.608087 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:16:07.261347 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:16:09.702116 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:16:12.148568 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:16:14.588671 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:16:17.029601 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:16:19.470808 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:16:21.976508 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:16:24.412009 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:16:26.847305 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:16:29.286727 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:16:31.725241 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:16:34.238905 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:16:36.684705 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:16:39.175198 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:16:41.612807 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:16:44.151466 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST}]. Failure
  I1201 15:16:46.591789 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoExecute 2025-12-01 15:15:20 +0800 CST} {node.kubernetes.io/not-ready  NoSchedule 2025-12-01 15:16:46 +0800 CST}]. Failure
  I1201 15:16:49.035408 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready  NoSchedule 2025-12-01 15:16:46 +0800 CST} {node.kubernetes.io/not-ready  NoExecute 2025-12-01 15:16:46 +0800 CST}]. Failure
  I1201 15:16:51.526537 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready  NoSchedule 2025-12-01 15:16:46 +0800 CST} {node.kubernetes.io/not-ready  NoExecute 2025-12-01 15:16:46 +0800 CST}]. Failure
  I1201 15:16:54.000494 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready  NoSchedule 2025-12-01 15:16:46 +0800 CST} {node.kubernetes.io/not-ready  NoExecute 2025-12-01 15:16:46 +0800 CST}]. Failure
  I1201 15:16:56.659165 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready  NoSchedule 2025-12-01 15:16:46 +0800 CST} {node.kubernetes.io/not-ready  NoExecute 2025-12-01 15:16:46 +0800 CST}]. Failure
  I1201 15:16:59.361794 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready  NoSchedule 2025-12-01 15:16:46 +0800 CST} {node.kubernetes.io/not-ready  NoExecute 2025-12-01 15:16:46 +0800 CST}]. Failure
  I1201 15:17:01.857325 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready  NoSchedule 2025-12-01 15:16:46 +0800 CST} {node.kubernetes.io/not-ready  NoExecute 2025-12-01 15:16:46 +0800 CST}]. Failure
  I1201 15:17:04.374759 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready  NoExecute 2025-12-01 15:16:46 +0800 CST}]. Failure
  I1201 15:17:06.817091 49393 resource.go:131] Condition Ready of node ip-10-0-32-25.us-east-2.compute.internal is true, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready  NoExecute 2025-12-01 15:16:46 +0800 CST}]. Failure
    STEP: Get capacity of the Egress node after reboot @ 12/01/25 15:17:09.254
  I1201 15:17:09.254576 49393 egressip.go:633] The capacity of node ip-10-0-32-25.us-east-2.compute.internal after reboot is {14 15 0}
    STEP: Comparing capacity before and after reboot @ 12/01/25 15:17:09.254
    STEP: Restarting the CNCC pod @ 12/01/25 15:17:09.254
  I1201 15:17:09.254633 49393 egressip_helpers.go:1744] Restarting CNCC pod by deleting it
  I1201 15:17:11.844160 49393 egressip_helpers.go:1753] Waiting for the CNCC pod to become Ready again after restart
    STEP: Get capacity of the Egress node after CNCC restart @ 12/01/25 15:17:13.281
  I1201 15:17:13.282090 49393 egressip.go:647] The capacity of node ip-10-0-32-25.us-east-2.compute.internal after CNCC restart is {14 15 0}
    STEP: Comparing capacity before and after CNCC restart @ 12/01/25 15:17:13.282
    STEP: Deleting all EgressIPs objects @ 12/01/25 15:17:13.282
  I1201 15:17:22.693859 49393 client.go:681] Deleted {user.openshift.io/v1, Resource=users  e2e-test-egressip-zr9sr-user}, err: <nil>
  I1201 15:17:22.916501 49393 client.go:681] Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-egressip-zr9sr}, err: <nil>
  I1201 15:17:23.143682 49393 client.go:681] Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~Kn3pT4hs28RjKL_qXKLDKytjB97fjTZHrfkEsmDmRf4}, err: <nil>
  I1201 15:17:23.649987 49393 client.go:681] Deleted {user.openshift.io/v1, Resource=users  e2e-test-egressip-fdb7c-user}, err: <nil>
  I1201 15:17:23.876942 49393 client.go:681] Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-egressip-fdb7c}, err: <nil>
  I1201 15:17:24.101110 49393 client.go:681] Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~Uj1WV5gzwGEoFuZSxfra7x2dxz2phZYb8C9-5X07rXE}, err: <nil>
    STEP: Deleting the EgressIP object if it exists @ 12/01/25 15:17:24.101
    STEP: Removing the EgressIP assignable annotation @ 12/01/25 15:17:24.102
    STEP: Removing the temp directory @ 12/01/25 15:17:28.743
    STEP: Destroying namespace "e2e-test-egressip-zr9sr" for this suite. @ 12/01/25 15:17:28.747
    STEP: Destroying namespace "e2e-test-egressip-fdb7c" for this suite. @ 12/01/25 15:17:28.971
  • [390.269 seconds]
  ------------------------------

  Ran 1 of 1 Specs in 390.270 seconds
  SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
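
The pass criterion visible at egressip.go:583, 633, and 647 above is that the capacity tuple ({14 15 0} here: ipv4, ipv6, and what appears to be a combined ip field) is identical before the reboot, after the reboot, and after the CNCC restart. A minimal sketch of that comparison, with assumed field names and the values taken from this log:

  package main

  import "fmt"

  // capacity models the {ipv4 ipv6 ip} tuple printed by the test;
  // the field names are assumptions, not the PR's actual types.
  type capacity struct {
      IPv4, IPv6, IP int
  }

  func main() {
      before := capacity{14, 15, 0}      // egressip.go:583, before reboot
      afterReboot := capacity{14, 15, 0} // egressip.go:633, after reboot
      afterCNCC := capacity{14, 15, 0}   // egressip.go:647, after CNCC restart

      for name, after := range map[string]capacity{
          "reboot":       afterReboot,
          "CNCC restart": afterCNCC,
      } {
          if before != after {
              fmt.Printf("FAIL: capacity changed after %s: %v -> %v\n", name, before, after)
              return
          }
      }
      fmt.Println("capacity unchanged across reboot and CNCC restart")
  }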

@openshift-ci-robot

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will use /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: automatic mode

@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Dec 1, 2025
@openshift-ci openshift-ci bot requested review from arkadeepsen and tssurya December 1, 2025 08:44

openshift-ci bot commented Dec 1, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: huiran0826
Once this PR has been reviewed and has the lgtm label, please assign tssurya for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot

Scheduling required tests:
/test e2e-aws-csi
/test e2e-aws-ovn-fips
/test e2e-aws-ovn-microshift
/test e2e-aws-ovn-microshift-serial
/test e2e-aws-ovn-serial-1of2
/test e2e-aws-ovn-serial-2of2
/test e2e-gcp-csi
/test e2e-gcp-ovn
/test e2e-gcp-ovn-upgrade
/test e2e-metal-ipi-ovn-ipv6
/test e2e-vsphere-ovn
/test e2e-vsphere-ovn-upi

openshift-trt bot commented Dec 1, 2025

Risk analysis has seen new tests most likely introduced by this PR.
Please ensure that new tests meet guidelines for naming and stability.

New Test Risks for sha: 44d742c

Job Name | New Test Risk
pull-ci-openshift-origin-main-e2e-aws-ovn-serial-2of2 | Medium - "[sig-network][Feature:EgressIP][apigroup:operator.openshift.io] [external-targets][apigroup:user.openshift.io][apigroup:security.openshift.io] Rebooting a node/Restarting CNCC pod should not change the EgressIPs capacity [Serial] [Suite:openshift/conformance/serial]" is a new test, and was only seen in one job.

New tests seen in this PR at sha: 44d742c

  • "[sig-network][Feature:EgressIP][apigroup:operator.openshift.io] [external-targets][apigroup:user.openshift.io][apigroup:security.openshift.io] Rebooting a node/Restarting CNCC pod should not change the EgressIPs capacity [Serial] [Suite:openshift/conformance/serial]" [Total: 1, Pass: 1, Fail: 0, Flake: 0]

@openshift-ci-robot

Scheduling required tests:
/test e2e-aws-csi
/test e2e-aws-ovn-fips
/test e2e-aws-ovn-microshift
/test e2e-aws-ovn-microshift-serial
/test e2e-aws-ovn-serial-1of2
/test e2e-aws-ovn-serial-2of2
/test e2e-gcp-csi
/test e2e-gcp-ovn
/test e2e-gcp-ovn-upgrade
/test e2e-metal-ipi-ovn-ipv6
/test e2e-vsphere-ovn
/test e2e-vsphere-ovn-upi

openshift-trt bot commented Dec 3, 2025

Risk analysis has seen new tests most likely introduced by this PR.
Please ensure that new tests meet guidelines for naming and stability.

New Test Risks for sha: 8e4959a

Job Name | New Test Risk
pull-ci-openshift-origin-main-e2e-aws-ovn-serial-2of2 | Medium - "[sig-network][Feature:EgressIP][apigroup:operator.openshift.io] [external-targets][apigroup:user.openshift.io][apigroup:security.openshift.io] Rebooting a node/Restarting CNCC pod should not change the EgressIPs capacity [Serial] [Suite:openshift/conformance/serial]" is a new test, and was only seen in one job.

New tests seen in this PR at sha: 8e4959a

  • "[sig-network][Feature:EgressIP][apigroup:operator.openshift.io] [external-targets][apigroup:user.openshift.io][apigroup:security.openshift.io] Rebooting a node/Restarting CNCC pod should not change the EgressIPs capacity [Serial] [Suite:openshift/conformance/serial]" [Total: 1, Pass: 1, Fail: 0, Flake: 0]

@huiran0826 huiran0826 changed the title from "WIP: add test coverage for bug OCPBUGS-63348" to "add test coverage for bug OCPBUGS-63348" on Dec 5, 2025
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Dec 5, 2025
@openshift-ci-robot

Scheduling required tests:
/test e2e-aws-csi
/test e2e-aws-ovn-fips
/test e2e-aws-ovn-microshift
/test e2e-aws-ovn-microshift-serial
/test e2e-aws-ovn-serial-1of2
/test e2e-aws-ovn-serial-2of2
/test e2e-gcp-csi
/test e2e-gcp-ovn
/test e2e-gcp-ovn-upgrade
/test e2e-metal-ipi-ovn-ipv6
/test e2e-vsphere-ovn
/test e2e-vsphere-ovn-upi


openshift-ci bot commented Dec 5, 2025

@huiran0826: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
ci/prow/e2e-vsphere-ovn | 1d49d1c | link | true | /test e2e-vsphere-ovn
ci/prow/e2e-aws-ovn-serial-2of2 | 1d49d1c | link | true | /test e2e-aws-ovn-serial-2of2

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

openshift-trt bot commented Dec 5, 2025

Risk analysis has seen new tests most likely introduced by this PR.
Please ensure that new tests meet guidelines for naming and stability.

New Test Risks for sha: 1d49d1c

Job Name | New Test Risk
pull-ci-openshift-origin-main-e2e-aws-ovn-serial-2of2 | Medium - "[sig-network][Feature:EgressIP][apigroup:operator.openshift.io] [external-targets][apigroup:user.openshift.io][apigroup:security.openshift.io] Rebooting a node/Restarting CNCC pod should not change the EgressIPs capacity [Serial] [Suite:openshift/conformance/serial]" is a new test, and was only seen in one job.

New tests seen in this PR at sha: 1d49d1c

  • "[sig-network][Feature:EgressIP][apigroup:operator.openshift.io] [external-targets][apigroup:user.openshift.io][apigroup:security.openshift.io] Rebooting a node/Restarting CNCC pod should not change the EgressIPs capacity [Serial] [Suite:openshift/conformance/serial]" [Total: 1, Pass: 1, Fail: 0, Flake: 0]
