
Conversation

@AkihiroSuda (Member) commented Oct 1, 2025

@AkihiroSuda added this to the v2.0.0 milestone Oct 1, 2025
@AkihiroSuda force-pushed the fix-3237 branch 2 times, most recently from 2e45090 to f74fbdf on October 1, 2025 10:42
@AkihiroSuda (Member, Author) commented:
Before (964fb30)

$ du -hs _output/
128M    _output/

$ ls -lh _output/bin/limactl _output/share/lima/lima-guestagent.Linux-*
-rwxr-xr-x@ 1 suda  staff    28M Oct  1 19:47 _output/bin/limactl*
-rw-r--r--@ 1 suda  staff    14M Oct  1 19:47 _output/share/lima/lima-guestagent.Linux-aarch64.gz
-rw-r--r--@ 1 suda  staff    15M Oct  1 19:47 _output/share/lima/lima-guestagent.Linux-armv7l.gz
-rw-r--r--@ 1 suda  staff    14M Oct  1 19:47 _output/share/lima/lima-guestagent.Linux-ppc64le.gz
-rw-r--r--@ 1 suda  staff    15M Oct  1 19:47 _output/share/lima/lima-guestagent.Linux-riscv64.gz
-rw-r--r--@ 1 suda  staff    16M Oct  1 19:47 _output/share/lima/lima-guestagent.Linux-s390x.gz
-rw-r--r--@ 1 suda  staff    16M Oct  1 19:47 _output/share/lima/lima-guestagent.Linux-x86_64.gz

After (f74fbdf)

$ du -hs _output/
 88M    _output/

$ ls -lh _output/bin/limactl _output/share/lima/lima-guestagent.Linux-*
-rwxr-xr-x@ 1 suda  staff    28M Oct  1 19:49 _output/bin/limactl*
-rw-r--r--@ 1 suda  staff   8.4M Oct  1 19:49 _output/share/lima/lima-guestagent.Linux-aarch64.gz
-rw-r--r--@ 1 suda  staff   8.8M Oct  1 19:49 _output/share/lima/lima-guestagent.Linux-armv7l.gz
-rw-r--r--@ 1 suda  staff   8.4M Oct  1 19:49 _output/share/lima/lima-guestagent.Linux-ppc64le.gz
-rw-r--r--@ 1 suda  staff   8.8M Oct  1 19:49 _output/share/lima/lima-guestagent.Linux-riscv64.gz
-rw-r--r--@ 1 suda  staff   9.2M Oct  1 19:49 _output/share/lima/lima-guestagent.Linux-s390x.gz
-rw-r--r--@ 1 suda  staff   9.3M Oct  1 19:49 _output/share/lima/lima-guestagent.Linux-x86_64.gz

@jandubois (Member) commented:
I have not looked at this PR at all yet, but wanted to mention a couple of things I discussed with @Nino-K as requirements for his port monitoring PR:

  • Keep retrying to connect to k8s indefinitely, as it will not be running yet by the time the guest agent starts.
  • When the connection breaks, keep trying to reconnect with a short delay indefinitely, as the user may have stopped and restarted k8s.

For this PR also: the kubectl binary may not yet be available on the PATH when you try to invoke it, so keep trying. It may also fail because the port is open but the apiserver is not yet responding, or because the kubeconfig is missing, etc. The retry on broken connections should handle this automatically.

Because of the indefinite retries, the kubernetes watcher should be opt-in (configurable in lima.yaml), so it only runs when the VM is known to run Kubernetes.
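
A minimal Go sketch of the retry behavior requested above, under stated assumptions: the helper name, the kubectl flags, and the 10-second interval are illustrative choices, not taken from the PR. The idea is simply to poll until kubectl appears on PATH, run a watch, and restart it after a short delay whenever the command exits (apiserver not ready, kubeconfig missing, Kubernetes stopped and restarted, etc.).

// Hypothetical sketch only; the function name, flags, and interval are assumptions.
package main

import (
	"context"
	"log"
	"os"
	"os/exec"
	"time"
)

// watchServicesForever keeps a `kubectl get services --watch` process running,
// retrying indefinitely whether kubectl is missing from PATH, the apiserver is
// not yet responding, or an established connection breaks later.
func watchServicesForever(ctx context.Context) {
	const retryInterval = 10 * time.Second
	for {
		if _, err := exec.LookPath("kubectl"); err == nil {
			cmd := exec.CommandContext(ctx, "kubectl", "get", "services",
				"--all-namespaces", "--watch", "--output=json")
			cmd.Stdout = os.Stdout // the real agent would decode this stream instead
			if err := cmd.Run(); err != nil {
				log.Printf("kubectl watch exited: %v", err)
			}
		}
		// kubectl missing, apiserver not ready, or the watch broke: retry after a delay.
		select {
		case <-ctx.Done():
			return
		case <-time.After(retryInterval):
		}
	}
}

func main() {
	watchServicesForever(context.Background())
}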

@AkihiroSuda marked this pull request as ready for review October 3, 2025 06:53
@AkihiroSuda (Member, Author) commented:
The retry on broken connections should handle this automatically.

Yes, this is retried.

Because of the indefinite retries, the kubernetes watcher should be opt-in

It hasn't been opt-in so far, and I don't think it needs to be, as the overhead of polling LookPath("kubectl") seems trivial.

@nirs (Member) left a comment:
Trimming the guest agent is nice; anything using client-go becomes huge quickly. But did you measure memory and CPU usage before and after this change?

With the new code we always keep a kubectl watch command running, which has similar CPU usage to what we had before in the guest agent, but now we format the JSON events and parse them back in the guest agent, and keep all services in memory twice: once in kubectl (using the informer) and once in the guest agent.
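
For illustration, a rough sketch of the duplication being described, assuming hand-rolled types rather than the PR's actual code (the struct fields, package name, and cache layout are assumptions): the guest agent decodes the JSON stream written by kubectl and keeps its own service map, while kubectl's informer holds the same objects in its own process.

// Hypothetical sketch; field names and the cache layout are assumptions.
package kubewatch

import (
	"encoding/json"
	"errors"
	"io"
	"sync"
)

// service is a minimal stand-in for the fields the agent would care about,
// decoded from the output of `kubectl get services --watch -o json`.
type service struct {
	Metadata struct {
		Namespace string `json:"namespace"`
		Name      string `json:"name"`
	} `json:"metadata"`
	Spec struct {
		Ports []struct {
			Port     int32  `json:"port"`
			NodePort int32  `json:"nodePort"`
			Protocol string `json:"protocol"`
		} `json:"ports"`
	} `json:"spec"`
}

// cache is the guest agent's copy of the service list; kubectl's informer
// keeps another copy of the same objects on its side.
type cache struct {
	mu       sync.Mutex
	services map[string]service // keyed by "namespace/name"
}

func newCache() *cache {
	return &cache{services: map[string]service{}}
}

// consume reads the stream of JSON objects kubectl writes on watch updates
// and mirrors them into the agent's own map.
func (c *cache) consume(r io.Reader) error {
	dec := json.NewDecoder(r)
	for {
		var s service
		if err := dec.Decode(&s); err != nil {
			if errors.Is(err, io.EOF) {
				return nil
			}
			return err
		}
		c.mu.Lock()
		c.services[s.Metadata.Namespace+"/"+s.Metadata.Name] = s
		c.mu.Unlock()
	}
}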

@AkihiroSuda marked this pull request as draft October 6, 2025 06:13
@AkihiroSuda modified the milestones: v2.0.0, v2.1.0 (?) Oct 16, 2025
Part of issue 3237

TODO: drop dependency on k8s.io/api

Signed-off-by: Akihiro Suda <[email protected]>
@AkihiroSuda marked this pull request as ready for review November 19, 2025 08:31
@AkihiroSuda (Member, Author) commented:
Trimming the guest agent is nice; anything using client-go becomes huge quickly. But did you measure memory and CPU usage before and after this change?

Before

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
   1589 root      20   0 1284200  63488  36992 S   0.0   1.6   0:10.21 lima-guestagent

After

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
   1192 root      20   0 1254740  41600  19200 S   0.0   1.0   0:01.72 lima-guestagent
   2932 root      20   0 1284688  44404  33536 S   0.0   1.1   0:00.02 kubectl
