This is the outcome of an MCP server created with the help of Cursor AI, using https://github.com/kubearchive/kubearchive as part of the context.
It successfully created three tools, one per endpoint: healthz, get_resources and get_logs.
So far I've experimented with the get_resources MCP tool, and I've seen that the KubeArchive MCP server is probably
not good enough at parsing the KubeArchive API output.
With KubeArchive deployed on a local cluster and several pods, jobs and a cronjob archived (31 pods in particular), I had this chat.
To sum up, I wasn't able to get the AI to report how many pods were archived by inspecting the response from the KubeArchive API,
which is structured JSON, but it was able to successfully pipe the output through a jq call when I specifically asked
for it.
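As a hedged sketch of what that jq step looks like: assuming the KubeArchive API returns a Kubernetes-style List object with an `.items` array (the JSON below is a made-up stand-in for a real API response), counting the archived pods is a one-liner:

```shell
# Hypothetical KubeArchive API response: a Kubernetes-style List with an
# `.items` array. In a real session this would come from a curl call to
# the get_resources endpoint instead of an inline string.
response='{"kind":"PodList","items":[{"metadata":{"name":"pod-a"}},{"metadata":{"name":"pod-b"}},{"metadata":{"name":"pod-c"}}]}'

# Count the archived pods by measuring the length of the items array.
echo "$response" | jq '.items | length'
# → 3
```

This is the kind of trivially structured query the agent could not derive on its own from the raw JSON, which is what motivates adding a dedicated JSON-parsing capability below.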
My guess is that this could be improved by adding other MCP servers for JSON parsing.
It makes sense to add more MCP servers to the equation and see how they all work together:
- JSON parsing
- RAG with the kubearchive docs
- Kubernetes MCP
- KubeArchive MCP
That would probably allow us to create an agent able to successfully parse the configuration and the output of the KubeArchive API.