Relates to changes introduced here: #2922
Originally posted by @ff137 in #2922 (comment)
> The upgrading middleware will now use a cache during upgrades and after the upgrade has been completed. Agents that have the upgrade middleware but never upgrade will still have a very minor performance hit, due to needing to query the is_upgrading record.
Regarding that "minor performance hit": because `upgrade_middleware` is always added to the web application's set of middleware, askar is queried for `acapy_upgrading` records on every single incoming request.

I picked this up while testing with additional debug logging (#3689). With every `/status/live` or `/status/ready` check we do, we get:
```
2025-04-30 10:52:50,958 acapy_agent.admin.server DEBUG Incoming request: GET /status/live
2025-04-30 10:52:50,963 acapy_agent.storage.askar DEBUG Fetching all records for acapy_upgrading with tag query None
2025-04-30 10:52:50,971 aiohttp.access INFO 127.0.0.6 [30/Apr/2025:10:52:50 +0000] "GET /status/live HTTP/1.1" 200 194 "-" "kube-probe/1.32"
2025-04-30 10:52:50,977 acapy_agent.admin.server DEBUG Incoming request: GET /status/ready
2025-04-30 10:52:50,982 acapy_agent.storage.askar DEBUG Fetching all records for acapy_upgrading with tag query None
2025-04-30 10:52:50,988 aiohttp.access INFO 127.0.0.6 [30/Apr/2025:10:52:50 +0000] "GET /status/ready HTTP/1.1" 200 194 "-" "kube-probe/1.32"
```
That means an extra, unnecessary askar session and connection is opened on every single API call.

I don't think this is just a minor performance hit; it's really not the way this should have been implemented.
Can we please revisit this, and see if there's a different way to handle the "upgrading" logic?
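One possible direction, as a minimal sketch only: cache the result of the "is an upgrade in progress?" lookup in memory once it has come back negative, so the storage backend is queried at most once instead of on every request. The `UpgradeGate` class and `fake_query` callable below are hypothetical names, not ACA-Py's actual API; the real middleware would need to handle cache invalidation when an upgrade actually starts.

```python
import asyncio


class UpgradeGate:
    """Hypothetical helper: memoize a negative upgrade check.

    Once the backend reports that no upgrade is in progress, remember
    that and stop querying storage on subsequent requests.
    """

    def __init__(self, fetch_is_upgrading):
        # fetch_is_upgrading: async callable standing in for the askar
        # query for acapy_upgrading records.
        self._fetch = fetch_is_upgrading
        self._known_not_upgrading = False
        self.query_count = 0  # for demonstration: how often storage was hit

    async def check(self) -> bool:
        """Return True if the request may proceed (no upgrade running)."""
        if self._known_not_upgrading:
            return True  # cached result: no storage round-trip
        self.query_count += 1
        if await self._fetch():
            return False  # upgrade in progress; keep checking each call
        self._known_not_upgrading = True
        return True


async def fake_query() -> bool:
    # Simulated askar lookup: no acapy_upgrading record found.
    return False


gate = UpgradeGate(fake_query)
results = [asyncio.run(gate.check()) for _ in range(100)]
print(all(results), gate.query_count)  # 100 requests, storage queried once
```

With this approach the kube-probe `/status/live` and `/status/ready` calls above would stop opening an askar session per request; only the first request after startup (or after an upgrade completes) would pay for the lookup.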