fix: shutdown connect on relevant settings changes #1413
base: main
Conversation
Force-pushed 8d0bb7a to 9f42738, then 9f42738 to 823aa27.
Added a couple of comments, but they are not blockers.
# Warm cache to avoid Cachex and Ecto.Sandbox ownership issues
Cachex.put!(Cache, {{:get_tenant_by_external_id, 1}, [tenant1.external_id]}, {:cached, tenant1})
Cachex.put!(Cache, {{:get_tenant_by_external_id, 1}, [tenant2.external_id]}, {:cached, tenant2})
Given that it's running with async: false, this can probably be removed completely.
That will be changed 😂 I forgot.
For some reason it's breaking with async: true, so I'm going to avoid it for now.
Right, that's what I meant. If you are leaving async: false, then just remove the Cachex.put calls here. They shouldn't be needed.
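For reference, a minimal sketch of what this suggestion would look like, assuming the test uses the app's standard data case template (the module and fixture names below are assumptions for illustration, not the PR's actual test file):

```elixir
# Hedged sketch: with async: false the test runs against the shared Cachex instance
# and shared Ecto sandbox, so the warm-cache Cachex.put!/3 calls can simply be dropped.
defmodule Realtime.ApiTest do
  # Case template name is an assumption.
  use Realtime.DataCase, async: false

  setup do
    tenant1 = tenant_fixture()
    tenant2 = tenant_fixture()
    # No Cachex.put! warm-up needed; the cache is populated on first lookup.
    %{tenant1: tenant1, tenant2: tenant2}
  end
end
```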
lib/realtime/api.ex (outdated)
})
when is_map_key(changes, :jwt_jwks) or is_map_key(changes, :jwt_secret) do
  Phoenix.PubSub.broadcast!(Realtime.PubSub, "realtime:operations:" <> external_id, :disconnect)
  maybe_invalidate_cache(changeset)
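For context, the broadcast above is only half the picture: a Connect process has to be subscribed to the same topic and handle the :disconnect message. A minimal sketch of what that consumer side could look like, assuming a GenServer-based Connect process (the callback shapes below are assumptions, not the actual Connect implementation):

```elixir
# Hypothetical consumer side of the broadcast above.
def init(%{tenant_id: external_id} = state) do
  # Subscribe to the same per-tenant operations topic used by the broadcast.
  :ok = Phoenix.PubSub.subscribe(Realtime.PubSub, "realtime:operations:" <> external_id)
  {:ok, state}
end

def handle_info(:disconnect, state) do
  # Stop normally; the next lookup_or_start_connection re-reads tenant settings.
  {:stop, :normal, state}
end
```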
Is there a race condition here?
What if the invalidation takes a long time and the DB reconnects before the Tenant cache has been discarded? 🤔 That way it might still connect to the database with the old settings.
Honestly I have no clue ... we might just need to trust Cachex? 😅
defp maybe_invalidate_cache(%Ecto.Changeset{changes: changes, valid?: true, data: %{external_id: external_id}})
     when changes != %{} do
  Tenants.Cache.invalidate_tenant_cache(external_id)
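The next comment asks whether this should use distributed_invalidate_tenant_cache instead. A minimal sketch of how a distributed invalidation could work, assuming an :erpc multicall over connected nodes (the function bodies and the cache key shape are assumptions, not the actual Tenants.Cache code):

```elixir
# Hedged sketch: clear the cached tenant on every connected node, not just locally.
def distributed_invalidate_tenant_cache(external_id) do
  nodes = Node.list([:this, :visible])
  :erpc.multicall(nodes, __MODULE__, :invalidate_tenant_cache, [external_id])
  :ok
end

def invalidate_tenant_cache(external_id) do
  # Cache name and key shape mirror the warm-cache puts in the test above; assumed, not verified.
  Cachex.del(__MODULE__, {{:get_tenant_by_external_id, 1}, [external_id]})
  :ok
end
```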
Should this be calling distributed_invalidate_tenant_cache? Otherwise a remote node won't have the new settings when Connect.lookup_or_start_connection is called the next time.

And that's what I mean by race condition:
- Tell all nodes to invalidate the cache (broadcast)
- Tell Connect to shut down (broadcast)

There is no real guarantee which of the two above will be processed first on each node. Then at some point Connect.lookup_or_start_connection will be called, and it's not 100% clear that the new settings will be available by then, if that makes sense. 🤔
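One way to sidestep that ordering question, sketched here as an alternative rather than what this PR does: perform the distributed cache invalidation synchronously first, and only then broadcast the shutdown, so any reconnect triggered by the shutdown can only ever read fresh settings. The helper name below is hypothetical.

```elixir
# Hedged sketch: sequence the two steps instead of racing two independent broadcasts.
defp invalidate_then_disconnect(external_id) do
  # 1. Synchronously clear the tenant cache on all nodes (names assumed as above).
  Tenants.Cache.distributed_invalidate_tenant_cache(external_id)

  # 2. Only now ask Connect processes to shut down; a restart re-reads tenant settings.
  Phoenix.PubSub.broadcast!(Realtime.PubSub, "realtime:operations:" <> external_id, :disconnect)
end
```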
What kind of change does this PR introduce?
Shut down Connect on relevant settings changes.