fix(entity-cache): support several key directives on the same type #7207


Closed
wants to merge 6 commits

Conversation

@bnjjj bnjjj (Contributor) commented Apr 10, 2025

This PR fixes a bug in entity caching, introduced by the fix in #6888, for cases where several @key directives with different fields are declared on the same type (a pattern documented in the Apollo Federation docs).

For example, if you have this kind of entity in your schema:

type Product @key(fields: "upc") @key(fields: "sku") {
  upc: ID!
  sku: ID!
  name: String
}
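
An entity fetch for this type may send either key in its representations, so the entity cache has to derive its key from whichever key fields were actually sent. A minimal illustration of the two representation shapes (serde_json stands in for the router's value type; the values are made up):

use serde_json::json;

fn main() {
    // a fetch planned against @key(fields: "upc") sends this representation:
    let by_upc = json!({ "__typename": "Product", "upc": "u-1" });
    // a fetch planned against @key(fields: "sku") sends this one instead:
    let by_sku = json!({ "__typename": "Product", "sku": "s-1" });
    // the cache key must be built from the key fields that are present,
    // not from a single fixed @key assumed for the type
    assert!(by_upc.get("upc").is_some());
    assert!(by_sku.get("sku").is_some());
}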

Checklist

Complete the checklist (and note appropriate exceptions) before the PR is marked ready-for-review.

  • Changes are compatible [1]
  • Documentation [2] completed
  • Performance impact assessed and acceptable
  • Tests added and passing [3]
    • Unit Tests
    • Integration Tests
    • Manual Tests

Exceptions

Note any exceptions here

Notes

Footnotes

  1. It may be appropriate to bring upcoming changes to the attention of other (impacted) groups. Please endeavour to do this before seeking PR approval. The mechanism for doing this will vary considerably, so use your judgement as to how and when to do this.

  2. Configuration is an important part of many changes. Where applicable please try to document configuration examples.

  3. Tick whichever testing boxes are applicable. If you are adding Manual Tests, please document the manual testing (extensively) in the Exceptions.

@bnjjj bnjjj requested a review from a team as a code owner April 10, 2025 11:06
@svc-apollo-docs (Collaborator) commented Apr 10, 2025

✅ Docs preview has no changes

The preview was not built because there were no changes.

Build ID: bd25f776418f31d80fee64cb

@bnjjj bnjjj requested a review from a team April 10, 2025 11:06

router-perf bot commented Apr 10, 2025

CI performance tests

  • connectors-const - Connectors stress test that runs with a constant number of users
  • const - Basic stress test that runs with a constant number of users
  • demand-control-instrumented - A copy of the step test, but with demand control monitoring and metrics enabled
  • demand-control-uninstrumented - A copy of the step test, but with demand control monitoring enabled
  • enhanced-signature - Enhanced signature enabled
  • events - Stress test for events with a lot of users and deduplication ENABLED
  • events_big_cap_high_rate - Stress test for events with a lot of users, deduplication enabled and high rate event with a big queue capacity
  • events_big_cap_high_rate_callback - Stress test for events with a lot of users, deduplication enabled and high rate event with a big queue capacity using callback mode
  • events_callback - Stress test for events with a lot of users and deduplication ENABLED in callback mode
  • events_without_dedup - Stress test for events with a lot of users and deduplication DISABLED
  • events_without_dedup_callback - Stress test for events with a lot of users and deduplication DISABLED using callback mode
  • extended-reference-mode - Extended reference mode enabled
  • large-request - Stress test with a 1 MB request payload
  • no-tracing - Basic stress test, no tracing
  • reload - Reload test over a long period of time at a constant rate of users
  • step-jemalloc-tuning - Clone of the basic stress test for jemalloc tuning
  • step-local-metrics - Field stats that are generated from the router rather than FTV1
  • step-with-prometheus - A copy of the step test with the Prometheus metrics exporter enabled
  • step - Basic stress test that steps up the number of users over time
  • xlarge-request - Stress test with 10 MB request payload
  • xxlarge-request - Stress test with 100 MB request payload

Review thread on the PR's test code:

Arc::new(Schema::parse_and_validate(SCHEMA_REQUIRES, "test.graphql").unwrap());
let query = "query { topProducts { shippingEstimate price } }";

let subgraphs = MockedSubgraphs([
@bnjjj (Contributor Author) replied:

I'll do this in a follow-up PR, but good suggestion.

Member replied:

yeah that's fine :)

Signed-off-by: Benjamin <[email protected]>
@bnjjj bnjjj requested a review from a team as a code owner April 10, 2025 12:58
Review thread on the extract_cache_keys code:

reason: format!("can't get entity key {entity_key:?} in representations"),
})?;
representation_entity_keys.insert(key, value);
let entry = representation.remove_entry(entity_key.as_str());
@duckki duckki (Contributor) commented Apr 10, 2025

It appears that extract_cache_keys is trying to partition the representation value into a "key" part and a "requires" part, but it doesn't appear to be precise.

  • This PR puts every representation field that may appear in some @key directive into the key part. That is an over-approximation, since any given entity fetch only uses one particular @key directive at a time.
    • To make it precise, we just need to find the @key directive that matches the given fetch node (see the sketch below). Note there may be multiple matches due to FED-569; choose the smallest one in that case.
  • The get_entity_keys_from_supergraph_schema function collects only the root fields from the key fields selection set. That is imprecise, since a representation value may have a field with a mixed sub-selection containing both key selections and requires selections.
    • For now, this case can't happen due to a bug (FED-507: the QP fails).

It's unclear to me whether this imprecision is fine for entity caching and invalidation purposes.
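
A minimal sketch of the matching step suggested above, using illustrative types rather than the router's actual ones (matching_key and the field-path sets are hypothetical names):

use std::collections::BTreeSet;

// Pick the @key field set that matches a fetch node's input selections.
// `key_field_sets` would be parsed from the type's @key directives, and
// `input_fields` from the fields actually sent in the representations.
fn matching_key<'a>(
    key_field_sets: &'a [BTreeSet<String>],
    input_fields: &BTreeSet<String>,
) -> Option<&'a BTreeSet<String>> {
    key_field_sets
        .iter()
        // a @key matches when every one of its fields was sent as input
        .filter(|key| key.is_subset(input_fields))
        // several keys may match (FED-569); take the smallest one
        .min_by_key(|key| key.len())
}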

@duckki replied:

Example:

type T @key(fields: "k { a }") @key(fields: "k { b }") {
  k: K!
  data: Int! @requires(fields: "k { c }")
}

At the moment, we can't plan such a @requires overlapping with a @key due to a bug (FED-507), but it may be fixed in the future.

@duckki replied:

After reading #6888, I believe the entity key/requires partition has to be precise. Thus, we should fix both of these (a sketch of the second fix follows this comment):

  • Search for the matching @key directive and use it to single out the entity key fields.
  • Also, when we collect key fields, we should preserve the nested selection set structure.
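
A minimal sketch of that second fix: extract the key value from a representation while preserving nested structure and excluding @requires fields. serde_json stands in for the router's value type, extract_key_value is a hypothetical name, and only one level of nesting is handled for illustration:

use serde_json::{json, Map, Value};

fn extract_key_value(fields: &[(&str, Vec<&str>)], repr: &Map<String, Value>) -> Value {
    let mut out = Map::new();
    for (name, sub) in fields {
        match repr.get(*name) {
            // nested key field: keep only the sub-fields named by this @key
            Some(Value::Object(obj)) if !sub.is_empty() => {
                let nested: Map<String, Value> = sub
                    .iter()
                    .filter_map(|s| obj.get(*s).map(|v| (s.to_string(), v.clone())))
                    .collect();
                out.insert(name.to_string(), Value::Object(nested));
            }
            // scalar key field: take it as-is
            Some(v) => {
                out.insert(name.to_string(), v.clone());
            }
            None => {}
        }
    }
    Value::Object(out)
}

fn main() {
    // representation for @key(fields: "k { a }"); "c" was added by a @requires
    let repr = json!({ "k": { "a": 42, "c": 7 } });
    let key = extract_key_value(&[("k", vec!["a"])], repr.as_object().unwrap());
    assert_eq!(key, json!({ "k": { "a": 42 } }));
}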

@bnjjj (Contributor Author) replied:

Thanks @duckki! Would you have an example of the generated query plan for the entity you used in your example? I'm still not sure I understand the part where I should keep the selection set.

@duckki replied:

Schema & operation:

#----------------------------------------------------------
#!subgraph A

type K {
    a: Int!
    b: Int!
}

type T @key(fields: "k { a }") @key(fields: "k { b }") {
  k: K!
  r: Int! @external
  data: Int! @requires(fields: "r")
}

#----------------------------------------------------------
#!subgraph B

type K {
    a: Int! @shareable
    b: Int! @shareable
}

type T @key(fields: "k { a }") @key(fields: "k { b }") {
  k: K! @shareable
  r: Int!
}

type Query {
  start: T!
}

#----------------------------------------------------------
#!operation
{
    start {
        data
    }
}

The query plan:

QueryPlan {
  Sequence {
    Fetch(service: "B") {
      {
        start {
          __typename
          k {
            a
          }
          r
        }
      }
    },
    Flatten(path: "start") {
      Fetch(service: "A") {
        {
          ... on T {
            __typename
            k {
              a
            }
            r
          }
        } =>
        {
          ... on T {
            data
          }
        }
      },
    },
  },
}

@duckki duckki commented Apr 10, 2025

So, we will need to keep the response value corresponding to the @key field set of

            k {
              a
            }

in the entity key hash.

So, I think customers can invalidate such a key using a POST like

{
    "kind": "subgraph",
    "subgraph": "A",
    "type": "T",
    "key": {
        "k": {
          "a": 42
        }
    }
}
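
Connecting the two, a minimal sketch of how such a nested key value could feed the entity key hash; the hash function and key layout here are illustrative, not the router's actual format:

use serde_json::json;
use sha2::{Digest, Sha256};

fn main() {
    // the nested key value kept for @key(fields: "k { a }")
    let key_value = json!({ "k": { "a": 42 } });
    // hash its JSON form into the entity key hash
    let mut hasher = Sha256::new();
    hasher.update(serde_json::to_vec(&key_value).unwrap());
    let hash = hex::encode(hasher.finalize());
    // illustrative layout: subgraph, type, then the key hash
    println!("subgraph:A:T:{hash}");
}

An invalidation request like the POST above would then need to recompute the same hash from its "key" object, which is why the key/requires partition must be precise.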

@bnjjj (Contributor Author) replied:

OK, correct me if I'm wrong, but in the current implementation we take the root field, which is k here. So in the representation I guess we'll have something like "k": {"a": 42}. In our logic we call representation.remove("k"), which leaves us with "k": {"a": 42}, so we would have the whole data including the selection set. That would be good enough, right?
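
For concreteness, a minimal illustration of what that root-field removal produces, using serde_json in place of the router's value type and reusing duckki's earlier example where "k { c }" comes from a @requires:

use serde_json::{json, Value};

fn main() {
    // representation for @key(fields: "k { a }"), plus "c" from @requires("k { c }")
    let mut repr = json!({ "__typename": "T", "k": { "a": 42, "c": 7 } });
    // current approach: remove the whole root field "k"
    let key_value: Value = repr.as_object_mut().unwrap().remove("k").unwrap();
    // the nested structure is preserved, as described above...
    // ...but the @requires field "c" is dragged into the key as well,
    // which is the imprecision pointed out earlier in this thread
    assert_eq!(key_value, json!({ "a": 42, "c": 7 }));
}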

@dariuszkuc dariuszkuc (Member) commented Apr 11, 2025

I think @duckki's changes seem reasonable, but we still have an issue: given keys A + A + B, we will always pick just A. (That's the QP behavior anyway, so I guess this should be fine? If users introduce this change to the subgraph it will cause a schema reload, and if we invalidate the cache on schema reloads then we should be fine, as the QP behavior will match this logic.)

Long term we should update the QP to provide the @key + @requires info, so you don't need to re-parse those field sets and can just read them off directly from the FetchNode.

@bnjjj bnjjj (Contributor Author) commented Apr 11, 2025

Closing this one in favor of #7228
