When dealing with very large datasets (e.g., 20,000+ items), the current implementation loads the entire merged list into memory (or into storage), which could cause performance issues, out-of-memory errors, or SQLite record-size exceptions.
While Relay-style cursor-based pagination may make this scenario unlikely, offset-based pagination (assuming some offset connection merge policy exists), local population, or alternative initial-sync back-fills might surface these use cases.
Potential Solutions:
• Implement windowing in the cache to only keep a subset of items in memory
• Add support for pagination boundaries to limit the maximum number of items stored
• Implement a sliding window approach that discards items that are far from the current view
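As a rough illustration of the sliding-window idea, a connection merge policy could cap the number of stored edges, discarding the edges furthest from the most recently fetched page. This is only a sketch; `Edge` and `mergeWindowed` are hypothetical names, not part of any existing API:

```typescript
interface Edge<T> {
  cursor: string;
  node: T;
}

// Merge an incoming page into the cached edges, but keep at most
// `windowSize` edges, dropping those furthest from the new page.
function mergeWindowed<T>(
  existing: Edge<T>[],
  incoming: Edge<T>[],
  windowSize: number,
): Edge<T>[] {
  const merged = [...existing, ...incoming];
  return merged.length > windowSize
    ? merged.slice(merged.length - windowSize)
    : merged;
}
```

The same shape would work for backward pagination by trimming from the tail instead of the head.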
Key to this (from what I see) is the idea that records are merged into the initial response's first record `edges`|`nodes` fields. It seems like we may benefit from some windowed record cache chained above the current caches to help manage this. Curious what others' thoughts are on this.
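To make the "windowed record cache chained above the current caches" idea concrete, here is a minimal sketch, assuming a chained-cache interface roughly like the one below (`RecordCache`, `WindowedCache`, and `MemoryCache` are all illustrative names, not the library's actual types). The windowing layer passes reads and writes through to its delegate but evicts the least-recently-touched records once a cap is exceeded:

```typescript
// Hypothetical chained-cache interface; illustrative only.
interface RecordCache {
  read(key: string): unknown;
  write(key: string, value: unknown): void;
  remove(key: string): void;
}

class MemoryCache implements RecordCache {
  private store = new Map<string, unknown>();
  read(key: string) { return this.store.get(key); }
  write(key: string, value: unknown) { this.store.set(key, value); }
  remove(key: string) { this.store.delete(key); }
}

// Windowing layer: chained above an existing cache, it tracks recency
// and evicts records from the delegate beyond `maxRecords`.
class WindowedCache implements RecordCache {
  private recency: string[] = [];
  constructor(private delegate: RecordCache, private maxRecords: number) {}

  private touch(key: string) {
    this.recency = this.recency.filter((k) => k !== key);
    this.recency.push(key);
    while (this.recency.length > this.maxRecords) {
      this.delegate.remove(this.recency.shift()!);
    }
  }

  read(key: string) {
    const value = this.delegate.read(key);
    if (value !== undefined) this.touch(key);
    return value;
  }

  write(key: string, value: unknown) {
    this.delegate.write(key, value);
    this.touch(key);
  }

  remove(key: string) {
    this.recency = this.recency.filter((k) => k !== key);
    this.delegate.remove(key);
  }
}
```

A real implementation would need to be careful not to evict records still referenced by an active watcher, but the chaining shape is the point here.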
I think you've already been looking at some of this concept in #121 (comment). Sorry if this is a dupe.