
Support for large connection data sets. #146


Open
Marsjohn-11 opened this issue Apr 26, 2025 · 0 comments


Marsjohn-11 commented Apr 26, 2025

When dealing with very large data sets (e.g., 20,000+ items), the current implementation loads the entire merged list into memory (or into storage), which could cause performance issues, out-of-memory errors, or SQLite record exceptions.

While Relay's cursor-based pagination may make this scenario unlikely, offset-based pagination (assuming we add some offset-based connection merge policy), local population, or alternative initial-sync-style back-fills might still need to support these use cases.

Potential solutions:
• Implement windowing in the cache so that only a subset of items is kept in memory
• Add support for pagination boundaries that limit the maximum number of items stored
• Implement a sliding-window approach that discards items far from the current view (a minimal sketch follows this list)
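For the sliding-window point, here is a minimal, self-contained sketch of how a merged edge list could be capped around the index currently in view. This is not the library's API; `Edge` and `trimToWindow` are hypothetical names used only for illustration:

```kotlin
// Hypothetical sketch of a sliding window over a merged connection list.
data class Edge(val cursor: String, val nodeKey: String)

/**
 * Keeps at most [maxSize] edges, centered on [anchorIndex] (e.g. the index
 * the UI is currently displaying). Edges outside the window are dropped.
 */
fun List<Edge>.trimToWindow(anchorIndex: Int, maxSize: Int): List<Edge> {
    if (size <= maxSize) return this
    val half = maxSize / 2
    val start = (anchorIndex - half).coerceIn(0, size - maxSize)
    return subList(start, start + maxSize)
}

fun main() {
    val edges = (0 until 20_000).map { Edge(cursor = "c$it", nodeKey = "Item:$it") }
    // Keep only ~1,000 edges around the item currently on screen.
    val trimmed = edges.trimToWindow(anchorIndex = 15_000, maxSize = 1_000)
    println("kept ${trimmed.size} of ${edges.size} edges, first = ${trimmed.first().cursor}")
}
```

Items that fall outside the window are dropped from the cached list but remain reachable through their cursors, so they could be re-fetched on demand.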

Key to this (from what I see) is that records are merged into the edges|nodes fields of the initial response's first record. It seems like we might benefit from some windowed record layer chained above the current caches to help manage this; a rough sketch of that idea is below. Curious what others' thoughts are on this.
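To make the "windowed record chained above the current caches" idea concrete, here is a rough sketch under the assumption of a simple key/fields record store. `RecordCache`, `InMemoryRecordCache`, and `WindowedRecordCache` are illustrative stand-ins, not the library's actual types:

```kotlin
// Hypothetical sketch: a windowing layer chained in front of an existing cache.
interface RecordCache {
    fun read(key: String): Map<String, Any?>?
    fun write(key: String, fields: Map<String, Any?>)
}

class InMemoryRecordCache : RecordCache {
    private val records = mutableMapOf<String, Map<String, Any?>>()
    override fun read(key: String) = records[key]
    override fun write(key: String, fields: Map<String, Any?>) { records[key] = fields }
}

/**
 * Delegates to [next], but caps any list-valued field (e.g. a connection's
 * merged `edges` or `nodes`) at [maxListSize] before writing it downstream.
 */
class WindowedRecordCache(
    private val next: RecordCache,
    private val maxListSize: Int,
) : RecordCache {
    override fun read(key: String) = next.read(key)

    override fun write(key: String, fields: Map<String, Any?>) {
        val trimmed = fields.mapValues { (_, value) ->
            if (value is List<*> && value.size > maxListSize) value.takeLast(maxListSize) else value
        }
        next.write(key, trimmed)
    }
}
```

Chaining something like this above the memory and SQLite caches would keep any single record's merged edges|nodes list below a configurable cap, at the cost of silently discarding older edges, so it would probably need to be opt-in (or configurable per connection).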

I think you've already been looking at some of this in #121 (comment). Sorry if this is a dupe.
