Conversation

@DustinCampbell DustinCampbell commented Nov 3, 2025

While reviewing #12434, I noticed that the fix applied doesn't actually fix the race condition in MemoryCache<TKey, TValue>.Compact(). The reason is subtle! The fix was to avoid enumerating ConcurrentDictionary while items could be added and removed by first calling ConcurrentDictionary<TKey, TValue>.ToArray(), which is thread-safe. However, the receiver of the call isn't a ConcurrentDictionary<TKey, TValue>; it's an IDictionary<TKey, TValue>, which doesn't offer a ToArray() method. So, the new call actually goes to Enumerable.ToArray(...), and it's still enumerating the dictionary while items can be added and removed -- it's just doing it earlier than before.
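The overload-resolution trap can be seen in a small sketch (variable names are illustrative):

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

var map = new ConcurrentDictionary<string, int>();
map["a"] = 1;

// Instance method: ConcurrentDictionary<TKey, TValue>.ToArray() takes the
// dictionary's internal locks and returns a consistent snapshot.
KeyValuePair<string, int>[] snapshot = map.ToArray();

// Through an IDictionary<TKey, TValue> reference there is no ToArray()
// instance method, so the compiler binds to the Enumerable.ToArray extension,
// which simply enumerates the dictionary -- the same unsynchronized
// enumeration the original fix was trying to avoid, just performed earlier.
IDictionary<string, int> dict = map;
KeyValuePair<string, int>[] enumerated = dict.ToArray();
```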

To address this, I went ahead and implemented a comprehensive fix for the MemoryCache<TKey, TValue> thread-safety issues. I also updated the tests to use a test accessor instead of subclassing, which better encapsulates the MemoryCache implementation. Many thanks to Copilot for helping with comments, documentation, and tests!

Here is a summary of the fixes and improvements made to MemoryCache<TKey, TValue>.

  • Atomic remove-then-check pattern to eliminate race conditions in compaction
  • Thread-safe last access time updates using volatile operations
  • Improved test structure using test accessors instead of subclassing
  • Enhanced documentation and comments
  • Added test coverage for concurrent and edge scenarios
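The "thread-safe last access time updates using volatile operations" item above follows a standard pattern; here is a minimal sketch (the real `Entry` type in the PR may differ):

```csharp
using System;
using System.Threading;

// Minimal sketch: the last-access tick count is stored in a long field and
// accessed only through Volatile.Read/Volatile.Write, so no thread observes
// a stale value due to instruction reordering (and 64-bit accesses don't
// tear on 32-bit runtimes).
internal sealed class Entry<TValue>
{
    private long _lastAccessTicks = DateTime.UtcNow.Ticks;

    public Entry(TValue value) => Value = value;

    public TValue Value { get; }

    public long LastAccess
    {
        get => Volatile.Read(ref _lastAccessTicks);
        set => Volatile.Write(ref _lastAccessTicks, value);
    }
}
```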

- Mark the class as `sealed` and change `protected` members to `private`.
- Change `_dict` from `IDictionary<TKey, TValue>` to `ConcurrentDictionary<TKey, TValue>` and rename it to `_map`.
- Rename `CacheEntry` to `Entry`.
- Change `Entry` to encapsulate its data.
- Change `Entry.LastAccess` to get and set its value using `Volatile.Read`/`Volatile.Write` to ensure that other threads don't observe stale values due to instruction reordering.
- Rename the `Compact` method to `CompactIfNeeded` and make it acquire the lock.
- Use double-checked locking for compaction.
- Use `SelectAndOrderByAsArray` to avoid the extra allocation of LINQ `OrderBy` followed by `ToArray`.
- Update compaction to be thread-safe by...
    1. Capturing the value of `LastAccess` when acquiring an ordered array of all items. This is used when removing items to ensure that items touched by other threads during compaction aren't thrown away.
    2. Switching from `IDictionary<TKey, TValue>.Remove(...)` to `ConcurrentDictionary<TKey, TValue>.TryRemove(...)` for removing items.
    3. If an item that was touched during compaction gets removed, calling `TryAdd` to restore it.
    4. Counting the number of items removed to ensure that *at least* the expected number is removed.
- Add comments throughout.
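The compaction strategy above can be sketched as follows. The class shape, the size threshold, and the `Count` property are illustrative assumptions, and plain LINQ `OrderBy`/`ToArray` stands in for the internal `SelectAndOrderByAsArray` helper:

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;

internal sealed class MemoryCacheSketch<TKey, TValue>
    where TKey : notnull
{
    private sealed class Entry
    {
        private long _lastAccessTicks = DateTime.UtcNow.Ticks;

        public Entry(TValue value) => Value = value;

        public TValue Value { get; }

        // Volatile access so other threads never observe stale values.
        public long LastAccess
        {
            get => Volatile.Read(ref _lastAccessTicks);
            set => Volatile.Write(ref _lastAccessTicks, value);
        }
    }

    private readonly ConcurrentDictionary<TKey, Entry> _map = new();
    private readonly object _gate = new();
    private readonly int _sizeLimit;

    public MemoryCacheSketch(int sizeLimit) => _sizeLimit = sizeLimit;

    public int Count => _map.Count;

    public void Set(TKey key, TValue value)
    {
        _map[key] = new Entry(value);
        CompactIfNeeded();
    }

    public bool TryGetValue(TKey key, out TValue? value)
    {
        if (_map.TryGetValue(key, out var entry))
        {
            entry.LastAccess = DateTime.UtcNow.Ticks; // touch
            value = entry.Value;
            return true;
        }

        value = default;
        return false;
    }

    private void CompactIfNeeded()
    {
        // Double-checked locking: cheap check outside the lock, then a
        // re-check inside in case another thread already compacted.
        if (_map.Count < _sizeLimit) return;

        lock (_gate)
        {
            if (_map.Count < _sizeLimit) return;

            // 1. Snapshot (key, lastAccess) pairs via the thread-safe
            //    ConcurrentDictionary.ToArray(), ordered oldest-first.
            var ordered = _map
                .ToArray()
                .Select(kvp => (kvp.Key, kvp.Value.LastAccess))
                .OrderBy(item => item.LastAccess)
                .ToArray();

            // 4. Count removals so that *at least* the expected number of
            //    items is evicted.
            var toRemove = _map.Count - _sizeLimit / 2;
            var removed = 0;

            foreach (var (key, lastAccess) in ordered)
            {
                if (removed >= toRemove) break;

                // 2. TryRemove rather than IDictionary.Remove.
                if (_map.TryRemove(key, out var current))
                {
                    // 3. If the entry was touched after the snapshot was
                    //    taken, restore it instead of throwing it away.
                    if (current.LastAccess != lastAccess)
                    {
                        _map.TryAdd(key, current);
                        continue;
                    }

                    removed++;
                }
            }
        }
    }
}
```

Restoring a touched entry with `TryAdd` (rather than an indexer write) is deliberate: if another thread re-added the key in the meantime, the newer entry wins and the stale one is dropped.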
@DustinCampbell DustinCampbell requested a review from a team as a code owner November 3, 2025 22:02
@davidwengier davidwengier left a comment

LGTM (but I didn't really look at the new tests. Sorry. Not supposed to be working today anyway 😛)

@DustinCampbell DustinCampbell force-pushed the clean-up-memorycache branch 4 times, most recently from 013e64e to 2b92d41 Compare November 3, 2025 23:25
Cover more concurrency and edge cases.
@ToddGrun ToddGrun left a comment

:shipit:

@DustinCampbell DustinCampbell merged commit e5017fa into dotnet:main Nov 7, 2025
11 checks passed
@DustinCampbell DustinCampbell deleted the clean-up-memorycache branch November 7, 2025 17:36
@dotnet-policy-service dotnet-policy-service bot added this to the Next milestone Nov 7, 2025