
Conversation

Wraith2
Contributor

@Wraith2 Wraith2 commented Jul 31, 2025

Fixes #3519
Fixes #3572

Fix 1:
When reading multi-packet strings it is possible for multiple such strings to occur in a single row. When reading asynchronously a snapshot is used which contains a linked list of packets. The current codebase keeps a cleared spare linked list node around when the snapshot is cleared. The logic that cleared the spare node was faulty and did not clear all of its fields, leaving the old data length in the node. In specific circumstances that spare node, still carrying a stale data value, can be re-used as the first packet in a new linked list of packets. When this happens in a read which reaches the continue stage (3 or more packets) the size calculation is incorrect and various errors can occur.

The spare packet functionality is not very useful because it can only store a single node, and it doesn't retain the byte[] buffer, so the memory saving is tiny. I have removed it and made the linked list node fields readonly. This resolves the bug.
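
To illustrate (a minimal sketch with made-up names, not the actual SqlClient types): once the payload fields are readonly, a "cleared" spare node can no longer leak a stale data length into a new snapshot.

    // Hypothetical sketch of a snapshot list node with immutable payload fields.
    internal sealed class PacketData
    {
        public readonly byte[] Buffer;    // captured packet bytes
        public readonly int DataLength;   // payload length recorded at capture time
        public PacketData NextPacket;     // link to the next node in the snapshot

        public PacketData(byte[] buffer, int dataLength)
        {
            Buffer = buffer;
            DataLength = dataLength;
        }
    }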

Fix 2:
When reading a multi-packet string the PLP chunks are read from each packet and the end is signalled by a terminator. It is possible for the data to align such that the contents of a string complete exactly at the end of a packet and the terminator is in the next packet. In this case some pre-existing logic checks for 0 chars remaining and exits early.

This logic needed to be updated so that when continuing it returns the entire length read so far rather than 0.
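
The shape of the change, sketched (it mirrors the diff reviewed later in this thread, written here with the division form that was settled on; startOffsetByteCount is a byte count, so it is halved to express the value in UTF-16 chars):

    // When _longlen is already 0 the whole PLP value has been consumed, so a
    // continuation must report the chars read before this call instead of 0.
    if (stateObj._longlen == 0)
    {
        Debug.Assert(stateObj._longlenleft == 0);
        totalCharsRead = startOffsetByteCount / 2;   // previously: totalCharsRead = 0;
        // ... then return as before ...
    }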

Fix 3:
While debugging the first two issues the buffer sizes and calculations were confusing me. I eventually realised that the code was using _longlenleft, which is measured in bytes, directly to size a char array, meaning that all of the char arrays were twice as long as needed. I have updated the code to account for that and use smaller, appropriately sized arrays.
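
Roughly (a sketch only, not the exact SqlClient code):

    // _longlenleft counts remaining bytes of the PLP value; UTF-16 chars are
    // 2 bytes each, so the char buffer only needs half that many elements.
    int charsLeft = (int)(stateObj._longlenleft / 2);
    char[] chars = new char[charsLeft];   // before (sketch): new char[stateObj._longlenleft]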

I have updated the existing test to iterate packet sizes from 512 (the minimum) to 2048 bytes. This produces lots of interesting alignments in the data, exercising the paths through the string reading code more effectively. The range could be increased, but the runtime needs to stay low enough not to time out CI runs, and most higher packet sizes behave like lower ones due to factoring.
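
The iteration pattern looks roughly like this (a sketch; the real test builds its own connection string and data):

    for (int packetSize = 512; packetSize <= 2048; packetSize++)
    {
        var builder = new SqlConnectionStringBuilder(baseConnectionString)
        {
            PacketSize = packetSize
        };
        using var connection = new SqlConnection(builder.ConnectionString);
        connection.Open();
        // ... run the async multi-packet string reads against this packet size ...
    }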

Thanks to @erenes and @Suchiman for their help finding the reproduction that worked on my machine; without that I would have been unable to fix anything.

@dotnet/sqlclientdevteam can I get a CI run please.

/cc @Jakimar

Edits:

Fixes #3572.
When reading data sequentially and calling GetFieldValueAsync it is possible for the call to TryReadColumnInternal to fail because there is not enough data. When this happens an async read has already been started, so we should wait for it to complete, but the code does not do this and instead falls through to attempting a call to TryReadColumnValue. Under a debugger the problem is clear because an assert fires, but at runtime it failed silently, which could cause data loss as an async read collided with a sync read. The fix is to prevent the fall-through and queue the wait for the next packet.
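
The shape of the change, sketched (the exact condition in SqlDataReader differs; this mirrors the diff reviewed later in the thread):

    if (dataAvailable)   // e.g. TryReadColumnInternal succeeded for this column
    {
        return Task.FromResult<T>(reader.GetFieldValueFromSqlBufferInternal<T>(
            reader._data[columnIndex], reader._metaData[columnIndex], isAsync: true));
    }
    else
    {
        // the async network read is still pending; queue the wait for the next
        // packet instead of falling through to a synchronous read attempt
        return reader.ExecuteAsyncCall(context);
    }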

@Wraith2 Wraith2 requested a review from a team as a code owner July 31, 2025 20:57
@Wraith2
Contributor Author

Wraith2 commented Jul 31, 2025

@ErikEJ this might reduce memory usage for string reads. It might be worth benching the artifacts if the CI runs green.

@mdaigle
Contributor

mdaigle commented Jul 31, 2025

/azp run

Azure Pipelines successfully started running 2 pipeline(s).

@Wraith2
Contributor Author

Wraith2 commented Aug 1, 2025

I've added an additional fix, the same as the zero-length-left-at-terminator case, which occurs on the varchar (rather than nvarchar) read path.

@mdaigle
Contributor

mdaigle commented Aug 1, 2025

/azp run

Azure Pipelines successfully started running 2 pipeline(s).

@apoorvdeshmukh apoorvdeshmukh added this to the 6.1.1 milestone Aug 4, 2025
@cheenamalhotra cheenamalhotra removed this from the 6.1.1 milestone Aug 12, 2025
@Wraith2
Contributor Author

Wraith2 commented Aug 12, 2025

@dotnet/sqlclientdevteam can I get a CI run on this please.

I've added a new commit which forces ProcessSni to compatibility mode (and, by extension, disables async-continue mode) and adds a fix for the pending read counter imbalance that we discussed and that @rhuijben has been assisting with tracking down today. This is a possible stable current codebase state to evaluate.

@mdaigle
Contributor

mdaigle commented Aug 12, 2025

/azp run

Azure Pipelines successfully started running 2 pipeline(s).

@Wraith2
Contributor Author

Wraith2 commented Aug 12, 2025

I've aligned the appcontext switch test with the new defaults. Can I get another run please @dotnet/sqlclientdevteam

@paulmedynski
Contributor

/azp run

Azure Pipelines successfully started running 2 pipeline(s).

Contributor

@paulmedynski paulmedynski left a comment


Asking for some clarity on >> 1 versus / 2.

@@ -13206,7 +13206,7 @@ bool writeDataSizeToSnapshot
             if (stateObj._longlen == 0)
             {
                 Debug.Assert(stateObj._longlenleft == 0);
-                totalCharsRead = 0;
+                totalCharsRead = startOffsetByteCount >> 1;
Contributor


Is this a division by 2 in disguise? Are you using a special property of right-bit-shift that divide-by-2 doesn't have? Something else?

If the former, please use startOffsetByteCount / 2 for clarity. If either of the latter, please document why.

Contributor Author


No magic. Just using the same idiom as the containing methods. I've changed it to use division instead of shift.
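
For what it's worth, for non-negative counts like this one the two spellings produce the same result; they only diverge for negative operands, where shifting rounds toward negative infinity and integer division rounds toward zero:

    int byteCount = 7;
    int a = byteCount >> 1;   // 3
    int b = byteCount / 2;    // 3
    int c = -7 >> 1;          // -4
    int d = -7 / 2;           // -3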

I've also changed the multiplexer test detection of compatibility to match the library which should skip the multiplexer tests correctly now.

@rhuijben

rhuijben commented Aug 13, 2025

@Wraith2 when I run the testcase from the other issue against this branch I get the following in DEBUG mode:

 SearchDogCrash.SearchDogCrashTests.TestSearchDogCrash
   Source: Class1.cs line 11
   Duration: 947 ms

  Message: 
Microsoft.VisualStudio.TestPlatform.TestHost.DebugAssertException : Method Debug.Fail failed with 'Invalid token after performing CleanPartialRead: 04
', and was translated to Microsoft.VisualStudio.TestPlatform.TestHost.DebugAssertException to avoid terminating the process hosting the test.

  Stack Trace: 
SqlDataReader.TryCleanPartialRead() line 867
SqlDataReader.TryCloseInternal(Boolean closeReader) line 1058
SqlDataReader.Close() line 1009
SqlDataReader.Dispose(Boolean disposing) line 924
DbDataReader.DisposeAsync()
SearchDogCrashTests.Do() line 49
ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
AsyncStateMachineBox`1.MoveNext(Thread threadPoolThread)
AwaitTaskContinuation.RunOrScheduleAction(IAsyncStateMachineBox box, Boolean allowInlining)
Task.RunContinuations(Object continuationObject)
Task`1.TrySetResult(TResult result)
UnwrapPromise`1.TrySetFromTask(Task task, Boolean lookForOce)
UnwrapPromise`1.ProcessInnerTask(Task task)
Task.RunContinuations(Object continuationObject)
Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread)
ThreadPoolWorkQueue.Dispatch()
WorkerThread.WorkerThreadStart()

In release mode the test passes.
(Reverted to old version of the library for RepoDB and $dayjob)

@Wraith2
Contributor Author

Wraith2 commented Aug 13, 2025

multitasking here, can you link me to the exact repro you're talking about?

@rhuijben

rhuijben commented Aug 13, 2025

multitasking here, can you link me to the exact repro you're talking about?

The testcase from
#3519 (comment)

I'm currently trying to get things reproduced against a Docker instance of SQL Server 2019 so we can look at the same thing (and maybe even test this on GitHub Actions, like I do in the RepoDB project).

@rhuijben

rhuijben commented Aug 13, 2025

This case rhuijben@1964bc1
(Extracted from #3519 (comment))

fails for me on this Docker setup.
(If you have Docker, running docker compose up -d in the directory with the compose script will give you a local SQL Server instance. The testcase then adds the schema and runs.)

Too bad it is not the error I'm seeing myself, but it is still a valid testcase. Trying to extend this to include my case.

It fails on the first (smallest) packet size of 512.

 SearchDogCrash.SearchDogCrashTests.OtherRepro
   Source: CrashTest.cs line 822
   Duration: 3,6 min

  Message: 
Microsoft.VisualStudio.TestPlatform.TestHost.DebugAssertException : Method Debug.Fail failed with 'partially read packets cannot be appended to the snapshot
', and was translated to Microsoft.VisualStudio.TestPlatform.TestHost.DebugAssertException to avoid terminating the process hosting the test.

  Stack Trace: 
StateSnapshot.AppendPacketData(Byte[] buffer, Int32 read) line 4158
TdsParserStateObject.ProcessSniPacketCompat(PacketHandle packet, UInt32 error) line 529
TdsParserStateObject.ProcessSniPacket(PacketHandle packet, UInt32 error) line 19
TdsParserStateObject.ReadAsyncCallback(IntPtr key, PacketHandle packet, UInt32 error) line 353
TdsParserStateObject.ReadSni(TaskCompletionSource`1 completion) line 3236
TdsParserStateObject.TryReadNetworkPacket() line 2818
TdsParserStateObject.TryPrepareBuffer() line 1299
TdsParserStateObject.TryReadByteArray(Span`1 buff, Int32 len, Int32& totalRead, Int32 startOffset, Boolean writeDataSizeToSnapshot) line 1492
TdsParserStateObject.TryReadByteArray(Span`1 buff, Int32 len, Int32& totalRead) line 1453
TdsParserStateObject.TryReadInt64(Int64& value) line 1721
<13 more frames...>
AwaitTaskContinuation.RunOrScheduleAction(Action action, Boolean allowInlining)
Task.RunContinuations(Object continuationObject)
Task`1.TrySetResult(TResult result)
TaskCompletionSource`1.TrySetResult(TResult result)
SqlDataReader.CompleteAsyncCall[T](Task`1 task, SqlDataReaderBaseAsyncCallContext`1 context) line 6123
SqlDataReaderBaseAsyncCallContext`1.CompleteAsyncCallCallback(Task`1 task, Object state) line 5822
ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread)
ThreadPoolWorkQueue.Dispatch()
WorkerThread.WorkerThreadStart()

@rhuijben

rhuijben commented Aug 13, 2025

Debug.Assert(TdsEnums.HEADER_LEN + Packet.GetDataLengthFromHeader(buffer) == read, "partially read packets cannot be appended to the snapshot");

read=512
buffer = byte[512], (first and last byte are 0x04)
TdsEnums.HEADER_LEN = 8.
Packet.GetDataLengthFromHeader(buffer) returns 503

503+8 = 511, so mismatch.

Looks like the first byte of the next packet is already in the buffer here.
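
For reference, a sketch of the arithmetic behind that assert (the TDS header stores the total packet length, including the 8-byte header, big-endian at offsets 2-3; GetDataLengthFromHeader presumably subtracts the header from it):

    int totalLength = (buffer[2] << 8) | buffer[3];       // 511 in this run
    int dataLength = totalLength - TdsEnums.HEADER_LEN;   // 503
    // read == 512, so HEADER_LEN + dataLength == 511 != read: the final byte in
    // the buffer already belongs to the next packet (hence the trailing 0x04).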

@Wraith2
Contributor Author

Wraith2 commented Aug 13, 2025

That assert will fire periodically when packet multiplexing is disabled. We should add the context switch to the assertion.

That might be correct. I saw something similar while looking at the multipart xml reads with a weird packet size. If the packet status does not include the last-packet bit and the required length is less than the total packet, then as long as the transferred data amount matches the buffer size it's technically correct, I think. I'm referring to these as padded packets. I hadn't seen them before two weeks ago, but the spec doesn't preclude them. When I saw them the remaining data space in the packet buffer was filled with FF. This is part of the reason I added the DumpPackets and DumpInBuff functions to my debug branch.

@rhuijben

With packet size configured as 512 I see 511 byte packets (which fail these tests), but also one really large packet (>= 60 KB). Not sure if the debug assert does the right thing. It looks like the demultiplexer handles these cases just fine.

With this packet code you also always have to handle short reads caused by network security layers and TCP packetisation. There are standard proxies for that last case, so you can always get small (or large) chunks from the network layer. The .NET Core project uses fuzzing with that to catch HTTP errors, as do a lot of other libraries.

Looks like these asserts are on the wrong layer... from the network you can receive much smaller or larger chunks than the TDS packets (smaller when processing really fast, and much longer when the network has already delivered more data than a single packet, which can also happen on slow networks when one packet got lost and is re-delivered while others are already in the queue).

@Wraith2
Contributor Author

Wraith2 commented Aug 13, 2025

With the multiplexer active every packet which is appended to the snapshot is required to have a correct header and be a complete packet. If you look at the locations where packets are assigned to _inBuff from the snapshot you'll see that it gets the status and data length from the packet, which requires a correct packet header.

With the multiplexer inactive, which is ProcessSni compatibility mode, those requirements no longer hold. So the correct thing to do is to change the asserts to be:

                Debug.Assert(buffer != null, "packet data cannot be null");
                Debug.Assert(LocalAppContextSwitches.UseCompatibilityProcessSni || read >= TdsEnums.HEADER_LEN, "minimum packet length is TdsEnums.HEADER_LEN");
                Debug.Assert(LocalAppContextSwitches.UseCompatibilityProcessSni || TdsEnums.HEADER_LEN + Packet.GetDataLengthFromHeader(buffer) == read, "partially read packets cannot be appended to the snapshot");

In terms of packet lengths I refer to the TDS definition of a packet, which is that all packets are of the communicated packet size unless they are the last packet in a stream. Even though it's possible in network terms to receive any number of bytes up to the buffer size that was passed to the function, we will generally find that there are full packets unless your server is quite slow. Unfortunately the slow case is very hard to replicate reliably, so there have been holes in the code.

The demultiplexer currently does not handle padded packets (physical length > logical length) correctly. I need to get some example packets with padding and then write some new test infrastructure for the multiplexer to make sure that CurrentLength and RequiredLength are used correctly.

@rhuijben

rhuijben commented Aug 13, 2025

On https://github.com/rhuijben/SqlClient/tree/test/repro-1 (your branch + 1 commit) I now have (with AppContext.SetSwitch("Switch.Microsoft.Data.SqlClient.UseCompatibilityProcessSni", false);)

Running the tests against Docker with a clean SQL Server 2019 container. (The create script is in that commit.)

 SearchDogCrash.SearchDogCrashTests.OtherRepro
   Source: CrashTest.cs line 824
   Duration: 40,4 sec

  Message: 
    Microsoft.VisualStudio.TestPlatform.TestHost.DebugAssertException : Method Debug.Fail failed with 'dumping buffer
    _inBytesRead = 21
    _inBytesUsed = 11
    used buffer:
    04 01 00 15 00 34 01 00 FD 20 00 
    unused buffer:
    FD 00 00 00 00 00 00 00 00 00 
    
    ', and was translated to Microsoft.VisualStudio.TestPlatform.TestHost.DebugAssertException to avoid terminating the process hosting the test.

  Stack Trace: 
    TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) line 2043
    TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) line 1923
    TdsParser.ProcessAttention(TdsParserStateObject stateObj) line 8178
    TdsParser.ProcessPendingAck(TdsParserStateObject stateObj) line 344
    TdsParserStateObject.ResetCancelAndProcessAttention() line 839
    TdsParserStateObject.CloseSession() line 919
    SqlDataReader.TryCloseInternal(Boolean closeReader) line 1078
    SqlDataReader.Close() line 1009
    SqlDataReader.Dispose(Boolean disposing) line 924
    DbDataReader.DisposeAsync()
    SearchDogCrashTests.OtherRepro() line 844
    ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
    AwaitTaskContinuation.RunOrScheduleAction(IAsyncStateMachineBox box, Boolean allowInlining)
    Task.RunContinuations(Object continuationObject)
    Task.FinishSlow(Boolean userDelegateExecute)
    Task.TrySetException(Object exceptionObject)
    TaskCompletionSource`1.TrySetException(Exception exception)
    SqlDataReader.CompleteAsyncCall[T](Task`1 task, SqlDataReaderBaseAsyncCallContext`1 context) line 6115
    SqlDataReaderBaseAsyncCallContext`1.CompleteAsyncCallCallback(Task`1 task, Object state) line 5822
    ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
    Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread)
    ThreadPoolWorkQueue.Dispatch()
    WorkerThread.WorkerThreadStart()

(Don't look at the timings. My laptop is in silent mode, which is very slow with today's weather.)

@Wraith2
Contributor Author

Wraith2 commented Aug 13, 2025

Token packet, last packet of the stream. No idea what the content is supposed to represent. This would silently fail in release mode because all that diagnostic output is inside an #if DEBUG block.

@rhuijben

04 01 00 15 00 34 01 00 FD 20 00

Not sure if I read this correctly but looks like:
Packet of 0x15 length (byte 2 and 3 in the array), so 21 decimal. This happens to be the number of bytes available. But only 11 are actually marked as read...

while (counter < 10)
SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(DataTestUtility.TCPConnectionString);
builder.PersistSecurityInfo = true;
builder.Pooling = false;


I had the most luck reproducing issues by using pooling for just 2-3 iterations. Disabling pooling hides some backend errors as the connection is closed anyway.
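
Something like this (a sketch of the suggestion; the pool size value is illustrative):

    SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(DataTestUtility.TCPConnectionString);
    builder.PersistSecurityInfo = true;
    builder.Pooling = true;       // keep pooling on so broken connection state survives into the next iteration
    builder.MaxPoolSize = 2;      // illustrative: force connections to be reused quickly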

@Wraith2
Contributor Author

Wraith2 commented Aug 13, 2025

04 01 00 15 00 34 01 00 FD 20 00

Not sure if I read this correctly but looks like: Packet of 0x15 length (byte 2 and 3 in the array), so 21 decimal. This happens to be the number of bytes available. But only 11 are actually marked as read...

Yes. That was my interpretation. Since I've no idea what the bytes that have been read were a part of I have no way to tell if the extra bytes are meaningful or not. It seems likely that they should be. If you've got my additional logging enabled from the -dev branch then you could look at the output of DumpLog() and see what operation consumed those bytes.

@paulmedynski
Contributor

/azp run

@paulmedynski
Contributor

This PR needs a merge from main to pick up the macOS test failure fixes.

@Wraith2
Contributor Author

Wraith2 commented Aug 28, 2025

Sure, but that doesn't block review of the code changes.

As I mentioned in #3534 (comment) I'm happy to rebase when there is a clear code reason to do so, but failing test infra does not affect the code changes that I've made. If the changes pass review then I'll rebase in readiness for a merge.

Contributor

@paulmedynski paulmedynski left a comment


Just one question and one suggestion.

@mdaigle
Contributor

mdaigle commented Aug 28, 2025

I've built the latest version of the PR and re-ran my stress tests under similar constraints, results are here as before.

All stress tests pass with varying sizes of data (1/5/25MB) to servers with varying latency. This confirms the fix for #3572, and (as expected) the same fix also fixed a similar type of issue with TextReader.

Since 3572 also appears in earlier versions, we should probably backport bd780e4 to 6.1/6.0/5.1. I can handle that quickly enough if the SqlClient team agree.

@edwardneal thanks for your work on this. Did you have a chance to run your stress tests on this+feature_disabled?
And I may have missed it, but have you shared the source code for these tests anywhere? I'd like to look through them to see the types of scenarios that triggered the errors.
We're looking to strengthen our stress testing suite and the tests you've built may be a good addition. See #3558 for where those are living.

@mdaigle
Contributor

mdaigle commented Aug 28, 2025

@mdaigle EFcore tests - the output between 6.1.1 and this+feature_disabled should be the same. - Confirmed with 6.11.0-pull.123707

Thank you @ErikEJ !

// so we must make sure that even though we are not making a network call that we do
// not cause an incorrect decrement which will cause disconnection from the native
// component
IncrementPendingCallbacks();
Contributor


Note to self, readFromNetwork is always true in compat mode because partial packets are not supported.

@@ -5753,6 +5751,10 @@ private static Task<T> GetFieldValueAsyncExecute<T>(Task task, object state)
{
return Task.FromResult<T>(reader.GetFieldValueFromSqlBufferInternal<T>(reader._data[columnIndex], reader._metaData[columnIndex], isAsync: true));
}
else
{
return reader.ExecuteAsyncCall(context);
Contributor


Can you help me better understand this change? It's not clear to me which issue this addresses.

Contributor Author


from the edit I made to the OP.

Fixes 3572.
When reading data sequentially and calling GetFieldValueAsync it is possible for the call to TryReadColumnInternal to fail because there is not enough data. When this happens an async read has already been started, so we should wait for it to complete, but the code does not do this and instead falls through to attempting a call to TryReadColumnValue. Under a debugger the problem is clear because an assert fires, but at runtime it failed silently, which could cause data loss as an async read collided with a sync read. The fix is to prevent the fall-through and queue the wait for the next packet.

@edwardneal
Contributor

Thanks @mdaigle. I've re-run the stress tests with both of those AppContext switches set, and the behaviour's consistent from end to end - #3572 is fixed in (almost) all cases, irrespective of the AppContext switch settings.


I've noted one bug which appears under specific circumstances. This bug appears when run against M.D.S 5.1.7, 5.2.3, 6.0.2, 6.1.1 and this PR (with the UseCompatibilityProcessSni switch set to true).

The circumstances in question are:

  • Server: localhost (bug doesn't appear for Azure SQL databases in UKSouth or AustraliaEast.)
  • UseCompatibilityAsyncBehaviour: irrelevant
  • UseCompatibilityProcessSni: set to true (doesn't appear with this unset.)
  • Operation: read a varbinary(max) as a stream
  • Packet size: 567 bytes (doesn't appear when the packet size is 8000.)
  • Data size: 1MB or greater
  • Type: async
  • SqlDataReader CommandBehaviour: SequentialAccess

When this bug appears, disposing of the SqlDataReader throws the exception below:

System.ArgumentOutOfRangeException: Specified argument was out of the range of valid values.
   at Microsoft.Data.SqlClient.TdsParserStateObject.TryReadByteArray(Span`1 buff, Int32 len, Int32& totalRead)
   at Microsoft.Data.SqlClient.TdsParserStateObject.TryReadUInt32(UInt32& value)
   at Microsoft.Data.SqlClient.TdsParserStateObject.TryReadPlpLength(Boolean returnPlpNullIfNull, UInt64& lengthLeft)
   at Microsoft.Data.SqlClient.TdsParser.TrySkipPlpValue(UInt64 cb, TdsParserStateObject stateObj, UInt64& totalBytesSkipped)
   at Microsoft.Data.SqlClient.SqlDataReader.TryResetBlobState()
   at Microsoft.Data.SqlClient.SqlDataReader.TryCleanPartialRead()
   at Microsoft.Data.SqlClient.SqlDataReader.TryCloseInternal(Boolean closeReader)
   at Microsoft.Data.SqlClient.SqlDataReader.Close()
   at Microsoft.Data.SqlClient.SqlDataReader.Dispose(Boolean disposing)

I think it might be a reproduction of #1044.
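
Sketched as code, the failing combination looks roughly like this (the table and column names are made up; the real harness is linked below):

    AppContext.SetSwitch("Switch.Microsoft.Data.SqlClient.UseCompatibilityProcessSni", true);

    var builder = new SqlConnectionStringBuilder(connectionString) { PacketSize = 567 };
    await using var connection = new SqlConnection(builder.ConnectionString);
    await connection.OpenAsync();

    using var command = new SqlCommand("SELECT Payload FROM dbo.BlobTable;", connection);   // varbinary(max), >= 1 MB rows
    await using var reader = await command.ExecuteReaderAsync(CommandBehavior.SequentialAccess);
    while (await reader.ReadAsync())
    {
        using var stream = reader.GetStream(0);
        await stream.CopyToAsync(Stream.Null);
    }
    // the ArgumentOutOfRangeException surfaces when the reader is disposed here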


You're welcome to cannibalise my test harness - link is here. A few notes on them though:

  • BenchmarkDotNet almost certainly isn't the right way to represent these - it was just a little more convenient for dealing with the various combinations of parameters and measuring performance. In particular, sorting through the results to find combinations of parameters which produce failures is sluggish.
  • There are 3,024 combinations of tests here, and by the time we test the presence/absence of the two AppContext switches that could spiral to 12,096. I pared down the number of tests (eliminating the Xml test) to cut down the expected time to execute them, and found that it took 1.5 days to run 2,592 combinations.
  • The Xml test is very memory hungry; I couldn't run it without slowing down the benchmark by several days. If there are any undiscovered bugs, they're likely in this area.
  • The VarCharAsTextReader and BlobAsStream operations take almost identical code paths within SqlClient - if there's a reliable bug in that area, it'll probably appear in both operations. I found it was a little easier to reproduce bugs with BlobAsStream.
  • The BlobAsByteArray operation doesn't have an async counterpart.
  • I've not tested the use of SqlBytes or SqlBinary; I expect that these are covered by the testing of byte[] and Stream.
  • I determined whether or not each combination passed by looking for benchmarks which didn't complete, then checking logs to see whether this was a timeout-related exception. Thankfully this didn't happen often.

@Wraith2
Contributor Author

Wraith2 commented Aug 28, 2025

The Xml test is very memory hungry; I couldn't run it without slowing down the benchmark by several days. If there are any undiscovered bugs, they're likely in this area.

Results from my dev branch:

| Method | UseContinue | Mean | Error | StdDev | Gen0 | Gen1 | Gen2 | Allocated |
|--------|-------------|-----:|------:|-------:|-----:|-----:|-----:|----------:|
| Sync   | False | 72.29 ms | 0.269 ms | 0.252 ms | 2857.1429 | 2571.4286 | 857.1429 | 35.91 MB |
| Async  | False | 2,220.91 ms | 20.277 ms | 15.831 ms | 1450000.0000 | 1355000.0000 | 141000.0000 | 20989.11 MB |
| Sync   | True | 71.04 ms | 0.454 ms | 0.379 ms | 2875.0000 | 2625.0000 | 875.0000 | 35.91 MB |
| Async  | True | 70.63 ms | 0.926 ms | 0.821 ms | 3125.0000 | 3000.0000 | 1125.0000 | 37.04 MB |

@paulmedynski
Contributor

/azp run

Azure Pipelines successfully started running 2 pipeline(s).

@mdaigle
Contributor

mdaigle commented Aug 29, 2025

FYI, we're having issues with the macOS tests at the moment, separate from this PR.

Successfully merging this pull request may close these issues.

Unexpected data lengths when asynchronously reading Stream using SequentialAccess behaviour
6.1.0: Errors while executing the query