
Commit 7382c5c

fix: Prevent hang when client disconnects under load
In Azurite, all operations are managed through concurrent operation queues. A bug was identified where an operation could hang indefinitely if the client disconnected before the operation was processed: Azurite would attach event handlers to the request's readable stream (body) after the client's disconnection had already closed it. Since a closed stream emits no further events (such as 'data', 'close', or 'error'), the operation never completed, causing a permanent hang. This fix checks whether the request stream is still readable before attaching any event handlers, so only requests that are still active are processed. This prevents the hang and allows the queues to continue processing other operations.
1 parent 0a186de commit 7382c5c
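
The failure mode is easy to reproduce in isolation. The sketch below (illustrative code, not taken from Azurite) shows that once a Node.js Readable has been destroyed, its `readable` flag is false, and handlers attached afterwards would never fire:

```typescript
import { Readable } from "stream";

// Simulate a request body whose client disconnected before any
// listeners were attached (express destroys the stream on abort).
const rs = Readable.from("payload", { objectMode: false });
rs.destroy();

// After destroy() the stream emits no further "data"/"end" events,
// so a pipe set up now would wait forever. The readable flag
// exposes this state synchronously.
console.log(rs.readable); // false
```

This is why checking `rs.readable` up front is sufficient: the flag is already false by the time the OperationQueue dequeues the request.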

File tree

3 files changed: +34 −0 lines changed

ChangeLog.md

Lines changed: 1 addition & 0 deletions
@@ -12,6 +12,7 @@ Blob:
 - Added support for sealing append blobs. (issue #810)
 - Added support for delegation sas with version of 2015-07-05.
 - Fix issue on SQL: Delete a container with blob, then create container/blob with same name, and delete container will fail. (issue #2563)
+- Fixed hang in blob operations when a client disconnects before the OperationQueue processes the request. (issue #2575)

 Table:

src/common/persistence/FSExtentStore.ts

Lines changed: 12 additions & 0 deletions
@@ -518,6 +518,18 @@ export default class FSExtentStore implements IExtentStore {
       let count: number = 0;
       let wsEnd = false;

+      if (!rs.readable) {
+        this.logger.debug(
+          `FSExtentStore:streamPipe() Readable stream is not readable, rejecting streamPipe.`,
+          contextId
+        );
+        reject(
+          new Error(
+            `FSExtentStore:streamPipe() Readable stream is not readable.`
+          ));
+        return;
+      }
+
       rs.on("data", data => {
         count += data.length;
         if (!ws.write(data)) {
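
The guard pattern in the hunk above can be sketched as a self-contained function. The names below (`streamPipeSketch`, the omitted backpressure handling) are simplifications for illustration, not Azurite's actual implementation:

```typescript
import { Readable, Writable } from "stream";

// Minimal sketch of the guarded pipe: reject immediately when the
// source is no longer readable, instead of attaching handlers that
// a destroyed stream would never invoke. Backpressure handling from
// the real streamPipe() is omitted for brevity.
function streamPipeSketch(rs: Readable, ws: Writable): Promise<number> {
  return new Promise((resolve, reject) => {
    if (!rs.readable) {
      // The client already disconnected; fail fast so the
      // operation queue can move on to the next request.
      reject(new Error("Readable stream is not readable."));
      return;
    }
    let count = 0;
    rs.on("data", (data: Buffer) => {
      count += data.length;
      ws.write(data);
    });
    rs.on("end", () => {
      ws.end();
      resolve(count); // total bytes piped
    });
    rs.on("error", reject);
  });
}
```

Because the rejection happens before any listeners are registered, the promise settles even for a stream that was destroyed long before the pipe was set up.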

tests/blob/fsStore.test.ts

Lines changed: 21 additions & 0 deletions
@@ -47,4 +47,25 @@ describe("FSExtentStore", () => {
     let readable3 = await store.readExtent(extent3);
     assert.strictEqual(await readIntoString(readable3), "Test");
   });
+
+  it("should handle garbage collected input stream during appendExtent @loki", async () => {
+    const store = new FSExtentStore(metadataStore, DEFAULT_BLOB_PERSISTENCE_ARRAY, logger);
+    await store.init();
+
+    const stream1 = Readable.from("Test", { objectMode: false });
+
+    // From manual testing express.js it seems that if the request is aborted
+    // before it is handled/listeners are set up, the stream is destroyed.
+    // This simulates that behavior.
+    stream1.destroy();
+
+    // Then we check that appendExtent handles the destroyed stream
+    // gracefully/does not hang.
+    try {
+      await store.appendExtent(stream1);
+      assert.fail("Expected an error to be thrown due to destroyed stream");
+    } catch (err) {
+      assert.deepStrictEqual(err.message, "FSExtentStore:streamPipe() Readable stream is not readable.");
+    }
+  });
 });
