
DOC's update on using cosmosdb as a message store/topic/log #135

yahorsi opened this issue Apr 12, 2019 · 2 comments

@yahorsi

yahorsi commented Apr 12, 2019

Now that CosmosDb and the change feed are being advertised as a sort of messaging/event-sourcing store solution, there are several questions that must be fully covered in the docs:

0. Which APIs support the change feed?
1. End-to-end delivery guarantees. For example, what happens if the processor dies or loses its network connection?
2. What happens when another processor steals a partition lease? Is it possible that several messages will be delivered to both the old and the new processor? Is it possible that both processors will be processing the same messages simultaneously, and so on?
3. Duplicate detection. Imagine we add a new doc to CosmosDb and hit a network problem after the message was successfully sent to the server. The client will have to retry the request, which might cause duplicates. In an RDBMS this is handled by transactions; how is it handled in CosmosDb?

Ideally using cosmosdb as a message store/topic/log should have separate coverage in the docs with mentioning all typical messaging topics.
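To make point 1 concrete, here is a minimal sketch (an in-memory simulation, not the actual Cosmos DB SDK — the names `Processor`, `checkpoint_store`, and `feed` are invented for illustration) of why checkpoint-after-processing gives at-least-once rather than exactly-once delivery: if the processor crashes after handling an item but before writing the checkpoint, a restarted processor resumes from the last checkpoint and re-delivers that item.

```python
# Illustrative simulation (not the real Cosmos DB change feed processor):
# the processor handles each item first and checkpoints its position after.
# A crash in between causes re-delivery of the unacknowledged item.

feed = ["doc1", "doc2", "doc3", "doc4"]          # the change feed, in order

class Processor:
    def __init__(self, checkpoint_store, handler):
        self.store = checkpoint_store            # shared checkpoint state
        self.handler = handler

    def run(self, crash_before_checkpoint_at=None):
        pos = self.store.get("pos", 0)           # resume from last checkpoint
        while pos < len(feed):
            self.handler(feed[pos])              # side effect happens first...
            if pos == crash_before_checkpoint_at:
                raise RuntimeError("crash")      # ...then we die pre-checkpoint
            pos += 1
            self.store["pos"] = pos              # ...normally checkpoint after

delivered = []
store = {}
try:
    Processor(store, delivered.append).run(crash_before_checkpoint_at=2)
except RuntimeError:
    pass                                         # crashed after handling doc3
Processor(store, delivered.append).run()         # a new processor takes over

print(delivered)  # ['doc1', 'doc2', 'doc3', 'doc3', 'doc4'] — doc3 delivered twice
```

This is why downstream handlers generally need to be idempotent: the re-delivered item must be safe to process again.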

@bartelink

I second this (and should probably bring over some points from equivalent issues I've raised).

Regarding point 0: I feel the underlying APIs are an implementation detail that should not be part of the docs here, or should at least be a separate section of the readme. (Or can you clarify your ask?)

Regarding point 3: I feel you should remove this, or at least reword it — this lib and repo are all about processing the ChangeFeed. The answer to your question is that you can use etags and make the writes contingent on them, and/or have a stored proc do cross-document transactions (within the same logical partition).
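The etag approach mentioned above can be sketched as follows. This is an in-memory stand-in, not the real Cosmos DB SDK (`Container`, `replace_if_match`, and `PreconditionFailed` are invented for illustration): every write produces a fresh etag, and a replace is accepted only if the caller presents the etag it last read, so a stale retry or a concurrent writer gets a 412-style rejection instead of silently applying the operation twice.

```python
# Illustrative sketch of etag-based optimistic concurrency (assumed,
# simplified model — not the actual Cosmos DB API surface).
import uuid

class PreconditionFailed(Exception):
    """Stand-in for the HTTP 412 a conditional write would return."""

class Container:
    def __init__(self):
        self.docs = {}                        # id -> (etag, body)

    def upsert(self, doc_id, body):
        etag = uuid.uuid4().hex               # every write bumps the etag
        self.docs[doc_id] = (etag, body)
        return etag

    def read(self, doc_id):
        return self.docs[doc_id]              # (etag, body)

    def replace_if_match(self, doc_id, body, if_match_etag):
        current_etag, _ = self.docs[doc_id]
        if current_etag != if_match_etag:     # someone wrote since our read
            raise PreconditionFailed("etag mismatch")
        return self.upsert(doc_id, body)

c = Container()
etag1 = c.upsert("order-1", {"status": "created"})

# Writer reads, then replaces using the etag it saw: accepted, etag changes.
etag2 = c.replace_if_match("order-1", {"status": "paid"}, etag1)

# A retry still holding the old etag is rejected, so the write
# cannot be applied twice.
try:
    c.replace_if_match("order-1", {"status": "paid"}, etag1)
except PreconditionFailed:
    print("stale write rejected")
```

The same conditional-write pattern is what makes retries safe on the write path: the retry either sees its own earlier write (etag mismatch) or succeeds exactly once.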

@ealsur

ealsur commented Apr 22, 2019

I agree, our docs are missing these details and we will address that. I have started work on the Azure Functions Change Feed implementation docs, and will keep working on them to also cover the Change Feed Processor.
