fix!: better cache memory usage #1090
Conversation
Hi @dunglas, thank you so much for looking into this and creating the PR! I'd be happy to test these changes in my environment. Is there a pre-built Docker image available for this pull request that I could use? I'm new to Go, and I'm trying to compile the project and then build a Docker image. I first compiled the binary and then built the Docker image from it; however, when the pod starts up, it fails with an error. Any help would be appreciated. Thanks!
@dunglas Solved! I'd never used goreleaser before. I've deployed this version to the staging environment, and it will go to production tomorrow. I'll provide you with performance metrics after that.
Here is a benchmark I ran locally:
SUB_TEST_CONCURRENCY=500 SUB_TEST_TOPICS=100 SUB_TEST_MATCHPCT=70 go test -benchmem -run='^$' -count 6 -bench '^BenchmarkLocalTransport$'
benchstat old.txt new.txt
goos: darwin
goarch: arm64
pkg: github.com/dunglas/mercure
cpu: Apple M1 Pro
│ old.txt │ new.txt │
│ sec/op │ sec/op vs base │
LocalTransport/100-topics:500-concurrency:70-matchpct-10 107.96µ ± 16687715% 96.89µ ± 30112408% ~ (p=0.065 n=6)
│ old.txt │ new.txt │
│ B/op │ B/op vs base │
LocalTransport/100-topics:500-concurrency:70-matchpct-10 37.60Ki ± 74880960% 22.27Ki ± 209187700% -40.79% (p=0.026 n=6)
│ old.txt │ new.txt │
│ allocs/op │ allocs/op vs base │
LocalTransport/100-topics:500-concurrency:70-matchpct-10 31.00 ± 1100182848% 22.00 ± 2566207586% -29.03% (p=0.015 n=6)
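For context on the setup above: the benchmark is parameterized through the SUB_TEST_* environment variables passed to go test. Below is a minimal sketch of an env-driven Go benchmark of that shape; the package name, defaults, and the empty loop body are placeholders for illustration, not the repository's actual test code.

```go
// localbench_test.go — hypothetical standalone file for illustration.
package localbench

import (
	"fmt"
	"os"
	"strconv"
	"testing"
)

// envInt reads an integer from the environment, falling back to a default,
// mirroring how SUB_TEST_CONCURRENCY, SUB_TEST_TOPICS and SUB_TEST_MATCHPCT
// parameterize the run.
func envInt(name string, def int) int {
	if v := os.Getenv(name); v != "" {
		if n, err := strconv.Atoi(v); err == nil {
			return n
		}
	}
	return def
}

// BenchmarkLocalTransport shows how the parameters end up in the
// sub-benchmark name reported by benchstat; the loop body is a placeholder
// for dispatching updates through the local transport.
func BenchmarkLocalTransport(b *testing.B) {
	concurrency := envInt("SUB_TEST_CONCURRENCY", 100)
	topics := envInt("SUB_TEST_TOPICS", 20)
	matchPct := envInt("SUB_TEST_MATCHPCT", 50)

	name := fmt.Sprintf("%d-topics:%d-concurrency:%d-matchpct", topics, concurrency, matchPct)
	b.Run(name, func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			_ = i // placeholder workload
		}
	})
}
```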
Thanks @naxo8628. This indeed looks way better. I'm still interested in a profile, if possible. Thank you.
@dunglas I've just deployed to production with debugging enabled, as I wanted to wait at least 24 hours before the next deployment. I will send performance profiles within the next 48 hours. In any case, although the last deploy improved the initial memory spike, the memory seems to fill up again after a few hours.
@naxo8628 could you copy your Mercure config? I'm especially interested in write_timeout. Thanks!
@dunglas Added the heap & allocs profiles. This is the values.yaml of the chart (write_timeout is set to 7200s).
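As background on the profiles exchanged above: heap and allocs profiles are typically pulled from a running Go process via net/http/pprof. The sketch below is generic, not Mercure-specific; the listen address is an arbitrary choice for this example.

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// Expose the profiling endpoints on a private port. Heap and allocs
	// profiles can then be fetched with, for example:
	//   go tool pprof http://localhost:6060/debug/pprof/heap
	//   go tool pprof http://localhost:6060/debug/pprof/allocs
	log.Println(http.ListenAndServe("localhost:6060", nil))
}
```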
- Switches from golang-lru to Otter for better performance
- Switches from hash/fnv to github.com/cespare/xxhash/v2
Closes #1024.
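To illustrate the two swaps listed above, here is a minimal sketch that keys cached topic-match results by an xxhash digest and stores them in an Otter cache. The function names, key scheme, and capacity are assumptions for this example, not the code introduced by this PR.

```go
package main

import (
	"fmt"

	"github.com/cespare/xxhash/v2"
	"github.com/maypok86/otter"
)

// matchKey hashes a topic selector and a topic into a compact cache key,
// using xxhash rather than hash/fnv.
func matchKey(selector, topic string) uint64 {
	h := xxhash.New()
	_, _ = h.WriteString(selector)
	_, _ = h.WriteString("\x00") // separator to avoid ambiguous concatenations
	_, _ = h.WriteString(topic)
	return h.Sum64()
}

func main() {
	// Build a bounded Otter cache; the capacity is an arbitrary value for
	// this sketch.
	cache, err := otter.MustBuilder[uint64, bool](1_000).Build()
	if err != nil {
		panic(err)
	}

	key := matchKey("https://example.com/books/{id}", "https://example.com/books/1")
	if matched, ok := cache.Get(key); ok {
		fmt.Println("cached result:", matched)
		return
	}

	matched := true // placeholder for the real URI template match
	cache.Set(key, matched)
	fmt.Println("computed and cached:", matched)
}
```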