Skiplist - date histogram sub aggregation performance change #19509
base: main
Conversation
Signed-off-by: Asim Mahmood <[email protected]>
Checked-in baseline vs this change

[ec2-user@ip-172-31-61-197 ~]$ opensearch-benchmark compare -b $checkedin -c $candidate

Comparing baseline with contender
Metric | Task | Baseline | Contender | %Diff | Diff | Unit |
---|---|---|---|---|---|---|
Cumulative indexing time of primary shards | 0 | 0 | 0.00% | 0 | min | |
Min cumulative indexing time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative indexing time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Max cumulative indexing time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Cumulative indexing throttle time of primary shards | 0 | 0 | 0.00% | 0 | min | |
Min cumulative indexing throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative indexing throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Max cumulative indexing throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Cumulative merge time of primary shards | 0 | 0 | 0.00% | 0 | min | |
Cumulative merge count of primary shards | 0 | 0 | 0.00% | 0 | ||
Min cumulative merge time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative merge time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Max cumulative merge time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Cumulative merge throttle time of primary shards | 0 | 0 | 0.00% | 0 | min | |
Min cumulative merge throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative merge throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Max cumulative merge throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Cumulative refresh time of primary shards | 0 | 0 | 0.00% | 0 | min | |
Cumulative refresh count of primary shards | 2 | 2 | 0.00% | 0 | ||
Min cumulative refresh time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative refresh time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Max cumulative refresh time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Cumulative flush time of primary shards | 0 | 0 | 0.00% | 0 | min | |
Cumulative flush count of primary shards | 1 | 1 | 0.00% | 0 | ||
Min cumulative flush time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative flush time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Max cumulative flush time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Total Young Gen GC time | 0.032 | 0.036 | 0.01% | 0.004 | s | |
Total Young Gen GC count | 2 | 2 | 0.00% | 0 | ||
Total Old Gen GC time | 0 | 0 | 0.00% | 0 | s | |
Total Old Gen GC count | 0 | 0 | 0.00% | 0 | ||
Store size | 4.36969 | 4.36969 | 0.00% | 0 | GB | |
Translog size | 5.12227e-08 | 5.12227e-08 | 0.00% | 0 | GB | |
Heap used for segments | 0 | 0 | 0.00% | 0 | MB | |
Heap used for doc values | 0 | 0 | 0.00% | 0 | MB | |
Heap used for terms | 0 | 0 | 0.00% | 0 | MB | |
Heap used for norms | 0 | 0 | 0.00% | 0 | MB | |
Heap used for points | 0 | 0 | 0.00% | 0 | MB | |
Heap used for stored fields | 0 | 0 | 0.00% | 0 | MB | |
Segment count | 10 | 10 | 0.00% | 0 | ||
Min Throughput | date_histogram_calendar_interval | 1.50108 | 1.50289 | 0.12% | 0.0018 | ops/s |
Mean Throughput | date_histogram_calendar_interval | 1.50176 | 1.50469 | 0.19% | 0.00293 | ops/s |
Median Throughput | date_histogram_calendar_interval | 1.50162 | 1.50427 | 0.18% | 0.00266 | ops/s |
Max Throughput | date_histogram_calendar_interval | 1.50309 | 1.5083 | 0.35% | 0.00521 | ops/s |
50th percentile latency | date_histogram_calendar_interval | 227.48 | 154.661 | -32.01% 🟢 | -72.8193 | ms |
90th percentile latency | date_histogram_calendar_interval | 248.178 | 157.889 | -36.38% 🟢 | -90.2887 | ms |
99th percentile latency | date_histogram_calendar_interval | 262.979 | 161.17 | -38.71% 🟢 | -101.81 | ms |
100th percentile latency | date_histogram_calendar_interval | 263.36 | 176.769 | -32.88% 🟢 | -86.5908 | ms |
50th percentile service time | date_histogram_calendar_interval | 226.161 | 153.012 | -32.34% 🟢 | -73.1485 | ms |
90th percentile service time | date_histogram_calendar_interval | 246.997 | 156.593 | -36.60% 🟢 | -90.4046 | ms |
99th percentile service time | date_histogram_calendar_interval | 261.85 | 159.494 | -39.09% 🟢 | -102.357 | ms |
100th percentile service time | date_histogram_calendar_interval | 262.355 | 174.922 | -33.33% 🟢 | -87.4332 | ms |
error rate | date_histogram_calendar_interval | 0 | 0 | 0.00% | 0 | % |
Min Throughput | date_histogram_calendar_interval_with_filter | 1.50943 | 1.50968 | 0.02% | 0.00025 | ops/s |
Mean Throughput | date_histogram_calendar_interval_with_filter | 1.5156 | 1.516 | 0.03% | 0.0004 | ops/s |
Median Throughput | date_histogram_calendar_interval_with_filter | 1.51419 | 1.51457 | 0.03% | 0.00038 | ops/s |
Max Throughput | date_histogram_calendar_interval_with_filter | 1.52811 | 1.52884 | 0.05% | 0.00073 | ops/s |
50th percentile latency | date_histogram_calendar_interval_with_filter | 9.87088 | 9.75382 | -1.19% | -0.11707 | ms |
90th percentile latency | date_histogram_calendar_interval_with_filter | 11.1966 | 11.3472 | 1.35% | 0.15061 | ms |
99th percentile latency | date_histogram_calendar_interval_with_filter | 13.2912 | 14.5053 | +9.13% 🔴 | 1.21407 | ms |
100th percentile latency | date_histogram_calendar_interval_with_filter | 13.4544 | 30.6908 | +128.11% 🔴 | 17.2364 | ms |
50th percentile service time | date_histogram_calendar_interval_with_filter | 8.4715 | 8.32173 | -1.77% | -0.14977 | ms |
90th percentile service time | date_histogram_calendar_interval_with_filter | 9.47064 | 9.69457 | 2.36% | 0.22394 | ms |
99th percentile service time | date_histogram_calendar_interval_with_filter | 11.7108 | 12.5508 | +7.17% 🔴 | 0.84002 | ms |
100th percentile service time | date_histogram_calendar_interval_with_filter | 11.8357 | 29.4954 | +149.21% 🔴 | 17.6597 | ms |
error rate | date_histogram_calendar_interval_with_filter | 0 | 0 | 0.00% | 0 | % |
Min Throughput | date_histogram_fixed_interval_with_metrics | 0.236863 | 0.236365 | -0.21% | -0.0005 | ops/s |
Mean Throughput | date_histogram_fixed_interval_with_metrics | 0.236931 | 0.236734 | -0.08% | -0.0002 | ops/s |
Median Throughput | date_histogram_fixed_interval_with_metrics | 0.236915 | 0.236739 | -0.07% | -0.00018 | ops/s |
Max Throughput | date_histogram_fixed_interval_with_metrics | 0.237073 | 0.237013 | -0.03% | -6e-05 | ops/s |
50th percentile latency | date_histogram_fixed_interval_with_metrics | 357487 | 358005 | 0.14% | 518.032 | ms |
90th percentile latency | date_histogram_fixed_interval_with_metrics | 497919 | 498335 | 0.08% | 415.677 | ms |
99th percentile latency | date_histogram_fixed_interval_with_metrics | 529457 | 529861 | 0.08% | 403.997 | ms |
100th percentile latency | date_histogram_fixed_interval_with_metrics | 532986 | 533422 | 0.08% | 436.237 | ms |
50th percentile service time | date_histogram_fixed_interval_with_metrics | 4214.45 | 4212.22 | -0.05% | -2.22602 | ms |
90th percentile service time | date_histogram_fixed_interval_with_metrics | 4243.44 | 4233.65 | -0.23% | -9.79249 | ms |
99th percentile service time | date_histogram_fixed_interval_with_metrics | 4274.8 | 4286.27 | 0.27% | 11.4718 | ms |
100th percentile service time | date_histogram_fixed_interval_with_metrics | 4293.39 | 4302.34 | 0.21% | 8.94738 | ms |
error rate | date_histogram_fixed_interval_with_metrics | 0 | 0 | 0.00% | 0 | % |
[INFO] SUCCESS (took 0 seconds)
Baseline (without skiplist) vs candidate

[ec2-user@ip-172-31-61-197 ~]$ opensearch-benchmark compare -b $baseline -c $candidate

Comparing baseline with contender
Metric | Task | Baseline | Contender | %Diff | Diff | Unit |
---|---|---|---|---|---|---|
Cumulative indexing time of primary shards | 0 | 0 | 0.00% | 0 | min | |
Min cumulative indexing time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative indexing time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Max cumulative indexing time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Cumulative indexing throttle time of primary shards | 0 | 0 | 0.00% | 0 | min | |
Min cumulative indexing throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative indexing throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Max cumulative indexing throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Cumulative merge time of primary shards | 0 | 0 | 0.00% | 0 | min | |
Cumulative merge count of primary shards | 0 | 0 | 0.00% | 0 | ||
Min cumulative merge time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative merge time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Max cumulative merge time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Cumulative merge throttle time of primary shards | 0 | 0 | 0.00% | 0 | min | |
Min cumulative merge throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative merge throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Max cumulative merge throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Cumulative refresh time of primary shards | 0 | 0 | 0.00% | 0 | min | |
Cumulative refresh count of primary shards | 2 | 2 | 0.00% | 0 | ||
Min cumulative refresh time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative refresh time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Max cumulative refresh time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Cumulative flush time of primary shards | 0 | 0 | 0.00% | 0 | min | |
Cumulative flush count of primary shards | 1 | 1 | 0.00% | 0 | ||
Min cumulative flush time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative flush time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Max cumulative flush time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Total Young Gen GC time | 0.033 | 0.036 | 0.01% | 0.003 | s | |
Total Young Gen GC count | 2 | 2 | 0.00% | 0 | ||
Total Old Gen GC time | 0 | 0 | 0.00% | 0 | s | |
Total Old Gen GC count | 0 | 0 | 0.00% | 0 | ||
Store size | 4.36969 | 4.36969 | 0.00% | 0 | GB | |
Translog size | 5.12227e-08 | 5.12227e-08 | 0.00% | 0 | GB | |
Heap used for segments | 0 | 0 | 0.00% | 0 | MB | |
Heap used for doc values | 0 | 0 | 0.00% | 0 | MB | |
Heap used for terms | 0 | 0 | 0.00% | 0 | MB | |
Heap used for norms | 0 | 0 | 0.00% | 0 | MB | |
Heap used for points | 0 | 0 | 0.00% | 0 | MB | |
Heap used for stored fields | 0 | 0 | 0.00% | 0 | MB | |
Segment count | 10 | 10 | 0.00% | 0 | ||
Min Throughput | date_histogram_calendar_interval | 1.22898 | 1.50289 | +22.29% 🔴 | 0.27391 | ops/s |
Mean Throughput | date_histogram_calendar_interval | 1.24148 | 1.50469 | +21.20% 🔴 | 0.26321 | ops/s |
Median Throughput | date_histogram_calendar_interval | 1.24395 | 1.50427 | +20.93% 🔴 | 0.26032 | ops/s |
Max Throughput | date_histogram_calendar_interval | 1.24555 | 1.5083 | +21.09% 🔴 | 0.26275 | ops/s |
50th percentile latency | date_histogram_calendar_interval | 14122.3 | 154.661 | -98.90% 🟢 | -13967.7 | ms |
90th percentile latency | date_histogram_calendar_interval | 19465.8 | 157.889 | -99.19% 🟢 | -19308 | ms |
99th percentile latency | date_histogram_calendar_interval | 20690.9 | 161.17 | -99.22% 🟢 | -20529.7 | ms |
100th percentile latency | date_histogram_calendar_interval | 20831.6 | 176.769 | -99.15% 🟢 | -20654.8 | ms |
50th percentile service time | date_histogram_calendar_interval | 794.394 | 153.012 | -80.74% 🟢 | -641.381 | ms |
90th percentile service time | date_histogram_calendar_interval | 809.454 | 156.593 | -80.65% 🟢 | -652.862 | ms |
99th percentile service time | date_histogram_calendar_interval | 846.295 | 159.494 | -81.15% 🟢 | -686.801 | ms |
100th percentile service time | date_histogram_calendar_interval | 856.366 | 174.922 | -79.57% 🟢 | -681.444 | ms |
error rate | date_histogram_calendar_interval | 0 | 0 | 0.00% | 0 | % |
Min Throughput | date_histogram_calendar_interval_with_filter | 1.50911 | 1.50968 | 0.04% | 0.00057 | ops/s |
Mean Throughput | date_histogram_calendar_interval_with_filter | 1.51506 | 1.516 | 0.06% | 0.00094 | ops/s |
Median Throughput | date_histogram_calendar_interval_with_filter | 1.51371 | 1.51457 | 0.06% | 0.00086 | ops/s |
Max Throughput | date_histogram_calendar_interval_with_filter | 1.52712 | 1.52884 | 0.11% | 0.00172 | ops/s |
50th percentile latency | date_histogram_calendar_interval_with_filter | 19.384 | 9.75382 | -49.68% 🟢 | -9.63021 | ms |
90th percentile latency | date_histogram_calendar_interval_with_filter | 20.1579 | 11.3472 | -43.71% 🟢 | -8.81071 | ms |
99th percentile latency | date_histogram_calendar_interval_with_filter | 23.0539 | 14.5053 | -37.08% 🟢 | -8.5486 | ms |
100th percentile latency | date_histogram_calendar_interval_with_filter | 23.2335 | 30.6908 | +32.10% 🔴 | 7.45733 | ms |
50th percentile service time | date_histogram_calendar_interval_with_filter | 17.8957 | 8.32173 | -53.50% 🟢 | -9.57401 | ms |
90th percentile service time | date_histogram_calendar_interval_with_filter | 18.55 | 9.69457 | -47.74% 🟢 | -8.85547 | ms |
99th percentile service time | date_histogram_calendar_interval_with_filter | 21.1604 | 12.5508 | -40.69% 🟢 | -8.60958 | ms |
100th percentile service time | date_histogram_calendar_interval_with_filter | 21.3555 | 29.4954 | +38.12% 🔴 | 8.1399 | ms |
error rate | date_histogram_calendar_interval_with_filter | 0 | 0 | 0.00% | 0 | % |
Min Throughput | date_histogram_fixed_interval_with_metrics | 0.21029 | 0.236365 | +12.40% 🔴 | 0.02607 | ops/s |
Mean Throughput | date_histogram_fixed_interval_with_metrics | 0.210669 | 0.236734 | +12.37% 🔴 | 0.02607 | ops/s |
Median Throughput | date_histogram_fixed_interval_with_metrics | 0.210624 | 0.236739 | +12.40% 🔴 | 0.02612 | ops/s |
Max Throughput | date_histogram_fixed_interval_with_metrics | 0.210908 | 0.237013 | +12.38% 🔴 | 0.0261 | ops/s |
50th percentile latency | date_histogram_fixed_interval_with_metrics | 410523 | 358005 | -12.79% 🟢 | -52517.9 | ms |
90th percentile latency | date_histogram_fixed_interval_with_metrics | 571403 | 498335 | -12.79% 🟢 | -73068.8 | ms |
99th percentile latency | date_histogram_fixed_interval_with_metrics | 607548 | 529861 | -12.79% 🟢 | -77686.3 | ms |
100th percentile latency | date_histogram_fixed_interval_with_metrics | 611552 | 533422 | -12.78% 🟢 | -78129.9 | ms |
50th percentile service time | date_histogram_fixed_interval_with_metrics | 4731.7 | 4212.22 | -10.98% 🟢 | -519.474 | ms |
90th percentile service time | date_histogram_fixed_interval_with_metrics | 4763.05 | 4233.65 | -11.11% 🟢 | -529.407 | ms |
99th percentile service time | date_histogram_fixed_interval_with_metrics | 4794.49 | 4286.27 | -10.60% 🟢 | -508.225 | ms |
100th percentile service time | date_histogram_fixed_interval_with_metrics | 4813.14 | 4302.34 | -10.61% 🟢 | -510.803 | ms |
error rate | date_histogram_fixed_interval_with_metrics | 0 | 0 | 0.00% | 0 | % |
[INFO] SUCCESS (took 0 seconds)
❌ Gradle check result for 49de99e: FAILURE. Please examine the workflow log, locate and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?
Next steps after discussing with @jainankitk:
I am unsure if we really need this code change
@asimmahmood1 - Thanks for capturing the points from our conversation. Can you add this to the meta issue (#19384) for tracking the progress and next action items in one place, instead of on a specific PR?
Skiplist vs BKD (BKD is ~15x faster), using http_logs

Using RC2 and the http_logs workload (since it has @timestamp, while nyc_taxis has dropoff_time) to exercise the skiplist:
bkd_enabled: 15cc39e9-92e2-44de-a102-73ce518b2d75

[ec2-user@ip-172-31-61-197 ~]$ opensearch-benchmark compare -b 15cc39e9-92e2-44de-a102-73ce518b2d75 -c 4c0bb355-b671-4a45-90db-00cf54eaf9fe

Comparing baseline with contender
Metric | Task | Baseline | Contender | %Diff | Diff | Unit |
---|---|---|---|---|---|---|
Cumulative indexing time of primary shards | 24.432 | 24.432 | 0.00% | 0 | min | |
Min cumulative indexing time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative indexing time across primary shard | 0.46475 | 0.46475 | 0.00% | 0 | min | |
Max cumulative indexing time across primary shard | 1.84535 | 1.84535 | 0.00% | 0 | min | |
Cumulative indexing throttle time of primary shards | 0 | 0 | 0.00% | 0 | min | |
Min cumulative indexing throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative indexing throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Max cumulative indexing throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Cumulative merge time of primary shards | 1.78762 | 1.78762 | 0.00% | 0 | min | |
Cumulative merge count of primary shards | 28 | 28 | 0.00% | 0 | ||
Min cumulative merge time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative merge time across primary shard | 0.0417667 | 0.0417667 | 0.00% | 0 | min | |
Max cumulative merge time across primary shard | 0.117217 | 0.117217 | 0.00% | 0 | min | |
Cumulative merge throttle time of primary shards | 0 | 0 | 0.00% | 0 | min | |
Min cumulative merge throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative merge throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Max cumulative merge throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Cumulative refresh time of primary shards | 3.58655 | 3.58655 | 0.00% | 0 | min | |
Cumulative refresh count of primary shards | 388 | 388 | 0.00% | 0 | ||
Min cumulative refresh time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative refresh time across primary shard | 0.07675 | 0.07675 | 0.00% | 0 | min | |
Max cumulative refresh time across primary shard | 0.252517 | 0.252517 | 0.00% | 0 | min | |
Cumulative flush time of primary shards | 0.8765 | 0.8765 | 0.00% | 0 | min | |
Cumulative flush count of primary shards | 50 | 50 | 0.00% | 0 | ||
Min cumulative flush time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative flush time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Max cumulative flush time across primary shard | 0.1505 | 0.1505 | 0.00% | 0 | min | |
Total Young Gen GC time | 0 | 0 | 0.00% | 0 | s | |
Total Young Gen GC count | 0 | 0 | 0.00% | 0 | ||
Total Old Gen GC time | 0 | 0 | 0.00% | 0 | s | |
Total Old Gen GC count | 0 | 0 | 0.00% | 0 | ||
Store size | 3.29087 | 3.29087 | 0.00% | 0 | GB | |
Translog size | 2.04891e-06 | 2.04891e-06 | 0.00% | 0 | GB | |
Heap used for segments | 0 | 0 | 0.00% | 0 | MB | |
Heap used for doc values | 0 | 0 | 0.00% | 0 | MB | |
Heap used for terms | 0 | 0 | 0.00% | 0 | MB | |
Heap used for norms | 0 | 0 | 0.00% | 0 | MB | |
Heap used for points | 0 | 0 | 0.00% | 0 | MB | |
Heap used for stored fields | 0 | 0 | 0.00% | 0 | MB | |
Segment count | 25 | 25 | 0.00% | 0 | ||
Min Throughput | hourly_agg | 0.20682 | 0.205724 | -0.53% | -0.0011 | ops/s |
Mean Throughput | hourly_agg | 0.211195 | 0.209354 | -0.87% | -0.00184 | ops/s |
Median Throughput | hourly_agg | 0.210143 | 0.208488 | -0.79% | -0.00166 | ops/s |
Max Throughput | hourly_agg | 0.219761 | 0.216426 | -1.52% | -0.00334 | ops/s |
50th percentile latency | hourly_agg | 45.8745 | 803.3 | +1651.08% 🔴 | 757.425 | ms |
90th percentile latency | hourly_agg | 49.7933 | 813.639 | +1534.03% 🔴 | 763.846 | ms |
100th percentile latency | hourly_agg | 53.338 | 827.742 | +1451.88% 🔴 | 774.404 | ms |
50th percentile service time | hourly_agg | 41.7846 | 797.834 | +1809.40% 🔴 | 756.049 | ms |
90th percentile service time | hourly_agg | 44.3224 | 810.65 | +1728.99% 🔴 | 766.328 | ms |
100th percentile service time | hourly_agg | 50.0832 | 826.398 | +1550.05% 🔴 | 776.315 | ms |
error rate | hourly_agg | 0 | 0 | 0.00% | 0 | % |
Min Throughput | hourly_agg_with_filter | 0.205852 | 0.205893 | 0.02% | 4e-05 | ops/s |
Mean Throughput | hourly_agg_with_filter | 0.209567 | 0.209636 | 0.03% | 7e-05 | ops/s |
Median Throughput | hourly_agg_with_filter | 0.208684 | 0.208745 | 0.03% | 6e-05 | ops/s |
Max Throughput | hourly_agg_with_filter | 0.216806 | 0.216929 | 0.06% | 0.00012 | ops/s |
50th percentile latency | hourly_agg_with_filter | 699.418 | 699.89 | 0.07% | 0.47272 | ms |
90th percentile latency | hourly_agg_with_filter | 717.875 | 712.923 | -0.69% | -4.9521 | ms |
100th percentile latency | hourly_agg_with_filter | 742.912 | 716.146 | -3.60% | -26.7658 | ms |
50th percentile service time | hourly_agg_with_filter | 695.423 | 696.912 | 0.21% | 1.48901 | ms |
90th percentile service time | hourly_agg_with_filter | 714.297 | 708.445 | -0.82% | -5.85187 | ms |
100th percentile service time | hourly_agg_with_filter | 737.929 | 713.164 | -3.36% | -24.7657 | ms |
error rate | hourly_agg_with_filter | 0 | 0 | 0.00% | 0 | % |
@asimmahmood1 - This does not make sense to me. Have you verified that @timestamp is index-sorted? Because I would also expect the latency for ...
So the above benchmark ran without index sorting, since that isn't enabled by default for http_logs. After I explicitly set the sort, skiplist is still ~6x slower: from 41.6 ms to 267.3 ms. I'll get some flame graphs to see the difference.

[ec2-user@ip-172-31-61-197 ~]$ opensearch-benchmark compare -c c0099b2e-1f87-4c9f-9bf5-7777cfc409d3 -b 481f61d2-e8f4-41cf-9c08-79d8285942af

Comparing baseline with contender
Metric | Task | Baseline | Contender | %Diff | Diff | Unit |
---|---|---|---|---|---|---|
Cumulative indexing time of primary shards | 30.4435 | 30.4435 | 0.00% | 0 | min | |
Min cumulative indexing time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative indexing time across primary shard | 0.534992 | 0.534992 | 0.00% | 0 | min | |
Max cumulative indexing time across primary shard | 2.48933 | 2.48933 | 0.00% | 0 | min | |
Cumulative indexing throttle time of primary shards | 0 | 0 | 0.00% | 0 | min | |
Min cumulative indexing throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative indexing throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Max cumulative indexing throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Cumulative merge time of primary shards | 2.03565 | 2.03565 | 0.00% | 0 | min | |
Cumulative merge count of primary shards | 26 | 26 | 0.00% | 0 | ||
Min cumulative merge time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative merge time across primary shard | 0.0483833 | 0.0483833 | 0.00% | 0 | min | |
Max cumulative merge time across primary shard | 0.138283 | 0.138283 | 0.00% | 0 | min | |
Cumulative merge throttle time of primary shards | 0 | 0 | 0.00% | 0 | min | |
Min cumulative merge throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative merge throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Max cumulative merge throttle time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Cumulative refresh time of primary shards | 7.47983 | 7.47983 | 0.00% | 0 | min | |
Cumulative refresh count of primary shards | 383 | 383 | 0.00% | 0 | ||
Min cumulative refresh time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative refresh time across primary shard | 0.155075 | 0.155075 | 0.00% | 0 | min | |
Max cumulative refresh time across primary shard | 0.5468 | 0.5468 | 0.00% | 0 | min | |
Cumulative flush time of primary shards | 1.76652 | 1.76652 | 0.00% | 0 | min | |
Cumulative flush count of primary shards | 55 | 55 | 0.00% | 0 | ||
Min cumulative flush time across primary shard | 0 | 0 | 0.00% | 0 | min | |
Median cumulative flush time across primary shard | 0.000525 | 0.000525 | 0.00% | 0 | min | |
Max cumulative flush time across primary shard | 0.354567 | 0.354567 | 0.00% | 0 | min | |
Total Young Gen GC time | 0 | 0 | 0.00% | 0 | s | |
Total Young Gen GC count | 0 | 0 | 0.00% | 0 | ||
Total Old Gen GC time | 0 | 0 | 0.00% | 0 | s | |
Total Old Gen GC count | 0 | 0 | 0.00% | 0 | ||
Store size | 3.28917 | 3.28917 | 0.00% | 0 | GB | |
Translog size | 2.04891e-06 | 2.04891e-06 | 0.00% | 0 | GB | |
Heap used for segments | 0 | 0 | 0.00% | 0 | MB | |
Heap used for doc values | 0 | 0 | 0.00% | 0 | MB | |
Heap used for terms | 0 | 0 | 0.00% | 0 | MB | |
Heap used for norms | 0 | 0 | 0.00% | 0 | MB | |
Heap used for points | 0 | 0 | 0.00% | 0 | MB | |
Heap used for stored fields | 0 | 0 | 0.00% | 0 | MB | |
Segment count | 25 | 25 | 0.00% | 0 | ||
Min Throughput | hourly_agg | 0.206824 | 0.20647 | -0.17% | -0.00035 | ops/s |
Mean Throughput | hourly_agg | 0.211201 | 0.210603 | -0.28% | -0.0006 | ops/s |
Median Throughput | hourly_agg | 0.210148 | 0.209615 | -0.25% | -0.00053 | ops/s |
Max Throughput | hourly_agg | 0.219776 | 0.218682 | -0.50% | -0.00109 | ops/s |
50th percentile latency | hourly_agg | 42.8195 | 266.492 | +522.36% 🔴 | 223.672 | ms |
90th percentile latency | hourly_agg | 46.2467 | 274.303 | +493.13% 🔴 | 228.057 | ms |
100th percentile latency | hourly_agg | 50.6097 | 286.535 | +466.17% 🔴 | 235.926 | ms |
50th percentile service time | hourly_agg | 39.8683 | 262.648 | +558.79% 🔴 | 222.78 | ms |
90th percentile service time | hourly_agg | 41.6636 | 267.376 | +541.75% 🔴 | 225.713 | ms |
100th percentile service time | hourly_agg | 43.4268 | 280.933 | +546.91% 🔴 | 237.506 | ms |
error rate | hourly_agg | 0 | 0 | 0.00% | 0 | % |
Min Throughput | hourly_agg_with_filter | 0.206495 | 0.206545 | 0.02% | 5e-05 | ops/s |
Mean Throughput | hourly_agg_with_filter | 0.21065 | 0.21073 | 0.04% | 8e-05 | ops/s |
Median Throughput | hourly_agg_with_filter | 0.209657 | 0.209728 | 0.03% | 7e-05 | ops/s |
Max Throughput | hourly_agg_with_filter | 0.218766 | 0.218913 | 0.07% | 0.00015 | ops/s |
50th percentile latency | hourly_agg_with_filter | 249.561 | 246.415 | -1.26% | -3.14616 | ms |
90th percentile latency | hourly_agg_with_filter | 253.833 | 249.756 | -1.61% | -4.07779 | ms |
100th percentile latency | hourly_agg_with_filter | 256.109 | 249.985 | -2.39% | -6.1244 | ms |
50th percentile service time | hourly_agg_with_filter | 247.171 | 242.743 | -1.79% | -4.42773 | ms |
90th percentile service time | hourly_agg_with_filter | 251.315 | 245.252 | -2.41% | -6.06339 | ms |
100th percentile service time | hourly_agg_with_filter | 253.597 | 247.605 | -2.36% | -5.9918 | ms |
error rate | hourly_agg_with_filter | 0 | 0 | 0.00% | 0 | % |
Min Throughput | hourly_agg_with_filter_and_metrics | 0.20531 | 0.205329 | 0.01% | 2e-05 | ops/s |
Mean Throughput | hourly_agg_with_filter_and_metrics | 0.208663 | 0.208692 | 0.01% | 3e-05 | ops/s |
Median Throughput | hourly_agg_with_filter_and_metrics | 0.207871 | 0.207896 | 0.01% | 2e-05 | ops/s |
Max Throughput | hourly_agg_with_filter_and_metrics | 0.215178 | 0.215235 | 0.03% | 6e-05 | ops/s |
50th percentile latency | hourly_agg_with_filter_and_metrics | 1126.16 | 1116.47 | -0.86% | -9.68716 | ms |
90th percentile latency | hourly_agg_with_filter_and_metrics | 1143.5 | 1137.3 | -0.54% | -6.19846 | ms |
100th percentile latency | hourly_agg_with_filter_and_metrics | 1148.94 | 1146.98 | -0.17% | -1.96357 | ms |
50th percentile service time | hourly_agg_with_filter_and_metrics | 1123 | 1113.42 | -0.85% | -9.58574 | ms |
90th percentile service time | hourly_agg_with_filter_and_metrics | 1138.64 | 1132.45 | -0.54% | -6.18625 | ms |
100th percentile service time | hourly_agg_with_filter_and_metrics | 1146.07 | 1144.33 | -0.15% | -1.74585 | ms |
error rate | hourly_agg_with_filter_and_metrics | 0 | 0 | 0.00% | 0 | % |
[INFO] SUCCESS (took 0 seconds)
[ec2-user@ip-172-31-61-197 ~]$
With opensearch-project/opensearch-benchmark-workloads#700 merged in, tomorrow we should see 3.3 vs 3.2 in the nightly benchmarks, which should show the benefit of skiplist (without sort).
Description
Follow-up to the changes made in #19438 (comment).
I think it was this change:
5d45233
(#19438) showed better results, but I'm not seeing it again.
Related Issues

Resolves #[Issue number to be closed when this PR is merged]
Check List
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.