* Remove redundant Dockerfile ARG
* Allow overriding DOCKER_BASE_IMAGE when building Docker image
* Add make target that allows building ezs in a specific OTP context
This was necessary to test that the fixes for #75 work in RabbitMQ
v3.7.9 & OTP v21.2.5, the combo that ships in Docker image
rabbitmq:3.7.9. This is the first RabbitMQ Docker image that supports
prometheus_rabbitmq_exporter (compiled with OTP v21.0.2). Docker image
rabbitmq:3.7.8 ships with OTP v20.3.8.5.
Note that I disable prometheus_process_collector in the Docker build
context: it fails to build because the erlang:21.2.5 Docker image is
missing the native libs it needs. I didn't want to shave that yak as well.
* Fix metric mappings to ETS tables
In RabbitMQ v3.7.x, the following mappings were not valid and would
result in crashes when a producer/consumer connected to RabbitMQ:
* connection_metrics
  * connection_recv_count, recv_count
  * connection_send_count, send_count
* channel_queue_metrics
  * channel_queue_get, get
  * channel_queue_get_no_ack, get_no_ack
  * channel_queue_deliver, deliver
  * channel_queue_deliver_no_ack, deliver_no_ack
  * channel_queue_redeliver, redeliver
  * channel_queue_ack, ack
  * channel_queue_get_empty, get_empty
The following metric was mapped incorrectly:
* connection_churn_metrics
  * 7, queue_deleted
There is more line churn in this commit than strictly necessary: I was
re-formatting all metrics as I went over the mappings, which also serves
as confirmation that I checked every mapping for accuracy.
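For context, this is roughly what "mapping a metric to an ETS table" means here. The sketch below uses made-up module, table, and helper names and illustrative tuple positions; it is not the actual collector code, which lives under src/collectors/. The idea is that RabbitMQ core keeps counters in ETS tables (connection_metrics, channel_queue_metrics, connection_churn_metrics, ...) and the exporter maps each exported metric to a table plus a position within each row; a wrong table or position yields garbage values or a crash on a row shape the collector does not expect.

```erlang
%% Sketch only: module, table names, and positions are assumptions,
%% not the plugin's real mappings.
-module(mapping_sketch).
-export([demo/0]).

%% metric name -> {ets table, 1-based position of the counter in each row}
mapping(recv_count)    -> {conn_coarse_example, 2};
mapping(send_count)    -> {conn_coarse_example, 3};
mapping(queue_deleted) -> {conn_churn_example, 7}.

%% Sum one counter across every row of the mapped table.
collect(MetricName) ->
    {Table, Pos} = mapping(MetricName),
    ets:foldl(fun(Row, Acc) -> Acc + element(Pos, Row) end, 0, Table).

%% Self-contained demo with fake rows shaped like {ConnId, Recv, Send}.
demo() ->
    conn_coarse_example = ets:new(conn_coarse_example, [named_table]),
    true = ets:insert(conn_coarse_example, [{conn1, 10, 4}, {conn2, 5, 1}]),
    15 = collect(recv_count),  %% element 2 of each row
    5  = collect(send_count),  %% element 3 of each row
    ok.
```

If the mapping pointed at the wrong position (say 7 where the row only has 3 elements), element/2 would raise badarg on the first scrape, which is the kind of crash the corrected mappings avoid.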
I've tested manually against the current v3.7.x branch via `make run-broker`,
as well as against the rabbitmq:3.7.9 Docker image:
make distclean
make otp-21.2.5-build-context
make /usr/local/bin/elixir
apt update && apt install zip
make ezs
^D
make docker_build DOCKER_BASE_IMAGE=rabbitmq:3.7.9-management
docker run -it -p 5672:5672 -p 15672:15672 deadtrickster/rabbitmq_prometheus:3.7
cd ~/github.com/rabbitmq/rabbitmq-perf-test
make run ARGS="-x 1 -y 1 -r 1"
curl localhost:15672/api/metrics
# NO CRASHES!
Fixes #75
cc @dcorbacho
While load testing my environment, I found that whenever a queue is in the "flow" state, the plugin crashes with the error message copied below (from RabbitMQ's crash.log file).
    2020-07-08 18:39:39 =CRASH REPORT====
      crasher:
        initial call: cowboy_stream_h:request_process/3
        pid: <0.1663.0>
        registered_name: []
        exception error: {{case_clause,flow},
          [{prometheus_rabbitmq_queues_collector,'-collect_mf/2-fun-3-',1,
               [{file,"src/collectors/prometheus_rabbitmq_queues_collector.erl"},{line,75}]},
           {prometheus_rabbitmq_queues_collector,'-collect_metrics/2-lc$^1/1-0-',3,
               [{file,"src/collectors/prometheus_rabbitmq_queues_collector.erl"},{line,102}]},
           {prometheus_model_helpers,create_mf,5,
               [{file,"/home/dead/Projects/rabbitmq/prometheus_rabbitmq_exporter/deps/prometheus/src/model/prometheus_model_helpers.erl"},{line,127}]},
           {prometheus_rabbitmq_queues_collector,mf,3,
               [{file,"src/collectors/prometheus_rabbitmq_queues_collector.erl"},{line,94}]},
           {prometheus_rabbitmq_queues_collector,'-collect_mf/2-lc$^4/1-2-',3,
               [{file,"src/collectors/prometheus_rabbitmq_queues_collector.erl"},{line,75}]},
           {prometheus_rabbitmq_queues_collector,collect_mf,2,
               [{file,"src/collectors/prometheus_rabbitmq_queues_collector.erl"},{line,75}]},
           {prometheus_collector,collect_mf,3,
               [{file,"/home/dead/Projects/rabbitmq/prometheus_rabbitmq_exporter/deps/prometheus/src/prometheus_collector.erl"},{line,153}]},
           {prometheus_registry,'-collect/2-lc$^0/1-0-',3,
               [{file,"/home/dead/Projects/rabbitmq/prometheus_rabbitmq_exporter/deps/prometheus/src/prometheus_registry.erl"},{line,86}]}]}
        ancestors: [<0.1662.0>,<0.632.0>,<0.631.0>,rabbit_web_dispatch_sup,<0.616.0>]
        message_queue_len: 0
        messages: []
        links: [#Port<0.329>,<0.1662.0>]
        dictionary: [{{xtype_to_module,topic},rabbit_exchange_type_topic},
                     {{xtype_to_module,direct},rabbit_exchange_type_direct},
                     {{xtype_to_module,headers},rabbit_exchange_type_headers},
                     {{xtype_to_module,fanout},rabbit_exchange_type_fanout}]
        trap_exit: false
        status: running
        heap_size: 10958
        stack_size: 27
        reductions: 12436045
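The exception is a {case_clause,flow}: the queues collector evaluates a case expression over the queue state at prometheus_rabbitmq_queues_collector.erl line 75, and that expression has no clause for the atom flow, which RabbitMQ reports when flow control kicks in under load. Below is a minimal sketch of that failure mode and a defensive way out; the module, function, and state names are assumptions, not the plugin's actual code.

```erlang
%% Sketch only: names and values are illustrative, not the collector's code.
-module(flow_state_sketch).
-export([demo/0]).

%% A case expression that enumerates queue states but omits 'flow' raises
%% {case_clause,flow} as soon as a loaded queue enters flow control -- the
%% crash shown above. Adding a 'flow' clause (and a catch-all) avoids it.
queue_state_value(State) ->
    case State of
        running -> 1;
        idle    -> 2;
        syncing -> 3;
        flow    -> 4;   %% previously unhandled -> {case_clause,flow}
        _Other  -> 0    %% unknown future states should not crash a scrape
    end.

demo() ->
    4 = queue_state_value(flow),
    0 = queue_state_value(some_new_state),
    ok.
```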