ARTEMIS-5745 Unix socket - initial commit #6027
base: main
Conversation
What's the essential use-case for this change?
I would say it's proxying traffic to a Unix socket instead of a TCP port, for performance and/or security reasons, when running another web server on the same machine.
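Something like this, to sketch what I mean (the socket path and server name are just placeholders, not anything defined by this PR):

```
server {
    listen 80;
    server_name broker.example.com;

    location / {
        # forward console traffic to the embedded web server over a
        # Unix domain socket instead of a loopback TCP port
        proxy_pass http://unix:/var/run/artemis/web.sock:/;
    }
}
```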
I can see how a Unix socket might perform better than a TCP socket in a relative sense, but what's the use-case for actually needing this kind of performance increase for the embedded web server? I wouldn't expect the embedded web server to be under a heavy enough load for this to make a statistically significant difference.
I understand your point, but for me it's not about "needing" the extra performance; it's about having the option. I'm not sure what you mean by statistically significant differences. I haven't done extensive testing, but I would expect this to have a measurable, reproducible impact on page load times in the range of a couple of percent. This would matter a lot more if the load on the console were heavier, I completely agree. I'm sure a number of cases could be made for decoupling messaging traffic from web traffic, and for security. We're probably looking at this from very different perspectives :)
Why would you want to have the option if you don't need the extra performance (assuming the main use-case here really is related to performance)?
I mean a meaningful, measurable difference for a real-world use-case. For example, 10% faster average load times for the web console using the default configuration, 20% faster load times for the web console in high-load use-cases (e.g. lots of addresses & queues), etc. Something like a 2-3% improvement probably isn't worth the additional complexity (i.e. technical debt) here.
What kind of decoupling did you have in mind? Messaging traffic is handled via Netty by the core broker, which is already separate from web traffic, handled by the embedded Jetty instance. Both use independent thread pools. Also, traffic for each goes over different ports and potentially even different network interfaces. Can you elaborate on the security aspect here?
That's certainly possible. Since you specifically solicited opinions in the description of the Jira, I figured I'd jump in. I'm always trying to better understand how folks use the broker, and I'm not super familiar with Unix domain sockets, so I'm keen to learn about the use-case here. Aside from that, I'd like to reduce complexity and technical debt where it makes sense. I don't want to fall into premature optimization.
Maybe that was a poor choice of words. I'll try to explain my reasoning as best I can, and I hope you can understand the tone of the message :) "Need" is a strong word. You could say that no-one needs the extra 10% when accessing the web console. Regardless, for the sake of this argument, let's say that I don't "need" this extra 10% at this moment, because the web console seems to be handling things ok. It's nice to have the option, though, because I might need it in the future: I'm expecting my broker to come under heavier loads, my network to degrade, etc.
I thought that's what you were getting at. This is also why I wanted some opinions. You're right, this adds additional complexity, and so the question is whether it's worth it. Since I'm not an "expert" developer, I'm not entirely sure how big the technical debt of adding this feature could be. I've been trying to benchmark this today, but I'm getting wildly different results while benchmarking the same instance, so I don't want to mention any numbers at this moment. I intend to post the results here once I'm able to reproduce them.
I was referring to network traffic, if we assume that all the Netty connectors are on the same interface as Jetty, but you've already mentioned different interfaces.
I think most people would mention Linux file permissions here, but I don't think I'm suited to defend that point of view.
I look forward to that! Thanks for your investment here. 👍
This was a bit harder to test than I thought (I had some issues with reproducing results, Puppeteer memory usage, and so on). For now I have 3 comparisons. I used the same broker instance for all tests, configured via bootstrap.xml, and proxied both addresses via nginx with a minimal config. I tested the "status" console page manually (with Chrome DevTools) and the /console/artemis login screen with Puppeteer. The calls via Puppeteer to /console/artemis made 10 requests on average; manually testing the status page generated 76 requests.
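Roughly, the Puppeteer measurement looked like this (the URL, run count, and timing metric here are illustrative placeholders, not the exact script or numbers from my tests):

```js
// rough sketch of the page-load measurement described above
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const runs = 20;
  let total = 0;

  for (let i = 0; i < runs; i++) {
    const start = Date.now();
    // load the console login screen and wait until the network is idle
    await page.goto('http://localhost/console/artemis', { waitUntil: 'networkidle0' });
    total += Date.now() - start;
  }

  console.log(`average load time over ${runs} runs: ${(total / runs).toFixed(1)} ms`);
  await browser.close();
})();
```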
Do you plan on doing more testing? The current results aren't exactly compelling.