Proposed guest blog post about containers, public IPs and Firewalld port forwarding #29
Conversation
Thanks for submitting your post! I have some suggestions.
## The Scenario

You have some LXC containers running on a host. The default LXD setup creates a virtual bridge to which all the containers are connected, and they have their own private network, say in the 10.10.1.0/24 subnet.

You use Firewalld to forward ports from the public internet to the containers. In this scenario, when a container does a DNS lookup to which the answer is the public IP address of the LXD host, and the container then tries to connect to, say, port 80 on that public IP, it will fail. Why? The HTTP request is received on the input chain by firewalld, no processing is required as it appears to be destined for the public internet. Firewalld outputs the packet to the public interface of the host. Since the HTTP Proxy is not bound to the public interface, it is instead reached via port forwards in Firewalld, the connection fails. This is because Firewalld handles the port forwarding and, after outputting the packet to the public interface, it never returns, and therefore Firewalld cannot process the port forward.
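The failure mode can be reproduced from inside a container. A minimal sketch, assuming the placeholder addresses used later in the post (123.123.123.123 as the host's public IP, 10.10.1.20 as the proxy container):

```shell
# From inside a container, the service is reachable via its private address,
# because that traffic stays on the LXD bridge:
curl -s http://10.10.1.20/

# ...but the same service via the host's public IP hangs and times out,
# even though the same port forward works fine from the outside:
curl -s --max-time 5 http://123.123.123.123/
```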
> HTTP request is received on the input chain by firewalld, no processing is required as it appears to be destined for the public internet.
You are correct that it's received on INPUT, but that means it's destined to the LXC HOST. Not the internet.
> Since the HTTP Proxy is not bound to the public interface, it is instead reached via port forwards in Firewalld, the connection fails.
Rearranging the wording would help here.
"Since the HTTP Proxy is not bound to the public interface the connection fails. It instead must be reached via port forwards in Firewalld."
Apologies, the first point was phrased poorly. I was trying to convey that it "appears", from the Firewalld perspective, that the packet is destined to be OUTPUT, so no further processing will be done.
In any case I am happy to correct it. I will also amend the second point; I agree my version is clunky.
Thank you
## The Solution

Destination NAT rules in firewalld are the solution here.

I understand NAT would appear to be an obvious answer. Indeed, if you simply enable masquerading on the zone which contains the container virtual network, this will begin to work, but it has the unintended consequence of also source NATing all incoming requests to the containers. This means client IPs will no longer be visible to applications running in containers.
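The masquerade approach described above is a one-liner. A sketch, assuming the container virtual network is bound to the `trusted` zone as in the examples below:

```shell
# Enable source NAT (masquerading) on the zone holding the container bridge.
# This makes the hairpin connections work, but rewrites the source address
# of forwarded traffic, hiding real client IPs from the containers.
sudo firewall-cmd --zone=trusted --add-masquerade

# To undo it:
sudo firewall-cmd --zone=trusted --remove-masquerade
```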
> this has the unintended consequence of also source NATing all incoming requests to the containers. This means client IPs will no longer visible to applications running in containers.

That's surprising. It should not be the case. Source NAT (masquerade) should only happen for traffic leaving the LXC host and destined to the internet/LAN.
I am almost certain this is what is happening since this is what I was initially trying to solve.
When I enabled masquerading on the zone which dealt with the container virtual network, I was able to reach the port forwards in Firewalld, specifically the port forwarding of ports 80 and 443 to the NGINX container. However, there was a problem: NGINX could not ascertain the HTTP client IP address; instead, every request appeared to originate from the Firewalld host.
When I turned off masquerading, NGINX was still accessible from the public internet via the Firewalld port forwarding, and the HTTP client IP address was correct. Obviously, it was no longer accessible from the internal virtual container network.
This is why I concluded that masquerading was also resulting in SRC-NAT.
Of course, I could be wrong. I am happy to make any edit you may suggest. I think an explanation of why "--add-masquerade" doesn't work and the rich rule does is necessary though.
Thank you.
I follow you now. I reworded it a bit below. What do you think?
> NAT would appear to be an obvious answer. If you enable masquerading (source NAT) on the zone which contains the container virtual network, e.g. `trusted`, the traffic will pass. Unfortunately it has the unintended consequence of source NAT-ing all incoming requests to the containers. This means client IPs will no longer be visible to applications running in containers.

However, I'm still not convinced the traffic should have worked with `--zone trusted --add-masquerade`.
What version of firewalld are you using?
Destination NAT is applied on the input chain, before the routing decision, where it modifies the destination IP address of the packet. In the example the diagram describes, Firewalld recognises that the destination IP for the HTTP request is the public IP of the host. It then takes the packet and changes the destination IP address to the internal IP address of the Web Proxy Container.
> Destination NAT is applied on the input chain

It's actually applied on the `prerouting` chain in the `nat` table (`nat` hook type in nftables).
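For illustration, the same DNAT expressed directly in nftables lives in a chain of `nat` hook type attached at `prerouting`. A hand-rolled sketch (not what firewalld generates verbatim), using the placeholder addresses from the post:

```shell
# Create a nat table with a prerouting chain (nat hook, standard dstnat priority)
sudo nft add table ip nat
sudo nft 'add chain ip nat prerouting { type nat hook prerouting priority -100; }'

# Rewrite the destination of packets aimed at the public IP before the
# routing decision, so they are forwarded to the container instead of
# being delivered locally via INPUT.
sudo nft add rule ip nat prerouting ip daddr 123.123.123.123 tcp dport 80 dnat to 10.10.1.20:80
```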
Thank you, I will correct this.
1. Execute these commands on the Firewalld host:
```
sudo firewall-cmd --zone=trusted --add-rich-rule='rule family="ipv4" destination address="123.123.123.123" forward-port port="80" protocol="tcp" to-port="80" to-addr="10.10.1.20"'
```
You can replace `123.123.123.123` with `192.0.2.123`. The latter is part of the reserved example network, `192.0.2.0/24`.
Not required. Just a suggestion.
Happy to change it; may I suggest "203.0.113.1"? This is also part of a reserved example network according to your link. However, it has the advantage of not beginning with 192, so it is less likely to be confused with a private address, as those often begin with 192.
Thank you.
```
sudo firewall-cmd --zone=trusted --add-rich-rule='rule family="ipv4" destination address="123.123.123.123" forward-port port="80" protocol="tcp" to-port="80" to-addr="10.10.1.20"'
sudo firewall-cmd --zone=trusted --add-rich-rule='rule family="ipv4" destination address="123.123.123.123" forward-port port="443" protocol="tcp" to-port="443" to-addr="10.10.1.20"'
```
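To confirm the runtime rules were accepted, the zone's rich rules can be listed. A quick check, assuming the `trusted` zone as above:

```shell
# List the rich rules currently active in the zone; the two forward-port
# rules added above should appear in the output.
sudo firewall-cmd --zone=trusted --list-rich-rules
```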
**Explainer:**
Suggestion: "Explanation" instead of "Explainer".
- You will need to make sure `--zone=` matches the zone you have your container virtual network bound to.
- In this example 123.123.123.123 is our public IP address; change this to match yours.
- In this example 10.10.1.20 is the internal IP address of the container running the HTTP reverse proxy. Change this IP to match your setup.
- If you add other protocols which were handled by port forwarding, you would just continue adding rules with the appropriate port numbers.eriy
Extra string "eriy" at the end.
```
sudo firewall-cmd --permanent --zone=trusted --add-rich-rule='rule family="ipv4" destination address="123.123.123.123" forward-port port="80" protocol="tcp" to-port="80" to-addr="10.10.1.20"'
sudo firewall-cmd --permanent --zone=trusted --add-rich-rule='rule family="ipv4" destination address="123.123.123.123" forward-port port="443" protocol="tcp" to-port="443" to-addr="10.10.1.20"'
```
**Explainer:**
Suggestion: "Explanation" instead of "Explainer".
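As an alternative to retyping each rule with `--permanent`, the current runtime configuration can be saved wholesale. A sketch:

```shell
# Persist the current runtime configuration (including the rich rules
# added above) into the permanent configuration in one step.
sudo firewall-cmd --runtime-to-permanent
```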
Hello,
I was told in IRC that the Firewalld blog sometimes publishes guest posts and it was suggested my post may be suitable.
While a simple problem, I found it very hard to find the solution to the challenge elsewhere on the internet, and it was a eureka moment in an IRC channel that led me to it. I think the reason it may be so difficult to find the answer is that simply applying `--add-masquerade` to a zone will get things working. However, this has the unintended consequence of also doing SRC-NAT. So the solution I finally found had the small nuance of only doing DST-NAT. It's hard to get a search engine to seek out this nuance.
If there are any questions or suggested edits, I'd be happy to help.
The original post is at https://www.dfoley.ie/blog/access-to-public-ip-from-vms-containers-using-firewalld.
Thank you.