Add NAT64 to enable IPv6 provision in v4 only host #1567
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
Force-pushed from 1abf0ce to 6122b0f (Compare).
This PR is quite big, so I can also split it into smaller PRs. IPv6 is not ready in the dev env after this, but I just want to get some feedback and discussion on how to do the DNS stuff. The bot already pinged @elfosardo, but you might also want to run this in your own tests, because it is quite big and introduces new moving parts.
/test metal3-centos-e2e-integration-test-release-1-10 metal3-dev-env-integration-test-ubuntu-main
{% endif %}
{% if IP_STACK == 'v6' or IP_STACK == 'v4v6' %}
- 2001:4860:4860::8888
- {{ LOCAL_DNS_V6 }}
This is a little controversial. I don't think it really matters inside the dev env, as the dev env does not really use IPv6 itself, but this might break some users' use cases.
- Install Tayga as NAT64 service
- Install Bind9 and configure it as local DNS server with DNS64
- Add IPv6-specific addresses and logic to cluster, controlplane, and worker templates
- Update vars.md with new variables
- Update CNI configuration for IPv6

Signed-off-by: Nuutti Hakala <[email protected]>
Force-pushed from 6122b0f to cfafbd0 (Compare).
/test metal3-centos-e2e-integration-test-release-1-10 metal3-dev-env-integration-test-ubuntu-main
/cc @terror96
@tuminoid: GitHub didn't allow me to request PR reviews from the following users: terror96. Note that only metal3-io members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this: /cc @terror96
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/cc @terror96
Minor comments & suggestions
@@ -1,5 +1,7 @@
# CentOS specific worker kubeadm config
preKubeadmCommands:
- sysctl -w net.ipv6.conf.all.forwarding=1
Would there be a need to guard all changes (in this file) inside `{% if IP_STACK == "v6" %}`?
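For illustration, a minimal sketch of what such a guard could look like, reusing the IP_STACK variable and the sysctl line from this diff (the indentation and the exact condition are assumptions):

```yaml
# Minimal sketch: only enable IPv6 forwarding when an IPv6 stack is requested
preKubeadmCommands:
{% if IP_STACK == "v6" %}
  - sysctl -w net.ipv6.conf.all.forwarding=1
{% endif %}
```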
@@ -1,5 +1,7 @@
# CentOS specific controlplane kubeadm config
preKubeadmCommands:
- sysctl -w net.ipv6.conf.all.forwarding=1
Would there be a need to guard all changes (in this file) inside `{% if IP_STACK == "v6" %}`?
{# According to VRRP, link local address should come first with v6 #}
{{ CLUSTER_APIENDPOINT_LINK_LOCAL }}
{% endif %}
{{ CLUSTER_APIENDPOINT_IP }}
Same as earlier: is there an ...
spec:
controlPlaneEndpoint:
host: ${ CLUSTER_APIENDPOINT_HOST }
host: ${ CLUSTER_APIENDPOINT_IP }
Is there an original setup where we would prefer using CLUSTER_APIENDPOINT_HOST?
DOCKER_HUB_PROXY: "{{ lookup('env', 'DOCKER_HUB_PROXY') }}"
WORKING_DIR: "{{ lookup('env', 'WORKING_DIR') | default('/opt/metal3-dev-env', true) }}"
LOCAL_DNS_V6: "{{ lookup('env', 'LOCAL_DNS_V6') | default('fd00:abcd::1', true) }}"
How about keeping the original default value, '2001:4860:4860::8888', or is that value obsolete here?
CALICO_PATCH_RELEASE: "{{ lookup('env', 'CALICO_PATCH_RELEASE') | default('v3.25.1', true) }}"
DOCKER_HUB_PROXY: "{{ lookup('env', 'DOCKER_HUB_PROXY') }}"
WORKING_DIR: "{{ lookup('env', 'WORKING_DIR') | default('/opt/metal3-dev-env', true) }}"
LOCAL_DNS_V6: "{{ lookup('env', 'LOCAL_DNS_V6') | default('fd00:abcd::1', true) }}"
Traditionally, fd00::/8 ULA addresses have a 40-bit Global ID: https://en.wikipedia.org/wiki/Unique_local_address. Yes, it can be all zeros, but I would still pick something else, not that it really matters in this case.
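For reference, a hedged illustration of picking a non-zero Global ID per RFC 4193; the openssl call is just one way to get random bytes and is not part of this PR:

```sh
# Illustrative only: build a /48 ULA prefix with a random 40-bit Global ID (RFC 4193)
printf 'fd%s:%s:%s::/48\n' "$(openssl rand -hex 1)" "$(openssl rand -hex 2)" "$(openssl rand -hex 2)"
# e.g. fd3a:9c41:7b02::/48 — LOCAL_DNS_V6 could then be an address inside that prefix
```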
DNS64_PREFIX: "{{ lookup('env', 'DNS64_PREFIX') | default('fd00:ffff:ffff::/96', true) }}"
# Local DNS is only used in IPv6 only env
LOCAL_DNS_V4: "{{ lookup('env', 'LOCAL_DNS_V4') | default('127.0.0.2', true) }}"
LOCAL_DNS_V6: "{{ lookup('env', 'LOCAL_DNS_V6') | default('fd00:abcd::1', true) }}"
Same ULA nit.
shell:
  cmd: "ip addr del {{ LOCAL_DNS_V6 }}/128 dev lo"
become: yes
ignore_errors: true
We could also add IPv4 and IPv6 addresses to the tunnel device in order to enable sending of ICMP(v6) messages. It is a bit silly that they need to be added manually, because they also need to be specified in the Tayga configuration file, but that is how it is.
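A rough sketch of what that could look like, assuming the Tayga tunnel device is named nat64 and reusing the dynamic-pool and DNS64 prefix values from this PR; the concrete addresses are illustrative:

```sh
# Illustrative only: give the NAT64 tunnel device addresses so the kernel can originate ICMP/ICMPv6
ip link set nat64 up
ip addr add 192.168.255.1/24 dev nat64
ip addr add fd00:ffff:ffff::1/96 dev nat64
```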
ipv4-addr 192.168.255.1
prefix {{ DNS64_PREFIX }}
dynamic-pool 192.168.255.0/24
data-dir /var/spool/tayga
Maybe also an IPv6 address for ICMPv6 traffic.
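A hedged sketch of the Tayga configuration with an explicit IPv6 address; `ipv6-addr` is Tayga's directive for the translator's own IPv6 address, and the fd00:abcd::2 value is an assumption, picked outside the NAT64 prefix:

```
# Illustrative tayga.conf fragment; fd00:abcd::2 is only an example address
ipv4-addr 192.168.255.1
ipv6-addr fd00:abcd::2
prefix {{ DNS64_PREFIX }}
dynamic-pool 192.168.255.0/24
data-dir /var/spool/tayga
```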
"dns": [ "{{ DOCKER_LOCAL_DNS_V6 }}" ], | ||
{% endif %} | ||
"ipv6": {{ DOCKER_IPV6_SUPPORT }}, | ||
"fixed-cidr-v6": "fd00:d0c4::/32", |
Not your doing, but I would stick to a minimum of a 48-bit prefix. See: https://en.wikipedia.org/wiki/Unique_local_address
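For example, a daemon.json fragment along those lines; the concrete prefix is illustrative, the point is only the prefix length (a /64 subnet taken from a /48 ULA rather than a bare /32):

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:d0c4:0:1::/64"
}
```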
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
This adds NAT64 (Tayga) and DNS64 (Bind9) to the dev env so that IPv6-only provisioning works on an IPv4-only host.
Why do we need this? Previously, IPv6 support was added so that the dev env could create bare metal hosts over IPv6. This PR extends that so that we can also provision the bare metal hosts over IPv6, and the provisioned images will be IPv6 only. The tricky part is that our CI environment does not support IPv6 natively, so the VMs cannot access the internet over IPv6 and hence cannot download Kubernetes images and the other images needed to set up a cluster.

Introducing NAT64/DNS64 solves that issue. It essentially allows the dev env to be deployed in an IPv6-only scenario on an IPv4 host. Furthermore, this PR introduces the required changes to the templates so that they are configured to use IPv6.
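To make the DNS64 half concrete, a minimal sketch of the relevant BIND9 options, assuming the fd00:ffff:ffff::/96 DNS64_PREFIX default introduced in this PR; the forwarder address is illustrative, not what this PR necessarily configures:

```
options {
    // Synthesize AAAA records from A records using the NAT64 prefix
    dns64 fd00:ffff:ffff::/96 {
        clients { any; };
    };
    forwarders { 8.8.8.8; };   // assumed upstream IPv4 resolver
    listen-on-v6 { any; };
};
```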
NOTE! Running `make test` does not pass yet. However, these changes should allow IPv6-only BMHs to be provisioned with an operating system and a K8s cluster to be created on those BMHs. This currently works only with CentOS node images.

Other not directly related changes that could actually be in their own PR:
- Moved `vm-setup/roles/packages_installation/files/daemon.json` to `vm-setup/roles/packages_installation/templates/daemon.json` to better reflect the purpose of the file.

Other considerations:
- CRI-O cannot pull from the image registry when it is referenced by a bare IPv6 address: `cannot parse input: [fd55::1]:5000/localimages/cluster-api-provider-metal3:main`. I manually tested this, and CRI-O was able to pull images after creating a hostname in `/etc/hosts` and specifying that hostname instead of the bare IPv6 address (see the sketch below).
- Changed `CLUSTER_APIENDPOINT_HOST` into `CLUSTER_APIENDPOINT_IP` in some templates, because having brackets around the IPv6 address caused errors.
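A sketch of the /etc/hosts workaround described above; the hostname registry.ipv6.local is purely hypothetical:

```sh
# Illustrative only: map a hostname to the registry's IPv6 address so CRI-O can parse the image reference
echo "fd55::1 registry.ipv6.local" >> /etc/hosts
# then reference images as registry.ipv6.local:5000/localimages/cluster-api-provider-metal3:main
```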