published: 18th of February 2024
As part of my home lab rebuild, I updated the IP addressing that I use within the lab. I needed to update the IP addressing for my three-node Proxmox cluster to align with the new scheme. There is a lot of info out there on how to do this, and most of it is out of date. I am documenting this process for future Brad and, if you are reading this post, maybe you too 🥳
The following software versions were used in this post.
The following high-level steps are required to update the IP addressing and ensure the cluster is working correctly post-migration. On each node, one at a time:

- Update the node's ring0_addr and increment config_version in /etc/pve/corosync.conf
- Update /etc/hosts with the new IP address
- Update /etc/network/interfaces with the new IP address
- Restart the networking, pve-cluster, and corosync services
- Verify cluster health with pvecm status

Once this is completed on all nodes, the following steps finish off the migration:

- Update /etc/pve/priv/known_hosts and SSH between all nodes
- Remove any old management interfaces
- Update /etc/issue with the new IP address
The following table lists the Proxmox node IP addressing change plan.
Node | Old IP Address | New IP Address |
---|---|---|
pmx01 | 172.31.255.10 | 10.100.52.10 |
pmx02 | 172.31.255.11 | 10.100.52.11 |
pmx03 | 172.31.255.12 | 10.100.52.12 |
The corosync process synchronizes the cluster configuration. The /etc/pve/corosync.conf file defines the cluster sync parameters. When making changes to the corosync.conf file, the config_version needs to be incremented. This ensures that the nodes in the cluster are aware of the change and sync the new configuration.
The ring0_addr parameter of each node defines the IP address of the node. This should be updated to the new IP address of the node.
I recommend changing the IP address of the nodes one at a time. This ensures that the cluster remains operational during the change. The config_version needs to be incremented on each pass. I only changed this file on my primary node, pmx01, for each iteration.
nodelist {
  node {
    name: pmx01
    ring0_addr: 172.31.255.10
  }
  node {
    name: pmx02
    ring0_addr: 172.31.255.11
  }
  node {
    name: pmx03
    ring0_addr: 10.100.52.12
  }
}

totem {
  config_version: 420
}
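The edit can be done by hand, but it is worth sanity-checking the substitutions first. Here is a minimal sketch that rehearses the two changes (new ring0_addr, bumped config_version) on a scratch copy; on a real node you would take a backup and then apply the same sed expressions to /etc/pve/corosync.conf:

```shell
# Rehearse the corosync.conf edits on a scratch copy first.
# On the node itself: cp /etc/pve/corosync.conf /root/corosync.conf.bak
conf=$(mktemp)
cat > "$conf" <<'EOF'
node {
  name: pmx03
  ring0_addr: 172.31.255.12
}
totem {
  config_version: 420
}
EOF

# Point pmx03's ring0_addr at its new address...
sed -i 's/ring0_addr: 172\.31\.255\.12/ring0_addr: 10.100.52.12/' "$conf"

# ...and bump config_version so the other nodes pick up the change.
sed -i 's/config_version: 420/config_version: 421/' "$conf"

grep -n 'ring0_addr\|config_version' "$conf"
```

Once the scratch copy looks right, run the same substitutions against the live file on the primary node.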
Update the /etc/hosts file to use the new IP address.
10.100.52.12 pmx03.krazy.house pmx03
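The swap is a one-line sed; this sketch rehearses it on a scratch copy with the entry above (point the same expression at /etc/hosts on the node itself):

```shell
# Rewrite the node's /etc/hosts entry, rehearsed on a scratch copy.
hosts=$(mktemp)
echo '172.31.255.12 pmx03.krazy.house pmx03' > "$hosts"

# Anchor on the old address at the start of the line.
sed -i 's/^172\.31\.255\.12 /10.100.52.12 /' "$hosts"

cat "$hosts"   # 10.100.52.12 pmx03.krazy.house pmx03
```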
Update the /etc/network/interfaces file to use the new IP address. I am using the vlan52 interface in this example.
You can either alter the existing interface stanza or delete it and create a new one. I prefer to create a new stanza and comment out the old one in place. This allows me to quickly roll back to the old IP addressing if required.
auto vlan52
iface vlan52 inet static
    address 10.100.52.12/23
    gateway 10.100.52.1
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=52

iface vlan52 inet6 static
    address fd00:10:100:52::12/64
    gateway fd00:10:100:52::1
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=52
Once all this is done, the services need to be restarted.
To load the new interface config restart the networking service.
systemctl restart networking.service
On the target node restart the pve-cluster service.
systemctl restart pve-cluster.service
On all cluster nodes, restart the corosync service.
systemctl restart corosync.service
Once this is done, check the status of the cluster with the following command.
pvecm status
This should show the cluster is in a stable state and the IP address of the node you just changed is correct.
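Since the next node should only be touched once the cluster is healthy, a tiny helper can gate the loop. This is a hypothetical check_quorum function and it assumes pvecm status reports a "Quorate: Yes" line; adjust the pattern if your output differs:

```shell
# Hypothetical helper: succeeds only if the status output reports the
# cluster as quorate. Assumes pvecm status prints a "Quorate: Yes" line.
check_quorum() {
  echo "$1" | grep -q 'Quorate:[[:space:]]*Yes'
}

# On a real node you would feed it live output: check_quorum "$(pvecm status)"
sample='Quorate:          Yes'
if check_quorum "$sample"; then
  echo "cluster is quorate"
fi
```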
When all the services have been restarted, the node should be operational with the new IP addressing. Wait until everything is stable, then rinse and repeat for the other nodes in the cluster.
Once all the nodes have been updated, update the /etc/pve/priv/known_hosts file with the new IP addresses on each node. Then SSH to each node from all other nodes. This will ensure that the nodes can seamlessly reach each other.
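That round of SSH connections is easy to fat-finger, so I like to generate the full mesh. A sketch, using this lab's node names, that prints every connection to make; drop the echo to actually run them:

```shell
# Every node should SSH to every other node once so the new host keys
# get accepted. echo makes this a dry run; remove it to execute for real.
nodes="pmx01 pmx02 pmx03"
for src in $nodes; do
  for dst in $nodes; do
    if [ "$src" != "$dst" ]; then
      echo ssh "root@$src" "ssh -o StrictHostKeyChecking=accept-new root@$dst true"
    fi
  done
done
```

For three nodes this is six connections (each node to the two others).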
Previously, I had a vlan90 interface as the management interface. I am no longer using this interface so I need to remove it. Deleting the configuration for vlan90 from the /etc/network/interfaces file does not clear the existing IP address from the running configuration.
To clear the IP addressing from the vlan90 interface, use the following command.
ip addr flush dev vlan90
This will clear the IP addressing, but the interface will still be present in the running configuration. However, the interface will be removed on the next reboot.
The IP address in the /etc/issue file needs to be updated to the new IP address of the node.
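Another quick sed, rehearsed here on a scratch copy; the banner line is an assumption based on Proxmox's usual https://<ip>:8006 console URL, so check your actual /etc/issue contents first:

```shell
# Swap the old address in the console banner, rehearsed on a scratch copy.
# The banner content here is an assumed example; run against /etc/issue on the node.
issue=$(mktemp)
echo 'https://172.31.255.12:8006/' > "$issue"

sed -i 's/172\.31\.255\.12/10.100.52.12/' "$issue"

cat "$issue"   # https://10.100.52.12:8006/
```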
Hopefully, this post has been helpful. I was a bit nervous about bricking my cluster during this process. Thankfully, it was pretty straightforward and the cluster remained operational throughout the change.
Until next time, happy labbing! 🤘