Intro

As part of my home lab rebuild, I updated the IP addressing that I use within the lab. This meant updating the IP addresses of my three-node Proxmox cluster to align with the new addressing scheme. There is a lot of info out there on how to do this and most of it is out of date. I am documenting this process for future Brad and, if you are reading this post, maybe you too 🥳

Software Versions

The following software versions were used in this post.

  • Proxmox - 7.4-17

Migration Steps

The following high-level steps are required to update the IP addressing and ensure the cluster is working correctly post-migration.

  • Update the /etc/pve/corosync.conf file from a single node.
  • Update the /etc/hosts file on the target node.
  • Update the /etc/network/interfaces file on the target node.
  • Restart the networking service on the target node.
  • Restart the pve-cluster service on the target node.
  • Restart the corosync service on all nodes.

Once this is completed on all nodes, the following steps finish off the migration.

  • Update the /etc/pve/priv/known_hosts file with the new IP addresses of all nodes.
  • SSH to all nodes from all other nodes.

The following table lists the Proxmox node IP addressing change plan.

Node    Old IP Address    New IP Address
pmx01   172.31.255.10     10.100.52.10
pmx02   172.31.255.11     10.100.52.11
pmx03   172.31.255.12     10.100.52.12

Corosync

The corosync process synchronizes the cluster configuration between nodes. The /etc/pve/corosync.conf file defines the cluster sync parameters. When making changes to the corosync.conf file, the config_version needs to be incremented. This ensures that the nodes in the cluster are aware of the changes and can sync the new configuration.

Note
Changes to the /etc/pve/corosync.conf file only need to be made on one node. The changes will be synced to the other nodes in the cluster.

The ring0_addr parameter of each node defines the IP address of the node. This should be updated to the new IP address of the node.

I recommend changing the IP address of the nodes one at a time. This ensures that the cluster remains operational during the change. Each time you edit the file, the config_version needs to be incremented. I only changed this file on my primary node pmx01 for each iteration.

/etc/pve/corosync.conf
nodelist {
  node {
    name: pmx01
    ring0_addr: 172.31.255.10
  }
  node {
    name: pmx02
    ring0_addr: 172.31.255.11
  }
  node {
    name: pmx03
    ring0_addr: 10.100.52.12
  }
}

totem {
  config_version: 420
}
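
If you want a safety net while editing, a common approach (and, as I understand it, the one suggested in the Proxmox docs) is to edit a copy of the file and move it into place once you are done, so the cluster never picks up a half-finished edit. A minimal sketch:

cmd
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
nano /etc/pve/corosync.conf.new
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf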

Hosts File

Update the /etc/hosts file to use the new IP address.

/etc/hosts
10.100.52.12 pmx03.krazy.house pmx03
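
To confirm the node now resolves its own hostname to the new address, a quick check:

cmd
hostname --ip-address

For pmx03, this should return 10.100.52.12.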

Interfaces File

Update the /etc/network/interfaces file to use the new IP address. I am using the vlan52 interface in this example.

You can either alter the existing interface or delete the existing one and create a new one. I prefer to create a new interface and comment out the old one in place (a sketch of the commented-out stanza follows the note below). This allows me to quickly roll back to the old IP addressing if required.

/etc/network/interfaces
auto vlan52
iface vlan52 inet static
        address 10.100.52.12/23
        gateway 10.100.52.1
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=52
iface vlan52 inet6 static
        address fd00:10:100:52::12/64
        gateway fd00:10:100:52::1
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=52
Note
I am using Open vSwitch in my Proxmox cluster. If you are not using Open vSwitch, your config will be different. Consult the Proxmox documentation for the correct configuration.
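
For reference, here is roughly what the commented-out old interface looks like in my file. The /24 prefix, gateway, and VLAN tag below are illustrative guesses; use whatever your old config actually contained.

/etc/network/interfaces
#auto vlan90
#iface vlan90 inet static
#        address 172.31.255.12/24
#        gateway 172.31.255.1
#        ovs_type OVSIntPort
#        ovs_bridge vmbr0
#        ovs_options tag=90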

Restart Services

Once all this is done, the services need to be restarted.

Network Service

To load the new interface config, restart the networking service.

cmd
systemctl restart networking.service
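
To verify that the interface picked up the new address, iproute2 gives a quick summary:

cmd
ip -br addr show vlan52

You should see 10.100.52.12/23 (and the IPv6 address) listed against vlan52.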

PVE Cluster Service

On the target node, restart the pve-cluster service.

cmd
systemctl restart pve-cluster.service

Corosync Service

On all cluster nodes, restart the corosync service.

cmd
systemctl restart corosync.service

Validate

Once this is done, check the status of the cluster with the following command.

cmd
pvecm status

This should show that the cluster is in a stable state and that the IP address of the node you just changed is correct.
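
If you want a second opinion from corosync itself, its config tool prints the local node's link status, including the address it is using:

cmd
corosync-cfgtool -s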

Repeat

When all the services have been restarted, the node should be operational with the new IP addressing. Wait until everything is stable, then rinse and repeat for the other nodes in the cluster.

SSH

Once all the nodes have been updated, update the /etc/pve/priv/known_hosts file with the new IP addresses of all nodes. Then SSH to each node from all other nodes. This confirms the host keys are trusted and ensures that the nodes can seamlessly reach each other.
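
A small loop run from each node covers the "SSH to all nodes" step; the IPs below match my plan above:

cmd
for ip in 10.100.52.10 10.100.52.11 10.100.52.12; do ssh root@$ip hostname; done

I believe pvecm updatecerts can also refresh the cluster's known_hosts entries, but double-check the Proxmox docs for your version.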

Clean Up

Existing Interface

Previously, I had a vlan90 interface as the management interface. I am no longer using this interface so I need to remove it. Deleting the configuration for vlan90 from the /etc/network/interfaces file does not clear the existing IP address from the running configuration.

To clear the IP addressing from the vlan90 interface, use the following command.

cmd
ip addr flush dev vlan90

This will clear the IP addressing, but the interface will still be present in the running configuration. However, the interface will be removed on the next reboot.
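
If you would rather not wait for a reboot, and the interface is an OVS internal port like mine, it should be removable from the bridge directly. This assumes vlan90 hangs off vmbr0, as in my config:

cmd
ovs-vsctl del-port vmbr0 vlan90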

Issue File

The IP address in the /etc/issue file needs to be updated to the new IP address of the node.
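
The /etc/issue file is plain text, so a quick sed does the job. Using pmx03's addresses as the example:

cmd
sed -i 's/172.31.255.12/10.100.52.12/' /etc/issue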

Outro

Hopefully, this post has been helpful. I was a bit nervous about bricking my cluster during this process. Thankfully, it was pretty straightforward and the cluster remained operational during the change.

Until next time, happy labbing! 🤘