Edit the file /etc/ocfs2/cluster.conf. It should look like this:

node:
    ip_port = 7777
    ip_address = 10.0.0.10
    number = 0
    name = flexnode01
    cluster = ocfs2

node:
    ip_port = 7777
    ip_address = 10.0.0.11
    number = 1
    name = flexnode02
    cluster = ocfs2

cluster:
    node_count = 2
    name = ocfs2
Delete the node entry of the host you are removing, adjust the node numbers of the remaining nodes, and decrease the node count of the cluster. Be sure to write the same configuration on all nodes. For instance, if we removed node flexnode01 from the previous example, the resulting configuration file would be:
node:
    ip_port = 7777
    ip_address = 10.0.0.11
    number = 0
    name = flexnode02
    cluster = ocfs2

cluster:
    node_count = 1
    name = ocfs2
- Restart the cluster services:
# systemctl restart o2cb
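The edit described above can also be scripted. This is a minimal sketch with awk, not part of the original procedure: it assumes stanzas are separated by blank lines with the cluster stanza last (as in the example configuration above), works on a temporary copy of the sample data, and prints the result to stdout so you can review it before writing /etc/ocfs2/cluster.conf on every node.

```shell
# Sketch only: remove one node stanza from a copy of cluster.conf,
# renumber the remaining nodes and decrease node_count. The sample data
# below mirrors the two-node example above; flexnode01 is being removed.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
node:
    ip_port = 7777
    ip_address = 10.0.0.10
    number = 0
    name = flexnode01
    cluster = ocfs2

node:
    ip_port = 7777
    ip_address = 10.0.0.11
    number = 1
    name = flexnode02
    cluster = ocfs2

cluster:
    node_count = 2
    name = ocfs2
EOF
OUT=$(awk -v remove="flexnode01" '
    BEGIN { RS = ""; ORS = "\n\n" }               # one stanza per record
    /^node:/ {
        if ($0 ~ "name = " remove "(\n|$)") { removed++; next }
        sub(/number = [0-9]+/, "number = " n++)   # renumber survivors
        print; next
    }
    /^cluster:/ {
        match($0, /node_count = [0-9]+/)
        count = substr($0, RSTART + 13, RLENGTH - 13) - removed
        sub(/node_count = [0-9]+/, "node_count = " count)
        print
    }
' "$CONF")
rm -f "$CONF"
printf '%s\n' "$OUT"
```

The awk program only prints; installing the result on each node (and restarting o2cb there) is still up to you.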
Resizing an OCFS2 volume
It is possible to resize an OCFS2 volume to make it bigger (never smaller). Just follow these steps:
- Make a backup of the contents of the volume.
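The backup can be made, for instance, with tar. In this sketch the source directory is a temporary stand-in so the fragment is self-contained; substitute the mount point of your OCFS2 volume and an archive path outside the volume.

```shell
# Hypothetical paths: SRC stands in for the OCFS2 mount point
# (e.g. /mnt/ocfs2); the archive must live outside the volume.
SRC=$(mktemp -d)
echo "disk image placeholder" > "$SRC/guest01.img"
tar -C "$SRC" -czf /tmp/ocfs2-backup.tar.gz .
tar -tzf /tmp/ocfs2-backup.tar.gz   # list the archive to verify it
```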
- You need to unmount the volume, so:
- Stop all the guests with an image in that volume.
- Stop the flexvdi-agent service on all the hosts that share the volume. Otherwise, they will remount it as soon as they detect it is not mounted:
# systemctl stop flexvdi-agent
- Unmount the volume on all the hosts.
- On one host, perform a filesystem check. Assuming the filesystem is in partition /dev/sdb1:
# fsck.ocfs2 -fn /dev/sdb1
- Resize the underlying device to the desired capacity. This may be a logical volume in a shared storage cluster, for instance. How to do this is beyond the scope of this guide.
- Rescan the underlying device on all your hosts. Assuming the device is /dev/sdb, run on all the hosts:
# echo 1 > /sys/block/sdb/device/rescan
- If your device is part of a multipath device, rescan all its path devices as above. Then, assuming the multipath device is called mpatha, run on all the hosts:
# multipathd resize map /dev/mapper/mpatha
- Resize the underlying device partition. Assuming the device is /dev/sdb, run on one host only:
# parted -s /dev/sdb resizepart 1 100%
- Now, refresh the partition sizes on all your hosts.
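One way to do this, as an illustrative fragment to run as root on every host (assuming the device is /dev/sdb, as above):

```shell
# Re-read the partition table of /dev/sdb so the kernel picks up the
# new partition size (partprobe ships with the parted package).
partprobe /dev/sdb
# Alternative with util-linux: update the existing partition entries.
partx -u /dev/sdb
```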
- Resize the OCFS2 filesystem and check it again:
# tunefs.ocfs2 -S /dev/sdb1
# fsck.ocfs2 -fn /dev/sdb1
- Finally, start the flexvdi-agent service on all your hosts again, and they will mount the volume in the right place.
Accessing shared storage
The following sections explain how to use shared storage as an Image Storage and access its Volumes, and how to move your flexVDI Manager instance to a shared storage Volume to provide high availability.