Getting multi-node Microstack Instance Migration to work

I recently started work on deploying Microstack to replace a lot of manual management of VMs and networks. Why Microstack? Because I'm very lazy, and doing a full Openstack install on more than one server does not sound like my idea of a good time.

Getting started with Microstack is very easy: just a snap install on each host and a couple of quick commands and you're up and running with a multi-node Openstack environment. However, do much more than spin up the CirrOS VM covered in the tutorial and you'll start to run into issues. Note that this guide currently covers Ubuntu 20.04.2 LTS and Microstack version Ussuri 2020-11-23 (222), installed via the guide available here.

(Edit: This has been fixed as of snap 233.) The first issue I discovered was that image files over one gigabyte could not be uploaded via the web UI, and thanks to the way snaps work it does a great job of hiding where the image files live (/var/snap/microstack/common/images/). The issue is caused by the default nginx config not allowing large file uploads and can be fixed by editing the following file:

sudo vi /var/snap/microstack/common/etc/nginx/snap/nginx.conf

to include this line:

client_max_body_size 4096M;
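
For context, client_max_body_size is a standard nginx directive that can be set at the http, server, or location level; placed in the http block it looks roughly like this (a sketch of the placement only, not the snap's full config):

http {
    # allow image uploads of up to 4GB through the web UI
    client_max_body_size 4096M;
}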

This will allow uploads of up to 4GB via the web UI. Once I was able to upload my own images I started spinning up VMs and seeing what Microstack can do, but it wasn't long before I ran into another issue that was a little more work to fix.

The issue I ran into was that, out of the box, I was unable to migrate instances between hosts. Microstack's setup makes sifting through logs a bit difficult, since it just dumps them all into syslog, but eventually I found a few issues. Firstly, the two old Mac minis I was using as a lab environment had different generations of processor, causing an incompatibility between them. This will almost certainly be an issue in production as well unless you buy and replace all your servers at once. To resolve the issue, edit the following file on all nodes:

sudo vi /var/snap/microstack/common/etc/nova/nova.conf.d/hypervisor.conf

with the following:

[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[workarounds]
disable_rootwrap = True

[libvirt]
virt_type = kvm
#cpu_mode = host-passthrough
cpu_mode = custom
cpu_model = kvm64
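
To confirm that your hosts really do expose different CPU models (the mismatch that blocks migration), a quick check on each host with standard tools:

lscpu | grep 'Model name'
# or
grep -m1 'model name' /proc/cpuinfo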

Note: this file may have changed to /var/snap/microstack/common/etc/nova/nova.conf.d/nova-snap.conf as of snap 233; I'm still doing some digging on this. Additionally, it looks like libvirtd is now listening on the correct ports out of the box.

edit 6/16/2021:

For snap 233, to get HVM working instead of the default PV, edit the [libvirt] section of /var/snap/microstack/common/etc/nova/nova.conf.d/nova-snap.conf to look like the following on all hosts. Also make sure that hosts are able to resolve each other by hostname; you may need to edit /etc/hosts (see the example entries after the config below). Editing these files may require root, so try sudo su.

[libvirt]
virt_type = kvm
cpu_mode = custom
cpu_models = kvm64
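
As for the hostname resolution mentioned above, the /etc/hosts entries just need to map each node's IP address to its hostname; something like the following, with hypothetical addresses and names that you should replace with your own:

10.0.0.11  microstack-node1
10.0.0.12  microstack-node2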

The cpu_mode / cpu_model settings above make libvirt ignore the differences between the hosts' physical processors and allow that compatibility check to pass. With that solved it was on to the next issue: when attempting to migrate instances the CLI said the migration completed, and the GUI showed no errors, yet the VM would not actually move.

Sifting through logs showed that libvirtd was unable to connect to the other node (via TCP port 16509 by default) to initiate the migration. I was unable to find much in the way of a resolution for this, or even many others with the same issue. Some digging into libvirtd showed that there was a recent change to how it is placed into listen mode and how a port is selected. Relevant bug from Red Hat here, and the man page. It seems this change was missed in Microstack's default config: the socket files libvirt needs in order to open and listen on a port were not present in /etc/systemd/system. To resolve this, do the following:

copy these files:

/snap/microstack/222/usr/lib/systemd/system/libvirtd-tcp.socket
/snap/microstack/222/usr/lib/systemd/system/libvirtd.socket

to /etc/systemd/system/. Next you’ll need to make a few minor edits. In both libvirtd.socket and libvirtd-tcp.socket you’ll change two lines:

#Before=libvirtd.service
Before=snap.microstack.libvirtd.service

#Service=libvirtd.service
Service=snap.microstack.libvirtd.service

Remove or comment out the line SocketGroup=libvirt in libvirtd.socket. Next run systemctl daemon-reload, then systemctl enable libvirtd-tcp.socket, and lastly systemctl enable libvirtd.socket. Once that is done on all of your Microstack compute / controller nodes, reboot them and you should be able to live migrate instances between hosts (the full sequence is sketched below).
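
Putting those steps together, the whole sequence looks roughly like this (a sketch assuming snap revision 222, so adjust the path to your installed revision; the edits to the copied files still need to be made by hand as described above):

sudo cp /snap/microstack/222/usr/lib/systemd/system/libvirtd-tcp.socket /etc/systemd/system/
sudo cp /snap/microstack/222/usr/lib/systemd/system/libvirtd.socket /etc/systemd/system/
# edit both copied files (Before= / Service= lines, remove SocketGroup=libvirt)
sudo systemctl daemon-reload
sudo systemctl enable libvirtd.socket
sudo systemctl enable libvirtd-tcp.socket
sudo reboot
# after the reboot, verify libvirtd is listening on the migration port:
sudo ss -tlnp | grep 16509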

Update 4/23/2021

If you get the error:

Error: Could not find default role "_member_" in Keystone

when attempting to create a new project in Microstack, there is a simple fix. Just go to the Identity > Roles page, add a role named _member_, and try again; the error should be resolved.
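
The same role can also be created from the CLI (using the bundled client, or the openstack alias set up later in this post):

microstack.openstack role create _member_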

If your VMs are unable to access the external network, run this command to see if IP forwarding is enabled on the hypervisor:

sudo sysctl net.ipv4.ip_forward

If you get a result of 0, run the following command to enable forwarding:

sudo sysctl -w net.ipv4.ip_forward=1

To make this setting persist across reboots, edit:

/etc/sysctl.conf

and uncomment the line:

net.ipv4.ip_forward=1

Now networking will continue to work for VMs after a reboot.
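
If you'd rather apply the /etc/sysctl.conf change immediately without rebooting, you can also reload the sysctl settings:

sudo sysctl -p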

edit 4/24/2021:

(note, this needs to be reviewed and possibly updated)

Cinder doesn't work out of the box, and after much fiddling it turned out to be as simple as starting the Cinder services:

snap.microstack.cinder-backup.service
snap.microstack.cinder-scheduler.service
snap.microstack.cinder-uwsgi.service
snap.microstack.cinder-volume.service

and to have them start at boot, run the following for each Cinder service:

sudo systemctl enable snap.microstack.cinder-<replace>.service

and then reload the systemd daemon:

sudo systemctl daemon-reload
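
As a shortcut, a loop like this covers all four services at once (a sketch; the --now flag also starts each service immediately):

for svc in backup scheduler uwsgi volume; do
  sudo systemctl enable --now snap.microstack.cinder-${svc}.service
done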

Update: fixed this chunk on 6/10/2021

With Cinder now running we can create a disk to store our volumes. You'll need a blank disk attached as /dev/<drive name>, and then we'll run a couple of commands to complete the setup:

sudo pvcreate /dev/<drive name>
sudo vgcreate cinder-volumes /dev/<drive name>
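
To confirm the physical volume and the cinder-volumes volume group were created as expected, the standard LVM tools will show them:

sudo pvs
sudo vgs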

There may be additional steps needed to get volumes to attach to CentOS / RedHat instances; I'll add them when I run into the issue.

Update 5/20/2021:

Verified that getting instances directly connected to the external network is a bit of an issue, but the excellent workaround available here: https://connection.rnascimento.com/2021/03/08/openstack-single-node-microstack/ mostly worked to resolve it. The one issue I ran into is that the workaround script doesn't add a default gateway, which caused the server to be unreachable. After following the directions in the link above I modified the workaround script as follows:

#!/bin/bash
#
# Workaround to enable physical network access to MicroStack
#
# Adds the server physical ip address to br-ex. Replace the physicalcidr and gateway values to match your NIC's IP address and gateway

physicalcidr=<your IP/CIDR>
gateway=<your gateway IP>
# Add IP address to br-ex
ip address add $physicalcidr dev br-ex || :
ip route del default via $gateway || :
ip route add default via $gateway dev br-ex || :

Edit 6/16/2021:

Next you'll need to update netplan (assuming you're using Ubuntu) to the following:

network:
  ethernets:
    <physical interface>:
      dhcp4: false
    br-ex:
      addresses:
        - <server IP/subnet>
      gateway4: <gateway ip>
      nameservers:
        addresses: [<dns server IP(s)>]
  version: 2

Then run netplan apply, and after rebooting the server everything should work as expected.

Update 5/26/2021:

Been doing a fair amount of work on this; will be pushing some more updates soon.

Update 6/2/2021:

Collecting all the useful command aliases I've found and used:

sudo snap alias microstack.openstack openstack
sudo snap alias microstack.ovs-vsctl ovs-vsctl
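
With those aliases in place the standard client commands work without the microstack. prefix, for example:

openstack server list
ovs-vsctl show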

Update 6/11/2021:

Tested the steps for direct network connectivity of VMs listed above and verified they also work in a multi-node setup.

To Do:

get HTTPS working on horizon

One Reply to “Getting multi-node Microstack Instance Migration to work”

  1. Thanks for the great blog post here. As an FYI, the listening for the libvirt daemons was enabled in a recent update to Microstack (released into the beta channel on May 1).

    As for the tls enablement, there’s actually some work going on around tls enablement now – https://review.opendev.org/c/x/microstack/+/772901, so watch this space for TLS bits.
