Creating a virtual machine using a distribution’s Cloud base image – the example of CentOS

Peter Boy, Jan Kuparinen Version F34-F36 Last review: 2022-05-10

The objective here is to create a virtual machine based on any distribution, but without going through the distribution-specific installer. Instead, a cloud image is used. This reduces the installation process to copying a disk image, followed by an initial adaptation to the concrete runtime environment. The workload shrinks to a few minutes.

Cloud Images

A cloud image is a virtual machine disk image containing the operating system of a specific distribution. It is ready to run in a virtual runtime environment, customized to one of the cloud platforms such as OpenStack, Amazon EC2, Google GCE, etc. Most distributions also provide a generic image or base image, a runtime environment without additions for a specific cloud system. Such a generic image is (usually) suitable for a QEMU/KVM/libvirt runtime environment.

The procedure described here is therefore only suitable for distributions that provide such a generic image.

Depending on the distribution, cloud images may differ more or less from a default installation using the distribution’s installation media. There are also distributions, e.g. Ubuntu, that explicitly distinguish between cloud images and server images. Documentation about goals and differences is mostly sparse or non-existent. Debian at least makes an attempt to document the differences.

The system administrator must investigate whether the cloud image meets the requirements and expectations associated with using a VM of that distribution.

How it works

A cloud image is a kind of template: a runtime-agnostic, bootable, generic operating system directory tree. It is used directly as the bootable (virtual) system disk of the virtual machine to be created, although it is still practically unusable because it lacks any concrete runtime-specific configuration.

The challenge is to initially inject the specific runtime configuration into the image. This includes, first of all, a user account with a password and administrative authorization. In addition, network configuration, a console, or other devices may be required. Only then does a cloud image gain the ability to run in a specific environment and, above all, become usable. This is the prerequisite for performing more detailed and extensive configuration later on, if necessary.

In a standard installation this basic configuration is part of the distribution-specific installer. For cloud images, there are two widely used disk image modification tools:

  • cloud-init

  • ignition

Both are designed to fetch the configuration data from the cloud system at the first boot and apply it to the image. Fortunately, the developers had some foresight and provided a 'nocloud' procedure as well, so the system administrator of an autonomous server can supply a replacement. As you may guess, in a cloud-centric development effort a nocloud option has a tough time. It remains somewhat of a challenge.
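As a minimal sketch of the nocloud mechanism (the file contents, the password, and the hostname vm0.example.com are placeholder assumptions, not part of this guide's setup): at first boot, cloud-init searches for an attached volume labeled 'cidata' containing the files user-data and meta-data, and applies them.

[…]# cat > meta-data <<EOF
instance-id: vm0-001
local-hostname: vm0.example.com
EOF
[…]# cat > user-data <<EOF
#cloud-config
password: changeme01
chpasswd: { expire: true }
ssh_pwauth: true
EOF
[…]# genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data

Attaching seed.iso to the VM as a CD-ROM lets cloud-init pick up the configuration at first boot. The virt-install --cloud-init option used later in this guide generates such a nocloud data source automatically.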

A third option is

  • virt-customize

a generic tool to modify any non-running virtual machine, provided by guestfs-tools. Among other things, it can be used to install cloud images of older distributions such as CentOS 7 or earlier Ubuntu releases.
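As an illustration (a sketch; the image path, password, and hostname are placeholders), a minimal virt-customize invocation could set a root password and hostname directly in the image before the first boot:

[…]# virt-customize -a /var/lib/libvirt/images/vm0.qcow2 \
     --root-password password:changeme01 \
     --hostname vm0.example.com

Since virt-customize modifies the image offline, it works even for images whose cloud-init version is too old for the methods described below.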

What you get

In particular, you save time. The workload is significantly lower, and the installation is correspondingly faster.

But with a cloud base image you (usually) do not get an alternatively packaged but otherwise identical build of a (server) distribution. There are some subtle differences. Some are conceptual: for example, most cloud images do not install a firewall, because the cloud system usually provides this function. The usage concept for persistent storage also differs due to technical differences. And last but not least, cloud image developers may pursue different goals than the developers of a distribution’s server variant.

It is up to the system administrator to decide whether the functionality matches closely enough that the advantages outweigh the disadvantages and it makes sense to use a specific cloud image for a virtual machine.

How to proceed

First of all you need a working Fedora Server Edition with virtualization support added and the libvirtd daemon active. We assume an internal network 'default' with virbr0, DHCP, and DNS set up as well (see the section 'Adding Virtualization Support'). External network connectivity will be provided by macvlan (on the Ethernet interface), called macvtap in libvirt naming.
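A quick sanity check of these prerequisites, assuming the setup from the section mentioned above:

[…]$ sudo systemctl is-active libvirtd
active
[…]$ sudo virsh net-list
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes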

You have various options:

  • Using the Cockpit graphical interactive tool to perform a quick minimal VM setup

  • Using the virt-install CLI interactive tool to perform a quick minimal VM setup based on cloud-init

  • Using the virt-install CLI interactive tool to perform an elaborate VM setup based on cloud-init

  • Using the virt-customize and virt-install CLI tools for a fairly easy, interactive VM setup

  • Using any of the CLI tools to perform a script-based automated installation

We will only cover the first two variants here. They are so universal that they apply to virtually all distributions. The others depend heavily on the details of a particular distribution.

General preparations

Whichever of the presented installation methods is chosen, a cloud image always has to be downloaded and verified. In the case of CentOS, this involves the following steps.

  1. Check the CentOS project site for the latest release of the GenericCloud image: https://cloud.centos.org/centos/9-stream/x86_64/images/ At the time of this writing it was CentOS-Stream-GenericCloud-9-20220315.0.x86_64.qcow2

  2. In the Cockpit terminal window, fetch the CentOS 9-stream generic image file and store it in the directory /var/lib/libvirt/boot. By convention, this is the libvirt default location of images for installation. Check the integrity of the download.

    […]$ sudo su -
    […]# cd /var/lib/libvirt/boot
    […]# wget https://cloud.centos.org/centos/9-stream/x86_64/images/CentOS-Stream-GenericCloud-9-20220315.0.x86_64.qcow2
    […]# wget https://cloud.centos.org/centos/9-stream/x86_64/images/CentOS-Stream-GenericCloud-9-20220315.0.x86_64.qcow2.SHA256SUM
    […]# sha256sum --ignore-missing -c *.SHA256SUM

    You may want to gather some information about the image:

    […]# qemu-img  info CentOS-Stream-GenericCloud-9-20220315.0.x86_64.qcow2
    
    image: CentOS-Stream-GenericCloud-9-20220315.0.x86_64.qcow2
    file format: qcow2
    virtual size: 10 GiB (10737418240 bytes)
    disk size: 777 MiB
    cluster_size: 65536
    Format specific information:
        compat: 0.10
        compression type: zlib
        refcount bits: 16

    The virtual size, i.e. the amount of storage available to the VM at runtime, is 10 GiB. The actual file size is much smaller due to the dynamic nature of the qcow2 format: the file grows on demand while the VM is running.

Using Cockpit for a graphical interactive installation

You can start without any additional preparations.

Installation

Select Virtual Machines in the left navigation bar and click on Create VM. A new form opens up.

Cockpit "Create new virtual machine" form

When the form first opens, it looks a bit different from the screenshot above. Specify a name for the virtual machine to be created. It must be unique at least within the host server’s namespace, and ideally within the designated domain namespace. Select a connection type; use system for production deployments. Consult the guide Adding Virtualization Support for details about connection types.

Then select the installation type Cloud base image from the drop-down menu. The bottom part of the form changes and now resembles the screenshot above, except for the last 3 lines. Fill in Installation source and Operating system, and specify the disk size and amount of RAM.

Finally, tick Set cloud init parameters. The form changes again and reveals the last 3 lines. Enter a root password and, optionally, an additional user name and password.

You must enter a root password; this also activates the root account. Otherwise you cannot obtain administrative privileges. The additional user account does not help, as it is not granted administrative rights!

NON-US system administrators: cloud images usually configure a US keyboard initially, and you can adjust the keyboard layout only after the first login at the earliest. Limit the password to characters whose key positions match on both layouts (and change it later if you want).

Select Create to start the installation.

After some seconds the VM is up and running.

Post installation tasks

In the list of running virtual machines, click on the newly created machine.

Cockpit virtual machine details view

The created runtime environment is rather basic. With a cloud image installation, you cannot pause the creation process and fine-tune the runtime configuration, as you can with other installation options. There is a default disk configuration, e.g. a default CD-ROM and one disk as configured. And there is just one network connection, which uses libvirt’s default virtual network. You can verify this from the host; see the sketch below.
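A quick way to inspect the generated disk and network configuration from the host (vm1-el9 is the example VM name used in this guide; adjust as needed):

[…]$ sudo virsh domblklist vm1-el9     # lists the CD-ROM and the qcow2 disk
[…]$ sudo virsh domiflist vm1-el9      # lists the single interface on the default network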

Log in with the root account. If you look around, you will find some resemblance to a CentOS server configuration. Cockpit is installed but not activated. A firewall is not installed at all. The virtual disk contains one flat XFS file system. The active network configuration resides in /etc/sysconfig/network-scripts (due to cloud-init limitations). Residues of a NetworkManager network configuration exist in /etc/NetworkManager/system-connections.

So there is some post-installation work to do.

Adjust locale and non-US keyboard layout

Users of a non-US keyboard layout will probably want to customize the keyboard layout first of all.

  1. Check the current locale configuration

    […]# localectl
    System Locale: LANG=en_US.UTF-8
        VC Keymap: us
       X11 Layout: us
  2. List the available keyboard mappings, filtered by your short country code

    […]# localectl list-keymaps  | grep de-
    de-T3
    de-deadacute
    de-deadgraveacute
    de-deadtilde
    de-mac
    de-mac_nodeadkeys
    de-neo
    de-nodeadkeys
    ...
  3. Determine applicable key mapping and apply it

    […]# localectl set-keymap de-nodeadkeys
    ...

    The setting is immediately active.
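The system locale, the other item in this section’s heading, is adjusted with the same tool. A sketch, using de_DE as a placeholder:

[…]# localectl list-locales | grep de_
[…]# localectl set-locale LANG=de_DE.UTF-8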

Network configuration

As noted, there is only one internal network connection. If the VM is only to provide internal services, this is perfect; skip this section.

However, virtual machines mostly provide public services and therefore require access to the public network as well as reachability from it. The key requirement is to isolate this traffic from the host server for security reasons.

One option is to bind a virtual bridge to the physical interface and attach the VM to that bridge. The other option is macvlan, where the VM adds a virtual interface on top of the physical host interface. It gets its own MAC address and its own IP (and alias IPs if needed). The libvirt toolkit refers to this as direct attachment.

The latter is now the recommended approach. It acts similarly to a bridge, but with less system load. The disadvantage is that direct communication between the host and its VMs is not possible, while communication between the VMs is. Host and VMs can only communicate via the internal, protected network. For administrators of remote, not directly accessible servers, the additional big advantage is that after the basic installation and initial network configuration, there is no need to touch the precious network connection again.
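For reference, after the steps below the direct attachment shows up in the domain XML roughly as follows (an illustrative sketch; enp0s25 stands for the host’s external interface, as elsewhere in this guide, and the MAC address is a placeholder):

[…]$ sudo virsh dumpxml vm1-el9 | grep -A 3 "interface type='direct'"
    <interface type='direct'>
      <mac address='52:54:00:xx:xx:xx'/>
      <source dev='enp0s25' mode='bridge'/>
      <model type='virtio'/>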

  1. While logged in, use the terminal window to set the static hostname. This ensures a correct DNS setup in a DHCP environment.

    […]$ hostnamectl set-hostname vm1-el9.example.com
    […]$ hostnamectl
  2. If you expanded the terminal window, click on the VM name in the breadcrumb to get back to the default view. Select Shut down to stop the virtual machine.

  3. An administrator who sticks to the convention that the first network adapter in the device list establishes the external connection will now edit and rearrange the existing network configuration. Select Edit to access the configuration form.

    Cockpit "Virtual ethernet configuration" form

    Replace the interface type with Direct attachment and select the external physical interface of the host in the Source field. Leave the model and MAC address unchanged.

  4. If you also want an internal network (and in most cases you definitely should), select Add network interface. A nearly identical form pops up. Select Virtual network as Interface type if it is not already preselected, and default as Source. Again, leave the model (Linux, perf) and MAC address (Generate automatically) unchanged. Click Create to finish the network configuration.

  5. Start the virtual machine again.

Check network connections

  1. Check internal connections

    From a terminal window on the host system you should be able to ping your VM using the internal virtual network.

    […]$ ping  vm1-el9

    If the name service setup on the host is correct, the short name should work. Otherwise, try the internal FQDN (i.e. something like vm1-el9.example.lan). If name resolution doesn’t work, switch to the VM’s Cockpit terminal window, use ip a to determine the internal IP, and ping that address instead. Alternatively, look up the VM’s DHCP lease from the host; see the sketch after this list.

    If pinging the IP address works, fix the name resolution. Otherwise, check the network configuration again.

  2. Check external connections

    From a machine on your network, try to ping the virtual machine

    […]$ ping  vm1-el9.example.com

    In case of issues, proceed analogously to the internal connection.
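The host-side lease lookup mentioned in step 1 is a single command; once the static hostname is set, the lease table also shows the VM’s name:

[…]$ sudo virsh net-dhcp-leases default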

Security Enhancement

A Cockpit installation cannot implement the widely used security concept of locking the root account; that requires additional administrative modifications later on. In the CentOS default configuration, root can log in via ssh with a password. Nevertheless, a certain protection of the root account exists.

  1. If there is already another user and this user is to be granted administrative rights, the account must be assigned to the wheel group.

    […]# usermod -aG wheel <USERNAME>

    Test if login and sudo work!

  2. If you decide to lock the root account, log in as your administrative user and execute

    […]$ sudo passwd -l root

    Log off and try to log in as root (e.g. using the host’s Cockpit instance). The system should respond with 'Login incorrect'.

  3. If you decide to use the root account and chose a simple password during installation, you should now set a long and secure password. Log in as root and execute

    […]# passwd

    If root also needs access via ssh, a key-based login must be set up. Follow step 5 of the post-installation guide, or see the sketch after this list.

  4. Install and activate the firewall

    […]# dnf install firewalld
    […]# systemctl  enable  firewalld   --now
    […]# firewall-cmd  --list-all
  5. If you want to use Cockpit you have to enable it

    […]# systemctl  enable  cockpit.socket   --now

    Cockpit should start up as soon as you connect with your browser.

  6. Finally, if you want the virtual machine to start automatically at system startup, check the corresponding box in the Cockpit VM overview. Alternatively execute

    […]# virsh autostart vm1-el9
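For the key-based root login mentioned in step 3, a minimal sketch executed from your workstation (host name as in our example; this requires that root password login is still enabled):

[…]$ ssh-copy-id root@vm1-el9.example.com
[…]$ ssh root@vm1-el9.example.com

Afterwards you can set PermitRootLogin prohibit-password in /etc/ssh/sshd_config on the VM and reload sshd, so that root can only log in with a key.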

Using virt-install for a CLI interactive minimal effort installation

This type of installation uses the --cloud-init parameter of virt-install without any values or subparameters. This causes a root password to be generated and displayed shortly after the start of the installation, enabling a one-time login. You have to note or copy it, of course.

Apart from that, the configuration is limited to configuring and starting DHCP-supported Ethernet interfaces. Other interfaces are defined in principle but not dealt with further.

Additional preparations

Essentially, you need to copy the downloaded image file to the libvirt disk images pool yourself and name it as needed.

  1. Copy the disk image from the installation media pool to the disk images pool and choose the intended VM name as target.

    […]$ sudo su -
    […]# cd /var/lib/libvirt/boot
    […]# cp CentOS-Stream-GenericCloud-9-20220315.0.x86_64.qcow2  ../images/vm2-el9.qcow2
  2. Inspect the disk size and optionally adjust it. The default is about 10 GiB.

    […]# qemu-img  info   /var/lib/libvirt/images/vm2-el9.qcow2
    […]# qemu-img resize  /var/lib/libvirt/images/vm2-el9.qcow2  +10G

    The example above adds 10 GiB, resulting in a total size of about 20 GiB.

    You can resize the virtual disk later, too, so there is no reason to plan too generously now. Due to the qcow2 format, resizing does not affect the current image file size; it is adjusted dynamically as needed, up to the specified maximum.
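For a later resize of a running VM, libvirt provides virsh blockresize (a sketch; vda is the usual virtio disk target):

[…]# virsh blockresize vm2-el9 vda 20G

For a stopped VM, use qemu-img resize as shown above. In both cases the partition and file system still have to be grown inside the guest (see the post-installation tasks).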

Installation

Use a terminal window to execute

[…]# virt-install  --name vm2-el9 \
     --memory 2048  --cpu host --vcpus 2 --graphics none \
     --os-variant centos-stream9 \
     --import \
     --disk /var/lib/libvirt/images/vm2-el9.qcow2,format=qcow2,bus=virtio \
     --network type=direct,source=enp0s25,source_mode=bridge,model=virtio \
     --network bridge=virbr0,model=virtio \
     --cloud-init

The parameters are quite descriptive. You will find a more detailed explanation in the appendix.

You see a lot of output:

WARNING  Defaulting to --cloud-init root-password-generate=yes,disable=yes

Starting install...
Password for first root login is: YFlTBHprYYDh5gZ7
Creating domain...                                                                                            |    0 B  00:00:00

Running text console command: virsh --connect qemu:///system console vm2-el9
Connected to domain 'vm2-el9'
Escape character is ^] (Ctrl + ])

[    0.000000] Linux version 5.14.0-71.el9.x86_64 (mockbuild@x86-05.....
...
...
...
[  OK  ] Finished Execute cloud user/final scripts.
[  OK  ] Reached target Cloud-init target.
[  OK  ] Created slice Slice ...
[  OK  ] Started dbus-:1.2-org...

CentOS Stream 9
Kernel 5.14.0-71.el9.x86_64 on an x86_64

Activate the web console with: systemctl enable --now cockpit.socket

localhost login:

The installation ends with an active open terminal into the created VM!

Log in to the root account using the password displayed early in the installation process (YFlTBHprYYDh5gZ7 in this example). This password is single-use and must be replaced during the first login.

NON-US system administrators: cloud images usually configure a US keyboard first! The easiest way is to copy & paste the password. Limit the new password to characters whose key positions match on both layouts, choose a rather simple one to minimize the chance of typos, and change it to a secure password later, after the keyboard configuration.

Post-Installation Tasks

As usual, the "law of conservation of energy" applies in computer science as well: the lower the installation effort, the greater the post-installation requirements.

  1. Non-US system administrators should adjust the keyboard layout first.

    1. Check the current locale configuration

      […]# localectl
      System Locale: LANG=en_US.UTF-8
          VC Keymap: us
         X11 Layout: us
    2. List the available keyboard mappings, filtered by your short country code. Replace "de-" with your country code, i.e. "<COUNTRYCODE>-"

      […]# localectl list-keymaps  | grep de-
      de-T3
      de-deadacute
      de-deadgraveacute
      de-deadtilde
      de-mac
      de-mac_nodeadkeys
      de-neo
      de-nodeadkeys
      ...
    3. Determine applicable key mapping and apply it

      […]# localectl set-keymap de-nodeadkeys
      ...

      The setting is immediately active.

  2. Check network connection

    # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 ....
        ...
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> ...
        link/ether ....
        altname enp1s0
        inet uuu.vvv.www.xxx/zz ......
        ...
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> ...
        link/ether ....
        altname enp2s0
        inet uuu.vvv.www.xxx/zz ...
        ...

    If DHCP was available for all interfaces, a complete interface configuration is displayed.

    Check for connectivity:

    […]# ping guardian.co.uk
    […]# ping <YOUR_EXTERNAL_DEFAULT_GATEWAY_ADDRESS>
    […]# ping 192.168.122.1  # your host system internal virtual network address

    The VM can connect to internal and external destinations. Name resolution for the VM itself cannot work yet because the static hostname is not set. The external host address does not respond due to the macvlan technology, and internal name resolution is not working yet.

    Check the interface devices.

    […]# nmcli dev
    DEVICE  TYPE      STATE      CONNECTION
    eth0    ethernet  connected  System eth0
    eth1    ethernet  connected  Wired connection 1
    lo      loopback  unmanaged  --

    The active (authoritative) configuration of the external network connection is /etc/sysconfig/network-scripts/ifcfg-eth0. The file /etc/NetworkManager/system-connections/ens3.nmconnection is a leftover from the pre-cloud-init default system configuration. Currently, cloud-init sticks to the deprecated filesystem location. There is no persistent configuration file for the internal interface to libvirt’s virbr0; that configuration is auto-generated on every boot.

    You probably want a persistent configuration file, so you can assign a firewall zone to the connection. Just rename the connection.

    […]# nmcli con mod 'Wired connection 1' connection.id eth1

    The renaming triggers NetworkManager to create a file /etc/sysconfig/network-scripts/ifcfg-eth1 with the current configuration.

    If DHCP is not available for the external interface, the configuration file is just a stub. Use the NetworkManager nmcli utility to define a static configuration; see the sketch after this list.

  3. In case the virtual disk size has been changed, the partition size must be adjusted.

    […]# cfdisk  /dev/vda

    The only partition should already have the adjusted size. Otherwise select Resize and then Write.

    Next, resize the file system, if not already done. First check the size of the file system, e.g. using df. The CentOS cloud image contains an XFS file system, so grow it with xfs_growfs (resize2fs applies only to ext file systems).

    […]# df -h
    […]# xfs_growfs /
  4. Finally, let’s set the hostname

    […]# hostnamectl set-hostname  vm2-el9.example.com
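The static configuration mentioned in step 2 could look like this sketch (the 192.0.2.x addresses are placeholders for your actual external addressing):

[…]# nmcli con mod 'System eth0' ipv4.method manual \
     ipv4.addresses 192.0.2.10/24 \
     ipv4.gateway 192.0.2.1 \
     ipv4.dns 192.0.2.1
[…]# nmcli con up 'System eth0'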

Exit and close the console by typing Ctrl+].

You may reboot the VM and then check /var/lib/libvirt/dnsmasq/virbr0.status. It now lists a hostname, and internal name resolution works.
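A minimal check on the host (the addresses are placeholders, and the exact fields may vary with the libvirt version):

[…]# cat /var/lib/libvirt/dnsmasq/virbr0.status
[
  {
    "ip-address": "192.168.122.50",
    "mac-address": "52:54:00:xx:xx:xx",
    "hostname": "vm2-el9"
  }
]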

If your external DHCP server provides dynamic DNS as well, you should be able to connect to your VM from the public network:

[…]# ping VM_NAME.example.com

The last action is to enable autostart of the VM.

[…]# virsh  autostart  VM_NAME

Everything now works nearly out of the box. You would now start configuring the VM in detail according to its intended use, just as you would after a standard installation.

In the end it takes only about 5 minutes to set up a fully functional system with minimal effort. This is ideal for quickly creating a virtual machine for an ad-hoc solution or as an interim solution for a test.

Conclusion

Any of the described installation methods provides a quite comfortable installation and configuration process that would otherwise consume a lot of time. It is this efficiency that makes the use of cloud images so attractive.

The use of cloud base images to create a distribution’s virtual machine installation comes with some inevitable inconveniences and shortcomings due to the different nature of the techniques. So, in a production deployment you may be better off with dedicated virtual machine images, if available. For experimentation and testing, however, it is viable and suitable.

Appendix

Short explanation of the virt-install parameters used

--name VM_NAME

Unique name of the VM to install, as shown e.g. in the VM list

--memory 2048

Amount of memory in MiB to allocate; adjust as appropriate

--cpu host

Use the same CPU model as the host

--vcpus 2

Number of virtual CPUs for the VM; adjust as appropriate

--os-variant centos-stream9

Target operating system. Adjust distribution and version as needed

--import

Fixed; skips the installation procedure and boots from the (virtual) disk specified by the first --disk parameter.

--graphics none

Fixed; redirects the VM console to the host terminal window for immediate access.

--disk /var/lib/libvirt/images/VM_NAME.qcow2,format=qcow2,bus=virtio

Disk image file; adjust VM_NAME

--network type=direct,source=enpXsY,source_mode=bridge,model=virtio

Specifies the external network (macvlan) first; it will get the name eth0 as usual. Adjust the interface name as appropriate.

--network bridge=virbr0,model=virtio

Specifies the internal network (libvirt-generated bridge) second. It will get the name eth1 as usual.

--cloud-init

Handles the nocloud configuration using defaults (generate a one-time root password, disable cloud-init after the first boot)
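The bare flag is equivalent to the defaults reported in the installation output above. You can also pass the suboptions explicitly, and recent virt-install versions additionally accept a full cloud-config file (a sketch; the user-data path is a placeholder):

--cloud-init root-password-generate=yes,disable=yes
--cloud-init user-data=/root/my-user-data.yml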