{{Article
|Author=a-schaefers
}}
 
==Introduction==
 
This page documents the configuration of KVM/Qemu using Libvirt with the GUI front-end Virt-Manager.
 
An overview according to libvirt.org, the libvirt project:


* is a toolkit to manage virtualization platforms
* is accessible from C, Python, Perl, Java and more
* is licensed under open source licenses
* supports KVM, QEMU, Xen, Virtuozzo, VMWare ESX, LXC, BHyve and more
* targets Linux, FreeBSD, Windows and OS-X
* is used by many applications


==Check for KVM hardware support==
Verify the processor supports Intel VT-x or AMD-V technology and that the necessary virtualization features are enabled within the BIOS. The following command should reveal whether your hardware supports virtualization:
{{console|body=
$ ##i##LC_ALL=C lscpu {{!}} grep Virt
}}


==Kernel configuration==
The default Funtoo kernel, sys-kernel/debian-sources, has the needed KVM virtualization and virtual networking features enabled by default and will not require any reconfiguration. Non debian-sources users will need to verify the necessary kernel features are turned on in order to run KVM virtual machines and use virtual networking.
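
As a quick sanity check on a non-default kernel, the relevant options can be grepped from the running kernel's configuration. This sketch assumes the kernel exposes its configuration at /proc/config.gz (CONFIG_IKCONFIG_PROC); otherwise inspect the .config file in the kernel source tree:
{{console|body=
$ ##i## zgrep -E 'CONFIG_(KVM{{!}}VIRTIO{{!}}TUN{{!}}BRIDGE)' /proc/config.gz
}}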


==Install libvirt==
The default Funtoo kernel, "debian-sources", has the needed KVM virtualization and virtual networking features enabled by default and will not require any reconfiguration. Non debian-sources users will need to verify the necessary kernel features are turned on in order to run KVM virtual machines and use virtual networking. (a link to the funtoo KVM page should go here, but the funtoo KVM page is outdated currently Wed May 23 12:09:53 PDT 2018, see https://www.funtoo.org/Talk:KVM)
Optionally build libvirt with policykit support which will allow non-root users to authenticate as root in order to manage VMs and will also allow members of the libvirt group to manage VMs without using the root password.


{{console|body=
$ ##i##echo 'app-emulation/libvirt policykit' >> /etc/portage/package.use
}}


For desktop VM usage it is recommended to build app-emulation/qemu (which will be pulled in by emerging libvirt) with spice support. The Spice protocol provides an improved graphical and audio experience, clipboard sharing and directory sharing.


{{console|body=
$ ##i##echo 'app-emulation/qemu spice' >> /etc/portage/package.use
}}


Portage will likely ask for further USE flag changes in /etc/portage/package.use; if it does, add the requested changes so the package can be emerged.


{{console|body=
$ ##i##emerge -av app-emulation/libvirt
}}


After libvirt is finished compiling, you will have installed libvirt and pulled in all of its necessary dependencies, such as app-emulation/qemu, along with net-firewall/ebtables and net-dns/dnsmasq for the default NAT/DHCP networking.


==Enable the libvirtd service==


Start the libvirtd service.


{{console|body=
$ ##i## rc-service libvirtd start
}}


Add the libvirtd service to the openrc default runlevel.


{{console|body=
$ ##i## rc-update add libvirtd
}}


==Enable the "default" libvirt NAT==


=Enabling the "default" libvirt NAT=
Set the virsh network "default" to be autostarted by libvirtd.


Set the virsh net "default" (the default libvirt NAT) to be autostarted by libvirtd
{{console|body=
$ ##i## virsh net-autostart default
}}


Start the "default" virsh network.


Start the "default" virsh network
{{console|body=
$ ##i## virsh net-start default
}}


Restart the libvirtd service to ensure everything has taken effect.


{{console|body=
$ ##i## rc-service libvirtd restart
}}
 
Use ifconfig to verify the default NAT's network interface is up.


{{console|body=
$ ##i## ifconfig
...
virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:74:7a:ac  txqueuelen 1000  (Ethernet)
        RX packets 6  bytes 737 (737.0 B)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
...
}}


Notice the "default" libvirt NAT inserts its additional iptables rules automatically upon every libvirtd restart.


{{console|body=
$ ##i## iptables -S
...
-A FORWARD -d 192.168.122.0/24 -o virbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT
-A FORWARD -i virbr0 -o virbr0 -j ACCEPT
-A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -d 192.168.122.0/24 -o virbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT
-A FORWARD -i virbr0 -o virbr0 -j ACCEPT
-A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -d 192.168.122.0/24 -o virbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT
-A FORWARD -i virbr0 -o virbr0 -j ACCEPT
-A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
-A OUTPUT -o virbr0 -p udp -m udp --dport 68 -j ACCEPT
-A OUTPUT -o virbr0 -p udp -m udp --dport 68 -j ACCEPT
-A OUTPUT -o virbr0 -p udp -m udp --dport 68 -j ACCEPT
}}


==Most virsh commands require root privileges==


Libvirt VMs are managed using the virsh CLI and the GUI front-end Virt-Manager. Using these tools requires root privileges unless libvirt was built with policykit support.


As noted in the virsh(1) man page,
{{quote|"Most virsh commands require root privileges to run due to the communications channels used to talk to the hypervisor. Running as non root will return an error."}}


Running as root, the following are some example virsh commands:


{{console|body=
$ ##i## virsh list --all
}}


{{console|body=
$ ##i## virsh start foo
}}


{{console|body=
$ ##i## virsh destroy foo
}}


If libvirt was built with policykit support, non-root users can run the same example virsh commands by addressing qemu:///system and authenticating as root via policykit.


{{console|body=
$ ##i## virsh --connect qemu:///system list --all
}}


{{console|body=
$ ##i## virsh --connect qemu:///system start foo
}}


{{console|body=
$ ##i## virsh --connect qemu:///system destroy foo
}}


==Passwordless, non-root VM administration==


If libvirt was built with policykit support, add a user to the additional "libvirt" group in order to administer virtual machines without authenticating as root. Log out and back in for these changes to take effect.


{{console|body=$ ##i## gpasswd -a $USER libvirt}}


==Tell Qemu where the UEFI BIOS firmware is located==


Edit /etc/libvirt/qemu.conf and include the following contents:


<pre>
nvram = [
     "/usr/share/edk2-ovmf/OVMF_CODE.fd:/usr/share/edk2-ovmf/OVMF_VARS.fd"
]
</pre>
==Install Virt-Manager to create and configure VM templates==
Libvirt VM templates are configured using XML, and it is recommended to install Virt-Manager to ease the virtual machine creation and configuration process. It may still be necessary to return to the virsh commands later on, since some advanced changes to XML templates are not exposed in the Virt-Manager GUI. The virsh CLI is also essential for working remotely where Virt-Manager may not be available.
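
As an example of remote use, virsh can manage a remote libvirt host over SSH by using a qemu+ssh connection URI; the host name below is only a placeholder:
{{console|body=
$ ##i## virsh --connect qemu+ssh://root@host.example.com/system list --all
}}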
Make sure the gtk USE flag is enabled for Virt-Manager, as it is a graphical application, along with policykit, and then begin the emerge process:
{{console|body=
$ ##i## echo 'app-emulation/virt-manager gtk policykit' >> /etc/portage/package.use
}}
As before, Portage will likely ask for further USE flag changes in /etc/portage/package.use; if it does, add the requested changes so the package can be emerged.
{{console|body=
$ ##i## emerge -av app-emulation/virt-manager
}}


== Create a new Virtual Machine Template ==


[[File:01createvm.png|400px|Creating a new Virtual Machine Template]]
Make sure the "gtk" use flag is enabled for Virt-Mananger, as it is a graphical application, and also "polictykit" unless you plan to run it as root, and then begin the emerge process:


On first use of Virt-Manager, while browsing for an ISO image, create a dedicated ISO "pool", which is a directory on the filesystem where ISO files are stored. Select the "+" in order to "Add pool". After creating the ISO pool and moving ISO images into it, browse for the desired ISO image in the new pool.


[[File:05isopoolimage.png|750px|Choose Volume]]


== Customize configuration before VM install ==


During the final step of VM template creation, it is a good idea to check the box "Customize Configuration Before Install," as it allows you, on the next screen, to choose which BIOS firmware will be used. After a BIOS is chosen for a VM template, it cannot be changed without starting the creation process again from the beginning.


[[File:09customizebefore.png|400px|Customize Configuration Before Install]]


The default BIOS is SeaBIOS, which is a legacy BIOS. OVMF, a UEFI firmware, is also available. When deciding on a chipset, i440FX emulates an older chipset while Q35 emulates a newer one. It is recommended to use the default SeaBIOS unless UEFI firmware is needed (e.g. for PCI passthrough of GPUs or for working with UEFI and secure boot).


[[File:10choosebios.png|816px|Choose your BIOS from the customize configuration menu]]
Choose "Use ISO image:" and select "Browse"


It is possible to return to the VM customization menu at any time by selecting a VM from the main Virt-Manager window, choosing "Open", and then selecting "View > Details" from the VM window's menu.
Select the "+" to "Add pool"
Note: on first use, you should create a dedicated ISO "pool" which will be a directory on your filesystem where you can store all of your ISO files. After creating the ISO pool and moving your ISO images into the directory, and then you can browse for your ISO image to use for the virtual machine install.
Give the ISOPOOL a name and location


== Other considerations in the customization menu ==


=== The Q35 chipset will not work with IDE ===
If using the Q35 chipset, remove the IDE storage devices, then select "Add Hardware" and, in the "Storage" section, choose SATA to add new SATA storage devices (such as a SATA CDROM and a SATA disk) instead.


It is also important to decide between using the qcow2 or raw storage types. Qcow2 allows for snapshotting with rollbacks and is sparse provisioned. Raw does not have the features of qcow2, but may have better performance under some circumstances.
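
As an illustration only (the image path and size below are placeholders), a qcow2 image can also be created by hand with qemu-img and then supplied to Virt-Manager as custom storage:
{{console|body=
$ ##i## qemu-img create -f qcow2 /var/lib/libvirt/images/example.qcow2 40G
}}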


[[File:11newsatacdrom.png|525px|Adding a new SATA cdrom Device and loading it with a windows 10 ISO image]]


=== How to use SCSI Storage Devices with the Virtio SCSI Disk Controller ===
While adding virtual storage hardware, it is an excellent time to choose the disk "Device type" of "SCSI", which will use the paravirtualized VirtIO SCSI disk controller for improved I/O performance.


ZFS users can also use a sparse provisioned ZVOL which can be addressed in the "Select or create custom storage" textbox as follows:
{{console|body=
/dev/zvol/tank/VOLNAME
}}
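
Such a sparse ZVOL can be created beforehand, for example (pool and volume names are placeholders):
{{console|body=
$ ##i## zfs create -s -V 100G tank/VOLNAME
}}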


The virtio guest driver needed to take advantage of the VirtIO SCSI disk controller is included by default in Linux and FreeBSD kernels. Windows will be unable to see the VirtIO SCSI storage disk, and therefore unable to install, until you load the Red Hat VirtIO SCSI driver during the Windows install process. (Windows installers include a menu that can load drivers during installation.) The Red Hat VirtIO SCSI driver for Windows is available to [https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.149-2 download as an ISO image].


[[File:11redoingstorage.png|525px|Adding a new SCSI Storage Device and addressing it to a preconfigured ZVOL]]


=== About the Virt-Manager defaults ===
Virt-Manager configures the Spice server with QXL video, ich6 sound, USB redirectors and a shared clipboard by default. These are all generally good things to have for graphical VMs, but depending on the use case, some may opt to remove these devices or change them away from the defaults.


== Begin Installation ==
When ready to begin, choose "Begin Installation", Virt-Manager's equivalent of the "virt-install" command.
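
For reference, a roughly equivalent command-line creation using virt-install might look like the following sketch; the VM name, memory, disk size, ISO path and OS variant are illustrative only:
{{console|body=
$ ##i## virt-install --name win10 --memory 4096 --vcpus 2 --disk size=40 --cdrom /path/to/install.iso --os-variant win10
}}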


[[File:12begininstallation.png|681px|Windows 10 Virtual Machine]]


After successful installation of an OS, it is a good idea to install and enable the app-emulation/spice-vdagent service on graphical guests. If it is a Windows guest, a Spice executable can be downloaded from the official Spice [https://www.spice-space.org/download.html downloads page].
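
On a Funtoo/Gentoo guest this could look like the sketch below; the OpenRC service name spice-vdagentd is an assumption and may differ:
{{console|body=
$ ##i## emerge -av app-emulation/spice-vdagent
$ ##i## rc-update add spice-vdagentd default
$ ##i## rc-service spice-vdagentd start
}}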


== XML Template editing using "virsh edit" ==


=== Allow sparse provisioned guests to trim the filesystem and return unused space to the host ===
Some features can only be changed by editing the XML template directly. The example below shows an XML template that was originally generated with Virt-Manager and then modified using the "virsh edit" command: line 42 has the added parameter discard='unmap', which allows sparse provisioned guest VMs to trim the filesystem and return unused space to the host.
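
The attribute sits on the disk's driver element in the domain XML; a minimal illustration (disk type, image format and surrounding context are assumptions) looks like this:
<pre>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' discard='unmap'/>
  ...
</disk>
</pre>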


[[File:Virsh-edit-xml-example.png|798px|Editing XML templates manually with virsh edit]]


== Troubleshooting ==


Can't get beyond the BIOS screen? Check that the boot devices are enabled and in the correct order in the "Boot Options" section of the VM details, and that the CDROM actually has an ISO loaded. It is also possible that the ISO installation media only supports a legacy BIOS.
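
The configured boot order can also be checked from the command line by dumping the domain XML and looking at the os section; the VM name below is a placeholder:
{{console|body=
$ ##i## virsh dumpxml VMNAME {{!}} grep -A 5 '<os'
}}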


==Links ==
* https://www.linux-kvm.org
* https://www.qemu.org
* https://libvirt.org
* https://www.spice-space.org
* https://virt-manager.org
* https://libvirt.org/docs.html
* https://wiki.archlinux.org/index.php/Libvirt
* https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF


[[Category:Virtualization]]
[[Category:KVM]]
