Monday, September 2, 2024

how to install Xen on Debian 12 “bookworm”

A brief guide to installing Xen with a single “PVH” domU on Debian using contemporary hardware.

motivation

There are a few “official” instructions for installing Xen, and many unofficial ones (like this one). The “Xen Project Beginners Guide” is a useful introduction to Xen, and even uses Debian as its host OS. But for someone who is already familiar with Xen and simply wants a short set of current instructions, it's too verbose. It even includes basics on installing the Debian OS prior to installing Xen, which I take as a given here. Further, it doesn't address the optimized “PVH” configuration for domUs, which is available on modern hardware. Much of the Xen documentation seems to have last been touched in 2018, when AWS was still using Xen.

The Debian wiki also has a series of pages on “Xen on Debian”, but the writing appears unfocused, speculating about all sorts of alternative approaches one could take. Some useful information can be gleaned from it, but it doesn't have the brevity that I'm looking for here.

The Xen wiki's “Xen Common Problems” page is a good source for various factoids, but not a set of cookbook instructions. Various unofficial instructions can be found on the Web, but I found them to be incomplete for my purposes.

preparation

Xen 4.17 is the current version in Debian 12.6.0; Xen 4.19 was recently released, so the Debian version is probably sufficiently recent for most needs.

VT-d virtualization is required for Intel processors. (I don't address AMD or other chip virtualization standards, but the corresponding technology is required in that case.) In /boot/config-*, one can confirm that CONFIG_INTEL_IOMMU=y for the kernel, and “dmesg | grep DMAR” (in the non-Xen kernel[1]) returns lines like:

ACPI: DMAR 0x00000000… 000050 (v02 INTEL  EDK2     00000002     01000013)
ACPI: Reserving DMAR table memory at [mem 0x…-0x…]
DMAR: Host address width 39
DMAR: DRHD base: 0x000000fed91000 flags: 0x1
…
DMAR-IR: Enabled IRQ remapping in xapic mode
…
DMAR: Intel(R) Virtualization Technology for Directed I/O

so VT-d seems to be working. If VT-d is not available, you may need to enable it in your BIOS settings.
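
For reference, both checks as commands (run these under the non-Xen kernel, before Xen is installed):

grep CONFIG_INTEL_IOMMU /boot/config-$(uname -r)
dmesg | grep DMAR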

The Xen wiki's “Linux PVH” page has information on PVH mode in Linux, but it hasn’t been updated since 2018 and references Xen 4.11 at the latest. All of the kernel config settings mentioned there are present in the installed kernel, except that CONFIG_PARAVIRT_GUEST doesn’t exist; presumably it was removed (or renamed) in a later kernel.
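
One quick way to spot-check the remaining options against the installed kernel (CONFIG_XEN_PVH is the one specific to booting as a PVH guest):

grep -E 'CONFIG_XEN_PVH|CONFIG_XEN=|CONFIG_PARAVIRT=' /boot/config-$(uname -r)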

Xen installation on Debian

start Xen

apt install xen-system-amd64 xen-tools

installs the Xen Debian packages and the tools for creating domUs. (If you see references that say to install the xen-hypervisor virtual package, know that xen-system-amd64 depends on the latest xen-hypervisor-*-amd64. You may need a different architecture than -amd64.) Rebooting will boot into Xen: the Xen entry will have been added to GRUB as the default.
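
After rebooting into Xen, a quick way to confirm that the hypervisor is actually running (the command fails under the plain kernel):

xl info | grep xen_version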

configure dom0

configure GRUB’s Xen configuration

The Xen command line options are described in /etc/default/grub.d/xen.cfg. The complete documentation is at https://xenbits.xen.org/docs/unstable/misc/xen-command-line.html.

In /etc/default/grub.d/xen.cfg, set:

XEN_OVERRIDE_GRUB_DEFAULT=1

so that GRUB doesn’t whine about Xen being prioritized.

By default, dom0 is given access to all vCPUs (we'll assume 32 on this hypothetical hardware) and all memory (64GB, here). It doesn’t need that much. Furthermore, as domUs are started, the dom0 memory is ballooned down in size, so that the dom0 Linux kernel no longer has as much memory as it thought it had at start-up. So the first step is to scale this back: dom0_mem=4G,max:4G. The fixed size will avoid ballooning at all. Likewise, for the vCPUs: dom0_max_vcpus=1-4.

Since we’re not running unsafe software in dom0, we can turn XPTI (Xen’s Meltdown page-table-isolation mitigation) off there. So in /etc/default/grub.d/xen.cfg, set:

GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=4G,max:4G dom0_max_vcpus=1-4 xpti=dom0=false,domu=true"

(There’s no need to change the autoballoon setting in /etc/xen/xl.conf, since "auto" does what’s needed.)

Then run update-grub and reboot.
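
Once rebooted, xl list should reflect the reduced allocation:

xl list    # Domain-0 should show 4096 MB of memory and 4 vCPUs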

configure Xen networking

create a xenbr0 bridge

Xen domUs require a bridge in order to attach to the dom0’s network interface. (There are other options, but bridging is the most common.) Following the Xen network configuration wiki page, in /etc/network/interfaces, change:

allow-hotplug eno1
iface eno1 inet static
…

to:

iface eno1 inet manual

auto xenbr0
iface xenbr0 inet static
	bridge_ports eno1
	bridge_waitport 0
	bridge_fd 0
	… # the rest of the original eno1 configuration

(Obviously this is assuming that your primary interface is named eno1, which is typical for an onboard Ethernet NIC.) Run ifdown eno1 before saving this change, and ifup xenbr0 after.

xenbr0 is the default bridge name for the XL networking tools, which is what we’ll use.
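
To confirm that the bridge came up as expected (standard iproute2 commands):

ip -br link show master xenbr0    # eno1 should be listed as a bridge member
ip -br addr show xenbr0           # the static address now lives on the bridge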

about Netfilter bridge filtering

You may see a message in the kernel logs:

bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.

This is of no concern. It gets printed when the bridge module gets loaded, in the process of bringing up xenbr0 for the first time since boot.

It used to be that bridged packets were sent through Netfilter. This was considered confusing, since it required setting up a Netfilter FORWARD rule to accept those packets, when most people expected a bridge to forward them without any such rule. The solution was to move that behavior into a separate module (br_netfilter). This message is a remnant reminder of the change, for those who were depending on the old behavior. See the kernel patch; it has been this way since Linux 3.18.

configure the dom0 restart behavior

When dom0 is shut down, by default Xen will save the images of all running domUs, in order to restore them on reboot. This takes some time, plus disk space for the images; most likely you'll want to simply shut down the domUs instead. To configure that, in /etc/default/xendomains, set:

XENDOMAINS_RESTORE=false

and comment out XENDOMAINS_SAVE.
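
The relevant part of /etc/default/xendomains then looks roughly like this (the exact path in the commented-out line may differ on your system):

XENDOMAINS_RESTORE=false
#XENDOMAINS_SAVE=/var/lib/xen/save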

create a PVH domU

create the domU configuration

There isn’t any obvious documentation on using xen-create-image, the usual tool, to create a specifically PVH Linux guest, so here is a summary.

Edit /etc/xen-tools/xen-tools.conf, which is used to set variables for the Perl scripts that xen-tools uses, to set:

lvm = "server-disk"
pygrub = 1

This assumes that you're using LVM to provide root file systems to the domUs, and that the VG for them is named as shown. Then run this (I recommend capturing the session with script(1), though a log file is created by default):

xen-create-image --hostname=my-domu.internal --verbose \
  --ip=192.168.1.2 --gateway=192.168.1.1 --netmask=255.255.255.0 \
  --memory=60G --maxmem=60G --size=100G --swap=60G

or whatever settings you choose; see the xen-create-image man page for explanations.

The --memory setting can be tuned later to the maximum available memory, if you're not adding any other domU. It's only used to set the memory setting in the /etc/xen/*.cfg file, and can be edited there. Likewise for maxmem. Setting them equal provides the benefit that no memory ballooning will be performed by Xen on the domU, so there will be no surprises while the domU is running and unable to obtain more memory. The available memory for domUs can be found in the free_memory field in the xl info output in the dom0; it may not be precisely what you can use, since there may be some unaccounted-for overhead in starting the domU.
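
For example:

xl info | grep free_memory    # memory (in MB) still available for new domUs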

The --size and --swap settings for the root and swap LV partitions can be expanded later if needed, using the LVM tools in the usual way.

Adjust the /etc/xen/*.cfg file by adding:

type     = "pvh"
vcpus    = 4
maxvcpus = 31
xen_platform_pci = 1
cpu_weight = 128

The maxvcpus setting here assumes that 32 vCPUs are available; it leaves one for the exclusive use of the dom0. Four vCPUs should be enough to start the domU quickly. The cpu_weight deprioritizes the domU's CPU usage vs. the dom0's. xl sched-credit2 shows the current weights.
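
Once the domU is running, you can confirm the weights (the default weight is 256, so dom0 gets roughly twice the domU's share under contention):

xl sched-credit2    # lists each domain with its scheduler Weight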

To have the domU automatically start when dom0 starts:

mkdir -p /etc/xen/auto
cd /etc/xen/auto
ln -s ../my-domu.internal.cfg .

(You may see instructions that tell you to make /etc/xen/auto a symlink to the whole /etc/xen directory. The downside is that this will produce warnings as Xen tries to parse the non-configuration example files in /etc/xen.)

fix the domU networking configuration

Due to Debian bug #1060394, the generated /etc/network/interfaces in the domU refers to eth0, but the network interface in a PVH domU is actually named enX0. You can mount the domU's LV disk device temporarily in order to correct this.
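
A sketch of the fix, assuming the LV name that xen-create-image chose for the configuration above (check lvs for the actual name); do this while the domU is not running:

mount /dev/server-disk/my-domu.internal-disk /mnt
sed -i 's/\beth0\b/enX0/g' /mnt/etc/network/interfaces
umount /mnt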

add software to the domU

Except for the sudo package needed for the next step, the rest of this is optional, but it's what I typically do. This requires starting the domU (with xl create) and logging in to it (with xl console, using the root password given at the end of the xen-create-image output).
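
Concretely, with the names used above:

xl create /etc/xen/my-domu.internal.cfg
xl console my-domu.internal    # exit the console with Ctrl-]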

Edit /etc/apt/sources.list to drop the deb-src lines and add the non-free-firmware component. While you're in there, fix the Debian repositories to be the ones that you want; xen-create-image has a bug in that it doesn't copy the repository configuration from dom0.
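
As a rough sketch, assuming the standard Debian mirrors (substitute your preferred mirror), sources.list might look like this at this stage:

deb http://deb.debian.org/debian bookworm main non-free-firmware
deb http://deb.debian.org/debian bookworm-updates main non-free-firmware
deb http://security.debian.org/debian-security bookworm-security main non-free-firmware

Then: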

apt update
apt upgrade
apt install ca-certificates

Now you can edit /etc/apt/sources.list to use https. Then:

apt update
apt install sudo

plus any other tools you find useful (aptitude, curl, emacs, gpg, lsof, man-db, net-tools, unattended-upgrades,…).

add a normal user to the domU

Connecting to the domU can only be done, initially, with xl console, which requires logging in as root, since that’s the only user that exists so far. (The generated root password will have been printed at the end of the xen-create-image output.) xl console appears to have limitations in its terminal emulation, so connecting via SSH is better. An SSH server is already installed, but sshd by default prohibits password login as root, and in any case it's best not to log in as root, even via the console. So create a normal user that has full root privileges via sudo [2]:

adduser john
adduser john sudo
passwd --lock root

That's all you need to get started with your new domU!


1 The “non-Xen kernel” is the Linux kernel booted directly by a plain Debian installation, that is, without the Xen hypervisor underneath it. When the kernel runs as dom0 under the hypervisor, Xen hides certain hardware information from it. Most of that information can still be found, while the hypervisor is running, by using “xl dmesg”.

2 Note that once you lock root's password, if you log out of the console without having created the normal user with admin privileges, you will be locked out of the domU. The way to get access again in that case is to shut down the domU, mount its file system, and edit /etc/shadow to remove the “!” at the start of root's password.
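
A sketch of that recovery, reusing the LV name from above (adjust to your setup):

xl shutdown my-domu.internal
mount /dev/server-disk/my-domu.internal-disk /mnt
sed -i 's/^root:!/root:/' /mnt/etc/shadow
umount /mnt
xl create /etc/xen/my-domu.internal.cfg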
