Enable xenconsoled Serial Logging


Be sure to take into consideration the potential security implications of storing every detail from every management session conducted over serial. By the nature of the interface, most will be root/sudoer logins.

On a fresh Red Hat/CentOS dom0 you will find /var/log/xen/console/ barren, or perhaps not even extant (mkdir -p it into being). On older installations that come with a xend service, you can enable serial console recording globally (all domUs at once) by setting XENCONSOLED_LOG_GUESTS=yes in /etc/sysconfig/xend and restarting the xend service.


On newer systemd installations, where xenconsoled is run out of xencommons, I had to run down the environment variables:

locate xenconsoled.service
/etc/systemd/system/multi-user.target.wants/xenconsoled.service
/usr/lib/systemd/system/xenconsoled.service

cat /usr/lib/systemd/system/xenconsoled.service
[Unit]
Description=Xenconsoled - handles logging from guest consoles and hypervisor
Requires=proc-xen.mount xenstored.service
After=proc-xen.mount xenstored.service
ConditionPathExists=/proc/xen/capabilities

[Service]
Type=simple
Environment=XENCONSOLED_ARGS=
Environment=XENCONSOLED_TRACE=none
Environment=XENCONSOLED_LOG_DIR=/var/log/xen/console
EnvironmentFile=/etc/sysconfig/xencommons
ExecStartPre=/bin/grep -q control_d /proc/xen/capabilities
ExecStartPre=/bin/mkdir -p ${XENCONSOLED_LOG_DIR}
ExecStart=/usr/sbin/xenconsoled -i --log=${XENCONSOLED_TRACE} --log-dir=${XENCONSOLED_LOG_DIR} $XENCONSOLED_ARGS

[Install]
WantedBy=multi-user.target

xenconsoled --help
Usage: xenconsoled [-h] [-V] [-v] [-i] [--log=none|guest|hv|all] [--log-dir=DIR] [--pid-file=PATH] [-t, --timestamp=none|guest|hv|all] [-o, --overflow-data=discard|keep]
Documentation regarding xenconsoled's options is scarce to say the least. At the time of publication I'm apparently the first person on the internet to have tried figuring out what the important-sounding --overflow-data does. Fortunately xenconsoled plays nicely with being killed live on the command line and having its flags twiddled - it will even auto-daemonize. Avoid spawning more than one instance at a time, though doing so produces no catastrophically ill effects.
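By way of illustration - assuming a root shell on a systemd dom0 laid out like the unit file above, and a domU named mydomu - a live reconfiguration session might look something like:

```shell
# Take down the supervised instance first so systemd isn't fighting you
systemctl stop xenconsoled

# Relaunch by hand with full logging and timestamps; it daemonizes itself
/usr/sbin/xenconsoled --log=all --timestamp=all --log-dir=/var/log/xen/console

# Watch a guest's transcript accumulate (file name assumes a domU called "mydomu")
tail -f /var/log/xen/console/guest-mydomu.log
```

Once satisfied, kill the hand-rolled instance and return control to systemd so your settings survive a reboot via the sysconfig file shown below.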

Once you have settled on a configuration, it is made persistent by dropping environment variables into the very top of /etc/sysconfig/xencommons:

## Path: System/Virtualization
## Type: string
## Default: "none"
#
# Log xenconsoled messages (cf xl dmesg)
#XENCONSOLED_TRACE=[none|guest|hv|all]
XENCONSOLED_TRACE=all
XENCONSOLED_ARGS="--timestamp=all"
#XENCONSOLED_LOG_DIR=/var/log/xen/console

On Debian and derivatives like Ubuntu the same is accomplished by setting:
XENCONSOLED_ARGS="--timestamp=all --log=all --log-dir=/var/log/xen/console/"
in /etc/default/xend.

Another option is the direct route:
nohup xl console domain 2>&1 | tee /var/log/xen/console/guest-domain.log &

Note that the actual client interface, xenconsole, can be called directly, but it is typically not in your PATH by default and must be invoked from its location. It accepts a domain ID number as opposed to a domain name, and allows one to specify whether the interface to which you are connecting is provided by HVM/QEMU (serial) or PV (pv) using the --type flag:

/usr/lib64/xen/bin/xenconsole --help
Usage: /usr/lib64/xen/bin/xenconsole [OPTION] DOMID
Attaches to a virtual domain console
  -h, --help               display this help and exit
  -n, --num N              use console number N
  --type TYPE              console type. must be 'pv' or 'serial'
  --start-notify-fd N      file descriptor used to notify parent

xl list | grep domain
domain                           5  8192     5     -b----     119.1

/usr/lib64/xen/bin/xenconsole 5

You can tee your interactive session to a logfile, but unlike with the configuration shown above, the flow into the file will cease when you exit xenconsole:

/usr/lib64/xen/bin/xenconsole 5 | tee /var/log/xen/console/guest-domain.log

Some interesting and highly technical background on the provisioning and configuration of the underlying serial devices and emulations is provided in https://xenbits.xenproject.org/docs/unstable/misc/console.txt; however, you are unlikely to find it useful to your logging endeavour unless you are engaged in extreme debugging.

Running OPNsense Serial image Installer on Xen


I'm not used to using hard drive images as VM installation media as opposed to ISO images. The boot="dc" directive in a Xen configuration flatfile specifies boot order by device type and not address - putting 'd' before 'c' does not mean the second image file specified will be booted, it means any CD-ROM image will be booted before any hard drive image.

From the official Xen wiki, Setting boot order for domUs:

# boot on floppy (a), hard disk (c), Network (n) or CD-ROM (d)
# default: hard disk, cd-rom, floppy
boot="cda"

Pecking order is then resolved by the sequence of definition. The same is even true of XenCenter, which provides a dropdown selector of bootable media categories and no way to specify a single image. Even the CLI toolstack does not provide this facility, and that makes me leery of installations where the second disk image - the one we are installing to - will be misidentified by device node once the installation is complete, the installer's image has been detached and the installed image's device node is bumped up in sequence.

Fortunately (so far!), at least OPNsense seems to boot without issue from what was ada1 during installation but forevermore occupies ada0. However, from experience I know this will not hold true for many installations. The fix is to mount the guest image while it is shut down and search-and-replace every reference to the root device's node. I start by grepping the whole /etc tree and then focus on the guest's bootloader - which may require an offline, chrooted redeployment post-reconfiguration.
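As a sketch of that offline surgery, illustrated with a hypothetical Linux guest and sdX-style nodes (image path, partition number and device names are all assumptions; for a FreeBSD/UFS guest like OPNsense you would do this from a BSD rescue environment, since Linux's UFS write support is effectively read-only):

```shell
# Expose the shut-down guest's whole-disk image as partitions (root required)
losetup -fP --show /xen/guest/guest.hdd    # prints the loop device, e.g. /dev/loop0
mkdir -p /mnt/guest
mount /dev/loop0p1 /mnt/guest              # partition number is an assumption - check fdisk -l

# Hunt down stale references to the installer-era device node
grep -r 'sdb1' /mnt/guest/etc

# /etc/fstab is the usual first offender
sed -i 's/sdb1/sda1/g' /mnt/guest/etc/fstab

umount /mnt/guest
losetup -d /dev/loop0
```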

I used the following config and simply twiddled the disk line:

name = "opnsense-serial"
type = "hvm"
vcpus = 2
memory = 2048
vif = [ 'bridge=nullbr0', 'type=ioemu, bridge=nullbr0' ]
disk = ['file:/xen/opnsense/OPNsense-23.7-serial-amd64.img,hda,w', 'file:/xen/opnsense/opnsense.hdd,hdb,w']
#disk = ['file:/xen/opnsense/opnsense.hdd,hda,w']
serial = 'pty'

Then reverted to my original production config.

I would note that vgaconsole is entirely disabled on the serial installer, so configuring graphics and vnc/other GUI access is rendered moot.

Always remember to launch your serially-managed VMs with the console attached by affixing the -c flag to your xl create command line - at least while you are managing or troubleshooting the VM - as this is the only way to view every line of output from bootstrap onward, short of enabling console logging to the dom0 (taking into consideration the potential security implications of storing every detail from every management session thereby conducted).
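Concretely - the config path here is an assumption - that means launching with:

```shell
# Create the domU and attach its console in one stroke; detach later with Ctrl-]
xl create -c /etc/xen/opnsense-serial.cfg
```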

To enable logging of this interface, check out Enable xenconsoled Serial Logging.

Everything You Never Wanted to But Absolutely Have to Know About Sparse Files


This article began as what I hoped would be a short and helpful sentence about a tiny caveat regarding managing sparsely allocated files which I felt was needed badly enough to edit a straightforward and very succinct little primer I wrote all the way back in 2010 called Managing Raw Disk/File System Image Files. This effort escalated so violently out of proportion into a subsection bigger and much more dense than the original article that I turned it into one. Fill yer boots!

So! What are sparse files? I'm so glad you didn't have to ask. Despite vicious rumours, they are not a surreptitious form of filesystem debt. They look something like this in the wild:

ls -lsah
total 9.4G
   0 drwxr-xr-x. 2 root root  101 Dec 10 00:09 .
   0 drwxr-xr-x. 7 root root   92 Dec 10 00:09 ..
724M -rw-r--r--. 1 root root 724M Dec  9 14:46 opnsense-23.7.9-xen.hdd.xz
4.0K -rw-------. 1 root root  466 Dec  9 14:31 opnsense.conf
6.1G -rw-r--r--. 1 root root  20G Dec 10 00:08 opnsense.hdd
2.6G -rw-r--r--. 1 root root  20G Dec  9 11:58 pristine.hdd

These files have been allocated 20GB to one day grow into - or not - but for now they weigh about 9GB on the books, and that's all you'll be billed for when you check df -h, because that is all the space they are in fact using at present.

Weird. So what gives?

Sparse files are almost always the best choice for a new VM's raw disk image because while presenting a concrete limit to the system contained therein, they are only allocated blocks on the host filesystem as they are written to. This permits the most optimal use of truly available storage space and even makes overprovisioning possible.
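You can watch this bookkeeping in action with nothing but dd and du; the file below is a throwaway stand-in conjured for demonstration:

```shell
# Allocate a 100MB sparse file: seek past the end, write zero blocks
dd if=/dev/zero of=/tmp/sparse-demo.img seek=100 bs=1M count=0 2>/dev/null

# Apparent size - what the file claims on the books
du -m --apparent-size /tmp/sparse-demo.img    # 100

# Blocks actually allocated on the host filesystem - nothing yet
du -m /tmp/sparse-demo.img                    # 0

# Write 5MB into the middle; only those blocks become allocated
dd if=/dev/zero of=/tmp/sparse-demo.img seek=50 bs=1M count=5 conv=notrunc 2>/dev/null
du -m /tmp/sparse-demo.img                    # about 5
```

Note conv=notrunc on the last write - without it dd would truncate the file at the 55MB mark.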

To create a sparse file run:
dd if=/dev/zero of=image.img seek=X bs=1M count=0
Where X is the size of the image in MB.

To install a fresh and empty filesystem in your sparse image call up your mkfs of choice and direct it at the filename. You may pull up a man page at this point and feel inclined to sprinkle some sensible, hopefully optimizing flags on top. The image file may now be treated as a single partition.
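For instance - assuming ext4 and stock mke2fs defaults, with a demonstration path of my own invention (mounting it afterward requires root and a loop device):

```shell
# Carve out a 256MB sparse image and format it as a single-partition ext4 image
dd if=/dev/zero of=/tmp/part-demo.img seek=256 bs=1M count=0 2>/dev/null
mkfs.ext4 -q -F /tmp/part-demo.img    # -F: don't balk at formatting a regular file

# As root, it then mounts like any block device via an implicit loop device:
#   mount -o loop /tmp/part-demo.img /mnt/img
```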

To create a whole disk image out of it you can use fdisk and its competitors, the same as you would on a block device, but specifying the filename instead. This results in a somewhat more complicated scenario however, as you will need to make use of loop devices to access the constituent partitions. This can be made easier (or even more convoluted!) with a bevy of tools and scripts that come with little hope of being already installed, readily found or easy to use. Luckily for you, while writing Mounting LUKS Encrypted Drives, Disk Images and Partitions Thereof I found a really good shell script that uses common tools and will likely work right out of the copypasta box. The first script posted in that article is for you. The second version, I kind of taught it crypto. Because I play a spooky hacker on TV.

When it comes to virtual machines, the choice of whole disk vs single image partitions is usually made for you. While there are oddball pre-built appliance images and it was my favourite way to build a PV image in the earlier Xen era, even a custom import/conversion is unlikely to occupy a single partition image. Most contemporary virtualization solutions tend to expect unmodified bootloaders (installed to MBR or EFI) and in-system kernels etc. Additionally, one is hard pressed to find an OS installation medium that expects to perform anything less than a whole-disk install.

Essentially the only time it is actually preferable to create a directly, linearly allocated raw image is when handling situations where potential fragmentation is markedly detrimental - or in fact unacceptable - as is the case with swap image files:

dd if=/dev/zero of=swap.img bs=1M count=2048
mkswap swap.img
swapon swap.img

Don't forget to add the additional swap to /etc/fstab if it is to be a permanent addition. If different sources of swap space have different performance characteristics it is prudent to add a usage priority weighting to enable optimum performance. Add pri=X after the sw keyword:

/dev/sda2   none   swap   sw,pri=1   0 0
/swap.img   none   swap   sw,pri=2   0 0

Swap image files are an essential tool in production situations when physical RAM is exhausted and can not be allocated to a starved VM suffering out-of-memory process kills and crashes. Or, as I prefer to frame it: they are a last-minute hack permissible only in a crisis, and they demand revisiting with a real solution that does not all but require ceaselessly thrashing to storage what should properly be fixed in RAM. Try to avoid making directly allocated image files from within running VMs that may draw on sparse images; create them externally on the host system (dom0/hypervisor manager) instead. It is usually possible to attach new storage images to a running VM.
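Attaching a freshly minted, host-side swap image to a running domU is a one-liner with the xl toolstack (the domain name, image path and virtual device node here are assumptions):

```shell
# Present /xen/swap.img to the running guest "mydomu" as xvdb, writable
xl block-attach mydomu 'format=raw, vdev=xvdb, access=rw, target=/xen/swap.img'
```

The guest can then mkswap/swapon the new device from within.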

It is also necessary to be aware of how tools interact with a sparse image in certain situations: programs that are not sparse-aware, or have not been made aware (usually via a flag, where supported), will often seem to inflate a sparse file as it is manipulated or moved (i.e. over sftp), spontaneously filling in unallocated blocks with zeroes at the destination and thereby undoing their purpose. It is necessary to employ a little forethought: rsync and tar possess the -S or --sparse facility, and tar can be leveraged to make a safe-for-transport archive which retains the sparse properties of its contents upon extraction, wherever that may be. It is also possible to exploit this capability with a convenient pipe:
tar cSvf - sparse.img | ssh remote "tar xSvf -"
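To convince yourself the pipe is doing its job, you can round-trip an image locally - substituting a local extraction for the ssh hop - and compare allocations (GNU tar and throwaway /tmp paths assumed):

```shell
# A 100MB sparse file with only 1MB of real data at the front
dd if=/dev/zero of=/tmp/roundtrip.img seek=100 bs=1M count=0 2>/dev/null
dd if=/dev/urandom of=/tmp/roundtrip.img bs=1M count=1 conv=notrunc 2>/dev/null

# Archive with -S and extract into a scratch directory, as the ssh pipe would
mkdir -p /tmp/extracted
tar -C /tmp -cSf - roundtrip.img | tar -C /tmp/extracted -xSf -

# The copy keeps its 100MB apparent size but only ~1MB of allocated blocks
du -m --apparent-size /tmp/extracted/roundtrip.img
du -m /tmp/extracted/roundtrip.img
```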

When files and metadata are deleted from a filesystem contained in a sparsely allocated image, the unused blocks are not de-allocated in the host filesystem. There is no mechanism for communication between the two about such operations - if there were, it would be akin to an SSD's TRIM functionality. That means a sparse image's actual size can only increase, never decrease. Furthermore, the data written inside the image becomes increasingly fragmented as the now "conventionally empty" blocks (allocated but empty, like a normal file) are re-allocated; with each pass they are quite likely to be assigned chunks of ever less contiguously arranged data. It is unlikely to become severe, but depending on the usage pattern, scale and value, you may find the space and performance reclaimed make it worth simply starting over: with a brand new sparse image, copy over the contents of the original so they are written in an efficient, contiguous burst at the beginning of the file. Where it is a simple matter of mounting the contained filesystems simultaneously to access their contents, you can easily leverage cp's archival capability to preserve the filesystem structure and details:
mkdir /mnt/old
mkdir /mnt/new
dd if=/dev/zero of=new.img seek=X bs=1M count=0
mkfs.ext4 new.img
mount old.img /mnt/old
mount new.img /mnt/new
cp -ax /mnt/old/* /mnt/new/
umount /mnt/new
umount /mnt/old

To enlarge a sparse file run:
dd if=/dev/zero of=image.img seek=X bs=1M count=0
Where X is the new total size in MB: the current size of the sparse file plus the amount of space you wish to grow the image by (bs=1M making each unit of seek one MB).

When you're working with a single partition image, expanding the existing filesystem usually has a fun, easy utility like resize2fs, which is so easygoing in spite of its name it will even expand an ext4 filesystem for free, if you don't make a big deal about it. If you are expanding or shuffling partitions in a full disk image, however, good luck with that unmitigated nightmare. They don't pay me enough to do horror writing. Situations like that are the very archetype of why alternative, extremely flexible solutions like LVM are so popular. If you need to perform non-optional, invasive partition surgery on a file-backed drive (raw or otherwise) I would at that point strongly advise you to consider migrating your data to LVM instead.
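A minimal sketch of growing a single-partition ext4 image end to end (demonstration paths of my own devising; resize2fs insists on a clean fsck before it will touch anything):

```shell
# Start with a 256MB sparse ext4 image
dd if=/dev/zero of=/tmp/grow-demo.img seek=256 bs=1M count=0 2>/dev/null
mkfs.ext4 -q -F /tmp/grow-demo.img

# Extend the apparent size to 512MB - existing contents are untouched
dd if=/dev/zero of=/tmp/grow-demo.img seek=512 bs=1M count=0 2>/dev/null

# resize2fs demands a recent filesystem check before it will grow anything
e2fsck -f -p /tmp/grow-demo.img
resize2fs /tmp/grow-demo.img    # with no size argument, grows to fill the file
```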

A Fresh Squeezed List of Xen Paravirtual Devices Provisioned to FreeBSD DomUs


For no reason aside from it being nice to at least acquaint one's self with the buzzwords - if not at some level of forced compunction make a half-hearted go of skimming through the terrifying guts of a big and complicated system you have already, or have meritorious reason to fear, a long and soul-sucking betrothal to. In my case it's NetBSD under Xen, if you've been at all tracing the concerning trajectory my recently published articles are steadfastly committed to plough through arse-first with no feigned notion of expertise, as one does when one endeavours to endear from the blessedly beleaguered-by-spitballs-and-wise-cracks back of the class from whence I clearly matriculated. Indeed, you have stumbled on another unclever attempt to snooker some SEO whilst also self-abasingly stretching my shameful, verbial pud - as they say in the old country.

acpi0: <Xen>
debug0: <Xen debug handler> on xenpv0
evtchn0: <Xen event channel user-space device> on xenpv0
gntdev0: <Xen grant-table user-space device> on xenpv0
granttable0: <Xen Grant-table Device> on xenpv0
privcmd0: <Xen privileged interface user-space device> on xenpv0
xbd0: 20480MB <Virtual Block Device> at device/vbd/768 on xenbusb_front0
xctrl0: <Xen Control Device> on xenstore0
xenballoon0: <Xen Balloon Device> on xenstore0
xenbusb_back0: <Xen Backend Devices> on xenstore0
xenbusb_front0: <Xen Frontend Devices> on xenstore0
xen_et0: registered as a time-of-day clock, resolution 0.000001s
xen_et0: <Xen PV Clock> on xenpv0
xenpci0: <Xen Platform Device> port 0xc000-0xc0ff mem 0xf4000000-0xf4ffffff irq 24 at device 2.0 on pci0
xenpv0: <Xen PV bus>
xenstore0: <XenStore> on xenpv0
xn0: backend features:
xn2: <Virtual Network Interface> at device/vif/2 on xenbusb_front0
xn0: <Virtual Network Interface> at device/vif/0 on xenbusb_front0
xn1: <Virtual Network Interface> at device/vif/1 on xenbusb_front0
xn3: <Virtual Network Interface> at device/vif/3 on xenbusb_front0
xsd_dev0: <Xenstored user-space device> on xenpv0
xs_dev0: <Xenstore user-space device> on xenstore0

For the interested, one produces such a tidy amalgam by piping dmesg through a convoluted chain of cozy standbys we in the industry lean upon from time to time what'fer making one's shell scripts shine:

dmesg | grep xen > temp; dmesg | grep Xen >> temp; sort temp | uniq > temp.sorted

Then one naturally calls in nano, champion of the vim-emacs holy war, to put in the spit shine needed to weed out the straggling non-device-noded contestants in this wonderful game we call: how to make a vapid and keyword rich utterance before the kettle whistles. Medium, eat your goddamned heart out.

OPNsense-23.7.9-xen: All Gussied up with the foxpa.ws Xen Appliance Treatment


What could be said more truthfully of Xen than: optimizing new domUs requires a measure of patience. This image is a ready-to-roll, fully xen-system-tool'd up and xen console-wranglified minimal installation with but a few minor cosmetic enhancements and some blow-away initial configuration, complete with an olde-timey xl config flatfile. I blew a night so you don't have to - and neither shall I blow another so long as it remains fully automatically upgradable from this point and, given their track record at thoughtful upgrades, I am in fact anticipating a worthwhile remuneration in life's most precious resource.

Download the image from our Telegram downloads channel at https://t.me/foxpaws_downloads/17

The process covered in Xen HVM Configuration for Installing Guest DomU from ISOs was used to bootstrap from the latest vga installation ISO, which was then retroactively de-vga'd and en-console'd to ensure a Xen compatible administrative console; if a redux is called for at some time I might roll the dice on the serial installer image. Nano looks great for embedded but I'm wrangling weapons grade amd64 carrier gateways, and I feel confident that I'll be sticking with OpenWRT wherever WiFi is concerned.

An example config for your consideration:
name = "opnsense-foxpaws"
builder = "hvm"
vcpus = 2
memory = 2048
serial = 'pty'
disk = ['file:/var/xen/opnsense-23.7.9-xen/opnsense-23.7.9-xen.hdd,hda,w']
vif = [ 'bridge=wanbr0', 'bridge=lanbr0', 'bridge=dmzbr0' ]
#vga = "stdvga"
#videoram = 64
#vnc = 1
#vnclisten = ""
#vncdisplay = 0
#vncpasswd = "securepass"

If you run into trouble try uncommenting the graphical and vnc directives as you may gain insights into the bootstrap and vgaconsole (more fully available via a bootloader option if you catch it fast enough) not otherwise possible. Try to be quick on the draw with your vnc client (i.e. have your command line or host already configured) as the first moments can be the most critical. If you are not a casual vnc user grab RealVNC Viewer from my vast and always updated free windows software recommendations list. It made the cut because it's well designed, widely compatible, performant and isn't fraught with intellectual property issues.

Uncompressed, the image stands ~7GB tall within its 20GB (sparse allocated) limit. The following configuration has been performed in the interest of a speedy launch and must be upended to some extent:

  • root (console, webconfig) password is foxpaws
  • WAN interface is set to gw and "block private networks" has been unchecked.
  • LAN (webconfig, ssh) is
  • Primary console: Serial, secondary console: EFI. Enabled vt driver.
  • DNS resolvers set to,,,
  • os-xen and os-qemu-agent plugins installed
  • nano and screen packages installed
  • os-theme-cicada (dark) theme installed and enabled
  • Webconfig, SSH as root and with password authentication listening on LAN interface only