On Apple Silicon Macs, UTM is one of the best free virtualization options available, offering solid performance for ARM-based virtual machines.
As we saw in the Vagrant - installing and operating post, Vagrant provides a convenient way to automate the setup, configuration, and management of virtual machines, enabling reproducible and consistent development environments.
Sadly, when dealing with Vagrant, only a few prebuilt Vagrant boxes support the UTM provider, which limits its out-of-the-box usability. Create ARM64 VagrantBox for Oracle Linux on UTM arch64 from Scratch shows how to create an aarch64 Oracle Linux 10 Vagrant box supporting the UTM ARM provider, making it easier to deploy and manage Linux VMs on Apple Silicon.
Create The Oracle Linux VM
Creating a brand new Vagrant box requires a virtual machine that satisfies some Vagrant-related prerequisites.
So, let's start the creation process by installing the Linux VM on the hypervisor that the Vagrant box will support.
Download The Boot Image
First things first, we must download the Oracle Linux 10 aarch64 installation image which, along with the other Oracle Linux installation images, is available at https://yum.oracle.com/oracle-linux-isos.html.
Define The Oracle Linux VM Instance
Once downloaded, launch the UTM UI and create the definition of the new VM for Oracle Linux 10.
The current wizard at the time of writing this post has the following steps:
- Create a New Virtual Machine, in this lab we call it ol10
- Choose between Virtualize or Emulate - mind that Emulate, as the name suggests, enables the emulation of a different hardware architecture but performs worse - choose Virtualize
- Select the operating system family (Linux in our case)
- As the installation source, select the ISO image you just downloaded
- Assign capacity: 2 CPUs and 2 GiB of RAM are more than enough to run the installation process smoothly
- Size the disk to 20 GB (this is the bare minimum disk space required for installing Oracle Linux 10).
Before saving, tick the Open VM Settings option - this is necessary to apply the following additional settings:
- remove the sound hardware (it is unlikely to be needed - of course, keep it if necessary)
- make sure the first network card is set to Shared Network (this is a mandatory requirement for the UTM VM to get a dynamic IP address)
- add a network card of kind Emulated VLAN (this is a mandatory requirement to support port forwarding)
Install Oracle Linux
Once done, boot the Oracle Linux 10 VM and wait until the Anaconda Linux installer completes loading.
Then, from the Anaconda installation wizard:
- Set English language and keyboard layout
- Set UTC timezone
- Software Selection: Minimal install
- Select Custom storage and partition your VM. My personal suggestion, which is CIS compliant and also leaves roughly 1.2 GiB of free space in the LVM volume group for contingency (we will verify this right after the first boot), is:
- /boot 2 GiB, standard partition
- /boot/efi 1 GiB, standard partition
- swap 1 GiB, LVM
- /tmp 1 GiB, LVM
- /var 4 GiB, LVM
- /var/lib 2 GiB, LVM
- /var/log 512 MiB, LVM
- /var/log/audit 256 MiB, LVM
- /home 1 GiB, LVM
- / 6 GiB, LVM
- Enable the root account and set its password to vagrant
Once the installation completes, eject the virtual DVD and then reboot the system.
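As a quick sanity check of the leftover free space mentioned in the partitioning suggestion above, after the first boot you can inspect the LVM layout. A minimal check, run as root (the volume group name is whatever you picked in Anaconda), is:
vgs
lvs -o lv_name,vg_name,lv_size
The VFree column of vgs shows the space left in the volume group.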
Satisfying Vagrant Requirements
As we anticipated, Vagrant boxes must satisfy some Vagrant-specific requirements: to meet them, we still have to perform a few post-installation steps.
Unfortunately, since SSH access as the root user is disabled by default, to complete the setup we must log in to the system console as the root user.
Create The Vagrant User
Once logged in, since Vagrant operates using the vagrant user, the first post-installation step is to create this user.
Just run the following command:
adduser vagrant
We must then assign its password:
passwd vagrant
When prompted, assign the word vagrant as the password, since this is the default password for the vagrant user in Vagrant boxes.
Create The Vagrant User Sudo Rule
Since the vagrant user is required to have administrative rights, it is also necessary to create a sudo rule that enables it to perform any administrative task as any user.
Just create the /etc/sudoers.d/vagrant file with the following contents:
%vagrant ALL=(ALL) NOPASSWD: ALL
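While still logged in as root on the console, a possible way to create and validate the rule in one go is the following sketch (the visudo check is just a safety net):
cat <<'EOF' > /etc/sudoers.d/vagrant
%vagrant ALL=(ALL) NOPASSWD: ALL
EOF
chmod 0440 /etc/sudoers.d/vagrant
visudo -cf /etc/sudoers.d/vagrant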
Operating in the console is quite impractical: for example, we are missing copy and paste. Luckily, we now have a sudo-enabled user that can connect to the VM using SSH, so from now on we can work more comfortably over an SSH connection.
We can easily get the VM's IP address to connect to by running:
ip -4 a | grep inet | awk '{print $2}'
The right one should be the one on the second line.
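If you prefer a variant that skips the loopback address altogether, listing only globally scoped addresses should also do the trick:
ip -4 -o addr show scope global | awk '{print $4}'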
So, let's now connect to the VM using SSH as the vagrant user.
For example, assuming the IP address is 10.2.3.4, on a terminal on your computer, run:
ssh -l vagrant 10.2.3.4
accept the remote host's fingerprint by typing yes, and then type vagrant as the password.
Disable DNS Resolution On SSHd
Since Vagrant boxes are typically used in lab and dev environments, there is no real value in having SSH perform a DNS lookup for each incoming connection just to better identify its source host. So, to speed up the SSH connection process, let's disable it by adding the following line to the /etc/ssh/sshd_config file:
UseDNS no
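If you prefer to do it straight from the SSH session rather than with an editor, one way to append the directive, validate the configuration and restart the SSH daemon (assuming the unit is named sshd, as on Oracle Linux) is:
echo 'UseDNS no' | sudo tee -a /etc/ssh/sshd_config
sudo sshd -t
sudo systemctl restart sshd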
Setup Vagrant's Default SSH Public Key
Every time Vagrant provisions a new VM, it generates on the hypervisor host a new private key dedicated to that specific VM and authorizes the related public key on the spawned VM. To perform the initial connection needed to authorize that public key, it uses a default SSH key pair which is expected to be pre-authorized on the Vagrant box.
To enable this mechanism to work, we must authorize Vagrant's default SSH public key.
First, as the vagrant user, we must create the .ssh directory as follows:
mkdir -m 0700 ~/.ssh
Then, we can retrieve vagrant's default SSH public key from GitHub as follows:
curl -k -o ~/.ssh/authorized_keys \
https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub
chmod 0600 ~/.ssh/authorized_keys
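To double-check that both the permissions and the key are in place, you can run:
ls -ld ~/.ssh ~/.ssh/authorized_keys
ssh-keygen -lf ~/.ssh/authorized_keys
the second command prints the fingerprint of the authorized public key.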
Configure VM Guest Additions
Vagrant provides smooth integrations with the hypervisor host, such as shared folders, which provide a way to mount a directory tree from the virtualization host's filesystem into the VM, making it easy to share content.
These integrations require installing a few guest addition components: more specifically, when using UTM, they are the spice-vdagent and qemu-guest-agent RPM packages.
Let's install them as follows:
sudo dnf install -y spice-vdagent qemu-guest-agent
On Oracle Linux, for security reasons, it is possible to restrict the list of actions which are allowed to run while using qemu-guest-agent.
This is governed by the FILTER_RPC_ARGS variable in the /etc/sysconfig/qemu-ga file.
In addition to the default allowed set, UTM also requires:
- guest-exec: this is used to allow the host to run arbitrary commands or scripts inside the guest VM.
- guest-exec-status: this is used to query the status of a previously executed command (initiated by guest-exec).
This means that we must add them to the list in the FILTER_RPC_ARGS variable.
For example:
FILTER_RPC_ARGS="--allow-rpcs=guest-sync-delimited,guest-sync,guest-ping,guest-get-time,guest-set-time,guest-info,guest-shutdown,guest-fsfreeze-status,guest-fsfreeze-freeze,guest-fsfreeze-freeze-list,guest-fsfreeze-thaw,guest-fstrim,guest-suspend-disk,guest-suspend-ram,guest-suspend-hybrid,guest-network-get-interfaces,guest-get-vcpus,guest-set-vcpus,guest-get-disks,guest-get-fsinfo,guest-set-user-password,guest-get-memory-blocks,guest-set-memory-blocks,guest-get-memory-block-info,guest-get-host-name,guest-get-users,guest-get-timezone,guest-get-osinfo,guest-get-devices,guest-ssh-get-authorized-keys,guest-ssh-add-authorized-keys,guest-ssh-remove-authorized-keys,guest-get-diskstats,guest-get-cpustats,guest-network-get-route,guest-exec,guest-exec-status"
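After editing /etc/sysconfig/qemu-ga, restart the agent so the new allow-list takes effect (the unit name is assumed to be qemu-guest-agent):
sudo systemctl restart qemu-guest-agent
sudo systemctl status qemu-guest-agent --no-pager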
Don't forget this step, or at the end of the vagrant up process you will get the following puzzling error:
There was an error while executing `utmctl`, a CLI used by vagrant-utm
for controlling UTM. The command and stderr is shown below.
Command: ["exec", "6582E75C-BD9B-4058-8E17-F696D32602F3", "--cmd", "whoami"]
Stderr: Error from event: The operation couldn’t be completed. (OSStatus error -2700.)
Command guest-exec has been disabled: the command is not allowed
Lock The Root User
We are nearly done with the post-installation steps to satisfy Vagrant's requirements.
We just need to lock the root user - just run:
sudo passwd -l root
The VM now has all the necessary prerequisites for being used with Vagrant: we are almost ready to package it as a Vagrant box.
Cleanups
Let's just clean up the caches:
sudo rm -rf /var/cache/dnf/*
sudo rm -rf /tmp/* /var/tmp/*
and logs:
sudo truncate -s 0 /var/log/messages /var/log/secure
sudo journalctl --vacuum-time=1s
then, let's remove the SSH host keys to make sure they get regenerated when provisioning Vagrant VMs:
sudo rm -f /etc/ssh/ssh_host_*_key*
To minimize the final box size, the best practice is to zero out the free space so it compresses better (the dd command is expected to stop with a "No space left on device" error once the disk is full):
sudo dd if=/dev/zero of=/EMPTY bs=1M
sudo rm -f /EMPTY
Lastly, let's empty the BASH history:
history -c && rm -f ~/.bash_history
Before starting the Vagrant box creation process, shut down the VM:
sudo shutdown -h now
Generate The Vagrant Box
A Vagrant box is just an archive containing a single VM running a specific major version of a given operating system, built for a particular hardware architecture and supporting a certain Vagrant provider.
The contents of a Vagrant box are:
- The VM definition or, depending on the provider, at least the main VM disk
- the metadata.json file, with a few metadata entries describing the Vagrant box contents
- an optional info.json file, which is used to provide additional custom information
- an optional Vagrantfile containing provider-specific directives, which can be used as a reference when creating your own Vagrantfile
Unlike when creating Vagrant images for the VirtualBox provider, when dealing with UTM there are no command-line utilities to automate the Vagrant box generation.
Since we are doing the process manually, for our convenience let's create a staging directory on the virtualization host:
mkdir ~/staging
cd ~/staging
We will use it to store all the necessary pieces for assembling the Vagrant box image of the UTM VM we just set up.
Let's start by copying to the staging directory the directory tree containing all the components of the freshly created VM (qcow2 disk, metadata, thumbnail image and so on):
cp -r ~/Library/Containers/com.utmapp.UTM/Data/Documents/ol10.utm box.utm
As we said, the Vagrant box must also contain the metadata.json file, which provides a little information about the Vagrant box contents: when dealing with UTM, it is enough to specify in it utm as the Vagrant box's provider, along with the hardware architecture of the box.
So, in the staging directory, create the metadata.json file with the following contents:
{
"architecture": "arm64",
"provider": "utm"
}
If you fancy, you can add additional information in the info.json file.
For example:
{
"author": "Marco Antonio Carcano",
"homepage": "https://grimoire.carcano.ch"
}
Last but not least, to document and illustrate provider-specific settings, it is also possible to optionally provide a Vagrantfile. This has nothing to do with the Vagrantfile generated by running vagrant init: the sole purpose of this file is to provide a template the user can easily extract from the Vagrant box and use as a reference when writing their own Vagrantfile.
In the staging directory, create the Vagrantfile with the following contents:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "grimoire/ol10"
config.vm.provider "utm" do |u|
u.name = "grimoire_lab_01"
u.memory = "2048"
u.cpus = 2
end
config.vm.synced_folder '.', '/vagrant'
end
To summarize, let's have a look at the final contents of the staging directory on my system:
find .
The output is as follows:
metadata.json
box.utm
box.utm/screenshot.png
box.utm/config.plist
box.utm/Data
box.utm/Data/efi_vars.fd
box.utm/Data/1767593C-76D8-458B-B8FF-EA445CC162FB.qcow2
Vagrantfile
info.json
The last step is to perform the actual packaging by running:
tar czf ~/ol10-1.0.0-arm64-utm.box *
This completes the procedure to create the Vagrant Box for the UTM provider.
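If you plan to publish the box, for example through a box catalog, it is handy to record its SHA-256 checksum now:
shasum -a 256 ~/ol10-1.0.0-arm64-utm.box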
We can now remove the staging directory, since we don't need it anymore:
cd ..
rm -rf staging
Install the Vagrant Box
Our brand new Vagrant box is ready to be tested.
To use it, we must first install it by running:
vagrant box add --name grimoire/ol10 \
--provider utm ~/ol10-1.0.0-arm64-utm.box
The output is as follows:
==> box: Box file was not detected as metadata. Adding it directly...
==> box: Adding box 'grimoire/ol10' (v0) for provider: utm
box: Unpacking necessary files from: file:///Users/mcarcano/ol10-1.0.0-arm64-utm.box
==> box: Successfully added box 'grimoire/ol10' (v0) for 'utm'!
As the output shows ("Box file was not detected as metadata"), since we added the .box file directly rather than through a versioned metadata catalog, the detected version falls back to v0.
This is acceptable in a lab or in a personal environment, such as when running a POC, but it can lead to version collisions if used extensively.
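If you need proper versioning, one workaround is to add the box through a minimal box catalog JSON file instead of pointing vagrant box add directly at the .box archive - the catalog below is just a sketch, with the name, version and URL being examples:
{
  "name": "grimoire/ol10",
  "versions": [
    {
      "version": "1.0.0",
      "providers": [
        {
          "name": "utm",
          "url": "file:///Users/mcarcano/ol10-1.0.0-arm64-utm.box"
        }
      ]
    }
  ]
}
Saving it for example as ol10.json and running vagrant box add ol10.json makes Vagrant register the box as version 1.0.0 instead of v0.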
Test The Vagrant Box
Once done, before putting it into a Vagrant Boxes repository, we must test the freshly created Vagrant box.
Create a directory for the testing Vagrant project, such as ol10-testing:
mkdir ~/ol10-testing
create a Vagrantfile - we can just extract the one we put in the Vagrant box:
tar xfz ~/ol10-1.0.0-arm64-utm.box -C ~/ol10-testing Vagrantfile
change to the Vagrant project directory:
cd ~/ol10-testing
if you fancy, adjust the Vagrantfile to test additional features, for example by adding a port-forwarding rule as shown below.
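For instance, a hypothetical rule forwarding the guest's port 80 to port 8080 on the host is a single line to be placed inside the Vagrant.configure block:
config.vm.network "forwarded_port", guest: 80, host: 8080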
Before going on, you must remove from UTM the virtual machine you derived the Vagrant box from. This step is necessary since the UTM provider copies the UTM VM definition directory from the Vagrant box into the UTM datastore directory, renaming it as configured in the Vagrantfile. For this reason, leaving the VM we installed in place leads to a naming collision.
rm -rf ~/Library/Containers/com.utmapp.UTM/Data/Documents/ol10.utm
Once done, as in any Vagrant project using off-the-shelf Vagrant boxes, simply run:
vagrant up
wait for Vagrant to complete the provisioning of the brand new system, then run:
vagrant ssh
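Once logged in, a quick smoke test is checking that the synced folder declared in the Vagrantfile is actually mounted; when you are done with the tests, log out and dispose of the test VM from the host:
ls /vagrant          # inside the guest: the synced folder contents should show up
exit                 # back to the host
vagrant destroy -f   # tear down the test VM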
Footnotes
We have come to the end of this post. As you saw, producing a Vagrant box for the UTM provider is not that hard; it just requires some time, which is well worth the effort given the time Vagrant saves every time you need to quickly provision mock environments. I hope this post was useful to you and, if you liked it, … please give your tip in the small cup below:
