This page describes the steps necessary to get Fedora for RISC-V running on emulated hardware.
Quickstart
This section assumes that you have already set up libvirt/QEMU on your machine and you're familiar with them, so it only highlights the details that are specific to RISC-V. It also assumes that you're running Fedora 40 as the host.
First of all, you need to download a disk image from https://dl.fedoraproject.org/pub/alt/risc-v/disk_images/Fedora-40/
As of this writing, the most recent image is Fedora-Minimal-40-20240502.n.0-sda.raw.xz, so I will be using that throughout the section. If you're using a different image, you will need to adjust things accordingly.
Once you've downloaded the image, start by uncompressing it:
$ unxz Fedora-Minimal-40-20240502.n.0-sda.raw.xz
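If you are scripting these steps, the name of the uncompressed image can be derived from the download instead of being typed out twice. A small sketch, where IMAGE_XZ is a variable introduced here for illustration and not part of the guide's commands:

```shell
# Derive the raw image name by stripping the .xz suffix
# (IMAGE_XZ is a placeholder for whatever image you downloaded)
IMAGE_XZ=Fedora-Minimal-40-20240502.n.0-sda.raw.xz
IMAGE_RAW=${IMAGE_XZ%.xz}
echo "$IMAGE_RAW"
```

Later commands can then refer to "$IMAGE_RAW" rather than repeating the long filename.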
You need to figure out the root filesystem's UUID so that you can later pass this information to the kernel. The virt-filesystems utility, part of the guestfs-tools package, takes care of that:
$ virt-filesystems \
    -a Fedora-Minimal-40-20240502.n.0-sda.raw \
    --long \
    --uuid \
  | grep ^btrfsvol: \
  | awk '{print $7}' \
  | sort -u
ae525e47-51d5-4c98-8442-351d530612c3
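If you want to keep the UUID around in a shell variable for later reuse, the filtering stage of the pipeline can be wrapped in a small helper. This is a sketch: extract_root_uuid is a hypothetical function name, and field 7 assumes the column layout produced by --long --uuid.

```shell
# Print the unique UUID(s) of btrfs volumes from virt-filesystems output
# read on stdin (with --long --uuid, the UUID is the 7th column)
extract_root_uuid() {
    awk '/^btrfsvol:/ {print $7}' | sort -u
}

# Usage, once the disk image is present (commented out here):
# ROOT_UUID=$(virt-filesystems -a Fedora-Minimal-40-20240502.n.0-sda.raw \
#                 --long --uuid | extract_root_uuid)
```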
Additionally, you need to extract the kernel and initrd from the disk image. The virt-get-kernel tool automates this step:
$ virt-get-kernel \
    -a Fedora-Minimal-40-20240502.n.0-sda.raw
download: /boot/vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64 -> ./vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64
download: /boot/initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img -> ./initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img
Now move all the files to a directory that libvirt has access to:
$ sudo mv \
    Fedora-Minimal-40-20240502.n.0-sda.raw \
    vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64 \
    initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img \
    /var/lib/libvirt/images/
At this point, everything is ready and you can create the libvirt VM:
$ virt-install \
    --import \
    --name fedora-riscv \
    --osinfo fedora40 \
    --arch riscv64 \
    --vcpus 4 \
    --ram 4096 \
    --boot uefi,kernel=/var/lib/libvirt/images/vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64,initrd=/var/lib/libvirt/images/initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img,cmdline='root=UUID=ae525e47-51d5-4c98-8442-351d530612c3 ro rootflags=subvol=root rhgb LANG=en_US.UTF-8 console=ttyS0 earlycon=sbi' \
    --disk path=/var/lib/libvirt/images/Fedora-Minimal-40-20240502.n.0-sda.raw \
    --network default \
    --tpm none \
    --graphics none
Note how the UUID discovered earlier is included in the kernel command line. Quoting is also very important to get right.
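One way to keep the quoting manageable is to assemble the kernel command line in a shell variable first and reference that from virt-install. A sketch; ROOT_UUID and CMDLINE are variables introduced here for illustration:

```shell
# Build the kernel command line once; the value contains spaces, so it must
# stay quoted wherever it is expanded
ROOT_UUID=ae525e47-51d5-4c98-8442-351d530612c3
CMDLINE="root=UUID=${ROOT_UUID} ro rootflags=subvol=root rhgb LANG=en_US.UTF-8 console=ttyS0 earlycon=sbi"
echo "$CMDLINE"
```

The --boot option can then end with cmdline="$CMDLINE", letting the shell splice in the value while preserving the spaces.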
Disabling the TPM with --tpm none is only necessary as a temporary measure due to issues currently affecting swtpm in Fedora 40. If you want to, you can try omitting that option and see whether it works.
You should see a bunch of output coming from edk2 (the UEFI implementation we're using), followed by the usual kernel boot messages and, eventually, a login prompt. Please be patient, as the use of emulation makes everything significantly slower. Additionally, a SELinux relabel followed by a reboot will be performed as part of the import process, which slows things down further. Subsequent boots will be a lot faster.
To shut down the VM, run poweroff inside the guest OS. To boot it up again, use
$ virsh start fedora-riscv --console
UKI images
These can be found in the same location but follow a different naming convention. As of this writing, the most recent image is Fedora.riscv64-40-20240429.n.0.qcow2.
The steps are similar to those described above, except that instead of dealing with kernel and initrd separately you need to extract a single file:
$ virt-copy-out \
    -a Fedora.riscv64-40-20240429.n.0.qcow2 \
    /boot/efi/EFI/Linux/6.8.7-300.4.riscv64.fc40.riscv64.efi \
    .
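Since the UKI filename simply embeds the kernel version, the path can be derived from a version variable rather than typed out by hand. A sketch; KVER is a placeholder introduced here:

```shell
# Derive the UKI path inside the image from the kernel version string
KVER=6.8.7-300.4.riscv64.fc40.riscv64
UKI=/boot/efi/EFI/Linux/${KVER}.efi
echo "$UKI"
```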
The virt-install command line is slightly different too; in particular, the --boot option becomes:
--boot uefi,kernel=/var/lib/libvirt/images/6.8.7-300.4.riscv64.fc40.riscv64.efi,cmdline='root=UUID=57cbf0ca-8b99-45ae-ae9d-3715598f11c4 ro rootflags=subvol=root rhgb LANG=en_US.UTF-8 console=ttyS0 earlycon=sbi'
These changes are enough to get the image to boot, but there are no passwords set up so you won't be able to log in. In order to address that, it's necessary to create a configuration file for cloud-init, for example with the following contents:
#cloud-config
password: fedora_rocks!
chpasswd:
  expire: false
Save this as user-data.yml, then add the following options to your virt-install command line:
--controller scsi,model=virtio-scsi \
--cloud-init user-data=user-data.yml
The configuration data should be picked up during boot, setting the default user's password as requested and allowing you to log in.
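The configuration file can also be generated directly from the shell with a here-document; the contents below are the same example shown above, with the password being the example value from this guide rather than something you should keep.

```shell
# Write the example cloud-init user data to user-data.yml
# (replace the password with one of your own before real use)
cat > user-data.yml <<'EOF'
#cloud-config
password: fedora_rocks!
chpasswd:
  expire: false
EOF
```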
Host setup
The steps outlined above assume that your machine is already set up for running RISC-V VMs. If that's not the case, read on.
At the very least, the following packages will need to be installed:
$ sudo dnf install \
    libvirt-daemon-driver-qemu \
    libvirt-daemon-driver-network \
    libvirt-daemon-config-network \
    libvirt-client \
    virt-install \
    qemu-system-riscv-core \
    edk2-riscv64
This will result in a fairly minimal install, suitable for running headless VMs. If you'd rather have a fully-featured install, add libvirt-daemon-qemu and libvirt-daemon-config-nwfilter to the list. Be warned though: doing so will result in significantly more packages being dragged in, some of which you might not care about (e.g. support for several additional architectures).
In order to grant your user access to libvirt and allow it to manage VMs, it needs to be made a member of the corresponding group:
$ sudo usermod -a -G libvirt $(whoami)
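Group changes only apply to new login sessions, so you can check whether your current session already has the membership with standard tools:

```shell
# List the current user's groups and look for libvirt; a re-login (or the
# reboot suggested below) is needed before the new group shows up
if id -nG "$(whoami)" | grep -qw libvirt; then
    echo "libvirt group active in this session"
else
    echo "libvirt group not active yet"
fi
```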
Finally, the default libvirt URI needs to be configured:
$ mkdir -p ~/.config/libvirt && \
  echo 'uri_default = "qemu:///system"' >~/.config/libvirt/libvirt.conf
Now reboot the host. This is necessary because the changes to group membership won't be effective until the next login, and because the libvirt services are not automatically started during package installation.
After rebooting and logging back in, virsh should work and the default network should be up:
$ virsh uri
qemu:///system

$ virsh net-list
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes
All done! You can now start creating RISC-V VMs.