Can't make the date? If you come to this page before or after the test day is completed, your testing is still valuable; you can use the information on this page to run the tests, file any bugs you find at the
Fedora CoreOS issue tracker, and add your results to the results section. If this page is more than a month old when you arrive here, please check the
current schedule and see if a similar but more recent Test Day is planned or has already happened.
What to test?
Today's installment of Fedora Test Day will focus on Fedora CoreOS.
Who's available?
The following cast of characters will be available for testing, workarounds, bug fixes, and general discussion ...
For real-time help, please join us on IRC in #fedora-coreos on https://libera.chat/.
Documentation is also available here. Documentation feedback is welcome via chat, the mailing list, the GitHub tracker, or a pull request to the documentation sources.
Prerequisites for Test Day
- Virtual machine (x86_64, aarch64, s390x)
- Test day Image
Grab images/artifacts/information for the most current next stream release (38) from our download page: https://getfedora.org/en/coreos/download?tab=cloud_operators&stream=next
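If you prefer the command line, the coreos-installer tool can also fetch the next-stream QEMU image; this is only a sketch of one way to do it, not part of the test cases:
$ coreos-installer download --stream next --platform qemu --format qcow2.xz --decompress
The download is GPG-verified by default, and the decompressed .qcow2 can then be used for the QEMU-based test cases below.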
How to test?
Run the tests
Visit the results page and click on the column title links to see the tests that need to be run: most column titles are links to a specific test case. Follow the instructions there, then enter your results by clicking the Enter result button for the test.
Reporting bugs
Please report all bugs, issues or enhancement proposals to the Fedora CoreOS issue tracker.
If you are unsure about exactly how to file the report or what other information to include, just ask on IRC in #fedora-coreos, #fedora-test-day or #fedora-qa and we will help you.
Test Results
Installation
User | Profile | Virtual install | Bare Metal install | IBM Cloud | References
azukku | | pass [1] pass [2] | pass [3] | | [1][2][3]
brianmcarey | 38.20230322.1.0/x86_64/QEMU | pass | | |
danniel | 38.20230322.1.0/x86_64/QEMU | pass | | |
donaldsebleung | fedora/x86_64/coreos/next (38.20230322.1.0) on QEMU/KVM | pass | | |
garrmcnu | 38.20230322.1.0/x86_64/QEMU | pass | | |
geraldosimiao | 38.20230322.1.0/x86_64/QEMU | pass | | |
hricky | 38.20230322.1.0/x86_64/QEMU | pass | pass | |
lravicha | IBM Cloud/s390x/bz2-1x4/Fedora CoreOS 38.20230326.10.0 | | | pass [4] | [4]
lravicha | https://github.com/LakshmiRavichandran1 | | | pass [5] | [5]
mnguyen | bx2-2x8 | | | pass |
pnemade | 38.20230322.1.0/x86_64/QEMU | pass | | |
sayaksarkar | 38.20230322.1.0/x86_64/QEMU | pass | | |
vishalvvr | 38.20230322.1.0/x86_64/QEMU | pass | | |

[1] All worked fine for me: rpm-ostree install, rpm-ostree kargs --append, rpm-ostree kargs --delete, and static IP via Ignition kernelArguments.
[2] All worked fine for me: rpm-ostree install, rpm-ostree kargs --append, rpm-ostree kargs --delete.
[3] z/VM + DASD, all works fine: rpm-ostree install, rpm-ostree kargs --append, rpm-ostree kargs --delete.
[4] Worked fine for me: downloaded the F38 IBM Cloud QCOW2 image, created IBM Cloud resources for Cloud Object Storage using that QCOW2, created a cloud instance using the dedicated cloud resources, and successfully SSHed into the F38 cloud instance.
[5] IBM Cloud / s390x / bz2-1x4 / Fedora CoreOS 38.20230326.10.0.
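For reference, the rpm-ostree steps mentioned in the footnotes above boil down to commands along these lines; the package name and kernel argument here are arbitrary illustrations, not part of the official test case:
$ sudo rpm-ostree install htop                 # layer an example package
$ sudo rpm-ostree kargs --append=testkarg=1    # add a kernel argument
$ sudo systemctl reboot                        # changes take effect on the next boot
$ rpm-ostree kargs                             # confirm the argument is present
$ sudo rpm-ostree kargs --delete=testkarg=1    # remove it again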
Cloud launch
Platforms in the test matrix: AWS, Azure, GCP, DigitalOcean, VMWare, Exoscale, IBM Cloud, VirtualBox, Vultr, OpenStack, Alibaba. Results were reported for the platforms listed below.

User | Profile | Platform | Result | References
apiaseck | aws m5.large, fcos next Image: ami-05426870cdf857352 | AWS | pass [1] | [1]
dustymabe | s-2vcpu-2gb in nyc3 | DigitalOcean | pass |
fifofonix | Newly provisioned 'next' nodes on vSphere 7.0.3.01200 | VMWare | pass [2] | [2]
hhei | Launch 38.20230322.1.0 on GCP | GCP | pass [3] | [3]
lravicha | IBM Cloud/s390x/bz2-1x4/Fedora CoreOS 38.20230326.10.0 | IBM Cloud | pass [4] | [4]
lravicha | https://github.com/LakshmiRavichandran1 | IBM Cloud | pass [5] | [5]
mnguyen | bx2-2x8 | IBM Cloud | pass |
ravanelli | 38.20230322.1.0/x86_64/GCP | GCP | pass |
ravanelli | VirtualBox 7.0.6 r155176 (Qt5.15.2) | VirtualBox | pass [6] | [6]
vishalvvr | VirtualBox 6.1.42 r155177 / 38.20230322.1.0 / x86_64 | VirtualBox | pass |

[1] A few things did not come across as straightforward in the documentation for a novice AWS user. Since my AWS account was essentially 'clean', I had to figure out a few details before I could complete this test case. Things that slowed me down: (1) creating a private VPC with subnets and an appropriate IPv4 CIDR pool; (2) security group inbound rules / adding SSH access to the security group; (3) creating an Elastic IP and associating it with the instance.
[2] Provisioned via Terraform. This testing also validated: (i) corporate proxy configuration, (ii) systemd rpm-ostree layering of open-vm-tools.
[3] (1) Launched with gcloud via the GCP web console, adding an Ignition file that includes hhei's SSH public key; verified the VM works and hhei can log in via SSH. (2) Launched with gcloud from a local client with no custom instance metadata; verified the VM works and can be reached with $ gcloud compute ssh core@fcos --zone=us-central1-a.
[4] Worked fine for me: downloaded the F38 IBM Cloud QCOW2 image, created IBM Cloud resources for Cloud Object Storage using that QCOW2, created a cloud instance using the dedicated cloud resources, and successfully SSHed into the F38 cloud instance.
[5] IBM Cloud / s390x / bz2-1x4 / Fedora CoreOS 38.20230326.10.0.
[6] Tested using NAT networking.
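As a rough sketch of the GCP launches reported above (the instance name, zone, and Ignition file name are placeholders):
$ gcloud compute instances create fcos-test \
    --zone=us-central1-a \
    --image-project=fedora-coreos-cloud \
    --image-family=fedora-coreos-next \
    --metadata-from-file=user-data=config.ign
$ gcloud compute ssh core@fcos-test --zone=us-central1-a
The image family selects the latest next-stream image published for GCP, and the Ignition config is passed through the user-data metadata key, which is where FCOS looks for provisioning data on GCP.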
aarch64
Advanced configuration
Really Advanced Config
Upgrade
User | Profile | Switch stream | Bootloader updates | References
Nemric | Next 38 automatically updated from 37: Bare Metal/x86_64 | | pass [1] | [1]
brianmcarey | 38.20230322.1.0/x86_64/QEMU | pass | |
hricky | 38.20230322.1.0/x86_64/QEMU | pass | pass |
jlebon | 38.20230322.1.0/x86_64/QEMU | | pass [2] | [2]

[1] Ran the test without changing stream (was already on next). Results:
$ sudo bootupctl status
Component EFI
  Installed: grub2-efi-x64-1:2.06-29.fc36.x86_64,shim-x64-15.4-5.x86_64
  Update: Available: grub2-efi-x64-1:2.06-88.fc38.x86_64,shim-x64-15.6-2.x86_64
No components are adoptable.
CoreOS aleph image ID: fedora-coreos-36.20220410.1.1-metal.x86_64.raw
Boot method: EFI
After the update and a successful reboot:
$ sudo bootupctl status
Component EFI
  Installed: grub2-efi-x64-1:2.06-88.fc38.x86_64,shim-x64-15.6-2.x86_64
  Update: At latest version
No components are adoptable.
CoreOS aleph image ID: fedora-coreos-36.20220410.1.1-metal.x86_64.raw
Boot method: EFI
[2] Tested bootloader updates both manually and via an automated bootupd systemd service.
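The two result columns above roughly correspond to the following commands; this is a sketch, not the full test cases (the rebase target shown is the next stream on x86_64):
$ sudo rpm-ostree rebase fedora:fedora/x86_64/coreos/next   # switch stream
$ sudo systemctl reboot
$ sudo bootupctl status                                     # inspect bootloader components
$ sudo bootupctl update                                     # apply an available bootloader update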
Tutorials
User | Profile | Autologin | Systemd unit service | References
apiaseck | i7-10610U, 32GB, fcos-37.20230303.3.0-qemu.x86_64.qcow2 | pass [1] | pass [2] | [1][2]
hricky | 38.20230322.1.0/x86_64/QEMU | pass | pass |

[1] ignition-validate autologin.ign && echo 'Success!', systemctl cat serial-getty@ttyS0.service, and hostnamectl all worked as expected. systemctl status --full zincati.service also worked as expected; after the `initialization complete, auto-updates logic enabled` message it returned a client-side error: [ERROR zincati::cincinnati] failed to check Cincinnati for updates: client-side error.
[2] After I rebooted the laptop, an error popped up about virbr0 not running: ERROR /usr/libexec/qemu-bridge-helper --use-vnet --br=virbr0 --fd=28: failed to communicate with bridge helper: stderr=failed to get mtu of bridge `virbr0': No such device. I restarted the firewall and libvirt, and the test completed successfully:
$ sudo systemctl restart firewalld
$ sudo systemctl restart libvirtd
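A minimal local workflow for the tutorial test cases above looks roughly like this; the Butane file, Ignition file, and image path are placeholders, and qemu-kvm may be called qemu-system-x86_64 on your host:
$ butane --pretty --strict autologin.bu --output autologin.ign
$ ignition-validate autologin.ign && echo 'Success!'
$ qemu-kvm -m 2048 -cpu host -nographic -snapshot \
    -drive if=virtio,file=fedora-coreos-qemu.x86_64.qcow2 \
    -fw_cfg name=opt/com.coreos/config,file=autologin.ign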
Miscellaneous