
Introduction

The QA group coordinates release validation test events before each Fedora pre-release and final release is made, to ensure they meet the Fedora_Release_Criteria. Testing covers the installation process and desktop functionality. This page provides guidance on arranging such a release validation test event. For any concerns, please contact the QA group.

Look up the date

The validation test event is held one or two weeks before the release date for any given milestone. See the quality task schedule for the specific dates, listed as Test 'Version' Test Compose or Test 'Version' Candidate. Then make the following preparations for that event.

Track image creation tickets

Find the relevant image creation ticket in the tickets list and add yourself to the CC. Then keep tracking this ticket until the images are available. If the images are not posted on time as scheduled, or have a critical bug, inform the rel-eng team by adding a comment to the ticket.

Create test result page

Test result pages are needed to gather installation and desktop test results against Fedora release candidate builds. Refer to QA:Create_Install_Test_Result_Page for guidance on creating an installation result page using QA:Fedora 42_Install_Results_Template. In a similar way, create a desktop result page using QA:Desktop_validation_results_template.
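
If it helps to inspect the template source before creating a page, a rough sketch in the same style as the commands later on this page is to fetch its raw wikitext (the template title here is taken from the paragraph above and may differ for your release):

curl --stderr /dev/null \
   "https://fedoraproject.org/w/index.php?title=QA:Fedora_42_Install_Results_Template&action=raw" \
   > install_results_draft.txt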

Please note that you can adapt the result pages to the practical situation, for example by migrating relevant parts of previous results to the new result page, highlighting critical cases, adding important notes, and so on.
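
To see which results from a previous event might be worth carrying over, one option (a sketch only; the page title below is just an example) is to list the {{result|...}} entries recorded on the old page:

curl --stderr /dev/null \
   "https://fedoraproject.org/w/index.php?title=Test_Results:Fedora_14_Final_RC1_Install&action=raw" \
   | grep -o "{{result[^}]*}}"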

Change current links and IRC topic

One or two days before the test event, update the following redirect links and make sure they point to the pages you created:

Also change the topic of #fedora-qa (freenode) to the current test event (if you have permission to do so).
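
The redirect pages mentioned above are ordinary wiki pages whose entire content is a single #REDIRECT line, so (to sketch it with page names that appear elsewhere on this page) pointing Test_Results:Current_Installation_Test at the new install results page means editing it to read:

#REDIRECT [[Test Results:Fedora 14 Final RC1 Install]]

Similarly, if your IRC client uses the standard /topic command and you have operator rights, the channel topic can be set with something like:

/topic #fedora-qa Fedora 14 Final RC1 validation testing in progress - results: https://fedoraproject.org/wiki/Test_Results:Current_Installation_Test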

Make announcement

Currently, we announce validation test events on the test-announce mailing list. The announcement mail should provide enough information about the test focus areas. See an example; it generally includes:

  • An introduction to the test event (date, what to test, which release criteria must be met, etc.)
  • How and where to add test results
  • Contact information for QA members who are available during the test event and can help testers who encounter problems
  • Anything else that should be emphasized

Download links should not be included in the announcement, since in general there are several download options, and these are already documented on the install and desktop test pages. The announcement should direct people to those pages for download instructions.
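
For reference, here is a minimal sketch of such an announcement (the milestone, dates, and contact are placeholders, not a prescribed format):

   Subject: Fedora 14 Final RC1 install and desktop validation test event

   The RC1 images are now available. Testing runs from <start date> to
   <end date> against the Fedora_Release_Criteria for the Final release.
   Please record your results on the current install and desktop result
   pages, for example:
      https://fedoraproject.org/wiki/Test_Results:Current_Installation_Test
   Download instructions are on the result pages themselves. <QA member>
   will be in #fedora-qa to help anyone who runs into problems.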

Please announce this event on the mailing list at least one or two days in advance. This should give testers sufficient time to arrange their calendars and prepare a test environment. It is also a good idea to send a reminder e-mail the day before the test. Try to take timezones into account, to maximize convenience for testers from different regions or countries.

Provide help during test event

During the test event, many people will participate, including experienced users and newcomers. Make sure the QA folks whose contact information was included in the announcement are available during the testing period; they will provide assistance to anyone who encounters issues. QA people should be available at:

When a new candidate is available

If a new candidate is available for testing, such as TC2, TC3, or RC3, you need to send a new announcement for it, like the example, and it is better to explain the differences and fixes between this version and the previous one. Then create installation and desktop test result pages for it and repeat the above steps.

Report and Summary

After testing has completed, send a test summary to the test-announce mailing list. The test summary is intended to keep testers informed about what their testing accomplished and what tasks remain, and to recognize key contributors. A sample report is available at http://lists.fedoraproject.org/pipermail/test-announce/2010-October/000164.html.

The test summary should include the following information:

Test status
A summary of what has been tested, what remains to be accomplished, what issues came up during testing, and whether the testing aims were achieved
Bugs reported
List the bugs filed and issues reported during testing, including bug ID, status, summary, etc.
The following command can be used to generate a list of bugs:
# Pull the current install results page, extract the Bugzilla bug IDs from the
# links on it, and query Bugzilla for each bug's status and summary.
curl --stderr /dev/null "https://fedoraproject.org/wiki/Test_Results:Current_Installation_Test" \
   | grep -o "bugzilla\.redhat\.com.*[=\/][0-9]\{6,\}" \
   | grep -o "[0-9]\{6,\}" | tr '\n' ',' | sed 's|,$||' \
   | xargs bugzilla query --outputformat="%{bug_id} %{bug_status} %{resolution} - %{short_desc}" -b
Got python-bugzilla?
This command requires the python-bugzilla package to be installed. You may also need to weed out any 'false' references to pre-existing bugs that are mentioned on the page for unrelated reasons.
Analyze results and assess risk
Analyze the results and provide an assessment of where additional risk areas may exist
Thanks to all testers
Testers have volunteered their time; it never hurts to be thankful for their contributions.
To generate a list of testers who contributed to the wiki, you can use the following command:
# Pull the raw wikitext of a results page, pick out each {{result|...}}
# template from the test matrix, and count how many results each tester filed.
curl --stderr /dev/null "https://fedoraproject.org/w/index.php?title=Test_Results:Fedora_14_Final_RC1_Install&action=raw" \
   | grep -A999999 "^=.*Test Matrix.*=$" \
   | sed -e "s|{{\(result[^}]*\)}}|\n\1 |g" | grep "^result" \
   | gawk 'BEGIN{FS="[| ]"} $3{print $3} !$3{print "UNTESTED"}' \
   | sort | uniq -c | sort -bgr
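
The output is a count of recorded results per tester (with UNTESTED marking matrix cells nobody has claimed), sorted in descending order; replace the page title in the URL with the result page for the current event.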