From Fedora Project Wiki
 
Latest revision as of 01:00, 3 October 2014

History

  • Created by Roshi on 23 April 2014
  • Edited by Roshi on 02 October 2014

Introduction

With the advent of different Fedora Products for Fedora 21, there is a need for a specific test plan for each product. Historically, release validation and all testing of Fedora releases were handled by QA, but now testing and release validation will be largely handled by the specific Working Group.

The goals of this plan are to:

  • Organize the test effort
  • Communicate the planned tests to all relevant stakeholders for their input and approval
  • Serve as a base for the test planning for future Fedora Cloud product releases

Test Strategy

Testing and Release Validation for the Cloud product will follow the same pattern as Fedora 20. There will be three milestones; each milestone will have a related set of Release Criteria and a Validation Matrix for the release.

Schedule/Milestones

  • Alpha
  • Beta
  • Final

Test Priority

This test plan prioritizes tests according to the major release milestones for Fedora 21, including the Alpha, Beta and Final release milestones. All test cases are intended for execution at every milestone. However, priority should be given to tests specific to the milestone under test.

Alpha test cases

Alpha priority tests are intended to verify that booting the image is possible on common cloud providers (EC2, OpenShift). These tests also attempt to validate the Alpha Release Requirements. Verification consists of:

  • Does the image boot on supported platforms?
  • Does the image properly utilize basic metadata (ssh-key, etc.)?
  • Can you ssh into the booted image?

Beta test cases

Beta priority tests take a step further to include additional use cases. These tests also attempt to validate the Beta Release Requirements. Verification consists of:

  • Does yum update the image properly?
  • Can the booted image be rebooted?

Final test cases

Final priority tests capture all remaining use cases and functionality checks. These tests also attempt to validate the Final Release Requirements. Verification consists of:

  • Can an in-use cloud image be upgraded to a Fedora Server role?
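The per-milestone checks above lend themselves to a small shell harness. The sketch below is illustrative only and not part of the plan: the INSTANCE_IP variable and the fedora login user are assumptions about how a tester would reach a booted image, and leaving INSTANCE_IP unset skips every check.

```shell
#!/bin/sh
# Minimal sketch of the milestone verification steps, assuming a booted
# image reachable over ssh. INSTANCE_IP and the "fedora" user are
# hypothetical; when INSTANCE_IP is unset, every check is skipped.
check() {
  name="$1"; shift
  if [ -z "$INSTANCE_IP" ]; then
    echo "SKIP: $name (no INSTANCE_IP set)"
  elif "$@"; then
    echo "PASS: $name"
  else
    echo "FAIL: $name"
  fi
}

run_ssh() {
  # BatchMode refuses password prompts, so a missing ssh-key fails fast.
  ssh -o BatchMode=yes -o ConnectTimeout=10 "fedora@$INSTANCE_IP" "$@"
}

# Alpha: boot, metadata (injected key), and ssh access
check "ssh login with injected key" run_ssh true
# Beta: package updates on the booted image
check "yum update runs cleanly" run_ssh sudo yum -y update
# Final: the Server-role upgrade check would go here once the
# exact tooling is settled in the plan.
```

A failed PASS/FAIL line maps directly back to one of the verification questions above, which keeps result reporting per milestone straightforward.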

Test Pass/Fail Criteria

The milestone release of Fedora Cloud should conform to these criteria:

Scope and Approach

For the cloud image, the following areas will be tested:

  • Boot process
  • Initialization
  • Post-Boot actions
  • Virtualization

In addition to those aspects of the image being tested, any bugs marked as a blocker for the current milestone must be addressed and tested.

Testing will include:

  • Manually executed test cases
  • Automatically executed test cases (exact test cases to come at a later date)

Test Deliverables

  • This test plan
  • Test summary documents for each major milestone
  • A list of defects filed
  • Any test scripts used for automation or verification

Testing Tasks

Testing will include test cases to verify that the cloud image functions as intended.

  • Link to Test Matrix (to be written)
  • Automated testing
  • Image works with major cloud providers

Test Environment/Configs

Test cases will be executed on the primary supported hardware platforms. This includes:

  • i386
  • x86_64

Responsibilities

Cloud WG team members are responsible for executing this test plan. Contributions from QA testers and other interested parties are encouraged.

Risks and Contingencies

If new images are provided for an already in-progress test run, a new test run must be initiated. Test results from the previous run may be carried forward to the new run if they are not affected by the changes introduced in the freshly generated image.
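That carry-forward rule can be sketched as a filter. The file names and line format below are hypothetical, not from the plan: previous results are assumed to be kept one per line as "<area> <result>", and the areas affected by the new image one per line in a second file.

```shell
#!/bin/sh
# Hypothetical carry-forward filter: keep previous results whose test
# area is NOT listed among the areas changed by the new image.
carry_forward() {
  # $1: previous results file ("<area> <result>" per line)
  # $2: changed-areas file (one area name per line)
  while read -r area result; do
    # -x matches whole lines, -F treats the area as a fixed string
    if ! grep -q -x -F "$area" "$2"; then
      echo "$area $result"
    fi
  done < "$1"
}
```

Results filtered out this way belong to areas the new image touched and must be re-executed in the new test run.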

Reporting Bugs and Debugging Problems

If defects or problems are encountered, file bugs following the guide below:

Reviewers

References