Requirements
In order to use the Flexible Metadata Format effectively for CI testing we need to agree on the essential set of attributes to be used. For each attribute we need to standardize:
- Name ... unique, well chosen, possibly with a prefix
- Type ... expected content type: string, number, list, dictionary
- Purpose ... description of the attribute purpose
For names we should probably consider using a namespace prefix (such as test-description, requirement-description) to prevent future collisions with other attributes. Each attribute definition should contain at least one apt example of its usage. Or better, a set of user stories to be covered.
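For illustration only, a namespace-prefixed attribute definition might look like the following sketch (the prefixed names are hypothetical, nothing has been agreed yet):

# Hypothetical namespace-prefixed attributes (illustration only)
test-summary: Test wget recursive download options
requirement-description: |
    Multi-line description of the requirement
    which the test coverage is linked to.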
Attributes
This section lists the attributes proposed so far. Material for discussion, nothing final for now.
Summary
In order to efficiently collaborate on test maintenance it's crucial to have a short summary of what the test does.
- Name ... summary
- Type ... string (one line, up to 50 characters)
- Purpose ... concise summary of what the test does
User stories:
- As a developer reviewing 10 failed tests I would like to quickly get an idea of what my change broke.
Notes:
- Shall we recommend 50 characters or less, as is common for commit messages? Yes.
Example:
summary: Test wget recursive download options
Description
For complex tests it makes sense to provide a more detailed description to better clarify what is covered by the test.
- Name ... description
- Type ... string (multi line, plain text)
- Purpose ... detailed description of what the test does
User stories:
- As a tester I come across test code I wrote 10 years ago (so I have absolutely no idea about it) and would like to quickly understand what it does.
- As a developer I review existing test coverage for my component and would like to get an overall idea of what is covered without having to read the whole test code.
Example:
description: |
    This test checks all available wget options related to
    downloading files recursively. First a tree directory
    structure is created for testing. Then a file download
    is performed for different recursion depths specified
    by the "--level=depth" option.
Tags
Throughout the years, free-form tags have proved to be useful in many, many scenarios, primarily as an easy way to select a subset of objects.
- Name: tags
- Type: list
- Purpose: free-form tags for easy filtering
Notes:
- Tags are case-sensitive.
- Using lowercase is recommended.
User stories:
- As a developer/tester I would like to run only a subset of available tests.
Example:
tags: [Tier1, fast]
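To illustrate the filtering user story, a tree with two differently tagged tests might look like this (the test names are made up for the sketch):

/download:
    summary: Test the download functionality
    tags: [Tier1, fast]
/upload:
    summary: Test the upload functionality
    tags: [Tier2, slow]

A filter expression such as tags:fast could then select just the quick tests, using the fmf command demonstrated in the Component section below.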
Test
This is the key content attribute defining how the test is to be executed.
- Name: test
- Type: string
- Purpose: shell command which executes the test
User stories:
- As a developer/tester I want to easily execute all available tests with just one command.
- As a test writer I want to run a single test script in multiple ways (e.g. providing different parameters)
Example:
test: ./runtest.sh
Path
As the object hierarchy does not need to copy the filesystem structure (e.g. virtual test cases) we need a way to define where the test is located.
- Name: path
- Type: string
- Purpose: filesystem directory to be entered before executing the test
User stories:
- As a test writer I define two virtual test cases, both using the same script for execution. See also the Virtual Tests example.
Example:
path: wget/recursion
Environment
Test scripts might require certain environment variables to be set. Although this can be done on the shell command line as part of the Test attribute, it makes sense to have a dedicated field for this, especially when the number of variables grows. This might be useful for virtual test cases as well.
- Name: environment
- Type: dictionary
- Purpose: environment variables to be set before running the test
User stories:
- As a tester I need to pass environment variables to my test script to properly execute the desired test scenario.
- As a tester I'm using a single test script for testing different Python implementations specified by environment variable PYTHON.
Example:
environment:
    PACKAGE: python37
    PYTHON: python3.7
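For comparison, the same variables could be passed inline as part of the Test attribute mentioned above; with a growing number of variables this quickly becomes hard to read:

# Equivalent effect folded into the shell command (less readable)
test: PACKAGE=python37 PYTHON=python3.7 ./runtest.sh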
Duration
In order to prevent stuck tests consuming resources we should be able to define a maximum time for test execution.
- Name: duration
- Type: string
- Purpose: maximum time for test execution after which a running test is killed by the test harness
Notes:
- Let's use the same format as the sleep command. For example: 3m, 2h, 1d.
User stories:
- As a developer/tester I want to prevent resource wasting by stuck tests.
- As a test harness I need to know how long to wait before killing a test which is still running.
Example:
duration: 5m
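Assuming the attribute inheritance demonstrated in the Examples section below, a default duration could be set once and overridden only for long-running tests. A hypothetical sketch:

duration: 5m            # default inherited by all tests
/smoke:
    test: ./smoke.sh    # killed after 5 minutes if stuck
/stress:
    test: ./stress.sh
    duration: 2h        # long-running test overrides the default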
Relevancy
Sometimes a test case is only relevant for a specific environment. Test Case Relevancy allows filtering out irrelevant test cases.
- Name: relevancy
- Type: list
- Purpose: Test Case Relevancy rules used for filtering relevant test cases for a given environment.
User stories:
- As a tester I want to skip execution of a particular test case in given test environment.
Notes:
- Environment is defined by one or more environment dimensions such as product, distro, collection, variant, arch, component.
- Relevancy consists of a set of rules of the form condition: decision.
- For more details see the Test Case Relevancy documentation.
Example:
relevancy: - "distro < f-28: False" - "distro = rhel-7 & arch = ppc64: False"
Contact
When there are several people collaborating on tests it's useful to have a way to find out who is responsible for what.
- Name ... contact
- Type ... string (name with email address)
- Purpose ... person maintaining the test
User stories:
- As a developer reviewing a complex failed test I would like to contact the person who maintains the code and understands it well.
Example:
contact: Name Surname <email@address.org>
Component
It's useful to be able to easily select all tests relevant for a given component or package. As tests do not always have to be stored in the same repository, and because many tests cover multiple components, a dedicated field is needed.
- Name ... component
- Type ... list of strings
- Purpose ... relevant Fedora/RHEL source package names for which the test should be executed
User stories:
- As a SELinux tester testing the checkpolicy component I want to run Tier1 tests for all SELinux components plus all checkpolicy tests.
Example:
component: [libselinux, checkpolicy]
Notes:
The following fmf command can be used to select the test set described by the user story above:
fmf --key test --filter 'tags:Tier1 | component:checkpolicy'
Tier
It's quite common to organize tests into "tiers" based on their importance, stability, duration and other aspects. Tags have often been used for this as there was no corresponding attribute available. It might make sense to have a dedicated field for this functionality as well.
- Name ... tier
- Type ... string
- Purpose ... name of the tier set this test belongs to
User stories:
- As a tester testing a security advisory I want to run the stable set of important tests which cover the most essential functionality and can provide test results in a short time.
Example:
tier: 1
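For illustration, a tree mixing tests of different importance might then look like this (hypothetical tests, assuming the dedicated attribute replaces tags such as Tier1):

/essential:
    summary: Essential functionality check
    tier: 1
/extended:
    summary: Extended regression coverage
    tier: 2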
Provision
In some cases tests have special requirements for the environment in order to run successfully. For now just simple qemu options for the standard-inventory-qcow2 provisioner are supported.
- Name ... provision
- Type ... dictionary
- Purpose ... set of environment requirements
User stories:
- As a tester I want to specify the amount of memory which needs to be available for the test.
- As a tester I want to specify the network interface card to be used in qemu.
Example:
provision:
    standard-inventory-qcow2:
        qemu:
            m: 3G
            net_nic:
                model: e1000
Memory size is specified in megabytes, optionally with a suffix of "M" or "G". Use qemu-system-x86_64 -net nic,model=help to get a list of available devices. See also the real-life usage example.
Examples
Below you can find some basic examples using the metadata defined above. A separate Examples page illustrates integration with Standard Test Roles on some real-life components.
BeakerLib Tests
Three beakerlib tests, each in its own directory:
main.fmf
test: ./runtest.sh

/one:
    path: tests/one
/two:
    path: tests/two
/three:
    path: tests/three
fmf
tests/one
path: tests/one
test: ./runtest.sh

tests/two
path: tests/two
test: ./runtest.sh

tests/three
path: tests/three
test: ./runtest.sh
Three Scripts
Three different scripts residing in a single directory:
main.fmf
path: tests

/one:
    test: ./one
/two:
    test: ./two
/three:
    test: ./three
fmf
tests/one
path: tests
test: ./one

tests/two
path: tests
test: ./two

tests/three
path: tests
test: ./three
Virtual Tests
Three virtual test cases based on a single test script:
main.fmf
path: tests/virtual

/one:
    test: ./script --one
/two:
    test: ./script --two
/three:
    test: ./script --three
fmf
tests/one
path: tests/virtual
test: ./script --one

tests/two
path: tests/virtual
test: ./script --two

tests/three
path: tests/virtual
test: ./script --three