Introduction
Here's some info on writing tests for AutoQA. There are three parts to a test: the test code, the control file, and the test object. Typically all three live in a single directory, located in the tests/ dir of the autoqa source tree.
Write test code first
I'll say it again: Write the test first. The tests don't require anything from autotest or autoqa. You should have a working test before you even start thinking about AutoQA.
You can package up pre-existing tests or you can write a new test in whatever language you're comfortable with. It doesn't even need to return a meaningful exit code if you don't want it to. You'll handle parsing the output and returning a useful result in the test object.
If you are writing a brand new test, there are some python libraries that have been developed for use in existing AutoQA tests. More information about this will be available once these libraries are packaged correctly, but they are not necessary to write your own tests. You can choose to use whatever language and libraries you want.
The test directory
Create a new directory to hold your test. The directory name will be used as the test name, and the test object name should match that. Choose a name that doesn't use spaces, dashes, or dots. Underscores are fine.
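The naming rule can be captured in a small check. This helper is not part of AutoQA; it's just an illustration of the allowed character set:

```python
import re

def valid_test_name(name):
    """Return True if name uses only letters, digits, and underscores
    (no spaces, dashes, or dots), per the naming rule above."""
    return re.match(r'^[A-Za-z0-9_]+$', name) is not None
```

For example, `valid_test_name('rpm_conflicts')` is True, while `valid_test_name('rpm-conflicts')` and `valid_test_name('test.v2')` are False.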
Drop your test code into the directory - it can be a bunch of scripts, a tarball of sources that may need compiling, whatever.
Next, copy control.template and test_class_template.py from the appropriate hook into your test dir. Rename them to control and [testname].py.
The control file
The control file defines the metadata for this test - who wrote it, what kind of a test it is, what test arguments it uses from AutoQA, and so on. Here's an example control file:
```python
# control file for the conflicts test
AUTHOR = "Will Woods <wwoods@redhat.com>"
TIME = "SHORT"
NAME = 'conflicts'
DOC = """
This test runs potential_conflict from yum-utils to check for possible
file / package conflicts.
"""
TEST_TYPE = 'CLIENT'
TEST_CLASS = 'General'
TEST_CATEGORY = 'Functional'

job.run_test('conflicts', baseurl=url, parents=parents,
             reponame=reponame, config=autoqa_conf)
```
As mentioned above, each hook should contain a file called control.template, which you can use as the template for the control file for your new test.
Required data
The following control file items are required for valid AutoQA tests:
- AUTHOR: Your name and email address.
- TIME: either 'SHORT', 'MEDIUM', or 'LONG'. This defines the expected runtime of the test: roughly 15 minutes or less, less than 4 hours, or more than 4 hours, respectively.
- NAME: The name of the test. Should match the test directory name, the test object name, etc.
- DOC: A verbose description of the test - its purpose, the logs and data it will generate, and so on.
- TEST_TYPE: either 'CLIENT' or 'SERVER'. Use 'CLIENT' unless your test requires multiple machines (e.g. a client and server for network-based testing).
- TEST_CLASS: This is used to group tests in the UI. 'General' is fine. We may use this field to refer to the test hook in the future.
- TEST_CATEGORY: This defines the category your test is a part of - usually this describes the general type of test it is. Examples include Functional, Stress, Performance, and Regression.
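Since control files are executed as ordinary Python, a quick sanity check for the required items can be sketched like this. The checker is hypothetical (not part of AutoQA), and it only works on the metadata portion of a control file - exec'ing a full control file would also require autotest names like job to be defined:

```python
REQUIRED_ITEMS = ('AUTHOR', 'TIME', 'NAME', 'DOC',
                  'TEST_TYPE', 'TEST_CLASS', 'TEST_CATEGORY')

def missing_items(control_source):
    """Exec a control-file fragment and report any required items it lacks."""
    namespace = {}
    exec(control_source, namespace)   # control files are plain Python
    return [item for item in REQUIRED_ITEMS if item not in namespace]
```

Feeding it a fragment that omits TEST_CLASS and TEST_CATEGORY returns exactly those two names.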
Optional data
```python
DEPENDENCIES = 'POWER, CONSOLE'
SYNC_COUNT = 1
```
- DEPENDENCIES: Comma-separated list of hardware requirements for the test. Currently unsupported.
- SYNC_COUNT: The number of hosts to set up and synchronize for this test. Only relevant for SERVER-type tests that need to run on multiple machines.
Launching the test object
Most simple tests will have a line in the control file like this:
```python
job.run_test('conflicts', baseurl=url, treename=treename, config=autoqa_conf)
```
This will create a 'conflicts' test object (see below) and pass along the given variables.
The test hook defines what variables will be provided. The control file template should list these variables for you, and the template's example run_test() line should already include them.
Those variables will be inserted into the control file by the autoqa test harness when it's time to schedule the test.
Control files are python scripts
The control file is actually interpreted as a Python script. So you can do any of the normal pythonic things you might want to do, but in general it's best to keep the control file as simple as possible and put all the complicated bits into the test object or the test itself.
Before it reads the control file, Autotest imports all the symbols from the autotest_lib.client.bin.utils module.[1] This means the control files can use any function defined in common_lib.utils or bin.base_utils[2]. This lets you do things like:
```python
arch = get_arch()
baseurl = '%s/development/%s/os/' % (mirror_baseurl, arch)
job.run_test('some_rawhide_test', arch=arch, baseurl=baseurl)
```
since get_arch is defined in common_lib.utils.
Test Objects
The test object is a python file that defines an object that represents your test. It handles the setup for the test (installing packages, modifying services, etc), running the test code, and sending results to Autotest (and other places).
Convention holds that the test object file - and the object itself - should have the same name as the test. For example, the conflicts test contains a file named conflicts.py, which defines a conflicts class, as follows:
```python
from autotest_lib.client.bin import test, utils
from autotest_lib.client.bin.test_config import config_loader

class conflicts(test.test):
    ...
```
The name of the class must match the name given in the run_test() line of the control file, and test classes must be subclasses of the autotest test.test class. But don't worry too much about how this works - each hook should contain a test_class_template.py with the skeleton of an appropriate test object for that hook, complete with the usual setup code used by AutoQA tests. Just change the name of the file (and class!) to something appropriate for your test.
initialize()
This is an optional method of the test class. It does any pre-test initialization that needs to happen. AutoQA tests typically use this method to parse the autoqa config data passed from the server:
```python
def initialize(self, config):
    self.config = config_loader(config, self.tmpdir)
```
Check out autoqa.conf to see what data this variable would hold.
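If you just want a feel for the shape of that data: the config is INI-style, so outside of autotest you can poke at an equivalent snippet with Python 3's standard configparser. The section and option names below are made up for illustration; see autoqa.conf for the real ones:

```python
from configparser import ConfigParser

# A made-up INI fragment standing in for autoqa.conf content
sample = """
[general]
mailto = someone@example.com
local = false
"""

cfg = ConfigParser()
cfg.read_string(sample)
print(cfg.get('general', 'mailto'))
```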
setup()
This is another optional method of the test class. This is where you make sure that any required packages are installed, services are started, your test code is compiled, and so on. For example:
```python
def setup(self):
    utils.system('yum -y install httpd')
    # ignore_status=True so a stopped service doesn't raise CmdError here;
    # we want to inspect the exit code ourselves
    if utils.system('service httpd status', ignore_status=True) != 0:
        utils.system('service httpd start')
```
run_once()
This is where the test code actually gets run. It's the only required method for your test object.
In short, this method should build the argument list and run the test binary, like so:
```python
def run_once(self, baseurl, parents, reponame):
    os.chdir(self.bindir)
    cmd = "./sanity.py --scratchdir %s --logdir %s" % (self.tmpdir, self.resultsdir)
    cmd += " %s" % baseurl
    # ignore_status=True: check the exit code ourselves instead of letting
    # a nonzero status raise error.CmdError (which would mean ERROR, not FAIL)
    retval = utils.system(cmd, ignore_status=True)
    if retval != 0:
        raise error.TestFail
```
See the section on test object attributes for information about self.bindir, self.tmpdir, etc. Also see Getting proper test results for more information about getting results from your tests.
postprocess_iteration()
This method can be used to gather extra data from the test output - detailed failure info, performance numbers, and so on. For example:
```python
def run_once(self, testtype):
    cmd = './transfer-test --some --flags --testtype=%s' % testtype
    self.output = utils.system_output(cmd, retain_output=True)

def postprocess_iteration(self):
    keyval = {}
    for line in self.output.splitlines():
        if line.startswith('Max transfer speed: '):
            (dummy, max_speed) = line.split('speed: ')
            keyval['max_speed'] = max_speed
    self.write_test_keyval(keyval)
```
(See Returning extra data for details about write_test_keyval.)
This method will be run after each iteration of run_once(), but note that it gets no arguments passed in. Any data you want from the test run needs to be saved into the test object - hence the use of self.output to hold the output of the command.
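The parsing step in postprocess_iteration() is ordinary string work and can be exercised outside autotest. A standalone version of the loop above might look like this (the 'Max transfer speed:' prefix is just the example's made-up output format):

```python
def extract_keyvals(output):
    """Pull 'Max transfer speed: <value>' lines out of captured test output."""
    keyval = {}
    for line in output.splitlines():
        if line.startswith('Max transfer speed: '):
            keyval['max_speed'] = line.split('speed: ')[1]
    return keyval
```

Given output containing the line `Max transfer speed: 42.5`, this returns `{'max_speed': '42.5'}`.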
Useful test object attributes
test objects have the following attributes available[3]:
```
outputdir   eg. results/<job>/<testname.tag>
resultsdir  eg. results/<job>/<testname.tag>/results
profdir     eg. results/<job>/<testname.tag>/profiling
debugdir    eg. results/<job>/<testname.tag>/debug
bindir      eg. tests/<test>
src         eg. tests/<test>/src
tmpdir      eg. tmp/<tempname>_<testname.tag>
```
Getting proper test results
First, the basic rule for test results: if your run_once() method does not raise an exception, the test result will be PASS. If it raises error.TestFail or error.TestWarn, the test result is FAIL or WARN, respectively. Any other exception yields an ERROR result.
For simple tests you can just run the test binary like this:
```python
self.results = utils.system_output(cmd, retain_output=True)
```
If cmd is successful (i.e. it returns an exit status of 0) then utils.system_output() will return the output of the command. Otherwise it will raise error.CmdError, which will immediately end the test with an ERROR result. If you want to FAIL the test instead, try this:
```python
testfail = False
try:
    # Add "2>&1" to cmd to include stderr in output
    out = utils.system_output(cmd + " 2>&1", retain_output=True)
except error.CmdError, e:
    testfail = True
    out = e.result_obj.stdout
# Do other post-testing stuff here, and then...
if testfail:
    raise error.TestFail
```
Some tests don't return a useful exit status - they always return 0 - so you'll need to inspect their output to decide whether they passed or failed. That would look more like this:
```python
output = utils.system_output(cmd, retain_output=True)
if 'FAILED' in output:
    raise error.TestFail
elif 'WARNING' in output:
    raise error.TestWarn
```
Log files and scratch data
Any files written to self.resultsdir will be saved at the end of the test. Anything written to self.tmpdir will be discarded.
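One practical consequence: write anything you want to keep under self.resultsdir, and use self.tmpdir only for scratch files. Outside autotest the same pattern looks like this - the temporary directory here is just a stand-in for self.resultsdir, and write_log is a hypothetical helper:

```python
import os
import tempfile

def write_log(resultsdir, name, text):
    """Save a log file into the results directory so it survives the test."""
    path = os.path.join(resultsdir, name)
    with open(path, 'w') as f:
        f.write(text)
    return path

resultsdir = tempfile.mkdtemp()   # stand-in for self.resultsdir
logpath = write_log(resultsdir, 'mytest.log', 'all checks passed\n')
```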
Returning extra data
Further test-level info can be returned by using test.write_test_keyval(dict):
```python
extrainfo = dict()
for line in self.results.splitlines():
    if line.startswith("kernel version "):
        extrainfo['kernelver'] = line.split()[2]
    ...
self.write_test_keyval(extrainfo)
```
- For per-iteration data (performance numbers, etc) there are three methods:
  - Just attr: test.write_attr_keyval(attr_dict) - test attributes are limited to 100 characters.[4]
  - Just perf: test.write_perf_keyval(perf_dict) - performance values must be floating-point numbers.
  - Both: test.write_iteration_keyval(attr_dict, perf_dict)
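The constraints on the two dicts (short string attributes, floating-point performance values) can be made concrete with a small builder. The helper and its key names are invented for illustration:

```python
def make_iteration_keyvals(note, throughput):
    """Build the attr/perf dicts for write_iteration_keyval().

    Test attributes must be short strings (limited to 100 characters);
    performance values must be floating-point numbers.
    """
    attr = {'note': str(note)[:100]}          # truncate to the 100-char limit
    perf = {'throughput': float(throughput)}  # coerce to float
    return attr, perf
```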
References
- [1] http://autotest.kernel.org/browser/branches/0.10.1/client/bin/job.py#L19
- [2] http://autotest.kernel.org/browser/branches/0.10.1/client/bin/utils.py
- [3] http://autotest.kernel.org/browser/branches/0.10.1/client/common_lib/test.py#L9
- [4] http://autotest.kernel.org/browser/branches/0.10.1/tko/migrations/001_initial_db.py#L114
External Links
From the upstream Autotest wiki: