{{draft}}

== Introduction ==
Here's some info on writing tests for [[AutoQA]]. There are three parts to a test: the test code, the control file, and the test object. Typically they all live in a single directory, located in [http://git.fedorahosted.org/git/?p=autoqa.git;a=tree;f=tests the <code>tests/</code> dir of the autoqa source tree].
== Write test code ==
You should have a working test before you even start thinking about [[AutoQA]]. You can package up pre-existing tests or you can write a new test in whatever language you're comfortable with. It doesn't even need to return a meaningful exit code if you don't want it to. You'll handle parsing the output and returning a useful result in the test object.

If you ''are'' writing a brand new test, there are some python libraries that have been developed for use in existing [[AutoQA]] tests. ''More information about this will be available once these libraries are packaged correctly.''
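For example, a brand new test could start life as nothing more than a script that prints what it found. Everything here (the file name, the <code>.broken</code> naming convention) is made up for illustration - the point is that the script itself never has to compute a pass/fail verdict:

```python
#!/usr/bin/env python
# minimal_check.py - a hypothetical standalone test script. It prints
# human-readable results and does not bother with a meaningful exit code;
# the test object is what later turns this output into PASS/FAIL.
import sys

def check(files):
    problems = [f for f in files if f.endswith('.broken')]
    for f in problems:
        print('FAILED: %s' % f)
    if not problems:
        print('OK: %d files checked' % len(files))

if __name__ == '__main__':
    check(sys.argv[1:])
```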
== The test directory ==
Create a new directory to hold your test. The directory name will be used as the test name, and the test object name should match that. Choose a name that doesn't use spaces, dashes, or dots. Underscores are fine.

Drop your test code into the directory - it can be a bunch of scripts, a tarball of sources that may need compiling, whatever.

Next, copy <code>control.template</code> and <code>test_class_template.py</code> from the appropriate hook into your test dir. Rename them to <code>control</code> and <code>[testname].py</code>.
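Put together, those steps look roughly like this. The test name <code>my_test</code> and the hook directory name are placeholders, and the scaffolding block at the top just fakes the files so the sketch is self-contained:

```python
# Sketch of the steps above using the standard library; "my_test" and the
# hook directory "post-koji-build" are placeholder names for this sketch.
import os
import shutil
import tempfile

# Scaffolding: work in a scratch dir and fake the inputs we'd normally have
# (the test code plus a hook's template files).
os.chdir(tempfile.mkdtemp())
os.makedirs('hooks/post-koji-build')
os.mkdir('tests')
for fake in ('my_test.sh',
             'hooks/post-koji-build/control.template',
             'hooks/post-koji-build/test_class_template.py'):
    open(fake, 'w').close()

# The actual steps: create the test dir, drop in the test code,
# then copy and rename the hook's templates.
os.mkdir('tests/my_test')
shutil.copy('my_test.sh', 'tests/my_test/')
shutil.copy('hooks/post-koji-build/control.template', 'tests/my_test/control')
shutil.copy('hooks/post-koji-build/test_class_template.py',
            'tests/my_test/my_test.py')
```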
== The control file ==
The control file defines the metadata for this test - who wrote it, what kind of a test it is, what test arguments it uses from [[AutoQA]], and so on. Here's an example control file:
=== control file for conflicts test ===
<pre>
AUTHOR = "Will Woods <wwoods@redhat.com>"
TIME = "SHORT"
NAME = 'conflict'
DOC = """
This test runs potential_conflict from yum-utils to check for possible
file / package conflicts.
"""
TEST_TYPE = 'CLIENT'
TEST_CLASS = 'General'
TEST_CATEGORY = 'Functional'

job.run_test('conflicts', baseurl=url, parents=parents,
             reponame=reponame, config=autoqa_conf)
</pre>
As mentioned above, each hook should contain a file called <code>control.template</code>, which you can use as the template for the control file for your new test.

=== Required data ===
The following control file items are required for valid AutoQA tests:
* '''AUTHOR''': Your name and email address.
* '''TIME''': either 'SHORT', 'MEDIUM', or 'LONG'. This defines the expected runtime of the test: under 15 minutes, under 4 hours, or more than 4 hours, respectively.
* '''NAME''': The name of the test. Should match the test directory name, the test object name, etc.
* '''DOC''': A verbose description of the test - its purpose, the logs and data it will generate, and so on.
* '''TEST_TYPE''': either 'CLIENT' or 'SERVER'. Use 'CLIENT' unless your test requires multiple machines (e.g. a client and server for network-based testing).
* '''TEST_CLASS''': This is used to group tests in the UI. 'General' is fine. We may use this field to refer to the test hook in the future.
* '''TEST_CATEGORY''': This defines the category your test is a part of - usually this describes the general type of test it is. Examples include ''Functional'', ''Stress'', ''Performance'', and ''Regression''.
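Since control files are plain python (see below), one quick way to confirm that a control file defines all of these is to exec it in a scratch namespace with a dummy <code>job</code> object. This checker is only an illustration, not part of AutoQA:

```python
# Illustrative sanity check for the required control-file items; the
# REQUIRED list mirrors the bullets above. Not part of AutoQA itself.
REQUIRED = ['AUTHOR', 'TIME', 'NAME', 'DOC',
            'TEST_TYPE', 'TEST_CLASS', 'TEST_CATEGORY']

class FakeJob(object):
    """Dummy stand-in so a control file's job.run_test() call is a no-op."""
    def run_test(self, *args, **kwargs):
        pass

def missing_control_items(control_text):
    """Exec the control file text and report required items it never set."""
    ns = {'job': FakeJob()}
    exec(control_text, ns)
    return [item for item in REQUIRED if item not in ns]
```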
=== Optional data ===
<pre>
DEPENDENCIES = 'POWER, CONSOLE'
SYNC_COUNT = 1
</pre>
* '''DEPENDENCIES''': Comma-separated list of hardware requirements for the test. Currently unsupported.
* '''SYNC_COUNT''': The number of hosts to set up and synchronize for this test. Only relevant for SERVER-type tests that need to run on multiple machines.
=== Launching the test object ===
Most simple tests will have a line in the control file like this:
<pre>job.run_test('conflicts', baseurl=url, treename=treename, config=autoqa_conf)</pre>
This will create a 'conflicts' ''test object'' (see below) and pass along the given variables.
The test hook defines what variables will be provided. The control file template should list these variables for you, and the template's example <code>run_test()</code> line should already include them. Those variables will be inserted into the control file by the autoqa test harness when it's time to schedule the test.
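One way to picture that last step: the harness takes the control file text and prepends concrete values for the promised variables before handing the whole thing to autotest. The code below is only a mental model of that, not the real harness:

```python
# Mental model of variable insertion, not the actual autoqa code: prepend
# "name = value" lines for each promised variable to the control file text.
def fill_control(control_text, **variables):
    header = '\n'.join('%s = %r' % (k, v)
                       for k, v in sorted(variables.items()))
    return header + '\n' + control_text

control = fill_control(
    "job.run_test('conflicts', baseurl=url, treename=treename, config=autoqa_conf)",
    url='http://example.com/repo/',
    treename='rawhide',
    autoqa_conf='[test]\n')
```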
=== Control files are python scripts ===
The control file is actually interpreted as a python script. So you can do any of the normal pythonic things you might want to do, but in general it's best to keep the control file as simple as possible and put all the complicated bits into the test object or the test itself.

Before it reads the control file, Autotest imports all the symbols from the <code>autotest_lib.client.bin.util</code> module. This means the control files can use any function defined in <code>common_lib.utils</code> or <code>bin.base_utils</code>. This lets you do things like:
<pre>
arch = get_arch()
baseurl = '%s/development/%s/os/' % (mirror_baseurl, arch)
job.run_test('some_rawhide_test', arch=arch, baseurl=baseurl)
</pre>
since <code>get_arch()</code> is defined in <code>common_lib.utils</code>.
== Test Objects ==
The test object is a python file that defines an object that represents your test. It handles the setup for the test (installing packages, modifying services, etc), running the test code, and sending results to [[Autotest]] (and other places).

Convention holds that the test object file - and the object itself - should have the same name as the test. For example, the <code>conflicts</code> test contains a file named <code>conflicts.py</code>, which defines a <code>conflicts</code> class, as follows:
<pre>
from autotest_lib.client.bin import test, utils
from autotest_lib.client.bin.test_config import config_loader

class conflicts(test.test):
    ...
</pre>
All test objects must be subclasses of the autotest <code>test.test</code> class. But don't worry too much about how this works - each hook should contain a <code>test_class_template.py</code> that contains the skeleton of an appropriate test object for that hook, complete with the usual setup code used by AutoQA tests.
=== initialize() ===
This is an optional method of the test class. It does any pre-test initialization that needs to happen. AutoQA tests typically use this method to parse the autoqa config data passed from the server:
<pre>
def initialize(self, config):
    self.config = config_loader(config, self.tmpdir)
</pre>
=== setup() ===
This is another optional method of the test class. This is where you make sure that any required packages are installed, services are started, your test code is compiled, and so on. For example:
<pre>
def setup(self):
    utils.system('yum -y install httpd')
    # ignore_status=True: we want the exit code here, not an exception
    if utils.system('service httpd status', ignore_status=True) != 0:
        utils.system('service httpd start')
</pre>
=== run_once() ===
This is the only required method. This is where the test code actually gets run.

'''TODO FINISH'''
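Until this section is finished, here is the general shape of a <code>run_once()</code>, sketched with the standard library instead of autotest's helpers; the class name and commands are illustrative only:

```python
import subprocess

class conflicts_sketch(object):
    """Stand-in for a test.test subclass; all names here are illustrative."""

    def run_once(self, cmd):
        # Run the test command and keep its output for later parsing.
        # A non-zero exit raises (error.CmdError in the real autotest API),
        # which would end the test with a failure.
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError('%r exited %d' % (cmd, result.returncode))
        self.results = result.stdout
        # Some tests always exit 0, so also inspect the output ourselves
        # (error.TestFail in the real API).
        if 'FAILED' in self.results:
            raise AssertionError('test output reported a failure')
```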
=== Useful test object attributes ===
<code>test</code> objects have the following attributes available<ref>http://autotest.kernel.org/browser/branches/0.10.1/client/common_lib/test.py#L9</ref>:
<pre>
outputdir       eg. results/<job>/<testname.tag>
resultsdir      eg. results/<job>/<testname.tag>/results
profdir         eg. results/<job>/<testname.tag>/profiling
debugdir        eg. results/<job>/<testname.tag>/debug
bindir          eg. tests/<test>
src             eg. tests/<test>/src
tmpdir          eg. tmp/<tempname>_<testname.tag>
</pre>
=== Getting proper test results ===
First, the basic rule for test results: if your <code>run_once()</code> method does not raise an exception, the test result will be PASS. If it raises <code>error.TestFail</code> or <code>error.TestWarn</code>, the result is FAIL or WARN, respectively. Any other exception yields an ERROR result.
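That rule can be stated as a few lines of python; the exception classes here are local stand-ins for autotest's <code>error</code> module:

```python
# Sketch of the rule above: how a harness might map run_once() exceptions
# to a result string. TestFail/TestWarn are local stand-ins, not the real
# autotest error module.
class TestFail(Exception):
    pass

class TestWarn(Exception):
    pass

def result_of(run_once):
    try:
        run_once()
    except TestFail:
        return 'FAIL'
    except TestWarn:
        return 'WARN'
    except Exception:
        return 'ERROR'
    return 'PASS'
```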
For simple tests you can just run the test binary like this:
<pre>
self.results = utils.system_output(cmd, retain_output=True)
</pre>
If <code>cmd</code> is successful (i.e. it returns an exit status of 0) then <code>utils.system_output()</code> will return the output of the command. Otherwise it will raise <code>error.CmdError</code>, which will immediately end the test.

Some tests always exit successfully, so you'll need to inspect their output to decide whether they passed or failed. That would look more like this:
<pre>
output = utils.system_output(cmd, retain_output=True)
if 'FAILED' in output:
    raise error.TestFail
elif 'WARNING' in output:
    raise error.TestWarn
</pre>

=== Log files and scratch data ===
Any files written to <code>self.resultsdir</code> will be saved at the end of the test. Anything written to <code>self.tmpdir</code> will be discarded.
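In other words, treat <code>self.resultsdir</code> as the place for anything worth keeping. A rough sketch, with a temp directory standing in for the real attribute:

```python
import os
import tempfile

# resultsdir stands in for self.resultsdir here; in a real test object the
# attribute already exists and saved files show up with the job results.
resultsdir = tempfile.mkdtemp()
logpath = os.path.join(resultsdir, 'conflicts.log')
with open(logpath, 'w') as f:
    f.write('0 potential conflicts found\n')
```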
=== Returning extra data ===
Further test-level info can be returned by using <code>test.write_test_keyval(dict)</code>:
<pre>
extrainfo = dict()
for line in self.results.splitlines():
    if line.startswith("kernel version "):
        extrainfo['kernelver'] = line.split()[3]
    ...
self.write_test_keyval(extrainfo)
</pre>
* For per-iteration data (performance numbers, etc.) there are three methods:
** Just attr: <code>test.write_attr_keyval(attr_dict)</code>
** Just perf: <code>test.write_perf_keyval(perf_dict)</code>
** Both: <code>test.write_iteration_keyval(attr_dict, perf_dict)</code>
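As far as the format goes, the keyval files these methods write are just one <code>key=value</code> pair per line; a toy formatter shows the idea (treat the details as illustrative - the real autotest writers add more bookkeeping):

```python
# Toy version of the keyval idea: dicts become "key=value" lines.
# The real autotest keyval writers add more bookkeeping; this is the shape.
def format_keyval(d):
    return ''.join('%s=%s\n' % (k, v) for k, v in sorted(d.items()))

print(format_keyval({'kernelver': '2.6.30', 'arch': 'x86_64'}))
```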
== References ==
<references/>

== External Links ==
From the upstream Autotest wiki:
* [http://autotest.kernel.org/wiki/ControlRequirements ControlRequirements] (covers the requirements for control files)
* [http://autotest.kernel.org/wiki/AddingTest AddingTest] (covers the basics of test writing)
* [http://autotest.kernel.org/wiki/AutotestApi AutotestApi] (more detailed info about writing tests)
[[Category:AutoQA]]