Revision as of 16:55, 13 August 2009
Will's notes (to be integrated into main page)
Control Files
The control file is actually interpreted as a Python script. So you can do any of the normal pythonic things you might want to do.
Before it reads the control file, Autotest imports all the symbols from the autotest_lib.client.bin.util module.[1] This means the control files can use any function defined in common_lib.utils or bin.base_utils.[2] This lets you do things like:

    arch = get_arch()
    baseurl = '%s/development/%s/os/' % (mirror_baseurl, arch)
    job.run_test('some_rawhide_test', arch=arch, baseurl=baseurl)

since get_arch is defined in common_lib.utils.
Test Objects: Getting test results
First, the basic rule for test results: if your run_once() method does not raise an exception, the test result is PASS. If it raises error.TestFail or error.TestWarn, the test result is FAIL or WARN, respectively. Any other exception yields an ERROR result.
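The rule above can be sketched in plain Python. Note that the exception classes below are local stand-ins for error.TestFail and error.TestWarn, not Autotest's real error module, and result_of() is an illustrative helper, not Autotest's actual dispatch code:

```python
# Stand-ins for error.TestFail / error.TestWarn -- illustrative only,
# not Autotest's real exception hierarchy.
class TestFail(Exception):
    pass

class TestWarn(Exception):
    pass

def result_of(run_once):
    """Map the outcome of a test body to a result string,
    following the rule described in the text."""
    try:
        run_once()
    except TestFail:
        return 'FAIL'
    except TestWarn:
        return 'WARN'
    except Exception:
        return 'ERROR'
    return 'PASS'

def failing_body():
    raise TestFail('expected marker missing')

print(result_of(lambda: None))   # PASS
print(result_of(failing_body))   # FAIL
```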
For simple tests you can just run the test binary like this:
    self.results = utils.system_output(cmd, retain_output=True)
If cmd is successful (i.e. it returns an exit status of 0) then utils.system_output() will return the output of the command. Otherwise it will raise error.CmdError, which will immediately end the test.
Some tests always exit successfully, so you'll need to inspect their output to decide whether they passed or failed. That would look more like this:
    output = utils.system_output(cmd, retain_output=True)
    if 'FAILED' in output:
        raise error.TestFail
    elif 'WARNING' in output:
        raise error.TestWarn
Test Objects: Returning extra data
Further test-level info can be returned by using test.write_test_keyval(dict):
    extrainfo = dict()
    for line in self.results.stdout:
        if line.startswith("kernel version "):
            extrainfo['kernelver'] = line.split()[3]
    ...
    self.write_test_keyval(extrainfo)
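A self-contained version of that parsing loop can be tried outside a test. The "kernel version " line format and the field index are taken from the example above; the exact layout of the line is an assumption about the test binary's output:

```python
def extract_keyvals(output_lines):
    """Scan command output for extra test-level info to report.
    Mirrors the keyval-collection loop in the text."""
    extrainfo = dict()
    for line in output_lines:
        if line.startswith("kernel version "):
            # Field [3] matches the example; the line layout here
            # is an assumption, not real output from any binary.
            extrainfo['kernelver'] = line.split()[3]
    return extrainfo

sample = [
    "starting benchmark",
    "kernel version is 2.6.30",   # hypothetical output line
    "done",
]
print(extract_keyvals(sample))   # {'kernelver': '2.6.30'}
```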
For per-iteration data (performance numbers, etc.) there are three methods:
- Just attr: test.write_attr_keyval(attr_dict)
- Just perf: test.write_perf_keyval(perf_dict)
- Both: test.write_iteration_keyval(attr_dict, perf_dict)
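To show the calling pattern, here is a minimal fake test object. Real Autotest test objects write these dicts to per-iteration keyval files in the results directory; this stand-in merely records them in memory, and the throughput numbers are made up:

```python
class FakeTest:
    """Records keyvals instead of writing files -- purely
    illustrative, not Autotest's implementation."""
    def __init__(self):
        self.iterations = []

    def write_iteration_keyval(self, attr_dict, perf_dict):
        # Both attr (descriptive) and perf (numeric) data.
        self.iterations.append((attr_dict, perf_dict))

    def write_attr_keyval(self, attr_dict):
        self.write_iteration_keyval(attr_dict, {})

    def write_perf_keyval(self, perf_dict):
        self.write_iteration_keyval({}, perf_dict)

t = FakeTest()
for i in range(3):
    # e.g. one perf keyval set per benchmark iteration
    t.write_perf_keyval({'throughput_mb_sec': 100 + i})
print(len(t.iterations))   # 3
```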
Test Objects: Attributes for directories
test objects have the following attributes available[3]:
    outputdir   eg. results/<job>/<testname.tag>
    resultsdir  eg. results/<job>/<testname.tag>/results
    profdir     eg. results/<job>/<testname.tag>/profiling
    debugdir    eg. results/<job>/<testname.tag>/debug
    bindir      eg. tests/<test>
    src         eg. tests/<test>/src
    tmpdir      eg. tmp/<tempname>_<testname.tag>