Introduction
Here's some info on writing tests for AutoQA. There are four parts to a test: the test code, the test object, the Autotest control file, and the AutoQA control file. Typically they all live in a single directory, located in the tests/ dir of the autoqa source tree.
Write test code first
I'll say it again: Write the test first. The tests don't require anything from autotest or autoqa. You should have a working test before you even start thinking about AutoQA.
You can package up pre-existing tests or you can write a new test in whatever language you're comfortable with. It doesn't even need to return a meaningful exit code if you don't want it to (though it is definitely better if it does). You'll handle parsing the output and returning a useful result in the test object.
If you are writing a brand new test, there are some python libraries that have been developed for use in existing AutoQA tests. More information about this will be available once these libraries are packaged correctly, but they are not necessary to write your own tests. You can choose to use whatever language and libraries you want.
The test directory
Create a new directory to hold your test. The directory name will be used as the test name, and the test object name should match that. Choose a name that doesn't use spaces, dashes, or dots. Underscores are acceptable.
Drop your test code into the directory - it can be a bunch of scripts, a tarball of sources that may need compiling, whatever.
Next, from the directory autoqa/doc/, copy the template files control.template, control.autoqa.template and test_class.py.template into your test directory. Rename them to control, control.autoqa and [testname].py, respectively.
The control file
The control file defines some metadata for this test - who wrote it, what kind of a test it is, what test arguments it uses from AutoQA, and so on. Here's an example control file:
<pre>
### control file for conflicts test
AUTHOR = "Will Woods <wwoods@redhat.com>"
TIME="SHORT"
NAME = 'conflict'
DOC = """
This test runs potential_conflict from yum-utils to check
for possible file / package conflicts.
"""
TEST_TYPE = 'CLIENT'
TEST_CLASS = 'General'
TEST_CATEGORY = 'Functional'

job.run_test('conflicts', config=autoqa_conf, **autoqa_args)
</pre>
Required data
The following control file items are required for valid AutoQA tests. The first three are the most important for us; the rest are less important but still required.
- NAME: The name of the test. Should match the test directory name, the test object name, etc.
- AUTHOR: Your name and email address.
- DOC: A verbose description of the test - its purpose, the logs and data it will generate, and so on.
- TIME: either 'SHORT', 'MEDIUM', or 'LONG'. This defines the expected runtime of the test - either 15 minutes, less than 4 hours, or more than 4 hours.
- TEST_TYPE: either 'CLIENT' or 'SERVER'. Use 'CLIENT' unless your test requires multiple machines (e.g. a client and server for network-based testing).
- TEST_CLASS: This is used to group tests in the UI. 'General' is fine. We may use this field to refer to the test event in the future.
- TEST_CATEGORY: This defines the category your test is a part of - usually this describes the general type of test it is. Examples include Functional, Stress, Performance, and Regression.
Optional data
The following control file items are optional, and infrequently used, for AutoQA tests.
<pre>
DEPENDENCIES = 'POWER, CONSOLE'
SYNC_COUNT = 1
</pre>
- DEPENDENCIES: Comma-separated list of hardware requirements for the test. Currently unsupported.
- SYNC_COUNT: The number of hosts to set up and synchronize for this test. Only relevant for SERVER-type tests that need to run on multiple machines.
Launching the test object
Most tests will have a line in the control file like this:
<pre>
job.run_test('conflicts', config=autoqa_conf, **autoqa_args)
</pre>
This will create a 'conflicts' test object (see below) and pass along the following variables.
- autoqa_conf - A string with the autoqa.conf file, usually located at /etc/autoqa/autoqa.conf. Note, though, that some of the values in autoqa_conf are changed by the autoqa harness while scheduling the test run.
- autoqa_args - A dictionary containing all the event-specific variables (e.g. kojitag for the post-koji-build event). Documentation on these is to be found in the events/[eventname]/README files. Some more variables may also be present, as described in the template file.
Those variables will be inserted into the control file by the autoqa test harness when it's time to schedule the test.
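For illustration only, here is a hedged sketch of how a test could read a value out of autoqa_conf, assuming it carries the contents of the config file (rather than just its path); the section and option names are made up, so check your actual autoqa.conf for the real ones:
<pre>
# sketch only: autoqa_conf is assumed to hold the raw INI text of autoqa.conf
import ConfigParser
import StringIO

config = ConfigParser.SafeConfigParser()
config.readfp(StringIO.StringIO(autoqa_conf))
mail_to = config.get('general', 'mailto')   # hypothetical section/option names
</pre>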
The control.autoqa file
The control.autoqa file allows a test to define any scheduling requirements or modify input arguments. This file decides whether to run the test at all, on what architectures/distributions it should run, and so on. It is evaluated on the AutoQA server before the test itself is scheduled and run on an AutoQA client.
All variables available in control.autoqa are documented in doc/control.autoqa.template. You can override them to customize your test's scheduling. Basically you can influence:
- Which event the test runs for and under which conditions.
- The type of system the test needs. This includes system architecture, operating system version and whether the system supports virtualization (see autotest labels for additional information)
- Data passed from the event to the test object.
Here is an example control.autoqa file:
<pre>
# this test can be run just once and on any architecture,
# override the default set of architectures
archs = ['noarch']

# this test may be destructive, let's require a virtual machine for it
labels = ['virt']

# we want to run this test just for post-koji-build event;
# please note that 'execute' defaults to 'False' and to have
# the test scheduled, control.autoqa needs to complete with
# 'execute' set to 'True'
if event in ['post-koji-build']:
    execute = True
</pre>
Similar to the control file, the control.autoqa file is a Python script, so you can execute conditional expressions, loops or virtually any other Python statements there. However, it is strongly recommended to keep this file as simple as possible and put all the logic into the test object.
Test Object
The test object is a python file that defines an object that represents your test. It handles the setup for the test (installing packages, modifying services, etc), running the test code, and sending results to Autotest (and other places).
Convention holds that the test object file - and the object itself - should have the same name as the test. For example, the conflicts test contains a file named conflicts.py, which defines a conflicts class, as follows:
<pre>
from autotest_lib.client.bin import utils

from autoqa.test import AutoQATest
from autoqa.decorators import ExceptionCatcher

class conflicts(AutoQATest):
    ...
</pre>
The name of the class must match the name given in the run_test() line of the control file, and test classes must be subclasses of the AutoQATest class. But don't worry too much about how this works - the test_class.py.template contains the skeleton of an appropriate test object. Just change the name of the file (and class!) to something appropriate for your test.
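For a rough idea of where this is heading, here is a hedged sketch of what a complete minimal test object might look like; the kojitag argument, the command and the method body are illustrative only, not the actual contents of test_class.py.template:
<pre>
from autotest_lib.client.bin import utils

from autoqa.test import AutoQATest
from autoqa.decorators import ExceptionCatcher

class mytest(AutoQATest):
    version = 1  # common Autotest convention for test classes

    @ExceptionCatcher()
    def run_once(self, kojitag, **kwargs):
        # always call the parent method first
        super(self.__class__, self).run_once()
        # illustrative command; replace with your real test script
        out = utils.system_output('./mytest.sh %s' % kojitag, retain_output=True)
        self.log(out, printout=False)
        self.detail.result = 'PASSED'
</pre>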
AutoQATest base class
This class contains the functionality common to all the tests. When you override some of its methods (like setup(), initialize() or run_once()) it is important to call the parent method first.
The most important attribute of this class is detail (an instance of the TestDetail class), which is used for storing all test outcomes.
The most important methods include log(), which you are advised to use for logging test output, and post_results(), which you can use to change the way your test results are reported (if you're not satisfied with the default behavior).
Whenever your test crashes, the process_exception() method will automatically catch the exception and log it (more on that in the next section).
ExceptionCatcher decorator
When an unintended exception is raised during test setup (setup()), initialization (initialize()) or execution (run_once()) and the ExceptionCatcher() decorator is used for these methods (the default), it calls the process_exception() method instead of simply crashing. In this way we are able to process and submit results even for crashed tests.
When such an event occurs, the test result is set to CRASHED, the exception traceback is added to the test output and the exception info is put into the test summary. Then the results are reported in the standard way (by creating log files and sending emails). Finally, the original exception is re-raised.
If a recovery procedure different from process_exception() is desired, you may define your own method and provide its name as an argument to the decorator. For example:
<pre>
def my_exception_handler(self, exc = None):
    '''do something different'''

@ExceptionCatcher('self.my_exception_handler')
def run_once(self, **kwargs):
    ...
</pre>
Test stages
setup()
This is an optional method of the test class. This is where you make sure that any required packages are installed, services are started, your test code is compiled, and so on. For example:
<pre>
@ExceptionCatcher()
def setup(self):
    retval = utils.system('yum -y install httpd')
    assert retval == 0
    # ignore_status=True makes utils.system() return the exit code
    # instead of raising an exception on a non-zero exit
    if utils.system('service httpd status', ignore_status=True) != 0:
        utils.system('service httpd start')
</pre>
initialize()
This does any pre-test initialization that needs to happen. AutoQA tests typically use this method to initialize various structures, set self.detail.id and similar attributes. This is an optional method. All basic initialization is done in the AutoQATest class, so check it out before you re-define it.
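As a hedged sketch (the name argument and the detail id value are purely illustrative), an overridden initialize() could look like this:
<pre>
@ExceptionCatcher()
def initialize(self, name='', **kwargs):
    # call the parent method first, as required for AutoQATest subclasses
    super(self.__class__, self).initialize()
    # illustrative only: use the tested item's name as the test detail id
    self.detail.id = name
</pre>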
run_once()
This is where the test code actually gets run. It's the only required method for your test object.
In short, this method should build the argument list, run the test binary and process the test result and output. For example, see below:
<pre>
@ExceptionCatcher()
def run_once(self, baseurl, parents, name, **kwargs):
    super(self.__class__, self).run_once()
    cmd = './potential_conflict.py --tempcache --newest ' \
          '--repofrompath=target,%s --repoid=target' % baseurl
    out = utils.system_output(cmd, retain_output=True)
    self.log(out, printout=False)
</pre>
The example above will run the command potential_conflict.py and save its output. It will raise CmdError if the command ends with a non-zero exit code.
If you need to receive just the exit code of the command, use the utils.system() method instead.
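For instance, a minimal sketch (ignore_status=True is assumed here to keep utils.system() from raising CmdError on a non-zero exit):
<pre>
# sketch: only the exit code of the command is of interest
retval = utils.system(cmd, ignore_status=True)
if retval != 0:
    self.detail.result = 'FAILED'
</pre>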
Additionally, if you need both the exit code and command output, use the built-in utils.run() method:
<pre>
cmd_result = utils.run(cmd, ignore_status=True, stdout_tee=utils.TEE_TO_LOGS, stderr_tee=utils.TEE_TO_LOGS)
output = cmd_result.stdout
retval = cmd_result.exit_status
</pre>
Useful test object attributes
AutoQATest instances have the following attributes available[1]:
<pre>
outputdir       eg. results/<job>/<testname.tag>
resultsdir      eg. results/<job>/<testname.tag>/results
profdir         eg. results/<job>/<testname.tag>/profiling
debugdir        eg. results/<job>/<testname.tag>/debug
bindir          eg. tests/<test>
src             eg. tests/<test>/src
tmpdir          eg. tmp/<tempname>_<testname.tag>
</pre>
Test Results
The AutoQATest class provides a detail attribute to be used for storing test results. This is an instance of the TestDetail class and serves as a container for everything related to the test outcome.
Overall Result
The overall test result is stored in self.detail.result. You should set it in run_once() according to the result of your test. You can choose from these values:
- PASSED - the test has passed, there is no problem with it
- INFO - the test has passed, but there is some important information that a relevant person would very probably like to review
- FAILED - the test has failed, requirements are not met
- NEEDS_INSPECTION (default) - the test has failed, but a relevant person is needed to inspect it and possibly waive the errors
- ABORTED - some third-party error has occurred (networking error, an external script used for testing has crashed, etc.) and the test could not complete because of that. Re-running the test with the same input arguments could solve the problem.
- CRASHED - the test has crashed because of a programming error somewhere in our code (test script or autoqa code). Close inspection is necessary to solve the issue.
If no value is set in self.detail.result, a value of NEEDS_INSPECTION is used.
If an exception occurs and is caught by the ExceptionCatcher decorator (i.e. you don't catch it yourself), self.detail.result is set to CRASHED.
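For the common cases, a minimal sketch at the end of run_once() could look like this (num_conflicts is assumed to have been parsed from the command output earlier):
<pre>
# sketch: translate the parsed outcome into an overall result
if num_conflicts == 0:
    self.detail.result = 'PASSED'
else:
    self.detail.result = 'FAILED'
    self.detail.summary = '%d packages with file conflicts' % num_conflicts
</pre>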
Using ABORTED result properly
If you want to end your test with the ABORTED result, simply set self.detail.result and then re-raise the original exception. self.detail.summary will be filled in automatically (extracted from the exception message), if empty.
<pre>
try:
    ...  # download from Koji
except IOError, e:  # or some other error
    self.detail.result = 'ABORTED'
    raise
</pre>
If you don't have any exception to re-raise but still want to end the test, again set self.detail.result, but this time be sure to also provide an explanation in self.detail.summary and then end the test by raising autotest_lib.client.common_lib.error.TestFail. Alternatively you can provide the error explanation as an argument to the TestFail class instead of filling in self.detail.summary.
<pre>
from autotest_lib.client.common_lib import error

foo = ...  # do some stuff
if foo is None:
    self.detail.result = 'ABORTED'
    raise error.TestFail('No result returned from service bar')
</pre>
Posting feedback into Bodhi
After the result of a test is known, it can be sent into Bodhi. You need to manually call the post_results() method at the end of your test and provide a bodhi parameter.
<pre>
# Report all results to Bodhi
# update_result is a mapping of Bodhi update title -> TestDetail object
for title, td in update_result.items():
    self.post_results(td, bodhi={'title': title})
</pre>
Summary
self.detail.summary should contain a few words summarizing the test output. It is then used in the log overview and in the email subject. E.g. for the conflicts test it can be "69 packages with file conflicts". Don't repeat the test name, test result or test ID here.
Highlights
self.highlights should contain a digest of the stdout/stderr generated by your test. Traditionally, this is used to draw attention to important warnings or errors. For example, you may have several hundred or thousand lines of test output (self.outputs), but you wouldn't want to inspect all of that every time to determine the nature of a failure. Draw attention to specific issues by using self.highlights.
self.highlights can contain a string, or a list of strings.
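For example, a minimal sketch that keeps only the interesting lines from the command output (the filter condition is illustrative):
<pre>
# sketch: pick the lines worth a reviewer's attention out of the full output
self.highlights = [line for line in out.splitlines() if 'conflict' in line.lower()]
</pre>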
Detailed Output
Put any detailed output into the self.outputs variable. Usually it contains the stdout/stderr of your test script, but it may contain less or more, as you wish. This detailed output will probably represent the largest portion of the test result report.
self.outputs can contain a string or a list of strings.
Extra Data
Further test-level info can be returned by using test.write_test_keyval(dict). The following example demonstrates extracting and saving the kernel version used when running a test:
<pre>
extrainfo = dict()
for line in self.results.stdout:
    if line.startswith("kernel version "):
        extrainfo['kernelver'] = line.split()[3]
...
self.write_test_keyval(extrainfo)
</pre>
In addition to test-level key/value pairs, per-iteration key/value information (e.g. performance metrics) can be recorded:
- self.write_attr_keyval(attr_dict) - Store test attributes (string data). Test attributes are limited to 100 characters. [2]
- self.write_perf_keyval(perf_dict) - Store test performance metrics (numerical data). Performance values must be floating-point numbers.
- self.write_iteration_keyval(attr_dict, perf_dict) - Store both attributes and performance data.
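For instance, a minimal sketch recording one attribute and one performance metric per iteration (the key names and values are made up):
<pre>
# sketch: illustrative key/value pairs only
self.write_attr_keyval({'tested_build': 'espeak-1.42.04-1.fc12'})
self.write_perf_keyval({'scan_time_seconds': 42.7})
</pre>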
Log files and scratch data
Autotest automatically logs all the client/server output, the full output of any commands you run, operating system information and more, and stores it all on the server. There is a hyperlink to the directory with all these log files in every test report.
If you want to store a custom file (like your own log), just save it to the self.resultsdir directory. All these files will be saved at the end of the test. On the other hand, any files written to self.tmpdir will be discarded.
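For example, a minimal sketch that stores a custom log next to the standard results (the file name and the out variable are illustrative):
<pre>
import os

# sketch: files written to self.resultsdir are preserved with the job results
with open(os.path.join(self.resultsdir, 'mytest-details.log'), 'w') as f:
    f.write(out)
</pre>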
output.log
After the test completes, an output.log file is created in the directory referenced by self.resultsdir. The output.log file combines all test output variables (self.result/summary/highlights/outputs) and writes them in a consistent format - the same format that is used for email reports. You can use this file to review the final test report even if you don't have access to the email one.
How to run AutoQA tests
Install AutoQA from GIT
First of all, you'll need to check out some version from Git. You can either use master or some tagged release.
To check out the master branch:
<pre>
git clone git://git.fedorahosted.org/autoqa.git autoqa
cd autoqa
</pre>
To check out a tagged release:
<pre>
git clone git://git.fedorahosted.org/autoqa.git autoqa
cd autoqa
git tag -l    # lists the available tags; at the time of writing, the latest tag was v0.3.5-1
git checkout -b v0.3.5-1 tags/v0.3.5-1
</pre>
Add your test
The best way to add your test to the directory structure is to create a new branch, copy your test in, and install autoqa with make:
<pre>
git checkout -b my_new_awesome_test
cp -r /path/to/directory/with/your/test ./tests
make clean install
</pre>
Run your test
This depends on the event your test is supposed to run under. Let's assume that it is post-koji-build.
<pre>
/usr/share/autoqa/post-koji-build/watch-koji-builds.py --dry-run
</pre>
This command will show you current Koji builds, e.g.:
<pre>
No previous run - checking builds in the past 3 hours
autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12
autoqa post-koji-build --kojitag dist-f11-updates-candidate --arch x86_64 kdemultimedia-4.3.4-1.fc11
autoqa post-koji-build --kojitag dist-f11-updates-candidate --arch x86_64 kdeplasma-addons-4.3.4-1.fc11
autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 cryptopp-5.6.1-0.1.svn479.fc12
autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 drupal-6.15-1.fc12
autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 seamonkey-2.0.1-1.fc12
... output trimmed ...
</pre>
So to run your test, just select one of the lines and add the parameters --test name_of_your_test --local, which will locally execute the test you just wrote.
If you wanted to run rpmlint, for example, the command would be:
<pre>
autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12 --test rpmlint --local
</pre>