== Introduction ==

Here's some info on writing tests for AutoQA. There are four parts to a test: the test code, the test object, the Autotest control file, and the AutoQA control file. Typically they all live in a single directory, located in the {{filename|tests/}} dir of the autoqa source tree.


{{admon/important| Start with a test | Before considering integrating a test into [[AutoQA]] or [[Autotest]], create a '''working''' test.  Creating a working test should not require knowledge of autotest or autoqa.  This page outlines the process of integrating an existing test into [[AutoQA]].}}
{{admon/note|No support for per-package tests|AutoQA currently supports only generic Fedora tests (tests that concern packages - or other objects - in general). We have plans to support per-package tests in the future (e.g. tests for {{package|sshd}}, tests for {{package|gimp}}, etc.). Until AutoQA support exists, consider merging your test into the upstream test suite, including it in the Fedora RPM package and having it executed during <code>%check</code>, or asking the package maintainer to manually execute the tests and provide results during Bodhi updates.}}


== Write test code first ==

I'll say it again: write the test first. The tests don't require anything from autotest or autoqa. You should have a working test before you even start thinking about AutoQA.

You can package up pre-existing tests or you can write a new test in whatever language you're comfortable with. It doesn't even need to return a meaningful exit code if you don't want it to (even though it is definitely better). You'll handle parsing the output and returning a useful result in the test object.

If you are writing a brand new test, there are some python libraries that have been developed for use in existing AutoQA tests. More information about this will be available once these libraries are packaged correctly, but they are not necessary to write your own tests. You can choose to use whatever language and libraries you want.

== The test directory ==

Create a new directory to hold your test. The directory name will be used as the test name, and the test object name should match that. Choose a name that doesn't use spaces, dashes, or dots. Underscores are acceptable.

Drop your test code into the directory - it can be a bunch of scripts, a tarball of sources that may need compiling, whatever.

Next, from the directory {{filename|autoqa/doc/}}, copy the template files {{filename|control.template}}, {{filename|control.autoqa.template}} and {{filename|test_class.py.template}} into your test directory. Rename them to {{filename|control}}, {{filename|control.autoqa}} and {{filename|[testname].py}}, respectively.
== The {{filename|control}} file ==


The control file defines some metadata for this test and executes the test. Here's an example control file:


=== control file for conflicts test ===
<pre>
DOC = """
This test runs potential_conflict from yum-utils to check for possible
file / package conflicts.
"""
MAINTAINER = "Martin Krizek <mkrizek@redhat.com>"


job.run_test('conflicts', **autoqa_args)
</pre>


=== Required data ===


The following control file items are required for valid AutoQA tests:


* '''DOC''': A verbose description of the test - its purpose, the logs and data it will generate, and so on.
* '''MAINTAINER''': The contact person for this test. If it doesn't work or there are some problems, then we know who to contact.


=== Launching the test object ===


Most tests will have a line in the control file like this:
<pre>job.run_test('conflicts', **autoqa_args)</pre>
This will create a 'conflicts' ''test object'' (see below) and pass along all required variables.


; <code>autoqa_args</code>
: A dictionary containing all the event-specific variables (e.g. ''kojitag'' for the post-koji-build event). Documentation on these is to be found in the <code>events/[eventname]/README</code> files. Some more variables may also be present, as described in the template file.


Those variables will be inserted into the control file by the autoqa test harness when it's time to schedule the test.
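As an illustration, here is a minimal sketch of a test object method consuming one of these variables; the <code>kojitag</code> argument is just the post-koji-build example mentioned above, and the class this method would live in is described in the test object section below:

<pre>
    @ExceptionCatcher()
    def run_once(self, kojitag, **kwargs):
        super(self.__class__, self).run_once()
        # 'kojitag' was passed in through **autoqa_args
        self.log('Testing builds from koji tag: %s' % kojitag)
</pre>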
== The {{filename|control.autoqa}} file ==

The {{filename|control.autoqa}} file allows a test to define any scheduling requirements or modify input arguments. This file decides whether to run this test at all, on what architectures/distributions it should run, and so on. It is evaluated on the AutoQA server before the test itself is scheduled and run on an AutoQA client.


All variables available in {{filename|control.autoqa}} are documented in [http://git.fedorahosted.org/git/?p=autoqa.git;a=blob_plain;f=doc/control.autoqa.template;hb=HEAD {{filename|doc/control.autoqa.template}}]. You can override them to customize your test's scheduling.  Basically you can influence:
* Which event the test runs for and under which conditions.
* The type of system the test needs.  This includes system architecture, operating system version and whether the system supports virtualization (see [[Managing autotest labels|autotest labels]] for additional information)
* Data passed from the event to the test object.


Here is an example {{filename|control.autoqa}} file:
<pre>
# this test can be run just once and on any architecture,
# override the default set of architectures
archs = ['noarch']

# this test may be destructive, let's require a virtual machine for it
labels = ['virt']

# we want to run this test just for post-koji-build event;
# please note that 'execute' defaults to 'False' and to have
# the test scheduled, control.autoqa needs to complete with
# 'execute' set to 'True'
if event in ['post-koji-build']:
    execute = True
</pre>


Similar to the control file, the {{filename|control.autoqa}} file is a Python script, so you can execute conditional expressions, loops or virtually any other Python statements there. However, it is heavily recommended to keep this file as simple as possible and put all the logic into the test object.

== Test Object ==

The test object is a python file that defines an object that represents your test. It handles the setup for the test (installing packages, modifying services, etc.), running the test code, and sending results to Autotest (and other places).

Convention holds that the test object file - and the object itself - should have the same name as the test. For example, the ''conflicts'' test contains a file named {{filename|conflicts.py}}, which defines a <code>conflicts</code> class, as follows:


<pre>
from autotest_lib.client.bin import utils
from autoqa.test import AutoQATest
from autoqa.decorators import ExceptionCatcher

class conflicts(AutoQATest):
    ...
</pre>

The name of the class must match the name given in the <code>run_test()</code> line of the control file, and test classes must be subclasses of the <code>AutoQATest</code> class. But don't worry too much about how this works - the {{filename|test_class.py.template}} contains the skeleton of an appropriate test object. Just change the name of the file (and class!) to something appropriate for your test.
=== AutoQATest base class ===


This class contains the functionality common to all the tests. When you override some of its methods (like <code>setup()</code>, <code>initialize()</code> or <code>run_once()</code>) it is important to call the parent method first.

The most important attribute of this class is <code>detail</code> (an instance of the <code>TestDetail</code> class), which is used for storing all test outcomes.

The most important methods include <code>log()</code>, which you are advised to use for logging test output, and <code>post_results()</code>, which you can use to change the way your test results are reported (if you're not satisfied with the default behavior).

Whenever your test crashes, the <code>process_exception()</code> method will automatically catch the exception and log it (more on that in the next section).


=== ExceptionCatcher decorator ===


When an unintended exception is raised during test setup (<code>setup()</code>), initialization (<code>initialize()</code>) or execution (<code>run_once()</code>), and the <code>ExceptionCatcher()</code> decorator is used for these methods (the default), it calls the <code>process_exception()</code> method instead of simply crashing. This way we are able to operate and submit results even for crashed tests.


When such an event occurs, the test result is set to ''CRASHED'', the exception traceback is added to the test output and the exception info is put into the test summary. Then the results are reported in the standard way (by creating log files and sending emails). Finally the original exception is re-raised.


If a different recovery procedure than <code>process_exception()</code> is desired, you may define the method and provide the method name as an argument to the decorator. For example, see below:

<pre>
def my_exception_handler(self, exc = None):
    '''do something different'''

@ExceptionCatcher('self.my_exception_handler')
def run_once(self, **kwargs):
    ...
</pre>


{{admon/important| **kwargs parameter | Because of some nasty Autotest magic, it is required to have the <code>**kwargs</code> argument in the decorated function. This is because the Autotest magic cannot find out what the correct subset of arguments from <code>**autoqa_args</code> to pass is, so it passes them all - which causes an error if you don't have them all listed.}}
 
You are advised to call the original <code>process_exception()</code> method inside your custom handler, if applicable.
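A sketch of such a handler, building on the example above; it assumes <code>process_exception()</code> can be called without arguments:

<pre>
    def my_exception_handler(self, exc = None):
        # do any custom recovery/cleanup first
        self.log('Test crashed, cleaning up')
        # then fall back to the default crash handling
        self.process_exception()
</pre>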


=== Test stages ===

==== setup() ====

This is an optional method of the test class. This is where you make sure that any required packages are installed, services are started, your test code is compiled, and so on. For example:
<pre>
    @ExceptionCatcher()
    def setup(self):
        retval = utils.system('yum -y install httpd')
        assert retval == 0
        if utils.system('service httpd status') != 0:
            utils.system('service httpd start')
</pre>


==== initialize() ====


This does any pre-test initialization that needs to happen. AutoQA tests typically use this method to initialize various structures, set <code>self.detail.id</code> and similar attributes. This is an optional method.


All basic initialization is done in the AutoQATest class, so check it out before you re-define it.

{{admon/note|Call AutoQATest.initialize|If you re-implement the initialize function, make sure that you call <code>super(self.__class__, self).initialize(config)</code> inside it, so all the required initialization is executed.}}

==== run_once() ====

This is where the test code actually gets run. It's the only required method for your test object.


In short, this method should build the argument list, run the test binary and process the test result and output.  For example, see below:
<pre>
    @ExceptionCatcher()
    def run_once(self, baseurl, parents, name, **kwargs):
        super(self.__class__, self).run_once()

        cmd = './potential_conflict.py --tempcache --newest ' \
              '--repofrompath=target,%s --repoid=target' % baseurl

        out = utils.system_output(cmd, retain_output=True)
        self.log(out, printout=False)
</pre>

The above example will run the command {{command|potential_conflict.py}} and save its output. It will raise <code>CmdError</code> if the command ends with a non-zero exit code.

If you need to receive just the exit code of the command, use the <code>utils.system()</code> method instead.

Additionally, if you need both the exit code and the command output, use the built-in <code>utils.run()</code> method:

<pre>
    cmd_result = utils.run(cmd, ignore_status=True, stdout_tee=utils.TEE_TO_LOGS, stderr_tee=utils.TEE_TO_LOGS)
    output = cmd_result.stdout
    retval = cmd_result.exit_status
</pre>

==== Useful test object attributes ====
<code>AutoQATest</code> instances have the following attributes available<ref>http://autotest.kernel.org/browser/branches/0.10.1/client/common_lib/test.py#L9</ref>:
<pre>
outputdir      eg. results/<job>/<testname.tag>
resultsdir     eg. results/<job>/<testname.tag>/results
profdir        eg. results/<job>/<testname.tag>/profiling
debugdir       eg. results/<job>/<testname.tag>/debug
bindir         eg. tests/<test>
src            eg. tests/<test>/src
tmpdir         eg. tmp/<tempname>_<testname.tag>
</pre>
=== Test Results ===


The <code>AutoQATest</code> class provides a <code>detail</code> attribute to be used for storing test results. This is an instance of <code>TestDetail</code> class and serves as a container for everything related to the test outcome.
 
{{admon/note||If your test tests several independent things at once, you can create several <code>TestDetail</code> objects and then submit results for all of them manually. For example ''upgradepath'' test uses that for reporting results for every proposed Bodhi update. But this is really advanced stuff, see ''upgradepath'' or ''depcheck'' tests for inspiration.}}
 
==== ID ====
 
Every test run should contain a string identification of the test in <code>self.detail.id</code>. This doesn't have to be unique, but it should be descriptive enough for the log reader to quickly understand what has been tested. For RPM-based tests this will be probably a build NVR. For Bodhi update-based tests this will be probably a Bodhi update title. For yum repository-based tests this will be a repository name or address. And so on.
 
Usually the ID is set in the <code>initialize()</code> method (as soon as possible), because it does not change throughout the test:
 
<pre>
    @ExceptionCatcher()
    def initialize(self, config, nvr, **kwargs):
        super(self.__class__, self).initialize(config)
        self.detail.id = nvr
</pre>
 
==== Architecture ====
 
If your test is not an architecture-independent test (aka a ''noarch'' test, the default), you must set the architecture of the tested items in <code>self.detail.arch</code>. That is usually also done in the <code>initialize()</code> method.
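A minimal sketch extending the previous example; it assumes the architecture arrives as an ''arch'' argument from the event:

<pre>
    @ExceptionCatcher()
    def initialize(self, config, nvr, arch, **kwargs):
        super(self.__class__, self).initialize(config)
        self.detail.id = nvr
        self.detail.arch = arch  # e.g. 'i386'; leave unset for noarch tests
</pre>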


==== Overall Result ====


The overall test result is stored in <code>self.detail.result</code>. You should set it in <code>run_once()</code> according to the result of your test. You can choose from these values:


* <code>PASSED</code> - the test has passed, there is no problem with it
* <code>INFO</code> - the test has passed, but there is some important information that a relevant person would very probably like to review
* <code>FAILED</code> - the test has failed, requirements are not met
* <code>NEEDS_INSPECTION</code> ''(default)'' - the test has failed, but a relevant person is needed to inspect it and possibly waive the errors
* <code>ABORTED</code> - some third party error has occurred (networking error, external script used for testing has crashed, etc.) and the test could not complete because of that. Re-running this test with the same input arguments could solve this problem.
* <code>CRASHED</code> - the test has crashed because of a programming error somewhere in our code (test script or autoqa code). Close inspection is necessary to be able to solve this issue.


If no value is set in <code>self.detail.result</code>, a value of <code>NEEDS_INSPECTION</code> is used.


If an exception occurs, and is caught by the <code>ExceptionCatcher</code> decorator (i.e. you don't catch it yourself), <code>self.detail.result</code> is set to <code>CRASHED</code>.
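For illustration, a test might map its script's exit code to one of these values in <code>run_once()</code>; this is only a sketch and the script name is hypothetical:

<pre>
    @ExceptionCatcher()
    def run_once(self, baseurl, **kwargs):
        super(self.__class__, self).run_once()
        # run the actual test script and keep its exit code
        retval = utils.system('./my_check.py %s' % baseurl, ignore_status=True)
        if retval == 0:
            self.detail.result = 'PASSED'
        else:
            self.detail.result = 'FAILED'
</pre>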


===== Using ABORTED result properly =====


If you want to end your test with the <code>ABORTED</code> result, simply set <code>self.detail.result</code> and then re-raise the original exception. <code>self.detail.summary</code> will be filled in automatically (extracted from the exception message) if it is empty.


<pre>
try:
    # download from Koji
except IOError, e:  # or some other error
    self.detail.result = 'ABORTED'
    raise
</pre>


If you don't have any exception to re-raise but still want to end the test, again set <code>self.detail.result</code>, but this time be sure to also provide an explanation in <code>self.detail.summary</code> and then end the test by raising <code>autotest_lib.client.common_lib.error.TestFail</code>. Alternatively you can provide the error explanation as an argument to the <code>TestFail</code> class instead of filling in <code>self.detail.summary</code>.


<pre>
from autotest_lib.client.common_lib import error
foo = ...  # do some stuff
if foo is None:
    self.detail.result = 'ABORTED'
    raise error.TestFail('No result returned from service bar')
</pre>


==== Summary ====

The <code>self.detail.summary</code> should contain a few words summarizing the test output. It is then used in the log overview and in the email subject. E.g. for the ''conflicts'' test it can be ''"69 packages with file conflicts"'', or for the ''rpmlint'' test it can be ''"3 errors, 5 warnings"''. Don't repeat the test name, test result or test ID here.
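For example, a summary could be set from the processed output; a one-line sketch, where <code>conflicts</code> is a hypothetical list built by the test:

<pre>
self.detail.summary = '%d packages with file conflicts' % len(conflicts)
</pre>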


==== Output ====

Log any test output you want to save (to be emailed and stored in a log file on the server) by using the <code>self.log()</code> method. Usually this is used for saving important information throughout the test. For the ''rpmlint'' test this may include information about which RPM packages will be downloaded and tested, and then the very output of the ''rpmlint'' command.

<pre>
self.log('Build to be tested: %s' % nvr)
cmd = '...'
output = utils.system_output(cmd, retain_output=True)
self.log(output, printout=False)
...
self.log('Found %d errors in the command output' % errors, stderr=True)
</pre>

The output is stored in the <code>self.detail.output</code> variable. You can modify it if needed (for example to filter out some lines you don't want to see in the log), but you're highly discouraged from adding new content to that variable directly (use the <code>log()</code> method instead).

{{admon/note|Know when to <code>print</code>|You can also use <code>print</code> statements in your code. But note that these lines won't be visible in the test log (though they will be in the ''[[#Log_files_and_scratch_data|debug log]]''). You can use them for printing low-level details that can potentially help you find a problem later. If you want something in between (not visible in the test log, but better accessible than the debug log), consider using <code>self.log(message, output{{=}}False)</code>. That won't show up in the test log, but it will be visible in the ''full log''.}}
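If you do post-process the stored output, keep it to filtering. A sketch, under the assumption that <code>self.detail.output</code> holds the logged text as a newline-separated string:

<pre>
lines = self.detail.output.splitlines()
lines = [l for l in lines if not l.startswith('Downloading:')]  # drop noisy progress lines
self.detail.output = '\n'.join(lines)
</pre>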


==== Highlights ====


If you want to emphasize a certain line or set of lines in your output, you can do it by highlighting them. That enables the reader to easily spot warnings and errors amongst hundreds of lines of output. It is usually done by:
 
<pre>
self.log('RPM checksum invalid', highlight=True)
</pre>


You can also provide a descriptive comment to the highlighted lines:
<pre>
self.log('= Problems overview =\n %s' % problems, highlight='This test will not pass until all of these problems are solved.')
</pre>


If you need to highlight a line that has been already logged (e.g. part of the command output), you can directly assign to <code>self.detail.highlights</code> variable (it is a list of tuples <code>(line, comment)</code>).
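For instance, a one-line sketch following the tuple structure described above (the line and comment are made up):

<pre>
self.detail.highlights.append(('E: binary-without-manpage', 'reported by rpmlint'))
</pre>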


==== Additional log fields ====


You can provide some additional log fields to be displayed in your log. Explore the <code>self.detail.log</code> variable (an instance of the <code>PrettyLog</code> class), especially these attributes:
* <code>summary_detail</code> - allows you to provide additional detail for your summary section
* <code>kojitag</code>/<code>bodhi_title</code>/<code>bodhi_id</code> - allows you to provide more information about which Koji tag/Bodhi update this test is related to
* <code>custom_fields</code> - allows you to specify any custom field for the log overview section
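A brief sketch of setting these attributes; treating <code>custom_fields</code> as a dictionary is an assumption here, and <code>kojitag</code>/<code>baseurl</code> are variables from the surrounding test:

<pre>
self.detail.log.summary_detail = '3 of 120 packages needed inspection'
self.detail.log.kojitag = kojitag
self.detail.log.custom_fields['Tested repo'] = baseurl  # assumption: dict of field name to value
</pre>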


==== Extra Data ====
Further test-level info can be returned by using <code>test.write_test_keyval(dict)</code>. The following example demonstrates extracting and saving the kernel version used when running a test:

<pre>
extrainfo = dict()
for line in self.results.stdout:
    if line.startswith("kernel version "):
        extrainfo['kernelver'] = line.split()[3]
    ...
self.write_test_keyval(extrainfo)
</pre>

In addition to test-level key/value pairs, per-iteration key/value information (e.g. performance metrics) can be recorded:

# <code>self.write_attr_keyval(attr_dict)</code> - Store test attributes (string data). Test attributes are limited to 100 characters.
# <code>self.write_perf_keyval(perf_dict)</code> - Store test performance metrics (numerical data). Performance values must be floating-point numbers.
# <code>self.write_iteration_keyval(attr_dict, perf_dict)</code> - Store both attributes and performance data.
{{admon/note|Not displayed|These extra data are not yet displayed anywhere in the log, but you can access them in the test results directory.}}
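For instance, a performance metric could be recorded like this one-line sketch; the key name is arbitrary, and the value must be a float:

<pre>
self.write_perf_keyval({'max_transfer_speed': 42.5})
</pre>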


==== Log files and scratch data ====


For any test execution several logs are created. The default log is {{filename|<test_id>.html}}; it is linked and emailed to any configured recipients, and it is intended for a wide audience.


There is also a {{filename|full.log}} file. It contains everything that has been logged throughout the test execution (the default log may have some lines filtered out, but they should always be present in this log).


The most detailed log is the ''debug log'', placed at {{filename|debug/client.DEBUG}} in the root test results directory. Into that file autotest automatically logs everything printed on the screen and the full output of any commands you run.


If you want to store a custom file (like your own log), just save it to the <code>self.resultsdir</code> directory. All those files will be saved at the end of the test.

If you need to store some scratch data that should be discarded when the test finishes, use the <code>self.tmpdir</code> directory for that.
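For example, saving a custom log next to the other results might look like this sketch ({{filename|my_tool.log}} is a hypothetical name):

<pre>
import os

log_path = os.path.join(self.resultsdir, 'my_tool.log')
log_file = open(log_path, 'w')
log_file.write(output)  # 'output' gathered earlier in the test
log_file.close()
</pre>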
==== Posting feedback into Bodhi ====
After the result of a test is known, it can be sent into Bodhi. You need to manually call the <code>post_results()</code> method at the end of your test and provide a <code>bodhi</code> parameter:
<pre>
        # Report test result to Bodhi
        # 'name' variable contains Bodhi update title relevant for this test
        self.post_results(bodhi = {'title': name})
</pre>
==== ResultsDB ====

Results are also automagically stored into the ResultsDB <i>FIXME: add URL of resultsdb frontend</i>. The only thing you need to take care of is some 'identification' data. For example, when your test deals with packages/builds, you want to store <code>envr</code>, and maybe <code>kojitag</code> or other useful information. To store this information, add appropriate key-value pairs to the <code>self.detail.keyvals</code> dictionary:

<pre>
# taken from rpmguard
self.detail.keyvals = {'kojitag': kojitag, 'envr': nvr}
</pre>

Even though the key-value pairs are entirely up to you, please make sure that you store the envr(s) of the tested packages/builds/whatever-has-envr under the 'envr' key. You can also store multiple values under the same key by making the value a list of strings:

<pre>
self.detail.keyvals = {'envr': ['foo', 'bar', 'foobar']}
</pre>

===== Tests with multiple results per testrun =====

Some tests produce more results in one testrun (e.g. Depcheck). If you want to store them separately in ResultsDB, make sure to set the <code>self.is_multiresult</code> attribute to True. Then you can create multiple <code>TestDetail</code> instances (e.g. one for each tested package/update/whatever), fill them out (don't forget the <code>test_detail.keyvals</code>), and post them using the <code>self.post_results()</code> method.

<pre>
...
all_test_details = []
for build in all_builds_to_test:
    td = TestDetail(self, id = build['nvr'], arch = arch)
    all_test_details.append(td)
    ...
    td.result = "PASSED"
    td.keyvals = {'envr': build['nvr'], ...}
    ...
for td in all_test_details:
    self.post_results(td)
...
self.test_detail.keyvals = {'envr': [build['nvr'] for build in all_builds_to_test]}
...
</pre>

This will create multiple 'Testrun' records in the ResultsDB, one for each TestDetail created & reported, and one 'Job' record which encapsulates all the 'Testruns'.


== How to run AutoQA tests ==
=== Run your test ===


This depends on the event your test is supposed to run under. Let's assume that it is <code>post-koji-build</code>.
<pre>
/usr/share/autoqa/post-koji-build/watch-koji-builds.py --dry-run
</pre>


{{admon/important| --local | It is important to add the <code>--local</code> parameter. If you don't, the test will fail to run unless you have an [[Install and configure autotest|Autotest server]] installed and configured.}}
== Debugging problems ==
If your test behaves incorrectly in certain cases, you can use the Python debugger even when the test is run through the AutoQA/Autotest framework. Add an <code>import pdb; pdb.set_trace()</code> line where you want to stop the program and begin debugging:
<pre>
    @ExceptionCatcher()
    def run_once(self, **kwargs):
        super(self.__class__, self).run_once()
        do_some_stuff()
        import pdb
        pdb.set_trace()
        do_some_other_stuff()
</pre>
Then [[#Run your test|run your test]] with the <code>--local</code> option. You will get a Pdb shell at the chosen line:
<pre>
16:02:08 INFO | > /usr/share/autotest/client/site_tests/rpmlint/rpmlint.py(51)run_once()
16:02:08 INFO | -> self.log('%s\n%s\n%s' % ('='*40, nvr, '='*40))
help
16:02:22 INFO | (Pdb)
16:02:22 INFO | Documented commands (type help <topic>):
16:02:22 INFO | ========================================
16:02:22 INFO | EOF    bt        cont      enable  jump  pp      run      unt 
16:02:22 INFO | a      c          continue  exit    l    q        s        until
16:02:22 INFO | alias  cl        d        h      list  quit    step    up   
16:02:22 INFO | args  clear      debug    help    n    r        tbreak  w   
16:02:22 INFO | b      commands  disable  ignore  next  restart  u        whatis
16:02:22 INFO | break  condition  down      j      p    return  unalias  where
16:02:22 INFO |
16:02:22 INFO | Miscellaneous help topics:
16:02:22 INFO | ==========================
16:02:22 INFO | exec  pdb
16:02:22 INFO |
16:02:22 INFO | Undocumented commands:
16:02:22 INFO | ======================
16:02:22 INFO | retval  rv
16:02:22 INFO |
list
16:02:37 INFO | (Pdb)  46          super(self.__class__, self).run_once()
16:02:37 INFO |  47 
16:02:37 INFO |  48          import pdb
16:02:37 INFO |  49          pdb.set_trace()
16:02:37 INFO |  50          # add header
16:02:37 INFO |  51  ->         self.log('%s\n%s\n%s' % ('='*40, nvr, '='*40))
16:02:37 INFO |  52 
16:02:37 INFO |  53          # download packages
16:02:37 INFO |  54          pkgs = self.koji.get_nvr_rpms(nvr, self.tmpdir, debuginfo=False,
16:02:37 INFO |  55                                        src=True)
16:02:37 INFO |  56          # run rpmlint
</pre>
For a 10-minute introduction to using Pdb, read [http://pythonconquerstheuniverse.wordpress.com/category/python-debugger/ Debugging in Python].
== Providing documentation ==
Every test should have at least basic documentation written in its {{filename|control}} file and in its test object. But if you intend to create a test for public consumption (not just for your own needs), you should provide proper and detailed documentation on the wiki.

All AutoQA test documentation is in [[:Category:AutoQA tests]], and the page name should be ''AutoQA tests/<Test>'', where the test name starts with an uppercase letter. These pages will be linked from the log of every executed test.

See the existing documentation for inspiration on how to create your own.


== References ==

<references/>

Latest revision as of 15:18, 14 March 2012

Introduction

Here's some info on writing tests for AutoQA. There's four parts to a test: the test code, the test object, the Autotest control file, and the AutoQA control file. Typically they all live in a single directory, located in the tests/ dir of the autoqa source tree.

Start with a test
Before considering integrating a test into AutoQA or Autotest, create a working test. Creating a working test should not require knowledge of autotest or autoqa. This page outlines the process of integrating an existing test into AutoQA.
No support for per-package tests
AutoQA currently supports only generic Fedora tests (tests that concern packages - or other objects - in general). We have plans to support per-package tests in the future (e.g. tests for sshd, tests for gimp, etc). Until AutoQA support exists, consider merging your test into upstream's test suite, or including your test in the Fedora rpm package, and have it executed during %check or ask the package maintainer to manually execute tests and provide results during bodhi updates.

Write test code first

I'll say it again: Write the test first. The tests don't require anything from autotest or autoqa. You should have a working test before you even start thinking about AutoQA.

You can package up pre-existing tests or you can write a new test in whatever language you're comfortable with. It doesn't even need to return a meaningful exit code if you don't want it to (even though it is definitely better). You'll handle parsing the output and returning a useful result in the test object.

If you are writing a brand new test, there are some python libraries that have been developed for use in existing AutoQA tests. More information about this will be available once these libraries are packaged correctly, but they are not necessary to write your own tests. You can choose to use whatever language and libraries you want.

The test directory

Create a new directory to hold your test. The directory name will be used as the test name, and the test object name should match that. Choose a name that doesn't use spaces, dashes, or dots. Underscores are acceptable.

Drop your test code into the directory - it can be a bunch of scripts, a tarball of sources that may need compiling, whatever.

Next, from the directory autoqa/doc/, copy template files control.template, control.autoqa.template and test_class.py.template into your test directory. Rename them to control, control.autoqa and [testname].py, respectively.

The control file

The control file defines some metadata for this test and executes the test. Here's an example control file:

control file for conflicts test

DOC = """
This test runs potential_conflict from yum-utils to check for possible
file / package conflicts.
"""
MAINTAINER = "Martin Krizek <mkrizek@redhat.com>"

job.run_test('conflicts', **autoqa_args)

Required data

The following control file items are required for valid AutoQA tests:

  • DOC: A verbose description of the test - its purpose, the logs and data it will generate, and so on.
  • MAINTAINER: The contact person for this test. If it doesn't work or there are some problems, then we know who to contact.

Launching the test object

Most tests will have a line in the control file like this:

job.run_test('conflicts', **autoqa_args)

This will create a 'conflicts' test object (see below) and pass along all required variables.

autoqa_args
A dictionary, containing all the event-specific variables (e.g. kojitag for post-koji-build event). Documentation on these is to be found in events/[eventname]/README files. Some more variables may be also present, as described in the template file.

Those variables will be inserted into the control file by the autoqa test harness when it's time to schedule the test.

The control.autoqa file

The control.autoqa file allows a test to define any scheduling requirements or modify input arguments. This file will decide whether to run this test at all, on what architectures/distributions it should run, and so on. It is evaluated on the AutoQA server before the test itself is scheduled and run on AutoQA client.

All variables available in control.autoqa are documented in doc/control.autoqa.template. You can override them to customize your test's scheduling. Basically you can influence:

  • Which event the test runs for and under which conditions.
  • The type of system the test needs. This includes system architecture, operating system version and whether the system supports virtualization (see autotest labels for additional information)
  • Data passed from the event to the test object.

Here is example control.autoqa file:

# this test can be run just once and on any architecture,
# override the default set of architectures
archs = ['noarch']

# this test may be destructive, let's require a virtual machine for it
labels = ['virt']

# we want to run this test just for post-koji-build event;
# please note that 'execute' defaults to 'False' and to have
# the test scheduled, control.autoqa needs to complete with
# 'execute' set to 'True'
if event in ['post-koji-build']:
    execute = True

Similar to the control file, the control.autoqa file is a Python script, so you can execute conditional expressions, loops or virtually any other Python statements there. However, it is heavily recommended to keep this file as simple as possible and put all the logic to the test object.

Test Object

The test object is a python file that defines an object that represents your test. It handles the setup for the test (installing packages, modifying services, etc), running the test code, and sending results to Autotest (and other places).

Convention holds that the test object file - and the object itself - should have the same name as the test. For example, the conflicts test contains a file named conflicts.py, which defines a conflicts class, as follows:

from autotest_lib.client.bin import utils
from autoqa.test import AutoQATest
from autoqa.decorators import ExceptionCatcher

class conflicts(AutoQATest):
    ...

The name of the class must match the name given in the run_test() line of the control file, and test classes must be subclasses of the AutoQATest class. But don't worry too much about how this works - the test_class.py.template contains the skeleton of an appropriate test object. Just change the name of the file (and class!) to something appropriate for your test.

AutoQATest base class

This class contains the functionality common to all the tests. When you override some of its methods (like setup(), initialize() or run_once()) it is important to call the parent method first.

The most important attribute of this class is detail (an instance of TestDetail class) that is used for storing all test outcomes.

The most important methods include log() that you are advised to use for logging test output and post_results() that you can use for changing the way your test results are reported (if you're not satisfied with the default behavior).

Whenever your test crashes, the process_exception() method will automatically catch the exception and log it (more on that in the next section).

ExceptionCatcher decorator

When an unintended exception is raised during test setup (setup()), initialization (initialize()) or execution (run_once()) and ExceptionCatcher() decorator is used for these methods (the default), it calls the process_exception() method instead of simply crashing. In this way we are able to operate and submit results even of crashed tests.

When such event occurs, the test result is set to CRASHED, the exception traceback is added into the test output and the exception info is put into the test summary. Then the results are reported in a standard way (by creating log files and sending emails). Finally the original exception is re-raised.

If a different recovery procedure than process_exception() is desired, you may define the method and provide the method name as an argument to the decorator. For example, see below:

def my_exception_handler(self, exc = None):
   '''do something different'''

@ExceptionCatcher('self.my_exception_handler')
def run_once(self, **kwargs):
    ...
**kwargs parameter
Because of some nasty Autotest magic, it is required to have the **kwargs argument in the decorated function. This is because Autotest magic can not find out what is the correct subset of arguments from **autoqa_args to pass, so it passes them all - which causes error, if you don't have them all listed.

You are advised to call the original process_exception() method inside your custom handler, if applicable.

Test stages

setup()

This is an optional method of the test class. This is where you make sure that any required packages are installed, services are started, your test code is compiled, and so on. For example:

    @ExceptionCatcher()
    def setup(self):
        retval = utils.system('yum -y install httpd')
        assert retval == 0
        if utils.system('service httpd status') != 0:
            utils.system('service httpd start')

initialize()

This does any pre-test initialization that needs to happen. AutoQA tests typically uses this method to initialize various structures, set self.detail.id and similar attributes. This is an optional method.

All basic initialization is done in the AutoQATest class, so check it out, before you re-define it.

Call AutoQATest.initialize
If you re-implement the initialize function, make sure, that you call super(self.__class__, self).initialize(config) inside it, so all the required initialization is executed

run_once()

This is where the test code actually gets run. It's the only required method for your test object.

In short, this method should build the argument list, run the test binary and process the test result and output. For example, see below:

    @ExceptionCatcher()
    def run_once(self, baseurl, parents, name, **kwargs):
        super(self.__class__, self).run_once()

        cmd = './potential_conflict.py --tempcache --newest ' \
              '--repofrompath=target,%s --repoid=target' % baseurl

        out = utils.system_output(cmd, retain_output=True)
        self.log(out, printout=False)

This above example will run the command potential_conflict.py and save its output. It will raise CmdError if the command ends with non-zero exit code.

If you need to receive just the exit code of the command, use utils.system() method instead.

Additionally, if you need both the exit code and command output, use the built-in utils.run() method:

    cmd_result = utils.run(cmd, ignore_status=True, stdout_tee=utils.TEE_TO_LOGS, stderr_tee=utils.TEE_TO_LOGS)
    output = cmd_result.stdout
    retval = cmd_result.exit_status

Useful test object attributes

AutoQATest instances have the following attributes available[1]:

outputdir       eg. results/<job>/<testname.tag>
resultsdir      eg. results/<job>/<testname.tag>/results
profdir         eg. results/<job>/<testname.tag>/profiling
debugdir        eg. results/<job>/<testname.tag>/debug
bindir          eg. tests/<test>
src             eg. tests/<test>/src
tmpdir          eg. tmp/<tempname>_<testname.tag>

Test Results

The AutoQATest class provides a detail attribute to be used for storing test results. This is an instance of TestDetail class and serves as a container for everything related to the test outcome.

If your test tests several independent things at once, you can create several TestDetail objects and then submit results for all of them manually. For example upgradepath test uses that for reporting results for every proposed Bodhi update. But this is really advanced stuff, see upgradepath or depcheck tests for inspiration.

ID

Every test run should contain a string identification of the test in self.detail.id. This doesn't have to be unique, but it should be descriptive enough for the log reader to quickly understand what has been tested. For RPM-based tests this will be probably a build NVR. For Bodhi update-based tests this will be probably a Bodhi update title. For yum repository-based tests this will be a repository name or address. And so on.

Usually the ID is set in initialize() method (ASAP) because it is not changed throughout the test:

    @ExceptionCatcher()
    def initialize(self, config, nvr, **kwargs):
        super(self.__class__, self).initialize(config)
        self.detail.id = nvr

Architecture

If your test is not an architecture-independent test (aka noarch test, the default), you must set the architecture of tested items in self.detail.arch. That is usually also done in initialize() method.

Overall Result

The overall test result is stored in self.detail.result. You should set it in run_once() according to the result of your test. You can choose from these values:

  • PASSED - the test has passed, there is no problem with it
  • INFO - the test has passed, but there is some important information that a relevant person would very probably like to review
  • FAILED - the test has failed, requirements are not met
  • NEEDS_INSPECTION (default)- the test has failed, but a relevant person is needed to inspect it and possibly may waive the errors
  • ABORTED - some third party error has occurred (networking error, external script used for testing has crashed, etc) and the test could not complete because of that. Re-running this test with same input arguments could solve this problem.
  • CRASHED - the test has crashed because of a programming error somewhere in our code (test script or autoqa code). Close inspection is necessary to be able to solve this issue.

If no value is set in self.detail.result, a value of NEEDS_INSPECTION is used.

If an exception occurs, and is caught by the ExceptionCatcher decorator (i.e. you don't catch it yourself), self.detail.result is set to CRASHED.

Using ABORTED result properly

If you want to end your test with ABORTED result, simple set self.detail.result and then re-raise the original exception. self.detail.summary will be filled-in automatically (extracted from the exception message), if empty.

try:
    //download from Koji
except IOError, e: //or some other error
    self.detail.result = 'ABORTED'
    raise

If you don't have any exception to re-raise but still want to end the test, again set self.detail.result, but this time be sure to also provide an explanation in self.detail.summary and then end the test by raising autotest_lib.client.common_lib.error.TestFail. Alternatively you can provide the error explanation as an argument to the TestFail class instead of filling in self.detail.summary.

from autotest_lib.client.common_lib import error
foo = //do some stuff
if foo == None:
    self.detail.result = 'ABORTED'
    raise error.TestFail('No result returned from service bar')

Summary

The self.detail.summary should contain a few words summarizing the test output. It is then used in the log overview and in the email subject. E.g. for conflicts test it can be "69 packages with file conflicts". Or for rpmlint test it can be "3 errors, 5 warnings". Don't repeat test name, test result or test ID in here.

Output

Log any test output you want to save (to be emailed and stored in a log file on the server) by using self.log() method. Usually this is used for saving important information throughout the test. For rpmlint test this may include information which RPM packages will be downloaded and tested, and then the very output of the rpmlint command.

self.log('Build to be tested: %s' % nvr)
cmd = '...'
output = utils.system_output(cmd, retain_output=True)
self.log(output, printout=False)
...
self.log('Found %d errors in the command output' % errors, stderr=True)

The output is stored in self.detail.output variable. You can modify it if needed (for example filter out some lines you don't want to see in the log), but you're highly discouraged from adding new content to that variable directly (use log() method instead).

Know when to print
You can also use print statements in your code, but note that these lines won't be visible in the test log (they will be in the debug log). Use them for printing low-level details that can potentially help you find a problem later. If you want something in between (not visible in the test log, but more accessible than the debug log), consider using self.log(message, output=False). That won't show up in the test log, but it will be visible in the full log.

Highlights

If you want to emphasize a certain line or set of lines in your output, you can highlight them. That enables the reader to easily spot warnings and errors amongst hundreds of lines of output. It is usually done like this:

self.log('RPM checksum invalid', highlight=True)

You can also provide a descriptive comment to the highlighted lines:

self.log('= Problems overview =\n %s' % problems, highlight='This test will not pass until all of these problems are solved.')

If you need to highlight a line that has already been logged (e.g. part of a command's output), you can assign directly to the self.detail.highlights variable (it is a list of (line, comment) tuples).
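A sketch, assuming 'bad_line' holds a line that is already present in the logged output:

# each tuple is (line, comment); the comment describes the highlight
self.detail.highlights = [(bad_line, 'This error caused the test to fail')]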

Additional log fields

You can provide additional log fields to be displayed in your log. Explore the self.detail.log variable (an instance of the PrettyLog class), especially these attributes (a short sketch follows the list):

  • summary_detail - allows you to provide additional detail for your summary section
  • kojitag/bodhi_title/bodhi_id - allow you to provide more information about which Koji tag/Bodhi update this test is related to
  • custom_fields - allows you to specify any custom field for the log overview section
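A hedged sketch; the attribute names come from the list above, but the exact value types are assumptions:

# extra detail shown under the summary section
self.detail.log.summary_detail = 'failures are limited to the i686 build'
# relate this run to the Koji tag being tested ('kojitag' is assumed to
# come from the test arguments)
self.detail.log.kojitag = kojitag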

Extra Data

Further test-level information can be returned by using test.write_test_keyval(dict). The following example demonstrates extracting and saving the kernel version used when running a test:

extrainfo = dict()
# iterate over the lines of the saved stdout
for line in self.results.stdout.splitlines():
    if line.startswith("kernel version "):
        # the token index depends on the exact line format
        extrainfo['kernelver'] = line.split()[3]
self.write_test_keyval(extrainfo)

In addition to test-level key/value pairs, per-iteration key/value information (e.g. performance metrics) can be recorded:

  1. self.write_attr_keyval(attr_dict) - stores test attributes (string data); test attributes are limited to 100 characters [2]
  2. self.write_perf_keyval(perf_dict) - stores test performance metrics (numerical data); performance values must be floating-point numbers
  3. self.write_iteration_keyval(attr_dict, perf_dict) - stores both attributes and performance data (see the sketch below)
Not displayed
These extra data are not yet displayed anywhere in the log, but you can access them in the test results directory.
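A sketch of recording one iteration's data with write_iteration_keyval(); the keys and values are hypothetical:

# attributes must be strings, performance values must be floats
self.write_iteration_keyval({'test_variant': 'default-config'},
                            {'throughput_mbps': 87.3})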

Log files and scratch data

For any test execution, several logs are created. The default log is <test_id>.html; it is linked and emailed to any configured recipients, and it is intended for a general audience.

There is also full.log. It contains everything that has been logged throughout the test execution (the default log may have some lines filtered out, but they should always be present in this log).

The most detailed log is the debug log at debug/client.DEBUG in the root of the test results directory. Into that file autotest automatically logs everything printed on the screen and the full output of any commands you run.

If you want to store a custom file (like your own log), just save it to the self.resultsdir directory. All files there will be saved at the end of the test.

If you need to store some scratch data that should be discarded when the test finishes, use the self.tmpdir directory for that.
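A sketch covering both locations; 'extra_report' is a hypothetical string produced by the test:

import os

# a custom file saved together with the test results
with open(os.path.join(self.resultsdir, 'extra-report.txt'), 'w') as f:
    f.write(extra_report)

# scratch space, discarded when the test finishes
scratch = os.path.join(self.tmpdir, 'unpacked-rpms')
os.mkdir(scratch)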

Posting feedback into Bodhi

After the result of a test is known, it can be sent to Bodhi. You need to manually call the post_results() method at the end of your test and provide the bodhi parameter.

        # Report test result to Bodhi
        # 'name' variable contains Bodhi update title relevant for this test
        self.post_results(bodhi = {'title': name})

ResultsDB

Results are also automatically stored into ResultsDB (FIXME: add URL of resultsdb frontend). The only thing you need to take care of is the 'identification' data. For example, when your test deals with packages/builds, you want to store the envr, and perhaps the kojitag or other useful information.

To store this information, add appropriate key-value pairs to the self.detail.keyvals dictionary.

# taken from rpmguard
self.detail.keyvals = {'kojitag': kojitag, 'envr': nvr}

Although the choice of key-value pairs is entirely up to you, please make sure that you store the envr(s) of the tested packages/builds/whatever-has-an-envr under the 'envr' key.

You can also store multiple values under the same key by making the value a list of strings:

self.detail.keyvals = {'envr': ['foo', 'bar', 'foobar']}

Tests with multiple results per testrun

Some tests produce multiple results in one testrun (e.g. Depcheck). If you want to store them separately in ResultsDB, make sure to set the self.is_multiresult attribute to True.

Then you can create multiple TestDetail instances (e.g. one for each tested package/update/whatever), fill them in (don't forget the test_detail.keyvals), and post them using the self.post_results() method.

...
all_test_details = []
for build in all_builds_to_test:
    td = TestDetail(self, id = build['nvr'], arch = arch)
    all_test_details.append(td)
    ...
    td.result = "PASSED"
    td.keyvals = {'envr': build['nvr'], ...}
    ...
for td in all_test_details:
    self.post_results(td)
...
self.test_detail.keyvals = {'envr': [build['nvr'] for build in all_builds_to_test]}
...

This will create multiple 'Testrun' records in ResultsDB, one for each TestDetail created and reported, and one 'Job' record, which encapsulates all the 'Testruns'.

How to run AutoQA tests

Install AutoQA from GIT

First of all, you'll need to check out some version from git. You can either use master or a tagged release.

To check out the master branch:

git clone git://git.fedorahosted.org/autoqa.git autoqa
cd autoqa

To check out a tagged release:

git clone git://git.fedorahosted.org/autoqa.git autoqa
cd autoqa
git tag -l
# now you'll get a list of tags; at the time of writing this document, the latest tag was v0.3.5-1
git checkout -b v0.3.5-1 tags/v0.3.5-1

Add your test

The best way to add your test into the directory structure is to create a new branch, copy your test in, and install autoqa with make:

git checkout -b my_new_awesome_test
cp -r /path/to/directory/with/your/test ./tests
make clean install
Dependencies
It's possible that make install will fail due to missing Python modules (e.g. turbogears2). In that case, install them using yum.

Run your test

This depends on the event your test is supposed to run under. Let's assume that it is post-koji-build.

/usr/share/autoqa/post-koji-build/watch-koji-builds.py --dry-run

This command will show you current Koji builds, e.g.:

No previous run - checking builds in the past 3 hours
autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12
autoqa post-koji-build --kojitag dist-f11-updates-candidate --arch x86_64 kdemultimedia-4.3.4-1.fc11
autoqa post-koji-build --kojitag dist-f11-updates-candidate --arch x86_64 kdeplasma-addons-4.3.4-1.fc11
autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 cryptopp-5.6.1-0.1.svn479.fc12
autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 drupal-6.15-1.fc12
autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 seamonkey-2.0.1-1.fc12
... output trimmed ...

So to run your test, just select one of the lines and add the parameters --test name_of_your_test --local, which will locally execute the test you just wrote. If you wanted to run rpmlint, for example, the command would be:

autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12 --test rpmlint --local
--local
It is important to add the --local parameter. If you don't, the test will fail to run unless you have an Autotest server installed and configured.

Debugging problems

If your test behaves incorrectly in certain cases, you can use the Python debugger even when the test is run through the AutoQA/Autotest framework. Add an import pdb; pdb.set_trace() line where you want to stop the program and begin debugging:

    @ExceptionCatcher()
    def run_once(self, **kwargs):
        super(self.__class__, self).run_once()

        do_some_stuff()

        import pdb
        pdb.set_trace()

        do_some_other_stuff()

Then run your test with the --local option. You will be given a Pdb shell at the chosen line:

16:02:08 INFO | > /usr/share/autotest/client/site_tests/rpmlint/rpmlint.py(51)run_once()
16:02:08 INFO | -> self.log('%s\n%s\n%s' % ('='*40, nvr, '='*40))
help
16:02:22 INFO | (Pdb) 
16:02:22 INFO | Documented commands (type help <topic>):
16:02:22 INFO | ========================================
16:02:22 INFO | EOF    bt         cont      enable  jump  pp       run      unt   
16:02:22 INFO | a      c          continue  exit    l     q        s        until 
16:02:22 INFO | alias  cl         d         h       list  quit     step     up    
16:02:22 INFO | args   clear      debug     help    n     r        tbreak   w     
16:02:22 INFO | b      commands   disable   ignore  next  restart  u        whatis
16:02:22 INFO | break  condition  down      j       p     return   unalias  where 
16:02:22 INFO | 
16:02:22 INFO | Miscellaneous help topics:
16:02:22 INFO | ==========================
16:02:22 INFO | exec  pdb
16:02:22 INFO | 
16:02:22 INFO | Undocumented commands:
16:02:22 INFO | ======================
16:02:22 INFO | retval  rv
16:02:22 INFO | 
list
16:02:37 INFO | (Pdb)  46  	        super(self.__class__, self).run_once()
16:02:37 INFO |  47  	
16:02:37 INFO |  48  	        import pdb
16:02:37 INFO |  49  	        pdb.set_trace()
16:02:37 INFO |  50  	        # add header
16:02:37 INFO |  51  ->	        self.log('%s\n%s\n%s' % ('='*40, nvr, '='*40))
16:02:37 INFO |  52  	
16:02:37 INFO |  53  	        # download packages
16:02:37 INFO |  54  	        pkgs = self.koji.get_nvr_rpms(nvr, self.tmpdir, debuginfo=False,
16:02:37 INFO |  55  	                                      src=True)
16:02:37 INFO |  56  	        # run rpmlint

For a 10-minute introduction to using Pdb, read Debugging in Python.

Providing documentation

Every test should have at least basic documentation written in its control file and in its test object. But if you intend to create a test for public consumption (not just for your own needs), you should provide proper and detailed documentation on these wiki pages.

All AutoQA test documentation is in Category:AutoQA tests, and the page name should be AutoQA tests/<Test>, where the test name starts with an uppercase letter. These pages will be linked from the log of every executed test.

See the existing documentation for inspiration when creating your own.

References

Links

Autotest documentation