Syntax Description
method_name (arg1, [arg2, arg3 = "Foo"]) -> return_value
method_name
~ name of the respective method (see #Methods)
arg1
~ required argument
arg2
~ optional argument, default value is set to None
arg3
~ optional argument, default value is set to "Foo"
-> return_value
~ the method returns return_value
Methods
start_job
start_job ([testplan_url]) -> job_id
Params
testplan_url
~ link to the wiki page with metadata (useful for frontends)
Returns
job_id
~ job identifier for Job <-> Testrun relationship.
Intended to be used mostly by the AutoQA scheduler, when it needs to logically connect the results of several tests for one package/repo/...
The job_id value will then be passed to the test, probably via the control file (i.e. as another argument for job.run()), as sketched below.
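A rough sketch of that hand-off in an Autotest-style control file; the test name, the resultsdb_job_id argument and the literal job_id value are all made up for illustration:
# Hypothetical control file: the scheduler is assumed to substitute the
# job_id it got from start_job() before the control file is executed.
job_id = 1234  # placeholder; filled in by the scheduler in practice

# Test name and keyword argument are illustrative only.
job.run_test('some_autoqa_test', resultsdb_job_id=job_id)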
start_testrun
start_testrun (testcase_url, [job_id, keyval_pairs]) -> testrun_id
Params
testcase_url
~ link to the wiki page with metadata (useful for frontends)
job_id
~ optional argument. If set, a new record will be created in the Job <-> Testrun relationship table.
keyval_pairs
~ Dictionary (JSON?) of key-value pairs to be stored
Returns
testrun_id
~ identifier of the record inside the Testrun table.
Used to create a new entry in the Testrun table. It sets the start_time and, if job_id was set, creates a new entry in the Job <-> Testrun relationship table. Returns testrun_id, which is required as an argument for almost every other method; testrun_id is the key identifying the relationship between a Testrun and the other tables in the database.
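A minimal call sketch, assuming the API is exposed to tests over XML-RPC (the endpoint URL below is made up; the actual transport is not specified on this page):
from xmlrpc.client import ServerProxy

# Assumed transport and endpoint, for illustration only.
resultsdb = ServerProxy("http://resultsdb.example.org/xmlrpc")

job_id = resultsdb.start_job()
testrun_id = resultsdb.start_testrun(
    "http://fedoraproject.org/wiki/QA:Some_test_page",  # testcase_url
    job_id,                                              # optional job identifier
    {"arch": "x86_64"},                                  # optional keyval_pairs
)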
end_testrun
end_testrun (testrun_id, result, log_url, [keyval_pairs, summary, highlights, outputs, score])
Params
testrun_id
~ Testrun identifier (see #start_testrun)
result
~ PASSED, FAILED, ABORTED, INFO, ... (see <<Result>> at ResultsDB schema); if an invalid value is passed, NEEDS_INSPECTION is set
log_url
~ URL pointing to logs etc. (most probably in the Autotest storage)
keyval_pairs
~ Dictionary (JSON?) of key-value pairs to be stored (see #store_keyval)
summary
~ ? not sure right now, probably the name of a file with a summary, which can be found at log_url
highlights
~ ? not sure right now, probably the name of a file containing a 'digest' of the logs (created by the test by selecting the relevant error/warning messages etc.), which can be found at log_url
outputs
~ Logged (and possibly slightly filtered) stdout/stderr
score
~ Optional score. This can be any number; the test decides how to use it. It can represent the number of errors, or some other metric, such as performance for performance tests.
Be aware that while storing keyval pairs, all non-string keys and values (or, for values, anything other than a string or a list/tuple of strings) are skipped without further notice.
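The skipping rule can be mirrored in a short illustrative helper (this is not the actual server code, just a sketch of the documented behaviour):
def filter_keyvals(keyval_pairs):
    """Keep only the pairs the API is documented to accept:
    string keys, and values that are strings or lists/tuples of strings."""
    kept = {}
    for key, value in keyval_pairs.items():
        if not isinstance(key, str):
            continue  # non-string key: silently skipped
        if isinstance(value, str):
            kept[key] = value
        elif isinstance(value, (list, tuple)) and all(isinstance(v, str) for v in value):
            kept[key] = list(value)
        # anything else (numbers, dicts, mixed lists, ...) is silently skipped
    return kept

# filter_keyvals({"arch": ["i686", "x86_64"], "retries": 3})
# -> {"arch": ["i686", "x86_64"]}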
start_phase
start_phase (testrun_id, name)
Params
testrun_id
~ Testrun identifier (see #start_testrun)
name
~ Name of the phase - used for display in the frontends.
Some tests may be divided into a number of phases. Phases may be nested, but you can always end only the "most recently started" phase (see #Phases_-_nested).
Each phase has its own result (see #end_phase), but it does not directly influence the Testrun result (i.e. you still need to set the result in #end_testrun).
end_phase
end_phase (testrun_id, result)
Params
testrun_id
~ Testrun identifier (see #start_testrun)
result
~ PASSED, FAILED, INFO, ... (see <<Result>> at ResultsDB schema)
Ends the "most recently started" phase. The result is used only for frontend purposes and does not in any way directly influence the Testrun result (at least as far as the API is concerned).
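One way to picture the "most recently started" rule: open phases behave like a stack (last started, first ended). A tiny illustrative sketch of that semantics, not the server implementation:
# Illustration only: phases form a LIFO stack per testrun.
open_phases = []

def start_phase(testrun_id, name):
    open_phases.append(name)

def end_phase(testrun_id, result):
    # end_phase always closes the phase started most recently
    name = open_phases.pop()
    return name, result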
store_keyval
store_keyval (testrun_id, keyval_pairs)
Params
testrun_id
~ Testrun identifier (see #start_testrun)
keyval_pairs
~ Dictionary (JSON?) of key-value pairs to be stored.
Be aware that while storing keyval pairs, all non-string keys and values (or, for values, anything other than a string or a list/tuple of strings) are skipped without further notice.
Keyval pairs are the required/recommended/other additional data specific to each type of test (package test/repo test/install test/...; see AutoQA_resultsdb_schema#Default_key-values_for_basic_test_classes). One can of course add any other keyval pairs, e.g. for one's own frontend.
These values, represented by a dictionary, will be parsed and stored as separate entries in the TestrunData table. Keys have to be strings; values can be either a string or a list of strings.
Examples
{"key1" : "value1"}
will be saved as one record.{"arch" : ["i686", "x86_64"]}
will create two rows ("arch":"i686"
and"arch":"x86_64"
)
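A purely illustrative sketch of how such a dictionary expands into TestrunData rows (not the actual storage code):
def expand_keyvals(keyval_pairs):
    # One (key, value) row per value; lists/tuples produce one row per element.
    rows = []
    for key, value in keyval_pairs.items():
        values = value if isinstance(value, (list, tuple)) else [value]
        for v in values:
            rows.append((key, v))
    return rows

# expand_keyvals({"arch": ["i686", "x86_64"]})
# -> [("arch", "i686"), ("arch", "x86_64")]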
Workflows
Simple
testrun_id = start_testrun ("http://fedoraproject.org/wiki/QA:Some_test_page")
end_testrun (testrun_id, "PASSED", log_url)
Phases - simple
testrun_id = start_testrun ("http://fedoraproject.org/wiki/QA:Some_test_page")
start_phase (testrun_id, "First phase")
end_phase (testrun_id, "PASSED")
start_phase (testrun_id, "Second phase")
end_phase (testrun_id, "PASSED")
end_testrun (testrun_id, "PASSED", log_url)
Phases - nested
testrun_id = start_testrun ("http://fedoraproject.org/wiki/QA:Some_test_page")
start_phase (testrun_id, "First phase")
start_phase (testrun_id, "Second phase")
end_phase (testrun_id, "PASSED")
end_phase (testrun_id, "PASSED")
end_testrun (testrun_id, "PASSED", log_url)
Note: This means phases may be nested, but they may not partially overlap (phase1 may not end while phase2 is active).
Using Job
job_id = start_job ()
testrun_id = start_testrun ("http://fedoraproject.org/wiki/QA:Some_test_page", job_id)
start_phase (testrun_id, "First phase")
end_phase (testrun_id, "PASSED")
end_testrun (testrun_id, "PASSED", log_url)
testrun_id = start_testrun ("http://fedoraproject.org/wiki/QA:Some_other_test_page", job_id)
start_phase (testrun_id, "First phase")
end_phase (testrun_id, "PASSED")
end_testrun (testrun_id, "PASSED", log_url)