{{admon/caution|Draft|This page is only a draft and will change over time.}}
{{admon/tip|See ResultsDB schema|Knowledge from [[AutoQA_resultsdb_schema]] may be required.}}
== Syntax Description ==
<code>method_name (arg1, [arg2, arg3 = "Foo"]) -> return_value</code>
* <code>method_name</code> ~ name of the respective method (see [[#Methods]])
* <code>arg1</code> ~ required argument
* <code>arg2</code> ~ optional argument, default value is None
* <code>arg3</code> ~ optional argument, default value is "Foo"
* <code>-> return_value</code> ~ the method returns return_value
{{admon/caution|XMLRPC Specifics|Please note that the XMLRPC interface does not have 'named' parameters. This means that if you want to skip a parameter, you still need to insert a 'placeholder' (i.e. its default value).}}
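Because of this, a client typically fills every skipped optional argument with its placeholder before making the call. A minimal Python sketch of that idea; the concrete default values listed for end_testrun here are assumptions for illustration, not part of this specification:

```python
# Hypothetical placeholder defaults for end_testrun's optional arguments,
# in declaration order -- the real defaults are defined by the server.
END_TESTRUN_DEFAULTS = [("keyval_pairs", {}), ("summary", ""),
                        ("highlights", ""), ("outputs", ""), ("score", 0)]

def positional_args(required, defaults, **given):
    """Build the flat positional argument list XMLRPC expects: every
    skipped optional argument is replaced by its placeholder value."""
    args = list(required)
    for name, default in defaults:
        args.append(given.get(name, default))
    return args

# Skipping everything except score still sends placeholders for the rest:
args = positional_args([42, "PASSED", "http://example.org/log"],
                       END_TESTRUN_DEFAULTS, score=3)
# args == [42, "PASSED", "http://example.org/log", {}, "", "", "", 3]
```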
== Methods ==
=== start_job ===
<code>start_job ([testplan_url]) -> job_id</code>
''Params''
* <code>testplan_url</code> ~ link to the wiki page with metadata (useful for frontends)
''Returns''
* <code>job_id</code> ~ job identifier for the Job <-> Testrun relationship.
Intended to be used mostly by the AutoQA scheduler, when one needs to logically connect the results of several tests for one package/repo/...
The job_id value will then be passed to the test, probably via the control file (i.e. as another argument for <code>job.run()</code>).
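As a concrete illustration, the call can be made with Python's standard XMLRPC client. The sketch below spins up a toy in-process server that merely hands out consecutive job identifiers, standing in for the real ResultsDB endpoint (whose URL is deployment-specific and assumed here):

```python
import itertools
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# Toy stand-in server: its start_job just hands out consecutive job ids.
# A real deployment exposes the actual ResultsDB backend instead.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
_job_ids = itertools.count(1)
server.register_function(lambda testplan_url="": next(_job_ids), "start_job")
threading.Thread(target=server.handle_request, daemon=True).start()

# Client side: what an AutoQA scheduler would do against the real URL.
host, port = server.server_address
proxy = xmlrpc.client.ServerProxy("http://%s:%s/" % (host, port))
job_id = proxy.start_job("http://fedoraproject.org/wiki/QA:Some_test_plan")
# job_id == 1
```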
=== start_testrun ===
<code>start_testrun (testcase_url, [keyval_pairs, job_id]) -> testrun_id</code>
''Params''
* <code>testcase_url</code> ~ link to the wiki page with metadata (useful for frontends)
* <code>keyval_pairs</code> ~ optional argument, dictionary (JSON?) of key-value pairs to be stored
* <code>job_id</code> ~ optional argument. If set, a new record will be created in the Job <-> Testrun relationship table.
''Returns''
* <code>testrun_id</code> ~ identifier of the record inside the Testrun table.
Used to create a new entry in the Testrun table. Sets the start_time and, if job_id was set, creates a new entry in the Job <-> Testrun relationship table. Returns testrun_id, which is required as an argument for almost every other method; testrun_id is the key identifying the relationship between the Testrun table and the other tables in the database.
=== end_testrun ===
<code>end_testrun (testrun_id, result, log_url, [keyval_pairs, summary, highlights, outputs, score])</code>
''Params''
* <code>testrun_id</code> ~ Testrun identifier (see [[#start_testrun]])
* <code>result</code> ~ PASSED, FAILED, ABORTED, ... (see <<Result>> at [[AutoQA_resultsdb_schema#Result|ResultsDB schema]]); if an unknown value is passed, NEEDS_INSPECTION is set
* <code>log_url</code> ~ URL pointing to logs etc. (most probably in the Autotest storage)
* <code>keyval_pairs</code> ~ dictionary (JSON?) of key-value pairs to be stored (see [[#store_keyval]])
* <code>summary</code> ~ ? not sure right now, probably the name of the file with the summary, which can be found at <code>log_url</code>
* <code>highlights</code> ~ ? not sure right now, probably the name of the file that will contain a 'digest' of the logs (created by the test by selecting appropriate error/warn messages etc.), which can be found at <code>log_url</code>
* <code>outputs</code> ~ logged (and possibly somewhat filtered) record of stdout/stderr
* <code>score</code> ~ optional score. This can be any number, and the test decides how to use it. It can represent the number of errors, or some other metric, such as performance for performance tests.
Be aware that while storing keyval pairs, all non-string keys and all values that are not strings (or lists/tuples of strings) are skipped without further notice.
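The fallback for an unrecognised result value can be sketched like this. The exact set of valid result strings lives in the ResultsDB schema; the list below is an assumption for illustration:

```python
# Assumed set of valid result values -- consult the ResultsDB schema
# (<<Result>>) for the authoritative list.
VALID_RESULTS = {"PASSED", "FAILED", "INFO", "ABORTED", "NEEDS_INSPECTION"}

def normalize_result(result):
    """Fall back to NEEDS_INSPECTION when an unknown value is passed."""
    return result if result in VALID_RESULTS else "NEEDS_INSPECTION"
```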
=== start_phase ===
<code>start_phase (testrun_id, name)</code>
''Params''
* <code>testrun_id</code> ~ Testrun identifier (see [[#start_testrun]])
* <code>name</code> ~ name of the phase, used for display in the frontends.
Some tests may be divided into a number of phases. Phases may be nested, but you can only ever end the "most recently started" phase (see [[#Phases_-_nested]]).
Each phase has its own result (see [[#end_phase]]), but it does not directly influence the Testrun result (i.e. you still need to set <code>result</code> in [[#end_testrun]]).
=== end_phase ===
<code>end_phase (testrun_id, result)</code>
''Params''
* <code>testrun_id</code> ~ Testrun identifier (see [[#start_testrun]])
* <code>result</code> ~ PASSED, FAILED, INFO, ... (see <<Result>> at [[AutoQA_resultsdb_schema#Result|ResultsDB schema]])
Ends the "most recently started" phase. The <code>result</code> is used only for frontend purposes and does not in any way directly influence the Testrun result (at least for API purposes).
=== store_keyval ===
<code>store_keyval (testrun_id, keyval_pairs)</code>
''Params''
* <code>testrun_id</code> ~ Testrun identifier (see [[#start_testrun]])
* <code>keyval_pairs</code> ~ dictionary (JSON?) of key-value pairs to be stored.
Be aware that while storing keyval pairs, all non-string keys and all values that are not strings (or lists/tuples of strings) are skipped without further notice.
Keyval pairs are the required/recommended/other additional data specific to each type of test (package test/repo test/install test/..., see [[AutoQA_resultsdb_schema#Default_key-values_for_basic_test_classes]]); one can of course add any other keyval pairs for one's own frontend etc.
These values, represented by a dictionary, will be parsed and stored as separate entries in the TestrunData table.
Keys must be strings; values can be either a string or a list of strings.
'''Examples'''
* <code>{"key1" : "value1"}</code> will be saved as one record.
* <code>{"arch" : ["i686", "x86_64"]}</code> will create two rows (<code>"arch":"i686"</code> and <code>"arch":"x86_64"</code>)
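The expansion and silent-skip rules above can be sketched as:

```python
def expand_keyvals(keyval_pairs):
    """Turn a keyval dictionary into the individual rows that would be
    stored in TestrunData: one row per string value, one row per string
    element of a list/tuple value.  Non-string keys and unsupported
    values are silently skipped, as described above."""
    rows = []
    for key, value in keyval_pairs.items():
        if not isinstance(key, str):
            continue                      # non-string key: skipped
        if isinstance(value, str):
            rows.append((key, value))
        elif isinstance(value, (list, tuple)):
            rows.extend((key, v) for v in value if isinstance(v, str))
        # anything else (int, dict, ...) is skipped without notice
    return rows

rows = expand_keyvals({"key1": "value1", "arch": ["i686", "x86_64"], 5: "oops"})
# rows == [("key1", "value1"), ("arch", "i686"), ("arch", "x86_64")]
```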
== Workflows ==
=== Simple ===
<pre>
testrun_id = start_testrun ("http://fedoraproject.org/wiki/QA:Some_test_page")
end_testrun (testrun_id, "PASSED", log_url)
</pre>
=== Phases - simple ===
<pre>
testrun_id = start_testrun ("http://fedoraproject.org/wiki/QA:Some_test_page")
start_phase (testrun_id, "First phase")
end_phase (testrun_id, "PASSED")
start_phase (testrun_id, "Second phase")
end_phase (testrun_id, "PASSED")
end_testrun (testrun_id, "PASSED", log_url)
</pre>
=== Phases - nested ===
<pre>
testrun_id = start_testrun ("http://fedoraproject.org/wiki/QA:Some_test_page")
start_phase (testrun_id, "First phase")
start_phase (testrun_id, "Second phase")
end_phase (testrun_id, "PASSED")
end_phase (testrun_id, "PASSED")
end_testrun (testrun_id, "PASSED", log_url)
</pre>
''Note: This means phases may be nested, but they may not partially overlap (phase1 may not end while phase2 is active).''
=== Using Job ===
<pre>
job_id = start_job ()
testrun_id = start_testrun ("http://fedoraproject.org/wiki/QA:Some_test_page", job_id)
start_phase (testrun_id, "First phase")
end_phase (testrun_id, "PASSED")
end_testrun (testrun_id, "PASSED", log_url)
testrun_id = start_testrun ("http://fedoraproject.org/wiki/QA:Some_other_test_page", job_id)
start_phase (testrun_id, "First phase")
end_phase (testrun_id, "PASSED")
end_testrun (testrun_id, "PASSED", log_url)
</pre>
[[Category:ResultsDB Legacy]]