What is ResultsDB
ResultsDB is a database in which we'll store the results of tests executed by the AutoQA framework.
Motivation behind ResultsDB
At the moment, all the results are stored on the autoqa-results mailing list [1]. Although this is not a bad system for storing the results, searching for a particular result (even a query as simple as "All results for foobar-1.3-fc15") is not really comfortable. ResultsDB's main purpose is not only to store the data, but to provide easy access to the results via a simple querying API.
This means that, using ResultsDB as the database backend, many 'frontends' can be created: simple tools which just aggregate the most recent results for a specified package (maybe a table inside the koji/bodhi info pages), a statistical tool for gathering the fail/pass ratio of *insert test name of your choice* for a given Fedora release, etc.
Current state
From a development point of view, there is a quite well-defined database schema [2], an 'input' API [3], and a proof-of-concept TurboGears2 application [4] implementing the two.
I have also started to work on a proof-of-concept frontend [5], which will allow us to create test plans (like the Package Update Acceptance Test Plan [6]) using mediawiki pages to define the test plan's requirements [7][8] (this is covered in more detail further in the text).
API
We have a quite well-defined 'input' API - i.e. the API for storing data into ResultsDB. The 'output' API still needs to be designed. I have not gone into that particular area much yet; my plan was to sneak ResultsDB reporting into production to gain a reasonable dataset, and then adjust the output API based on the actual needs of our frontend(s).
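To give a feel for it, reporting a result might look roughly like the sketch below. This is only an illustration, not the real input API - the actual calls are defined in [3], and the client, method, and key names here are made up:

<pre>
# Hypothetical sketch of a test reporting its result into ResultsDB.
# Class and method names are illustrative only - see [3] for the real API.

class ResultsDBClient(object):
    """Imaginary thin client around the ResultsDB 'input' API."""
    def __init__(self, url):
        self.url = url

    def start_testrun(self, testname):
        # would create a new Testrun row and return its id
        return 1

    def set_keyval(self, testrun_id, key, value):
        # would store one TestrunData key-value pair
        pass

    def end_testrun(self, testrun_id, result):
        # would store the overall result, e.g. "PASSED"/"FAILED"
        pass

client = ResultsDBClient("http://resultsdb.example.org/")
run = client.start_testrun("rpmguard")
client.set_keyval(run, "envr", "foobar-1.3-fc15")
client.set_keyval(run, "arch", "x86_64")
client.end_testrun(run, "PASSED")
</pre>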
Note: we all decided against an 'SQL-based' API, and prefer a set of specific filter methods with monitorable (?) arguments.
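In other words, rather than letting frontends pass raw SQL, the output API would expose a fixed set of query methods with named arguments - something along these lines (all names invented, since the output API is not designed yet):

<pre>
# Hypothetical shape of the 'output' API: specific filter methods with
# named arguments instead of free-form SQL. All names here are invented.

def get_results(testname=None, envr=None, arch=None, since=None):
    """Return all results matching the given filters,
    e.g. get_results(envr="foobar-1.3-fc15")."""
    raise NotImplementedError

def get_latest_result(testname, envr):
    """Return only the most recent result of one test for one build."""
    raise NotImplementedError
</pre>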
Also, once the Fedora Message Bus is up and running (if ever), we definitely want to send announcements via the bus too.
Thoughts behind the database schema
It's safe to ignore the 'Job' idea at the moment - the idea behind Jobs was that we wanted to be able to internally group several test runs (e.g. rpmguard & conflicts for some particular package) into one job. This option is a) IMHO not really feasible with the current way of scheduling tests, and b) easily substituted with the 'generic testplan frontend' (see below).
Other than that, our primary goal was versatility - we wanted to be able to store results not only from the tests we had at the moment, but to meet the needs of 'almost anything'.
The 'Testrun' table corresponds to a single execution of a test; all the data not covered by its attributes (such as the tested package/update, architecture, etc.) are stored in 'TestrunData' as key-value pairs. Metadata for each test should define the set of used key-value pairs (with possible information about 'always present' and 'sometimes present' keys) - e.g. [9].
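For illustration, the key-value data of a single rpmguard run might look roughly like this (the key names are examples only - the test's metadata page [9] is the authoritative list):

<pre>
# Illustrative 'TestrunData' key-value pairs for one rpmguard testrun.
# Which keys are 'always present' vs. 'sometimes present' would be
# declared in the test's metadata (e.g. [9]); these names are examples.
testrun_data = {
    "envr": "foobar-1.3-fc15",               # tested package build
    "arch": "x86_64",                        # tested architecture
    "kojitag": "dist-f15-updates-candidate", # e.g. a 'sometimes present' key
}
</pre>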
Testruns can be divided into phases (e.g. the setup phase of the test, downloading packages from koji...) - hence the 'TestrunPhase' table. If the test developer decides to split the test into multiple phases, each phase can have its own result. Metadata can also be used to specify how to treat failures/warnings/etc. from each phase - a warning in the setup phase might be less important than a warning in run-once, etc.
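A hypothetical sketch of how per-phase results and such metadata could combine into an overall result (the schema [2] defines the actual tables; the metadata format below is invented):

<pre>
# Hypothetical per-phase results of one testrun. The test's metadata
# could declare that warnings in certain phases are not important when
# computing the overall result.
phase_results = {"setup": "WARNING", "run-once": "PASSED"}
ignore_warnings_in = set(["setup"])  # e.g. taken from the test's metadata

overall = "PASSED"
for phase, result in phase_results.items():
    if result == "WARNING" and phase in ignore_warnings_in:
        continue  # metadata says this warning does not affect the outcome
    if result != "PASSED":
        overall = result
# overall is "PASSED" here, despite the warning during setup
</pre>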
Links
- [1] https://fedorahosted.org/pipermail/autoqa-results/
- [2] https://fedoraproject.org/wiki/AutoQA_resultsdb_schema
- [3] https://fedoraproject.org/wiki/AutoQA_resultsdb_API
- [4] https://www.assembla.com/code/resultsdb/git/nodes?rev=devel
- [5] https://www.assembla.com/code/resultsdb_puatp/git/nodes?rev=master
- [6] https://fedoraproject.org/wiki/QA:Package_Update_Acceptance_Test_Plan
- [7] https://fedoraproject.org/wiki/QA:Test_Plan_Metadata_Test_Page
- [8] https://fedoraproject.org/wiki/User:Jskladan/Sandbox:Package_Update_Acceptance_Test_Plan_Metadata
- [9] https://fedoraproject.org/wiki/User:Jskladan/Sandbox:Rpmguard_Testcase_Metadata