jlaska's test ideas
* All updates must include a new changelog entry - someday I'd like to require a bug (or ticket) in the changelog entry, but perhaps that's too aggressive now.
* What MUST sections can we automate from the package review guidelines [2]?
* SPEC file sanity, including ...
** Proper upstream Source URL included in SPEC?
** When are changes to %config files acceptable?
** Is %defattr defined in the SPEC?
** Any sanity tests we can do against the %scripts included in a spec file
** How to handle unapplied %patches?
* License compat review?
* Stripped vs unstripped binaries, is there a preference?
* Validate man pages?
* What existing *lint tools can we run, and what results are acceptable? (rpmlint, elflint, xmllint)
* Any relationship to the new privilege escalation policy [3]?

[2] https://fedoraproject.org/wiki/Packaging:ReviewGuidelines
[3] https://fedoraproject.org/wiki/Privilege_escalation_policy
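As a concrete starting point, here is a minimal sketch of two of the checks above (changelog entry present, *lint tools runnable), driven by the rpm and rpmlint command-line tools. The helper names are hypothetical, not an existing AutoQA interface.

<pre>
import subprocess

def has_changelog_entry(rpm_path):
    """Check that the package ships at least one %changelog entry."""
    out = subprocess.run(["rpm", "-qp", "--changelog", rpm_path],
                         capture_output=True, text=True).stdout
    return bool(out.strip())

def rpmlint_passes(rpm_path):
    """rpmlint exits non-zero when it reports errors."""
    return subprocess.run(["rpmlint", rpm_path]).returncode == 0
</pre>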
wwoods thoughts
- The introduction needs to be clear that this is an acceptance test plan
- All these tests have to pass before we can even think about functional testing of the package.
- Kparal: I'm a little unsure about the terminology, because Wikipedia also describes "acceptance testing" as "functional testing" or "<you-name-it> testing". But I understand what you mean: these are just the basic tests, and more specific tests for each package will follow in the future.
- Specifically: it needs to be clear that when an update has PASSED this test plan, that just means it's ready for real testing. The actual testing of the update is not complete; it has just barely started at this point.
- Maybe the final result of the test plan should reflect this: If all the test cases pass, the package is ACCEPTED, otherwise it's REJECTED (a small aggregation sketch follows this list).
- Kparal: This is perfect, much better than my original terminology. Thanks, replaced.
- Each test case can still use PASS/FAIL, of course.
- NEEDS_INSPECTION is fine as-is.
- If possible, we should have links for each of the listed test cases that outline exactly what's being tested (and/or link to the source code).
- Kparal: Sure, each test case should have its own page with description, I agree. Created ticket #62 (https://fedorahosted.org/fedora-qa/ticket/62).
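To make the ACCEPTED/REJECTED terminology concrete, here is a minimal sketch of aggregating per-test-case results into the final verdict. How NEEDS_INSPECTION feeds into the verdict is an assumption here, not something settled above.

<pre>
def overall_verdict(results):
    """results: list of per-test-case outcomes, each one of
    "PASS", "FAIL" or "NEEDS_INSPECTION"."""
    if "FAIL" in results:
        return "REJECTED"
    # assumption: NEEDS_INSPECTION is resolved by a human reviewer
    # and does not automatically reject the update
    return "ACCEPTED"
</pre>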
failed mandatory test can be brought to FESCO
Seth Vidal suggested that if the package maintainers don't agree with a failed mandatory test (they claim it should pass), the issue can be brought to FESCO. FESCO could, for example, grant an exception for that package or deny the request. -- Kparal 11:58, 5 March 2010 (UTC)
save resources on failed mandatory tests
When some of the mandatory tests fail, should we continue running all the other (mandatory/introspection/advisory) tests, or just stop to save resources? The update is rejected anyway. Also, if we don't stop there, a lot of higher-level tests will fail simply because the package is, for example, not installable. -- Kparal 12:08, 5 March 2010 (UTC)
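For illustration, a minimal sketch of the stop-early behaviour under discussion. The test objects and their run() method are hypothetical, not AutoQA's actual interface.

<pre>
def run_update_tests(update, mandatory, introspection, advisory):
    """Run mandatory tests first; stop early if any of them fails."""
    for test in mandatory:
        if test.run(update) == "FAIL":
            # the update is rejected anyway, so skip the remaining
            # tests instead of spending resources on them
            return "REJECTED"
    for test in introspection + advisory:
        test.run(update)  # results are recorded; these can be waived
    return "ACCEPTED"
</pre>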
rpmlint - warnings threshold
The introspection rpmlint test may have another requirement:
"Warnings count is under certain threshold."
However, adamw said that warnings are just warnings, and their number should not be a reason to disqualify an update. Keep in mind, though, that this is an introspection test and it can be waived. -- Kparal 14:37, 5 March 2010 (UTC)
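If such a threshold were ever adopted, the check could look roughly like the sketch below. Both the summary-line regex and the threshold value are assumptions, not settled details.

<pre>
import re
import subprocess

WARNING_THRESHOLD = 10  # hypothetical value, to be agreed on

def warnings_under_threshold(rpm_path):
    out = subprocess.run(["rpmlint", rpm_path],
                         capture_output=True, text=True).stdout
    # rpmlint ends its output with a summary line such as
    # "1 packages and 0 specfiles checked; 0 errors, 2 warnings."
    match = re.search(r"(\d+) errors?, (\d+) warnings?", out)
    if match is None:
        return False  # unparsable output; needs inspection
    return int(match.group(2)) <= WARNING_THRESHOLD
</pre>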
rpmguard checks as introspection tests
Kparal: Can we move some rpmguard checks (https://fedorahosted.org/autoqa/wiki/RpmguardChecks) into introspection tests?
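For context, rpmguard-style checks diff the old and the new build of a package. Here is a minimal sketch of one such comparison using plain rpm queries; rpmguard's own interface is not shown here.

<pre>
import subprocess

def file_list(rpm_path):
    out = subprocess.run(["rpm", "-qlp", rpm_path],
                         capture_output=True, text=True).stdout
    return set(out.splitlines())

def files_dropped(old_rpm, new_rpm):
    """Files present in the old build but missing from the new one;
    a non-empty result would become NEEDS_INSPECTION (waivable)."""
    return sorted(file_list(old_rpm) - file_list(new_rpm))
</pre>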
rpmlint - no new errors present
Kparal: Instead of a "Rpmlint - no errors present" test case we can have a "Rpmlint - no new errors present" test case, in case maintainers have trouble fixing or whitelisting their existing valid errors.
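A minimal sketch of the "no new errors" idea, assuming rpmlint's usual "<package>: E: <check> ..." error-line format:

<pre>
import subprocess

def rpmlint_errors(rpm_path):
    out = subprocess.run(["rpmlint", rpm_path],
                         capture_output=True, text=True).stdout
    # strip the package name so old and new builds can be compared
    return {line.split(": ", 1)[1]
            for line in out.splitlines() if ": E: " in line}

def no_new_errors(old_rpm, new_rpm):
    # the new build may fix errors, but must not introduce any
    return rpmlint_errors(new_rpm) <= rpmlint_errors(old_rpm)
</pre>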