From Fedora Project Wiki
(Redirected from Talk:Changes/InvokingTests)
- AliVigni: In invocation, why would I want to hardcode absolute paths for test execution, artifacts, and logs? These should be relative paths so that wherever you run things, everything stays in the local workspace: my machine, Jenkins, Taskotron, etc.
- MartinPitt: I reworked the invocation; it was also impractical for tests that run as non-root, and it would potentially have cluttered the root directory with temporary files.
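The two points above can be illustrated with a minimal runner sketch: discover executables under the proposed tests directory, run them one at a time as the current (possibly non-root) user, and keep artifacts in a per-run temporary workspace rather than hardcoded absolute paths. The layout, environment variable, and function names here are assumptions drawn from this discussion, not the final spec:

```python
# Minimal runner sketch (names and layout are assumptions, not the spec):
# run each executable under <tests_root>/<pkgdir>/ one at a time, as the
# current user, with artifacts/logs going to a relocatable per-run workspace.
import os
import subprocess
import tempfile

def run_all_tests(tests_root="/usr/tests"):
    results = {}
    workspace = tempfile.mkdtemp(prefix="test-artifacts-")  # per-run workspace
    for pkgdir in sorted(os.listdir(tests_root)):
        full = os.path.join(tests_root, pkgdir)
        if not os.path.isdir(full):
            continue
        for name in sorted(os.listdir(full)):
            path = os.path.join(full, name)
            if os.path.isfile(path) and os.access(path, os.X_OK):
                # The workspace is passed via the environment (invented
                # variable name), so tests write logs relative to it and any
                # host -- laptop, Jenkins, Taskotron -- can relocate it.
                env = dict(os.environ, TEST_ARTIFACTS=workspace)
                proc = subprocess.run([path], env=env)
                results[f"{pkgdir}/{name}"] = proc.returncode
    return results, workspace
```

This is only a sketch of the invocation shape being discussed; a real runner would also capture output, enforce timeouts, and report results somewhere.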
- tflink: As I understand it, the proposal requires -test subpackages to either have globally unique file names or explicit Conflicts in the spec file. Why not use a subdirectory matching the name from the spec file, e.g. /usr/tests/gzip for the gzip packaged tests? That would make filename conflicts much less likely and would be one less thing for packagers to worry about when including tests.
- MartinPitt: Excellent point; spec changed to /usr/tests/srcpkgname/ to make use of the already unique name space that source packages (a.k.a. spec file names) give us. Will that be sufficient to map a source package to all of its binary packages that contain tests? I.e., "give me all rpms of the gtk+ source that provide tests"?
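The source-to-binary mapping question above can be sketched in pure form. The package metadata here is invented for illustration; in a real tool it would come from rpm or repository metadata (e.g. each binary package's SOURCERPM tag and file list):

```python
# Sketch of the mapping question: which binary rpms built from a given
# source package ship files under /usr/tests/<srcpkgname>/ (the proposed
# layout)? The example metadata below is invented for illustration.

def rpms_with_tests(srcpkg, binary_packages):
    """Return names of binary packages built from `srcpkg` that ship
    files under /usr/tests/<srcpkg>/."""
    prefix = f"/usr/tests/{srcpkg}/"
    return sorted(
        name
        for name, meta in binary_packages.items()
        if meta["source"] == srcpkg
        and any(f.startswith(prefix) for f in meta["files"])
    )

# Invented example metadata:
packages = {
    "gtk3": {"source": "gtk+", "files": ["/usr/lib64/libgtk-3.so.0"]},
    "gtk3-tests": {"source": "gtk+", "files": ["/usr/tests/gtk+/run-smoke"]},
    "glib2-tests": {"source": "glib2", "files": ["/usr/tests/glib2/run"]},
}
```

The /usr/tests/srcpkgname/ convention makes this query possible at all; without it, nothing ties a test file back to its source package.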
- pingou:
- Execute all executable files in /usr/tests/*/ directories one at a time.
This is a nitpick, but people will complain about it, since /usr/tests isn't in the FHS and isn't really a good place for executables; we could suggest using /usr/libexec, which is meant for executables, and put a /test subfolder there.
- I honestly do not see the advantage of packaging the tests. I doubt that most upstream projects are going to release them as a tarball, which means the packagers will have to do that themselves, and then write down the process for executing them. Why not do this with something like Ansible from the start? It makes it easy to list the dependencies (just install them in one task), and specifying how the tests should be run can be done just as easily in Ansible. Packaging the tests also means we would have to go through the FPC to get this approved in the packaging guidelines, for, imho, little benefit.
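A hedged sketch of what the Ansible-based alternative suggested above might look like; the package names, paths, and playbook layout are purely illustrative:

```yaml
# Illustrative only: one playbook both installs the test dependencies and
# runs the tests, so no -test subpackage or FPC guideline change is needed.
# Package names and paths are invented for the example.
- hosts: localhost
  tasks:
    - name: Install test dependencies in one task
      package:
        name: [gzip, make]
        state: present
    - name: Run the upstream test suite
      command: make check
      args:
        chdir: /tmp/gzip-tests   # invented checkout location
```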
- I agree with the above about packaged tests not having enough benefits. Right now we already have problems with slow dnf, downloading too much, huge metadata, and extra-slow dependency solving. This won't be beneficial to our users, yet it would have a negative impact on them. Some tool like fedpkg or similar will be needed anyway; same with configs, dependencies, etc. We will need something else, as rpm can't cover everything. So I think it would be a good idea to leave rpm out of this and not load more weight onto core infrastructure, processes, and user experience.
- Writing standards is hard, particularly because they rarely remain static. However, without a way to check that some target suite, framework, or test conforms to expectations, changes to the standard are inherently opposed. The "easy fix" here is to version each layer so that the layer above can assert expectations, e.g. a versioned test can be checked by its framework, and so on up the layers.
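The layered-versioning idea above can be sketched as a simple compatibility check; the version scheme and names are assumptions for illustration only:

```python
# Sketch of "version each layer": the framework (the layer above) asserts
# that a test declares a standard version it knows how to drive. The
# version scheme and names here are illustrative assumptions.

FRAMEWORK_SUPPORTED = {1, 2}   # standard major versions this framework drives

def check_test_version(declared):
    """Parse a test's declared version string (e.g. "1.3") and assert
    that this framework supports its major version."""
    major = int(declared.split(".")[0])
    if major not in FRAMEWORK_SUPPORTED:
        raise RuntimeError(
            f"test declares standard v{declared}, "
            f"framework supports majors {sorted(FRAMEWORK_SUPPORTED)}"
        )
    return major
```

The same check repeats at each boundary going up: a test is checked by its framework, a framework by whatever drives it.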
- Whatever the tooling is, duplicating more than a moderate amount of execution logic across 100s or 1000s of packages is ripe for disaster. If there's any bug or necessary change, it means fixing the same problem times 1000s of packages. Worse, over time all the copies will tend to diverge from each other, making it even harder. Part of the standard should include a "library" of routines/roles/files, etc. This can then be versioned and therefore asserted (or provided) by higher layers, i.e. make the library package a 'BuildRequires' in the spec.
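In a spec file, the shared-library point might look something like the fragment below; the package name and version are invented for illustration:

```
# Illustrative spec file fragment; "tests-library" and its version are
# invented names. The shared routines are pulled in (and their version
# asserted) by the package, rather than copied into 1000s of spec files.
BuildRequires: tests-library >= 1.0
```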
- A third option: include the tooling choice as part of the versioning standard. Then you can include all three (packaged scripts, ansible, or control) and add more later. E.g., if tests/VERSION ends with "a", do the Ansible thing; if it ends with "b", run the scripts.
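That dispatch could be sketched as follows; the suffix meanings mirror the comment's own example, and the command strings are purely illustrative:

```python
# Sketch of dispatching on a tests/VERSION suffix, per the comment above:
# "a" -> the Ansible thing, "b" -> packaged scripts. The suffixes follow
# the comment's example; the command strings are invented placeholders.

DISPATCH = {
    "a": "ansible-playbook tests/tests.yml",   # invented command line
    "b": "run packaged test scripts",          # invented placeholder
}

def choose_tooling(version_string):
    """Return the tooling implied by the trailing letter of tests/VERSION."""
    suffix = version_string.strip()[-1]
    try:
        return DISPATCH[suffix]
    except KeyError:
        raise ValueError(
            f"unknown tooling suffix {suffix!r} in {version_string!r}"
        )
```

New tooling choices then become new suffixes plus new DISPATCH entries, without breaking existing tests.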