General Information
Communication and Community
- how many projects are using it
- Red Hat is using it internally
- how old is it
- how many active devs from how many orgs
- I count 7 people with active code reviews on their Gerrit instance; my guess is around 10 active developers, mostly Red Hat employees
- quality of docs
- Not bad, getting better
- how much mailing list traffic is there?
- not much on beaker-devel, ~20 posts/month or so
- what is the bug tracker?
- RHBZ (Red Hat Bugzilla)
- what is the patch process?
- submission to, and review through, their Gerrit instance
- what is the RFE process?
- presumably discussion on beaker-devel; the process is not explicitly documented
High level stuff
- how tightly integrated are the components
- Pretty tightly integrated
- what license is the project released under
- GPLv2+
- how much is already packaged in fedora
- some of it is; the packages should be installable on EL6 or Fedora machines without much issue. The Beaker devs maintain their own yum repo for packages
API
- what mechanism does the api use (xmlrpc, json-rpc, restful-ish etc.)
- XML-RPC as the main API, RESTful XML for the alternative harness implementation
- can you schedule jobs through the api
- yes, by uploading job XML through XML-RPC (a sketch follows at the end of this section)
- what scheduling params are available through the api
- system type and configuration are specified in the job XML; beyond that, jobs simply go into a queue
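To make the submission path concrete, here is a minimal sketch of writing out a job XML and handing it to the bkr client, which (as far as I can tell) uploads it to the scheduler over the same XML-RPC interface. The distro name, task name and whiteboard text are placeholders, and the element names should be checked against the job XML schema in the Beaker docs.

<pre>
import subprocess
import tempfile

# Placeholder job definition; element names follow the Beaker job XML schema
# as I understand it and should be verified against the current docs.
JOB_XML = """\
<job>
  <whiteboard>smoke test submitted from a script</whiteboard>
  <recipeSet>
    <recipe>
      <distroRequires>
        <distro_name op="=" value="Fedora-18"/>
      </distroRequires>
      <hostRequires/>
      <task name="/distribution/install" role="STANDALONE"/>
    </recipe>
  </recipeSet>
</job>
"""

with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
    f.write(JOB_XML)
    job_file = f.name

# bkr job-submit uploads the XML to the scheduler and prints the new job id
# (e.g. J:12345); it needs a configured ~/.beaker_client/config to know
# which Beaker instance to talk to.
subprocess.run(["bkr", "job-submit", job_file], check=True)
</pre>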
Results
- how flexible is the schema for the built in results store
- seems very flexible, there can be any number of sub-results per task
- what data is stored in the default result
- PASS/FAIL, test name, output file and metric (an integer that is stored but not used for anything in Beaker; it can be used for user data)
- is there a difference between failed execution and status based on result analysis
- it does not appear so
- what kinds of analysis are supported
- some basic reporting is supported per task or per job, and the data can be retrieved via XML-RPC (see the sketch below); other, more advanced analysis is done through raw MySQL queries
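As a sketch of the kind of basic reporting that is possible without raw database access, the following pulls the results XML for a job with the bkr client and tallies task results. The job id is a placeholder, and the attribute names are my reading of the results XML, so verify them against real output.

<pre>
import subprocess
import xml.etree.ElementTree as ET
from collections import Counter

# "J:12345" is a placeholder job id.
results_xml = subprocess.run(
    ["bkr", "job-results", "J:12345"],
    check=True, capture_output=True, text=True,
).stdout

# Tally the result attribute (Pass/Fail/Warn) across all tasks in the job.
root = ET.fromstring(results_xml)
tally = Counter(task.get("result", "Unknown") for task in root.iter("task"))

for result, count in sorted(tally.items()):
    print(result, count)
</pre>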
VM management
- does it work with any external systems (ovirt, openstack etc.)
- oVirt/RHEV-M currently, OpenStack soon
- does it support rapid cloning
- once OpenStack support is added, this should become practical
- how are vms configured post-spawn
- they aren't really - configuration is handled in kickstart
- control over vm configuration (vnc/spice, storage type etc.)
- yes, through the job XML (see the sketch at the end of this section)
- ephemeral client support?
- in a way, yes; all clients are re-provisioned for every task
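For a rough idea of what the VM-related knobs in the job XML look like, the fragment built below asks for a virtual machine rather than bare metal and does its post-install configuration through a kickstart append. The element names (hypervisor, ks_appends) are my reading of the schema and should be verified against the Beaker docs.

<pre>
import xml.etree.ElementTree as ET

# Build a recipe fragment: request a KVM guest and append a kickstart %post
# section for post-install configuration.
recipe = ET.Element("recipe")

host_requires = ET.SubElement(recipe, "hostRequires")
ET.SubElement(host_requires, "hypervisor", value="KVM")  # VM instead of bare metal

distro_requires = ET.SubElement(recipe, "distroRequires")
ET.SubElement(distro_requires, "distro_name", op="=", value="Fedora-18")

ks_appends = ET.SubElement(recipe, "ks_appends")
ks_append = ET.SubElement(ks_appends, "ks_append")
ks_append.text = "%post\necho 'configured from kickstart' > /etc/motd\n%end"

ET.SubElement(recipe, "task", name="/distribution/install", role="STANDALONE")

print(ET.tostring(recipe, encoding="unicode"))
</pre>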
Test harness
- base language
- beah, which is written in Python
- how tightly integrated is it with the system as a whole
- not 100% sure but it looks rather tightly coupled
- are any non-primary harnesses supported
- not quite yet, but there are interfaces to support non-beah harnesses, and autotest support is in the works (sketch below)
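Those interfaces are an HTTP/XML API served by the lab controller that a replacement harness can talk to instead of beah. The sketch below only shows fetching the recipe to run; the environment variable names and the URL layout are assumptions on my part and need to be checked against the alternative-harness documentation.

<pre>
import os
import urllib.request
import xml.etree.ElementTree as ET

# Assumed contract: the lab controller URL and recipe id are handed to the
# harness somehow (environment variables used here as placeholders).
lab_controller = os.environ.get("BEAKER_LAB_CONTROLLER_URL", "http://lab.example.com:8000")
recipe_id = os.environ.get("BEAKER_RECIPE_ID", "1234")

with urllib.request.urlopen("%s/recipes/%s/" % (lab_controller, recipe_id)) as resp:
    recipe = ET.fromstring(resp.read())

# A replacement harness would walk the <task> elements, run each one, and
# report status and results back over the same HTTP interface.
for task in recipe.iter("task"):
    print("would run:", task.get("name"))
</pre>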
Test execution
- how are tests stored
- inside rpms that are uploaded to the lab controller
- support for storing tests in vcs
- not directly at this time
- method for passing data into test for execution
- in the job description XML (a task-side sketch follows at the end of this section)
- how are parameters stored for post-failure analysis
- support for replaying a test
- can clone a previous job
- can tests be executed locally in a dev env with MINIMAL setup
- yes; there are some caveats, but in general things should work well enough outside of Beaker as long as you write your tasks correctly
- external log shipping?
- probably; there don't seem to be restrictions on what commands can be run, as long as any needed deps are installed
- how tightly integrated is result reporting
- reporting results to Beaker requires functionality that appears to live only in beah for now; that should change somewhat once autotest integration is complete
- what kind of latency is there between tests?
- systems have to be re-provisioned between jobs, so the minimum latency would be the time needed to spin up a new OpenStack instance (once that's supported)
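Tying the last few points together, here is a rough sketch of the task side: parameters from the job XML show up as environment variables, and results go back to Beaker through the harness helper (rhts-report-result) when it is available, with a plain fallback so the same test can be run in a local dev environment. The TARGET_URL parameter name is hypothetical, and the exact reporting contract should be checked against the beah/rhts docs.

<pre>
import os
import shutil
import subprocess
import sys

# Hypothetical parameter: a <param name="TARGET_URL" .../> entry in the job
# XML would, as I understand beah, be exported to the task like this.
target = os.environ.get("TARGET_URL", "http://localhost")

def run_check(url):
    # Stand-in for the real test logic.
    return url.startswith("http")

result = "PASS" if run_check(target) else "FAIL"

log_path = "/tmp/check.log"
with open(log_path, "w") as log:
    log.write("checked %s: %s\n" % (target, result))

if shutil.which("rhts-report-result"):
    # Running under beah: report test name, PASS/FAIL, a log file and an
    # optional integer metric back to Beaker.
    test_name = os.environ.get("TEST", "/examples/check")
    subprocess.run(["rhts-report-result", test_name, result, log_path, "0"], check=True)
else:
    # Local dev run outside Beaker: just print the outcome.
    print(result, target)
    sys.exit(0 if result == "PASS" else 1)
</pre>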