From Fedora Project Wiki

Revision as of 02:05, 6 June 2013 by Tflink (talk | contribs) (initial info on using beaker with fedora qa automation/taskbot)

Not done yet
Still gathering information; this page will be updated as more becomes available. Please let me know if anything is incorrect.

General Information

Communication and Community

  • how many projects are using it
    • Red Hat is using it internally
  • how old is it
  • how many active devs from how many orgs
    • I count 7 people with active code reviews on their Gerrit instance; my guess is around 10 active developers, mostly Red Hat employees
  • quality of docs
    • Not bad, getting better
  • how much mailing list traffic is there?
    • not much on beaker-devel, ~20 posts/month or so
  • what is the bug tracker?
    • RHBZ
  • what is the patch process?
    • submission to and review through their gerrit instance
  • what is the RFE process?
    • presumably discussion on beaker-devel; the process is not explicitly documented

High level stuff

  • how tightly integrated are the components
    • Pretty tightly integrated
  • what license is the project released under
    • GPL2+
  • how much is already packaged in fedora
    • some of it is; the packages should be installable on EL6 or Fedora machines without much issue, and the Beaker devs maintain their own yum repo for packages

API

  • what mechanism does the api use (xmlrpc, json-rpc, restful-ish etc.)
    • XML-RPC as the main API, RESTful XML for the alternative harness implementation
  • can you schedule jobs through the api
    • yes, by uploading job XML through XML-RPC
  • what scheduling params are available through the api
    • system type and configuration are included in the job XML; otherwise, jobs simply go into a queue
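As a rough illustration of the above, here is a sketch of submitting a job over XML-RPC. The hub URL, authentication handling, and the exact shape of the job XML are assumptions; the jobs.upload method name is modeled on how the bkr client talks to the hub. Note that system requirements and task parameters are carried inside the job XML itself.

```python
# Sketch of scheduling a Beaker job over XML-RPC. Hub URL and auth are
# assumptions; a real client would authenticate before uploading.
import xmlrpc.client


def build_job_xml(distro, task, arch="x86_64"):
    """Build a minimal Beaker job description. Distro/system requirements
    and task parameters are embedded directly in the XML."""
    return """\
<job>
  <whiteboard>example job</whiteboard>
  <recipeSet>
    <recipe>
      <distroRequires>
        <distro_name op="=" value="{distro}"/>
        <distro_arch op="=" value="{arch}"/>
      </distroRequires>
      <hostRequires>
        <system_type value="Machine"/>
      </hostRequires>
      <task name="{task}" role="STANDALONE">
        <params>
          <param name="EXAMPLE_PARAM" value="example-value"/>
        </params>
      </task>
    </recipe>
  </recipeSet>
</job>""".format(distro=distro, arch=arch, task=task)


def submit_job(hub_url, job_xml):
    # Assumed endpoint and method name; the job goes into the queue once
    # accepted, since there is no separate scheduling-parameter call.
    server = xmlrpc.client.ServerProxy(hub_url)
    return server.jobs.upload(job_xml)


job_xml = build_job_xml("Fedora-18", "/distribution/reservesys")
# submit_job("https://beaker.example.com/RPC2", job_xml)
```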

Results

  • how flexible is the schema for the built in results store
  • what data is stored in the default result
  • is there a difference between failed execution and status based on result analysis
  • what kinds of analysis are supported

VM management

  • does it work with any external systems (ovirt, openstack etc.)
    • oVirt/RHEV-M currently; OpenStack support is planned
  • does it support rapid cloning
    • once OpenStack support is added, this will become more feasible
  • how are vms configured post-spawn
    • not really; configuration is handled in the kickstart
  • control over vm configuration (vnc/spice, storage type etc.)
    • yes, through the job xml
  • ephemeral client support?
    • in a way, yes. all clients are re-provisioned for every task
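Since post-spawn configuration happens through the kickstart rather than a separate config step, extra setup is expressed as kickstart content inside the job XML. The recipe fragment below is a sketch; the ks_appends element name and ks_meta attribute follow my reading of Beaker's job XML schema and should be treated as assumptions.

```python
# Sketch: post-provision configuration in Beaker rides along in the
# kickstart. The <ks_appends> element (assumed here per Beaker's job XML
# schema) appends content to the generated kickstart for the recipe.
def recipe_with_ks_append(extra_ks):
    """Return a recipe fragment that appends lines to the kickstart."""
    return """\
<recipe ks_meta="method=http">
  <ks_appends>
    <ks_append><![CDATA[
%post
{extra}
%end
]]></ks_append>
  </ks_appends>
</recipe>""".format(extra=extra_ks)


fragment = recipe_with_ks_append("echo 'configured at provision time' > /etc/motd")
```

The fragment would be embedded in a recipeSet of a full job description before upload.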

Test harness

  • base language
    • the default harness is beah, which is written in Python
  • how tightly integrated is it with the system as a whole
    • not 100% sure, but it looks rather tightly coupled
  • are any non-primary harnesses supported
    • not quite yet but there are interfaces to support non-beah harnesses and autotest support is in the works

Test execution

  • how are tests stored
    • inside RPMs that are uploaded to the lab controller
  • support for storing tests in vcs
    • not directly at this time
  • method for passing data into test for execution
    • in the job description XML
  • how are parameters stored for post-failure analysis
  • support for replaying a test
    • can clone a previous job
  • can tests be executed locally in a dev env with MINIMAL setup
  • external log shipping?
  • how tightly integrated is result reporting
  • what kind of latency is there between tests?
    • systems have to be re-provisioned between jobs, so the minimum latency would be the time needed to spin up a new OpenStack instance (once that's supported)
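The "replay by cloning" point above can be sketched as fetching a previous job's XML from the hub and resubmitting it. The taskactions.to_xml and jobs.upload method names, the "J:<id>" job addressing, and the hub URL are assumptions modeled on how the bkr client interacts with the hub.

```python
# Sketch of cloning (replaying) a previous Beaker job: fetch its XML and
# resubmit it. Because systems are re-provisioned per job, resubmitting the
# stored XML replays the stored parameters from a clean state.
import xml.etree.ElementTree as ET
import xmlrpc.client


def prepare_clone(job_xml, whiteboard=None):
    """Optionally retitle a fetched job description before resubmission."""
    root = ET.fromstring(job_xml)
    if whiteboard is not None:
        wb = root.find("whiteboard")
        if wb is None:
            wb = ET.SubElement(root, "whiteboard")
        wb.text = whiteboard
    return ET.tostring(root, encoding="unicode")


def clone_job(hub_url, job_id, whiteboard=None):
    # Hub URL and method names are assumptions; jobs are addressed as
    # "J:<number>" strings in Beaker's task ID scheme.
    server = xmlrpc.client.ServerProxy(hub_url)
    job_xml = server.taskactions.to_xml("J:%d" % job_id)
    return server.jobs.upload(prepare_clone(job_xml, whiteboard))
```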