We run a continuous build of the Ganeti unittests and vcluster QA using buildbot. The official Ganeti buildbot runs on ~okeanos, powered by Synnefo and Ganeti! We kindly thank GRNET for their support.
Link to the web interface: http://buildbot.ganeti.org/ganeti/tgrid?length=25
As usual with buildbot, there are two parts: the master and the slaves.
Setup of the master machine is straightforward (it only needs to run buildbot), although the global buildbot configuration itself is non-trivial.
The setup of the slaves, however, is more complex: each slave needs to be able to build Ganeti with all the required dependencies, which are anything but trivial.
Currently we have the following slaves defined:
Feel free to prepare a new slave and ask us to add it to buildbot! The only requirement is to be able to build Ganeti with all or most of its dependencies (we can make exceptions if needed).
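Creating the slave side is then roughly the following (a sketch; the slave name, password and directory are placeholders that have to be agreed upon with us, and 9989 is buildbot's usual slave port):

```shell
# create the slave directory; name and password must match what is
# configured on the master (the values here are only examples)
buildslave create-slave /srv/buildbot/slaves/mynewslave \
    buildbot.ganeti.org:9989 mynewslave somepassword

# start it
buildslave start /srv/buildbot/slaves/mynewslave
```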
In addition to just building Ganeti and running the unit tests, there is also a QA build on a virtual cluster defined in buildbot. A rough overview of the process is:
ssh into the QA machine
There are a number of requirements for this setup to work:
the buildbot user on the buildslave has to be able to connect as root, without a password, to the QA machine
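Setting up that passwordless root login can be sketched as follows (the QA hostname is a placeholder):

```shell
# on the buildslave, as the buildbot user: generate a key without a
# passphrase, if one does not exist yet
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# install the public key for root on the QA machine
# (qa.example.org stands in for the actual QA host)
ssh-copy-id root@qa.example.org

# verify: this must succeed without any password prompt
ssh -o BatchMode=yes root@qa.example.org true
```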
Currently we have the following QA machines defined:
See also the detailed description on how to set up a vcluster.
Apart from the virtual cluster, we also run QA on a real cluster. The cluster is formed of 3 Debian wheezy VMs. These machines use KVM as the hypervisor and a private network on
eth1 as the replication network. The master IP and the instance IPs also all live in the 192.168.0/24 network on
The QA is run from the wheezy buildslave (snf-14476.vm.okeanos.grnet.gr).
The users (who can cancel builds, force builds, etc.) are managed by hand (as we shouldn't need any user besides a generic one), in
/srv/buildbot/masters/ganeti/htpasswd. Note that htpasswd (from apache2-utils) needs to be run with
-d, as buildbot 0.8.6p1 doesn't yet support md5/sha digests.
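Adding a user then looks like this (the account name is just an example):

```shell
# -d selects crypt() encryption, which this buildbot version can read;
# "qauser" is a placeholder account name
htpasswd -d /srv/buildbot/masters/ganeti/htpasswd qauser
```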
While the slaves are set up via slack, not everything is automated.
Initial setup means just installing slack and pointing
/etc/slack.conf to localhost (assuming you ssh into the machines with an rsync daemon running on your own machine, exported to the slave on the correct port through the -R parameter of ssh):
… SOURCE=rsync://localhost/slack …
Then afterwards running slack should be enough.
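In practice this can look like the following (a sketch; the slave hostname is a placeholder, and 873 is the standard rsync port):

```shell
# forward the local rsync daemon (port 873) to the slave, so that
# rsync://localhost/slack on the slave reaches our own machine
ssh -R 873:localhost:873 root@slave.example.org

# then, in that session on the slave:
slack
```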
Note that for tests involving exclusive storage (the
cluster-exclusive-storage options in the QA configuration) you need to have more than one LVM physical volume in the default volume group.
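Adding a second physical volume could look like this (the device name and volume group name are assumptions; xenvg is a commonly used default volume group for Ganeti):

```shell
# turn a spare partition into an LVM physical volume
pvcreate /dev/vdb1

# add it to the volume group used by Ganeti
vgextend xenvg /dev/vdb1

# verify that the group now spans more than one PV
vgs -o vg_name,pv_count xenvg
```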
On GRNET, the base images come with NetworkManager and a number of other daemons that are useful for desktop use; as the VMs don't have lots of memory, I've disabled them and switched back to plain
On wheezy, pyinotify throws epydoc into a fit, so the solution (urgh!) is to hand-modify
/usr/share/pyshared/pyinotify.py and remove the
class _PyinotifyLogger(logging.getLoggerClass()) definition (this was removed as well in an upstream commit).
The buildmaster is started automatically, as buildbot is installed from Debian packages. Manual starting/stopping/reconfiguring is possible via:
cd /srv/buildbot/masters/ganeti
buildbot stop|start|checkconfig|reconfig
tail -f twistd.log
The wheezy slave is likewise started and stopped automatically (for the same reason, from packages), but the squeeze slave is only started at boot time from
/etc/rc.local. All slaves are located under
/srv/buildbot/slaves, so starting/stopping them is a matter of:
cd /srv/buildbot/slaves/unittests-wheezy64
buildslave stop|start
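The boot-time start for a slave driven from /etc/rc.local can be sketched like this (the slave directory name is a placeholder, not necessarily the actual squeeze slave's directory):

```shell
# example /etc/rc.local entry: start the slave at boot as the
# buildbot user (directory name is an assumption)
su -s /bin/sh -c "buildslave start /srv/buildbot/slaves/unittests-squeeze64" buildbot
```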
The source code is stored in Git under git.ganeti.org, in the buildbot repository.