[Linux-HA] Announce: Hawk (HA Web Konsole)
tserong at novell.com
Sat Jan 16 06:14:16 MST 2010
This is to announce the development of the Hawk project,
a web-based GUI for Pacemaker HA clusters.
So, why another management tool, given that we already have
the crm shell, the Python GUI, and DRBD MC? In order:
1) We have the usual rationale for a GUI over (or in addition
to) a CLI tool; it is (or should be) easier to use, for
a wider audience.
2) The Python GUI is not always easily installable/runnable
(think: sysadmins with Windows desktops and/or people who
don't want to, or can't, forward X).
3) Believe it or not, there are a number of cases where,
citing security reasons, site policy prohibits ssh access
to servers (which is what DRBD MC uses internally).
There are also some differing goals; Hawk is not intended
to expose absolutely everything. There will be a point somewhere
where you have to say "and now you must learn to use a shell".
Likewise, Hawk is not intended to install the base cluster
stack for you (whereas DRBD MC does a good job of this).
It's early days yet (no downloadable packages), but you can
get the current source as follows:
# hg clone http://hg.clusterlabs.org/pacemaker/hawk
# cd hawk
# hg update tip
This will give you a web-based GUI with a display roughly
analogous to crm_mon, in terms of status of cluster resources.
It will show you running/dead/standby nodes, and the resources
(clones, groups & primitives) running on those nodes.
It does not yet provide information about failed resources or
nodes, other than the fact that they are not running.
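For reference, the three resource types mentioned above (primitives,
groups, and clones) can be created with the crm shell roughly as
follows. The resource names and parameter values here are purely
illustrative, not anything Hawk requires:

# crm configure primitive p-ip ocf:heartbeat:IPaddr2 params ip=10.0.0.100
# crm configure primitive p-web ocf:heartbeat:apache
# crm configure group g-web p-ip p-web
# crm configure primitive p-ping ocf:pacemaker:ping params host_list=10.0.0.1
# crm configure clone c-ping p-ping

Hawk's status display then shows each of these under the node(s)
on which it is running.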
Display of nodes & resources is collapsible (collapsed by
default), but if something breaks while you are looking at it,
the display will expand to show the broken nodes and/or
resources.
Hawk is intended to run on each node in your cluster. You
can then access it by pointing your web browser at the IP
address of any cluster node, or the address of any IPaddr(2)
resource you may have configured.
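If you want a floating address of that sort, a minimal IPaddr2
resource might be configured roughly like this (the address, netmask
and resource name are placeholders for your own values):

# crm configure primitive hawk-ip ocf:heartbeat:IPaddr2 \
      params ip=192.168.1.100 cidr_netmask=24 \
      op monitor interval=10s

Pointing your browser at that address then reaches whichever node
the resource is currently running on.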
Minimally, to see it in action, you will need the following
packages and their dependencies (names per openSUSE/SLES):
Once you've got those installed, run the following command:
Then, point your browser at http://your-server:3000/ to see
the status of your cluster.
Ultimately, hawk is intended to be installed and run as a
regular system service via /etc/init.d/hawk. To do this,
you will need the following additional packages:
Then, try the following, but READ THE MAKEFILE FIRST!
"make install" (and the rest of the build system for that
matter) is frightfully primitive at the moment:
# sudo make install
# /etc/init.d/hawk start
Then, point your browser at http://your-server:4444/ to see
the status of your cluster.
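If you'd rather check from a terminal that the service is answering
before firing up a browser, something along these lines should do
(hostname is a placeholder for one of your cluster nodes):

# curl -s -o /dev/null -w '%{http_code}\n' http://your-server:4444/

A 200 response indicates Hawk is up and serving the status page.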
Assuming you've read this far, what next?
- In the very near future (but probably not next week,
because I'll be busy at linux.conf.au) you can expect to
see further documentation and roadmap info published online.
- Immediate goal is to obtain feature parity with crm_mon
(completing status display, adding error/failure messages).
- Various pieces of scaffolding need to be put in place (login
page, access via HTTPS, clean up build/packaging, theming, etc.).
- After status display, the following major areas of
functionality are planned:
- Basic operator tasks (stop/start/migrate resource,
standby/online node, etc.)
- Explore failure scenarios (shadow CIB magic to see
what would happen if a node/resource failed).
- Ability to actually configure resources and nodes.
Please direct comments, feedback, questions, etc. to
tserong at novell.com and/or the Pacemaker mailing list.
Thank you for your attention.
Tim Serong <tserong at novell.com>
Senior Clustering Engineer, Novell Inc.