[Linux-HA] RE: Linux-HA vs Clumanager
rodgersr at yahoo.com
Tue Jun 20 10:21:07 MDT 2006
Alan Robertson <alanr at unix.sh> wrote: Rick Rodgers wrote:
> Thanks for the info. Here is what I have summarized as the
> similarities and differences of the open source RH Clumanager and
> Linux-HA. Let me know if you don't agree or I missed something.
> 2.0 Common Components of Clumanager and Linux-HA
> As might be expected, Linux-HA and Clumanager have some similarities
> of features and architecture. Below are some of these common
> components:
> · Heartbeat: this is usually a daemon that sends heartbeat packets
> across the network (or serial ports) to the other instances of the
> heartbeat daemon to monitor the status (aliveness) of the member
> nodes of the cluster. This is a critical part of the failover
> software: it manages the wellness of the cluster and determines
> whether a failover needs to occur.
> · Service scripts: These are scripts that provide an interface
> for the CMS to the HA services. The scripts are basically designed
> as init.d scripts with start/stop/status options and are used by
> the CMS to control a service. Red Hat requires that these scripts
> conform to System V init conventions.
We also support that option. We call them 'lsb' resource agents. LSB ==
Linux Standards Base.
> Linux-HA, as an example, defines the
> requirements for start and stop as follows:
> This brings the resource instance online and makes it available for
> use. It should NOT terminate before the resource instance has either
> been fully started or an error has been encountered.
> This stops the resource instance. After the "stop" command has
> completed, no component of the resource shall remain active, and it
> must be possible to start it again on the same node or on another
> node; otherwise an error must be returned.
> (See Appendix E for more details)
R2 supports 4 kinds of resource agents - and what you quoted is a
subset of what they do (depending on the type). They are OCF, lsb,
heartbeat (R1), and STONITH (the last is really just a built-in wrapper
for the STONITH plugins you talk about later).
One of the nice things about OCF resource agents is that they can tell
the GUI how they need to be configured, and the GUI uses that to prompt
you for whatever is needed.
So, any resource agent _you_ write is treated just like any resource
agent we write. The GUI automatically knows how to configure your RAs
too. I think it's pretty slick.
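To illustrate, a minimal lsb-style agent is just an init-script
lookalike. Here's a sketch (the 'myserviced' daemon name and paths are
made up):

  #!/bin/sh
  # Sketch of a minimal lsb-style resource agent; names illustrative.
  # Heartbeat invokes it with start/stop/status like an init script.
  case "$1" in
  start)
      /usr/sbin/myserviced || exit 1    # bring the service up
      ;;
  stop)
      killall myserviced 2>/dev/null    # take the service down
      ;;
  status)
      # LSB convention: exit 0 if running, 3 if stopped
      if pidof myserviced >/dev/null; then
          echo "myserviced is running"
      else
          echo "myserviced is stopped"
          exit 3
      fi
      ;;
  *)
      echo "Usage: $0 {start|stop|status}"
      exit 1
      ;;
  esac
  exit 0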
> The CMS upon startup will issue a "start" command for the service
> and verify that the return status is successful. In addition, the CMS
> will periodically monitor the service using the "status" option. If
> the status returns a failure, the CMS will try to restart the
> service. After a specified number of tries to restart the service,
> the CMS will issue the "stop" command and fail over to the standby
> node, starting the service there.
This is somewhat similar, except that we monitor OCF services with a
(typically more powerful) "monitor" operation.
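To give a flavor of it, a monitor operation is declared on the resource
itself in the CIB. A sketch (ids and values are illustrative, and the
exact XML shape varies a bit between 2.0.x releases):

  # Sketch: give resource "myservice" a 10-second health check
  cibadmin -U -o resources -X '
    <primitive id="myservice" class="ocf" provider="heartbeat" type="myservice">
      <operations>
        <op id="myservice-monitor" name="monitor" interval="10s" timeout="30s"/>
      </operations>
    </primitive>'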
Can you be more specific about the "more powerful" part of that response (i.e., give examples)?
> · Fencing: Fencing is the process of locking resources away
> from a node whose status is uncertain. This is often used to prevent
> a state called "split brain". Split brain refers to the situation
> where two members of the cluster think they are the "active" node
> and may attempt to access the shared resource simultaneously. This
> can cause corruption of the shared data. To ensure this condition
> will not occur, before taking over as active node the standby node
> will use fencing if the active node is in an uncertain state. To
> accomplish this the standby node will use the "STONITH" (Shoot The
> Other Node In The Head) process. This process entails killing the
> schizophrenic node by issuing a "power off" command to the power
> controller of the node.
Or something equivalent which stops it dead. It doesn't have to be a
power-off command. In R2 we have a very nice full-power scripting
interface to STONITH as well.
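The stonith command-line tool gives a feel for it. A sketch (the plugin
name and its parameters are illustrative - check stonith(8) and the
plugin's own docs for the real ones):

  stonith -L                  # list the available STONITH plugin types
  stonith -t apcsmart -n      # show the parameters the plugin expects
  # reset a misbehaving node via its power controller:
  stonith -t apcsmart -p "/dev/ttyS0" -T reset node1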
> · Power Controller Plugins: PILS: A generalized Plugin and
> Interface Loading System. These plugins are required to allow the
> CMS to talk to the power controllers for each member in the
> cluster. Linux-HA and Clumanager both claim to support the PILS
> interface specification for plugins.
I thought the version of STONITH they took predated PILS? [It was a
_loong_ time ago]
That may be true. It is just what RH says on its website.
In any case, the PILS spec is for any kind of plugin. The STONITH
plugin spec is a higher-level spec, and we support an
easier-to-configure version of it - version 2. Version 2 allows the
plugins to clue in the GUI on how to configure them - in multiple
languages - mirroring the capability OCF RAs have.
[As I mentioned before the STONITH and PILS code they're using is from
old versions of heartbeat - if you care].
> · IP failover: The clients of a service often access the
> service via a virtual IP. This virtual IP can be failed over as part
> of the service to the standby node, much like any other service.
> · Active/Standby Configurations: Support for active-standby
> and active-active cluster configurations. In an active-active
> configuration, services are spread across more than one node.
> This may become important for Collage when we begin managing large
> numbers of appnodes. Both solutions support this, but we are
> currently using active-standby in Collage.
Heartbeat version 2 supports much more complex configurations than this
- like "n" nodes with more-or-less any kind of failover mode you can
think of.
Can you give examples of what you mean by "any kind of failover mode" ?
Note that active/active in your application may cause you problems _in
your application_. Depends on what your application is, and how it works.
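Going back to the virtual-IP point above: in R2 the virtual IP is just
another resource you define in the CIB. A sketch (the address and ids
are illustrative):

  cibadmin -C -o resources -X '
    <primitive id="virtual-ip" class="ocf" provider="heartbeat" type="IPaddr">
      <instance_attributes id="virtual-ip-attrs">
        <attributes>
          <nvpair id="virtual-ip-addr" name="ip" value="192.168.1.100"/>
        </attributes>
      </instance_attributes>
    </primitive>'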
> · XML based: Both use an XML-based syntax for defining services
> and configuring the services.
> · Failure Detection: Each can be configured to detect a
> failure within 1-2 seconds.
From what I know about both, I suspect that we can do this reliably
under more kinds of conditions than they can. But, it may not matter
either ;-). We have an R1 customer who claims that they can fail over
in something like 0.25 seconds. R2 can probably detect the failure just
as fast, but it might take a little longer than that to react.
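For what it's worth, the detection speed is set by timing knobs in
ha.cf. A sketch tuned for roughly 2-second detection (values are
illustrative):

  # Sketch: timing knobs for ~2-second failure detection
  cat >> /etc/ha.d/ha.cf <<'EOF'
  # send a heartbeat every half second
  keepalive 500ms
  # warn in the logs after 1 second of silence
  warntime 1
  # declare a peer dead after 2 seconds of silence
  deadtime 2
  # allow extra slack while a node is booting
  initdead 30
  EOF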
> 3.0 Comparison of Linux-HA and Clumanager
> Clumanager and Linux-HA support "locational" constraints for
> services. These allow the user to specify the preferred node the
> services will be run on. In addition, the failback node can also be
> specified. Linux-HA provides a much richer set of options for
> services. For example, in addition to locational constraints, the
> following constraints can be defined:
> Ordering constraints: start before / start after
> Co-locational constraints: where services can/cannot be
> placed relative to other services
> Linux-HA also supports the concept of service "clones". This is
> the idea of having separate instances of the same service
> running on different member nodes. For example, you can have N
> instances of an IP service and have them distributed throughout
> the cluster for load balancing. Clumanager does not support
> this concept.
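To make the constraint and clone ideas concrete, here's roughly how
they look in the R2 CIB (a sketch - the element and attribute names
are from memory of the 2.0.x series and may differ slightly):

  # Ordering: start "web" only after "database" has started
  cibadmin -C -o constraints -X \
    '<rsc_order id="web-after-db" from="web" to="database" type="after"/>'

  # Co-location: keep "web" on the same node as "virtual-ip"
  cibadmin -C -o constraints -X \
    '<rsc_colocation id="web-with-ip" from="web" to="virtual-ip" score="INFINITY"/>'

  # A clone running up to 3 copies of an IP service across the cluster
  cibadmin -C -o resources -X '
    <clone id="ip-clone">
      <instance_attributes id="ip-clone-attrs">
        <attributes>
          <nvpair id="ip-clone-max" name="clone_max" value="3"/>
        </attributes>
      </instance_attributes>
      <primitive id="cloned-ip" class="ocf" provider="heartbeat" type="IPaddr2">
        <instance_attributes id="cloned-ip-attrs">
          <attributes>
            <nvpair id="cloned-ip-addr" name="ip" value="192.168.1.200"/>
          </attributes>
        </instance_attributes>
      </primitive>
    </clone>'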
Another thing we do (they probably do this too) is resource groups. A
resource group is a list of resources with default ordering dependencies,
each on the resource above it, and implicit co-location constraints.
You can also create groups with only colocation, or only ordering
dependencies if you like. [They probably don't have all these options.]
Something else which is related to clones: we also support master/slave
resources - good for replication-type resources, etc. But, there is an
additional feature necessary for making them completely useful which
isn't in yet (sigh).
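A group, for comparison, is just its members listed in order. A sketch
(resource names and ids illustrative):

  # Sketch: a two-member resource group (implicit order + colocation)
  cibadmin -C -o resources -X '
    <group id="web-group">
      <primitive id="group-ip" class="ocf" provider="heartbeat" type="IPaddr">
        <instance_attributes id="group-ip-attrs">
          <attributes>
            <nvpair id="group-ip-addr" name="ip" value="192.168.1.101"/>
          </attributes>
        </instance_attributes>
      </primitive>
      <primitive id="group-app" class="lsb" type="myservice"/>
    </group>'
  # Master/slave resources are wrapped much the same way, in a
  # <master_slave> element instead of <group>.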
> To protect against a single point of failure, it is important
> that each heartbeat have a completely separate hardware path.
> Both solutions recommend configuring one heartbeat
> path as a serial connection and the other as LAN-based.
In R2, it's probably the case that serial isn't a good choice in many
cases. Particularly not for larger clusters.
> Linux-HA uses this network heartbeat communication process to
> provide status ("keep alive ping") and also to manage the nodes of
> the cluster. This includes membership information and voting for
> quorum.
We also support 'pinging' things like routers, etc. And using that
information in failover decisions.
You can also create your own criteria for failover and use attrd to tell
us something that will cause us to fail over (according to the rules you
specify).
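A sketch of both mechanisms (the paths and flags are from memory -
treat them as illustrative):

  # In ha.cf: ping a router and run pingd to turn reachability
  # into a node attribute the CRM can use in its rules
  cat >> /etc/ha.d/ha.cf <<'EOF'
  ping 192.168.1.254
  respawn hacluster /usr/lib/heartbeat/pingd -m 100 -d 5s
  EOF

  # Or push an arbitrary attribute of your own into the CIB:
  attrd_updater -n my_app_health -v 100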
> Clumanager, on the other hand, uses both the heartbeat network and
> a shared raw disk for managing and getting status of the member nodes.
> The disk is also used to store quorum information. The shared disk
> can act as a backup to the heartbeat if all networking is down. When
> both heartbeat network connections fail, if the member nodes detect
> that the shared disk connections are still alive, a failover will not
> occur and services remain running. This may be valuable when the
> client service is not on the same network interface as the
> heartbeat. This scenario, however, is more than a "single" point of
> failure if the heartbeats are configured (serial/network) as
> mentioned above.
You can heartbeat over as many interfaces as you like. All our
communication is digitally signed, to make it much harder for intruders
to gain privileges. You can heartbeat via UDP unicast (or multicast) if
you want to heartbeat over a client network without raising the hackles
of your network admins.
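For example, ha.cf takes one line per heartbeat path (interface names
and addresses illustrative):

  cat >> /etc/ha.d/ha.cf <<'EOF'
  # broadcast heartbeats on the dedicated cluster LAN
  bcast eth0
  # UDP unicast to the peer over the client network
  ucast eth1 10.0.0.2
  # or multicast: group, port, ttl, loop
  mcast eth1 225.0.0.1 694 1 0
  # a serial link as yet another path
  serial /dev/ttyS0
  EOF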
It's also worth noting that heartbeat DOES NOT require any shared disk
at all. You can use disk mirroring/replication like DRBD and it works
fine for us.
> L-HA supports authentication of the Heartbeat communications.
> Heartbeat digitally signs every packet. It comes supplied with
> the following default signature algorithms: 32-bit CRC,
> MD5, and HMAC-SHA1.
Oh yeah. You said that ;-)
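For completeness, the signing setup lives in /etc/ha.d/authkeys. A
sketch (the shared secret is obviously illustrative):

  # authkeys must be root-owned and mode 0600, or heartbeat
  # will refuse to start
  cat > /etc/ha.d/authkeys <<'EOF'
  auth 1
  1 sha1 SomeSharedSecret
  EOF
  chmod 600 /etc/ha.d/authkeys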
> Clumanager utilizes UDP messages.
> L-HA is the reference implementation for the Open Cluster Framework
> (OCF) http://www.opencf.org/home.html and SAF standards.
> Linux-HA is included with SUSE Linux, Mandriva Linux,
> TurboLinux, Red Flag Linux, Debian, and Gentoo. It also runs on
> Red Hat, FreeBSD, Solaris and Mac OS X. Clumanager is
> supported on Red Hat Enterprise Linux AS and Red Hat Enterprise Linux.
> Linux-HA provides a full-featured GUI - for configuring,
> controlling, and monitoring HA services and servers (See Figure
> 2 in Appendix D). Red Hat provides a purchase-only GUI.
> Clumanager and Linux-HA both support the concept of forming a
> quorum before a cluster can be formed. This usually means that
> a majority of the cluster nodes must vote to become a member of
> the cluster before the cluster can become active. However, in
> the case of an even-numbered node cluster (i.e., 2, 4, 6...) a quorum
> cannot be achieved if half of the nodes are down. This means the
> services cannot be started until more than half of the nodes are up
> (quorum achieved). To solve this problem, other methods have
> been devised to simulate that a quorum has been
> achieved. Some of these methods include using disks or remote
> nodes as voting members. One example of this is to use a "tie-breaker
> IP" as a voting member to form a quorum. This effectively
> gives the cluster 1 more vote, and a cluster can be formed even
> if half the nodes are down. Currently Clumanager supports the tie-breaker
> functionality but Linux-HA is still developing this
> capability. They expect to have it shortly.
> Linux-HA provides reasonably good documentation for manual
> configuration of the cluster software. There is not very good
> documentation on manual configuration of Clumanager. Most of
> the documentation from Red Hat is GUI-based configuration. This
> makes it more difficult for development and support. It also
> can affect a customer's ability to make changes to meet their
> specific needs.
> Linux-HA supports CIM (Common Information Model)
> for industry-standard systems management.
> Both Clumanager and Linux-HA will attempt to restart a failed
> service before failing over. However, Linux-HA allows you to
> easily configure how many times a restart should be performed
> before a failover should occur.
Including the option of failing it over immediately to another node.
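As I understand it, in R2 this falls out of the stickiness arithmetic.
A sketch of the knobs involved (option names from the 2.0.x series,
values illustrative, and the exact semantics are an assumption to
verify against the docs):

  # With these defaults a resource fails over after its third
  # failure (3 * 40 > 100) -- treat the arithmetic as approximate.
  cibadmin -U -o crm_config -X '
    <cluster_property_set id="cib-bootstrap-options">
      <attributes>
        <nvpair id="opt-res-stick" name="default_resource_stickiness" value="100"/>
        <nvpair id="opt-fail-stick" name="default_resource_failure_stickiness" value="-40"/>
      </attributes>
    </cluster_property_set>'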
> Linux-HA provides a more complete set of administrative
> commands and better documentation to support them. See Appendix
> Both Clumanager and Linux-HA allow the user to define the
> intervals for checking a service's status. However, Linux-HA
> additionally allows configuration of timeouts on operations. For
> example, a sample configuration for timeouts on a service
> operation: Timeout Example
By the way, you can specify timeouts for start and stop operations as well.
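Something like this, alongside the monitor operation (a sketch; ids
and values illustrative):

  cibadmin -U -o resources -X '
    <primitive id="myservice" class="lsb" type="myservice">
      <operations>
        <op id="myservice-start" name="start" timeout="60s"/>
        <op id="myservice-stop" name="stop" timeout="60s"/>
        <op id="myservice-status" name="monitor" interval="30s" timeout="30s"/>
      </operations>
    </primitive>'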
And, of course, we can support clusters up to (at least) 16 nodes.
Oh, and by the way, as an added irrelevant bonus - no extra charge!
We can support single-node clusters too. Of course, you can't fail over
in that case, you can just restart services. And, since we support LSB
resources, we can monitor your system's base-level services for you -
for free ;-).
Do you know if this is something that Red Hat cannot do (a single node)?
Hope this helps.
"Openness is the foundation and preservative of friendship... Let me
claim from you at all times your undisguised opinions." - William
Linux-HA mailing list
Linux-HA at lists.linux-ha.org
See also: http://linux-ha.org/ReportingProblems