[Linux-HA] Xen multiple resource instance prevention

Andrew Beekhof beekhof at gmail.com
Mon Oct 1 06:20:06 MDT 2007


On 10/1/07, Yann Cezard <yann.cezard at univ-pau.fr> wrote:
> Ivan wrote:
> > Hi List,
> >
> > I am a bit new to HA, but I have read what was available, and now I
> > have a well-working Xen cluster based on SLES10 SP1.
> >
> > Background:
> > My Xen domUs run off a SAN; the partitions are LVM2 on top of EVMS
> > CSM. I cannot use an OCFS2 image store for my domUs due to their
> > nature, so I must use partitions instead of file images.
> > [...]
> >
> > I use HA 2.0.8 with crm only.
> >
> > Thanks a lot in advance,
> > Ivan
> Hi Ivan,
>
> I do not really have the answer to your question (except this: are you
> sure 2.0.8 is sufficient for Xen VM migration? When I looked at this
> some months ago, 2.1.X was required... but that could have changed).

It's a long, complicated, and not terribly interesting story, but
SLES10 SP1 contains the same code as 2.1.0, just under a different
name... so it should be new enough.
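
For reference, a Xen resource set up for live migration in the 2.1-era
CIB looks roughly like this. This is a sketch only: the ids and the
xmfile path are placeholders, and the allow_migrate attribute and exact
meta_attributes layout vary between 2.x releases, so check your own:

  <primitive id="XenVM1" class="ocf" provider="heartbeat" type="Xen">
    <!-- allow_migrate asks the CRM to live-migrate instead of stop/start -->
    <meta_attributes id="XenVM1-meta">
      <attributes>
        <nvpair id="XenVM1-allow-migrate" name="allow_migrate" value="true"/>
      </attributes>
    </meta_attributes>
    <instance_attributes id="XenVM1-params">
      <attributes>
        <!-- xmfile points at the domU config (which, in a setup like
             Ivan's, would reference phy: LVM devices, not file images) -->
        <nvpair id="XenVM1-xmfile" name="xmfile" value="/etc/xen/vm/vm1"/>
      </attributes>
    </instance_attributes>
    <operations>
      <!-- recurring monitor so the CRM notices a dead domU -->
      <op id="XenVM1-monitor" name="monitor" interval="10s" timeout="30s"/>
    </operations>
  </primitive>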

> The reason I am answering this mail is that I want to do the same thing
> as you, and I am sure that a lot of other people do too (HB2 - Xen -
> SAN - live migration - resource monitoring - _NO_ ClusterFS).
> The only point that blocks me now is "how to be totally sure that a LUN
> won't be mounted with write access on two nodes at the same time",
> because I don't want my filesystem to be destroyed in case a
> split-brain occurs.

We go out of our way to prevent a resource from being active on more
than one node.

The only exception here is for things like clones, which are
specifically designed to run more than one instance in the cluster.
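
A clone is just a wrapper around an ordinary primitive in the CIB. A
minimal sketch with placeholder ids (2.x spells the attributes
clone_max/clone_node_max; later versions use dashes):

  <clone id="pingd-clone">
    <instance_attributes id="pingd-clone-attrs">
      <attributes>
        <!-- run up to three copies in total, at most one per node -->
        <nvpair id="pingd-clone-max" name="clone_max" value="3"/>
        <nvpair id="pingd-clone-node-max" name="clone_node_max" value="1"/>
      </attributes>
    </instance_attributes>
    <primitive id="pingd" class="ocf" provider="heartbeat" type="pingd"/>
  </clone>

Anything not wrapped in a clone (or master/slave) like this is started
on exactly one node at a time.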

>
> Using a cluster-aware filesystem is not really a solution, because I
> don't want parallel access; I just want to be sure it never happens.
>
> I tried to have a look at CLVM, but it is really hard to find any
> documentation for it that is not related to GFS.
>
> It seems that the right way is to use some kind of quorum, and so the
> minimal cluster size should be 3 nodes (even though quorum is possible
> with two nodes, by giving one node a bigger weight in the quorum).

Nod: no-quorum-policy=stop, which is the default IIRC.
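
The property lives in the crm_config section of the CIB; a sketch
(older 2.x releases spell it no_quorum_policy with underscores, so
check which form your release accepts):

  <crm_config>
    <cluster_property_set id="cib-bootstrap-options">
      <attributes>
        <!-- a partition that has lost quorum stops all of its resources -->
        <nvpair id="opt-no-quorum-policy" name="no_quorum_policy" value="stop"/>
      </attributes>
    </cluster_property_set>
  </crm_config>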

> And then, this is what should happen in case of a problem:
> - node A, resources: XenVM1 - XenVM2
> - node B, resources: XenVM3 - XenVM4
> - node C, resources: no Xen VM
>
> Case 1: node B goes down
> - node A, resources: XenVM1 - XenVM2
> - node B, OUT
> - node C, START resources: XenVM3 - XenVM4
>
> Case 2: node B loses the network, and with it the HB2 cluster connection
> - node A, resources: XenVM1 - XenVM2
> - node B, STOP resources: XenVM3 - XenVM4
> - node C, START resources: XenVM3 - XenVM4 (this should happen after
>   node B has stopped its resources, but there is no way to know that,
>   as B is OUT of the HB cluster)

Yes there is; it's called STONITH.
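
That is, give the cluster a fencing device for each node and enable
fencing, so C only takes over after B has verifiably been powered off.
A sketch, assuming the external/ipmi plugin and placeholder values; the
device type and its parameters depend entirely on your hardware, and
later versions spell the property stonith-enabled:

  <crm_config>
    <cluster_property_set id="fencing-options">
      <attributes>
        <nvpair id="opt-stonith-enabled" name="stonith_enabled" value="true"/>
      </attributes>
    </cluster_property_set>
  </crm_config>

  <primitive id="st-nodeB" class="stonith" type="external/ipmi">
    <instance_attributes id="st-nodeB-params">
      <attributes>
        <!-- external/ipmi plugin parameters; all values are placeholders -->
        <nvpair id="st-nodeB-host" name="hostname" value="nodeB"/>
        <nvpair id="st-nodeB-ip"   name="ipaddr"   value="192.168.1.12"/>
        <nvpair id="st-nodeB-user" name="userid"   value="admin"/>
      </attributes>
    </instance_attributes>
  </primitive>

With this in place, Case 2 becomes safe: C does not start XenVM3/XenVM4
until the STONITH operation against B has succeeded.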

>
> Case 3: split-brain, A, B & C all lose the network
> - node A, STOP resources: XenVM1 - XenVM2
> - node B, STOP resources: XenVM3 - XenVM4
> - node C, nothing to do
>
> This does not look so hard to achieve, but I wonder if I am missing
> some other problems that could happen, or perhaps I am going in the
> wrong direction?
>
> I think it would be interesting to have some web resources about
> HB2 + Xen on the linux-ha.org website; more and more people will try
> to build this kind of cluster configuration.
>
> So if anyone has some interest in this, some clue, or just wants to
> give his/her opinion, he/she is welcome.
>
> Thanks,
>
> Yann
>
> _______________________________________________
> Linux-HA mailing list
> Linux-HA at lists.linux-ha.org
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems
>


