upgrade to 0.4.9f log-stonith issues
Fri, 7 Feb 2003 08:02:26 -0800 (PST)
--- Alan Robertson <firstname.lastname@example.org> wrote:
> Rob wrote:
> > I installed 0.4.9f (heartbeat, stonith, pils).
> > When I start heartbeat on CLUS1 it sees CLUS2 is
> > not alive... Then it resets the machine! If
> > heartbeat is not started on CLUS2, should CLUS1 really be
> > resetting it? (By the way, I don't have CLUS2's
> > STONITH device attached to CLUS1 at the moment (just in case...),
> > so it didn't really reset the machine... but it
> > tried to!)
> The only safe way to take over resources from a node you haven't
> heard from since you booted is to STONITH it.
> In the case described here, each node will wake up, STONITH the
> other, and then get STONITHed by the other node when it comes back
> up. This may sound dumb, but it's not nearly as dumb as allowing
> both to mount the disk at the same time.
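The takeover rule Alan describes can be sketched as a simple predicate (all names here are illustrative, not heartbeat's actual code): a node may claim shared resources only if it has heard from its peer since booting, or has successfully fenced it first.

```python
# Sketch of the "STONITH before takeover" rule described above.
# Names and structure are hypothetical; this is not heartbeat's real API.

def may_take_over(heard_from_peer_since_boot: bool,
                  stonith_succeeded: bool) -> bool:
    """A node may claim shared resources only if the peer has been
    heard from since boot (so its state is known), or the peer has
    been successfully fenced (STONITHed)."""
    return heard_from_peer_since_boot or stonith_succeeded

# CLUS1 boots and never hears from CLUS2: it must fence before takeover.
assert may_take_over(heard_from_peer_since_boot=False,
                     stonith_succeeded=True)
# Without a successful STONITH, taking over risks both nodes mounting
# the shared disk at the same time.
assert not may_take_over(heard_from_peer_since_boot=False,
                         stonith_succeeded=False)
```

This also shows why the mutual-reset cycle happens: each node boots, has heard nothing, fences the other, and the other does the same when it comes back up.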
OK, but wouldn't it be more appropriate (remember, I
have a Sun Cluster background here...) to fail to
__start__ if the other node's not alive, while allowing
an administrator to manually force a start?
Second, wouldn't it be advisable to have a forked
daemon which (like Sun Cluster here...) maintains
certain information, while the cluster is running, about the
member nodes and their states (i.e.
UP/Down/Maintenance(?)), and maybe machine information
such as cpuinfo, meminfo, uptime, and version?
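The membership table such a daemon might maintain could look something like this (a minimal sketch; the structure, field names, and states are assumptions, not an existing heartbeat interface):

```python
# Hypothetical membership/state table for the proposed daemon.
from dataclasses import dataclass

VALID_STATES = ("UP", "Down", "Maintenance")

@dataclass
class NodeInfo:
    state: str = "Down"   # one of UP / Down / Maintenance
    cpuinfo: str = ""
    meminfo: str = ""
    uptime: float = 0.0   # seconds since the node booted
    version: str = ""     # cluster software version

members = {
    "CLUS1": NodeInfo(state="UP", version="0.4.9f"),
    "CLUS2": NodeInfo(state="Down"),
}

def set_state(node: str, state: str) -> None:
    """Record a state transition for a member node."""
    if state not in VALID_STATES:
        raise ValueError(f"unknown state: {state}")
    members[node].state = state

# An administrator takes CLUS2 out of service for maintenance.
set_state("CLUS2", "Maintenance")
print(members["CLUS2"].state)  # → Maintenance
```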