[Linux-HA] 'Stale NFS File Handle' using LVM on shared SCSI
chrisagallo at gmail.com
Sun Nov 12 20:24:01 MST 2006
You need to have the NFS state files shared as well. They are kept
in /var/lib/nfs and need to be the same on each node serving NFS
shares. That will get rid of the stale file handle problem.
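As a rough sketch of one way to do that (assuming your shared
filesystem is mounted at /srv/exports/raid0 as in your haresources,
and using Debian's init script; the varlibnfs name is just illustrative):

  # on the node currently holding the shared filesystem:
  /etc/init.d/nfs-kernel-server stop
  mv /var/lib/nfs /srv/exports/raid0/varlibnfs   # move NFS state onto shared storage
  ln -s /srv/exports/raid0/varlibnfs /var/lib/nfs
  # on the standby node, replace its local copy with the same symlink:
  /etc/init.d/nfs-kernel-server stop
  rm -rf /var/lib/nfs
  ln -s /srv/exports/raid0/varlibnfs /var/lib/nfs

That way, whichever node mounts the filesystem also picks up the
rmtab and state files the clients expect.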
A tutorial for doing this with DRBD can be found here.
Also, I HIGHLY suggest moving to Heartbeat version 2; it's MUCH more
flexible in terms of what you can do.
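To give a feel for it: in version 2, resources move out of haresources
into an XML CIB. A from-memory sketch of a single service-IP resource
(element names may differ slightly in your 2.0.x release):

  <resources>
    <primitive id="ip_nfs" class="ocf" provider="heartbeat" type="IPaddr">
      <instance_attributes id="ip_nfs_attrs">
        <attributes>
          <nvpair id="ip_nfs_ip" name="ip" value="192.168.0.50"/>
        </attributes>
      </instance_attributes>
    </primitive>
  </resources>

Groups, ordering, and location constraints then go in the same file,
which is what buys you the flexibility.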
On 11/12/06, Kirk Ismay <captain at netidea.com> wrote:
> Hi everyone,
> I'm trying to set up an Active-Active NFS server using heartbeat v2.0.7
> on Debian Sarge (3.1).
> I'm using two servers connected to a StorCase Infostation external
> SCSI-to-SATA RAID, using LSI Logic LSI53C1030 adapters on the servers
> (an FC SAN wasn't within the budget). I have the fail-over process
> working after a fashion, in that the service IP and filesystem switch
> from one node to the other. The problem is that the NFS client
> complains of having a 'Stale NFS File Handle' after the switch.
> From reading the HaNFS document here: http://www.linux-ha.org/HaNFS, I
> understand that it can happen if the device numbers change. I have
> verified that this is what is happening, though the document says that
> using LVM is supposed to prevent this.
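> (One thing I'm wondering about: pinning the filesystem id in
> /etc/exports, e.g. something like
>   /srv/exports/raid0  192.168.0.0/24(rw,sync,fsid=1)
> so the handle no longer depends on the device number, though I haven't
> tried that yet.)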
> Here is my haresources:
> node1 192.168.0.50
> node2 192.168.0.51 LVM::/dev/storage0 \
> Filesystem::/dev/mapper/storage0-raid0::/srv/exports/raid0::ext3 \
> I am using Matt Schillinger's scripts:
> Here's my device listing from both systems; first node:
> brw------- 1 root root 253, 14 2006-11-12 02:12 /dev/mapper/storage0-raid0
> brw------- 1 root root 253, 12 2006-10-31 15:22 /dev/mapper/storage1-home1
> brw------- 1 root root 253, 10 2006-10-31 15:22 /dev/mapper/storage2-home2
> and second node:
> brw------- 1 root root 253, 4 2006-11-12 01:35 /dev/mapper/storage0-raid0
> brw------- 1 root root 253, 2 2006-11-11 23:14 /dev/mapper/storage1-home1
> brw------- 1 root root 253, 0 2006-11-11 23:14 /dev/mapper/storage2-home2
> Is there a way to configure LVM to use a specific device number for each LV?
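> (The lvchange persistent-minor option looks relevant here, e.g.
> something like
>   lvchange --persistent y --major 253 --minor 14 /dev/storage0/raid0
> run identically on both nodes, but I haven't tested whether that
> survives a fail-over.)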
> Also, I originally had LinuxSCSI::1:0:0:0 in my haresources, but I could
> only achieve fail-over in one direction. If I tried to move the device
> back, it wouldn't work; according to dmesg it was a dead device.
> It seems to work without using LinuxSCSI. I actually mounted a separate
> LV on each node, both on the same RAID volume, and ran bonnie on both
> nodes. Nothing complained, but I have no idea if that is safe or not.
> If anyone has used the StorCase RAID before, I'd love to be able to
> compare notes.
> Thanks in advance.
> Kirk Ismay
> System Administrator
> Net Idea
> 201-625 Front Street Nelson, BC V1L 4B6