[Linux-HA] Network / distributed file system: which one?

S P Arif Sahari Wibowo arifsaha at yahoo.com
Tue Oct 4 15:22:06 MDT 2005


Hi!

I just started planning an HA cluster system. I want to 
set up an HA file server that shares its files with several 
other servers (planning for load balancing). Therefore I will 
need a network / distributed file system that works well with 
some replication system (DRBD, rsync / csync, or even 
better if replication comes with the file system).
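For reference, the kind of block-level replication I have in mind is roughly a DRBD resource like this (a minimal sketch; hostnames, disks, and addresses are hypothetical):

```
# /etc/drbd.conf -- minimal two-node resource sketch
resource r0 {
  protocol C;            # synchronous replication
  on alpha {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on beta {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

The file system in question would then sit on /dev/drbd0 on whichever node is primary.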

My big question now is which network / distributed file system to 
use. I did some research and came up with some alternatives; would 
you mind commenting on them?

1. NFS: most commonly used, most flexible to set up. Issue: prone 
to stale file handles, which will happen in a failover situation. 
Does anybody know how NFSv4 handles this issue?
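One workaround I have read about for the stale-handle problem: pin the export's fsid so that file handles stay identical on both nodes after failover. A hypothetical /etc/exports line (path and network are made up), kept the same on both cluster nodes:

```
# /etc/exports -- identical on both nodes; fixed fsid keeps
# NFS file handles stable when the service fails over
/export   10.0.0.0/24(rw,sync,fsid=1)
```

I do not know whether this is enough on its own, or whether NFSv4 makes it unnecessary.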

2. OpenAFS: a good and secure network fs. Issue: some old design 
problems, like only 15 characters per directory entry. No idea 
how it works with replication or how it reacts in 
failover.

3. PVFS2: I need to learn much more about this one.

4. GFS on top of GNBD or iSCSI. No idea about performance. 
Will it run on DRBD? I guess file-level replication is out of 
the question. No idea how it reacts in failover.

5. Lustre. Issue: the disk cannot be mounted locally / through a 
local loop device, or at least it is not recommended. Will it run 
on DRBD? I guess file-level replication is out of the question. 
No idea how it reacts in failover.

6. Coda (or InterMezzo): actually the ideal concept, replication and 
network / distributed file system in one. But apparently it 
is not stable yet.

7. OpenSSI. Issue: distributed only in limited form, as full 
kernels, and only a few of them. Kind of bad if I need to change 
the kernel.

8. Anything else?
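Whatever file system I pick, the failover itself would presumably be driven by Heartbeat; the overall stack I am picturing is something like this v1-style haresources line (node name, DRBD resource, mount point, virtual IP, and init script are all hypothetical):

```
# /etc/ha.d/haresources -- one node is the preferred primary;
# on failover the survivor promotes DRBD, mounts the fs,
# takes the virtual IP, and starts the NFS service
node1 drbddisk::r0 Filesystem::/dev/drbd0::/export::ext3 10.0.0.100 nfs-kernel-server
```

The open question is which file system in the list above slots into that Filesystem step (or replaces it) most gracefully.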

Thank you for reading this far!

-- 
                                Stephan Paul Arif Sahari Wibowo
     _____  _____  _____  _____
    /____  /____/ /____/ /____
   _____/ /      /    / _____/       http://www.arifsaha.com/


