RE: [ogfs-dev]Lock server fail recovery

> -----Original Message-----
> From: opengfs-devel-admin@lists.sourceforge.net
> [mailto:opengfs-devel-admin@lists.sourceforge.net]On Behalf Of Guochun
> Shi
> Sent: Monday, October 27, 2003 3:28 PM
> To: opengfs-devel@lists.sourceforge.net
> Subject: Re: [ogfs-dev]Lock server fail recovery

> >> First of all, did I understand correctly that the memexpd 
> daemon is, 
> >> effectively, the only way to get OpenGFS working?
> >
> >Currently yes, but we are working on adding a different locking
> >system (DLM).  Since DLM distributes the locking responsibilities
> >among the cluster nodes, there is no real server, removing the
> >single point of failure that memexpd represents.
> Is separating the cluster/locking code out of the memexp
> locking module and using an external cluster manager (e.g.
> heartbeat) on the TODO list? That seems to duplicate the
> work being done in OpenDLM.
> -Guochun

I would vote for focusing on OpenDLM.  It seems to be a fundamentally better lock manager, for two reasons:

-- less LAN traffic (no repeated polling/loading of lock data when a lock is not immediately available)

-- no single point of failure
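To make the traffic difference concrete, here is a minimal sketch (not OpenGFS or OpenDLM code; the class names and timings are illustrative assumptions). A memexp-style client that finds a lock busy must re-request the lock state over the network on every retry, while a DLM-style client queues one request and is woken when the holder releases:

```python
import threading

class PollingClient:
    """Illustrative memexp-style behavior: each retry is another
    round trip on the wire while the lock is held elsewhere."""
    def __init__(self, lock):
        self._lock = lock
        self.requests_sent = 0

    def acquire(self, poll_interval=0.01):
        while True:
            self.requests_sent += 1                  # one more round trip
            if self._lock.acquire(blocking=False):
                return
            threading.Event().wait(poll_interval)    # back off, poll again

class NotifyClient:
    """Illustrative DLM-style behavior: the request is queued once;
    the release itself wakes the waiter, so no repeated traffic."""
    def __init__(self, lock):
        self._lock = lock
        self.requests_sent = 0

    def acquire(self):
        self.requests_sent += 1                      # single request
        self._lock.acquire()                         # block until granted

def contend(client_cls):
    """Simulate contention: the lock is held, then released 0.1s later."""
    lock = threading.Lock()
    lock.acquire()                                   # "another node" holds it
    client = client_cls(lock)
    t = threading.Timer(0.1, lock.release)           # holder releases later
    t.start()
    client.acquire()
    t.join()
    return client.requests_sent

print(contend(PollingClient))   # many round trips while waiting
print(contend(NotifyClient))    # exactly one
```

Under contention the polling client sends a request per retry interval, while the notification-style client sends one; that per-wait multiplier is what the "less LAN traffic" point is about.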

It's nice that memexp "works", but I think we need to get beyond it as soon as possible, just as we did with pool.

-- Ben --

Opinions are mine, not Intel's
