
RE: [ogfs-dev]Proposal: Clean up the locking module




> -----Original Message-----
> From: Stanley Wang [mailto:stanley.wang@linux.co.intel.com]
> Sent: Wednesday, July 09, 2003 11:18 PM
> To: OpenGFS-Dev
> Subject: [ogfs-dev]Proposal: Clean up the locking module
> 
> 
> Proposal for cleaning up the locking module
> 
> In the current implementation, the locking module (harness, modules,
> server) takes care of the following jobs:
> *** Managing inter-node locks
> *** Lock expiration (is it needed?) and deadlock detection
> * Heartbeat functionality (are other nodes alive and healthy?)
> * Fencing nodes and triggering journal replay in case of a 
> node failure
> 
> The G-lock layer takes care of:
> *** Communication with the locking backend
> *** Local caching of locks
> * Special lock modes during journal replay
> * Journal replay
> 
> These layers are obviously burdened with too much work. I think only
> the items marked with three stars should be retained. 

I agree with that.  However, I think it would be best to confine the first step to the locking module area, and leave the G-Lock work until later.  Take it one step at a time . . . 

> The others should be the cluster manager's or the journal layer's
> responsibility. It is time to introduce a real cluster manager into
> OpenGFS. Thanks to the existence of the lock harness, we could achieve
> this goal without impacting the whole file system very much.

I learned enough about the lock harness (harness.c) yesterday to understand that it is involved *only in mounting* a lock module.  Once the module is mounted and its functionality is exposed to the filesystem code (including the G-Lock layer), the harness' job is done.

The module is exposed via sdp->sd_lockstruct and its member sdp->sd_lockstruct.ls_ops.  See the ogfs-locking doc (I just committed some changes that are important to this discussion, and swept them to the web site).
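
To make that concrete, here's a rough sketch of the combined structure as the harness fills it in today.  The member names are the ones from this thread; the exact type names and declarations in the tree may differ:

    /* Sketch only -- not the actual OpenGFS declarations. */
    struct sketch_lockops;   /* the ls_ops function table, discussed below */

    struct sketch_lockstruct {
            unsigned int           ls_jid;        /* journal ID of *this* computer */
            int                    ls_first;      /* *this* computer mounted first */
            void                  *ls_lockspace;  /* lock module's private data */
            struct sketch_lockops *ls_ops;        /* functions the module implements */
    };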

It's starting to look to me like this lock module interface could perhaps be split into two interfaces: one for locking, one for cluster management.  This could support *separate* lock and cluster modules, perhaps with the help of *separate* lock and cluster harnesses.  For sdp->sd_lockstruct:

ls_jid   - journal ID of *this* computer == cluster
ls_first - indicates that *this* computer is first to mount == cluster

ls_lockspace - private data for lock module == both, mostly locking
ls_ops - functions implemented by lock module, called by fs/glock == both, mostly locking
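
If we did split it, those members might land in two smaller structures, something like this (just a sketch; the names are invented for illustration, and the two ops tables are sketched after the ls_ops list below):

    /* Hypothetical split of sd_lockstruct -- names invented for illustration. */
    struct sketch_cluster_ops;   /* cluster module's functions, sketched below */
    struct sketch_lock_ops;      /* lock module's functions, sketched below */

    struct sketch_cluster_struct {
            unsigned int               cs_jid;    /* journal ID of *this* computer */
            int                        cs_first;  /* *this* computer mounted first */
            struct sketch_cluster_ops *cs_ops;    /* filled in by a cluster harness */
    };

    struct sketch_lock_struct {
            void                   *ls_lockspace; /* lock module's private data */
            struct sketch_lock_ops *ls_ops;       /* filled in by the lock harness */
    };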


For ls_ops:

mount - called by harness to mount module == both cluster/locking
others_may_mount - called by first computer to mount, after all journal replays are done
          == cluster management
unmount - called by fs when unmounting module == both cluster/locking

All other functions, I think, are lock related.
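
To sketch how the table might divide (the three functions above come from the current interface; everything else here, including the parameter lists, is invented placeholder material, not the real OpenGFS entry points):

    /* Hypothetical split of ls_ops -- signatures abbreviated and invented.
     * Per the list above, mount and unmount are "both", so each module
     * would probably need its own mount/unmount entry points. */
    struct sketch_cluster_ops {
            int  (*mount)(char *cluster_name, void **privdata,
                          unsigned int *jid, int *first);   /* called by harness */
            void (*others_may_mount)(void *privdata);  /* first mounter, after replay */
            void (*unmount)(void *privdata);           /* called by fs at unmount */
    };

    struct sketch_lock_ops {
            int  (*mount)(char *lockspace_name, void **lockspace); /* called by harness */
            void (*unmount)(void *lockspace);                      /* called by fs */
            int  (*lock)(void *lockspace, void *name,
                         unsigned int state, int flags);
            void (*unlock)(void *lockspace, void *name);
            /* ... LVB, cancel, and expiration calls would stay on this side ... */
    };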

Does this make sense??  That is, to split the current lock modules into separate lock and cluster modules??  Maybe even continue to use memexpd as combined lock/cluster server, just as a first step??

If so, then Stan could look into supporting OpenDLM via a *lock* module, and Brian could look into using other cluster managers via a *cluster* module, all without disturbing the fs or glock code.

One other area that I don't know much about yet is the callback from the lock modules to the glock layer . . . this would likely need to be split as well, but not necessarily.  Perhaps both lock and cluster modules could continue to use the same callback.
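
For what it's worth, one way to keep a single callback would be to tag each upcall with where it came from, so the glock layer could dispatch lock events and cluster events differently.  Purely a hypothetical sketch -- this is not the current glock callback signature:

    /* Hypothetical shared callback -- names invented for illustration. */
    typedef enum {
            SKETCH_CB_LOCK_GRANTED,   /* from the lock module: a lock request completed */
            SKETCH_CB_LOCK_BLOCKING,  /* from the lock module: another node wants the lock */
            SKETCH_CB_NODE_FAILED     /* from the cluster module: fence and replay journal */
    } sketch_cb_type_t;

    /* fsdata identifies the filesystem; data depends on the event type */
    typedef void (*sketch_glock_callback_t)(void *fsdata,
                                            sketch_cb_type_t type, void *data);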

All I'm trying to do is suggest taking things a step at a time.  Otherwise, things could get very messy really fast.



> 
> With OpenDLM in mind, I would like to choose Linux-HA heartbeat as the
> cluster manager first. Other cluster managers should also be supported
> at a later stage.

Does Linux-HA heartbeat still have the limitation of only 2-node failover?  Or is that a factor in what you have in mind?

I'm wondering if it would be easy to simply use memexpd as the cluster manager for now, rather than trying to switch.  We could probably continue to use memexpd as cluster manager, *without* using it as lock manager, once we get OpenDLM support.

If this makes sense, Stan, could you write up a more detailed design proposal?  Describe how you would split the memexp module's private data structure between separate cluster and lock modules, which functions would go in which module, etc.

Any comments from anyone else?

-- Ben --

Opinions are mine, not Intel's


> 
> Any comments?
> 
> Best Regards,
> Stan
> 
> 
> -- 
> Opinions expressed are those of the author and do not represent Intel
> Corporation
> "gpg --recv-keys --keyserver wwwkeys.pgp.net E1390A7F"
> {E1390A7F:3AD1 1B0C 2019 E183 0CFF  55E8 369A 8B75 E139 0A7F}



