
RE: [ogfs-dev]Map OpenDLM to G-lock



Good!

See comments below.

-- Ben --

Opinions are mine, not Intel's

> -----Original Message-----
> From: Stanley Wang [mailto:stanley.wang@linux.co.intel.com]
> Sent: Monday, August 04, 2003 5:54 AM
> To: OpenGFS-Dev
> Subject: [ogfs-dev]Map OpenDLM to G-lock
> 
> 
> Hi, folks
> Following is the "initial version" of the method that maps OpenDLM to
> G-lock:
> 
> 
> Fields in struct lm_lockops:
> 
> * mount
> Initialize all data structures for mounting ogfs.
> lockspace: get the lockspace based on the cidev (because different ogfs
> instances use different cidevs). Record the lockspace and the node's
> information (such as CB, fsdata, etc.) in a private data struct.
> ls_jid: get it by reading the cidev
> ls_first: determined by using the deadman lock (the first node would block
> all other nodes until "others_may_mount" is called)

It would be nice to use the cluster manager's membership config/status info for some of this, rather than memexp's cidev.  It would be nice not to need a cidev (i.e. the memexp cidev) at all for the OpenDLM locking module.

I'm hoping that the cluster manager can somehow map cluster members to a "0-n" list of integers, so they could correspond directly with ogfs jids.  However, if the cluster manager is not cooperative somehow, the cidev might be a good backup plan.

The "lockspace", I think, just needs a new instance of the lock module for each new instance of the filesystem.  I never could quite figure out why the current ogfs code checks for cidev uniqueness (*or* I might be misunderstanding the code), except perhaps that ogfs might require a separate cluster description for each and every instance of the filesystem.  Logically, though, I'm not sure we should need to require such a thing (I may not know enough here).
 
> 
> 
> *others_may_mount
> Demote other nodes' deadman lock to shared mode, and allow other nodes
> to grab their own deadman lock.

This will be handled entirely within locking module, correct?
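To make sure we mean the same thing, here is a toy model of the deadman-lock handshake as I understand the proposal.  It uses no real DLM calls, and all the types and names are made up for illustration; the point is only that ls_first falls out of which mounter got the EX deadman lock, and that others_may_mount() is just the demote:

```c
/* Toy model of the deadman-lock handshake.  No real DLM calls --
 * everything here is illustrative.  The first mounter takes the
 * deadman lock EX and holds it; later mounters block on PR until the
 * EX holder demotes via others_may_mount(). */
enum mode { NL, PR, EX };

struct deadman {
	enum mode holder_mode;
	int holders;
};

/* Returns 0 on success (with *ls_first set), -1 if the caller must
 * block/retry because the first mounter has not demoted yet. */
static int deadman_mount(struct deadman *d, int *ls_first)
{
	if (d->holders == 0) {		/* first mounter: grab EX */
		d->holder_mode = EX;
		d->holders = 1;
		*ls_first = 1;
		return 0;
	}
	if (d->holder_mode == EX)	/* first mounter still in charge */
		return -1;		/* caller blocks until demote */
	d->holders++;			/* EX was demoted to PR: join in */
	*ls_first = 0;
	return 0;
}

static void deadman_others_may_mount(struct deadman *d)
{
	d->holder_mode = PR;		/* demote: unblock other mounters */
}
```

If that matches your intent, then yes, the filesystem never needs to see any of this beyond ls_first and the others_may_mount() call.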

> 
> *unmount
> Clean up all data structures. Release all locks.
> 
> *get_lock 
> Allocate and initialize a private lock struct.
> 
> *put_lock
> Clean and free a private lock struct.
> 
> *lock
> Try to lock
> Lock state translation:
> LM_ST_UNLOCKED->NL
> LM_ST_EXCLUSIVE->EX
> LM_ST_SHARED->PR
> LM_ST_DEFERRED-> PR ??

Looks like good mapping.

NL = NULL, unlocked
EX = Exclusive read/write access
PR = Protected Read access

Dominik and Stephan and I could not find where DEFERRED mode is used.  It might have been something the original authors were preparing for future use.  We might be able to drop consideration of DEFERRED mode.
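The proposed state mapping could be sketched as a simple translation function.  The constants below are defined locally so the sketch stands alone; in the real module they would come from the ogfs lock-module interface header and the OpenDLM headers (the LKM_* names follow OpenDLM's VMS-style API, but double-check them against the headers):

```c
/* Illustrative constants only -- real values come from the ogfs
 * lock-module interface and OpenDLM headers. */
enum lm_state { LM_ST_UNLOCKED, LM_ST_EXCLUSIVE, LM_ST_SHARED, LM_ST_DEFERRED };
enum dlm_mode { LKM_NLMODE, LKM_PRMODE, LKM_EXMODE };

/* Map a G-lock state to the proposed OpenDLM mode. */
static enum dlm_mode glock_to_dlm_mode(enum lm_state st)
{
	switch (st) {
	case LM_ST_UNLOCKED:	return LKM_NLMODE;
	case LM_ST_EXCLUSIVE:	return LKM_EXMODE;
	case LM_ST_SHARED:	return LKM_PRMODE;
	case LM_ST_DEFERRED:	return LKM_PRMODE; /* open question; may be dropped */
	}
	return LKM_NLMODE;	/* not reached */
}
```

If DEFERRED really is unused, the last case simply disappears.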


> 
> Lock flag translation:
> ??

I think that there are only two that you need to worry about:

LM_FLAG_NOEXP
LM_FLAG_TRY

Other flags (e.g. GL_PERM), in filesystem level calls to glock layer, do not pertain to lock module.
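Concretely, I'd expect the flag translation to look something like the sketch below.  LM_FLAG_TRY maps naturally onto OpenDLM's "don't queue" request flag, while LM_FLAG_NOEXP has no DLM-side equivalent and would be consumed by the lock module's own recovery logic.  The flag values here are placeholders so the sketch compiles on its own; the real ones come from the respective headers:

```c
/* Placeholder flag values -- the real LM_FLAG_* come from the ogfs
 * lock-module interface, LKM_NOQUEUE from OpenDLM. */
#define LM_FLAG_TRY	0x01	/* fail rather than queue the request */
#define LM_FLAG_NOEXP	0x02	/* ignore "expired" state during recovery */
#define LKM_NOQUEUE	0x10

/* Translate glock-layer flags into OpenDLM request flags.
 * LM_FLAG_NOEXP never reaches the DLM; the module handles it
 * against its own recovery state. */
static int glock_flags_to_dlm(int lm_flags)
{
	int dlm_flags = 0;

	if (lm_flags & LM_FLAG_TRY)
		dlm_flags |= LKM_NOQUEUE;
	/* LM_FLAG_NOEXP: consult module-private recovery state here */
	return dlm_flags;
}
```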

> 
> Note: All locks are persistent DLM locks.

I think shared locks could be non-persistent, as a nice-to-have.  See other discussion on list.

> 
> *unlock
> Try to unlock
> 
> *reset
> Change lock state to NL. 
> 
> *cancel
> Cancel a lock/convert request
> 
> *hold_lvb
> dlm_scnop
> Note: The default size of the LVB in OpenDLM is 16 bytes; we need
> to change it to 32 bytes.

Remember the caution (from Peter B?) about relying on the #define in OpenDLM for this ... there may be some hard-coded lines of code that do not refer to the #define.
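Given that caution, it might be worth having the lock module sanity-check the LVB size at mount time rather than trusting the #define alone.  A sketch below; note that MAXLOCKVALBLK is the define name I *think* OpenDLM uses, so treat both names as assumptions to verify against the headers:

```c
/* ogfs wants a 32-byte LVB; stock OpenDLM compiles with 16.
 * MAXLOCKVALBLK is assumed to be OpenDLM's define -- verify the
 * name, and remember there may be hard-coded sizes elsewhere that
 * this check cannot catch. */
#ifndef MAXLOCKVALBLK
#define MAXLOCKVALBLK	16	/* stock OpenDLM value, for illustration */
#endif
#define OGFS_LVB_SIZE	32

/* Returns 0 if the DLM was rebuilt with a big enough LVB, -1 if the
 * mount should be refused. */
static int lvb_size_ok(void)
{
	return (MAXLOCKVALBLK >= OGFS_LVB_SIZE) ? 0 : -1;
}
```

Refusing the mount early beats corrupting LVB contents later.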

> 
> *unhold_lvb
> dlm_scnop
> 
> *sync_lvb
> dlm_scnop
> 
> *reset_lvb
> dlm_purge
> 
> 
> Fields in struct lm_lockstruct
> 
> *ls_jid
> See previous section.
> 
> *ls_first
> See previous section.
> 
> *ls_lockspace
> See previous section.
> 
> *ls_ops
> See previous section.
> 
> This is an incomplete version; please help me make it work :)
> Any concern or suggestion ?
> 
> Best Regards,
> Stan
> 
> 
> -- 
> Opinions expressed are those of the author and do not represent Intel
> Corporation
> "gpg --recv-keys --keyserver wwwkeys.pgp.net E1390A7F"
> {E1390A7F:3AD1 1B0C 2019 E183 0CFF  55E8 369A 8B75 E139 0A7F}
> 
> 
> 
> _______________________________________________
> Opengfs-devel mailing list
> Opengfs-devel@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/opengfs-devel
> 



