
[ogfs-users]FW: [ogfs-dev]Debugged no-pool code

Hello users . . .

I posted this to the opengfs-dev list last night . . . I thought maybe an
adventurous user or two might be interested in trying this out.  

Note that this is experimental code!  

The no-pool code's main purpose is to liberate us from the "pool" code and
utilities, and allow use with other volume managers/mappers (or even raw,
unmapped devices, if each node sees them with a consistent name).  It
achieves that by providing "external" journals on specific
devices/partitions separate from the main filesystem device/partition.
"Internal" journals, on the filesystem device/partition, are still supported
as well.
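For the raw, unmapped-device case, the "consistent name" requirement just means every node must see the same partition under the same path. A quick sanity check might look like this (a sketch only; /dev/sdb1 and /dev/sdc1 are placeholder names, not part of my setup):

```shell
# Run this on every node in the cluster; the sizes printed should match
# node-for-node.  /dev/sdb1 = filesystem device, /dev/sdc1 = external
# journal device (both placeholders -- substitute your own).
for dev in /dev/sdb1 /dev/sdc1; do
    echo -n "$dev: "
    blockdev --getsize64 "$dev"
done
```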

Background:  Pool, in conjunction with mkfs.ogfs (via a private ioctl), has
the ability to assign particular journals to particular devices.  We're
replacing that capability without requiring the private interface between
mkfs.ogfs and a volume manager/mapper.  Pool also provides striping
capability, but we'll now rely on other volume managers/mappers for that.
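For the striping piece, an ordinary volume manager such as LVM can stand in for what pool used to do. A hedged sketch (device names, volume names, and sizes are all placeholders for illustration):

```shell
# Build a striped logical volume to hold the filesystem:
# 2 stripes (-i 2), 64 KB stripe size (-I 64), 10 GB total (-L 10G).
pvcreate /dev/sdc1 /dev/sdd1
vgcreate ogfs_vg /dev/sdc1 /dev/sdd1
lvcreate -i 2 -I 64 -L 10G -n ogfs_lv ogfs_vg
# The resulting /dev/ogfs_vg/ogfs_lv can then be handed to mkfs.ogfs.
```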

I've now got it running bonnie++ on two machines simultaneously, using the
memexp locking protocol.
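If you want to try the same kind of exercise, the shape of it is roughly this (a sketch; the mount point, and the "ogfs" filesystem-type name passed to mount, are assumptions from my side, not something spelled out here):

```shell
# On each node: mount the shared filesystem, then point bonnie++ at a
# scratch directory on it.  -d = test directory, -u = user to run as.
mount -t ogfs /dev/sdb1 /mnt/ogfs
bonnie++ -d /mnt/ogfs -u nobody
```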

Let me know if you have any questions (or results).

-- Ben --

Opinions are mine, not Intel's

> -----Original Message-----
> From: Cahill, Ben M 
> Sent: Wednesday, April 30, 2003 7:38 PM
> To: OpenGFS (E-mail)
> Subject: [ogfs-dev]Debugged no-pool code
> Hi all,
> I finally got the no-pool code to actually work, and I've just uploaded
> a new tarball of my build tree to the sourceforge opengfs group
> directory (home/groups/o/op/opengfs/htdocs).
> To download via website, set your browser to:
> http://opengfs.sourceforge.net/opengfs_nopool.tar.gz
> There's an underscore between "opengfs" and "nopool" above.  For some
> reason, it's about 30% larger than the tarball I put up a couple of
> weeks ago . . . my apologies . . . I couldn't find the extra baggage;
> please let me know if you do.
> The no-pool code works well enough here on one machine that I could
> run bonnie++ successfully, using an external journal.
> Most of the following repeats what I sent out a couple of weeks ago
> when I first posted a no-pool tarball:
> I'm hoping someone can put a few hours into exercising the code before
> I check this in.  For mkfs.ogfs, you can use the attached example
> journal config file, journal.cf.  You'll need to modify it
> appropriately for your setup.
> Use the following mkfs command:
> mkfs.ogfs -p memexp -t /dev/sdb1 -c journal.cf
> I've also added options:
> -n  no write to disk
> -v  verbose, but not as much as the -d diagnostic option (I've added
>     some diagnostic output as well)
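> A cautious first pass might combine these flags so nothing is written
> until the config looks right (a sketch; substitute your own device and
> config file, only the flags listed above are used):
>
> ```shell
> # Dry run: parse journal.cf and report verbosely what mkfs.ogfs would
> # do, without writing to /dev/sdb1 (-n = no write, -v = verbose).
> mkfs.ogfs -n -v -p memexp -t /dev/sdb1 -c journal.cf
> ```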
> Caveats:
> I've tried it using only one machine, not two.
> I haven't tried it "the old-fashioned way", using pool (e.g. with the
> HOWTO-generic setup).
> I haven't tried it with EVMS.
> I did manage to (at least partially) repair the debug facilities for
> the fs, which I discovered were pretty broken . . . I want to write a
> HOWTO on that.
> path_lookup (I put it between lookup_mnt and path_init)
> bd_acquire  (between cdput and bdget)
> NOTE 2:  The only user-space utility I've modified is mkfs.ogfs.
> Other tools may (or definitely, depending on the tool, e.g. ogfsck)
> not work using external journals (but should work the same as they
> have been for any non-external journal setup).
> Special requests for anyone who might volunteer:
> -- add path_lookup and bd_acquire to our 2.4.20 kernel patch (Brian
> volunteered)
> -- change configuration stuff to send DEBUG_PRINT and DEBUG_TRACE to
> the entire fs tree (they get to only the arch_user subdirectory at
> present).
> -- try with pool, make sure I haven't messed up any legacy
> setup/operation (Joe D volunteered)
> TBD (for me):
> -- write HOWTO-nopool
> -- write short design doc(?)
> -- a little more work to tighten things up in the fs code
> -- update user-space utilities for external journals
> -- finish repair of fs debug facilities
> -- check in the code (I'd like a couple of folks to try it first)
> Good luck, and, as always, let me know how things work for you!
> -- Ben --
> Opinions are mine, not Intel's
>  <<journal.cf>> 

Attachment: journal.cf
Description: Binary data
