
RE: [ogfs-dev]mkfs options for rg control



I think the easiest thing for a user to specify would be the total # of rgrps when creating the fs.  That relates directly to the amount of memory the rg headers consume in core.
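Back-of-envelope, using the 304-bytes-per-rg-header figure from Dominik's mail below (the constant and names here are just illustration, not existing OGFS code):

    /* Rough estimate of the in-core memory consumed by rg headers.
     * 304 bytes per header is the figure from Dominik's mail; the
     * rest is hypothetical. */
    #include <stdio.h>

    #define RG_HEADER_BYTES 304ULL

    int main(void)
    {
        unsigned long n_rgrps = 468114;   /* Dominik's 100 TB / 4k default */
        unsigned long long bytes = n_rgrps * RG_HEADER_BYTES;

        printf("%lu rgrps -> %.0f MB of rg headers in core\n",
               n_rgrps, bytes / 1e6);     /* prints ~142 MB */
        return 0;
    }

Letting the user pick the rgrp count makes that cost directly visible.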

Can anyone think of a good reason for having any more than, say, (100 * number-of-cluster-nodes) rgrps?  I see rgrps as a way to distribute the block allocation info, and to distribute contention for it.  Ext2 puts *all* of the block allocation stats in the superblock, which is fine for a single machine but creates a bottleneck for shared filesystems.  OGFS distributes all the block alloc info among the rgrps.
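A minimal sketch of that cap, assuming a hypothetical helper for mkfs.ogfs (none of these names exist in the code):

    /* Hypothetical heuristic: cap the rg count at 100 per cluster
     * node, without ever making an rg smaller than min_rg_blocks. */
    unsigned long pick_rg_count(unsigned long long fs_blocks,
                                unsigned int nodes,
                                unsigned long min_rg_blocks)
    {
        unsigned long cap = 100UL * nodes;          /* contention target */
        unsigned long long fit = fs_blocks / min_rg_blocks;

        if (fit == 0)
            return 1;                               /* tiny fs: one rg */
        return fit < cap ? (unsigned long)fit : cap;
    }

For the 100 TB example below, an 8-node cluster would get 800 rgrps (~243 KB of headers) instead of 468114 (~142 MB).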

Maybe we need to think about caching the bitmaps!  Also, ext2 uses only 1 bit per block ... I wonder if we could get away with that somehow, like by bunching meta blocks together?  Not as good for disk performance, I guess.
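For scale, halving the bits halves the bitmaps.  A quick back-of-envelope (the 2-bits-per-block figure is what Dominik's 6.7 GB number implies):

    /* Bitmap sizes for a 100 TB fs with 4k blocks, at 2 bits per
     * block (implied by the 6.7 GB figure below) vs ext2's 1 bit. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long blocks = (100ULL << 40) / 4096;  /* ~2.7e10 */

        for (int bits = 1; bits <= 2; bits++)
            printf("%d bit(s) per block -> %.2f GB of bitmaps\n",
                   bits, (double)blocks * bits / 8 / 1e9);
        return 0;
    }

That prints 3.36 GB at 1 bit and 6.71 GB at 2 bits, so the ext2 trick would save roughly half the bitmap space on a 100 TB fs.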

Since we're getting away from using pool, do we need to worry about sub-pools at all (except for internal journals, which don't have rgrps)?  Without pool, we have no way of specifying data subpools, IIRC; it's just one big data area.  Oops, I guess we do get, in effect, subpools after a filesystem expansion.  But there's no need to worry about that in mkfs.ogfs ... only in ogfs_expand.

-- Ben --

Opinions are mine, not Intel's

> -----Original Message-----
> From: opengfs-devel-admin@lists.sourceforge.net
> [mailto:opengfs-devel-admin@lists.sourceforge.net]On Behalf Of Dominik
> Vogt
> Sent: Tuesday, September 23, 2003 6:29 AM
> To: opengfs-devel@lists.sourceforge.net
> Subject: [ogfs-dev]mkfs options for rg control
> 
> 
> Yesterday we discussed adding better control over resource group
> layout to mkfs.ogfs.  However, it's not obvious how much control
> we need.  I could imagine making the following parameters adjustable:
> 
>   - The minimum allowed number of rgs per sub pool (currently the
>     -r option).
>   - The minimum number of blocks in each resource group (hard-coded
>     to 10).
>   - The default size of the resource groups (hard-coded to roughly
>     whatever fits into 14 rg header blocks).
>   - The number of resource groups per sub pool (overriding the other
>     methods of setting the number of rgs).  However, specifying this
>     number manually may waste up to one block minus one byte in
>     the block bitmap.
>   - All of the above parameters specified independently for each sub
>     pool.
> 
> The current default algorithm would create 468114 resource groups
> in a 100 TB pool (4k block size), which is probably too many to
> keep in main memory.  With 304 bytes per resource group, that
> would consume 142 MB just for the rg headers.  On the other hand,
> the block bitmaps for a 100 TB file system eat up 6.7 GB :-O
> 
> So, how much control do we need, and how should we name the
> command line options?  I'm reluctant to use up many one-letter
> options on resource group tuning.
> 
> Ciao
> 
> Dominik ^_^  ^_^



