
RE: [ogfs-users]list directory content performance

I'm sorry to say I didn't find the right tool to report the number of resource groups in the filesystem. Please advise how I should figure out the number.


"Cahill, Ben M" <ben.m.cahill@xxxxxxxxx>
Sent by: opengfs-users-admin@xxxxxxxxxxxxxxxxxxxxx

2004/04/28 06:17

Please respond to

RE: [ogfs-users]list directory content performance


> -----Original Message-----
> From: opengfs-users-admin@xxxxxxxxxxxxxxxxxxxxx
> [mailto:opengfs-users-admin@xxxxxxxxxxxxxxxxxxxxx] On Behalf
> Of Greg Freemyer
> Sent: Tuesday, April 27, 2004 12:27 PM
> To: opengfs-users@xxxxxxxxxxxxxxxxxxxxx
> Subject: Re: [ogfs-users]list directory content performance
> On Tue, 2004-04-27 at 08:53, Lajos.Okos@xxxxxx wrote:
> > We have an OGFS filesystem of around 1.7TB mounted on 3 nodes. Does
> > anybody know how to speed up the response to the ls command? We
> > mounted the filesystem with the noatime,nodiratime options, but it
> > didn't help. Once I ask for the directory, it scans the array for
> > minutes before responding to the command.
> >
> > Thanks in advance,
> >
> > Lajos
> This is the first time I recall seeing an OGFS filesystem above 1TB.
> It could easily be a bug in the calculation of the number of resource
> groups (i.e., based on filesystem size, I believe).
> How many resource groups do you have?
> Do you know if you have similar performance issues if you reduce your
> filesystem to 1 TB or below?
> Can you remake your filesystem with more resource groups?  (Use -R to
> override the default calculation.)

Just a word of caution:  the more resource groups you have, the longer
it will take to "stat" the filesystem.  The block usage statistics are
distributed among the resource groups ... the more RGs you have, the
longer it takes to read them all to gather the statistics.
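A rough sketch of why "stat" cost grows with the RG count (the class and field names here are illustrative, not the actual OGFS on-disk structures):

```python
# Illustrative sketch only -- not the real OGFS structures.  Block-usage
# statistics live in each resource group, so a filesystem-wide "stat"
# must visit every RG: the cost is O(number of RGs).

class ResourceGroup:
    def __init__(self, total_blocks, free_blocks):
        self.total_blocks = total_blocks
        self.free_blocks = free_blocks

def fs_statfs(resource_groups):
    """Aggregate usage stats by reading every RG header."""
    total = sum(rg.total_blocks for rg in resource_groups)
    free = sum(rg.free_blocks for rg in resource_groups)
    return {"total_blocks": total, "free_blocks": free}

rgs = [ResourceGroup(65536, 1000 * i) for i in range(1, 5)]
print(fs_statfs(rgs))  # one read per RG, summed
```

Doubling the RG count doubles the number of reads that loop performs, which is the cost Ben is warning about.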

Comments on RGs:

I remember, when I was working on the block allocation code back in the
Fall, reaching the conclusion that RGs help nodes to work in parallel
when allocating blocks, by breaking up the filesystem into smaller
domains (almost like mini-filesystems).  When a node owns a lock on an
RG, it can allocate freely within that RG, while another node allocates
freely in another RG ...
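The per-RG locking idea above could be sketched like this (a toy single-machine model using thread locks as a stand-in for cluster-wide locks, not the OGFS lock module):

```python
# Toy model of per-RG allocation locks: two nodes can allocate
# concurrently as long as they hold locks on *different* RGs.
import threading

class ResourceGroup:
    def __init__(self, name, free_blocks):
        self.name = name
        self.free_blocks = free_blocks
        self.lock = threading.Lock()  # stand-in for a cluster-wide lock

    def allocate(self, count):
        # Only the lock holder may touch this RG's block accounting.
        with self.lock:
            if self.free_blocks < count:
                return False
            self.free_blocks -= count
            return True

rg_a = ResourceGroup("rg0", 100)
rg_b = ResourceGroup("rg1", 100)

# Two "nodes" allocating from different RGs never contend:
t1 = threading.Thread(target=rg_a.allocate, args=(10,))
t2 = threading.Thread(target=rg_b.allocate, args=(10,))
t1.start(); t2.start(); t1.join(); t2.join()
print(rg_a.free_blocks, rg_b.free_blocks)  # 90 90
```

With only a handful of nodes, a few dozen RGs already give each node a good chance of finding an uncontended RG, which is where the "dozen or two" guess below comes from.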

... With 3 nodes, my guess would be that a dozen or two RGs would be
sufficient for that purpose, but there may be different opinions on that
(see the OGFS ondisk layout doc).

However, there is a limit to an RG's size (see the OGFS ondisk layout
doc), which in turn depends on your block size ... and many, many RGs
are needed to make up a 1.7TB filesystem.
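For a sense of scale, here is the back-of-the-envelope arithmetic. The 2GB per-RG ceiling below is an assumption for illustration only; the real limit depends on block size, so check the ondisk layout doc for your configuration:

```python
# Minimum RG count for a 1.7 TB filesystem, assuming a hypothetical
# 2 GB maximum RG size (the actual OGFS limit depends on block size;
# see the OGFS ondisk layout doc).
import math

fs_size_bytes = int(1.7 * 2**40)   # ~1.7 TB filesystem
max_rg_bytes  = 2 * 2**30          # assumed 2 GB per-RG ceiling
min_rgs = math.ceil(fs_size_bytes / max_rg_bytes)
print(min_rgs)  # ~871 RGs at minimum under these assumptions
```

So even at the assumed maximum RG size, a 1.7TB filesystem needs hundreds of RGs, far more than the dozen or two that would suffice for parallelism alone.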

How many RGs are in your filesystem?

-- Ben --

Opinions are mine, not Intel's

> Greg
> --
> Greg Freemyer

Opengfs-users mailing list
