
RE: [ogfs-users] Suitability of ogfs for my HA situation



On Wed, 2003-06-04 at 23:12, niblettda@gru.com wrote:
> Thanks Franco,
> 
> One question I have after reading the list is more of
> real world performance.  I've read the info about how it
> compares to ext2 and such, but I really want to know how
> that translates into a real application.
> 
> If I've got users reading email via IMAP, are we talking
> enough of a performance hit that it will take noticeably
> longer to pull each message up, or are we on the order of
> half a second longer?
> 
> Certainly speed is required if you are running a huge data
> store of file shares, but in an application like SMTP writes,
> POP/IMAP reads, etc., will this performance hit be so big
> that I need to worry?

We are currently running about 7TB over 1Gbit Fibre Channel with 8
nodes. For large-file access GFS performs pretty well; listing
directories and initial access to files can be a little hesitant.

I wouldn't imagine you'd notice any delays with POP/SMTP access unless
you were being hit by a huge number of simultaneous users.

> 
> Also, you mention multiple lock servers vs. pools.  I guess
> I didn't understand as well as I thought I did.  I thought
> pools were what handled the file locking?
> 
> Assuming they are not, then I guess I would have to implement
> the heartbeat and multiple lock servers, right?

Correct. We have not felt this necessary in our environment. Our lock
server is a dedicated machine with lots of grunt, and so far we have not
had a failure (touch wood, fingers crossed); it runs 14 memexp daemons.
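
If we did go the heartbeat route, the idea is straightforward: every
node pings the active lock server, and a standby takes over once the
active one stops answering within the timeout. A rough sketch of that
logic in Python (hypothetical code - the host names, port and timeout
below are made up, and the real heartbeat/memexp implementation is far
more involved):

    import socket
    import time

    LOCK_SERVERS = ["locksrv1", "locksrv2"]  # hypothetical hosts: primary, then standby
    PORT = 3001        # made-up port for the lock service
    TIMEOUT = 5.0      # seconds before we declare a server dead

    def server_alive(host):
        # A server counts as alive if it accepts a TCP connection in time.
        try:
            with socket.create_connection((host, PORT), timeout=TIMEOUT):
                return True
        except OSError:
            return False

    def pick_lock_server():
        # Walk the list in priority order; fail over to the first
        # server that still answers.
        for host in LOCK_SERVERS:
            if server_alive(host):
                return host
        raise RuntimeError("no lock server reachable; cluster must block")

    if __name__ == "__main__":
        while True:
            print("using lock server:", pick_lock_server())
            time.sleep(TIMEOUT)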

> 
> What happens if you only have one lock server and it dies?
> I know the FS can't be accessed; I'm just wondering how "graceful"
> this death would be.  Are there file system corruption
> possibilities?

The whole cluster hangs and requires a full reboot once the lock server
has been revived. You'd be unlucky to lose any data; GFS uses journals,
which get replayed when the first node mounts the filesystem.
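
The replay itself is the usual write-ahead-logging trick: each metadata
update is written to the node's journal before it touches the filesystem
proper, so on recovery you re-apply every transaction that has a commit
record and throw away the torn one at the tail. A toy illustration in
Python (my own sketch - the real GFS on-disk journal format looks
nothing like this):

    # Toy write-ahead journal replay: re-apply committed transactions,
    # discard the uncommitted tail left behind by the crashed node.
    journal = [
        (1, "write inode 42"),
        (1, "commit"),
        (2, "write inode 99"),
        (2, "commit"),
        (3, "write inode 7"),   # no commit record: node died mid-transaction
    ]

    def apply_to_fs(op):
        print("replaying:", op)  # stand-in for writing to the real filesystem

    def replay(journal):
        pending = {}  # txid -> ops buffered for that transaction
        for txid, op in journal:
            if op == "commit":
                # The whole transaction made it into the log, so it is
                # safe to re-apply its operations.
                for buffered in pending.pop(txid, []):
                    apply_to_fs(buffered)
            else:
                pending.setdefault(txid, []).append(op)
        # Anything left in 'pending' was never committed and is dropped.

    replay(journal)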

> 
> Thanks
> 
> --
> David A. Niblett               | email: niblettda@gru.net
> Network Administrator          | Phone: (352) 334-3400
> Gainesville Regional Utilities | Web: http://www.gru.net/
>  
> 
> 
> -----Original Message-----
> From: Franco Broi [mailto:franco@robres.com.au] 
> Sent: Tuesday, June 03, 2003 10:48 PM
> To: opengfs-users@lists.sourceforge.net
> Subject: Re: [ogfs-users] Suitability of ogfs for my HA situation
> 
> 
> On Tue, 2003-06-03 at 23:14, niblettda@gru.com wrote:
> > I've been researching GFS for a few years now, back when it was 
> > sponsored by Sistina.  I think I'm finally at a point that I could use 
> > it, and I wanted to see if it was at a point that it would be useful 
> > for me.
> > 
> > Admittedly I don't understand a lot of the low level details of how 
> > the FS works.  So I don't really follow most of the bug details.
> > 
> > What I would like to do is have 4 nodes all serving SMTP, POP, FTP, 
> > etc (standard ISP services) that read/write to a central data store, 
> > Compaq SAN MSA1000 device.  Basically, all user data will be stored 
> > there, and I'll load balance across the 4 machines.  My main questions 
> > are:
> > 
> > Will the performance be good/bad/ugly enough for a moderate number 
> > (say 25-50 simultaneous) reads/writes from each node?
> > 
> > I read and followed most of the HOWTO on the no-pool setup, since
> > I don't want a single lock server.  Is that feature ready
> > for a production environment?
> 
> I don't think having pools has anything to do with having multiple lock
> servers - as far as I know the pool is just the volume layer that glues
> the storage devices together, while locking is handled separately by the
> lock modules. Certainly the no-pool code works, although I can only vouch
> for it working over a two-week period; my test setup has since been
> dismantled.
> 
> > 
> > Should a node die, with open files, will ogfs eventually release the 
> > locks so that other nodes can access the file?  How long before the 
> > locks get released?
> 
> Yes. The length of time depends on the timeout you set in the
> configuration file. As soon as the failed node has been stomith'd, the
> other nodes replay the failed node's journal(s) and can then access the
> locked files.
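> 
> So the order matters: declare the node dead only after the configured
> timeout, fence (stomith) it so it can't issue stale writes to the shared
> disks, replay its journal, and only then hand out its locks. In Python
> pseudocode (a sketch of the protocol only, not actual OpenGFS code; the
> timeout value is just an example):
> 
>     import time
> 
>     HEARTBEAT_TIMEOUT = 30.0  # example value; really set in the config file
> 
>     def stomith(node):
>         print("fencing", node)  # stand-in for power-cycling via a STOMITH agent
> 
>     def replay_journal(node):
>         print("replaying journal for", node)
> 
>     def release_locks(node):
>         print("releasing locks held by", node)
> 
>     def handle_node_failure(node, last_heartbeat):
>         # 1. Declare the node dead only after the configured timeout.
>         if time.time() - last_heartbeat < HEARTBEAT_TIMEOUT:
>             return
>         # 2. Fence first: a stomith'd node cannot issue stale writes.
>         stomith(node)
>         # 3. Replay its journal to make the metadata consistent again.
>         replay_journal(node)
>         # 4. Only now is it safe to release its locks to the other nodes.
>         release_locks(node)
> 
>     # Example: node "node3" last heartbeated 60 seconds ago.
>     handle_node_failure("node3", time.time() - 60.0)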
> 
> > 
> > I'm also investigating purchasing GFS from Sistina, but if ogfs will 
> > accomplish the job, then why not.
> 
> We have a Sistina GFS cluster (8 nodes) running a rather old version; in
> my brief test, ogfs seemed at least as good - but I haven't tried the
> latest release from Sistina, which promises much improved performance.
> 
> > 
> > Thanks all.
> > 
> > --
> > David A. Niblett               | email: niblettda@gru.net
> > Network Administrator          | Phone: (352) 334-3400
> > Gainesville Regional Utilities | Web: http://www.gru.net/
> >  
