
Re: How can I use this?

Got a few semi-OT questions, if you would be so kind, whose answers I wish I had found in the list archives when I was researching iSCSI filesystem choices a while back:

How much, if any, of a performance degradation have you found in using
GFS over iSCSI versus say reiserfs over iSCSI? 

It depends on the operations.  I did a few comparisons between GFS and ext3 over iSCSI, though unfortunately I did not keep any of the results.  As I recall, workloads with lots of small files (one test was unpacking the kernel source) were as much as a third slower on GFS, while random reads and writes to large files were a little faster on GFS than on ext3.  GFS slows down when you are creating files, but operations on existing files do quite well.  One of our main goals was high availability, and the performance differences did not show up as a problem for web or email.  The next release is supposed to fix some of the performance problems; at some point I will get around to putting it up on the test cluster.
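
If you want to reproduce that kind of comparison yourself, the two workloads boil down to metadata-heavy small-file creation versus I/O inside an existing large file.  Here is a rough sketch (not our actual test harness; the mount point, file counts and sizes are placeholders):

```python
"""Sketch of the two workloads compared above: many small-file
creates (where GFS lagged ext3) versus random I/O inside one
existing large file (where GFS held up well).  Point MOUNT at the
filesystem under test; a temp dir is used here as a stand-in."""
import os
import random
import tempfile
import time

MOUNT = tempfile.mkdtemp()  # stand-in for e.g. /mnt/gfs-test

def small_file_create(n=200):
    """Create n tiny files -- stresses directory/metadata locking."""
    d = os.path.join(MOUNT, "small")
    os.makedirs(d, exist_ok=True)
    t0 = time.time()
    for i in range(n):
        with open(os.path.join(d, "f%04d" % i), "wb") as f:
            f.write(b"x" * 512)
    return time.time() - t0

def random_rw_large(size=1 << 20, ops=200):
    """Random reads and writes inside one preallocated file --
    existing-file I/O, the case where GFS was a little faster."""
    path = os.path.join(MOUNT, "big.dat")
    with open(path, "wb") as f:
        f.truncate(size)  # preallocate so we only touch existing data
    t0 = time.time()
    with open(path, "r+b") as f:
        for _ in range(ops):
            f.seek(random.randrange(size - 4096))
            if random.random() < 0.5:
                f.write(b"y" * 4096)
            else:
                f.read(4096)
    return time.time() - t0
```

Running both against a GFS mount and an ext3 mount over the same iSCSI target gives a rough feel for the metadata-versus-data split described above.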

I did not run a lot of benchmarks on raw reads and writes; the real testing effort was to see how well GFS would work with email.  Read on for details....

We wrote a tool that would take over 5 Linux workstations; each would run 10 processes, each opening 10 simultaneous SMTP connections, generating random mail messages and sending them to the 3 hosts running the same set of GFS file systems.  There were 1000 test users receiving the random spam.  Each inbox is in the user's home directory, on a GFS file system.  During this test, the 3 hosts were processing 3300 messages per minute.
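
The push tool itself was nothing exotic.  A minimal sketch of one worker process is below; the host names, user naming scheme and message sizes are made-up placeholders, not our actual configuration:

```python
"""Sketch of one push-tool worker: generate random messages and
deliver them over SMTP to the mail hosts in round-robin fashion.
MAIL_HOSTS and the addresses are illustrative assumptions."""
import random
import smtplib
import string
from email.message import EmailMessage

MAIL_HOSTS = ["mail1.example.edu", "mail2.example.edu", "mail3.example.edu"]
N_USERS = 1000  # test users receiving the random spam

def random_message():
    """Build one random message addressed to a random test user."""
    msg = EmailMessage()
    user = "testuser%04d" % random.randrange(N_USERS)
    msg["From"] = "pusher@example.edu"
    msg["To"] = "%s@example.edu" % user
    msg["Subject"] = "load test %06x" % random.getrandbits(24)
    body = "".join(random.choice(string.ascii_letters + " \n")
                   for _ in range(random.randrange(200, 2000)))
    msg.set_content(body)
    return msg

def push(n_messages):
    """Send n_messages, one SMTP connection per message, spreading the
    load across the hosts.  (Blocks unless a real MTA is listening.)"""
    for i in range(n_messages):
        host = MAIL_HOSTS[i % len(MAIL_HOSTS)]
        with smtplib.SMTP(host) as smtp:
            smtp.send_message(random_message())
```

In the real runs, 10 processes per workstation each kept 10 connections open concurrently, which is what got the aggregate rate up to 3300 messages per minute.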

We hacked up another tool that uses IMAP to read the messages and verify them.  The reader tool was run in 30 instances on each of 10 workstations, and each instance was configured to fetch and delete a message every 2 seconds.

I put a delay in the push tool to slow the message rate to 1500 messages per minute.  During this test there were 300 IMAP clients reading and deleting mail, and 1500 messages per minute being delivered to the 1000 users.  The bottleneck was the LDAP server, which was very busy looking up aliases: the 3 mail servers had load averages around 7, while the LDAP server's was 15.  We were testing with OpenLDAP 2.2.26 with a Berkeley DB back end.  The GFS lock server (GULM) had a load of around 0.5.  During another test, we ran 600 IMAP clients with the same 1500 messages/minute.
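
The arithmetic behind those client counts is easy to double-check (these figures come from the description above, not from tool output):

```python
# Sanity check of the load figures quoted above.

# Push side: 5 workstations x 10 processes x 10 SMTP connections each.
pushers = 5 * 10 * 10
assert pushers == 500

# Read side: 30 reader instances on each of 10 workstations.
readers = 10 * 30
assert readers == 300

# Each reader fetches and deletes one message every 2 seconds,
# so the drain rate in messages per minute is:
read_rate = readers * 60 // 2
assert read_rate == 9000  # comfortably above the 1500/min being pushed
```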

This benchmark showed us that the GFS cluster can process around 5 times more email than the current campus mail server, an IBM RS/6000 M80 running AIX.  The GFS cluster is also much more scalable and can stay up in the face of hardware failures.

How much data are you working with?
I should have run more df's during these tests.  At one point there was over 20GB of mailboxes for the 1000 users being read and written.

The 3 months has been continuous stable operation? No data corruption,
long-running fscks, etc.?
We have not had any corruption problems.  Early on, we did find a bug in the GFS fsck, but got a patch from Red Hat to fix it.  The only reason I ran fsck was to get the experience of running it; the system did not crash.  You have to shut down the entire cluster to run it, and I wanted to see how long it would take to fsck all of the file systems we had set up at that time.  The current version of GFS for Enterprise Linux 3 has the patch in place.  The only times we have had problems were when I was running tests or tinkering with Kerberos, LDAP, NSS, nscd and other user-related tools.  GFS itself has performed well.

The LeftHand Networks NSMs and the GFS hosts are on 3 gigabit Ethernet switches, and each has 2 bonded Ethernet interfaces.  Any of the 3 switches, 6 servers or 6 NSMs can fail and the system will stay up.  If you are logged into a server that fails, you will have to log back in, and the load balancer will connect you to a host that is up.  The IMAP, POP, HTTP, SMTP, SSH, telnet, FTP and other user traffic is on a third Ethernet interface that connects to the campus net.  The iSCSI traffic between the servers and the NSMs is on a separate VLAN.
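
For what it's worth, the bonded pairs are plain Linux channel bonding.  On a Red Hat system of that vintage the configuration looks roughly like this; the interface names, addresses and the active-backup mode shown here are illustrative assumptions, not our exact setup:

```
# /etc/modules.conf -- load the bonding driver for bond0
alias bond0 bonding
options bond0 mode=1 miimon=100   # active-backup, link check every 100 ms

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=10.0.0.11
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0  (and likewise for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

With each bond's two slaves cabled to different switches, losing any single switch or NIC leaves the link up.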

I expect to put this system in production early January.  We are moving test users over from the old mail server now and plan to move the entire campus before classes start in January.

How long did it take to set up?
I have been working on this system for a little over a year.  It includes 6 servers running GFS (1 for user logins, 2 Apache web servers, and 3 POP/IMAP/SMTP servers), 6 network storage modules from LeftHand Networks, 2 Kerberos servers, 2 LDAP servers and 2 load balancers.  Most of my time has been spent on Kerberos and LDAP.

And lastly any tips?
Sign up for the mailing lists, read the manuals and do your homework.  This system did not just pop out of the box running.  It could be argued that we should have gone to IBM and bought more p-series servers, AIX and GPFS, but that comes down to the question: which do you have more of, time or money?

