Re: Design question about VG / LV in a clustered environment

Jeff Sturme wrote:

>> We ditched CLVM but kept GFS.  It felt like CLVM had too many limitations to make it worthwhile. 

Would you elaborate on this for me please? I understand the "damn, forgot to start clvmd on that node...." type of annoyance, but what were your burning issues? I'm not convinced that there's a performance drawback which is specifically clvmd-related, but maybe I'm naïve. Thanks....Nick G

Nick Geovanis
US Cellular/Kforce Inc
e. Nicholas.Geovanis@xxxxxxxxxxxxxx

-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of linux-cluster-request@xxxxxxxxxx
Sent: Wednesday, December 07, 2011 11:00 AM
To: linux-cluster@xxxxxxxxxx
Subject: Linux-cluster Digest, Vol 92, Issue 4

Send Linux-cluster mailing list submissions to
	linux-cluster@xxxxxxxxxx

To subscribe or unsubscribe via the World Wide Web, visit
	https://www.redhat.com/mailman/listinfo/linux-cluster
or, via email, send a message with subject or body 'help' to
	linux-cluster-request@xxxxxxxxxx

You can reach the person managing the list at
	linux-cluster-owner@xxxxxxxxxx

When replying, please edit your Subject line so it is more specific than "Re: Contents of Linux-cluster digest..."


Today's Topics:

   1. Design question about VG / LV in a clustered environment
      (Nicolas Ross)
   2. Re: Design question about VG / LV in a clustered environment
      (Jeff Sturm)
   3. cluster 3.1.8 released (Digimer)


----------------------------------------------------------------------

Message: 1
Date: Tue, 6 Dec 2011 13:39:39 -0500
From: "Nicolas Ross" <rossnick-lists@xxxxxxxxxxx>
To: "linux clustering" <linux-cluster@xxxxxxxxxx>
Subject: Design question about VG / LV in a clustered environment
Message-ID: <6C60C76401934E22A74971C46A813064@versa>
Content-Type: text/plain; format=flowed; charset="iso-8859-1";
	reply-type=original

Hi !

Over the last couple of months we have had a few problems with the way we designed our clustered filesystems, and we are planning a redesign of the filesystems and how they are used.

Our cluster is composed of 8 nodes, connected via fibre channel to a RAID enclosure where we have six pairs of 1 TB drives in mirrors, giving six 1 TB physical volumes.

First of all, our services that run on the cluster live inside directories. For example, the webserver for a given application runs from
/CyberCat/WebServer/(...) That directory contains all the executables (Apache and PHP, for example) and the related data, except for the databases. /CyberCat is a single GFS partition containing several other services.

This filesystem and another one like it, containing services for some other clients, occupy a single VG composed of 2 PVs (2 TB total). Each of the remaining 4 PVs is its own 1 TB VG, and each of those VGs contains a single LV used for database servers.

For availability reasons, we are planning to split the /CyberCat FS (and the other one like it) into several smaller filesystems, one for each service.

That way, if we ever need to run a filesystem check on one filesystem, or take it offline for any other unplanned reason, the other services won't be affected.
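Concretely, the split I have in mind would look something like this (the volume, cluster, and mount names below are just examples, not our real ones):

```shell
# Carve one LV per service out of the existing VG, then put a
# GFS2 filesystem on it.  "-t" takes cluster_name:fs_name from
# cluster.conf; "-j 8" allocates one journal per node.
lvcreate -L 100G -n lv_webserver vg_cyber
mkfs.gfs2 -p lock_dlm -t mycluster:webserver -j 8 /dev/vg_cyber/lv_webserver

# Mount on each node (or let the cluster fs resource do it):
mount -t gfs2 /dev/vg_cyber/lv_webserver /CyberCat/WebServer
```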

So here are my questions:

1. First of all, is this a bad idea?

2. Are there any disadvantages to a single volume group composed of many physical volumes? That would let us move the extents of a logical volume from one physical volume to another, so load can be rebalanced when needed.
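What I have in mind for question 2 would be roughly this (device paths below are hypothetical):

```shell
# Grow the VG with additional PVs so all extents share one pool:
vgextend vg_cyber /dev/mapper/mpath2 /dev/mapper/mpath3

# Move all extents of one LV off a busy PV onto a quieter one:
pvmove -n lv_databases /dev/mapper/mpath0 /dev/mapper/mpath2

# Inspect which PV each LV's extents now live on:
pvs --segments
```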

Thanks for the input. 



------------------------------

Message: 2
Date: Wed, 7 Dec 2011 03:33:35 +0000
From: Jeff Sturm <jeff.sturm@xxxxxxxxxx>
To: linux clustering <linux-cluster@xxxxxxxxxx>
Subject: Re: Design question about VG / LV in a clustered environment
Message-ID:
	<B1B9801C5CBC954680D0374CC4EEABA51178D668@MailNode2.eprize.local>
Content-Type: text/plain; charset="us-ascii"

> -----Original Message-----
> From: linux-cluster-bounces@xxxxxxxxxx 
> [mailto:linux-cluster-bounces@xxxxxxxxxx]
> On Behalf Of Nicolas Ross
> Sent: Tuesday, December 06, 2011 1:40 PM
>
> For availability reasons, we are planning to split the /CyberCat
> (and the other one like it) FS into several smaller filesystems,
> one for each service.

[snip]

> 1. First of all, is this a bad idea ?

Right or wrong, that's how we do it.

Apart from availability, you can tune the fs appropriately depending on how you use it.  GFS2 dropped some tunables, I think, but you can still mount with "noatime" (assuming your application doesn't rely on atime) and tune some things like block size.  Some of our GFS filesystems are also read-only on certain nodes, so we take advantage of spectator mounts for those.
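For example, per-filesystem tuning might look like this (cluster name, fs names, and device paths are made up):

```shell
# mkfs-time tuning: "-b" sets the block size (4096 is the default),
# "-t" is cluster_name:fs_name, "-j" is the journal count:
mkfs.gfs2 -p lock_dlm -t mycluster:webdata -b 4096 -j 8 /dev/vg0/lv_webdata

# Normal read-write mount, skipping atime updates:
mount -t gfs2 -o noatime /dev/vg0/lv_webdata /data

# Spectator (read-only, journal-less) mount on a node that never writes:
mount -t gfs2 -o spectator /dev/vg0/lv_webdata /data
```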

> 2. Are there any disadvantages to a single volume group composed
> of many physical volumes, enabling us to move the extents of a logical
> volume from one physical volume to another, so load can be rebalanced when needed?

Can't say, really.  We ditched CLVM but kept GFS.  It felt like CLVM had too many limitations to make it worthwhile.  It was straightforward to just export a LUN from our SAN for each file system, and that allows us to take advantage of the SAN's native snapshot facility.
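With the LUN-per-filesystem approach there is no LVM layer at all; each exported LUN becomes a GFS filesystem directly (names below are illustrative):

```shell
# The SAN exports one LUN per filesystem; multipathd presents it,
# and mkfs runs straight on the multipath device -- no PV/VG/LV:
mkfs.gfs2 -p lock_dlm -t mycluster:appdata -j 8 /dev/mapper/lun_appdata
```

Resizing then happens on the SAN side, and snapshots use the array's native facility rather than LVM snapshots.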

-Jeff





------------------------------

Message: 3
Date: Tue, 06 Dec 2011 23:45:32 -0500
From: Digimer <linux@xxxxxxxxxxx>
To: linux clustering <linux-cluster@xxxxxxxxxx>
Subject:  cluster 3.1.8 released
Message-ID: <4EDEEF6C.2040002@xxxxxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-1

Welcome to the cluster 3.1.8 release.

This release addresses several bugs and includes a patch to improve RRP configuration handling. DLM+SCTP (the kernel counterpart of RRP) is still under testing; feedback is always appreciated.

The new source tarball can be downloaded here:

https://fedorahosted.org/releases/c/l/cluster/cluster-3.1.8.tar.xz

ChangeLog:

https://fedorahosted.org/releases/c/l/cluster/Changelog-3.1.8

To report bugs or issues:

   https://bugzilla.redhat.com/

Would you like to meet the cluster team or members of its community?

   Join us on IRC (irc.freenode.net #linux-cluster) and share your
   experience with other system administrators and power users.

Thanks and congratulations to everyone who contributed to this milestone.

Happy clustering,
Digimer



------------------------------

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

End of Linux-cluster Digest, Vol 92, Issue 4
********************************************



