Re: clustered web service using Apache resource?


> -----Original Message-----
> From: linux-cluster-bounces@xxxxxxxxxx 
> [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Liam
> Sent: Wednesday, November 16, 2011 21:36
> To: linux-cluster@xxxxxxxxxx
> Subject:  clustered web service using Apache
> Hello,
> I've set up a 5 node cluster using RHEL 6.1 and the HA &
> Storage add-ons, with one service group (so far) containing an
> address resource and an Apache resource.  The service is
> working fine, but only on the first node (which happens to be
> where luci is running).

What does a "clustat" issued on the node where the resource is
active display?

Can clustat be issued on the other nodes (especially those which
are part of the failover domain of the service that the apache
resource belongs to) as well?
If not, then clurgmgrd probably isn't running on them yet (i.e.
the rgmanager init service hasn't been started).
But to be able to relocate the service (i.e. your HA webserver;
it's unfortunate that "service" names both the init script and
the RHCS resource group here),
rgmanager must likewise be running on those nodes.
If "service rgmanager status" reports something like "stopped" or
"not running", issue a "service rgmanager start".
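As a quick sketch, you could check (and if need be start) rgmanager on all members from one admin shell. The node names here are placeholders, not taken from your setup, and the commands are only printed as a dry run:

```shell
#!/bin/sh
# Node names are assumptions; replace with your actual cluster members.
NODES="node1 node2 node3 node4 node5"
for n in $NODES; do
  # Printed as a dry run; drop the leading 'echo' to run it over ssh.
  echo ssh "$n" "service rgmanager status || service rgmanager start"
done
```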

> All the /etc/cluster/cluster.conf files match, but I only
> see the /etc/cluster/apache directory on that node. 

That sounds odd to me; usually such a directory doesn't exist.
Is that where you have your shared storage (LV/GFS), i.e. the fs
resource of your webserver service, mounted?
I would rather choose a different mountpoint.
But I think I understand why you want to mount it there: you only
want to keep one central httpd.conf file, I guess?
I'm not sure, because I don't use the RHCS apache resource agent
(RA) on my clusters yet.

Let me see what parameters it cares about:

# /usr/share/cluster/apache.sh meta-data | grep 'parameter name'
        <parameter name="name" primary="1">
        <parameter name="server_root">
        <parameter name="config_file">
        <parameter name="httpd_options">
        <parameter name="shutdown_wait">

Yes, it looks as if you can point it at a central config via the
config_file attribute.
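For illustration, a hedged sketch of what such a service section
might look like in cluster.conf. Service name, domain, address,
and paths are placeholders, not taken from your setup (and note
that config_file is interpreted relative to server_root):

```xml
<rm>
        <service name="webserver" domain="mydomain" autostart="1">
                <ip address="192.168.1.100" monitor_link="1"/>
                <apache name="httpd" server_root="/etc/httpd"
                        config_file="conf/httpd.conf"
                        shutdown_wait="5"/>
        </service>
</rm>
```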

> I have one failover domain that includes all nodes.  All nodes
> have the httpd RPM and the GFS2 filesystem containing the apache
> service files mounted (from /etc/fstab), but I don't seem to be
> able to get the service to move over to another node through the
> Conga interface.  I haven't tried anything at the command-line yet.

In the shell see "clusvcadm -h" for usage info.
You could also relocate from the shell with e.g.

# clusvcadm -r <name_of_your_webserver_service> -m <name_of_target_node>

From another shell terminal you could follow the relocation
progress with e.g.

# clustat -i3

and additionally run "tail -f" or "less +F" on /var/log/messages
on both nodes,
i.e. the one that releases the service/resources and the one that
takes them over.

> In C. Hohberger's Red Hat SUMMIT tutorial slides, I see he used a
> Script instead of an Apache resource.  I also found a statement
> in KB DOC-5897 that an apache web server clustered service needs
> an address, a script, and a filesystem.  Why a Script instead of
> an Apache resource?

I would assume because the RHCS apache RA is tailored to the
httpd RPM from the RHEL repository.
As said, I'm not using the apache RA so far.
But I encountered a similar issue with the RHCS tomcat RA.
Because our customer wanted a special Tomcat release build with
different paths, start/stop scripts, environment etc., it would
have been much more work to adapt the RHCS tomcat RA to those
special needs than to write our own script-based RA.

> Is using the Apache resource why I can't get the
> service to launch on another node?

If you hold the httpd.conf and all apache lib modules etc.
centralised on your shared storage as a clustered resource and
provide the mountpoint on each node, it should work.
On the other hand, if you stick to the dirs and files as provided
by the RHEL httpd RPM, you yourself must take care of keeping any
config changes to your httpd.conf in sync across all cluster
nodes. That way should also work.
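If you go the second route, syncing could be sketched like this. The node names are placeholders, not taken from your setup, and the commands are only printed as a dry run:

```shell
#!/bin/sh
# Sketch: push the local httpd.conf to the other cluster nodes.
# Node names are assumptions; replace with your actual members.
CONF=/etc/httpd/conf/httpd.conf
NODES="node2 node3 node4 node5"
for n in $NODES; do
  # Printed as a dry run; drop the leading 'echo' to actually copy.
  echo rsync -av "$CONF" "root@$n:$CONF"
done
```

Remember to reload or restart httpd (or the clustered service) on whichever node is active after the config changes.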
But you really ought to watch the output of clurgmgrd in
/var/log/messages on the nodes involved in the failed service
relocation.

Linux-cluster mailing list
