[Linux-HA] execv error when to execute a heartbeat RA httpd

Serge.Dubrouski at fjcomm.com Serge.Dubrouski at fjcomm.com
Tue Oct 11 10:51:55 MDT 2005


Looks like I was wrong. In this case, the reason it looks for RA scripts
in /usr/local/heartbeat/etc/ha.d/resource.d would be the --with-initdir=DIR
configure option.
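For illustration, this is the path the thread suggests lrmd ends up searching — a sketch based only on what is reported here (the build used --prefix=/usr/local/heartbeat), not on the actual configure logic:

```shell
# Assumption from this thread: with --prefix=/usr/local/heartbeat and no
# explicit --with-initdir=DIR, heartbeat-class RAs are searched for under
# the prefix rather than under /etc.
PREFIX=/usr/local/heartbeat
echo "${PREFIX}/etc/ha.d/resource.d"
# -> /usr/local/heartbeat/etc/ha.d/resource.d
```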

linux-ha-bounces at lists.linux-ha.org wrote on 10/11/2005 10:45:43 AM:

> Hmm, I'm confused...
>
> So, if heartbeat is invoking the httpd from
> /usr/local/heartbeat/etc/ha.d/resource.d, then my configuration is not
> implementing a version 2 cluster?  Note, I renamed my haresources to
> _haresources after I ran haresources2cib.py to create the cib.xml file,
> so heartbeat should not "see" it.
>
> PJ
>
> Serge.Dubrouski at fjcomm.com wrote:
>
> >That means that you don't use 2.0 CIB features and just configured your
> >resources in the haresources file.
> >
> >Serge.
> >
> >linux-ha-bounces at lists.linux-ha.org wrote on 10/11/2005 10:04:20 AM:
> >
> >
> >
> >>Ah, yes, this is true...however, httpd script does exist in both
> >>/etc/rc.d/init.d and /etc/init.d.
> >>
> >>Actually, I copied the httpd script to
> >>/usr/local/heartbeat/etc/ha.d/resource.d and that seemed to do the
> >>trick.
> >>
> >>Thanks,
> >>
> >>PJ
> >>
> >>Serge.Dubrouski at fjcomm.com wrote:
> >>
> >>
> >>
> >>>Hi -
> >>>
> >>>The path in which it looks for RA scripts depends on the type of the
> >>>service. For OCF scripts it's /usr/lib/ocf/resource.d/<provider>; for
> >>>LSB it's /etc/init.d (not /etc/rc.d/init.d) by default.
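Serge's class-to-directory mapping can be sketched as a small helper. The ocf and lsb directories are the defaults he cites; the heartbeat-class directory is an assumption based on where PJ's script was eventually found in this thread:

```shell
# Sketch of the RA-class -> search-directory mapping described above.
# ocf takes a provider argument; the other directories are fixed defaults.
ra_dir() {
  case "$1" in
    ocf)       echo "/usr/lib/ocf/resource.d/$2" ;;
    lsb)       echo "/etc/init.d" ;;
    heartbeat) echo "/etc/ha.d/resource.d" ;;  # relocates under --prefix
    *)         return 1 ;;
  esac
}
ra_dir ocf heartbeat   # -> /usr/lib/ocf/resource.d/heartbeat
ra_dir lsb             # -> /etc/init.d
```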
> >>>
> >>>Serge.
> >>>
> >>>linux-ha-bounces at lists.linux-ha.org wrote on 10/11/2005 09:14:28 AM:
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>>Hi all,
> >>>>
> >>>>I've compiled version 2.0.2 (with --prefix=/usr/local/heartbeat) onto
> >>>>RHAS v3 and am able to run a version 1, two-node Apache cluster.
> >>>>However, once I attempt to run a version 2 cluster, I repeatedly get
> >>>>"execv error when to execute a heartbeat RA httpd" errors...
> >>>>
> >>>>crmd[27033]: 2005/10/11_10:43:49 info: mask(lrm.c:do_lrm_rsc_op):
> >>>>Performing op start on group_1:IPaddr_1
> >>>>IPaddr[27045]:  2005/10/11_10:43:49 INFO: /sbin/ifconfig eth1:0
> >>>>10.252.1.50 netmask 255.255.255.0
> >>>>IPaddr[27045]:  2005/10/11_10:43:49 INFO: Sending Gratuitous Arp for
> >>>>10.252.1.50 on eth1:0 [eth1]
> >>>>IPaddr[27045]:  2005/10/11_10:43:49 INFO:
> >>>>/usr/local/heartbeat/lib/heartbeat/send_arp -i 500 -r 10 -p
> >>>>/usr/local/heartbeat/var/run/heartbeat/rsctmp/send_arp/send_arp-10.252.1.50
> >>>>eth1 10.252.1.50 auto 10.252.1.50 ffffffffffff
> >>>>crmd[27033]: 2005/10/11_10:43:49 info: mask(lrm.c:do_lrm_rsc_op):
> >>>>Performing op monitor on group_1:IPaddr_1
> >>>>crmd[27033]: 2005/10/11_10:43:49 WARN: lrm_get_rsc(653): got a return
> >>>>code HA_FAIL from a reply message of getrsc with function
> >>>>get_ret_from_msg.
> >>>>crmd[27033]: 2005/10/11_10:43:49 WARN: lrm_get_rsc(653): got a return
> >>>>code HA_FAIL from a reply message of getrsc with function
> >>>>get_ret_from_msg.
> >>>>crmd[27033]: 2005/10/11_10:43:49 info: mask(lrm.c:do_lrm_rsc_op):
> >>>>Performing op start on group_1:httpd
> >>>>lrmd[27120]: 2005/10/11_10:43:49 ERROR: execv error when to execute a
> >>>>heartbeat RA httpd.
> >>>>lrmd[27120]: 2005/10/11_10:43:49 ERROR: Cause: No such file or
> >>>>directory.
> >>>>crmd[27033]: 2005/10/11_10:43:49 ERROR: mask(lrm.c:do_lrm_event): LRM
> >>>>operation (4) start on group_1:httpd ERROR: unknown error
> >>>>crmd[27033]: 2005/10/11_10:43:49 info: mask(lrm.c:do_lrm_rsc_op):
> >>>>Performing op stop on group_1:httpd
> >>>>
> >>>>The last 4 lines repeat ad nauseam.  My first thought was that lrmd is
> >>>>unable to find the httpd init script, but the httpd init script is in
> >>>>the usual /etc/rc.d/init.d.
> >>>>
> >>>>Any ideas?
> >>>>
> >>>>Thanks,
> >>>>
> >>>>Phil Juels
> >>>>
> >>>>---- cib.xml (generated by haresources2cib.py) -----
> >>>><cib admin_epoch="0" have_quorum="true" num_peers="2"
> >>>>origin="hpcgg-grd1" last_written="Tue Oct 11 10:44:31 2005"
> >>>>debug_source="finalize_join"
> >>>>dc_uuid="8a9fa544-185d-44c5-ae5a-63dab9df49a3" ccm_transition="2"
> >>>>generated="true" epoch="3" num_updates="282">
> >>>>  <configuration>
> >>>>    <crm_config>
> >>>>      <nvpair id="transition_idle_timeout"
> >>>>name="transition_idle_timeout" value="120s"/>
> >>>>      <nvpair id="symmetric_cluster" name="symmetric_cluster"
> >>>>value="true"/>
> >>>>      <nvpair id="no_quorum_policy" name="no_quorum_policy"
> >>>>value="stop"/>
> >>>>      <nvpair id="suppress_cib_writes" name="suppress_cib_writes"
> >>>>value="false"/>
> >>>>    </crm_config>
> >>>>    <nodes>
> >>>>      <node id="306d4c0a-4d7a-43b0-b2e6-fa5ab74ae435"
> >>>>uname="hpcgg-grd1" type="member"/>
> >>>>      <node id="8a9fa544-185d-44c5-ae5a-63dab9df49a3"
> >>>>uname="hpcgg-grd2" type="member"/>
> >>>>    </nodes>
> >>>>    <resources>
> >>>>      <group id="group_1">
> >>>>        <primitive class="ocf" id="IPaddr_1" provider="heartbeat"
> >>>>type="IPaddr">
> >>>>          <operations>
> >>>>            <op id="1" interval="5s" name="monitor" timeout="5s"/>
> >>>>          </operations>
> >>>>          <instance_attributes>
> >>>>            <attributes>
> >>>>              <nvpair name="ip" value="10.252.1.50"
> >>>>id="505082bd-6b77-4a5d-81d6-d15dbfb7f0f9"/>
> >>>>            </attributes>
> >>>>          </instance_attributes>
> >>>>        </primitive>
> >>>>        <primitive class="heartbeat" id="httpd" provider="heartbeat"
> >>>>type="httpd">
> >>>>          <operations>
> >>>>            <op id="ffc5baeb-7049-4b3e-ad24-63ab6f45bb8a"
> >>>>interval="120s" name="monitor" timeout="60s"/>
> >>>>          </operations>
> >>>>        </primitive>
> >>>>      </group>
> >>>>    </resources>
> >>>>    <constraints>
> >>>>      <rsc_location id="rsc_location_group_1" rsc="group_1">
> >>>>        <rule id="prefered_location_group_1" score="100">
> >>>>          <expression attribute="#uname" operation="eq"
> >>>>value="hpcgg-grd1" id="88963502-6999-4452-9477-f8b390ae5b30"/>
> >>>>        </rule>
> >>>>      </rsc_location>
> >>>>    </constraints>
> >>>>  </configuration>
> >>>>  <status>
> >>>>    <node_state uname="hpcgg-grd2" in_ccm="true"
> >>>>id="8a9fa544-185d-44c5-ae5a-63dab9df49a3" join="member"
> >>>>origin="do_lrm_query" ha="active" crmd="online" expected="member"/>
> >>>>    <node_state join="member" uname="hpcgg-grd1" ha="active"
> >>>>in_ccm="true" crmd="online" origin="do_lrm_query" expected="down"
> >>>>shutdown="1129041824" id="306d4c0a-4d7a-43b0-b2e6-fa5ab74ae435">
> >>>>      <lrm>
> >>>>        <lrm_resources>
> >>>>          <lrm_resource rsc_state="running" last_op="monitor"
> >>>>id="group_1:IPaddr_1" op_status="0" rc_code="0">
> >>>>            <lrm_rsc_op operation="start"
> >>>>transition_key="0:3cfbe2bc-12eb-4103-a0c1-a00ba874823f"
> >>>>id="group_1:IPaddr_1_start_0" op_status="0" call_id="2" rc_code="0"
> >>>>origin="do_update_resource"
> >>>>transition_magic="0:0:3cfbe2bc-12eb-4103-a0c1-a00ba874823f"
> >>>>rsc_state="running"/>
> >>>>            <lrm_rsc_op operation="monitor"
> >>>>transition_key="0:3cfbe2bc-12eb-4103-a0c1-a00ba874823f"
> >>>>id="group_1:IPaddr_1_monitor_5000" op_status="0" call_id="3"
> >>>>rc_code="0"
> >>>>origin="do_update_resource"
> >>>>transition_magic="0:0:3cfbe2bc-12eb-4103-a0c1-a00ba874823f"
> >>>>rsc_state="running"/>
> >>>>          </lrm_resource>
> >>>>          <lrm_resource op_status="4" rc_code="1"
> >>>>rsc_state="stop_failed" last_op="stop" id="group_1:httpd">
> >>>>            <lrm_rsc_op operation="start"
> >>>>transition_key="0:3cfbe2bc-12eb-4103-a0c1-a00ba874823f"
> >>>>id="group_1:httpd_start_0" op_status="4" call_id="4" rc_code="1"
> >>>>origin="do_update_resource"
> >>>>transition_magic="4:0:3cfbe2bc-12eb-4103-a0c1-a00ba874823f"
> >>>>rsc_state="start_failed"/>
> >>>>            <lrm_rsc_op operation="stop" origin="do_update_resource"
> >>>>rsc_state="stop_failed" rc_code="1" op_status="4"
> >>>>id="group_1:httpd_stop_0"
> >>>>transition_key="16:3cfbe2bc-12eb-4103-a0c1-a00ba874823f"
> >>>>transition_magic="4:16:3cfbe2bc-12eb-4103-a0c1-a00ba874823f"
> >>>>call_id="20"/>
> >>>>          </lrm_resource>
> >>>>        </lrm_resources>
> >>>>      </lrm>
> >>>>    </node_state>
> >>>>  </status>
> >>>></cib>
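Note that the generated primitive above declares httpd with class="heartbeat", which is why lrmd searches the heartbeat RA directory (under the custom prefix) instead of /etc/init.d. One possible alternative, a sketch rather than a tested fix, would be declaring it as an LSB resource so the existing init script is used where it already lives (subject to the --with-initdir setting used at build time):

```xml
<!-- sketch: lsb class makes lrmd exec the init script, e.g. /etc/init.d/httpd -->
<primitive class="lsb" id="httpd" type="httpd">
  <operations>
    <op id="ffc5baeb-7049-4b3e-ad24-63ab6f45bb8a"
        interval="120s" name="monitor" timeout="60s"/>
  </operations>
</primitive>
```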
> >>>>
> >>>>
> >>>>
> >>>>_______________________________________________
> >>>>Linux-HA mailing list
> >>>>Linux-HA at lists.linux-ha.org
> >>>>http://lists.linux-ha.org/mailman/listinfo/linux-ha
> >>>>See also: http://linux-ha.org/ReportingProblems
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>>
> >>>
> >>>
> >>
> >>
> >>
> >
> >
> >
>
>



