[Linux-HA] problem filesys-resource and drbd CentOS 5.3 HA-v2

Testuser SST fatcharly at gmx.de
Thu Jul 16 05:42:11 MDT 2009


Hi again,

I've got a filesystem resource that depends on a DRBD resource. The DRBD resource starts up, but the filesystem resource won't. At the end of /var/log/messages there is a message saying "No set matching id=master-8a31d70e-2bbd-46b7-b01b-7df93b1e4b18 in status". I did a full restart of the heartbeat service on node cluster02 to get the complete "action log".
I have also put a crm_verify of my config below it.
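
If it helps: as far as I understand it, the id in that message ("master-8a31d70e-2bbd-46b7-b01b-7df93b1e4b18") is the master-score attribute set that the drbd RA tries to write into the status section via crm_master for node cluster02, whose uuid is exactly that. A rough way to see whether that set ever shows up there (just a sketch, not sure this is the proper way) would be something like:

  # dump the status section of the live CIB and look for the master score set
  cibadmin -Q -o status | grep "master-8a31d70e-2bbd-46b7-b01b-7df93b1e4b18"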

here are my constraints:
<constraints>
 <rsc_order id="order_filesys_drbd" from="resource_filesys" action="start" to="ms_drbd" to_action="promote"/>
 <rsc_colocation id="colocation_filesys_drbd" from="resource_filesys" to="ms_drbd" to_role="master" score="INFINITY"/>
 <rsc_order id="order_apache_after_IP" from="resource_apache" action="start" type="after" to="resource_IP" to_action="start"/>
 <rsc_colocation id="colocation_Apache_IP" from="resource_apache" to="resource_IP" score="INFINITY"/>
</constraints>
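
In case someone wants to double-check against the running cluster, I think the constraint section and the verification can be pulled from the live CIB roughly like this (a sketch, commands as I understand them on 2.1.3):

  # show the constraints exactly as they are stored in the live CIB
  cibadmin -Q -o constraints
  # re-run the verification against the live CIB, more verbose
  crm_verify -L -V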

and here are the two resources:
drbd:
 <master_slave id="ms_drbd" notify="true" globally_unique="false">
  <meta_attributes id="ms_drbd_meta_attrs">
   <attributes>
    <nvpair id="ms_drbd_metaattr_clone_max" name="clone_max" value="2"/>
    <nvpair id="ms_drbd_metaattr_clone_node_max" name="clone_node_max" value="1"/>
    <nvpair id="ms_drbd_metaattr_master_max" name="master_max" value="1"/>
    <nvpair id="ms_drbd_metaattr_master_node_max" name="master_node_max" value="1"/>
    <nvpair id="ms_drbd_metaattr_notify" name="notify" value="true" />
    <nvpair id="ms_drbd_metaattr_globally_unique" name="globally_unique" value="false"/>
   </attributes>
  </meta_attributes>
 <primitive id="resource_drbd" class="ocf" type="drbd" provider="heartbeat">
  <instance_attributes id="resource_drbd_instance_attr">
   <attributes>
    <nvpair id="pairid_drbdresource_attributes" name="drbd_resource" value="webspace0"/>
   </attributes>
  </instance_attributes>
 </primitive>
</master_slave>


and filesys:

<primitive id="resource_filesys" class="ocf" type="Filesystem" provider="heartbeat">
  <meta_attributes id="resource_filesys_meta_attrs">
   <attributes>
    <nvpair name="target_role" id="resource_filesys_metaattr_target_role" value="stopped"/>
   </attributes>
  </meta_attributes>
 <instance_attributes id="resource_filesys_instance_attrs">
  <attributes>
   <nvpair id="nvpair_attrs_filesys_type_res" name="fstype" value="ext3"/>
   <nvpair id="nvpair_attrs_filesys_dev_res" name="device" value="/dev/drbd1"/>
   <nvpair id="nvpair_attrs_filesys_mnt_res" name="directory" value="/clusterfs"/>
  </attributes>
 </instance_attributes>
</primitive>
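
One thing I'm not sure about: the meta_attributes above set target_role="stopped", while the on-disk CIB dump further down also contains a second instance_attributes block (id="resource_filesys") with target_role="started", and I don't know which of the two wins. To list every target_role value currently stored in the resources section, something like this should do (sketch):

  # grep all target_role settings out of the resources section of the live CIB
  cibadmin -Q -o resources | grep target_role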




/var/log/messages
Jul 16 13:12:05 cluster02 heartbeat: [7908]: info: Core process 7915 exited. 5 remaining
Jul 16 13:12:05 cluster02 heartbeat: [7908]: info: Core process 7914 exited. 4 remaining
Jul 16 13:12:05 cluster02 heartbeat: [7908]: info: Core process 7913 exited. 3 remaining
Jul 16 13:12:05 cluster02 heartbeat: [7908]: info: Core process 7912 exited. 2 remaining
Jul 16 13:12:05 cluster02 heartbeat: [7908]: info: Core process 7911 exited. 1 remaining
Jul 16 13:12:05 cluster02 heartbeat: [7908]: info: cluster02 Heartbeat shutdown complete.
Jul 16 13:12:06 cluster02 logd: [7887]: info: logd_term_write_action: received SIGTERM
Jul 16 13:12:06 cluster02 logd: [7887]: info: Exiting write process
Jul 16 13:12:06 cluster02 logd: [9070]: info: Waiting for pid=7886 to exit
Jul 16 13:12:07 cluster02 logd: [9070]: info: Pid 7886 exited
Jul 16 13:12:42 cluster02 logd: [9093]: info: logd started with default configuration.
Jul 16 13:12:42 cluster02 logd: [9094]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Jul 16 13:12:42 cluster02 logd: [9093]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Jul 16 13:12:42 cluster02 heartbeat: [9114]: info: Version 2 support: on
Jul 16 13:12:42 cluster02 heartbeat: [9114]: WARN: File /etc/ha.d/haresources exists.
Jul 16 13:12:42 cluster02 heartbeat: [9114]: WARN: This file is not used because crm is enabled
Jul 16 13:12:42 cluster02 heartbeat: [9114]: WARN: Logging daemon is disabled --enabling logging daemon is recommended
Jul 16 13:12:42 cluster02 heartbeat: [9114]: info: **************************
Jul 16 13:12:42 cluster02 heartbeat: [9114]: info: Configuration validated. Starting heartbeat 2.1.3
Jul 16 13:12:42 cluster02 heartbeat: [9115]: info: heartbeat: version 2.1.3
Jul 16 13:12:43 cluster02 heartbeat: [9115]: info: Heartbeat generation: 1246455859
Jul 16 13:12:43 cluster02 heartbeat: [9115]: info: glib: ucast: write socket priority set to IPTOS_LOWDELAY on eth1
Jul 16 13:12:43 cluster02 heartbeat: [9115]: info: glib: ucast: bound send socket to device: eth1
Jul 16 13:12:43 cluster02 heartbeat: [9115]: info: glib: ucast: bound receive socket to device: eth1
Jul 16 13:12:43 cluster02 heartbeat: [9115]: info: glib: ucast: started on port 694 interface eth1 to 192.168.95.5
Jul 16 13:12:43 cluster02 heartbeat: [9115]: info: glib: ping heartbeat started.
Jul 16 13:12:43 cluster02 heartbeat: [9115]: info: G_main_add_TriggerHandler: Added signal manual handler
Jul 16 13:12:43 cluster02 heartbeat: [9115]: info: G_main_add_TriggerHandler: Added signal manual handler
Jul 16 13:12:43 cluster02 heartbeat: [9115]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jul 16 13:12:43 cluster02 heartbeat: [9115]: info: Local status now set to: 'up'
Jul 16 13:12:44 cluster02 heartbeat: [9115]: info: Link 192.168.1.9:192.168.1.9 up.
Jul 16 13:12:44 cluster02 heartbeat: [9115]: info: Status update for node 192.168.1.9: status ping
Jul 16 13:12:45 cluster02 heartbeat: [9115]: info: Link cluster01:eth1 up.
Jul 16 13:12:45 cluster02 heartbeat: [9115]: info: Status update for node cluster01: status active
Jul 16 13:12:45 cluster02 heartbeat: [9115]: info: Comm_now_up(): updating status to active
Jul 16 13:12:45 cluster02 heartbeat: [9115]: info: Local status now set to: 'active'
Jul 16 13:12:45 cluster02 heartbeat: [9115]: info: Starting child client "/usr/lib/heartbeat/ipfail" (498,496)
Jul 16 13:12:45 cluster02 heartbeat: [9115]: info: Starting child client "/usr/lib/heartbeat/ccm" (498,496)
Jul 16 13:12:45 cluster02 heartbeat: [9115]: info: Starting child client "/usr/lib/heartbeat/cib" (498,496)
Jul 16 13:12:45 cluster02 heartbeat: [9115]: info: Starting child client "/usr/lib/heartbeat/lrmd -r" (0,496)
Jul 16 13:12:45 cluster02 heartbeat: [9115]: info: Starting child client "/usr/lib/heartbeat/stonithd" (0,496)
Jul 16 13:12:45 cluster02 heartbeat: [9115]: info: Starting child client "/usr/lib/heartbeat/attrd" (498,496)
Jul 16 13:12:45 cluster02 heartbeat: [9115]: info: Starting child client "/usr/lib/heartbeat/crmd" (498,496)
Jul 16 13:12:45 cluster02 heartbeat: [9115]: info: Starting child client "/usr/lib/heartbeat/mgmtd -v" (0,496)
Jul 16 13:12:45 cluster02 heartbeat: [9126]: info: Starting "/usr/lib/heartbeat/ipfail" as uid 498  gid 496 (pid 9126)
Jul 16 13:12:45 cluster02 heartbeat: [9127]: info: Starting "/usr/lib/heartbeat/ccm" as uid 498  gid 496 (pid 9127)
Jul 16 13:12:45 cluster02 heartbeat: [9128]: info: Starting "/usr/lib/heartbeat/cib" as uid 498  gid 496 (pid 9128)
Jul 16 13:12:45 cluster02 heartbeat: [9129]: info: Starting "/usr/lib/heartbeat/lrmd -r" as uid 0  gid 496 (pid 9129)
Jul 16 13:12:45 cluster02 heartbeat: [9130]: info: Starting "/usr/lib/heartbeat/stonithd" as uid 0  gid 496 (pid 9130)
Jul 16 13:12:45 cluster02 heartbeat: [9131]: info: Starting "/usr/lib/heartbeat/attrd" as uid 498  gid 496 (pid 9131)
Jul 16 13:12:45 cluster02 heartbeat: [9132]: info: Starting "/usr/lib/heartbeat/crmd" as uid 498  gid 496 (pid 9132)
Jul 16 13:12:45 cluster02 heartbeat: [9133]: info: Starting "/usr/lib/heartbeat/mgmtd -v" as uid 0  gid 496 (pid 9133)
Jul 16 13:12:45 cluster02 stonithd: [9130]: info: G_main_add_SignalHandler: Added signal handler for signal 10
Jul 16 13:12:45 cluster02 stonithd: [9130]: info: G_main_add_SignalHandler: Added signal handler for signal 12
Jul 16 13:12:45 cluster02 attrd: [9131]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Jul 16 13:12:45 cluster02 attrd: [9131]: info: register_with_ha: Hostname: cluster02
Jul 16 13:12:45 cluster02 crmd: [9132]: info: main: CRM Hg Version: node: 552305612591183b1628baa5bc6e903e0f1e26a3 
Jul 16 13:12:45 cluster02 crmd: [9132]: info: crmd_init: Starting crmd
Jul 16 13:12:45 cluster02 crmd: [9132]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Jul 16 13:12:45 cluster02 crmd: [9132]: info: G_main_add_TriggerHandler: Added signal manual handler
Jul 16 13:12:45 cluster02 mgmtd: [9133]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Jul 16 13:12:45 cluster02 mgmtd: [9133]: info: G_main_add_SignalHandler: Added signal handler for signal 10
Jul 16 13:12:45 cluster02 mgmtd: [9133]: info: G_main_add_SignalHandler: Added signal handler for signal 12
Jul 16 13:12:45 cluster02 mgmtd: [9133]: WARN: lrm_signon: can not initiate connection
Jul 16 13:12:45 cluster02 mgmtd: [9133]: info: login to lrm: 0, ret:0
Jul 16 13:12:45 cluster02 crmd: [9132]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jul 16 13:12:45 cluster02 heartbeat: [9115]: ERROR: api_process_request: bad request [getrsc]
Jul 16 13:12:45 cluster02 heartbeat: [9115]: ERROR: MSG: Dumping message with 5 fields
Jul 16 13:12:45 cluster02 heartbeat: [9115]: ERROR: MSG[0] : [t=hbapi-req]
Jul 16 13:12:45 cluster02 heartbeat: [9115]: ERROR: MSG[1] : [reqtype=getrsc]
Jul 16 13:12:45 cluster02 heartbeat: [9115]: ERROR: MSG[2] : [dest=cluster02]
Jul 16 13:12:45 cluster02 heartbeat: [9115]: ERROR: MSG[3] : [pid=9126]
Jul 16 13:12:45 cluster02 heartbeat: [9115]: ERROR: MSG[4] : [from_id=ipfail]
Jul 16 13:12:45 cluster02 attrd: [9131]: info: register_with_ha: UUID: 8a31d70e-2bbd-46b7-b01b-7df93b1e4b18
Jul 16 13:12:45 cluster02 stonithd: [9130]: info: Signing in with heartbeat.
Jul 16 13:12:45 cluster02 cib: [9128]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Jul 16 13:12:45 cluster02 cib: [9128]: info: G_main_add_TriggerHandler: Added signal manual handler
Jul 16 13:12:45 cluster02 cib: [9128]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jul 16 13:12:45 cluster02 lrmd: [9129]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Jul 16 13:12:45 cluster02 cib: [9128]: info: main: Retrieval of a per-action CIB: disabled
Jul 16 13:12:45 cluster02 ipfail: [9126]: ERROR: No managed resources
Jul 16 13:12:45 cluster02 ccm: [9127]: info: Hostname: cluster02
Jul 16 13:12:45 cluster02 lrmd: [9129]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jul 16 13:12:45 cluster02 cib: [9128]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Jul 16 13:12:45 cluster02 heartbeat: [9115]: WARN: Managed /usr/lib/heartbeat/ipfail process 9126 exited with return code 100.
Jul 16 13:12:45 cluster02 stonithd: [9130]: notice: /usr/lib/heartbeat/stonithd start up successfully.
Jul 16 13:12:45 cluster02 lrmd: [9129]: info: G_main_add_SignalHandler: Added signal handler for signal 10
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk] <cib admin_epoch="0" have_quorum="false" ignore_dtd="false" num_peers="0" cib_feature_revision="2.0" generated="false" epoch="79" num_updates="1" cib-last-written="Thu Jul 16 13:10:00 2009">
Jul 16 13:12:45 cluster02 stonithd: [9130]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jul 16 13:12:45 cluster02 lrmd: [9129]: info: G_main_add_SignalHandler: Added signal handler for signal 12
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]   <configuration>
Jul 16 13:12:45 cluster02 lrmd: [9129]: info: Started.
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]     <crm_config>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]       <cluster_property_set id="cib-bootstrap-options">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         <attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="2.1.3-node: 552305612591183b1628baa5bc6e903e0f1e26a3"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1247656474"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         </attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]       </cluster_property_set>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]     </crm_config>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]     <nodes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]       <node id="8161ed6e-8a0d-48f3-a580-ee9a46e4b109" uname="cluster01" type="normal">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         <instance_attributes id="nodes-8161ed6e-8a0d-48f3-a580-ee9a46e4b109">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           <attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair id="standby-8161ed6e-8a0d-48f3-a580-ee9a46e4b109" name="standby" value="on"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           </attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         </instance_attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]       </node>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]       <node id="8a31d70e-2bbd-46b7-b01b-7df93b1e4b18" uname="cluster02" type="normal">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         <instance_attributes id="nodes-8a31d70e-2bbd-46b7-b01b-7df93b1e4b18">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           <attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair id="standby-8a31d70e-2bbd-46b7-b01b-7df93b1e4b18" name="standby" value="off"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           </attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         </instance_attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]       </node>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]     </nodes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]     <resources>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]       <primitive class="ocf" type="IPaddr2" provider="heartbeat" id="resource_IP">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         <meta_attributes id="resource_IP_meta_attrs">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           <attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair id="id_stickiness" name="resource_stickiness" value="100"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair id="id_failure" name="resource_failure_stickiness" value="-30"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           </attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         </meta_attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         <instance_attributes id="resource_IP_instance_attrs">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           <attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair name="ip" id="ip_id" value="192.168.3.212"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           </attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         </instance_attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         <instance_attributes id="resource_IP">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           <attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair name="target_role" id="resource_IP-target_role" value="started"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           </attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         </instance_attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]       </primitive>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]       <primitive id="resource_apache" class="ocf" type="apache" provider="heartbeat">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         <instance_attributes id="resource_apache_instance_attrs">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           <attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair id="id_config" name="configfile" value="/etc/httpd/conf/httpd.conf"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair id="id_bin" name="httpd" value="/usr/sbin/httpd"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           </attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         </instance_attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         <operations>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           <op id="op_apache" name="monitor" interval="60s" timeout="20s" disabled="false" role="Started" prereq="quorum" on_fail="restart"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         </operations>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         <instance_attributes id="resource_apache">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           <attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair id="resource_apache-target_role" name="target_role" value="started"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           </attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         </instance_attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]       </primitive>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]       <primitive id="resource_filesys" class="ocf" type="Filesystem" provider="heartbeat">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         <meta_attributes id="resource_filesys_meta_attrs">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           <attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair name="target_role" id="resource_filesys_metaattr_target_role" value="stopped"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           </attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         </meta_attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         <instance_attributes id="resource_filesys_instance_attrs">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           <attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair id="nvpair_attrs_filesys_type_res" name="fstype" value="ext3"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair id="nvpair_attrs_filesys_dev_res" name="device" value="/dev/drbd1"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair id="nvpair_attrs_filesys_mnt_res" name="directory" value="/clusterfs"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           </attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         </instance_attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         <instance_attributes id="resource_filesys">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           <attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair id="resource_filesys-target_role" name="target_role" value="started"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           </attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         </instance_attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]       </primitive>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]       <master_slave id="ms_drbd" notify="true" globally_unique="false">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         <meta_attributes id="ms_drbd_meta_attrs">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           <attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair id="ms_drbd_metaattr_clone_max" name="clone_max" value="2"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair id="ms_drbd_metaattr_clone_node_max" name="clone_node_max" value="1"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair id="ms_drbd_metaattr_master_max" name="master_max" value="1"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair id="ms_drbd_metaattr_master_node_max" name="master_node_max" value="1"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair id="ms_drbd_metaattr_notify" name="notify" value="true"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <nvpair id="ms_drbd_metaattr_globally_unique" name="globally_unique" value="false"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           </attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         </meta_attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         <primitive id="resource_drbd" class="ocf" type="drbd" provider="heartbeat">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           <instance_attributes id="resource_drbd_instance_attr">
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             <attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]               <nvpair id="pairid_drbdresource_attributes" name="drbd_resource" value="webspace0"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]             </attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]           </instance_attributes>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]         </primitive>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]       </master_slave>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]     </resources>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]     <constraints>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]       <rsc_order id="order_apache_after_IP" from="resource_apache" type="after" to="resource_IP" action="start" to_action="start"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]       <rsc_colocation from="resource_apache" score="INFINITY" id="colocation_Apache_IP" to="resource_IP"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]       <rsc_order id="order_filesys_drbd" from="resource_filesys" action="start" to="ms_drbd" to_action="promote"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]       <rsc_colocation from="resource_filesys" to_role="master" score="INFINITY" id="colocation_filesys_drbd" to="ms_drbd"/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]     </constraints>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]   </configuration>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk]   <status/>
Jul 16 13:12:45 cluster02 cib: [9128]: info: log_data_element: readCibXmlFile: [on-disk] </cib>
Jul 16 13:12:45 cluster02 cib: [9128]: info: startCib: CIB Initialization completed successfully
Jul 16 13:12:45 cluster02 cib: [9128]: info: cib_register_ha: Signing in with Heartbeat
Jul 16 13:12:45 cluster02 cib: [9128]: info: cib_register_ha: FSA Hostname: cluster02
Jul 16 13:12:45 cluster02 cib: [9128]: info: ccm_connect: Registering with CCM...
Jul 16 13:12:45 cluster02 cib: [9128]: WARN: ccm_connect: CCM Activation failed
Jul 16 13:12:45 cluster02 cib: [9128]: WARN: ccm_connect: CCM Connection failed 1 times (30 max)
Jul 16 13:12:46 cluster02 mgmtd: [9133]: info: init_crm
Jul 16 13:12:48 cluster02 cib: [9128]: info: ccm_connect: Registering with CCM...
Jul 16 13:12:48 cluster02 cib: [9128]: WARN: ccm_connect: CCM Activation failed
Jul 16 13:12:48 cluster02 cib: [9128]: WARN: ccm_connect: CCM Connection failed 2 times (30 max)
Jul 16 13:12:48 cluster02 ccm: [9127]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Jul 16 13:12:51 cluster02 cib: [9128]: info: ccm_connect: Registering with CCM...
Jul 16 13:12:52 cluster02 cib: [9128]: info: cib_init: Starting cib mainloop
Jul 16 13:12:52 cluster02 cib: [9136]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Jul 16 13:12:52 cluster02 cib: [9136]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Jul 16 13:12:52 cluster02 cib: [9136]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
Jul 16 13:12:53 cluster02 cib: [9128]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
Jul 16 13:12:53 cluster02 cib: [9128]: info: mem_handle_event: instance=6, nodes=2, new=2, lost=0, n_idx=0, new_idx=0, old_idx=4
Jul 16 13:12:53 cluster02 cib: [9128]: info: cib_ccm_msg_callback: PEER: cluster01
Jul 16 13:12:53 cluster02 cib: [9128]: info: cib_ccm_msg_callback: PEER: cluster02
Jul 16 13:12:53 cluster02 cib: [9128]: info: cib_client_status_callback: Status update: Client cluster02/cib now has status [join]
Jul 16 13:12:53 cluster02 cib: [9128]: info: cib_client_status_callback: Status update: Client cluster02/cib now has status [online]
Jul 16 13:12:53 cluster02 cib: [9128]: info: cib_client_status_callback: Status update: Client cluster01/cib now has status [online]
Jul 16 13:12:53 cluster02 cib: [9128]: info: cib_null_callback: Setting cib_diff_notify callbacks for mgmtd: on
Jul 16 13:12:53 cluster02 cib: [9128]: info: cib_null_callback: Setting cib_refresh_notify callbacks for crmd: on
Jul 16 13:12:53 cluster02 crmd: [9132]: info: do_cib_control: CIB connection established
Jul 16 13:12:53 cluster02 mgmtd: [9133]: info: Started.
Jul 16 13:12:53 cluster02 crmd: [9132]: info: register_with_ha: Hostname: cluster02
Jul 16 13:12:53 cluster02 cib: [9136]: info: write_cib_contents: Wrote version 0.79.1 of the CIB to disk (digest: 88934adfc1b69e60c758a1047483028e)
Jul 16 13:12:53 cluster02 cib: [9136]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Jul 16 13:12:53 cluster02 cib: [9136]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
Jul 16 13:12:53 cluster02 crmd: [9132]: info: register_with_ha: UUID: 8a31d70e-2bbd-46b7-b01b-7df93b1e4b18
Jul 16 13:12:54 cluster02 crmd: [9132]: info: populate_cib_nodes: Requesting the list of configured nodes
Jul 16 13:12:54 cluster02 crmd: [9132]: notice: populate_cib_nodes: Node: cluster02 (uuid: 8a31d70e-2bbd-46b7-b01b-7df93b1e4b18)
Jul 16 13:12:55 cluster02 crmd: [9132]: notice: populate_cib_nodes: Node: cluster01 (uuid: 8161ed6e-8a0d-48f3-a580-ee9a46e4b109)
Jul 16 13:12:55 cluster02 crmd: [9132]: info: do_ha_control: Connected to Heartbeat
Jul 16 13:12:55 cluster02 crmd: [9132]: info: do_ccm_control: CCM connection established... waiting for first callback
Jul 16 13:12:55 cluster02 crmd: [9132]: info: do_started: Delaying start, CCM (0000000000100000) not connected
Jul 16 13:12:55 cluster02 crmd: [9132]: info: crmd_init: Starting crmd's mainloop
Jul 16 13:12:55 cluster02 crmd: [9132]: notice: crmd_client_status_callback: Status update: Client cluster02/crmd now has status [online]
Jul 16 13:12:56 cluster02 crmd: [9132]: notice: crmd_client_status_callback: Status update: Client cluster02/crmd now has status [online]
Jul 16 13:12:56 cluster02 crmd: [9132]: notice: crmd_client_status_callback: Status update: Client cluster01/crmd now has status [online]
Jul 16 13:12:56 cluster02 crmd: [9132]: info: do_started: Delaying start, CCM (0000000000100000) not connected
Jul 16 13:12:56 cluster02 crmd: [9132]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
Jul 16 13:12:56 cluster02 crmd: [9132]: info: mem_handle_event: instance=6, nodes=2, new=2, lost=0, n_idx=0, new_idx=0, old_idx=4
Jul 16 13:12:56 cluster02 crmd: [9132]: info: crmd_ccm_msg_callback: Quorum (re)attained after event=NEW MEMBERSHIP (id=6)
Jul 16 13:12:56 cluster02 crmd: [9132]: info: ccm_event_detail: NEW MEMBERSHIP: trans=6, nodes=2, new=2, lost=0 n_idx=0, new_idx=0, old_idx=4
Jul 16 13:12:56 cluster02 crmd: [9132]: info: ccm_event_detail: 	CURRENT: cluster01 [nodeid=0, born=2]
Jul 16 13:12:56 cluster02 crmd: [9132]: info: ccm_event_detail: 	CURRENT: cluster02 [nodeid=1, born=6]
Jul 16 13:12:56 cluster02 crmd: [9132]: info: ccm_event_detail: 	NEW:     cluster01 [nodeid=0, born=2]
Jul 16 13:12:56 cluster02 crmd: [9132]: info: ccm_event_detail: 	NEW:     cluster02 [nodeid=1, born=6]
Jul 16 13:12:56 cluster02 crmd: [9132]: info: do_started: The local CRM is operational
Jul 16 13:12:56 cluster02 crmd: [9132]: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_CCM_CALLBACK origin=do_started ]
Jul 16 13:12:58 cluster02 attrd: [9131]: info: main: Starting mainloop...
Jul 16 13:12:58 cluster02 crmd: [9132]: info: update_dc: Set DC to cluster01 (2.0)
Jul 16 13:12:59 cluster02 crmd: [9132]: info: update_dc: Set DC to cluster01 (2.0)
Jul 16 13:12:59 cluster02 cib: [9128]: info: cib_replace_notify: Replaced: 0.79.1 -> 0.79.90 from <null>
Jul 16 13:12:59 cluster02 crmd: [9132]: info: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Jul 16 13:12:59 cluster02 crmd: [9132]: info: populate_cib_nodes: Requesting the list of configured nodes
Jul 16 13:13:00 cluster02 crmd: [9132]: notice: populate_cib_nodes: Node: cluster02 (uuid: 8a31d70e-2bbd-46b7-b01b-7df93b1e4b18)
Jul 16 13:13:01 cluster02 crmd: [9132]: notice: populate_cib_nodes: Node: cluster01 (uuid: 8161ed6e-8a0d-48f3-a580-ee9a46e4b109)
Jul 16 13:13:01 cluster02 crmd: [9132]: info: do_lrm_rsc_op: Performing op=resource_IP_monitor_0 key=4:5:3bb4df20-1fb6-499b-95d4-a303c6e0a002)
Jul 16 13:13:01 cluster02 lrmd: [9129]: info: rsc:resource_IP: monitor
Jul 16 13:13:01 cluster02 crmd: [9132]: info: do_lrm_rsc_op: Performing op=resource_apache_monitor_0 key=5:5:3bb4df20-1fb6-499b-95d4-a303c6e0a002)
Jul 16 13:13:01 cluster02 lrmd: [9129]: info: rsc:resource_apache: monitor
Jul 16 13:13:01 cluster02 crmd: [9132]: info: do_lrm_rsc_op: Performing op=resource_filesys_monitor_0 key=6:5:3bb4df20-1fb6-499b-95d4-a303c6e0a002)
Jul 16 13:13:01 cluster02 lrmd: [9129]: info: rsc:resource_filesys: monitor
Jul 16 13:13:01 cluster02 crmd: [9132]: info: do_lrm_rsc_op: Performing op=resource_drbd:0_monitor_0 key=7:5:3bb4df20-1fb6-499b-95d4-a303c6e0a002)
Jul 16 13:13:01 cluster02 lrmd: [9129]: info: rsc:resource_drbd:0: monitor
Jul 16 13:13:01 cluster02 crmd: [9132]: info: process_lrm_event: LRM operation resource_filesys_monitor_0 (call=4, rc=7) complete 
Jul 16 13:13:01 cluster02 crmd: [9132]: info: process_lrm_event: LRM operation resource_IP_monitor_0 (call=2, rc=7) complete 
Jul 16 13:13:01 cluster02 apache[9143]: INFO: apache not running
Jul 16 13:13:01 cluster02 crmd: [9132]: info: process_lrm_event: LRM operation resource_apache_monitor_0 (call=3, rc=7) complete 
Jul 16 13:13:01 cluster02 crmd: [9132]: info: process_lrm_event: LRM operation resource_drbd:0_monitor_0 (call=5, rc=7) complete 
Jul 16 13:13:02 cluster02 crmd: [9132]: info: do_lrm_rsc_op: Performing op=resource_IP_start_0 key=9:5:3bb4df20-1fb6-499b-95d4-a303c6e0a002)
Jul 16 13:13:02 cluster02 lrmd: [9129]: info: rsc:resource_IP: start
Jul 16 13:13:02 cluster02 crmd: [9132]: info: do_lrm_rsc_op: Performing op=resource_drbd:0_start_0 key=12:5:3bb4df20-1fb6-499b-95d4-a303c6e0a002)
Jul 16 13:13:02 cluster02 lrmd: [9129]: info: rsc:resource_drbd:0: start
Jul 16 13:13:03 cluster02 IPaddr2[9299]: INFO: ip -f inet addr add 192.168.3.212/21 brd 192.168.7.255 dev eth0
Jul 16 13:13:03 cluster02 IPaddr2[9299]: INFO: ip link set eth0 up
Jul 16 13:13:03 cluster02 IPaddr2[9299]: INFO: /usr/lib/heartbeat/send_arp -i 200 -r 5 -p /var/run/heartbeat/rsctmp/send_arp/send_arp-192.168.3.212 eth0 192.168.3.212 auto not_used not_used
Jul 16 13:13:03 cluster02 crmd: [9132]: info: process_lrm_event: LRM operation resource_IP_start_0 (call=6, rc=0) complete 
Jul 16 13:13:03 cluster02 kernel: drbd1: disk( Diskless -> Attaching ) 
Jul 16 13:13:03 cluster02 kernel: drbd1: Starting worker thread (from cqueue/0 [172])
Jul 16 13:13:03 cluster02 kernel: drbd1: Found 4 transactions (6 active extents) in activity log.
Jul 16 13:13:03 cluster02 kernel: drbd1: max_segment_size ( = BIO size ) = 32768
Jul 16 13:13:03 cluster02 kernel: drbd1: drbd_bm_resize called with capacity == 9374640
Jul 16 13:13:03 cluster02 kernel: drbd1: resync bitmap: bits=1171830 words=36620
Jul 16 13:13:03 cluster02 kernel: drbd1: size = 4577 MB (4687320 KB)
Jul 16 13:13:03 cluster02 kernel: drbd1: reading of bitmap took 2 jiffies
Jul 16 13:13:03 cluster02 kernel: drbd1: recounting of set bits took additional 0 jiffies
Jul 16 13:13:03 cluster02 kernel: drbd1: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
Jul 16 13:13:03 cluster02 kernel: drbd1: disk( Attaching -> UpToDate ) 
Jul 16 13:13:03 cluster02 kernel: drbd1: Writing meta data super block now.
Jul 16 13:13:03 cluster02 kernel: drbd1: conn( StandAlone -> Unconnected ) 
Jul 16 13:13:03 cluster02 kernel: drbd1: Starting receiver thread (from drbd1_worker [9413])
Jul 16 13:13:03 cluster02 kernel: drbd1: receiver (re)started
Jul 16 13:13:03 cluster02 kernel: drbd1: conn( Unconnected -> WFConnection ) 
Jul 16 13:13:03 cluster02 lrmd: [9129]: info: RA output: (resource_drbd:0:start:stdout)  
Jul 16 13:13:03 cluster02 crmd: [9132]: info: process_lrm_event: LRM operation resource_drbd:0_start_0 (call=7, rc=0) complete 
Jul 16 13:13:04 cluster02 crmd: [9132]: info: do_lrm_rsc_op: Performing op=resource_drbd:0_notify_0 key=48:5:3bb4df20-1fb6-499b-95d4-a303c6e0a002)
Jul 16 13:13:04 cluster02 lrmd: [9129]: info: rsc:resource_drbd:0: notify
Jul 16 13:13:04 cluster02 crm_master: [9536]: info: Invoked: /usr/sbin/crm_master -v 10 -l reboot 
Jul 16 13:13:06 cluster02 lrmd: [9129]: info: RA output: (resource_drbd:0:notify:stdout) No set matching id=master-8a31d70e-2bbd-46b7-b01b-7df93b1e4b18 in status 
Jul 16 13:13:06 cluster02 crmd: [9132]: info: process_lrm_event: LRM operation resource_drbd:0_notify_0 (call=8, rc=0) complete 
Jul 16 13:13:07 cluster02 crmd: [9132]: info: do_lrm_rsc_op: Performing op=resource_apache_start_0 key=8:6:3bb4df20-1fb6-499b-95d4-a303c6e0a002)
Jul 16 13:13:07 cluster02 lrmd: [9129]: info: rsc:resource_apache: start
Jul 16 13:13:07 cluster02 crmd: [9132]: info: do_lrm_rsc_op: Performing op=resource_drbd:0_notify_0 key=49:6:3bb4df20-1fb6-499b-95d4-a303c6e0a002)
Jul 16 13:13:07 cluster02 lrmd: [9129]: info: rsc:resource_drbd:0: notify
Jul 16 13:13:07 cluster02 apache[9543]: INFO: apache not running
Jul 16 13:13:07 cluster02 apache[9543]: INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
Jul 16 13:13:07 cluster02 crmd: [9132]: info: process_lrm_event: LRM operation resource_drbd:0_notify_0 (call=10, rc=0) complete 
Jul 16 13:13:08 cluster02 crmd: [9132]: info: do_lrm_rsc_op: Performing op=resource_drbd:0_promote_0 key=13:6:3bb4df20-1fb6-499b-95d4-a303c6e0a002)
Jul 16 13:13:08 cluster02 lrmd: [9129]: info: rsc:resource_drbd:0: promote
Jul 16 13:13:08 cluster02 kernel: drbd1: role( Secondary -> Primary ) 
Jul 16 13:13:08 cluster02 kernel: drbd1: Writing meta data super block now.
Jul 16 13:13:08 cluster02 lrmd: [9129]: info: RA output: (resource_drbd:0:promote:stdout)  
Jul 16 13:13:09 cluster02 crmd: [9132]: info: process_lrm_event: LRM operation resource_apache_start_0 (call=9, rc=0) complete 
Jul 16 13:13:09 cluster02 drbd[9622]: INFO: webspace0 promote: primary succeeded
Jul 16 13:13:09 cluster02 crmd: [9132]: info: process_lrm_event: LRM operation resource_drbd:0_promote_0 (call=11, rc=0) complete 
Jul 16 13:13:10 cluster02 crmd: [9132]: info: do_lrm_rsc_op: Performing op=resource_apache_monitor_60000 key=9:6:3bb4df20-1fb6-499b-95d4-a303c6e0a002)
Jul 16 13:13:10 cluster02 crmd: [9132]: info: do_lrm_rsc_op: Performing op=resource_drbd:0_notify_0 key=50:6:3bb4df20-1fb6-499b-95d4-a303c6e0a002)
Jul 16 13:13:10 cluster02 lrmd: [9129]: info: rsc:resource_drbd:0: notify
Jul 16 13:13:10 cluster02 crmd: [9132]: info: process_lrm_event: LRM operation resource_apache_monitor_60000 (call=12, rc=0) complete 
Jul 16 13:13:10 cluster02 crm_master: [9862]: info: Invoked: /usr/sbin/crm_master -v 10 -l reboot 
Jul 16 13:13:11 cluster02 lrmd: [9129]: info: RA output: (resource_drbd:0:notify:stdout) No set matching id=master-8a31d70e-2bbd-46b7-b01b-7df93b1e4b18 in status 
Jul 16 13:13:11 cluster02 crmd: [9132]: info: process_lrm_event: LRM operation resource_drbd:0_notify_0 (call=13, rc=0) complete 






Jul 16 13:16:51 cluster02 crm_verify: [10020]: info: main: =#=#=#=#= Getting XML =#=#=#=#=
Jul 16 13:16:51 cluster02 crm_verify: [10020]: info: main: Reading XML from: live cluster
Jul 16 13:16:51 cluster02 crm_verify: [10020]: notice: main: Required feature set: 2.0
Jul 16 13:16:51 cluster02 crm_verify: [10020]: info: unpack_nodes: Node cluster01 is in standby-mode
Jul 16 13:16:51 cluster02 crm_verify: [10020]: info: determine_online_status: Node cluster02 is online
Jul 16 13:16:51 cluster02 crm_verify: [10020]: info: determine_online_status: Node cluster01 is standby
Jul 16 13:16:51 cluster02 crm_verify: [10020]: info: unpack_find_resource: Internally renamed resource_drbd:0 on cluster01 to resource_drbd:1
Jul 16 13:16:51 cluster02 crm_verify: [10020]: notice: native_print: resource_IP	(heartbeat::ocf:IPaddr2):	Started cluster02
Jul 16 13:16:51 cluster02 crm_verify: [10020]: notice: native_print: resource_apache	(heartbeat::ocf:apache):	Started cluster02
Jul 16 13:16:51 cluster02 crm_verify: [10020]: notice: native_print: resource_filesys	(heartbeat::ocf:Filesystem):	Stopped 
Jul 16 13:16:51 cluster02 crm_verify: [10020]: notice: clone_print: Master/Slave Set: ms_drbd
Jul 16 13:16:51 cluster02 crm_verify: [10020]: notice: native_print:     resource_drbd:0	(heartbeat::ocf:drbd):	Master cluster02
Jul 16 13:16:51 cluster02 crm_verify: [10020]: notice: native_print:     resource_drbd:1	(heartbeat::ocf:drbd):	Stopped 
Jul 16 13:16:51 cluster02 crm_verify: [10020]: notice: NoRoleChange: Leave resource resource_IP	(cluster02)
Jul 16 13:16:51 cluster02 crm_verify: [10020]: notice: NoRoleChange: Leave resource resource_apache	(cluster02)
Jul 16 13:16:51 cluster02 crm_verify: [10020]: WARN: native_color: Resource resource_drbd:1 cannot run anywhere
Jul 16 13:16:51 cluster02 crm_verify: [10020]: info: master_promotion_order: Merging weights for ms_drbd
Jul 16 13:16:51 cluster02 crm_verify: [10020]: info: master_color: Promoting resource_drbd:0
Jul 16 13:16:51 cluster02 crm_verify: [10020]: info: master_color: ms_drbd: Promoted 1 instances of a possible 1 to master
Jul 16 13:16:52 cluster02 crm_verify: [10020]: WARN: native_color: Resource resource_filesys cannot run anywhere
Jul 16 13:16:52 cluster02 crm_verify: [10020]: info: master_color: ms_drbd: Promoted 1 instances of a possible 1 to master
Jul 16 13:16:52 cluster02 crm_verify: [10020]: notice: DemoteRsc: cluster02	Demote resource_drbd:0
Jul 16 13:16:52 cluster02 crm_verify: [10020]: notice: NoRoleChange: Leave resource resource_drbd:0	(cluster02)
Jul 16 13:16:52 cluster02 crm_verify: [10020]: notice: DemoteRsc: cluster02	Demote resource_drbd:0
Jul 16 13:16:52 cluster02 crm_verify: [10020]: notice: NoRoleChange: Leave resource resource_drbd:0	(cluster02






Any suggestions are welcome.


kind regards

SST