[Linux-HA] CIB Resource Location Preferences

Alex Spengler gaex.ch at gmail.com
Thu Feb 2 08:48:25 MST 2006


Using the group ids actually did it, thanks.

But: this colocation rule only ensures that the resources do not run on the
same host (as expected).
It does not ensure that they run only on a specific node.
I cannot guarantee that my resource_1 runs only on node_1 and resource_2
only on node_2 ...

I tried to combine the colocation rule with location preferences like this:

<rsc_location id="rsc_loc_group2" rsc="httpd_2">
        <rule id="pref_loc_group2" score="INFINITY">
                <expression attribute="#uname" operation="eq" value="big-sSTATS2"/>
        </rule>
        <rule id="not_pref_loc_group2" score="-INFINITY">
                <expression attribute="#uname" operation="eq" value="big-sSTATS1"/>
        </rule>
</rsc_location>

But this doesn't work either. Do I have to upgrade to 2.0.3 to get it working?
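For reference, here is a minimal sketch of the combined constraints I am aiming for, assuming the location rules should also target the group ids rather than the primitives (ids and node names taken from the config quoted below); whether the INFINITY/-INFINITY location rules actually pin the groups like this on my current 2.0.x build is exactly what I am unsure about:

<constraints>
        <!-- keep the two HTTPD groups off the same node (group ids, as suggested) -->
        <rsc_colocation id="colo_fix_HTTPD" from="group_HTTPD_1"
                to="group_HTTPD_2" score="-INFINITY"/>
        <!-- pin group_HTTPD_1 to big-sSTATS1 only -->
        <rsc_location id="rsc_loc_group_httpd_1" rsc="group_HTTPD_1">
                <rule id="pref_loc_group_httpd_1" score="INFINITY">
                        <expression attribute="#uname" operation="eq" value="big-sSTATS1"/>
                </rule>
                <rule id="not_pref_loc_group_httpd_1" score="-INFINITY">
                        <expression attribute="#uname" operation="ne" value="big-sSTATS1"/>
                </rule>
        </rsc_location>
        <!-- pin group_HTTPD_2 to big-sSTATS2 only -->
        <rsc_location id="rsc_loc_group_httpd_2" rsc="group_HTTPD_2">
                <rule id="pref_loc_group_httpd_2" score="INFINITY">
                        <expression attribute="#uname" operation="eq" value="big-sSTATS2"/>
                </rule>
                <rule id="not_pref_loc_group_httpd_2" score="-INFINITY">
                        <expression attribute="#uname" operation="ne" value="big-sSTATS2"/>
                </rule>
        </rsc_location>
</constraints>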


On 2/2/06, Andrew Beekhof <beekhof at gmail.com> wrote:
>
> On 2/1/06, Alex Spengler <gaex.ch at gmail.com> wrote:
> > Hi again
> >
> > Thanks for the colocation hint, I completely overlooked this section in the
> > manual.
> > I have now tested lots of different configs, but didn't manage to get it
> > working.
> > Finally I used the following config:
> >
>
> use the ids of the groups instead...
>
> <rsc_colocation id="colo_fix_HTTPD_1" from="group_HTTPD_1"
>         to="group_HTTPD_2" score="-INFINITY"/>
>
> should do it. (you don't need the reverse; it's logically implied)
>
> > <?xml version="1.0" ?>
> > <cib>
> >   <configuration>
> >     <crm_config>
> >       <nvpair id="transition_idle_timeout" name="transition_idle_timeout" value="120s"/>
> >       <nvpair id="symmetric_cluster" name="symmetric_cluster" value="true"/>
> >       <nvpair id="no_quorum_policy" name="no_quorum_policy" value="stop"/>
> >     </crm_config>
> >     <nodes>
> >       <node id="2d7c9cd6-05e8-4a31-96f0-a7b9faff42fa" uname="big-sSTATS1" type="member"/>
> >       <node id="0d3fbaf7-2ae0-41e4-8e61-c7f3c750f36f" uname="big-sSTATS2" type="member"/>
> >     </nodes>
> >     <resources>
> >       <group id="group_VIP">
> >         <primitive class="ocf" id="IPaddr_1" provider="heartbeat" type="IPaddr">
> >           <operations>
> >             <op id="1" interval="30s" name="monitor" timeout="3s"/>
> >           </operations>
> >           <instance_attributes>
> >             <attributes>
> >               <nvpair name="ip" value=" 10.40.109.200"/>
> >             </attributes>
> >           </instance_attributes>
> >         </primitive>
> >       </group>
> >       <group id="group_HTTPD_1">
> >         <primitive class="heartbeat" id="httpd_1" provider="heartbeat" type="gaex_httpd">
> >           <operations>
> >             <op id="1" interval="30s" name="monitor" timeout="3s"/>
> >           </operations>
> >         </primitive>
> >       </group>
> >       <group id="group_HTTPD_2">
> >         <primitive class="heartbeat" id="httpd_2" provider="heartbeat" type="gaex_httpd">
> >           <operations>
> >             <op id="1" interval="30s" name="monitor" timeout="3s"/>
> >           </operations>
> >         </primitive>
> >       </group>
> >     </resources>
> >     <constraints>
> >       <rsc_colocation id="colo_fix_HTTPD_1" from="httpd_1" to="httpd_2" score="-INFINITY"/>
> >       <rsc_colocation id="colo_fix_HTTPD_2" from="httpd_2" to="httpd_1" score="-INFINITY"/>
> >       <rsc_location id="rsc_loc_group_VIP" rsc="group_VIP">
> >         <rule id="pref_loc_group_VIP" score="100">
> >           <expression attribute="#uname" operation="eq" value="big-sSTATS1"/>
> >         </rule>
> >       </rsc_location>
> >       <rsc_location id="rsc_loc_group_httpd_1" rsc="group_HTTPD_1">
> >         <rule id="pref_loc_group_httpd_1" score="INFINITY">
> >           <expression attribute="#uname" operation="eq" value="big-sSTATS1"/>
> >         </rule>
> >       </rsc_location>
> >       <rsc_location id="rsc_loc_group_httpd_2" rsc="group_HTTPD_2">
> >         <rule id="pref_loc_group_httpd_2" score="INFINITY">
> >           <expression attribute="#uname" operation="eq" value="big-sSTATS2"/>
> >         </rule>
> >       </rsc_location>
> >     </constraints>
> >   </configuration>
> >   <status/>
> > </cib>
> >
> >
> > Result was:
> >
> > Current DC: big-sstats1 (2d7c9cd6-05e8-4a31-96f0-a7b9faff42fa)
> > 2 Nodes configured.
> > 3 Resources configured.
> > ============
> >
> > Node: big-sstats1 (2d7c9cd6-05e8-4a31-96f0-a7b9faff42fa): online
> >         group_VIP:IPaddr_1 (heartbeat::ocf:IPaddr):
> >         group_HTTPD_1:httpd_1 (heartbeat::heartbeat:gaex_httpd):
> >         group_HTTPD_2:httpd_2 (heartbeat::heartbeat:gaex_httpd):
> > Node: big-sSTATS2 (0d3fbaf7-2ae0-41e4-8e61-c7f3c750f36f): OFFLINE
> >
> > Full list of resources:
> > Resource Group: group_VIP
> >     group_VIP:IPaddr_1 (heartbeat::ocf:IPaddr):    big-sstats1 (2d7c9cd6-05e8-4a31-96f0-a7b9faff42fa)
> > Resource Group: group_HTTPD_1
> >     group_HTTPD_1:httpd_1 (heartbeat::heartbeat:gaex_httpd):    big-sstats1 (2d7c9cd6-05e8-4a31-96f0-a7b9faff42fa)
> > Resource Group: group_HTTPD_2
> >     group_HTTPD_2:httpd_2 (heartbeat::heartbeat:gaex_httpd):    big-sstats1 (2d7c9cd6-05e8-4a31-96f0-a7b9faff42fa)
> >
> >
> >
> > Any ideas?
> >
> > cheers,
> > Alex
> >
> >
> >
> >
> > On 1/31/06, Andrew Beekhof <beekhof at gmail.com> wrote:
> > > On 1/31/06, Alex Spengler <gaex.ch at gmail.com> wrote:
> > > > Hi
> > > >
> > > > I'm trying to set up a 2-node cluster running heartbeat with the
> > > > following configuration:
> > > >
> > > > NODE1 running apache & mysql + VIP IP by default
> > > > NODE2 running apache & mysql (VIP IP will be failed over)
> > > >
> > > > The point is that apache & mysql should run on both nodes all the time
> > >
> > > that's an rsc_colocation rule you need then.
> > >
> > > > (I run it with heartbeat to monitor the status easily), but I can't get a
> > > > good cib.xml with resource scoring INFINITY and -INFINITY.
> > > >
> > > > What I've done:
> > > >
> > > > <constraints>
> > > >   <rsc_location id="rsc_loc_group_HTTPD1" rsc="group_HTTPD1">
> > > >     <rule id="pref_run_loc_group_HTTPD_1" score="INFINITY">
> > > >       <expression attribute="#uname" operation="eq" value="Node1"/>
> > > >     </rule>
> > > >     <rule id="pref_run_loc_group_HTTPD1" score="-INFINITY">
> > > >       <expression attribute="#uname" operation="ne" value="Node1"/>
> > > >     </rule>
> > > >   </rsc_location>
> > > > </constraints>
> > > >
> > > > I tried a few other configurations but nothing helped. Either both
> > > > webservers run on whatever node, or neither webserver runs at all.
> > > >
> > > > can anyone help me out here?