Mon 07 Jan 2013 04:27:42 PM UTC, original submission:
I have a series of clusters, each with 4 or more nodes. There is an IPMI STONITH
resource for each node, and each STONITH resource is constrained NOT to run on
the machine it fences, since a node cannot reliably fence itself.
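For context, each fencing resource is an ordinary IPMI STONITH primitive; the anti-location
constraint attached to it is shown further down. The primitive below is only an illustrative
sketch (the external/ipmi agent and every parameter value here are placeholders, since I
cannot post the real configuration):

# illustrative placeholder, not the real resource definition
primitive stonith_foo-db01 stonith:external/ipmi \
    params hostname="foo-db01" ipaddr="192.0.2.10" userid="admin" passwd="secret" \
    op monitor interval="60s"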
When I save the configuration, I get these messages:
foo-db01: WARNING: 4: stonith_foo-db01-loc: referenced node foo-db01 does not exist
foo-db01: WARNING: 4: stonith_foo-db02-loc: referenced node foo-db02 does not exist
foo-db01: WARNING: 4: stonith_foo-pn01-loc: referenced node foo-pn01 does not exist
foo-db01: WARNING: 4: stonith_foo-pn02-loc: referenced node foo-pn02 does not exist
foo-db01: WARNING: 4: stonith_foo-pn03-loc: referenced node foo-pn03 does not exist
foo-db01: WARNING: 4: stonith_foo-pn04-loc: referenced node foo-pn04 does not exist
foo-db01: WARNING: 4: stonith_foo-pn05-loc: referenced node foo-pn05 does not exist
foo-db01: WARNING: 4: stonith_foo-pn06-loc: referenced node foo-pn06 does not exist
Everything works exactly as it should, apart from these bogus messages.
The node names are all correct and are all present in the configuration.
Amusingly enough, foo-db01 is even complaining that it itself doesn't exist.
The rules are created by a script that configures all the machines in the cluster.
The text in the here-document that creates them looks like this:
location stonith_${node}-loc stonith_$node \\
rule \$id="ST_${node}_loc_R" -inf: #uname eq ${node}
or more concretely:
location stonith_foo-db01-loc stonith_foo-db01 \
rule $id="ST_foo-db01_loc_R" -inf: #uname eq foo-db01
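To show where those lines come from, here is a minimal sketch of the loop that generates
them (the $NODES variable, the per-node crm configure call, and the trailing commit are
assumptions made for the sketch, not a copy of our script):

# sketch only: assumes the node names are in $NODES and crmsh is on the PATH
for node in $NODES; do
    crm configure <<EOF
location stonith_${node}-loc stonith_$node \\
rule \$id="ST_${node}_loc_R" -inf: #uname eq ${node}
commit
EOF
done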
These rules all work exactly as they should, but our customer doesn't like the bogus warnings.
Please note that I cannot provide the exact or complete configuration, because our customer
would not allow that. I can assure you, however, that if I remove those constraints the
warnings go away, and if I leave them in I get these warnings.