dlm_controld(8)						       dlm_controld(8)

NAME
       dlm_controld - daemon that configures dlm according to cluster events

SYNOPSIS
       dlm_controld [OPTION]...

DESCRIPTION
       The dlm lives in the kernel, and the cluster infrastructure
       (cluster membership and group management) lives in user space.
       The dlm in the kernel needs to adjust/recover for certain
       cluster events.  It is the job of dlm_controld to receive these
       events and reconfigure the kernel dlm as needed.  dlm_controld
       controls and configures the dlm through sysfs and configfs
       files that are considered dlm-internal interfaces, not a
       general API/ABI.

       The dlm also exports lock state through debugfs so that
       dlm_controld can implement deadlock detection in user space.

CONFIGURATION FILE
       Optional cluster.conf settings are placed in the <dlm> section.
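       For orientation, a minimal sketch of where the <dlm> section
       sits within cluster.conf (the cluster name and version shown
       here are hypothetical placeholders):

         <cluster name="alpha" config_version="1">
           <dlm protocol="tcp" timewarn="500" log_debug="0"/>
         </cluster>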

   Global settings
       The network protocol can be set to "tcp" or  "sctp".   The  default  is
       tcp.

	 <dlm protocol="tcp"/>

       After waiting timewarn centiseconds, the dlm will emit a
       warning via netlink.  This only applies to lockspaces created
       with the DLM_LSFL_TIMEWARN flag, and is used for deadlock
       detection.  The default is 500 (5 seconds).

	 <dlm timewarn="500"/>

       DLM kernel debug messages can be enabled by setting log_debug
       to 1.  The default is 0.

         <dlm log_debug="1"/>

   Disabling resource directory
       Lockspaces usually use a resource directory to keep track of
       which node is the master of each resource.  The dlm can operate
       without the resource directory, though, by statically assigning
       the master of a resource using a hash of the resource name.

         <dlm>
           <lockspace name="foo" nodir="1"/>
         </dlm>

   Lock-server configuration
       The nodir setting can be combined with node weights to create a
       configuration where select node(s) are the master of all
       resources/locks.  These "master" nodes can be viewed as "lock
       servers" for the other nodes.

	 <dlm>
	   <lockspace name="foo" nodir="1">
	     <master name="node01"/>
	   </lockspace>
	 </dlm>

       or,

	 <dlm>
	   <lockspace name="foo" nodir="1">
	     <master name="node01"/>
	     <master name="node02"/>
	   </lockspace>
	 </dlm>

       Lock management will be partitioned among the available
       masters.  There can be any number of masters defined.  The
       designated master nodes will master all resources/locks
       (according to the resource name hash).  When no masters are
       members of the lockspace, the nodes revert to the common,
       fully-distributed configuration.  Recovery is faster, with
       little disruption, when a non-master node joins/leaves.

       There is no special mode in the dlm for this lock-server
       configuration; it is just a natural consequence of combining
       the "nodir" option with node weights.  When a lockspace has
       master nodes defined, each master has a default weight of 1 and
       all non-master nodes have a weight of 0.  Explicit non-zero
       weights can also be assigned to master nodes, e.g.

	 <dlm>
	   <lockspace name="foo" nodir="1">
	     <master name="node01" weight="2"/>
	     <master name="node02" weight="1"/>
	   </lockspace>
	 </dlm>

       In this case, node01 will master 2/3 of the total resources
       (weight 2 out of a total weight of 3) and node02 will master
       the other 1/3.
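       Putting these pieces together, a sketch of one possible <dlm>
       section combining the global settings above with a lock-server
       lockspace (assuming the global attributes and <lockspace>
       children may be combined in a single <dlm> element; the node
       names are placeholders):

         <dlm protocol="tcp" timewarn="500" log_debug="0">
           <lockspace name="foo" nodir="1">
             <master name="node01" weight="2"/>
             <master name="node02" weight="1"/>
           </lockspace>
         </dlm>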

OPTIONS
       -D     Run the daemon in the foreground and print debug
              statements to stdout.

       -K     Enable kernel dlm debugging messages.

       -V     Print the version information and exit.

       -h     Print out a help message describing available options,
              then exit.

SEE ALSO
       groupd(8)

							       dlm_controld(8)