volnotify(8)							  volnotify(8)

NAME
       volnotify - Displays Logical Storage Manager configuration events

SYNOPSIS
       /usr/sbin/volnotify  [-icfdD] [-w wait-time] [-g diskgroup] [-n number]
       [-t timeout]

OPTIONS
       -i     Displays disk group import, deport, and disable events.

       -c     Displays disk group change events.

       -f     Displays plex, volume, and disk detach events.

       -d     Displays disk change events.

       -D     Uses a diagnostic-mode connection to vold. This allows the
              receipt of events when vold is running in disabled mode.
              Access to configuration information is limited when vold is
              running in disabled mode. For most applications, it is better
              to let volnotify print events only when vold is running in
              enabled mode.

       -w wait-time
              Displays waiting events after wait-time seconds with no other
              events.

       -g diskgroup
              Restricts displayed events to those in the indicated disk
              group. The disk group can be specified either as a disk group
              name or a disk group ID.

              If a disk group is specified with -g, volnotify displays only
              disk group-related events.

       -n number
              Displays the indicated number of vold events, then exits.
              Events that are not generated by vold (that is, connect,
              disconnect, and waiting events) do not count toward the
              number of counted events and will not cause an exit to occur.

       -t timeout
              Displays events for up to timeout seconds, then exits. The -n
              and -t options can be combined to specify a maximum number of
              events and a maximum time to wait before exiting.

       If none of the -i, -c, -f,  or  -d  options  are	 specified,  volnotify
       prints all event types.
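
       For example, the following invocation (the disk group name dg1 is
       illustrative) prints only detach events for one disk group, and
       exits after 10 vold events have been displayed or after 600 seconds,
       whichever comes first:

              # volnotify -f -g dg1 -n 10 -t 600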

DESCRIPTION
       The volnotify utility displays events related to disk and
       configuration changes managed by the Logical Storage Manager
       configuration daemon, vold. The volnotify utility displays requested
       event types until stopped by a signal, until a given number of
       events is received, or until a given number of seconds has passed.

       LSM events are sent to the Event Manager (EVM). The following events
       are reported within EVM:

       -i     Disk group import, deport, and disable events

       -c     Disk group change events

       -d     Disk change events

       -f     Plex, volume, and disk detach events

       The LSM volnotify events reported to EVM are configured through the
       rc.config.common variable LSM_EVM_OPTS. The LSM_EVM_OPTS settings
       should not normally be changed, because certain system software
       depends on these values for operation.
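
       To review the current setting without modifying it, the variable can
       be queried with rcmgr(8). This is a sketch; depending on the system,
       the -c switch may be needed to select the clusterwide
       rc.config.common file:

              # rcmgr -c get LSM_EVM_OPTS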

					Note

       In  TruCluster  environments, the volnotify command reports only events
       that occur locally on that node. Therefore,  use	 EVM  to  display  LSM
       events that occur anywhere within the TruCluster environment.

       Subscribers  can	 display LSM events through the LSM volnotify EVM tem‐
       plate called lsm.volnotify.evt.	This EVM template can be used to  dis‐
       play LSM events in both TruCluster and other environments. See EXAMPLES
       for examples of how to retrieve LSM events.

       Each event is displayed as a single-line output record on the  standard
       output.

       Displayed events are:

       connected
              A connection was established with vold. This event type is
              displayed immediately after successful startup and
              initialization of volnotify. A connected event is also
              displayed if the connection to vold is lost and then
              regained. A connected event displayed after a reconnection
              indicates that some events might have been lost.

       disconnected
              The connection to vold was lost. This normally results when
              vold is stopped (such as by the voldctl stop command) or
              killed by a signal. In response to a disconnection, volnotify
              displays a disconnected event, waits until a reconnection
              succeeds, and then displays a connected event.

              A disconnected event is also displayed if vold is not
              accessible at the time volnotify is started. In this case,
              the disconnected event precedes the first connected event.

       lost events
              Due to internal buffer overruns, or other possible problems,
              some events might have been lost.

       vold disabled
              The vold utility was changed to disabled mode. Most
              configuration information will be unavailable until vold is
              changed back to enabled mode.

       vold enabled
              The vold utility was changed to enabled mode. All
              configuration information should now be retrievable.

              The vold disabled and vold enabled events can be retrieved
              only with diagnostic-mode connections to the vold diagnostic
              portal. Use the -D option to obtain a regular diagnostic-mode
              connection.

       waiting
              If the -w option is specified, a waiting event is displayed
              after a defined period with no other events. Shell scripts
              can use waiting messages to collect groups of related, or at
              least nearly simultaneous, events. This can make shell
              scripts more efficient. This can also provide some scripts
              with better input, because sets of detach events, in
              particular, often occur in groups that scripts can relate
              together. This is particularly important when a shell script
              blocks until volnotify produces output, thus requiring output
              to indicate the end of a possible sequence of related events.

       disk group imported
              The named disk group was imported. The disk group ID of the
              imported disk group is groupid.

       disk group deported
              The named disk group was deported.

       disk group disabled
              The named disk group was disabled. A disabled disk group
              cannot be changed, and its records cannot be printed with
              volprint. However, some volumes in a disabled disk group
              might still be usable, although it is unlikely that the
              volumes will be usable after a system reboot. A disk group
              will be disabled as a result of excessive failures, if the
              last disk in the disk group fails, or if errors occur when
              writing to all configuration and log copies in the disk
              group.

       disk group changed
              The configuration for the named disk group was changed. The
              transaction ID for the update was groupid.

       subdisk detached
              The named subdisk in the named disk group was detached when
              an I/O failure was detected during normal volume I/O, or was
              disabled when a disk failure was detected. Subdisk failures
              in a RAID 5 volume or a log subdisk within a mirrored volume
              result in a subdisk detach; other subdisk failures generally
              result in a plex detach.

       plex detached
              The named plex in the named disk group was detached when an
              I/O failure was detected during normal volume I/O, or was
              disabled when a total disk failure was detected.

       volume detached
              The named volume in the named disk group was detached when an
              I/O failure was detected during normal volume I/O, or when a
              total disk failure was detected. Usually, only plexes or
              subdisks are detached as a result of volume I/O failure.
              However, if a volume would become entirely unusable by
              detaching a plex or subdisk, the volume might be detached.

       disk disconnected
              The named disk, with device access name accessname and disk
              media name medianame, was disconnected from the named disk
              group as a result of an apparent total disk failure. Total
              disk failures are checked for automatically when plexes or
              subdisks are detached by kernel failures, or explicitly by
              the voldisk check operation. (See voldisk(8).)

       log detached
              All log copies for the volume (either log plexes for a RAID 5
              volume or log subdisks for a regular mirrored volume) are
              unusable, because an I/O failure or a total disk failure was
              detected.

       disk header changed
              The disk header was changed for the named disk with a device
              access name of accessname. No groupname and groupid names are
              displayed if the disk is not currently in an imported disk
              group.

       RAID 5 volume degraded
              The RAID 5 volume has become degraded due to the loss of one
              subdisk in the raid5 plex of the volume. Access to some parts
              of the volume might be slower than to other parts, depending
              on the location of the failed subdisk and the subsequent I/O
              patterns.

       subdisk relocation needed
              The named subdisk in the named disk group needs to be
              relocated.
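
       As a sketch of the batching technique that waiting events make
       possible (the spool file /tmp/lsm.batch, the 15-second quiet period,
       and the choice of mailing root are illustrative, not prescribed), a
       script can accumulate event lines until a waiting event marks a
       quiet period, then process them as one group:

              volnotify -w 15 | while read code more
              do
                  case $code in
                  waiting)
                      # A quiet period ended: handle the collected batch,
                      # then truncate the illustrative spool file.
                      if [ -s /tmp/lsm.batch ]
                      then
                          mailx -s "LSM events" root < /tmp/lsm.batch
                          : > /tmp/lsm.batch
                      fi
                      ;;
                  *)
                      # Accumulate nearly simultaneous events into a batch.
                      echo "$code $more" >> /tmp/lsm.batch
                      ;;
                  esac
              done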

EXAMPLES
       The following example shell script will	send  mail  to	root  for  all
       detected plex, volume, and disk detaches:

              checkdetach() {
                  d=`volprint -AQdF '%name %nodarec' | awk '$2=="on" {print " " $1}'`
                  p=`volprint -AQpe 'pl_kdetach || pl_nodarec' -F ' %name'`
                  v=`volprint -AQvF ' %name' -e \
                      "((any aslist.pl_kdetach==true) ||
                        (any aslist.pl_nodarec)) &&
                       !(any aslist.pl_stale==false)"`
                  if [ ! -z "$d" ] || [ ! -z "$p" ] || [ ! -z "$v" ]
                  then
                      (
                          cat << EOF
Failures have been detected by the Logical Storage Manager:
EOF
                          [ -z "$d" ] || echo "\nfailed disks:\n$d"
                          [ -z "$p" ] || echo "\nfailed plexes:\n$p"
                          [ -z "$v" ] || echo "\nfailed volumes:\n$v"
                      ) | mailx -s "Logical Storage Manager failures" root
                  fi
              }
              volnotify -f -w 30 | while read code more
              do
                  case $code in
                  waiting) checkdetach;;
                  esac
              done

       (The here-document text and its EOF terminator must begin at the
       start of the line for the script to run as written.)

       The following example shows how to get LSM events from the EVM log:

              # evmget -f "[name *.volnotify]" | evmshow -t "@timestamp @@"

       The following example shows how to get LSM events in real time:

              # evmwatch -f "[name *.volnotify]" | evmshow -t "@timestamp @@"

EXIT STATUS
       The volnotify utility exits with	 a  nonzero  status  if	 an  error  is
       encountered  while communicating with vold.  See volintro(8) for a list
       of standard exit codes.
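
       A calling script can act on the exit status in the usual way (a
       minimal sketch; the message text is illustrative):

              volnotify -t 60
              if [ $? -ne 0 ]
              then
                  echo "volnotify reported an error communicating with vold" >&2
              fi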

SEE ALSO
       Commands: volintro(8), vold(8), voltrace(8)

								  volnotify(8)