sys_attrs_vm man page on DigitalUNIX

sys_attrs_vm(5)						       sys_attrs_vm(5)

NAME
       sys_attrs_vm - system attributes for the vm kernel subsystem

DESCRIPTION
       This  reference page describes system attributes for the Virtual Memory
       (vm) kernel subsystem. See sys_attrs(5) for  general  guidelines	 about
       changing system attributes.

       In the following list, an asterisk (*) precedes the names of attributes
       whose values you can change while the system  is	 running.  Changes  to
       values  of  attributes whose names are not preceded by an asterisk take
       effect only when the system is rebooted.

	      A value that sets no limit (0), a soft  limit  (1),  or  a  hard
	      limit (2) on the resident set size of a process.

	      Default value: 0 (no limit)

	      By default, applications can set a process-specific limit on the
	      number of pages resident in memory by specifying the  RLIMIT_RSS
	      resource	value in a setrlimit() call. However, applications are
	      not required to limit the resident set size  of  a  process  and
	      there  is	 no system-wide default limit. Therefore, the resident
	      set size for a process is limited only by system memory restric‐
	      tions.  If  the  demand  for  memory  exceeds the number of free
	      pages, processes with large resident set sizes are likely candi‐
	      dates for swapping.

              The anon_rss_enforce attribute enables different levels of
              control over process resident set sizes and over when the
              pages that a process is using in anonymous memory are
              swapped out (blocking the process) during times of
              contention for free pages. Setting anon_rss_enforce to
              either 1 or 2 allows you to enforce a system-wide limit on
              resident set size for a process through the
              vm_rss_max_percent attribute. Setting anon_rss_enforce to
              1 (a soft limit) enables finer control over process
              blocking and paging of anonymous memory by allowing you to
              set the vm_rss_block_target and vm_rss_wakeup_target
              attributes.

	      When anon_rss_enforce is set to 2, the resident set size	for  a
	      process	cannot	 exceed	 the  system-wide  limit  set  by  the
	      vm_rss_max_percent attribute or  a  process-specific  limit,  if
	      any,  that is set by an application's setrlimit() call. When the
	      resident set size exceeds either of  these  limits,  the	system
	      starts to swap out pages of anonymous memory that the process is
	      already using to keep the resident set size within the specified
	      limit.

	      When  anon_rss_enforce  is  set  to  1,  any  system-default and
	      process-specific limits on resident set  size  still  apply  and
	      will  cause  swapping  to	 occur	when  exceeded.	 Otherwise,  a
	      process's pages are swapped out when the number of free pages is
	      less  than  the  value of the vm_rss_block_target attribute. The
	      process remains blocked until the number of free	pages  reaches
	      the value of the vm_rss_wakeup_target.

              This attribute supports diskless systems and enables the
              pager to be more responsive. It functions under the
              following conditions:

              -  The diskless driver is loaded and configured.

              -  Diskless system services are part of the Dataless
                 Management Services (DMS). DMS enables systems to run
                 the operating system from a server without requiring a
                 local hard disk on each client system.

              -  The server is serving a realtime pre-emptive kernel.

	      Default value: 0 (off)

	      Maximum value: 1 (on)

	      A	 value	that  enables (1) or disables (0) a soft guard page on
	      the program stack. This allows an application to enter a	signal
	      handler  on  stack overflows, which otherwise would cause a core
	      dump.

	      Default value: 0 (disabled)

	      The enable_yellow_zone attribute is intended for use by  systems
	      programmers  who	are  debugging	kernel	applications,  such as
	      device drivers.

	      Number of 4-MB chunks of memory reserved at boot time for shared
	      memory  use.  This  memory cannot be used for any other purpose,
	      nor can it be returned to the system or reclaimed when not being
	      used.  On NUMA systems, the gh_chunks attribute affects only the
	      first  Resource  Affinity	 Domain	 (RAD).	 See  the  entry   for
	      rad_gh_regions for more information.

	      Default  value:  0  (chunks)  (The  zero value means that use of
	      granularity hints is disabled.)

	      Minimum value: 0

	      Maximum value: 9,223,372,036,854,775,807

              The attributes associated with “granularity hints” (the
              gh_* attributes) are sometimes recommended specifically
              for database servers. Using segmented shared memory (SSM)
              is the alternative to using granularity hints and is
              recommended for most systems. Therefore, if the gh_chunks
              attribute is not set to zero, the ssm_threshold attribute
              of the ipc subsystem should be set to zero. If the
              gh_chunks attribute is set to zero, the ssm_threshold
              attribute should not be set to zero.

              The gh_* attributes, which include gh_chunks, are automatically
	      disabled if the vm_bigpg_enabled attribute  is  set  to  1.  The
	      vm_bigpg_enabled	attribute  turns on “big pages” memory alloca‐
	      tion mode, which provides the advantages of using extended  vir‐
	      tual  page sizes without hard wiring a specific amount of physi‐
	      cal memory at boot time for this purpose.

	      See your database product documentation and the System  Configu‐
	      ration and Tuning manual for more information about using granu‐
	      larity hints or SSM.

              A value that enables (1) or disables (0) a failure return
              by the shmget() function under certain conditions when
              granularity hints is in use. When this attribute is set to
              1, the shmget() function returns a failure if the
              requested segment size is larger than the value of the
              gh_min_seg_size attribute and there is insufficient memory
              allocated by the gh_chunks attribute to satisfy the
              request.

	      Default value: 1 (enabled)

	      A value that specifies whether the memory reserved for granular‐
	      ity  hints is (1) or is not (0) allocated from low physical mem‐
	      ory addresses. Allocation from low physical memory addresses  is
	      useful if you have an odd number of memory boards.

	      Default value: 1 (allocation from low physical memory addresses)

	      Specifies	 whether  the memory reserved for granularity hints is
	      (1) or is not (0) sorted.

	      Default value: 0 (not sorted)

              Size, in bytes, of the segment in which shared memory is
              allocated from the memory reserved for shared memory,
              according to the value of the gh_chunks attribute.

	      Default value: 8,388,608 (bytes, or 8 MB)

	      Minimum value: 0

	      Maximum value: 9,223,372,036,854,775,807

	      Number of pages per thread that are used for stack space in ker‐
	      nel mode.

	      Default value: 2 (pages per thread)

	      Minimum value: 2

	      Maximum value: 3

	      It   is	strongly   recommended	 that	you  not  modify  ker‐
	      nel_stack_pages unless directed to do so by your support	repre‐
	      sentative.  In  the event of a kernel stack not valid halt error
	      that is caused by a kernel stack	overflow  problem,  increasing
	      the  value  of  kernel_stack_pages  may work around the problem.
	      This workaround will not be successful  if  the  error  occurred
	      because the stack pointer became corrupted. In any event, a ker‐
	      nel stack not valid halt error is	 always	 an  unexpected	 error
	      that  should be reported to your support representative for fur‐
	      ther investigation.

	      Number of freed kernel stack pages that  are  saved  for	reuse.
	      Above this limit, freed kernel stack pages are immediately deal‐
	      located.

	      Default value: 5 (pages)

	      Minimum value: 0

	      Maximum value: 2,147,483,647

	      Deallocation of freed kernel stack pages ensures that memory  is
	      available	 for  other  operations.  However,  the processor time
	      required for deallocating freed kernel stack pages has  a	 nega‐
	      tive  performance	 impact	 that might be more noticeable on NUMA
	      systems than on other systems. You can use the  kstack_free_tar‐
	      get   value  to  make  the  most	appropriate  tradeoff  between
	      increased memory consumption and time spent by CPUs in  a	 purge
	      operation.

	      You  can	change	the  value of the kstack_free_target attribute
	      while the system is running.

              A value that enables (1) or disables (0) caching of malloc
              memory on a per-CPU basis.

	      Default value: 1

	      Do  not  modify  the  default  setting for this attribute unless
	      instructed to do so by support personnel or by patch  kit	 docu‐
	      mentation.

	      Default value: 1 (on)

	      Do  not  modify  the  default  setting for this attribute unless
	      instructed to do so by support personnel or by patch  kit	 docu‐
	      mentation.

	      Percentage of the secondary cache that is reserved for anonymous
	      (nonshared) memory.  Increasing the cache for  anonymous	memory
	      reduces	the  cache  space  available  for  file-backed	memory
	      (shared). This attribute is useful only for benchmarking.

	      Default value: 0 (percent)

	      Minimum value: 0

	      Maximum value: 100

	      For  NUMA	 systems,  the	granularity  hints  chunk   size   (in
	      megabytes)  for the Resource Affinity Domain (RAD) identified by
	      n.   There   are	 64   elements	 in   the   attribute	array,
	      rad_gh_regions[0]	 to  rad_gh_regions[63]. Although all elements
	      in the array are visible on all systems, the  kernel  uses  only
	      the  element values corresponding to RADs that exist on the sys‐
	      tem.  See the entry for  the  gh_chunks  attribute  for  general
	      information about granularity hints memory allocation.

	      Default value: 0 (MB) (Granularity hints is disabled.)

              The array of rad_gh_regions[n] attributes replaces the
              gh_chunks attribute, which affects only the first (or, for
              non-NUMA systems, the only) RAD (rad_gh_regions[0])
              supported by the system. Although gh_chunks and the set of
              rad_gh_regions attributes both specify how much memory is
              manipulated through granularity hints memory allocation,
              the unit of measurement for the former is 4-megabyte units
              whereas the unit of measurement for the latter is
              megabytes. Therefore:

	      rad_gh_regions[0] = gh_chunks * 4

	      Setting  the  gh_chunks  attribute,  not	the  rad_gh_regions[0]
	      attribute,  is  recommended if you want to use granularity hints
	      memory allocation on non-NUMA systems.

              The rad_gh_regions[n] attributes are automatically disabled if
	      the vm_bigpg_enabled attribute is set to 1. The vm_bigpg_enabled
	      attribute turns on “big pages”  memory  allocation  mode,	 which
	      provides	the  advantages	 of  using extended virtual page sizes
	      without hard wiring a specific amount of physical memory at boot
	      time for this purpose.

	      A	 value that controls whether user text can or cannot be repli‐
	      cated on multiple CPUs of a NUMA system. When the	 value	is  1,
	      replication  of  user  text  is  enabled.	  When the value is 0,
	      replication of user text is disabled. This  attribute  is	 some‐
	      times used by kernel developers when debugging software for NUMA
	      systems; however, the attribute is not  for  general  use.  (The
	      value  is	 ignored  on  non-NUMA systems and changing it to 0 on
	      NUMA systems might degrade performance.)

	      Default value: 1

	      Do not change the value of this attribute unless	instructed  to
	      do so by support personnel or patch kit instructions.

              The device partitions reserved for swapping. This is a
              comma-separated string (for example,
              /dev/disk/dsk0g,/dev/disk/dsk0d) that can be up to 256
              bytes in length.

	      Percentage  of memory above which the UBC is only borrowing mem‐
	      ory from the virtual memory subsystem.  Paging  does  not	 occur
	      until the UBC has returned all its borrowed pages.

	      Default value: 20 (percent)

	      Minimum value: 0

	      Maximum value: 100

	      Increasing  this	value may increase UBC cache effectiveness and
	      improve throughput; however, the cost is a likely degradation of
	      system response time during a low memory condition.

	      Obsolete; currently ignored by the software.

	      Specifies	 the  number of pages to consolidate before initiating
	      an I/O operation.

	      Default value: 32 (pages)

	      Minimum value: 0

	      Maximum value: 512

	      The default value is appropriate for the vast majority  of  sys‐
	      tems.  Raising  this  value  may improve I/O efficiency if rela‐
	      tively few users and applications write to only a few very large
	      files,  and  there  is  high  probability	 that write operations
	      affect contiguous pages. However, the  cost  is  increased  time
	      spent  in memory (and holding locks for a longer length of time)
	      while the system determines what state pages are	in  and	 which
	      ones can be clustered.

	      A	 threshold value that forces cleanup of AdvFS metadata that is
	      being stored in the UBC. The default setting  forces  return  of
	      pages  containing	 AdvFS	metadata when they reach 70 percent of
	      the UBC.

	      This is not a tuning parameter. Do not modify the	 default  set‐
	      ting  unless directed to do so by support personnel or patch kit
	      instructions.

	      Default value: 70 (percent)

	      Minimum value: 0

	      Maximum value: 100

	      Number of I/O operations (per second) that  the  virtual	memory
	      subsystem	 performs when the number of dirty (modified) pages in
              the UBC exceeds the value of the vm_ubcdirtypercent attribute.

	      Default value: 5 (operations per second)

	      Minimum value: 0

	      Maximum value: 2,147,483,647

	      Maximum percentage of physical memory that the UBC  can  use  at
	      one time.

	      Default value: 100 (percent)

	      Minimum value: 0

	      Maximum value: 100

              It is recommended that you set this attribute to a value
              in the range of 70 to 80 percent. On an overloaded system,
              values higher than 80 can delay the return of excess UBC
              pages to vm and adversely affect performance.

	      Minimum percentage of physical memory that the UBC can use.

	      Default value: 10 (percent)

	      Minimum value: 0

	      Maximum value: 100

	      A value that enables (1) or disables (0) the ability of the task
	      swapper to aggressively swap out idle tasks.

	      Default value: 0 (disabled)

	      Setting this attribute to 1 helps prevent a low-memory condition
	      from occurring and allows more jobs to  be  run  simultaneously.
	      However, interactive response times are likely to be longer on a
	      system that is excessively paging and swapping.

	      The number of asynchronous I/O requests per swap partition  that
	      can  be outstanding at one time.	Asynchronous swap requests are
	      used for pageout operations and for prewriting modified pages.

	      Default value: 4 (requests)

	      Minimum value: 0

	      Maximum value: 2,147,483,647

	      The minimum amount of anonymous memory (in Kbytes) that  a  user
	      process  must  request before the kernel will map a virtual page
	      in the process address space to more  than  one  physical	 page.
	      Anonymous	 memory is requested by calls to mmap(), nmmap(), mal‐
	      loc(), and amalloc().

	      Default value: 64 (Kbytes)

	      Minimum value: 0 (big pages allocation mode disabled for	anony‐
	      mous memory)

	      Consult	with   your  support  representative  before  changing
	      vm_bigpg_anon to a value other than the 64 Kbyte default.

	      The  vm_bigpg_anon  attribute   has   no	 effect	  unless   the
	      vm_bigpg_enabled attribute is set to 1.

	      Currently,  big pages allocation of anonymous memory is not sup‐
	      ported for memory-mapped files.

	      If the anon_rss_enforce attribute (which sets  a	limit  on  the
	      resident	set  size of a process) is set to 1 or 2, it overrides
	      and disables big pages memory allocation mode for anonymous  and
	      stack memory. Make sure that anon_rss_enforce is set to 0 if you
	      want big pages memory allocation to be applied to anonymous  and
	      stack memory.

	      The  master switch that enables (1) or disables (0) memory allo‐
	      cation for user processes in “big pages” mode.

	      Default value: 0 (disabled)

              Big pages memory allocation allows a virtual page in the
              process address space to be mapped to multiple pages in
              the system's physical memory. This mapping can be to 8,
              64, or 512 pages (64, 512, or 4096 Kbytes, respectively)
              of physical memory.

	      Big pages uses threshold values set on a per  memory-type	 basis
	      to  determine whether a memory allocation request is eligible to
	      use the extended page sizes.   The  attributes  that  set	 these
	      thresholds   are	 vm_bigpg_anon,	  vm_bigpg_shm,	 vm_bigpg_ssm,
	      vm_bigpg_seg, and vm_bigpg_stack.

	      Enabling the vm_bigpg_enabled attribute also enables level three
	      granularity  hints. Level three granularity hints are controlled
	      by the vm_l3gh_anon, vm_l3gh_shm, and vm_l3gh_ssm attributes.

	      If big pages memory allocation is disabled, the kernel maps each
	      virtual page in the user address space to 8 Kbytes of memory.

              To enable big pages, you must set the vm_bigpg_enabled
              attribute at system boot time.

              Allows big pages to distribute memory across RADs as a
              priority over getting the largest page size possible.

              Default value: 1 (Use smp)

              Setting the value to 0 enables this feature.

	      The minimum amount of memory (in Kbytes)	that  a	 user  process
	      must  request  for  a program text object before the kernel will
	      map a virtual page in the process address space to more than one
	      physical	page.  Allocations for program text objects are gener‐
	      ated when the process executes  a	 program  or  loads  a	shared
	      library.	 See also the descriptions of vm_segment_cache_max and
	      vm_segmentation.

	      Default value: 64 (Kbytes)

	      Minimum value: 0 (big pages memory allocation disabled for  pro‐
	      gram text objects)

	      Consult	with   your  support  representative  before  changing
	      vm_bigpg_seg to a value other than the 64 Kbyte default.

	      The  vm_bigpg_seg	  attribute   has   no	 effect	  unless   the
	      vm_bigpg_enabled attribute is set to 1.

	      The  minimum amount of System V shared memory (in Kbytes) that a
	      user process must request before the kernel will map  a  virtual
	      page  in	the  process  address  space to more than one physical
	      page. Requests for System V shared memory are generated by calls
	      to shmget(), shmctl(), and nshmget().

	      Default value: 64 (Kbytes)

	      Minimum  value:  0  (big	pages allocation disabled for System V
	      shared memory)

	      Consult  with  your  support  representative   before   changing
	      vm_bigpg_shm to a value other than the 64 Kbyte default.

	      The   vm_bigpg_shm   attribute   has   no	  effect   unless  the
	      vm_bigpg_enabled attribute is set to 1.

	      The minimum amount (in Kbytes) of segmented shared memory	 (Sys‐
	      tem V shared memory with shared page tables) that a user process
	      must request before the kernel will map a virtual	 page  in  the
	      process  address	space to more than one physical page. Requests
	      for this type of memory are generated by calls to shmget(), shm‐
	      ctl(), and nshmget().

	      Default value: 64 (Kbytes)

	      Minimum  value:  0  (big pages allocation disabled for segmented
	      shared memory)

	      Consult  with  your  support  representative   before   changing
	      vm_bigpg_ssm to a value other than the 64 Kbyte default.

	      The   vm_bigpg_ssm   attribute   has   no	  effect   unless  the
	      vm_bigpg_enabled attribute is set to 1.

	      The vm_bigpg_ssm attribute  is  disabled	if  the	 ssm_threshold
	      attribute	 is set to 0 (zero). If you want to use big pages mem‐
	      ory allocation for segmented shared memory, make sure  that  the
	      ssm_threshold  is	 set  to a value that is at least equal to the
	      value of SSM_SIZE. This value is defined in the <machine/pmap.h>
	      file. See sys_attrs_ipc(5) for more information.

	      The  minimum  amount  of	memory (in Kbytes) needed for the user
	      process stack before the kernel will map a virtual page  in  the
	      process address space to more than one physical page. Stack mem‐
	      ory is automatically allocated  by  the  kernel  on  the	user's
	      behalf.

	      Default value: 64 (Kbytes)

	      Minimum  value:  0  (big	pages allocation disabled for the user
	      process stack)

	      Consult  with  your  support  representative   before   changing
	      vm_bigpg_stack to a value other than the 64 Kbyte default.

	      The   vm_bigpg_stack   attribute	 has   no  effect  unless  the
	      vm_bigpg_enabled attribute is set to 1.

	      If the anon_rss_enforce attribute (which sets  a	limit  on  the
	      resident	set  size of a process) is set to 1 or 2, it overrides
	      and disables big pages memory allocation of anonymous and	 stack
	      memory.  Make sure that anon_rss_enforce is set to 0 if you want
	      big pages memory allocation to be applied to anonymous and stack
	      memory.

	      The  percentage  of physical memory that should be maintained on
	      the free page list for each of the four possible page sizes  (8,
	      64, 512, and 4096 Kbytes).

	      When  a  page of memory is freed, an attempt is made to coalesce
	      the page	with  adjacent	pages  to  form	 a  bigger  page.  The
	      vm_bigpg_thresh attribute sets the threshold at which coalescing
	      begins. With smaller values, more	 pages	are  coalesced,	 hence
	      there  are  fewer pages available at the smaller sizes. This may
	      result in a performance degradation as a larger page  will  then
	      have  to	be broken into smaller pieces to satisfy an allocation
	      request for one of the smaller page sizes.   If  vm_bigpg_thresh
	      is  too  large,  fewer  large  size  pages will be available and
	      applications may not be able  to	take  full  advantage  of  big
	      pages. Generally, the default value will suffice, but this value
	      can be increased if the system work  load	 requires  more	 small
	      pages than large pages.

	      Default value: 6%

	      Minimum value: 0%

	      Maximum value: 25%

	      Size,  in	 bytes, of the kernel cluster submap, which is used to
	      allocate the scatter/gather map for clustered file and swap I/O.

	      Default value: 1,048,576 (bytes, or 1 MB)

	      Minimum value: 0

              Maximum value: 9,223,372,036,854,775,807

	      Maximum size, in bytes, of a single  scatter/gather  map	for  a
	      clustered I/O request.

	      Default value: 65,536 (bytes, or 64 KB)

	      Minimum value: 0

              Maximum value: 9,223,372,036,854,775,807

	      Number  of times that the pages of an anonymous object are copy-
	      on-write faulted after a fork  operation	but  before  they  are
	      copied as part of the fork operation.

	      Default value: 4 (faults)

	      Minimum value: 0

	      Maximum value: 2,147,483,647

	      Size, in bytes, of the kernel copy submap.

	      Default value: 1,048,576 (bytes, or 1 MB)

	      Minimum value: 0

              Maximum value: 9,223,372,036,854,775,807

	      Obsolete; currently ignored by the software.

	      Minimum  amount  of time, in seconds, that a task remains in the
	      inswapped state before it is considered a candidate for outswap‐
	      ping.

	      Default value: 1 (second)

	      Minimum value: 0

              Maximum value: 60

              Enables level 3 granularity hints in anonymous memory.
              This attribute has no effect unless the vm_bigpg_enabled
              attribute is set to 1.

              Enables level 3 granularity hints in System V shared
              memory. This attribute has no effect unless the
              vm_bigpg_enabled attribute is set to 1.

              Enables level 3 granularity hints in segmented shared
              memory (System V shared memory with shared page tables).
              This attribute has no effect unless the vm_bigpg_enabled
              attribute is set to 1.

	      Size, in bytes, of the largest pagein  (read)  cluster  that  is
	      passed to the swap device.

              Default value: 16,384 (bytes, or 16 KB)

	      Minimum value: 8192

	      Maximum value: 131,072

	      Size,  in bytes, of the  largest pageout (write) cluster that is
	      passed to the swap device.

              Default value: 32,768 (bytes, or 32 KB)

	      Minimum value: 8192

	      Maximum value: 131,072

              Base address of the kernel's virtual address space. The
              value can be either 0xffffffff80000000 or
              0xfffffffe00000000, which sets the size of the kernel's
              virtual address space to 2 GB or 8 GB, respectively.

              Default value: 18,446,744,073,709,551,615 (2 to the power
              of 64, minus 1)

              You may need to increase the kernel's virtual address
              space on very large memory (VLM) systems (for example,
              systems with several gigabytes of physical memory and
              several thousand large processes).

              The vm_overflow attribute changes how large memory
              applications “borrow” memory from other resource affinity
              domains (RADs) when their local memory resources are
              exhausted. This attribute can be used on all NUMA class
              systems.

              When memory resources are depleted on a RAD in a NUMA
              system, the vm subsystem automatically overflows to
              another RAD to fulfill the memory allocation request. The
              following steps describe the default overflow behavior.
              If a step fails, the process moves to the next step:

              1. Attempt an allocation from the local RAD.
              2. Page out a page of memory and “steal” it.
              3. Attempt an allocation from the next RAD.
              4. Page out a page of memory and “steal” it.
              5. Continue until allocation/stealing has been attempted
                 on all RADs.

              Setting the vm_overflow attribute to 1 changes the order
              of page allocations and page stealing as follows:

              1. Attempt an allocation from the local RAD.
              2. Attempt an allocation from the next RAD.
              3. Continue until allocation has been attempted on all
                 RADs.
              4. Revert to the default behavior.

	      Setting  vm_overflow to 1 may result in less paging activity for
	      some applications, thereby improving performance.

	      The threshold value that stops paging. When the number of	 pages
	      on the free list reaches this value, paging stops.

	      Default  value: Varies, depending on physical memory size; about
	      16 times the value of vm_page_free_target

	      Minimum value: 0

	      Maximum value: 2,147,483,647

	      The   vm_page_free_hardswap   value   is	 computed   from   the
	      vm_page_free_target value, which by default scales with physical
	      memory size. If  you  change  vm_page_free_target,  your	change
	      affects vm_page_free_hardswap as well.

              The threshold value that starts paging. When the number of
              pages on the free page list falls below this value, paging
              starts.

              Default value: 20 (pages, or twice the value of
              vm_page_free_reserved)

	      Minimum value: 0

	      Maximum value: 2,147,483,647

	      The threshold value that begins hard swapping. When  the	number
	      of  pages	 on the free list falls below this value for five sec‐
	      onds, hard swapping begins.

	      Default value: Automatically scaled by using this formula:

	      vm_page_free_min + ((vm_page_free_target -  vm_page_free_min)  /
	      2)

	      Minimum value: 0 (pages)

	      Maximum value: 2,147,483,647

	      The  threshold  value  that determines when memory is limited to
	      privileged tasks.	 When the number of pages  on  the  free  page
	      list  falls below this value, only privileged tasks can get mem‐
	      ory.

	      Default value: 10 (pages)

	      Minimum value: 1

	      Maximum value: 2,147,483,647

	      The threshold value that begins swapping of idle tasks. When the
	      number  of  pages	 on the free page list falls below this value,
	      idle task swapping begins.

	      Default value: Automatically scaled by using this formula:

	      vm_page_free_min + ((vm_page_free_target -  vm_page_free_min)  /
	      2)

	      Minimum value: 0

	      Maximum value: 2,147,483,647

              The threshold value that stops paging. When the number of
              pages on the free page list reaches this value, paging
              stops.

	      Default value: Based on the amount of  managed  memory  that  is
	      available on the system, as shown in the following table:

	      ───────────────────────────────────────────────────
	      Available Memory (M)   vm_page_free_target (pages)
	      ───────────────────────────────────────────────────
	      Less than 512	     128
	      512 to 1023	     256
	      1024 to 2047	     512
	      2048 to 4095	     768
	      4096 and higher	     1024
	      ───────────────────────────────────────────────────

	      Minimum value: 0 (pages)

	      Maximum value: 2,147,483,647

	      Maximum  number of modified UBC pages that the vm subsystem will
	      prewrite to disk if it anticipates running out  of  memory.  The
	      prewritten pages are the least recently used (LRU) pages.

	      Default value: vm_page_free_target * 2

	      Minimum value: 0

	      Maximum value: 2,147,483,647

              A threshold number of free pages that starts swapping of
              anonymous memory from the resident set of a process.
              Paging of anonymous memory starts when the number of free
              pages falls below this value. The process is blocked until
              the number of free pages reaches the value set by the
              vm_rss_wakeup_target attribute.

	      Default value: Same as vm_page_free_optimal

	      Minimum value: 0

	      Maximum value: 2,147,483,647

	      The default value of the vm_rss_block_target  attribute  is  the
	      same  as the default value of the vm_page_free_optimal attribute
	      that controls the threshold value for hard swapping.

	      You can increase the value of vm_rss_block_target to start  pag‐
	      ing  of  anonymous memory earlier than when hard swapping occurs
	      or decrease the value to delay paging of anonymous memory beyond
	      the point at which hard swapping occurs.

	      A	 percentage of the total pages of anonymous memory on the sys‐
	      tem that is the system-wide limit on the resident set  size  for
	      any  process.  The value of this attribute has an effect only if
	      anon_rss_enforce is set to 1 or 2.

	      Default value: 100 (percent)

	      Minimum value: 1

	      Maximum value: 100

	      You can decrease this percentage to enforce a system-wide	 limit
	      on  the  resident	 set  size for any process. Be aware, however,
	      that this limit applies to privileged, as well as	 unprivileged,
	      processes	 and will override a larger resident set size that may
	      be specified for a process through the setrlimit() call.
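
              For example, to enforce a soft, system-wide cap of 75 percent
              (an illustrative value, not a recommendation) across reboots, a
              stanza such as the following could be added to /etc/sysconfigtab
              with sysconfigdb(8):

                     vm:
                             anon_rss_enforce = 1
                             vm_rss_max_percent = 75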

       vm_rss_wakeup_target

              A threshold number of free pages that will unblock a process
              whose anonymous memory is swapped out. The process is unblocked
              when the number of free pages reaches this value.

	      Default value: Same as vm_page_free_optimal

	      Minimum value: 0

	      Maximum value: 2,147,483,647

	      The default value of the vm_rss_wakeup_target attribute  is  the
	      same  as the default value of the vm_page_free_optimal attribute
	      that controls the threshold value for hard swapping.

	      You can increase the value of vm_rss_wakeup_target to free  more
	      memory  before  unblocking  a  process  or decrease the value to
	      unblock the process sooner (with less freed memory).
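
              For example, the following sketch of an /etc/sysconfigtab
              stanza raises both thresholds (the page counts are illustrative
              only; appropriate values depend on the amount of system
              memory):

                     vm:
                             vm_rss_block_target = 512
                             vm_rss_wakeup_target = 1024

              Keeping vm_rss_wakeup_target at or above vm_rss_block_target
              helps avoid unblocking a process before enough pages have been
              freed.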

	      Number of text segments that can be cached in the segment cache.
	      (Applies only if you enable segmentation.)

	      Default value:  50 (segments)

	      Minimum value: 0

	      Maximum value: 8192

	      The  vm  subsystem uses the segment cache to cache inactive exe‐
	      cutables and shared libraries.  Because objects in  the  segment
	      cache  can be accessed by mapping a page table entry, this cache
	      eliminates I/O delays for repeated executions and reloads.

	      Reducing the number of segments in the segment  cache  can  free
	      memory  and  help	 to  reduce paging overhead. (The size of each
	      segment depends on the text size of the executable or the shared
	      library that is being cached.)
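
              For example, assuming this attribute is vm_segment_cache_max
              (the attribute name is an assumption, not confirmed by this
              description), an illustrative /etc/sysconfigtab stanza that
              halves the default number of cached segments on a memory-
              constrained system would be:

                     vm:
                             vm_segment_cache_max = 25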

	      A	 value	that enables (1) or disables (0) the ability of shared
	      regions of user address space to also share the page tables that
	      map to those shared regions.

	      Default value: 1 (enabled)

	      In  a TruCluster environment, this value must be the same on all
	      cluster members.

	      Specifies the swap allocation mode, which can be immediate  mode
	      (1)  or  deferred mode (0).  Immediate mode is commonly referred
	      to as “eager” mode and deferred mode is commonly referred to  as
	      “lazy” mode.

	      Default value: 1 (eager swap mode)

	      In  eager	 swap  mode, the kernel will block a memory allocation
	      when it cannot reserve in advance	 a  matching  amount  of  swap
	      space.  Eager swap mode is recommended for systems with variable
	      workloads, particularly for those with unpredictably high	 peaks
	      of  memory  consumption.	For eager swap mode, swap space should
	      not be less than 111 percent of system memory. A swap space con‐
	      figuration of 150 percent of memory is recommended for most sys‐
	      tems, and small memory systems are likely to require swap	 space
	      in  excess of 150 percent of memory. In eager swap mode, if swap
	      space is not configured to exceed the  amount  of	 memory	 by  a
	      large  enough percentage, the likelihood that system memory will
	      be underutilized during times of peak demand  is	increased.  In
	      fact,  configuring  swap	space  that is less than the amount of
	      memory on the system, even if swapping does not occur,  prevents
	      the  kernel  from	 using	memory	that represents the difference
	      between memory and  swap	space  amounts.	 When  swap  space  is
	      unavailable in eager swap mode, processes start blocking one
	      another and, in the worst case, can cause the system to hang.
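
              As a worked example (the memory size is illustrative): on a
              system with 8 GB of memory running in eager swap mode, the 111
              percent floor corresponds to roughly 8.9 GB of swap space, and
              the recommended 150 percent configuration to 12 GB.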

	      In lazy swap mode, the kernel does not require a matching amount
	      of swap space to be available in advance of a memory allocation.
	      However, in lazy	swap  mode,  the  kernel  kills	 processes  to
	      reclaim memory if an attempt to swap out a process fails because
	      of insufficient swap space. Because key kernel processes can  be
	      killed,  this  condition	increases  the	likelihood of a system
	      crash. Lazy swap mode is appropriate on very large  memory  sys‐
	      tems for which it is impractical to configure swap space that is
	      half again as large as memory. Lazy swap mode is also  appropri‐
	      ate  for	smaller	 systems  with	a relatively constant and pre‐
	      dictable workload or for systems on which peak  memory  consump‐
	      tion  is	always	well below the amount of memory that is avail‐
	      able.  In all cases where lazy swap mode is  used,  enough  swap
	      space  must  be  configured  to accommodate times of peak memory
	      consumption, plus an extra amount of swap	 space	to  provide  a
	      margin  of safety. To determine the amount of swap space that is
	      needed, monitor memory and swap space consumption over  time  to
	      determine consumption peaks and then factor in a generous margin
	      of safety.

	      The number of synchronous I/O requests that can  be  outstanding
	      to  the  swap  partitions at one time. Synchronous swap requests
	      are used for page-in operations and task swapping.

	      Default value: 128 (requests)

	      Minimum value: 1

	      Maximum value: 2,147,483,647

	      Maximum percentage of physical memory that  can  be  dynamically
	      wired.  The kernel and user processes use this memory for dynam‐
	      ically allocated data  structures	 and  address  space,  respec‐
	      tively.

	      Default value: 80 (percent)

	      Minimum value: 1

	      Maximum value: 100

       * vm_troll_percent

              Enables, disables, and tunes the trolling rate for the memory
              troller on systems that support memory trolling.

	      When enabled, the memory troller continually reads the  system's
	      memory  to  proactively  discover and handle memory errors.  The
	      troll rate is expressed as a percentage of the system's total
	      memory trolled per hour, and you can change it at any time.
	      Valid troll rate settings are:

	      Default value: 4 (percent per hour)

		     This default value applies if you do not specify any
		     value for vm_troll_percent in /etc/sysconfigtab. At the
		     default troll rate, each 8-kilobyte memory page is
		     trolled approximately once a day.

	      Disable value: 0 (zero)

		     Specify a value of 0 (zero) to disable memory trolling.

	      Range: 1 to 100 (percent)

		     Specify a value in the range 1 to 100 to set the troll
		     rate to a percentage of memory to troll per hour. For
		     example, a troll rate of 50 reads half the total memory
		     in one hour. After all memory is read, the troller
		     starts a new pass at the beginning of memory.

	      Accelerated trolling: greater than 100 (percent)

		     Specify a value greater than 100 to invoke one-pass
		     accelerated trolling. At this rate, all system memory is
		     trolled at approximately 6000 pages per second, where
		     one page equals 8 kilobytes. Trolling is then
		     automatically disabled after a single pass. This mode is
		     intended for trolling all memory quickly during off-peak
		     hours.

	      Low  troll rates, such as the 4 percent default, have a negligi‐
	      ble impact on system performance.	 Processor  usage  for	memory
	      trolling increases as the troll rate is increased. Refer to mem‐
	      ory_trolling(5) for additional performance information and  mem‐
	      ory troller usage instructions.
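
              For example, to make a 50 percent troll rate (an illustrative
              value) persist across reboots, a stanza such as the following
              could be added to /etc/sysconfigtab; because vm_troll_percent
              can be changed at any time, sysconfig(8) can also apply the
              same value to the running system:

                     vm:
                             vm_troll_percent = 50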

       vm_ubcbuffers

	      Specifies	 the  number of I/O operations that can be outstanding
	      while purging dirty (modified) pages from	 the  UBC.  The	 dirty
	      pages are flushed to disk to reclaim memory.  The UBC purge dae‐
	      mon will stop flushing dirty  pages  when	 the  number  of  I/Os
	      reaches the vm_ubcbuffers limit or there are no more dirty pages
	      in the UBC. AdvFS software does not use this attribute; only UFS
	      software uses it.

	      Default value: 256 (I/Os)

	      Minimum value: 0

	      Maximum value: 2,147,483,647

	      For  systems  running  at capacity and on which many interactive
	      users are performing write operations to UFS file systems, users
	      might  detect slowed response times if many pages are flushed to
	      disk each time the UBC buffers are purged. Decreasing the	 value
	      of  vm_ubcbuffers	 causes shorter but more frequent purge opera‐
	      tions, thereby smoothing out system response times. Do not, how‐
	      ever, decrease vm_ubcbuffers to a value that completely disables
	      purging of dirty pages. One I/O for certain file	systems	 might
	      be  associated  with  many  pages because of write clustering of
	      dirty pages.

					    Note

	      Changes to this attribute take effect only when made at boot
	      time.
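
              Because the attribute is read only at boot time, a persistent
              setting belongs in /etc/sysconfigtab; for example (the value is
              illustrative, not a recommendation):

                     vm:
                             vm_ubcbuffers = 128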

	      You  can also set the smoothsync_age attribute of the vfs kernel
	      subsystem to address response-time delays that can occur	during
	      periods  of intense write activity. The smoothsync_age attribute
	      uses a different metric (age of dirty pages rather  than	number
	      of  I/Os)	 to  balance  the frequency and duration of purge
	      operations and therefore does not support the ability of UFS  to
	      flush  all  dirty pages for the same write operation at the same
	      time. However, smoothsync_age can be changed while the system is
	      running  and  is	used  by  AdvFS	 as  well as UFS software. See
	      sys_attrs_vfs(5)	for  information  about	  the	smoothsync_age
	      attribute.

	      The percentage of pages that must be dirty (modified) before the
	      UBC starts writing them to disk.

	      Default value: 10 (percent)

	      Minimum value: 0

	      Maximum value: 100

	      In the context of an application thread,	the  number  of	 pages
	      that  must  be  dirty  (modified)	 before	 the UBC update daemon
	      starts writing them. This value is for internal use only.

	      The minimum number of pages to be available for file  expansion.
	      When  the number of available pages falls below this number, the
	      UBC steals additional pages to anticipate the  file's  expansion
	      demands.

	      Default value: 24 (file pages)

	      Minimum value: 0

	      Maximum value: 2,147,483,647

       vm_ubcseqpercent

              The maximum percentage of UBC memory that can be used to cache
              a single file. See vm_ubcseqstartpercent for information about
              controlling when the UBC checks this limit.

	      Default value: 10 (percent)

	      Minimum value: 0

	      Maximum value: 100

       vm_ubcseqstartpercent

              A threshold value (a percentage of the UBC in terms of its
              current size) that determines when the UBC starts to check the
              percentage of UBC pages cached for each file object. If the
              cached page percentage for any file exceeds the value of
              vm_ubcseqpercent, the UBC returns that file's UBC LRU pages to
              virtual memory.

	      Default value: 50 (percent)

	      Minimum value: 0

	      Maximum value: 100
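
              For example, to begin the per-file check earlier and allow each
              file a smaller share of the UBC, a stanza such as the following
              (both values are illustrative) could be added to
              /etc/sysconfigtab:

                     vm:
                             vm_ubcseqstartpercent = 30
                             vm_ubcseqpercent = 5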

SEE ALSO
       Commands: dxkerneltuner(8), sysconfig(8), and sysconfigdb(8).

       Others: memory_trolling(5), sys_attrs_proc(5), and sys_attrs(5).

       System Configuration and Tuning

       System Administration

							       sys_attrs_vm(5)