VMS Help
V73 Features, System Management Features, Dedicated CPU Lock Manager (Alpha)
The Dedicated CPU Lock Manager is a new feature that improves
performance on large SMP systems that have heavy lock manager
activity. The feature dedicates a CPU to performing lock manager
operations.
A dedicated CPU provides the following advantages for overall
system performance:
o Reduces the amount of MP_SYNCH time
o Provides good CPU cache utilization
For the Dedicated CPU Lock Manager to be effective, systems
must have a high CPU count and a high amount of MP_SYNCH due
to the lock manager. Use the MONITOR utility and the MONITOR
MODE command to see the amount of MP_SYNCH. If your system has
more than five CPUs and if MP_SYNCH is higher than 200%, your
system may be able to take advantage of the Dedicated CPU Lock
Manager. You can also use the spinlock trace feature in the
System Dump Analyzer (SDA) to help determine if the lock manager
is contributing to the high amount of MP_SYNCH time.
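For example, the following MONITOR commands (a typical invocation;
adjust the qualifiers to suit your site) display the time spent in
each processor mode, where MP_SYNCH appears as the MP
synchronization mode:
$ MONITOR MODES/ALL                ! live display of all CPU modes
$ MONITOR MODES/ALL/SUMMARY        ! also write a summary file on exit
Because the statistics accumulate across all CPUs, the MP
synchronization percentages can add up to well over 100% on a
multiprocessor system.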
The Dedicated CPU Lock Manager is implemented by the LCKMGR_SERVER
process, which runs at priority 63. When the Dedicated CPU Lock
Manager is turned on, this process runs in a compute-bound loop
looking for lock manager work to perform. Because the process
polls for work, it is always computable; at priority 63 it never
gives up the CPU and therefore consumes an entire CPU.
If the Dedicated CPU Lock Manager is running when a program calls
either the $ENQ or $DEQ system services, a lock manager request
is placed on a work queue for the Dedicated CPU Lock Manager.
While a process waits for a lock request to be processed, the
process spins in kernel mode at IPL 2. After the dedicated CPU
processes the request, the status for the system service is
returned to the process.
The Dedicated CPU Lock Manager is dynamic and can be turned off
if it provides no perceived benefit. When the Dedicated CPU Lock
Manager is turned off, the LCKMGR_SERVER process remains in a HIB
(hibernate) state. Once started, the process cannot be deleted.
To use the Dedicated CPU Lock Manager, set the LCKMGR_MODE
system parameter. Note the following about the LCKMGR_MODE system
parameter:
o Zero (0) indicates the Dedicated CPU Lock Manager is off (the
default).
o A number greater than zero (0) indicates the number of CPUs
that should be active before the Dedicated CPU Lock Manager is
turned on.
Setting LCKMGR_MODE to a number greater than zero (0) triggers
the creation of a detached process called LCKMGR_SERVER. The
process is created, and it starts running if the number of active
CPUs equals the number set by the LCKMGR_MODE system parameter.
In addition, if the number of active CPUs is ever reduced below
the required threshold, either by a STOP/CPU command or by CPU
reassignment in a Galaxy configuration, the Dedicated CPU Lock
Manager automatically turns off within one second and the
LCKMGR_SERVER process goes into a hibernate state. If the CPUs
are restarted, the LCKMGR_SERVER process resumes operation.
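For example, on a system with eight active CPUs, the following
sequence (a sketch; the value 8 is only illustrative, and it
assumes LCKMGR_MODE may be changed on the active system, as the
dynamic behavior described above implies) turns the feature on:
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SET LCKMGR_MODE 8
SYSGEN> WRITE ACTIVE
SYSGEN> EXIT
With this setting, the LCKMGR_SERVER process starts running when
eight CPUs are active and turns off if the active CPU count drops
below eight.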
The LCKMGR_SERVER process uses the affinity mechanism to bind
itself to the lowest CPU ID other than the primary. You can
change this by specifying another CPU ID with the LCKMGR_CPUID
system parameter. The Dedicated CPU Lock Manager then attempts
to use that CPU. If that CPU is not available, it reverts to
the lowest CPU other than the primary.
The following shows how to dynamically change the CPU used by the
LCKMGR_SERVER process:
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SET LCKMGR_CPUID 2
SYSGEN> WRITE ACTIVE
SYSGEN> EXIT
To verify the CPU dedicated to the lock manager, use the following
SHOW SYSTEM command:
$ SHOW SYSTEM/PROCESS=LCKMGR_SERVER
This change applies to the currently running system. After a
reboot, the lock manager reverts to the lowest CPU other than the
primary. To permanently change the CPU used by the LCKMGR_SERVER
process, set LCKMGR_CPUID in your MODPARAMS.DAT file.
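For example, the following MODPARAMS.DAT lines (the values are
only illustrative) make both the enabling threshold and the
dedicated CPU permanent:
LCKMGR_MODE = 8      ! enable when 8 CPUs are active
LCKMGR_CPUID = 2     ! dedicate CPU 2 to the lock manager
Then run AUTOGEN so the values take effect at the next reboot:
$ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS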
Compaq highly recommends that a process not be given hard
affinity to the CPU used by the Dedicated CPU Lock Manager.
With hard affinity, when such a process becomes computable it
cannot obtain any CPU time, because the LCKMGR_SERVER process
runs at the highest possible real-time priority of 63. However,
the LCKMGR_SERVER process checks once per second whether any
computable processes are bound by the affinity mechanism to the
dedicated lock manager CPU. If so, the LCKMGR_SERVER process
switches to a different CPU for one second to allow the waiting
process to run.
4 - Using with Fast Path Devices
OpenVMS Version 7.3 also introduces Fast Path for SCSI and Fibre
Channel Controllers along with the existing support of CIPCA
adapters. The Dedicated CPU Lock Manager supports both the
LCKMGR_SERVER process and Fast Path devices on the same CPU.
However, this may not produce optimal performance.
By default, the LCKMGR_SERVER process runs on the first available
nonprimary CPU. Compaq recommends that the CPU used by the
LCKMGR_SERVER process not have any Fast Path devices. This can
be accomplished in either of the following ways:
o You can eliminate the first available nonprimary CPU as an
available Fast Path CPU. To do so, clear the bit associated
with the CPU ID from the IO_PREFER_CPUS system parameter.
For example, suppose your system has eight CPUs with CPU IDs
zero through seven and four SCSI adapters that will use Fast
Path. Clearing bit 1 from IO_PREFER_CPUS would result in the
four SCSI devices being bound to CPUs 2, 3, 4, and 5. CPU 1,
which is the default CPU the lock manager will use, will not
have any Fast Path devices. (See the example following this
list.)
o You can set the LCKMGR_CPUID system parameter to tell the
LCKMGR_SERVER process to use a CPU other than the default. For
the above example, setting this system parameter to 7 would
result in the LCKMGR_SERVER process running on CPU 7. The Fast
Path devices would by default be bound to CPUs 1, 2, 3, and 4.
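As a sketch of the first approach for the eight-CPU example above
(the mask assumes all eight CPU bits are currently set, so clearing
bit 1 leaves %XFD; if IO_PREFER_CPUS cannot be changed dynamically
on your system, set it in MODPARAMS.DAT and run AUTOGEN instead):
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SET IO_PREFER_CPUS %XFD
SYSGEN> WRITE ACTIVE
SYSGEN> EXIT
This excludes CPU 1 from Fast Path use while leaving it available
to the LCKMGR_SERVER process.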
5 - Using on AlphaServer GS Series Systems
The new AlphaServer GS Series Systems (GS80, GS160, and the
GS320) have NUMA memory characteristics. When using the Dedicated
CPU Lock Manager on one of these systems, the best performance is
obtained by utilizing a CPU and memory from within a single Quad
Building Block (QBB).
For OpenVMS Version 7.3, the Dedicated CPU Lock Manager does not
yet have the ability to decide from which QBB its memory should
be allocated. However, you can preallocate lock manager memory
from the low QBB by using the LOCKIDTBL system parameter. This
system parameter specifies the initial size of the Lock ID Table,
along with the initial amount of memory to preallocate for lock
manager data structures.
To preallocate the proper amount of memory, set this system
parameter to the highest number of locks plus resources on the
system. The MONITOR LOCK command can provide this information.
For example, if MONITOR indicates the system has 100,000 locks
and 50,000 resources, setting LOCKIDTBL to the sum of these two
values (150,000) ensures that enough memory is initially
allocated. Adding some additional headroom may also be
beneficial, so setting LOCKIDTBL to 200,000 might be appropriate.
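For example (repeating the numbers above, which are only
illustrative), check the current counts and then record the
setting in MODPARAMS.DAT before running AUTOGEN:
$ MONITOR LOCK       ! note the number of locks and resources
Then, in MODPARAMS.DAT:
LOCKIDTBL = 200000   ! 100,000 locks + 50,000 resources + headroom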
If necessary, use the LCKMGR_CPUID system parameter to ensure
that the LCKMGR_SERVER runs on a CPU in the low QBB.