nvidia-cuda-mps-control(1)	    NVIDIA	    nvidia-cuda-mps-control(1)

NAME
       nvidia-cuda-mps-control	- NVIDIA CUDA Multi Process Service management
       program

SYNOPSIS
       nvidia-cuda-mps-control [-d]

DESCRIPTION
       MPS is a runtime service designed to let multiple MPI processes
       using CUDA run concurrently on a single GPU in a way that is
       transparent to the MPI program. A CUDA program runs in MPS mode
       if the MPS control daemon is running on the system.
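
       A quick way to check whether the daemon is up (and thus whether
       newly started CUDA programs will run in MPS mode) is to look for
       its process; a minimal sketch, using pgrep as one option:

	      # Print the PID of a running control daemon, if any.
	      pgrep -f nvidia-cuda-mps-control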

       When CUDA is first initialized in a program, the CUDA driver
       attempts to connect to the MPS control daemon. If the connection
       attempt fails, the program continues to run as it normally would
       without MPS. If, however, the connection attempt to the control
       daemon succeeds, the CUDA driver then requests the daemon to
       start an MPS server on its behalf. If an MPS server is already
       running, and the user id of that server process matches that of
       the requesting client process, the control daemon simply
       notifies the client process of it, and the client then proceeds
       to connect to the server. If no MPS server is already running on
       the system, the control daemon launches an MPS server with the
       same user id (UID) as that of the requesting client process. If
       an MPS server is already running, but with a different user id
       than that of the client process, the control daemon requests the
       existing server to shut down as soon as all its clients are
       done. Once the existing server has terminated, the control
       daemon launches a new server with the same user id as that of
       the queued client process.

       The MPS server creates the shared GPU context, manages its
       clients, and issues work to the GPU on behalf of its clients. An
       MPS server can support up to 16 client CUDA contexts at a time.
       MPS is transparent to CUDA programs, with all the complexity of
       communication between the client process, the server, and the
       control daemon hidden within the driver binaries.

       Currently, CUDA MPS is available on 64-bit Linux only, and
       requires a device that supports Unified Virtual Addressing (UVA)
       and has compute capability SM 3.5 or higher. Applications
       requiring pre-CUDA 4.0 APIs are not supported under CUDA MPS.
       MPS is also not supported on multi-GPU configurations. Please
       use CUDA_VISIBLE_DEVICES when starting the control daemon to
       limit visibility to a single device.
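
       For example, a minimal sketch of starting the daemon pinned to a
       single device (device index 0 is an arbitrary choice here):

	      # Limit the daemon, and the servers it spawns, to one GPU.
	      export CUDA_VISIBLE_DEVICES=0
	      # Start the MPS control daemon in the background.
	      nvidia-cuda-mps-control -d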

OPTIONS
   -d
       Start the MPS control daemon, assuming the user	has  enough  privilege
       (e.g. root).

   -h, --help
       Print a help message.

   <no arguments>
       Start the front-end management user interface to the MPS control
       daemon; the daemon itself must already be running. The front-end
       UI keeps reading commands from stdin until EOF. Commands are
       separated by the newline character. If an invalid command is
       issued and rejected, an error message is printed to stdout. The
       exit status of the front-end UI is zero if communication with
       the daemon is successful. A non-zero value is returned if the
       daemon is not found or the connection to the daemon breaks
       unexpectedly. See the "quit" command below for more information
       about the exit status.
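
       Because the front-end UI reads commands from stdin, a single
       command can also be issued non-interactively; a small sketch:

	      # Ask the running control daemon for its server PIDs;
	      # the front-end exits at EOF.
	      echo get_server_list | nvidia-cuda-mps-control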

       Commands supported by the MPS control daemon (a sample session
       follows the list):

       get_server_list
	      Print out a list of PIDs of all MPS servers.

       start_server -uid UID
	      Start a new MPS server for the specified user (UID).

       shutdown_server PID [-f]
	      Shut down the MPS server with the given PID. The MPS
	      server will not accept any new client connections, and it
	      exits when all current clients disconnect. -f forces an
	      immediate shutdown. If a client launches a faulty kernel
	      that runs forever, a forced shutdown of the MPS server
	      may be required, since the MPS server creates and issues
	      GPU work on behalf of its clients.

       get_client_list PID
	      Print out a list of PIDs of all clients connected to the
	      MPS server with the given PID.

       quit [-t TIMEOUT]
	      Shut down the MPS control daemon process and all MPS
	      servers. The MPS control daemon stops accepting new
	      clients while waiting for current MPS servers and MPS
	      clients to finish. If TIMEOUT is specified (in seconds),
	      the daemon will force MPS servers to shut down if they
	      are still running after TIMEOUT seconds.

	      This command is synchronous. The front-end UI waits for
	      the daemon to shut down, then returns the daemon's exit
	      status. The exit status is zero if and only if all MPS
	      servers have exited gracefully.
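
       A hypothetical sample session with the front-end UI; the PIDs
       shown are placeholders, not real output:

	      $ nvidia-cuda-mps-control
	      get_server_list
	      12345
	      get_client_list 12345
	      67890
	      quit -t 10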

ENVIRONMENT
       CUDA_MPS_PIPE_DIRECTORY
	      Specify the directory that contains the named pipes used
	      for communication among the MPS control daemon, MPS
	      servers, and MPS clients. The value of this environment
	      variable should be consistent between the MPS control
	      daemon and all MPS client processes. The default
	      directory is /tmp/nvidia-mps.

       CUDA_MPS_LOG_DIRECTORY
	      Specify the directory that contains the MPS log files.
	      This variable is used by the MPS control daemon only. The
	      default directory is /var/log/nvidia-mps.
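
       Since the pipe directory must match between the daemon and its
       client processes, both sides need the same setting; a sketch,
       where the directory and application name are hypothetical:

	      # Daemon side: start MPS with a non-default pipe directory.
	      export CUDA_MPS_PIPE_DIRECTORY=/tmp/mps-pipes
	      nvidia-cuda-mps-control -d

	      # Client side: point the application at the same pipes.
	      CUDA_MPS_PIPE_DIRECTORY=/tmp/mps-pipes ./my_cuda_app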

FILES
       Log files created by the MPS control daemon in the log directory
       (see CUDA_MPS_LOG_DIRECTORY above):

       control.log
	      Records startup and shutdown of the MPS control daemon,
	      user commands issued with their results, and the status
	      of MPS servers.

       server.log
	      Records startup and shutdown of MPS servers, and the
	      status of MPS clients.
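
       To watch daemon activity as it happens, the control log can be
       followed in the usual way; a sketch assuming the default log
       directory:

	      tail -f /var/log/nvidia-mps/control.log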

nvidia-cuda-mps-control		  2013-02-26	    nvidia-cuda-mps-control(1)