Welcome to nvitop’s documentation!


An interactive NVIDIA-GPU process viewer and beyond, the one-stop solution for GPU process management.

https://user-images.githubusercontent.com/16078332/171005261-1aad126e-dc27-4ed3-a89b-7f9c1c998bf7.png

The CLI from nvitop.


Installation

It is highly recommended to install nvitop in an isolated virtual environment. You can install and run it in one step via pipx:

pipx run nvitop

Install from PyPI:

pip3 install --upgrade nvitop

Note

Python 3.7+ is required; Python versions lower than 3.7 are not supported.

Install from conda-forge:

conda install -c conda-forge nvitop

Install the latest version from GitHub:

pip3 install --upgrade pip setuptools
pip3 install git+https://github.com/XuehaiPan/nvitop.git#egg=nvitop

Or, clone this repo and install manually:

git clone --depth=1 https://github.com/XuehaiPan/nvitop.git
cd nvitop
pip3 install .

If this repo is useful to you, please star ⭐️ it to let more people know 🤗.


Quick Start

A minimal script to monitor the GPU devices based on APIs from nvitop:

from nvitop import Device

devices = Device.all()  # or Device.cuda.all()
for device in devices:
    processes = device.processes()  # type: Dict[int, GpuProcess]
    sorted_pids = sorted(processes)

    print(device)
    print(f'  - Fan speed:       {device.fan_speed()}%')
    print(f'  - Temperature:     {device.temperature()}C')
    print(f'  - GPU utilization: {device.gpu_utilization()}%')
    print(f'  - Total memory:    {device.memory_total_human()}')
    print(f'  - Used memory:     {device.memory_used_human()}')
    print(f'  - Free memory:     {device.memory_free_human()}')
    print(f'  - Processes ({len(processes)}): {sorted_pids}')
    for pid in sorted_pids:
        print(f'    - {processes[pid]}')
    print('-' * 120)

Another more advanced approach with coloring:

import time

from nvitop import Device, GpuProcess, NA, colored

print(colored(time.strftime('%a %b %d %H:%M:%S %Y'), color='red', attrs=('bold',)))

devices = Device.cuda.all()  # or `Device.all()` to use NVML ordinal instead
separator = False
for device in devices:
    processes = device.processes()  # type: Dict[int, GpuProcess]

    print(colored(str(device), color='green', attrs=('bold',)))
    print(colored('  - Fan speed:       ', color='blue', attrs=('bold',)) + f'{device.fan_speed()}%')
    print(colored('  - Temperature:     ', color='blue', attrs=('bold',)) + f'{device.temperature()}C')
    print(colored('  - GPU utilization: ', color='blue', attrs=('bold',)) + f'{device.gpu_utilization()}%')
    print(colored('  - Total memory:    ', color='blue', attrs=('bold',)) + f'{device.memory_total_human()}')
    print(colored('  - Used memory:     ', color='blue', attrs=('bold',)) + f'{device.memory_used_human()}')
    print(colored('  - Free memory:     ', color='blue', attrs=('bold',)) + f'{device.memory_free_human()}')
    if len(processes) > 0:
        processes = GpuProcess.take_snapshots(processes.values(), failsafe=True)
        processes.sort(key=lambda process: (process.username, process.pid))

        print(colored(f'  - Processes ({len(processes)}):', color='blue', attrs=('bold',)))
        fmt = '    {pid:<5}  {username:<8} {cpu:>5}  {host_memory:>8} {time:>8}  {gpu_memory:>8}  {sm:>3}  {command:<}'.format
        print(colored(fmt(pid='PID', username='USERNAME',
                          cpu='CPU%', host_memory='HOST-MEM', time='TIME',
                          gpu_memory='GPU-MEM', sm='SM%',
                          command='COMMAND'),
                      attrs=('bold',)))
        for snapshot in processes:
            print(fmt(pid=snapshot.pid,
                      username=snapshot.username[:7] + ('+' if len(snapshot.username) > 8 else snapshot.username[7:8]),
                      cpu=snapshot.cpu_percent, host_memory=snapshot.host_memory_human,
                      time=snapshot.running_time_human,
                      gpu_memory=(snapshot.gpu_memory_human if snapshot.gpu_memory_human is not NA else 'WDDM:N/A'),
                      sm=snapshot.gpu_sm_utilization,
                      command=snapshot.command))
    else:
        print(colored('  - No Running Processes', attrs=('bold',)))

    if separator:
        print('-' * 120)
    separator = True

https://user-images.githubusercontent.com/16078332/177041142-fe988d58-6a97-4559-84fd-b51204cf9231.png

An example monitoring script built with APIs from nvitop.

Please refer to the section More than a Monitor in the README for more examples.


API Reference


Module Contents

An interactive NVIDIA-GPU process viewer and beyond, the one-stop solution for GPU process management.

nvitop.version.PYNVML_VERSION_CANDIDATES = ('11.450.51', '11.450.129', '11.460.79', '11.470.66', '11.495.46', '11.510.69', '11.515.48', '11.515.75', '11.525.84', '11.525.112', '11.525.131', '11.525.150', '12.535.77', '12.535.108', '12.535.133', '12.535.161', '12.550.52', '12.555.43')

The list of supported nvidia-ml-py versions. See also: nvidia-ml-py’s Release History.

To install nvitop with a specific version of nvidia-ml-py, use nvitop[pynvml-xx.yyy.zzz], for example:

pip3 install 'nvitop[pynvml-11.450.51]'

or

pip3 install nvitop nvidia-ml-py==11.450.51

Note

The package nvidia-ml-py is not backward compatible over releases. This may cause problems such as “Function Not Found” errors with old versions of NVIDIA drivers (e.g. the NVIDIA R430 driver on Ubuntu 16.04 LTS). The ideal solution is to let the user install the best-fit version of nvidia-ml-py. See also: nvidia-ml-py’s Release History.

nvidia-ml-py==11.450.51 is the last version that supports the NVIDIA R430 driver (CUDA 10.x). Since nvidia-ml-py>=11.450.129, the definition of struct nvmlProcessInfo_t has introduced two new fields, gpuInstanceId and computeInstanceId (GI ID and CI ID in newer nvidia-smi), which are incompatible with some old NVIDIA drivers. nvitop may not display the processes correctly due to this incompatibility.

class nvitop.NaType[source]

Bases: str

A singleton (str: 'N/A') class that represents a not applicable value.

The NA instance behaves like a str instance ('N/A') when doing string manipulation (e.g. concatenation). For arithmetic operations, for example NA / 1024 / 1024, it acts like math.nan.

Examples

>>> NA
'N/A'
>>> 'memory usage: {}'.format(NA)  # NA is an instance of `str`
'memory usage: N/A'
>>> NA.lower()                     # NA is an instance of `str`
'n/a'
>>> NA.ljust(5)                    # NA is an instance of `str`
'N/A  '
>>> NA + ' str'                    # string concatenation if the operand is a string
'N/A str'
>>> float(NA)                      # explicit conversion to float (`math.nan`)
nan
>>> NA + 1                         # auto-casting to float if the operand is a number
nan
>>> NA * 1024                      # auto-casting to float if the operand is a number
nan
>>> NA / (1024 * 1024)             # auto-casting to float if the operand is a number
nan
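The dual str/float behavior above can be sketched with a stripped-down re-implementation (illustrative only, not the real nvitop class; the actual NaType implements the full operator set documented below):

```python
import math


class Na(str):
    """Minimal sketch of NaType's dual behavior (not the real nvitop class)."""

    def __new__(cls) -> 'Na':
        # The real class is a singleton; a plain str subclass suffices here.
        return super().__new__(cls, 'N/A')

    def __add__(self, other):
        if isinstance(other, (int, float)):
            return math.nan          # numbers auto-cast the result to nan
        return str(self) + other     # strings concatenate as usual

    def __float__(self) -> float:
        return math.nan              # explicit conversion yields nan


NA = Na()
```

With only these two hooks, NA.lower(), NA + ' str', and NA + 1 already behave as in the doctests above; all other str methods are inherited unchanged.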
static __new__(cls) NaType[source]

Get the singleton instance (nvitop.NA).

__bool__() bool[source]

Convert NA to bool and return False.

>>> bool(NA)
False
__int__() int[source]

Convert NA to int and return 0.

>>> int(NA)
0
__float__() float[source]

Convert NA to float and return math.nan.

>>> float(NA)
nan
>>> float(NA) is math.nan
True
__add__(other: object) str | float[source]

Return math.nan if the operand is a number, or concatenate strings if the operand is a string (NA + other).

A special case is when the operand is nvitop.NA itself, the result is math.nan instead of 'N/AN/A'.

>>> NA + ' str'
'N/A str'
>>> NA + NA
nan
>>> NA + 1
nan
>>> NA + 1.0
nan
__radd__(other: object) str | float[source]

Return math.nan if the operand is a number, or concatenate strings if the operand is a string (other + NA).

>>> 'str' + NA
'strN/A'
>>> 1 + NA
nan
>>> 1.0 + NA
nan
__sub__(other: object) float[source]

Return math.nan if the operand is a number (NA - other).

>>> NA - 'str'
TypeError: unsupported operand type(s) for -: 'NaType' and 'str'
>>> NA - NA
nan
>>> NA - 1
nan
>>> NA - 1.0
nan
__rsub__(other: object) float[source]

Return math.nan if the operand is a number (other - NA).

>>> 'str' - NA
TypeError: unsupported operand type(s) for -: 'str' and 'NaType'
>>> 1 - NA
nan
>>> 1.0 - NA
nan
__mul__(other: object) float[source]

Return math.nan if the operand is a number (NA * other).

A special case is when the operand is nvitop.NA itself, the result is also math.nan.

>>> NA * 1024
nan
>>> NA * 1024.0
nan
>>> NA * NA
nan
__rmul__(other: object) float[source]

Return math.nan if the operand is a number (other * NA).

>>> 1024 * NA
nan
>>> 1024.0 * NA
nan
__truediv__(other: object) float[source]

Return math.nan if the operand is a number (NA / other).

>>> NA / 1024
nan
>>> NA / 1024.0
nan
>>> NA / 0
ZeroDivisionError: float division by zero
>>> NA / 0.0
ZeroDivisionError: float division by zero
>>> NA / NA
nan
__rtruediv__(other: object) float[source]

Return math.nan if the operand is a number (other / NA).

>>> 1024 / NA
nan
>>> 1024.0 / NA
nan
__floordiv__(other: object) float[source]

Return math.nan if the operand is a number (NA // other).

>>> NA // 1024
nan
>>> NA // 1024.0
nan
>>> NA // 0
ZeroDivisionError: float floor division by zero
>>> NA // 0.0
ZeroDivisionError: float floor division by zero
>>> NA // NA
nan
__rfloordiv__(other: object) float[source]

Return math.nan if the operand is a number (other // NA).

>>> 1024 // NA
nan
>>> 1024.0 // NA
nan
__mod__(other: object) float[source]

Return math.nan if the operand is a number (NA % other).

>>> NA % 1024
nan
>>> NA % 1024.0
nan
>>> NA % 0
ZeroDivisionError: float modulo
>>> NA % 0.0
ZeroDivisionError: float modulo
__rmod__(other: object) float[source]

Return math.nan if the operand is a number (other % NA).

>>> 1024 % NA
nan
>>> 1024.0 % NA
nan
__divmod__(other: object) tuple[float, float][source]

The pair (NA // other, NA % other) (divmod(NA, other)).

>>> divmod(NA, 1024)
(nan, nan)
>>> divmod(NA, 1024.0)
(nan, nan)
>>> divmod(NA, 0)
ZeroDivisionError: float floor division by zero
>>> divmod(NA, 0.0)
ZeroDivisionError: float floor division by zero
__rdivmod__(other: object) tuple[float, float][source]

The pair (other // NA, other % NA) (divmod(other, NA)).

>>> divmod(1024, NA)
(nan, nan)
>>> divmod(1024.0, NA)
(nan, nan)
__pos__() float[source]

Return math.nan (+NA).

>>> +NA
nan
__neg__() float[source]

Return math.nan (-NA).

>>> -NA
nan
__abs__() float[source]

Return math.nan (abs(NA)).

>>> abs(NA)
nan
__round__(ndigits: int | None = None) int | float[source]

Round nvitop.NA to ndigits decimal places, defaulting to 0.

If ndigits is omitted or None, returns 0, otherwise returns math.nan.

>>> round(NA)
0
>>> round(NA, 0)
nan
>>> round(NA, 1)
nan
__lt__(x: object) bool[source]

nvitop.NA is always greater than any number; comparison with a string uses dictionary order.

__le__(x: object) bool[source]

nvitop.NA is always greater than any number; comparison with a string uses dictionary order.

__gt__(x: object) bool[source]

nvitop.NA is always greater than any number; comparison with a string uses dictionary order.

__ge__(x: object) bool[source]

nvitop.NA is always greater than any number; comparison with a string uses dictionary order.

__format__(format_spec: str) str[source]

Format nvitop.NA according to format_spec.

nvitop.NA = 'N/A'

The singleton instance of NaType. The actual value is str: 'N/A'.

nvitop.NotApplicableType

alias of NaType

nvitop.NotApplicable = 'N/A'

The singleton instance of NaType. The actual value is str: 'N/A'.


exception nvitop.NVMLError(value)[source]

Bases: Exception

Base exception class for NVML query errors.

static __new__(typ, value)[source]

Map value to a proper subclass of NVMLError.

nvitop.nvmlCheckReturn(retval: _Any, types: type | tuple[type, ...] | None = None) bool[source]

Check whether the return value is not nvitop.NA and is one of the given types.

class nvitop.Device(index: int | tuple[int, int] | str | None = None, *, uuid: str | None = None, bus_id: str | None = None)[source]

Bases: object

Live class of the GPU devices, different from the device snapshots.

Device.__new__() returns different types depending on the given arguments.

- (index: int)        -> PhysicalDevice
- (index: (int, int)) -> MigDevice
- (uuid: str)         -> Union[PhysicalDevice, MigDevice]  # depending on the UUID value
- (bus_id: str)       -> PhysicalDevice

Examples

>>> Device.driver_version()              # version of the installed NVIDIA display driver
'470.129.06'
>>> Device.count()                       # number of NVIDIA GPUs in the system
10
>>> Device.all()                         # all physical devices in the system
[
    PhysicalDevice(index=0, ...),
    PhysicalDevice(index=1, ...),
    ...
]
>>> nvidia0 = Device(index=0)            # -> PhysicalDevice
>>> mig10   = Device(index=(1, 0))       # -> MigDevice
>>> nvidia2 = Device(uuid='GPU-xxxxxx')  # -> PhysicalDevice
>>> mig30   = Device(uuid='MIG-xxxxxx')  # -> MigDevice
>>> nvidia0.memory_free()                # total free memory in bytes
11550654464
>>> nvidia0.memory_free_human()          # total free memory in human readable format
'11016MiB'
>>> nvidia2.as_snapshot()                # take a one-time snapshot of the device
PhysicalDeviceSnapshot(
    real=PhysicalDevice(index=2, ...),
    ...
)
UUID_PATTERN: re.Pattern – a compiled verbose regular expression that matches device UUID strings, with optional 'MIG-' and 'GPU-' prefixes for MIG and GPU UUIDs.
GPU_PROCESS_CLASS

alias of GpuProcess

cuda

alias of CudaDevice

classmethod is_available() bool[source]

Test whether there are any devices and the NVML library is successfully loaded.

static driver_version() str | NaType[source]

The version of the installed NVIDIA display driver. This is an alphanumeric string.

Command line equivalent:

nvidia-smi --id=0 --format=csv,noheader,nounits --query-gpu=driver_version
static cuda_driver_version() str | NaType[source]

The maximum CUDA version supported by the NVIDIA display driver. This is an alphanumeric string.

This can be different from the version of the CUDA Runtime. See also cuda_runtime_version().

Returns: Union[str, NaType]

The maximum CUDA version supported by the NVIDIA display driver.

static max_cuda_version() str | NaType

The maximum CUDA version supported by the NVIDIA display driver. This is an alphanumeric string.

This can be different from the version of the CUDA Runtime. See also cuda_runtime_version().

Returns: Union[str, NaType]

The maximum CUDA version supported by the NVIDIA display driver.

static cuda_runtime_version() str | NaType[source]

The CUDA Runtime version. This is an alphanumeric string.

This can be different from the CUDA driver version. See also cuda_driver_version().

Returns: Union[str, NaType]

The CUDA Runtime version, or nvitop.NA when no CUDA Runtime is available or no CUDA-capable devices are present.

static cudart_version() str | NaType

The CUDA Runtime version. This is an alphanumeric string.

This can be different from the CUDA driver version. See also cuda_driver_version().

Returns: Union[str, NaType]

The CUDA Runtime version, or nvitop.NA when no CUDA Runtime is available or no CUDA-capable devices are present.

classmethod count() int[source]

The number of NVIDIA GPUs in the system.

Command line equivalent:

nvidia-smi --id=0 --format=csv,noheader,nounits --query-gpu=count
classmethod all() list[PhysicalDevice][source]

Return a list of all physical devices in the system.

classmethod from_indices(indices: int | Iterable[int | tuple[int, int]] | None = None) list[PhysicalDevice | MigDevice][source]

Return a list of devices of the given indices.

Parameters:

indices (Iterable[Union[int, Tuple[int, int]]]) – Indices of the devices. For each index, get PhysicalDevice for a single int and MigDevice for a tuple (int, int). That is:

- (index: int)        -> PhysicalDevice
- (index: (int, int)) -> MigDevice

Returns: List[Union[PhysicalDevice, MigDevice]]

A list of PhysicalDevice and/or MigDevice instances of the given indices.

static from_cuda_visible_devices() list[CudaDevice][source]

Return a list of all CUDA visible devices.

The CUDA ordinals are enumerated from the CUDA_VISIBLE_DEVICES environment variable.

Note

The result could be empty if the CUDA_VISIBLE_DEVICES environment variable is invalid.

See also: CUDA Device Enumeration.
Returns: List[CudaDevice]

A list of CudaDevice instances.

static from_cuda_indices(cuda_indices: int | Iterable[int] | None = None) list[CudaDevice][source]

Return a list of CUDA devices of the given CUDA indices.

The CUDA ordinals are enumerated from the CUDA_VISIBLE_DEVICES environment variable.

See also: CUDA Device Enumeration.
Parameters:

cuda_indices (Iterable[int]) – The indices of the GPUs in CUDA ordinal. If not given, all visible CUDA devices are returned.

Returns: List[CudaDevice]

A list of CudaDevice of the given CUDA indices.

static parse_cuda_visible_devices(cuda_visible_devices: str | None = <VALUE OMITTED>) list[int] | list[tuple[int, int]][source]

Parse the given CUDA_VISIBLE_DEVICES value into a list of NVML device indices.

This is an alias of parse_cuda_visible_devices().

Note

The result could be empty if the CUDA_VISIBLE_DEVICES environment variable is invalid.

See also: CUDA Device Enumeration.
Parameters:

cuda_visible_devices (Optional[str]) – The value of the CUDA_VISIBLE_DEVICES variable. If not given, the value from the environment will be used. If explicitly given as None, the CUDA_VISIBLE_DEVICES environment variable will be unset before parsing.

Returns: Union[List[int], List[Tuple[int, int]]]

A list of int (physical device) or a list of tuple of two integers (MIG device) for the corresponding real device indices.
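For intuition, the ordinal-only part of this mapping can be sketched as follows (an illustrative approximation, not nvitop's implementation; the UUID and MIG forms accepted by the real parser are omitted here). The key property is that a malformed or out-of-range entry terminates enumeration, which is why an invalid CUDA_VISIBLE_DEVICES can yield an empty result:

```python
def parse_visible_devices(value: str, num_physical: int) -> list:
    """Sketch: map a CUDA_VISIBLE_DEVICES string to NVML physical indices."""
    indices = []
    for token in value.split(','):
        token = token.strip()
        if not token.lstrip('+').isdigit():
            break  # malformed entry: enumeration stops here
        index = int(token)
        if index >= num_physical:
            break  # out-of-range entry also ends the list
        indices.append(index)
    return indices
```

For example, with 4 physical devices, '1,0' maps to [1, 0], while '0,9,1' maps to [0] because the out-of-range entry 9 ends the enumeration.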

static normalize_cuda_visible_devices(cuda_visible_devices: str | None = <VALUE OMITTED>) str[source]

Parse the given CUDA_VISIBLE_DEVICES value and convert it into a comma-separated string of UUIDs.

This is an alias of normalize_cuda_visible_devices().

Note

The result could be an empty string if the CUDA_VISIBLE_DEVICES environment variable is invalid.

See also: CUDA Device Enumeration.
Parameters:

cuda_visible_devices (Optional[str]) – The value of the CUDA_VISIBLE_DEVICES variable. If not given, the value from the environment will be used. If explicitly given as None, the CUDA_VISIBLE_DEVICES environment variable will be unset before parsing.

Returns: str

The comma-separated string (GPU UUIDs) of the CUDA_VISIBLE_DEVICES environment variable.
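As a reminder of the input forms involved (the UUID values below are placeholders), CUDA_VISIBLE_DEVICES may hold CUDA ordinals or UUID strings, and normalize_cuda_visible_devices() converts any of them to the UUID form:

```shell
# Common forms of CUDA_VISIBLE_DEVICES (placeholder values):
export CUDA_VISIBLE_DEVICES="0,1"          # CUDA ordinals
export CUDA_VISIBLE_DEVICES="GPU-xxxxxx"   # physical device UUID
export CUDA_VISIBLE_DEVICES="MIG-xxxxxx"   # MIG device UUID
```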

static __new__(cls, index: int | tuple[int, int] | str | None = None, *, uuid: str | None = None, bus_id: str | None = None) Self[source]

Create a new instance of Device.

The type of the result is determined by the given argument.

- (index: int)        -> PhysicalDevice
- (index: (int, int)) -> MigDevice
- (uuid: str)         -> Union[PhysicalDevice, MigDevice]  # depending on the UUID value
- (bus_id: str)       -> PhysicalDevice

Note: This method takes exactly one non-None argument.

Returns: Union[PhysicalDevice, MigDevice]

A PhysicalDevice instance or a MigDevice instance.

Raises:
  • TypeError – If the number of non-None arguments is not exactly 1.

  • TypeError – If the given index is a tuple but does not consist of two integers.

__init__(index: int | str | None = None, *, uuid: str | None = None, bus_id: str | None = None) None[source]

Initialize the instance created by __new__().

__repr__() str[source]

Return a string representation of the device.

__eq__(other: object) bool[source]

Test equality to other object.

__hash__() int[source]

Return a hash value of the device.

__getattr__(name: str) Any | Callable[..., Any][source]

Get the object attribute.

If the attribute is not defined, make a method from pynvml.nvmlDeviceGet<AttributeName>(handle). The attribute name will be converted to a PascalCase string.

Raises:

AttributeError – If the attribute is not defined in pynvml.py.

Examples

>>> device = Device(0)
>>> # Method `cuda_compute_capability` is not implemented in the class definition
>>> PhysicalDevice.cuda_compute_capability
AttributeError: type object 'Device' has no attribute 'cuda_compute_capability'
>>> # Dynamically create a new method from `pynvml.nvmlDeviceGetCudaComputeCapability(device.handle, *args, **kwargs)`
>>> device.cuda_compute_capability
<function PhysicalDevice.cuda_compute_capability at 0x7fbfddf5d9d0>
>>> device.cuda_compute_capability()
(8, 6)
__reduce__() tuple[type[Device], tuple[int | tuple[int, int]]][source]

Return state information for pickling.

property index: int | tuple[int, int]

The NVML index of the device.

Returns: Union[int, Tuple[int, int]]

Returns an int for physical device and tuple of two integers for MIG device.

property nvml_index: int | tuple[int, int]

The NVML index of the device.

Returns: Union[int, Tuple[int, int]]

Returns an int for physical device and tuple of two integers for MIG device.

property physical_index: int

The index of the physical device.

Returns: int

An int for the physical device index. For MIG devices, returns the index of the parent physical device.

property handle: LP_struct_c_nvmlDevice_t

The NVML device handle.

property cuda_index: int

The CUDA device index.

The value will be evaluated on the first call.

Raises:

RuntimeError – If the current device is not visible to CUDA applications (i.e. not listed in the CUDA_VISIBLE_DEVICES environment variable or the environment variable is invalid).

name() str | NaType[source]

The official product name of the GPU. This is an alphanumeric string. For all products.

Returns: Union[str, NaType]

The official product name, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=name
uuid() str | NaType[source]

This value is the globally unique immutable alphanumeric identifier of the GPU.

It does not correspond to any physical label on the board.

Returns: Union[str, NaType]

The UUID of the device, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=uuid
bus_id() str | NaType[source]

PCI bus ID as “domain:bus:device.function”, in hex.

Returns: Union[str, NaType]

The PCI bus ID of the device, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=pci.bus_id
serial() str | NaType[source]

This number matches the serial number physically printed on each board.

It is a globally unique immutable alphanumeric value.

Returns: Union[str, NaType]

The serial number of the device, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=serial
memory_info() MemoryInfo[source]

Return a named tuple with memory information (in bytes) for the device.

Returns: MemoryInfo(total, free, used)

A named tuple with memory information, the item could be nvitop.NA when not applicable.

memory_total() int | NaType[source]

Total installed GPU memory in bytes.

Returns: Union[int, NaType]

Total installed GPU memory in bytes, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=memory.total
memory_used() int | NaType[source]

Total memory allocated by active contexts in bytes.

Returns: Union[int, NaType]

Total memory allocated by active contexts in bytes, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=memory.used
memory_free() int | NaType[source]

Total free memory in bytes.

Returns: Union[int, NaType]

Total free memory in bytes, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=memory.free
memory_total_human() str | NaType[source]

Total installed GPU memory in human readable format.

Returns: Union[str, NaType]

Total installed GPU memory in human readable format, or nvitop.NA when not applicable.

memory_used_human() str | NaType[source]

Total memory allocated by active contexts in human readable format.

Returns: Union[str, NaType]

Total memory allocated by active contexts in human readable format, or nvitop.NA when not applicable.

memory_free_human() str | NaType[source]

Total free memory in human readable format.

Returns: Union[str, NaType]

Total free memory in human readable format, or nvitop.NA when not applicable.

memory_percent() float | NaType[source]

The percentage of used memory over total memory (0 <= p <= 100).

Returns: Union[float, NaType]

The percentage of used memory over total memory, or nvitop.NA when not applicable.

memory_usage() str[source]

The used memory over total memory in human readable format.

Returns: str

The used memory over total memory in human readable format, or 'N/A / N/A' when not applicable.
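nvitop renders GPU memory sizes in MiB in these human readable strings (in the Device example above, 11550654464 bytes becomes '11016MiB'). A minimal sketch of that MiB rendering (assumed rounding behavior, not nvitop's exact formatter):

```python
def bytes_to_mib(num_bytes: int) -> str:
    """Sketch: render a byte count in MiB, as in '11016MiB' above."""
    # 1 MiB = 2**20 bytes; round to the nearest whole MiB.
    return f'{round(num_bytes / (1 << 20))}MiB'
```

For example, bytes_to_mib(11550654464) gives '11016MiB', matching the memory_free_human() output shown in the Device examples.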

bar1_memory_info() MemoryInfo[source]

Return a named tuple with BAR1 memory information (in bytes) for the device.

Returns: MemoryInfo(total, free, used)

A named tuple with BAR1 memory information, the item could be nvitop.NA when not applicable.

bar1_memory_total() int | NaType[source]

Total BAR1 memory in bytes.

Returns: Union[int, NaType]

Total BAR1 memory in bytes, or nvitop.NA when not applicable.

bar1_memory_used() int | NaType[source]

Total used BAR1 memory in bytes.

Returns: Union[int, NaType]

Total used BAR1 memory in bytes, or nvitop.NA when not applicable.

bar1_memory_free() int | NaType[source]

Total free BAR1 memory in bytes.

Returns: Union[int, NaType]

Total free BAR1 memory in bytes, or nvitop.NA when not applicable.

bar1_memory_total_human() str | NaType[source]

Total BAR1 memory in human readable format.

Returns: Union[str, NaType]

Total BAR1 memory in human readable format, or nvitop.NA when not applicable.

bar1_memory_used_human() str | NaType[source]

Total used BAR1 memory in human readable format.

Returns: Union[str, NaType]

Total used BAR1 memory in human readable format, or nvitop.NA when not applicable.

bar1_memory_free_human() str | NaType[source]

Total free BAR1 memory in human readable format.

Returns: Union[str, NaType]

Total free BAR1 memory in human readable format, or nvitop.NA when not applicable.

bar1_memory_percent() float | NaType[source]

The percentage of used BAR1 memory over total BAR1 memory (0 <= p <= 100).

Returns: Union[float, NaType]

The percentage of used BAR1 memory over total BAR1 memory, or nvitop.NA when not applicable.

bar1_memory_usage() str[source]

The used BAR1 memory over total BAR1 memory in human readable format.

Returns: str

The used BAR1 memory over total BAR1 memory in human readable format, or 'N/A / N/A' when not applicable.

utilization_rates() UtilizationRates[source]

Return a named tuple with GPU utilization rates (in percentage) for the device.

Returns: UtilizationRates(gpu, memory, encoder, decoder)

A named tuple with GPU utilization rates (in percentage) for the device, the item could be nvitop.NA when not applicable.

gpu_utilization() int | NaType[source]

Percent of time over the past sample period during which one or more kernels was executing on the GPU.

The sample period may be between 1 second and 1/6 second depending on the product.

Returns: Union[int, NaType]

The GPU utilization rate in percentage, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=utilization.gpu
gpu_percent() int | NaType

Percent of time over the past sample period during which one or more kernels was executing on the GPU.

The sample period may be between 1 second and 1/6 second depending on the product.

Returns: Union[int, NaType]

The GPU utilization rate in percentage, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=utilization.gpu
memory_utilization() int | NaType[source]

Percent of time over the past sample period during which global (device) memory was being read or written.

The sample period may be between 1 second and 1/6 second depending on the product.

Returns: Union[int, NaType]

The memory bandwidth utilization rate of the GPU in percentage, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=utilization.memory
encoder_utilization() int | NaType[source]

The encoder utilization rate in percentage.

Returns: Union[int, NaType]

The encoder utilization rate in percentage, or nvitop.NA when not applicable.

decoder_utilization() int | NaType[source]

The decoder utilization rate in percentage.

Returns: Union[int, NaType]

The decoder utilization rate in percentage, or nvitop.NA when not applicable.

clock_infos() ClockInfos[source]

Return a named tuple with current clock speeds (in MHz) for the device.

Returns: ClockInfos(graphics, sm, memory, video)

A named tuple with current clock speeds (in MHz) for the device, the item could be nvitop.NA when not applicable.

clocks() ClockInfos

Return a named tuple with current clock speeds (in MHz) for the device.

Returns: ClockInfos(graphics, sm, memory, video)

A named tuple with current clock speeds (in MHz) for the device, the item could be nvitop.NA when not applicable.

max_clock_infos() ClockInfos[source]

Return a named tuple with maximum clock speeds (in MHz) for the device.

Returns: ClockInfos(graphics, sm, memory, video)

A named tuple with maximum clock speeds (in MHz) for the device, the item could be nvitop.NA when not applicable.

max_clocks() ClockInfos

Return a named tuple with maximum clock speeds (in MHz) for the device.

Returns: ClockInfos(graphics, sm, memory, video)

A named tuple with maximum clock speeds (in MHz) for the device, the item could be nvitop.NA when not applicable.

clock_speed_infos() ClockSpeedInfos[source]

Return a named tuple with the current and the maximum clock speeds (in MHz) for the device.

Returns: ClockSpeedInfos(current, max)

A named tuple with the current and the maximum clock speeds (in MHz) for the device.

graphics_clock() int | NaType[source]

Current frequency of graphics (shader) clock in MHz.

Returns: Union[int, NaType]

The current frequency of graphics (shader) clock in MHz, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=clocks.current.graphics
sm_clock() int | NaType[source]

Current frequency of SM (Streaming Multiprocessor) clock in MHz.

Returns: Union[int, NaType]

The current frequency of SM (Streaming Multiprocessor) clock in MHz, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=clocks.current.sm
memory_clock() int | NaType[source]

Current frequency of memory clock in MHz.

Returns: Union[int, NaType]

The current frequency of memory clock in MHz, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=clocks.current.memory
video_clock() int | NaType[source]

Current frequency of video encoder/decoder clock in MHz.

Returns: Union[int, NaType]

The current frequency of video encoder/decoder clock in MHz, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=clocks.current.video
max_graphics_clock() int | NaType[source]

Maximum frequency of graphics (shader) clock in MHz.

Returns: Union[int, NaType]

The maximum frequency of graphics (shader) clock in MHz, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=clocks.max.graphics
max_sm_clock() int | NaType[source]

Maximum frequency of SM (Streaming Multiprocessor) clock in MHz.

Returns: Union[int, NaType]

The maximum frequency of SM (Streaming Multiprocessor) clock in MHz, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=clocks.max.sm
max_memory_clock() int | NaType[source]

Maximum frequency of memory clock in MHz.

Returns: Union[int, NaType]

The maximum frequency of memory clock in MHz, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=clocks.max.memory
max_video_clock() int | NaType[source]

Maximum frequency of video encoder/decoder clock in MHz.

Returns: Union[int, NaType]

The maximum frequency of video encoder/decoder clock in MHz, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=clocks.max.video
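The current and maximum clock queries above combine naturally into a quick clock-headroom report. A minimal sketch based on the APIs documented here (hedged: it degrades to an empty report when nvitop or an NVIDIA driver is unavailable):

```python
# Hedged sketch: report current vs. maximum clock speeds per device.
# Requires nvitop and an NVIDIA driver; falls back to an empty list otherwise.
try:
    from nvitop import Device
    devices = Device.all()
except Exception:  # nvitop not installed or NVML unavailable
    devices = []

reports = []
for device in devices:
    current, maximum = device.clock_speed_infos()  # ClockSpeedInfos(current, max)
    # Each field may be `nvitop.NA` when not applicable.
    reports.append(
        f'{device.name()}: '
        f'SM {current.sm}/{maximum.sm} MHz, '
        f'memory {current.memory}/{maximum.memory} MHz'
    )
print('\n'.join(reports) or 'no NVIDIA devices found')
```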
fan_speed() int | NaType[source]

The fan speed value is the percent of the product’s maximum noise tolerance fan speed that the device’s fan is currently intended to run at.

This value may exceed 100% in certain cases. Note: The reported speed is the intended fan speed. If the fan is physically blocked and unable to spin, this output will not match the actual fan speed. Many parts do not report fan speeds because they rely on cooling via fans in the surrounding enclosure.

Returns: Union[int, NaType]

The fan speed value in percentage, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=fan.speed
temperature() int | NaType[source]

Core GPU temperature in degrees C.

Returns: Union[int, NaType]

The core GPU temperature in Celsius degrees, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=temperature.gpu
power_usage() int | NaType[source]

The last measured power draw for the entire board in milliwatts.

Returns: Union[int, NaType]

The power draw for the entire board in milliwatts, or nvitop.NA when not applicable.

Command line equivalent:

$(( "$(nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=power.draw)" * 1000 ))
power_draw() int | NaType

The last measured power draw for the entire board in milliwatts.

Returns: Union[int, NaType]

The power draw for the entire board in milliwatts, or nvitop.NA when not applicable.

Command line equivalent:

$(( "$(nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=power.draw)" * 1000 ))
power_limit() int | NaType[source]

The software power limit in milliwatts.

Set by software like nvidia-smi.

Returns: Union[int, NaType]

The software power limit in milliwatts, or nvitop.NA when not applicable.

Command line equivalent:

$(( "$(nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=power.limit)" * 1000 ))
power_status() str[source]

A string showing the power usage over the power limit, in watts.

Returns: str

The string of power usage over power limit in watts, or 'N/A / N/A' when not applicable.
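Since power_usage() and power_limit() report milliwatts, a watts-based report needs a division by 1000. A minimal sketch under that assumption (hedged: it prints a fallback message when nvitop or an NVIDIA driver is unavailable):

```python
# Hedged sketch: print board power draw as "<used>W / <limit>W (<percent>%)".
# power_usage() and power_limit() report milliwatts; divide by 1000 for watts.
try:
    from nvitop import NA, Device
    devices = Device.all()
except Exception:  # nvitop not installed or NVML unavailable
    devices = []
    NA = None  # placeholder; the loop below is empty in this case

lines = []
for device in devices:
    usage, limit = device.power_usage(), device.power_limit()
    if usage is NA or limit is NA:
        lines.append(f'{device.name()}: {device.power_status()}')
    else:
        lines.append(f'{device.name()}: {usage / 1000:.0f}W / {limit / 1000:.0f}W '
                     f'({100.0 * usage / limit:.1f}%)')
print('\n'.join(lines) or 'no NVIDIA devices found')
```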

pcie_throughput() ThroughputInfo[source]

The current PCIe throughput in KiB/s.

This function queries a byte counter over a 20 ms interval; the result is therefore the PCIe throughput over that interval.

Returns: ThroughputInfo(tx, rx)

A named tuple with the current PCIe throughput in KiB/s; each item may be nvitop.NA when not applicable.

pcie_tx_throughput() int | NaType[source]

The current PCIe transmit throughput in KiB/s.

This function queries a byte counter over a 20 ms interval; the result is therefore the PCIe throughput over that interval.

Returns: Union[int, NaType]

The current PCIe transmit throughput in KiB/s, or nvitop.NA when not applicable.

pcie_rx_throughput() int | NaType[source]

The current PCIe receive throughput in KiB/s.

This function queries a byte counter over a 20 ms interval; the result is therefore the PCIe throughput over that interval.

Returns: Union[int, NaType]

The current PCIe receive throughput in KiB/s, or nvitop.NA when not applicable.

pcie_tx_throughput_human() str | NaType[source]

The current PCIe transmit throughput in human readable format.

This function queries a byte counter over a 20 ms interval; the result is therefore the PCIe throughput over that interval.

Returns: Union[str, NaType]

The current PCIe transmit throughput in human readable format, or nvitop.NA when not applicable.

pcie_rx_throughput_human() str | NaType[source]

The current PCIe receive throughput in human readable format.

This function queries a byte counter over a 20 ms interval; the result is therefore the PCIe throughput over that interval.

Returns: Union[str, NaType]

The current PCIe receive throughput in human readable format, or nvitop.NA when not applicable.

nvlink_link_count() int[source]

The number of NVLinks that the GPU has.

Returns: int

The number of NVLinks that the GPU has.

nvlink_throughput(interval: float | None = None) list[ThroughputInfo][source]

The current NVLink throughput for each NVLink in KiB/s.

This function queries data counters between method calls; the result is therefore the NVLink throughput over that interval. On the first call, the function blocks for 20 ms to collect the initial data counters.

Parameters:

interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).

Returns: List[ThroughputInfo(tx, rx)]

A list of named tuples with the current NVLink throughput for each NVLink in KiB/s; each item may be nvitop.NA when not applicable.
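The interval semantics described above support both a blocking one-shot sample and a non-blocking poll. A minimal sketch of the two call styles (hedged: it does nothing when nvitop, an NVIDIA driver, or NVLink hardware is unavailable):

```python
# Hedged sketch: two ways to sample NVLink throughput.
import time

try:
    from nvitop import Device
    devices = Device.all()
except Exception:  # nvitop not installed or NVML unavailable
    devices = []

for device in devices:
    if device.nvlink_link_count() == 0:
        continue  # no NVLinks on this GPU
    # Blocking: compare counters across a 1-second window.
    per_link = device.nvlink_throughput(interval=1.0)
    # Non-blocking: compare counters accumulated since the previous call.
    time.sleep(0.5)
    per_link = device.nvlink_throughput()  # interval=None
    for link_id, (tx, rx) in enumerate(per_link):
        print(f'{device.name()} NVLink {link_id}: tx={tx} KiB/s, rx={rx} KiB/s')
```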

nvlink_total_throughput(interval: float | None = None) ThroughputInfo[source]

The total NVLink throughput for all NVLinks in KiB/s.

This function queries data counters between method calls; the result is therefore the NVLink throughput over that interval. On the first call, the function blocks for 20 ms to collect the initial data counters.

Parameters:

interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).

Returns: ThroughputInfo(tx, rx)

A named tuple with the total NVLink throughput for all NVLinks in KiB/s; each item may be nvitop.NA when not applicable.

nvlink_mean_throughput(interval: float | None = None) ThroughputInfo[source]

The mean NVLink throughput for all NVLinks in KiB/s.

This function queries data counters between method calls; the result is therefore the NVLink throughput over that interval. On the first call, the function blocks for 20 ms to collect the initial data counters.

Parameters:

interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).

Returns: ThroughputInfo(tx, rx)

A named tuple with the mean NVLink throughput for all NVLinks in KiB/s; each item may be nvitop.NA when not applicable.

nvlink_tx_throughput(interval: float | None = None) list[int | NaType][source]

The current NVLink transmit data throughput in KiB/s for each NVLink.

This function queries data counters between method calls; the result is therefore the NVLink throughput over that interval. On the first call, the function blocks for 20 ms to collect the initial data counters.

Parameters:

interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).

Returns: List[Union[int, NaType]]

The current NVLink transmit data throughput in KiB/s for each NVLink, or nvitop.NA when not applicable.

nvlink_mean_tx_throughput(interval: float | None = None) int | NaType[source]

The mean NVLink transmit data throughput for all NVLinks in KiB/s.

This function queries data counters between method calls; the result is therefore the NVLink throughput over that interval. On the first call, the function blocks for 20 ms to collect the initial data counters.

Parameters:

interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).

Returns: Union[int, NaType]

The mean NVLink transmit data throughput for all NVLinks in KiB/s, or nvitop.NA when not applicable.

nvlink_total_tx_throughput(interval: float | None = None) int | NaType[source]

The total NVLink transmit data throughput for all NVLinks in KiB/s.

This function queries data counters between method calls; the result is therefore the NVLink throughput over that interval. On the first call, the function blocks for 20 ms to collect the initial data counters.

Parameters:

interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).

Returns: Union[int, NaType]

The total NVLink transmit data throughput for all NVLinks in KiB/s, or nvitop.NA when not applicable.

nvlink_rx_throughput(interval: float | None = None) list[int | NaType][source]

The current NVLink receive data throughput for each NVLink in KiB/s.

This function queries data counters between method calls; the result is therefore the NVLink throughput over that interval. On the first call, the function blocks for 20 ms to collect the initial data counters.

Parameters:

interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).

Returns: List[Union[int, NaType]]

The current NVLink receive data throughput for each NVLink in KiB/s, or nvitop.NA when not applicable.

nvlink_mean_rx_throughput(interval: float | None = None) int | NaType[source]

The mean NVLink receive data throughput for all NVLinks in KiB/s.

This function queries data counters between method calls; the result is therefore the NVLink throughput over that interval. On the first call, the function blocks for 20 ms to collect the initial data counters.

Parameters:

interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).

Returns: Union[int, NaType]

The mean NVLink receive data throughput for all NVLinks in KiB/s, or nvitop.NA when not applicable.

nvlink_total_rx_throughput(interval: float | None = None) int | NaType[source]

The total NVLink receive data throughput for all NVLinks in KiB/s.

This function queries data counters between method calls; the result is therefore the NVLink throughput over that interval. On the first call, the function blocks for 20 ms to collect the initial data counters.

Parameters:

interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).

Returns: Union[int, NaType]

The total NVLink receive data throughput for all NVLinks in KiB/s, or nvitop.NA when not applicable.

nvlink_tx_throughput_human(interval: float | None = None) list[str | NaType][source]

The current NVLink transmit data throughput for each NVLink in human readable format.

This function queries data counters between method calls; the result is therefore the NVLink throughput over that interval. On the first call, the function blocks for 20 ms to collect the initial data counters.

Parameters:

interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).

Returns: List[Union[str, NaType]]

The current NVLink transmit data throughput for each NVLink in human readable format, or nvitop.NA when not applicable.

nvlink_mean_tx_throughput_human(interval: float | None = None) str | NaType[source]

The mean NVLink transmit data throughput for all NVLinks in human readable format.

This function queries data counters between method calls; the result is therefore the NVLink throughput over that interval. On the first call, the function blocks for 20 ms to collect the initial data counters.

Parameters:

interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).

Returns: Union[str, NaType]

The mean NVLink transmit data throughput for all NVLinks in human readable format, or nvitop.NA when not applicable.

nvlink_total_tx_throughput_human(interval: float | None = None) str | NaType[source]

The total NVLink transmit data throughput for all NVLinks in human readable format.

This function queries data counters between method calls; the result is therefore the NVLink throughput over that interval. On the first call, the function blocks for 20 ms to collect the initial data counters.

Parameters:

interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).

Returns: Union[str, NaType]

The total NVLink transmit data throughput for all NVLinks in human readable format, or nvitop.NA when not applicable.

nvlink_rx_throughput_human(interval: float | None = None) list[str | NaType][source]

The current NVLink receive data throughput for each NVLink in human readable format.

This function queries data counters between method calls; the result is therefore the NVLink throughput over that interval. On the first call, the function blocks for 20 ms to collect the initial data counters.

Parameters:

interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).

Returns: List[Union[str, NaType]]

The current NVLink receive data throughput for each NVLink in human readable format, or nvitop.NA when not applicable.

nvlink_mean_rx_throughput_human(interval: float | None = None) str | NaType[source]

The mean NVLink receive data throughput for all NVLinks in human readable format.

This function queries data counters between method calls; the result is therefore the NVLink throughput over that interval. On the first call, the function blocks for 20 ms to collect the initial data counters.

Parameters:

interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).

Returns: Union[str, NaType]

The mean NVLink receive data throughput for all NVLinks in human readable format, or nvitop.NA when not applicable.

nvlink_total_rx_throughput_human(interval: float | None = None) str | NaType[source]

The total NVLink receive data throughput for all NVLinks in human readable format.

This function queries data counters between method calls; the result is therefore the NVLink throughput over that interval. On the first call, the function blocks for 20 ms to collect the initial data counters.

Parameters:

interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).

Returns: Union[str, NaType]

The total NVLink receive data throughput for all NVLinks in human readable format, or nvitop.NA when not applicable.

display_active() str | NaType[source]

A flag that indicates whether a display is initialized on the GPU (e.g. memory is allocated on the device for display).

Display can be active even when no monitor is physically attached. “Enabled” indicates an active display. “Disabled” indicates otherwise.

Returns: Union[str, NaType]
  • 'Disabled': if not an active display device.

  • 'Enabled': if an active display device.

  • nvitop.NA: if not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=display_active
display_mode() str | NaType[source]

A flag that indicates whether a physical display (e.g. monitor) is currently connected to any of the GPU’s connectors.

“Enabled” indicates an attached display. “Disabled” indicates otherwise.

Returns: Union[str, NaType]
  • 'Disabled': if the display mode is disabled.

  • 'Enabled': if the display mode is enabled.

  • nvitop.NA: if not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=display_mode
current_driver_model() str | NaType[source]

The driver model currently in use.

Always “N/A” on Linux. On Windows, the TCC (WDM) and WDDM driver models are supported. The TCC driver model is optimized for compute applications, i.e., kernel launch times will be quicker with TCC. The WDDM driver model is designed for graphics applications and is not recommended for compute applications. Linux does not support multiple driver models and always reports “N/A”.

Returns: Union[str, NaType]
  • 'WDDM': for WDDM driver model on Windows.

  • 'WDM': for TCC (WDM) driver model on Windows.

  • nvitop.NA: if not applicable, e.g. on Linux.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=driver_model.current
driver_model() str | NaType

The driver model currently in use.

Always “N/A” on Linux. On Windows, the TCC (WDM) and WDDM driver models are supported. The TCC driver model is optimized for compute applications, i.e., kernel launch times will be quicker with TCC. The WDDM driver model is designed for graphics applications and is not recommended for compute applications. Linux does not support multiple driver models and always reports “N/A”.

Returns: Union[str, NaType]
  • 'WDDM': for WDDM driver model on Windows.

  • 'WDM': for TCC (WDM) driver model on Windows.

  • nvitop.NA: if not applicable, e.g. on Linux.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=driver_model.current
persistence_mode() str | NaType[source]

A flag that indicates whether persistence mode is enabled for the GPU. Value is either “Enabled” or “Disabled”.

When persistence mode is enabled the NVIDIA driver remains loaded even when no active clients, such as X11 or nvidia-smi, exist. This minimizes the driver load latency associated with running dependent apps, such as CUDA programs. Linux only.

Returns: Union[str, NaType]
  • 'Disabled': if the persistence mode is disabled.

  • 'Enabled': if the persistence mode is enabled.

  • nvitop.NA: if not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=persistence_mode
performance_state() str | NaType[source]

The current performance state for the GPU. States range from P0 (maximum performance) to P12 (minimum performance).

Returns: Union[str, NaType]

The current performance state in format P<int>, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=pstate
total_volatile_uncorrected_ecc_errors() int | NaType[source]

Total errors detected across entire chip.

Returns: Union[int, NaType]

The total number of uncorrected errors in volatile ECC memory, or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=ecc.errors.uncorrected.volatile.total
compute_mode() str | NaType[source]

The compute mode flag indicates whether individual or multiple compute applications may run on the GPU.

Returns: Union[str, NaType]
  • 'Default': means multiple contexts are allowed per device.

  • 'Exclusive Thread': deprecated, use 'Exclusive Process' instead.

  • 'Prohibited': means no contexts are allowed per device (no compute apps).

  • 'Exclusive Process': means only one context is allowed per device, usable from multiple threads at a time.

  • nvitop.NA: if not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=compute_mode
cuda_compute_capability() tuple[int, int] | NaType[source]

The CUDA compute capability for the device.

Returns: Union[Tuple[int, int], NaType]

The CUDA compute capability version in format (major, minor), or nvitop.NA when not applicable.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=compute_cap
is_mig_device() bool[source]

Return whether or not the device is a MIG device.

mig_mode() str | NaType[source]

The MIG mode that the GPU is currently operating under.

Returns: Union[str, NaType]
  • 'Disabled': if the MIG mode is disabled.

  • 'Enabled': if the MIG mode is enabled.

  • nvitop.NA: if not applicable, e.g. the GPU does not support MIG mode.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=mig.mode.current
is_mig_mode_enabled() bool[source]

Test whether the MIG mode is enabled on the device.

Return False if MIG mode is disabled or the device does not support MIG mode.

max_mig_device_count() int[source]

Return the maximum number of MIG instances the device supports.

This method will return 0 if the device does not support MIG mode.

mig_devices() list[MigDevice][source]

Return a list of children MIG devices of the current device.

This method will return an empty list if the MIG mode is disabled or the device does not support MIG mode.
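The MIG queries above make it straightforward to flatten a mixed fleet into leaf devices, mirroring what to_leaf_devices() does. A minimal sketch (hedged: it produces an empty result when nvitop or an NVIDIA driver is unavailable):

```python
# Hedged sketch: enumerate leaf devices, descending into MIG instances.
try:
    from nvitop import Device
    devices = Device.all()
except Exception:  # nvitop not installed or NVML unavailable
    devices = []

leaves = []
for device in devices:
    if device.is_mig_mode_enabled():
        leaves.extend(device.mig_devices())  # children MIG devices
    else:
        leaves.append(device)  # the physical device itself is the leaf
print(f'{len(leaves)} leaf device(s)')
```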

is_leaf_device() bool[source]

Test whether the device is a physical device with MIG mode disabled or a MIG device.

Return True if the device is a physical device with MIG mode disabled or a MIG device; return False if the device is a physical device with MIG mode enabled.

to_leaf_devices() list[PhysicalDevice] | list[MigDevice] | list[CudaDevice] | list[CudaMigDevice][source]

Return a list of leaf devices.

Note that a CUDA device is always a leaf device.

processes() dict[int, GpuProcess][source]

Return a dictionary of processes running on the GPU.

Returns: Dict[int, GpuProcess]

A dictionary mapping PID to GPU process instance.
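Because processes() returns a PID-to-GpuProcess mapping, it is easy to rank processes by GPU memory usage. A minimal sketch (hedged: it produces an empty result when nvitop or an NVIDIA driver is unavailable, and treats nvitop.NA memory values as 0 for sorting):

```python
# Hedged sketch: list GPU processes per device, sorted by GPU memory usage.
try:
    from nvitop import Device
    devices = Device.all()
except Exception:  # nvitop not installed or NVML unavailable
    devices = []


def used_bytes(process):
    memory = process.gpu_memory()  # bytes, or NA when not applicable
    return memory if isinstance(memory, int) else 0


rows = []
for device in devices:
    processes = device.processes()  # Dict[int, GpuProcess]
    for process in sorted(processes.values(), key=used_bytes, reverse=True):
        rows.append((device.index, process.pid, process.gpu_memory_human()))
print(rows or 'no GPU processes found')
```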

as_snapshot() Snapshot[source]

Return a one-time snapshot of the device.

The attributes are defined in SNAPSHOT_KEYS.

SNAPSHOT_KEYS: ClassVar[list[str]] = ['name', 'uuid', 'bus_id', 'memory_info', 'memory_used', 'memory_free', 'memory_total', 'memory_used_human', 'memory_free_human', 'memory_total_human', 'memory_percent', 'memory_usage', 'utilization_rates', 'gpu_utilization', 'memory_utilization', 'encoder_utilization', 'decoder_utilization', 'clock_infos', 'max_clock_infos', 'clock_speed_infos', 'sm_clock', 'memory_clock', 'fan_speed', 'temperature', 'power_usage', 'power_limit', 'power_status', 'pcie_throughput', 'pcie_tx_throughput', 'pcie_rx_throughput', 'pcie_tx_throughput_human', 'pcie_rx_throughput_human', 'display_active', 'display_mode', 'current_driver_model', 'persistence_mode', 'performance_state', 'total_volatile_uncorrected_ecc_errors', 'compute_mode', 'cuda_compute_capability', 'mig_mode']
oneshot() Generator[None, None, None][source]

A utility context manager which considerably speeds up the retrieval of multiple pieces of device information at the same time.

Internally, different pieces of device information (e.g. memory_info, utilization_rates, …) may be fetched by the same underlying routine, but only one value is returned and the others are discarded. When using this context manager, the internal routine is executed once (in the example below, on memory_info()) and the other values are cached.

The cache is cleared when exiting the context manager block. The advice is to use this context manager every time you retrieve more than one piece of information about the device.

Examples

>>> from nvitop import Device
>>> device = Device(0)
>>> with device.oneshot():
...     device.memory_info()        # collect multiple info
...     device.memory_used()        # return cached value
...     device.memory_free_human()  # return cached value
...     device.memory_percent()     # return cached value
class nvitop.PhysicalDevice(index: int | tuple[int, int] | str | None = None, *, uuid: str | None = None, bus_id: str | None = None)[source]

Bases: Device

Class for physical devices.

This is the real GPU installed in the system.

property physical_index: int

Zero-based index of the GPU. Can change at each boot.

Command line equivalent:

nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=index
max_mig_device_count() int[source]

Return the maximum number of MIG instances the device supports.

This method will return 0 if the device does not support MIG mode.

mig_device(mig_index: int) MigDevice[source]

Return a child MIG device of the given index.

Raises:

libnvml.NVMLError – If the device does not support MIG mode or the given MIG device does not exist.

mig_devices() list[MigDevice][source]

Return a list of children MIG devices of the current device.

This method will return an empty list if the MIG mode is disabled or the device does not support MIG mode.

class nvitop.MigDevice(index: int | tuple[int, int] | str | None = None, *, uuid: str | None = None, bus_id: str | None = None)[source]

Bases: Device

Class for MIG devices.

classmethod count() int[source]

The total number of MIG devices aggregated over all physical devices.

classmethod all() list[MigDevice][source]

Return a list of MIG devices aggregated over all physical devices.

classmethod from_indices(indices: Iterable[tuple[int, int]]) list[MigDevice][source]

Return a list of MIG devices of the given indices.

Parameters:

indices (Iterable[Tuple[int, int]]) – Indices of the MIG devices. Each index is a tuple of two integers.

Returns: List[MigDevice]

A list of MigDevice instances of the given indices.

Raises:
__init__(index: tuple[int, int] | str | None = None, *, uuid: str | None = None) None[source]

Initialize the instance created by __new__().

Raises:
property index: tuple[int, int]

The index of the MIG device. This is a tuple of two integers.

property physical_index: int

The index of the parent physical device.

property mig_index: int

The index of the MIG device among all MIG devices of the parent device.

property parent: PhysicalDevice

The parent physical device.

gpu_instance_id() int | NaType[source]

The GPU instance ID of the MIG device.

Returns: Union[int, NaType]

The GPU instance ID of the MIG device, or nvitop.NA when not applicable.

compute_instance_id() int | NaType[source]

The compute instance ID of the MIG device.

Returns: Union[int, NaType]

The compute instance ID of the MIG device, or nvitop.NA when not applicable.

as_snapshot() Snapshot[source]

Return a one-time snapshot of the device.

The attributes are defined in SNAPSHOT_KEYS.

SNAPSHOT_KEYS: ClassVar[list[str]] = ['name', 'uuid', 'bus_id', 'memory_info', 'memory_used', 'memory_free', 'memory_total', 'memory_used_human', 'memory_free_human', 'memory_total_human', 'memory_percent', 'memory_usage', 'utilization_rates', 'gpu_utilization', 'memory_utilization', 'encoder_utilization', 'decoder_utilization', 'clock_infos', 'max_clock_infos', 'clock_speed_infos', 'sm_clock', 'memory_clock', 'fan_speed', 'temperature', 'power_usage', 'power_limit', 'power_status', 'pcie_throughput', 'pcie_tx_throughput', 'pcie_rx_throughput', 'pcie_tx_throughput_human', 'pcie_rx_throughput_human', 'display_active', 'display_mode', 'current_driver_model', 'persistence_mode', 'performance_state', 'total_volatile_uncorrected_ecc_errors', 'compute_mode', 'cuda_compute_capability', 'mig_mode', 'gpu_instance_id', 'compute_instance_id']
class nvitop.CudaDevice(cuda_index: int | None = None, *, nvml_index: int | tuple[int, int] | None = None, uuid: str | None = None)[source]

Bases: Device

Class for devices enumerated over the CUDA ordinal.

The order can vary with the CUDA_VISIBLE_DEVICES environment variable.

See also: CUDA Device Enumeration.

CudaDevice.__new__() returns different types depending on the given arguments.

- (cuda_index: int)        -> Union[CudaDevice, CudaMigDevice]  # depending on `CUDA_VISIBLE_DEVICES`
- (uuid: str)              -> Union[CudaDevice, CudaMigDevice]  # depending on `CUDA_VISIBLE_DEVICES`
- (nvml_index: int)        -> CudaDevice
- (nvml_index: (int, int)) -> CudaMigDevice

Examples

>>> import os
>>> os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
>>> os.environ['CUDA_VISIBLE_DEVICES'] = '3,2,1,0'
>>> CudaDevice.count()                     # number of NVIDIA GPUs visible to CUDA applications
4
>>> Device.cuda.count()                    # use alias in class `Device`
4
>>> CudaDevice.all()                       # all CUDA visible devices (or `Device.cuda.all()`)
[
    CudaDevice(cuda_index=0, nvml_index=3, ...),
    CudaDevice(cuda_index=1, nvml_index=2, ...),
    ...
]
>>> cuda0 = CudaDevice(cuda_index=0)       # use CUDA ordinal (or `Device.cuda(0)`)
>>> cuda1 = CudaDevice(nvml_index=2)       # use NVML ordinal
>>> cuda2 = CudaDevice(uuid='GPU-xxxxxx')  # use UUID string
>>> cuda0.memory_free()                    # total free memory in bytes
11550654464
>>> cuda0.memory_free_human()              # total free memory in human readable format
'11016MiB'
>>> cuda1.as_snapshot()                    # take a one-time snapshot of the device
CudaDeviceSnapshot(
    real=CudaDevice(cuda_index=1, nvml_index=2, ...),
    ...
)
Raises:
classmethod is_available() bool[source]

Test whether there are any CUDA-capable devices available.

classmethod count() int[source]

The number of GPUs visible to CUDA applications.

classmethod all() list[CudaDevice][source]

All CUDA visible devices.

Note

The result could be empty if the CUDA_VISIBLE_DEVICES environment variable is invalid.

classmethod from_indices(indices: int | Iterable[int] | None = None) list[CudaDevice][source]

Return a list of CUDA devices of the given CUDA indices.

The CUDA ordinal will be enumerated from the CUDA_VISIBLE_DEVICES environment variable.

See also: CUDA Device Enumeration.
Parameters:

cuda_indices (Iterable[int]) – The indices of the GPUs in CUDA ordinal. If not given, all visible CUDA devices are returned.

Returns: List[CudaDevice]

A list of CudaDevice of the given CUDA indices.

Raises:
static __new__(cls, cuda_index: int | None = None, *, nvml_index: int | tuple[int, int] | None = None, uuid: str | None = None) Self[source]

Create a new instance of CudaDevice.

The type of the result is determined by the given argument.

- (cuda_index: int)        -> Union[CudaDevice, CudaMigDevice]  # depending on `CUDA_VISIBLE_DEVICES`
- (uuid: str)              -> Union[CudaDevice, CudaMigDevice]  # depending on `CUDA_VISIBLE_DEVICES`
- (nvml_index: int)        -> CudaDevice
- (nvml_index: (int, int)) -> CudaMigDevice

Note: This method takes exactly one non-None argument.

Returns: Union[CudaDevice, CudaMigDevice]

A CudaDevice instance or a CudaMigDevice instance.

Raises:
  • TypeError – If the number of non-None arguments is not exactly 1.

  • TypeError – If the given NVML index is a tuple but does not consist of two integers.

  • RuntimeError – If the index is out of range for the given CUDA_VISIBLE_DEVICES environment variable.

__init__(cuda_index: int | None = None, *, nvml_index: int | tuple[int, int] | None = None, uuid: str | None = None) None[source]

Initialize the instance created by __new__().

Raises:
__repr__() str[source]

Return a string representation of the CUDA device.

__reduce__() tuple[type[CudaDevice], tuple[int]][source]

Return state information for pickling.

as_snapshot() Snapshot[source]

Return a one-time snapshot of the device.

The attributes are defined in SNAPSHOT_KEYS.

class nvitop.CudaMigDevice(cuda_index: int | None = None, *, nvml_index: int | tuple[int, int] | None = None, uuid: str | None = None)[source]

Bases: CudaDevice, MigDevice

Class for CUDA devices that are MIG devices.

nvitop.parse_cuda_visible_devices(cuda_visible_devices: str | None = <VALUE OMITTED>) list[int] | list[tuple[int, int]][source]

Parse the given CUDA_VISIBLE_DEVICES value into a list of NVML device indices.

This function is aliased by Device.parse_cuda_visible_devices().

Note

The result could be empty if the CUDA_VISIBLE_DEVICES environment variable is invalid.

See also: CUDA Device Enumeration.
Parameters:

cuda_visible_devices (Optional[str]) – The value of the CUDA_VISIBLE_DEVICES variable. If not given, the value from the environment will be used. If explicitly given by None, the CUDA_VISIBLE_DEVICES environment variable will be unset before parsing.

Returns: Union[List[int], List[Tuple[int, int]]]

A list of int (physical device) or a list of tuple of two integers (MIG device) for the corresponding real device indices.

Examples

>>> import os
>>> os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
>>> os.environ['CUDA_VISIBLE_DEVICES'] = '6,5'
>>> parse_cuda_visible_devices()       # parse the `CUDA_VISIBLE_DEVICES` environment variable to NVML indices
[6, 5]
>>> parse_cuda_visible_devices('0,4')  # pass the `CUDA_VISIBLE_DEVICES` value explicitly
[0, 4]
>>> parse_cuda_visible_devices('GPU-18ef14e9,GPU-849d5a8d')  # accept abbreviated UUIDs
[5, 6]
>>> parse_cuda_visible_devices(None)   # get all devices when the `CUDA_VISIBLE_DEVICES` environment variable is unset
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> parse_cuda_visible_devices('MIG-d184f67c-c95f-5ef2-a935-195bd0094fbd')           # MIG device support (MIG UUID)
[(0, 0)]
>>> parse_cuda_visible_devices('MIG-GPU-3eb79704-1571-707c-aee8-f43ce747313d/13/0')  # MIG device support (GPU UUID)
[(0, 1)]
>>> parse_cuda_visible_devices('MIG-GPU-3eb79704/13/0')                              # MIG device support (abbreviated GPU UUID)
[(0, 1)]
>>> parse_cuda_visible_devices('')     # empty string
[]
>>> parse_cuda_visible_devices('0,0')  # invalid `CUDA_VISIBLE_DEVICES` (duplicate device ordinal)
[]
>>> parse_cuda_visible_devices('16')   # invalid `CUDA_VISIBLE_DEVICES` (device ordinal out of range)
[]
nvitop.normalize_cuda_visible_devices(cuda_visible_devices: str | None = <VALUE OMITTED>) str[source]

Parse the given CUDA_VISIBLE_DEVICES value and convert it into a comma-separated string of UUIDs.

This function is aliased by Device.normalize_cuda_visible_devices().

Note

The result could be empty string if the CUDA_VISIBLE_DEVICES environment variable is invalid.

See also: CUDA Device Enumeration.
Parameters:

cuda_visible_devices (Optional[str]) – The value of the CUDA_VISIBLE_DEVICES variable. If not given, the value from the environment will be used. If explicitly given by None, the CUDA_VISIBLE_DEVICES environment variable will be unset before parsing.

Returns: str

The comma-separated string (GPU UUIDs) of the CUDA_VISIBLE_DEVICES environment variable.

Examples

>>> import os
>>> os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
>>> os.environ['CUDA_VISIBLE_DEVICES'] = '6,5'
>>> normalize_cuda_visible_devices()        # normalize the `CUDA_VISIBLE_DEVICES` environment variable to UUID strings
'GPU-849d5a8d-610e-eeea-1fd4-81ff44a23794,GPU-18ef14e9-dec6-1d7e-1284-3010c6ce98b1'
>>> normalize_cuda_visible_devices('4')     # pass the `CUDA_VISIBLE_DEVICES` value explicitly
'GPU-96de99c9-d68f-84c8-424c-7c75e59cc0a0'
>>> normalize_cuda_visible_devices('GPU-18ef14e9,GPU-849d5a8d')  # normalize abbreviated UUIDs
'GPU-18ef14e9-dec6-1d7e-1284-3010c6ce98b1,GPU-849d5a8d-610e-eeea-1fd4-81ff44a23794'
>>> normalize_cuda_visible_devices(None)    # get all devices when the `CUDA_VISIBLE_DEVICES` environment variable is unset
'GPU-<GPU0-UUID>,GPU-<GPU1-UUID>,...'  # all GPU UUIDs
>>> normalize_cuda_visible_devices('MIG-d184f67c-c95f-5ef2-a935-195bd0094fbd')           # MIG device support (MIG UUID)
'MIG-d184f67c-c95f-5ef2-a935-195bd0094fbd'
>>> normalize_cuda_visible_devices('MIG-GPU-3eb79704-1571-707c-aee8-f43ce747313d/13/0')  # MIG device support (GPU UUID)
'MIG-37b51284-1df4-5451-979d-3231ccb0822e'
>>> normalize_cuda_visible_devices('MIG-GPU-3eb79704/13/0')                              # MIG device support (abbreviated GPU UUID)
'MIG-37b51284-1df4-5451-979d-3231ccb0822e'
>>> normalize_cuda_visible_devices('')      # empty string
''
>>> normalize_cuda_visible_devices('0,0')   # invalid `CUDA_VISIBLE_DEVICES` (duplicate device ordinal)
''
>>> normalize_cuda_visible_devices('16')    # invalid `CUDA_VISIBLE_DEVICES` (device ordinal out of range)
''
class nvitop.HostProcess(pid: int | None = None)[source]

Bases: Process

Represent an OS process with the given PID.

If PID is omitted, the current process PID (os.getpid()) is used. The instance will be cached for the lifetime of the process.

Examples

>>> HostProcess()  # the current process
HostProcess(pid=12345, name='python3', status='running', started='00:55:43')
>>> p1 = HostProcess(12345)
>>> p2 = HostProcess(12345)
>>> p1 is p2                 # the same instance
True
>>> import copy
>>> copy.deepcopy(p1) is p1  # the same instance
True
>>> p = HostProcess(pid=12345)
>>> p.cmdline()
['python3', '-c', 'import IPython; IPython.terminal.ipapp.launch_new_instance()']
>>> p.command()  # the result is in shell-escaped format
'python3 -c "import IPython; IPython.terminal.ipapp.launch_new_instance()"'
>>> p.as_snapshot()
HostProcessSnapshot(
    real=HostProcess(pid=12345, name='python3', status='running', started='00:55:43'),
    cmdline=['python3', '-c', 'import IPython; IPython.terminal.ipapp.launch_new_instance()'],
    command='python3 -c "import IPython; IPython.terminal.ipapp.launch_new_instance()"',
    connections=[],
    cpu_percent=0.3,
    cpu_times=pcputimes(user=2.180019456, system=0.18424464, children_user=0.0, children_system=0.0),
    create_time=1656608143.31,
    cwd='/home/panxuehai',
    environ={...},
    ...
)
INSTANCE_LOCK: threading.RLock = <unlocked _thread.RLock object owner=0 count=0>
INSTANCES: WeakValueDictionary[int, HostProcess] = <WeakValueDictionary>
static __new__(cls, pid: int | None = None) Self[source]

Return the cached instance of HostProcess.
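The cached-instance behavior shown above (p1 is p2) can be implemented with a WeakValueDictionary keyed by PID, so instances are shared while referenced but garbage-collected once unused. A simplified sketch of the pattern (a hypothetical CachedProcess class, not nvitop's actual implementation):

```python
import threading
from weakref import WeakValueDictionary


class CachedProcess:
    _lock = threading.RLock()
    _instances: WeakValueDictionary = WeakValueDictionary()

    def __new__(cls, pid: int):
        with cls._lock:  # guard against concurrent construction
            instance = cls._instances.get(pid)
            if instance is None:
                instance = super().__new__(cls)
                instance.pid = pid
                cls._instances[pid] = instance  # weak reference: no leak
            return instance


p1 = CachedProcess(12345)
p2 = CachedProcess(12345)
print(p1 is p2)  # -> True
```

Because the cache holds only weak references, dropping all strong references to a process object lets it be collected rather than pinned forever.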

__init__(pid: int | None = None) None[source]

Initialize the instance.

__repr__() str[source]

Return a string representation of the process.

__reduce__() tuple[type[HostProcess], tuple[int]][source]

Return state information for pickling.

username() str[source]

The name of the user that owns the process.

On UNIX this is calculated using the real process uid.

Raises:
cmdline() list[str][source]

The command line this process has been called with.

Raises:
command() str[source]

Return a shell-escaped string from command line arguments.

Raises:
running_time() timedelta[source]

The elapsed time this process has been running in datetime.timedelta.

Raises:
running_time_human() str[source]

The elapsed time this process has been running in human readable format.

Raises:
running_time_in_seconds() float[source]

The elapsed time this process has been running in seconds.

Raises:
elapsed_time() timedelta

The elapsed time this process has been running in datetime.timedelta.

Raises:
elapsed_time_human() str

The elapsed time this process has been running in human readable format.

Raises:
elapsed_time_in_seconds() float

The elapsed time this process has been running in seconds.

Raises:
rss_memory() int[source]

The used resident set size (RSS) memory of the process in bytes.

Raises:
parent() HostProcess | None[source]

Return the parent process as a HostProcess instance or None if there is no parent.

Raises:
children(recursive: bool = False) list[HostProcess][source]

Return the children of this process as a list of HostProcess instances.

If recursive is True, return all the descendants.

Raises:
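With recursive=True the traversal covers the whole descendant subtree, not just direct children. A rough sketch of the recursive expansion over a parent-to-children mapping (hypothetical tree data, not psutil's implementation):

```python
def descendants(tree: dict[int, list[int]], pid: int, recursive: bool = False) -> list[int]:
    """Return the children of `pid`; all descendants if `recursive` is True."""
    children = list(tree.get(pid, []))
    if recursive:
        for child in tree.get(pid, []):
            children.extend(descendants(tree, child, recursive=True))
    return children


tree = {1: [10, 11], 10: [100, 101]}  # pid -> child pids
print(descendants(tree, 1))                  # -> [10, 11]
print(descendants(tree, 1, recursive=True))  # -> [10, 11, 100, 101]
```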
oneshot() Generator[None, None, None][source]

A utility context manager that considerably speeds up the retrieval of multiple pieces of process information at the same time.

Internally, different process info (e.g. name, ppid, uids, gids, …) may be fetched by the same routine, but only one piece of information is returned and the rest are discarded. When using this context manager, the internal routine is executed once (in the example below, for name()) and the other values are cached.

The cache is cleared when exiting the context manager block. The advice is to use this every time you retrieve more than one piece of information about the process.

Examples

>>> from nvitop import HostProcess
>>> p = HostProcess()
>>> with p.oneshot():
...     p.name()         # collect multiple info
...     p.cpu_times()    # return cached value
...     p.cpu_percent()  # return cached value
...     p.create_time()  # return cached value
as_snapshot(attrs: Iterable[str] | None = None, ad_value: Any | None = None) Snapshot[source]

Return a one-time snapshot of the process.

class nvitop.GpuProcess(pid: int | None, device: Device, *, gpu_memory: int | NaType | None = None, gpu_instance_id: int | NaType | None = None, compute_instance_id: int | NaType | None = None, type: str | NaType | None = None)[source]

Bases: object

Represent a process with the given PID running on the given GPU device.

The instance will be cached for the lifetime of the process.

The same host process can use multiple GPU devices. The GpuProcess instances representing the same PID on the host but different GPU devices are different.

INSTANCE_LOCK: threading.RLock = <unlocked _thread.RLock object owner=0 count=0>
INSTANCES: WeakValueDictionary[tuple[int, Device], GpuProcess] = <WeakValueDictionary>
static __new__(cls, pid: int | None, device: Device, *, gpu_memory: int | NaType | None = None, gpu_instance_id: int | NaType | None = None, compute_instance_id: int | NaType | None = None, type: str | NaType | None = None) Self[source]

Return the cached instance of GpuProcess.

__init__(pid: int | None, device: Device, *, gpu_memory: int | NaType | None = None, gpu_instance_id: int | NaType | None = None, compute_instance_id: int | NaType | None = None, type: str | NaType | None = None) None[source]

Initialize the instance returned by __new__().

__repr__() str[source]

Return a string representation of the GPU process.

__eq__(other: object) bool[source]

Test equality to other object.

__hash__() int[source]

Return a hash value of the GPU process.

__getattr__(name: str) Any | Callable[..., Any][source]

Get a member from the instance or fallback to the host process instance if missing.

Raises:
property pid: int

The process PID.

property host: HostProcess

The process instance running on the host.

property device: Device

The GPU device the process is running on.

The same host process can use multiple GPU devices. The GpuProcess instances representing the same PID on the host but different GPU devices are different.

gpu_instance_id() int | NaType[source]

The GPU instance ID of the MIG device, or nvitop.NA if not applicable.

compute_instance_id() int | NaType[source]

The compute instance ID of the MIG device, or nvitop.NA if not applicable.

gpu_memory() int | NaType[source]

The used GPU memory in bytes, or nvitop.NA if not applicable.

gpu_memory_human() str | NaType[source]

The used GPU memory in human readable format, or nvitop.NA if not applicable.
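The *_human() variants scale raw byte counts into short unit strings such as '11016MiB'. A minimal sketch of the MiB conversion consistent with that output (nvitop's actual formatter chooses units and precision adaptively, so this is an illustration, not its implementation):

```python
def bytes_to_mib(nbytes: int) -> str:
    """Render a raw byte count as a rounded MiB string."""
    return f'{round(nbytes / (1 << 20))}MiB'  # 1 MiB = 2**20 bytes


print(bytes_to_mib(11550654464))  # -> '11016MiB'
print(bytes_to_mib(1 << 20))      # -> '1MiB'
```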

gpu_memory_percent() float | NaType[source]

The percentage of used GPU memory by the process, or nvitop.NA if not applicable.

gpu_sm_utilization() int | NaType[source]

The utilization rate of SM (Streaming Multiprocessor), or nvitop.NA if not applicable.

gpu_memory_utilization() int | NaType[source]

The utilization rate of GPU memory bandwidth, or nvitop.NA if not applicable.

gpu_encoder_utilization() int | NaType[source]

The utilization rate of the encoder, or nvitop.NA if not applicable.

gpu_decoder_utilization() int | NaType[source]

The utilization rate of the decoder, or nvitop.NA if not applicable.

set_gpu_memory(value: int | NaType) None[source]

Set the used GPU memory in bytes.

set_gpu_utilization(gpu_sm_utilization: int | NaType | None = None, gpu_memory_utilization: int | NaType | None = None, gpu_encoder_utilization: int | NaType | None = None, gpu_decoder_utilization: int | NaType | None = None) None[source]

Set the GPU utilization rates.

update_gpu_status() int | NaType[source]

Update the GPU consumption status from a new NVML query.

property type: str | NaType

The type of the GPU context.

The type is one of the following:
  • 'C': compute context

  • 'G': graphics context

  • 'C+G': both compute context and graphics context

  • 'N/A': not applicable

is_running() bool[source]

Return whether this process is running.

status() str[source]

The process current status.

Raises:

Note

To return the fallback value rather than raise an exception, please use the context manager GpuProcess.failsafe(). See also take_snapshots() and failsafe().

create_time() float | NaType[source]

The process creation time as a floating point number expressed in seconds since the epoch.

Raises:

Note

To return the fallback value rather than raise an exception, please use the context manager GpuProcess.failsafe(). See also take_snapshots() and failsafe().

running_time() datetime.timedelta | NaType[source]

The elapsed time this process has been running in datetime.timedelta.

Raises:

Note

To return the fallback value rather than raise an exception, please use the context manager GpuProcess.failsafe(). See also take_snapshots() and failsafe().

running_time_human() str | NaType[source]

The elapsed time this process has been running in human readable format.

Raises:

Note

To return the fallback value rather than raise an exception, please use the context manager GpuProcess.failsafe(). See also take_snapshots() and failsafe().

running_time_in_seconds() float | NaType[source]

The elapsed time this process has been running in seconds.

Raises:

Note

To return the fallback value rather than raise an exception, please use the context manager GpuProcess.failsafe(). See also take_snapshots() and failsafe().

elapsed_time() datetime.timedelta | NaType

The elapsed time this process has been running in datetime.timedelta.

Raises:

Note

To return the fallback value rather than raise an exception, please use the context manager GpuProcess.failsafe(). See also take_snapshots() and failsafe().

elapsed_time_human() str | NaType

The elapsed time this process has been running in human readable format.

Raises:

Note

To return the fallback value rather than raise an exception, please use the context manager GpuProcess.failsafe(). See also take_snapshots() and failsafe().

elapsed_time_in_seconds() float | NaType

The elapsed time this process has been running in seconds.

Raises:

Note

To return the fallback value rather than raise an exception, please use the context manager GpuProcess.failsafe(). See also take_snapshots() and failsafe().

username() str | NaType[source]

The name of the user that owns the process.

Raises:

Note

To return the fallback value rather than raise an exception, please use the context manager GpuProcess.failsafe(). See also take_snapshots() and failsafe().

name() str | NaType[source]

The process name.

Raises:

Note

To return the fallback value rather than raise an exception, please use the context manager GpuProcess.failsafe(). See also take_snapshots() and failsafe().

cpu_percent() float | NaType[source]

Return a float representing the current process CPU utilization as a percentage.

Raises:

Note

To return the fallback value rather than raise an exception, please use the context manager GpuProcess.failsafe(). See also take_snapshots() and failsafe().

memory_percent() float | NaType[source]

Compare process RSS memory to total physical system memory and calculate process memory utilization as a percentage.

Raises:

Note

To return the fallback value rather than raise an exception, please use the context manager GpuProcess.failsafe(). See also take_snapshots() and failsafe().

host_memory_percent() float | NaType

Compare process RSS memory to total physical system memory and calculate process memory utilization as a percentage.

Raises:

Note

To return the fallback value rather than raise an exception, please use the context manager GpuProcess.failsafe(). See also take_snapshots() and failsafe().

host_memory() int | NaType[source]

The used resident set size (RSS) memory of the process in bytes.

Raises:

Note

To return the fallback value rather than raise an exception, please use the context manager GpuProcess.failsafe(). See also take_snapshots() and failsafe().

host_memory_human() str | NaType[source]

The used resident set size (RSS) memory of the process in human readable format.

Raises:

Note

To return the fallback value rather than raise an exception, please use the context manager GpuProcess.failsafe(). See also take_snapshots() and failsafe().

rss_memory() int | NaType

The used resident set size (RSS) memory of the process in bytes.

Raises:

Note

To return the fallback value rather than raise an exception, please use the context manager GpuProcess.failsafe(). See also take_snapshots() and failsafe().

cmdline() list[str][source]

The command line this process has been called with.

Raises:

Note

To return the fallback value rather than raise an exception, please use the context manager GpuProcess.failsafe(). See also take_snapshots() and failsafe().

command() str[source]

Return a shell-escaped string from command line arguments.

Raises:

Note

To return the fallback value rather than raise an exception, please use the context manager GpuProcess.failsafe(). See also take_snapshots() and failsafe().

host_snapshot() Snapshot[source]

Return a one-time snapshot of the host process.

as_snapshot(*, host_process_snapshot_cache: dict[int, Snapshot] | None = None) Snapshot[source]

Return a one-time snapshot of the process on the GPU device.

Note

To return the fallback value rather than raise an exception, please use the context manager GpuProcess.failsafe(). Also, consider using the batched version to take snapshots with GpuProcess.take_snapshots(), which caches the results and reduces redundant queries. See also take_snapshots() and failsafe().

classmethod take_snapshots(gpu_processes: Iterable[GpuProcess], *, failsafe: bool = False) list[Snapshot][source]

Take snapshots for a list of GpuProcess instances.

If failsafe is True and any method fails, the fallback value in auto_garbage_clean() will be used.

classmethod failsafe() Generator[None, None, None][source]

A context manager that enables fallback values for methods that fail.

Examples

>>> p = GpuProcess(pid=10000, device=Device(0))  # process does not exist
>>> p
GpuProcess(pid=10000, gpu_memory=N/A, type=N/A, device=PhysicalDevice(index=0, name="NVIDIA GeForce RTX 3070", total_memory=8192MiB), host=HostProcess(pid=10000, status='terminated'))
>>> p.cpu_percent()
Traceback (most recent call last):
    ...
NoSuchProcess: process no longer exists (pid=10000)
>>> # Failsafe to the fallback value instead of raising exceptions
... with GpuProcess.failsafe():
...     print('fallback:              {!r}'.format(p.cpu_percent()))
...     print('fallback (float cast): {!r}'.format(float(p.cpu_percent())))  # `nvitop.NA` can be cast to float or int
...     print('fallback (int cast):   {!r}'.format(int(p.cpu_percent())))    # `nvitop.NA` can be cast to float or int
fallback:              'N/A'
fallback (float cast): nan
fallback (int cast):   0
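What makes the fallback usable in numeric code is that nvitop.NA behaves like the string 'N/A' but additionally supports numeric casts (float(NA) is nan, int(NA) is 0), as the example above shows. A simplified sketch of such a sentinel (a hypothetical NaTypeSketch, not nvitop's actual NaType):

```python
import math


class NaTypeSketch(str):
    """A 'N/A' string that can also be cast to float (nan) or int (0)."""

    def __new__(cls):
        return super().__new__(cls, 'N/A')

    def __float__(self):
        return math.nan

    def __int__(self):
        return 0


NA = NaTypeSketch()
print(repr(NA))   # -> 'N/A'
print(float(NA))  # -> nan
print(int(NA))    # -> 0
```

Subclassing str keeps the sentinel printable and comparable as text while the dunder methods make numeric contexts degrade gracefully instead of raising.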
nvitop.command_join(cmdline: list[str]) str[source]

Return a shell-escaped string from command line arguments.
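The behavior of command_join() can be approximated with the standard library's shlex: quote each argument and join with spaces (nvitop's implementation may differ in edge cases, e.g. its choice of quoting style):

```python
import shlex


def command_join_sketch(cmdline: list[str]) -> str:
    """Join command-line arguments into one shell-escaped string."""
    return ' '.join(shlex.quote(arg) for arg in cmdline)


print(command_join_sketch(['python3', '-c', 'print("hello world")']))
# -> python3 -c 'print("hello world")'
```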

nvitop.take_snapshots(devices: Device | Iterable[Device] | None = None, *, gpu_processes: bool | GpuProcess | Iterable[GpuProcess] | None = None) SnapshotResult[source]

Retrieve the status of the requested devices and GPU processes.

Parameters:
  • devices (Optional[Union[Device, Iterable[Device]]]) – Requested devices for snapshots. If not given, the devices will be determined from the GPU processes: (1) all devices, if no GPU processes are given; (2) the devices used by the given GPU processes.

  • gpu_processes (Optional[Union[bool, GpuProcess, Iterable[GpuProcess]]]) – Requested GPU process snapshots. If not given, all GPU processes running on the requested devices will be returned. The GPU process snapshots can be suppressed by specifying gpu_processes=False.

Returns: SnapshotResult

A named tuple containing two lists of snapshots.

Note

If no arguments are specified, all devices and all GPU processes will be returned.

Examples

>>> from nvitop import take_snapshots, Device
>>> import os
>>> os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
>>> os.environ['CUDA_VISIBLE_DEVICES'] = '1,0'
>>> take_snapshots()  # equivalent to `take_snapshots(Device.all())`
SnapshotResult(
    devices=[
        PhysicalDeviceSnapshot(
            real=PhysicalDevice(index=0, ...),
            ...
        ),
        ...
    ],
    gpu_processes=[
        GpuProcessSnapshot(
            real=GpuProcess(pid=xxxxxx, device=PhysicalDevice(index=0, ...), ...),
            ...
        ),
        ...
    ]
)
>>> device_snapshots, gpu_process_snapshots = take_snapshots(Device.all())  # type: Tuple[List[DeviceSnapshot], List[GpuProcessSnapshot]]
>>> device_snapshots, _ = take_snapshots(gpu_processes=False)  # ignore process snapshots
>>> take_snapshots(Device.cuda.all())  # use CUDA device enumeration
SnapshotResult(
    devices=[
        CudaDeviceSnapshot(
            real=CudaDevice(cuda_index=0, physical_index=1, ...),
            ...
        ),
        CudaDeviceSnapshot(
            real=CudaDevice(cuda_index=1, physical_index=0, ...),
            ...
        ),
    ],
    gpu_processes=[
        GpuProcessSnapshot(
            real=GpuProcess(pid=xxxxxx, device=CudaDevice(cuda_index=0, ...), ...),
            ...
        ),
        ...
    ]
)
>>> take_snapshots(Device.cuda(1))  # <CUDA 1> only
SnapshotResult(
    devices=[
        CudaDeviceSnapshot(
            real=CudaDevice(cuda_index=1, physical_index=0, ...),
            ...
        )
    ],
    gpu_processes=[
        GpuProcessSnapshot(
            real=GpuProcess(pid=xxxxxx, device=CudaDevice(cuda_index=1, ...), ...),
            ...
        ),
        ...
    ]
)
nvitop.collect_in_background(on_collect: Callable[[dict[str, float]], bool], collector: ResourceMetricCollector | None = None, interval: float | None = None, *, on_start: Callable[[ResourceMetricCollector], None] | None = None, on_stop: Callable[[ResourceMetricCollector], None] | None = None, tag: str = 'metrics-daemon', start: bool = True) threading.Thread[source]

Start a background daemon thread that collects metrics and calls the callback function periodically.

See also ResourceMetricCollector.daemonize().

Parameters:
  • on_collect (Callable[[Dict[str, float]], bool]) – A callback function that will be called periodically. It takes a dictionary containing the resource metrics and returns a boolean indicating whether to continue monitoring.

  • collector (Optional[ResourceMetricCollector]) – A ResourceMetricCollector instance to collect metrics. If not given, it will collect metrics for all GPUs and the subprocesses of the current process.

  • interval (Optional[float]) – The collection interval. If not given, collector.interval is used.

  • on_start (Optional[Callable[[ResourceMetricCollector], None]]) – A function to initialize the daemon thread and collector.

  • on_stop (Optional[Callable[[ResourceMetricCollector], None]]) – A function that does any necessary cleanup after the daemon thread is stopped.

  • tag (str) – The tag prefix used for metrics results.

  • start (bool) – Whether to start the daemon thread on return.

Returns: threading.Thread

A daemon thread object.

Examples

logger = ...

def on_collect(metrics):  # will be called periodically
    if logger.is_closed():  # closed manually by user
        return False
    logger.log(metrics)
    return True

def on_stop(collector):  # will be called only once at stop
    if not logger.is_closed():
        logger.close()  # cleanup

# Record metrics to the logger in the background every 5 seconds.
# It will collect 5-second mean/min/max for each metric.
collect_in_background(
    on_collect,
    ResourceMetricCollector(Device.cuda.all()),
    interval=5.0,
    on_stop=on_stop,
)
class nvitop.ResourceMetricCollector(devices: Iterable[Device] | None = None, root_pids: Iterable[int] | None = None, interval: float = 1.0)[source]

Bases: object

A class for collecting resource metrics.

Parameters:
  • devices (Iterable[Device]) – Set of Device instances for logging. If not given, all physical devices on board will be used.

  • root_pids (Set[int]) – A set of PIDs; only the status of descendant processes of these PIDs on the GPUs will be collected. If not given, the PID of the current process will be used.

  • interval (float) – The snapshot interval for the background daemon thread.

Core methods:

collector.activate(tag='<tag>')  # alias: start
collector.deactivate()           # alias: stop
collector.reset(tag='<tag>')
collector.collect()

with collector(tag='<tag>'):
    ...

collector.daemonize(on_collect_fn)

Examples

>>> import os
>>> os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
>>> os.environ['CUDA_VISIBLE_DEVICES'] = '3,2,1,0'
>>> from nvitop import ResourceMetricCollector, Device
>>> collector = ResourceMetricCollector()                           # log all devices and descendant processes of the current process on the GPUs
>>> collector = ResourceMetricCollector(root_pids={1})              # log all devices and all GPU processes
>>> collector = ResourceMetricCollector(devices=Device.cuda.all())  # use the CUDA ordinal
>>> with collector(tag='<tag>'):
...     # Do something
...     collector.collect()  # -> Dict[str, float]
# key -> '<tag>/<scope>/<metric (unit)>/<mean/min/max>'
{
    '<tag>/host/cpu_percent (%)/mean': 8.967849777683456,
    '<tag>/host/cpu_percent (%)/min': 6.1,
    '<tag>/host/cpu_percent (%)/max': 28.1,
    ...,
    '<tag>/host/memory_percent (%)/mean': 21.5,
    '<tag>/host/swap_percent (%)/mean': 0.3,
    '<tag>/host/memory_used (GiB)/mean': 91.0136418208109,
    '<tag>/host/load_average (%) (1 min)/mean': 10.251427386878328,
    '<tag>/host/load_average (%) (5 min)/mean': 10.072539414569503,
    '<tag>/host/load_average (%) (15 min)/mean': 11.91126970422139,
    ...,
    '<tag>/cuda:0 (gpu:3)/memory_used (MiB)/mean': 3.875,
    '<tag>/cuda:0 (gpu:3)/memory_free (MiB)/mean': 11015.562499999998,
    '<tag>/cuda:0 (gpu:3)/memory_total (MiB)/mean': 11019.437500000002,
    '<tag>/cuda:0 (gpu:3)/memory_percent (%)/mean': 0.0,
    '<tag>/cuda:0 (gpu:3)/gpu_utilization (%)/mean': 0.0,
    '<tag>/cuda:0 (gpu:3)/memory_utilization (%)/mean': 0.0,
    '<tag>/cuda:0 (gpu:3)/fan_speed (%)/mean': 22.0,
    '<tag>/cuda:0 (gpu:3)/temperature (C)/mean': 25.0,
    '<tag>/cuda:0 (gpu:3)/power_usage (W)/mean': 19.11166264116916,
    ...,
    '<tag>/cuda:1 (gpu:2)/memory_used (MiB)/mean': 8878.875,
    ...,
    '<tag>/cuda:2 (gpu:1)/memory_used (MiB)/mean': 8182.875,
    ...,
    '<tag>/cuda:3 (gpu:0)/memory_used (MiB)/mean': 9286.875,
    ...,
    '<tag>/pid:12345/host/cpu_percent (%)/mean': 151.34342772112265,
    '<tag>/pid:12345/host/host_memory (MiB)/mean': 44749.72373447514,
    '<tag>/pid:12345/host/host_memory_percent (%)/mean': 8.675082352111717,
    '<tag>/pid:12345/host/running_time (min)': 336.23803206741576,
    '<tag>/pid:12345/cuda:1 (gpu:4)/gpu_memory (MiB)/mean': 8861.0,
    '<tag>/pid:12345/cuda:1 (gpu:4)/gpu_memory_percent (%)/mean': 80.4,
    '<tag>/pid:12345/cuda:1 (gpu:4)/gpu_memory_utilization (%)/mean': 6.711118172407917,
    '<tag>/pid:12345/cuda:1 (gpu:4)/gpu_sm_utilization (%)/mean': 48.23283397736476,
    ...,
    '<tag>/duration (s)': 7.247399162035435,
    '<tag>/timestamp': 1655909466.9981883
}
DEVICE_METRICS: ClassVar[list[tuple[str, str, float | int]]] = [('memory_used', 'memory_used (MiB)', 1048576), ('memory_free', 'memory_free (MiB)', 1048576), ('memory_total', 'memory_total (MiB)', 1048576), ('memory_percent', 'memory_percent (%)', 1.0), ('gpu_utilization', 'gpu_utilization (%)', 1.0), ('memory_utilization', 'memory_utilization (%)', 1.0), ('fan_speed', 'fan_speed (%)', 1.0), ('temperature', 'temperature (C)', 1.0), ('power_usage', 'power_usage (W)', 1000.0)]
PROCESS_METRICS: ClassVar[list[tuple[str, str | None, str, float | int]]] = [('cpu_percent', 'host', 'cpu_percent (%)', 1.0), ('host_memory', 'host', 'host_memory (MiB)', 1048576), ('host_memory_percent', 'host', 'host_memory_percent (%)', 1.0), ('running_time_in_seconds', 'host', 'running_time (min)', 60.0), ('gpu_memory', None, 'gpu_memory (MiB)', 1048576), ('gpu_memory_percent', None, 'gpu_memory_percent (%)', 1.0), ('gpu_memory_utilization', None, 'gpu_memory_utilization (%)', 1.0), ('gpu_sm_utilization', None, 'gpu_sm_utilization (%)', 1.0)]
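Each entry in DEVICE_METRICS is an (attribute, label, coefficient) triple (PROCESS_METRICS adds a scope field): the raw value returned by the attribute is divided by the coefficient to produce the labeled unit, e.g. bytes / 1048576 → MiB and milliwatts / 1000 → W. A sketch of applying a couple of these triples to hypothetical raw readings:

```python
# A subset of DEVICE_METRICS: (attribute, label, coefficient)
DEVICE_METRICS = [
    ('memory_used', 'memory_used (MiB)', 1048576),
    ('power_usage', 'power_usage (W)', 1000.0),
]

# Hypothetical raw readings: bytes for memory, milliwatts for power
raw = {'memory_used': 4063232, 'power_usage': 19111}

metrics = {label: raw[attr] / coefficient for attr, label, coefficient in DEVICE_METRICS}
print(metrics)
# -> {'memory_used (MiB)': 3.875, 'power_usage (W)': 19.111}
```

The collector then aggregates such per-snapshot values into the mean/min/max keys shown in the example above.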
__init__(devices: Iterable[Device] | None = None, root_pids: Iterable[int] | None = None, interval: float = 1.0) None[source]

Initialize the resource metric collector.

interval: float
devices: list[Device]
all_devices: list[Device]
leaf_devices: list[Device]
root_pids: set[int]
activate(tag: str) ResourceMetricCollector[source]

Start a new metric collection with the given tag.

Parameters:

tag (str) – The name of the new metric collection. The tag will be used to identify the metric collection. It must be a unique string.

Examples

>>> collector = ResourceMetricCollector()
>>> collector.activate(tag='train')  # key prefix -> 'train'
>>> collector.activate(tag='batch')  # key prefix -> 'train/batch'
>>> collector.deactivate()           # key prefix -> 'train'
>>> collector.deactivate()           # the collector has been stopped
>>> collector.activate(tag='test')   # key prefix -> 'test'
start(tag: str) ResourceMetricCollector

Start a new metric collection with the given tag.

Parameters:

tag (str) – The name of the new metric collection. The tag will be used to identify the metric collection. It must be a unique string.

Examples

>>> collector = ResourceMetricCollector()
>>> collector.activate(tag='train')  # key prefix -> 'train'
>>> collector.activate(tag='batch')  # key prefix -> 'train/batch'
>>> collector.deactivate()           # key prefix -> 'train'
>>> collector.deactivate()           # the collector has been stopped
>>> collector.activate(tag='test')   # key prefix -> 'test'
deactivate(tag: str | None = None) ResourceMetricCollector[source]

Stop the current collection with the given tag and remove all sub-tags.

If the tag is not specified, deactivate the current active collection. For nested collections, the sub-collections will be deactivated as well.

Parameters:

tag (Optional[str]) – The tag to deactivate. If None, the current active collection will be used.

stop(tag: str | None = None) ResourceMetricCollector

Stop the current collection with the given tag and remove all sub-tags.

If the tag is not specified, deactivate the current active collection. For nested collections, the sub-collections will be deactivated as well.

Parameters:

tag (Optional[str]) – The tag to deactivate. If None, the current active collection will be used.

context(tag: str) Generator[ResourceMetricCollector, None, None][source]

A context manager for starting and stopping resource metric collection.

Parameters:

tag (str) – The name of the new metric collection. The tag will be used to identify the metric collection. It must be a unique string.

Examples

>>> collector = ResourceMetricCollector()
>>> with collector.context(tag='train'):  # key prefix -> 'train'
...     # Do something
...     collector.collect()  # -> Dict[str, float]
__call__(tag: str) Generator[ResourceMetricCollector, None, None]

A context manager for starting and stopping resource metric collection.

Parameters:

tag (str) – The name of the new metric collection. The tag will be used to identify the metric collection. It must be a unique string.

Examples

>>> collector = ResourceMetricCollector()
>>> with collector.context(tag='train'):  # key prefix -> 'train'
...     # Do something
...     collector.collect()  # -> Dict[str, float]
clear(tag: str | None = None) None[source]

Reset the metric collection with the given tag.

If the tag is not specified, reset the current active collection. For nested collections, the sub-collections will be reset as well.

Parameters:

tag (Optional[str]) – The tag to reset. If None, the current active collection will be reset.

Examples

>>> collector = ResourceMetricCollector()
>>> with collector(tag='train'):          # key prefix -> 'train'
...     time.sleep(5.0)
...     collector.collect()               # metrics within the 5.0s interval
...
...     time.sleep(5.0)
...     collector.collect()               # metrics within the cumulative 10.0s interval
...
...     collector.clear()                 # clear the active collection
...     time.sleep(5.0)
...     collector.collect()               # metrics within the 5.0s interval
...
...     with collector(tag='batch'):      # key prefix -> 'train/batch'
...         collector.clear(tag='train')  # clear both 'train' and 'train/batch'
collect() dict[str, float][source]

Get the average resource consumption during collection.
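The docs note elsewhere that the collector reports mean/min/max for each metric over the collection interval. The sketch below shows how such summary entries might be assembled from buffered samples; the key layout and the `summarize` helper are illustrative assumptions, not nvitop's exact internals:

```python
import statistics

def summarize(prefix, samples):
    """Reduce buffered per-metric samples to mean/min/max entries."""
    result = {}
    for name, values in samples.items():
        result[f'{prefix}/{name}/mean'] = statistics.fmean(values)
        result[f'{prefix}/{name}/min'] = min(values)
        result[f'{prefix}/{name}/max'] = max(values)
    return result

samples = {'gpu_utilization (%)': [40.0, 60.0, 80.0]}
print(summarize('train', samples))
# {'train/gpu_utilization (%)/mean': 60.0,
#  'train/gpu_utilization (%)/min': 40.0,
#  'train/gpu_utilization (%)/max': 80.0}
```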

daemonize(on_collect: Callable[[dict[str, float]], bool], interval: float | None = None, *, on_start: Callable[[ResourceMetricCollector], None] | None = None, on_stop: Callable[[ResourceMetricCollector], None] | None = None, tag: str = 'metrics-daemon', start: bool = True) threading.Thread[source]

Start a background daemon thread that collects metrics and calls the callback function periodically.

See also collect_in_background().

Parameters:
  • on_collect (Callable[[Dict[str, float]], bool]) – A callback function that will be called periodically. It takes a dictionary containing the resource metrics and returns a boolean indicating whether to continue monitoring.

  • interval (Optional[float]) – The collect interval. If not given, use collector.interval.

  • on_start (Optional[Callable[[ResourceMetricCollector], None]]) – A function to initialize the daemon thread and collector.

  • on_stop (Optional[Callable[[ResourceMetricCollector], None]]) – A function that does any necessary cleanup after the daemon thread is stopped.

  • tag (str) – The tag prefix used for metrics results.

  • start (bool) – Whether to start the daemon thread on return.

Returns: threading.Thread

A daemon thread object.

Examples

logger = ...

def on_collect(metrics):  # will be called periodically
    if logger.is_closed():  # closed manually by user
        return False
    logger.log(metrics)
    return True

def on_stop(collector):  # will be called only once at stop
    if not logger.is_closed():
        logger.close()  # cleanup

# Record metrics to the logger in the background every 5 seconds.
# It will collect 5-second mean/min/max for each metric.
ResourceMetricCollector(Device.cuda.all()).daemonize(
    on_collect,
    interval=5.0,
    on_stop=on_stop,
)
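The daemonize pattern — a daemon thread that calls the callback periodically until it returns False — can be sketched in plain Python. The stub below stands in for ResourceMetricCollector and all names are illustrative:

```python
import threading
import time

def daemonize(collect, on_collect, interval):
    """Run on_collect(collect()) in a daemon thread until it returns False."""
    def loop():
        while on_collect(collect()):
            time.sleep(interval)

    thread = threading.Thread(target=loop, daemon=True)
    thread.start()
    return thread

collected = []

def on_collect(metrics):       # called once per interval
    collected.append(metrics)
    return len(collected) < 3  # returning False stops the daemon

# A stub collect() standing in for ResourceMetricCollector.collect().
thread = daemonize(lambda: {'tick': time.time()}, on_collect, interval=0.01)
thread.join(timeout=5.0)
print(len(collected))  # 3
```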
__del__() None[source]

Clean up the daemon thread on destruction.

take_snapshots() SnapshotResult[source]

Take snapshots of the current resource metrics and update the metric buffer.

nvitop.bytes2human(b: int |