nvitop.device module
- Device: Live class of the GPU devices, different from the device snapshots.
- PhysicalDevice: Class for physical devices.
- MigDevice: Class for MIG devices.
- CudaDevice: Class for devices enumerated over the CUDA ordinal.
- CudaMigDevice: Class for CUDA devices that are MIG devices.
- parse_cuda_visible_devices: Parse the given CUDA_VISIBLE_DEVICES value into a list of NVML device indices.
- normalize_cuda_visible_devices: Parse the given CUDA_VISIBLE_DEVICES value and convert it into a comma-separated string of UUIDs.
The live classes for GPU devices.
The core classes are Device and CudaDevice (also aliased as Device.cuda).
The type of the instance returned by Class(args) depends on the given arguments.
Device()
returns:
- (index: int) -> PhysicalDevice
- (index: (int, int)) -> MigDevice
- (uuid: str) -> Union[PhysicalDevice, MigDevice] # depending on the UUID value
- (bus_id: str) -> PhysicalDevice
CudaDevice()
returns:
- (cuda_index: int) -> Union[CudaDevice, CudaMigDevice] # depending on `CUDA_VISIBLE_DEVICES`
- (uuid: str) -> Union[CudaDevice, CudaMigDevice] # depending on `CUDA_VISIBLE_DEVICES`
- (nvml_index: int) -> CudaDevice
- (nvml_index: (int, int)) -> CudaMigDevice
Examples
>>> from nvitop import Device, CudaDevice
>>> Device.driver_version() # version of the installed NVIDIA display driver
'470.129.06'
>>> Device.count() # number of NVIDIA GPUs in the system
10
>>> Device.all() # all physical devices in the system
[
PhysicalDevice(index=0, ...),
PhysicalDevice(index=1, ...),
...
]
>>> nvidia0 = Device(index=0) # -> PhysicalDevice
>>> mig10 = Device(index=(1, 0)) # -> MigDevice
>>> nvidia2 = Device(uuid='GPU-xxxxxx') # -> PhysicalDevice
>>> mig30 = Device(uuid='MIG-xxxxxx') # -> MigDevice
>>> nvidia0.memory_free() # total free memory in bytes
11550654464
>>> nvidia0.memory_free_human() # total free memory in human readable format
'11016MiB'
>>> nvidia2.as_snapshot()             # takes a one-time snapshot of the device
PhysicalDeviceSnapshot(
real=PhysicalDevice(index=2, ...),
...
)
>>> import os
>>> os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
>>> os.environ['CUDA_VISIBLE_DEVICES'] = '3,2,1,0'
>>> CudaDevice.count() # number of NVIDIA GPUs visible to CUDA applications
4
>>> Device.cuda.count() # use alias in class `Device`
4
>>> CudaDevice.all() # all CUDA visible devices (or `Device.cuda.all()`)
[
CudaDevice(cuda_index=0, nvml_index=3, ...),
CudaDevice(cuda_index=1, nvml_index=2, ...),
...
]
>>> cuda0 = CudaDevice(cuda_index=0) # use CUDA ordinal (or `Device.cuda(0)`)
>>> cuda1 = CudaDevice(nvml_index=2) # use NVML ordinal
>>> cuda2 = CudaDevice(uuid='GPU-xxxxxx') # use UUID string
>>> cuda0.memory_free() # total free memory in bytes
11550654464
>>> cuda0.memory_free_human() # total free memory in human readable format
'11016MiB'
>>> cuda1.as_snapshot()             # takes a one-time snapshot of the device
CudaDeviceSnapshot(
real=CudaDevice(cuda_index=1, nvml_index=2, ...),
...
)
- class nvitop.Device(index: int | tuple[int, int] | str | None = None, *, uuid: str | None = None, bus_id: str | None = None)[source]
Bases:
object
Live class of the GPU devices, different from the device snapshots.
Device.__new__()
returns different types depending on the given arguments.
- (index: int)        -> PhysicalDevice
- (index: (int, int)) -> MigDevice
- (uuid: str)         -> Union[PhysicalDevice, MigDevice] # depending on the UUID value
- (bus_id: str)       -> PhysicalDevice
Examples
>>> Device.driver_version()              # version of the installed NVIDIA display driver
'470.129.06'
>>> Device.count()                       # number of NVIDIA GPUs in the system
10
>>> Device.all()                         # all physical devices in the system
[
    PhysicalDevice(index=0, ...),
    PhysicalDevice(index=1, ...),
    ...
]
>>> nvidia0 = Device(index=0)            # -> PhysicalDevice
>>> mig10 = Device(index=(1, 0))         # -> MigDevice
>>> nvidia2 = Device(uuid='GPU-xxxxxx')  # -> PhysicalDevice
>>> mig30 = Device(uuid='MIG-xxxxxx')    # -> MigDevice
>>> nvidia0.memory_free()                # total free memory in bytes
11550654464
>>> nvidia0.memory_free_human()          # total free memory in human readable format
'11016MiB'
>>> nvidia2.as_snapshot()                # takes a one-time snapshot of the device
PhysicalDeviceSnapshot(
    real=PhysicalDevice(index=2, ...),
    ...
)
- Raises:
libnvml.NVMLError_LibraryNotFound – If the NVML library cannot be found, usually because the NVIDIA driver is not installed.
libnvml.NVMLError_DriverNotLoaded – If the NVIDIA driver is not loaded.
libnvml.NVMLError_LibRmVersionMismatch – If RM detects a driver/library version mismatch, usually after upgrading the NVIDIA driver without reloading the kernel module.
libnvml.NVMLError_NotFound – If the device is not found for the given NVML identifier.
libnvml.NVMLError_InvalidArgument – If the device index is out of range.
TypeError – If the number of non-None arguments is not exactly 1.
TypeError – If the given index is a tuple that does not consist of two integers.
- UUID_PATTERN: re.Pattern = re.compile('^ # full match\n (?:(?P<MigMode>MIG)-)? # prefix for MIG UUID\n (?:(?P<GpuUuid>GPU)-)? # prefix for GPU UUID\n (?, re.VERBOSE)
- GPU_PROCESS_CLASS
alias of
GpuProcess
- cuda
alias of
CudaDevice
- classmethod is_available() bool [source]
Test whether there are any devices and the NVML library is successfully loaded.
- static driver_version() str | NaType [source]
The version of the installed NVIDIA display driver. This is an alphanumeric string.
Command line equivalent:
nvidia-smi --id=0 --format=csv,noheader,nounits --query-gpu=driver_version
- Raises:
libnvml.NVMLError_LibraryNotFound – If the NVML library cannot be found, usually because the NVIDIA driver is not installed.
libnvml.NVMLError_DriverNotLoaded – If the NVIDIA driver is not loaded.
libnvml.NVMLError_LibRmVersionMismatch – If RM detects a driver/library version mismatch, usually after upgrading the NVIDIA driver without reloading the kernel module.
- static cuda_driver_version() str | NaType [source]
The maximum CUDA version supported by the NVIDIA display driver. This is an alphanumeric string.
This can be different from the version of the CUDA Runtime. See also cuda_runtime_version().
- Returns: Union[str, NaType]
The maximum CUDA version supported by the NVIDIA display driver.
- Raises:
libnvml.NVMLError_LibraryNotFound – If the NVML library cannot be found, usually because the NVIDIA driver is not installed.
libnvml.NVMLError_DriverNotLoaded – If the NVIDIA driver is not loaded.
libnvml.NVMLError_LibRmVersionMismatch – If RM detects a driver/library version mismatch, usually after upgrading the NVIDIA driver without reloading the kernel module.
- static max_cuda_version() str | NaType
The maximum CUDA version supported by the NVIDIA display driver. This is an alphanumeric string.
This can be different from the version of the CUDA Runtime. See also cuda_runtime_version().
- Returns: Union[str, NaType]
The maximum CUDA version supported by the NVIDIA display driver.
- Raises:
libnvml.NVMLError_LibraryNotFound – If the NVML library cannot be found, usually because the NVIDIA driver is not installed.
libnvml.NVMLError_DriverNotLoaded – If the NVIDIA driver is not loaded.
libnvml.NVMLError_LibRmVersionMismatch – If RM detects a driver/library version mismatch, usually after upgrading the NVIDIA driver without reloading the kernel module.
- static cuda_runtime_version() str | NaType [source]
The CUDA Runtime version. This is an alphanumeric string.
This can be different from the CUDA driver version. See also cuda_driver_version().
- Returns: Union[str, NaType]
The CUDA Runtime version, or
nvitop.NA
when no CUDA Runtime is available or no CUDA-capable devices are present.
- static cudart_version() str | NaType
The CUDA Runtime version. This is an alphanumeric string.
This can be different from the CUDA driver version. See also cuda_driver_version().
- Returns: Union[str, NaType]
The CUDA Runtime version, or
nvitop.NA
when no CUDA Runtime is available or no CUDA-capable devices are present.
- classmethod count() int [source]
The number of NVIDIA GPUs in the system.
Command line equivalent:
nvidia-smi --id=0 --format=csv,noheader,nounits --query-gpu=count
- Raises:
libnvml.NVMLError_LibraryNotFound – If the NVML library cannot be found, usually because the NVIDIA driver is not installed.
libnvml.NVMLError_DriverNotLoaded – If the NVIDIA driver is not loaded.
libnvml.NVMLError_LibRmVersionMismatch – If RM detects a driver/library version mismatch, usually after upgrading the NVIDIA driver without reloading the kernel module.
- classmethod all() list[PhysicalDevice] [source]
Return a list of all physical devices in the system.
- classmethod from_indices(indices: int | Iterable[int | tuple[int, int]] | None = None) list[PhysicalDevice | MigDevice] [source]
Return a list of devices of the given indices.
- Parameters:
indices (Iterable[Union[int, Tuple[int, int]]]) – Indices of the devices. For each index, get PhysicalDevice for a single int and MigDevice for a tuple (int, int). That is:
- (int)        -> PhysicalDevice
- ((int, int)) -> MigDevice
- Returns: List[Union[PhysicalDevice, MigDevice]]
A list of
PhysicalDevice
and/orMigDevice
instances of the given indices.
- Raises:
libnvml.NVMLError_LibraryNotFound – If the NVML library cannot be found, usually because the NVIDIA driver is not installed.
libnvml.NVMLError_DriverNotLoaded – If the NVIDIA driver is not loaded.
libnvml.NVMLError_LibRmVersionMismatch – If RM detects a driver/library version mismatch, usually after upgrading the NVIDIA driver without reloading the kernel module.
libnvml.NVMLError_NotFound – If the device is not found for the given NVML identifier.
libnvml.NVMLError_InvalidArgument – If the device index is out of range.
- static from_cuda_visible_devices() list[CudaDevice] [source]
Return a list of all CUDA visible devices.
The CUDA ordinal will be enumerated from the CUDA_VISIBLE_DEVICES environment variable.
Note
The result could be empty if the CUDA_VISIBLE_DEVICES environment variable is invalid.
- See also the CUDA Device Enumeration documentation.
- Returns: List[CudaDevice]
A list of
CudaDevice
instances.
- static from_cuda_indices(cuda_indices: int | Iterable[int] | None = None) list[CudaDevice] [source]
Return a list of CUDA devices of the given CUDA indices.
The CUDA ordinal will be enumerated from the CUDA_VISIBLE_DEVICES environment variable.
- See also the CUDA Device Enumeration documentation.
- Parameters:
cuda_indices (Iterable[int]) – The indices of the GPUs in CUDA ordinal. If not given, returns all visible CUDA devices.
- Returns: List[CudaDevice]
A list of
CudaDevice
of the given CUDA indices.
- Raises:
libnvml.NVMLError_LibraryNotFound – If the NVML library cannot be found, usually because the NVIDIA driver is not installed.
libnvml.NVMLError_DriverNotLoaded – If the NVIDIA driver is not loaded.
libnvml.NVMLError_LibRmVersionMismatch – If RM detects a driver/library version mismatch, usually after upgrading the NVIDIA driver without reloading the kernel module.
RuntimeError – If the index is out of range for the given
CUDA_VISIBLE_DEVICES
environment variable.
- static parse_cuda_visible_devices(cuda_visible_devices: str | None = <VALUE OMITTED>) list[int] | list[tuple[int, int]] [source]
Parse the given CUDA_VISIBLE_DEVICES value into a list of NVML device indices.
This is an alias of parse_cuda_visible_devices().
Note
The result could be empty if the CUDA_VISIBLE_DEVICES environment variable is invalid.
- See also the CUDA Device Enumeration documentation.
- Parameters:
cuda_visible_devices (Optional[str]) – The value of the CUDA_VISIBLE_DEVICES variable. If not given, the value from the environment will be used. If explicitly given as None, the CUDA_VISIBLE_DEVICES environment variable will be unset before parsing.
- Returns: Union[List[int], List[Tuple[int, int]]]
A list of int (physical device) or a list of tuples of two integers (MIG device) for the corresponding real device indices.
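As a rough illustration of the parsing rules (not nvitop's actual implementation, which also resolves `GPU-`/`MIG-` UUID prefixes against the installed devices), a minimal sketch that maps a CUDA_VISIBLE_DEVICES string of integer ordinals to NVML indices might look like:

```python
# Minimal sketch of CUDA_VISIBLE_DEVICES parsing for integer ordinals only.
# The real parser also accepts `GPU-`/`MIG-` UUID prefixes; like the CUDA
# runtime, enumeration stops at the first invalid or out-of-range entry.
def parse_visible_devices(value: str, device_count: int) -> list[int]:
    indices: list[int] = []
    for token in value.split(','):
        token = token.strip()
        if not token.isdigit():
            break  # everything from the first invalid token onward is ignored
        index = int(token)
        if index >= device_count:
            break  # an out-of-range ordinal also terminates enumeration
        indices.append(index)
    return indices

print(parse_visible_devices('3,2,1,0', device_count=4))  # [3, 2, 1, 0]
print(parse_visible_devices('0,abc,1', device_count=4))  # [0]
```

Note how the second call returns only `[0]`: tokens after the first invalid entry are discarded rather than skipped, mirroring the CUDA runtime's behavior.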
- static normalize_cuda_visible_devices(cuda_visible_devices: str | None = <VALUE OMITTED>) str [source]
Parse the given CUDA_VISIBLE_DEVICES value and convert it into a comma-separated string of UUIDs.
This is an alias of normalize_cuda_visible_devices().
Note
The result could be an empty string if the CUDA_VISIBLE_DEVICES environment variable is invalid.
- See also the CUDA Device Enumeration documentation.
- Parameters:
cuda_visible_devices (Optional[str]) – The value of the CUDA_VISIBLE_DEVICES variable. If not given, the value from the environment will be used. If explicitly given as None, the CUDA_VISIBLE_DEVICES environment variable will be unset before parsing.
- Returns: str
The comma-separated string (GPU UUIDs) of the
CUDA_VISIBLE_DEVICES
environment variable.
- static __new__(cls, index: int | tuple[int, int] | str | None = None, *, uuid: str | None = None, bus_id: str | None = None) Self [source]
Create a new instance of Device.
The type of the result is determined by the given argument.
- (index: int)        -> PhysicalDevice
- (index: (int, int)) -> MigDevice
- (uuid: str)         -> Union[PhysicalDevice, MigDevice] # depending on the UUID value
- (bus_id: str)       -> PhysicalDevice
Note: This method takes exactly one non-None argument.
- Returns: Union[PhysicalDevice, MigDevice]
A
PhysicalDevice
instance or aMigDevice
instance.
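The dispatch rule above can be sketched in plain Python. The class bodies below are placeholders that only carry the names; only the type-based selection mirrors what Device.__new__ does:

```python
# Sketch of the type-based dispatch in Device.__new__; the real classes
# come from nvitop, these placeholders only carry the names.
class PhysicalDevice: ...
class MigDevice: ...

def resolve_device_class(index=None, *, uuid=None, bus_id=None) -> type:
    if sum(arg is not None for arg in (index, uuid, bus_id)) != 1:
        raise TypeError('Device() takes exactly 1 non-None argument')
    if isinstance(index, tuple):
        if len(index) != 2 or not all(isinstance(i, int) for i in index):
            raise TypeError('the tuple index must consist of two integers')
        return MigDevice  # (index: (int, int)) -> MigDevice
    if isinstance(index, int):
        return PhysicalDevice  # (index: int) -> PhysicalDevice
    if uuid is not None:  # the UUID prefix decides between the two classes
        return MigDevice if uuid.startswith('MIG-') else PhysicalDevice
    return PhysicalDevice  # (bus_id: str) -> PhysicalDevice

print(resolve_device_class(index=(1, 0)).__name__)       # MigDevice
print(resolve_device_class(uuid='GPU-xxxxxx').__name__)  # PhysicalDevice
```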
- __init__(index: int | str | None = None, *, uuid: str | None = None, bus_id: str | None = None) None [source]
Initialize the instance created by __new__().
- Raises:
libnvml.NVMLError_LibraryNotFound – If the NVML library cannot be found, usually because the NVIDIA driver is not installed.
libnvml.NVMLError_DriverNotLoaded – If the NVIDIA driver is not loaded.
libnvml.NVMLError_LibRmVersionMismatch – If RM detects a driver/library version mismatch, usually after upgrading the NVIDIA driver without reloading the kernel module.
libnvml.NVMLError_NotFound – If the device is not found for the given NVML identifier.
libnvml.NVMLError_InvalidArgument – If the device index is out of range.
- __getattr__(name: str) Any | Callable[..., Any] [source]
Get the object attribute.
If the attribute is not defined, make a method from pynvml.nvmlDeviceGet<AttributeName>(handle). The attribute name will be converted to a PascalCase string.
- Raises:
AttributeError – If the attribute is not defined in pynvml.py.
Examples
>>> device = Device(0)
>>> # Method `cuda_compute_capability` is not implemented in the class definition
>>> PhysicalDevice.cuda_compute_capability
AttributeError: type object 'Device' has no attribute 'cuda_compute_capability'
>>> # Dynamically create a new method from `pynvml.nvmlDeviceGetCudaComputeCapability(device.handle, *args, **kwargs)`
>>> device.cuda_compute_capability
<function PhysicalDevice.cuda_compute_capability at 0x7fbfddf5d9d0>
>>> device.cuda_compute_capability()
(8, 6)
- __reduce__() tuple[type[Device], tuple[int | tuple[int, int]]] [source]
Return state information for pickling.
- property index: int | tuple[int, int]
The NVML index of the device.
- Returns: Union[int, Tuple[int, int]]
Returns an int for a physical device and a tuple of two integers for a MIG device.
- property nvml_index: int | tuple[int, int]
The NVML index of the device.
- Returns: Union[int, Tuple[int, int]]
Returns an int for a physical device and a tuple of two integers for a MIG device.
- property physical_index: int
The index of the physical device.
- Returns: int
An int for the physical device index. For MIG devices, returns the index of the parent physical device.
- property handle: LP_struct_c_nvmlDevice_t
The NVML device handle.
- property cuda_index: int
The CUDA device index.
The value will be evaluated on the first call.
- Raises:
RuntimeError – If the current device is not visible to CUDA applications (i.e. not listed in the
CUDA_VISIBLE_DEVICES
environment variable or the environment variable is invalid).
- name() str | NaType [source]
The official product name of the GPU. This is an alphanumeric string. For all products.
- Returns: Union[str, NaType]
The official product name, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=name
- uuid() str | NaType [source]
This value is the globally unique immutable alphanumeric identifier of the GPU.
It does not correspond to any physical label on the board.
- Returns: Union[str, NaType]
The UUID of the device, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=uuid
- bus_id() str | NaType [source]
PCI bus ID as “domain:bus:device.function”, in hex.
- Returns: Union[str, NaType]
The PCI bus ID of the device, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=pci.bus_id
- serial() str | NaType [source]
This number matches the serial number physically printed on each board.
It is a globally unique immutable alphanumeric value.
- Returns: Union[str, NaType]
The serial number of the device, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=serial
- memory_info() MemoryInfo [source]
Return a named tuple with memory information (in bytes) for the device.
- Returns: MemoryInfo(total, free, used)
A named tuple with memory information; each item could be
nvitop.NA
when not applicable.
- memory_total() int | NaType [source]
Total installed GPU memory in bytes.
- Returns: Union[int, NaType]
Total installed GPU memory in bytes, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=memory.total
- memory_used() int | NaType [source]
Total memory allocated by active contexts in bytes.
- Returns: Union[int, NaType]
Total memory allocated by active contexts in bytes, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=memory.used
- memory_free() int | NaType [source]
Total free memory in bytes.
- Returns: Union[int, NaType]
Total free memory in bytes, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=memory.free
- memory_total_human() str | NaType [source]
Total installed GPU memory in human readable format.
- Returns: Union[str, NaType]
Total installed GPU memory in human readable format, or
nvitop.NA
when not applicable.
- memory_used_human() str | NaType [source]
Total memory allocated by active contexts in human readable format.
- Returns: Union[str, NaType]
Total memory allocated by active contexts in human readable format, or
nvitop.NA
when not applicable.
- memory_free_human() str | NaType [source]
Total free memory in human readable format.
- Returns: Union[str, NaType]
Total free memory in human readable format, or
nvitop.NA
when not applicable.
- memory_percent() float | NaType [source]
The percentage of used memory over total memory (
0 <= p <= 100
).- Returns: Union[float, NaType]
The percentage of used memory over total memory, or
nvitop.NA
when not applicable.
- memory_usage() str [source]
The used memory over total memory in human readable format.
- Returns: str
The used memory over total memory in human readable format, or
'N/A / N/A'
when not applicable.
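The relationship between the byte-valued queries and their human-readable and percentage variants can be sketched with a MemoryInfo-style named tuple. bytes2human below is a stand-in modeled on nvitop's MiB output, not its actual helper:

```python
from collections import namedtuple

# Sketch of the memory_* helpers on top of a MemoryInfo-style named tuple.
MemoryInfo = namedtuple('MemoryInfo', ['total', 'free', 'used'])

def bytes2human(n: int) -> str:
    # nvitop renders memory sizes as MiB strings such as '11016MiB'
    return f'{round(n / (1 << 20))}MiB'

def memory_percent(info: MemoryInfo) -> float:
    # percentage of used memory over total memory (0 <= p <= 100)
    return 100.0 * info.used / info.total

def memory_usage(info: MemoryInfo) -> str:
    # used memory over total memory in human readable format
    return f'{bytes2human(info.used)} / {bytes2human(info.total)}'

info = MemoryInfo(total=11554717696, free=11550654464, used=4063232)
print(memory_usage(info))              # 4MiB / 11019MiB
print(round(memory_percent(info), 2))  # 0.04
```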
- bar1_memory_info() MemoryInfo [source]
Return a named tuple with BAR1 memory information (in bytes) for the device.
- Returns: MemoryInfo(total, free, used)
A named tuple with BAR1 memory information; each item could be
nvitop.NA
when not applicable.
- bar1_memory_total() int | NaType [source]
Total BAR1 memory in bytes.
- Returns: Union[int, NaType]
Total BAR1 memory in bytes, or
nvitop.NA
when not applicable.
- bar1_memory_used() int | NaType [source]
Total used BAR1 memory in bytes.
- Returns: Union[int, NaType]
Total used BAR1 memory in bytes, or
nvitop.NA
when not applicable.
- bar1_memory_free() int | NaType [source]
Total free BAR1 memory in bytes.
- Returns: Union[int, NaType]
Total free BAR1 memory in bytes, or
nvitop.NA
when not applicable.
- bar1_memory_total_human() str | NaType [source]
Total BAR1 memory in human readable format.
- Returns: Union[str, NaType]
Total BAR1 memory in human readable format, or
nvitop.NA
when not applicable.
- bar1_memory_used_human() str | NaType [source]
Total used BAR1 memory in human readable format.
- Returns: Union[str, NaType]
Total used BAR1 memory in human readable format, or
nvitop.NA
when not applicable.
- bar1_memory_free_human() str | NaType [source]
Total free BAR1 memory in human readable format.
- Returns: Union[str, NaType]
Total free BAR1 memory in human readable format, or
nvitop.NA
when not applicable.
- bar1_memory_percent() float | NaType [source]
The percentage of used BAR1 memory over total BAR1 memory (0 <= p <= 100).
- Returns: Union[float, NaType]
The percentage of used BAR1 memory over total BAR1 memory, or
nvitop.NA
when not applicable.
- bar1_memory_usage() str [source]
The used BAR1 memory over total BAR1 memory in human readable format.
- Returns: str
The used BAR1 memory over total BAR1 memory in human readable format, or
'N/A / N/A'
when not applicable.
- utilization_rates() UtilizationRates [source]
Return a named tuple with GPU utilization rates (in percentage) for the device.
- Returns: UtilizationRates(gpu, memory, encoder, decoder)
A named tuple with GPU utilization rates (in percentage) for the device; each item could be
nvitop.NA
when not applicable.
- gpu_utilization() int | NaType [source]
Percent of time over the past sample period during which one or more kernels were executing on the GPU.
The sample period may be between 1 second and 1/6 second depending on the product.
- Returns: Union[int, NaType]
The GPU utilization rate in percentage, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=utilization.gpu
- gpu_percent() int | NaType
Percent of time over the past sample period during which one or more kernels were executing on the GPU.
The sample period may be between 1 second and 1/6 second depending on the product.
- Returns: Union[int, NaType]
The GPU utilization rate in percentage, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=utilization.gpu
- memory_utilization() int | NaType [source]
Percent of time over the past sample period during which global (device) memory was being read or written.
The sample period may be between 1 second and 1/6 second depending on the product.
- Returns: Union[int, NaType]
The memory bandwidth utilization rate of the GPU in percentage, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=utilization.memory
- encoder_utilization() int | NaType [source]
The encoder utilization rate in percentage.
- Returns: Union[int, NaType]
The encoder utilization rate in percentage, or
nvitop.NA
when not applicable.
- decoder_utilization() int | NaType [source]
The decoder utilization rate in percentage.
- Returns: Union[int, NaType]
The decoder utilization rate in percentage, or
nvitop.NA
when not applicable.
- clock_infos() ClockInfos [source]
Return a named tuple with current clock speeds (in MHz) for the device.
- Returns: ClockInfos(graphics, sm, memory, video)
A named tuple with current clock speeds (in MHz) for the device; each item could be
nvitop.NA
when not applicable.
- clocks() ClockInfos
Return a named tuple with current clock speeds (in MHz) for the device.
- Returns: ClockInfos(graphics, sm, memory, video)
A named tuple with current clock speeds (in MHz) for the device; each item could be
nvitop.NA
when not applicable.
- max_clock_infos() ClockInfos [source]
Return a named tuple with maximum clock speeds (in MHz) for the device.
- Returns: ClockInfos(graphics, sm, memory, video)
A named tuple with maximum clock speeds (in MHz) for the device; each item could be
nvitop.NA
when not applicable.
- max_clocks() ClockInfos
Return a named tuple with maximum clock speeds (in MHz) for the device.
- Returns: ClockInfos(graphics, sm, memory, video)
A named tuple with maximum clock speeds (in MHz) for the device; each item could be
nvitop.NA
when not applicable.
- clock_speed_infos() ClockSpeedInfos [source]
Return a named tuple with the current and the maximum clock speeds (in MHz) for the device.
- Returns: ClockSpeedInfos(current, max)
A named tuple with the current and the maximum clock speeds (in MHz) for the device.
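How clock_speed_infos() pairs the two readings can be shown with plain named tuples of the same shapes; the clock values below are illustrative, not real readings:

```python
from collections import namedtuple

# Named tuples mirroring nvitop's ClockInfos / ClockSpeedInfos shapes.
ClockInfos = namedtuple('ClockInfos', ['graphics', 'sm', 'memory', 'video'])
ClockSpeedInfos = namedtuple('ClockSpeedInfos', ['current', 'max'])

current = ClockInfos(graphics=300, sm=300, memory=405, video=540)     # idle clocks
maximum = ClockInfos(graphics=2100, sm=2100, memory=7001, video=1950)
speeds = ClockSpeedInfos(current=current, max=maximum)

# e.g. report the remaining SM clock headroom in MHz
print(speeds.max.sm - speeds.current.sm)  # 1800
```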
- graphics_clock() int | NaType [source]
Current frequency of graphics (shader) clock in MHz.
- Returns: Union[int, NaType]
The current frequency of graphics (shader) clock in MHz, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=clocks.current.graphics
- sm_clock() int | NaType [source]
Current frequency of SM (Streaming Multiprocessor) clock in MHz.
- Returns: Union[int, NaType]
The current frequency of SM (Streaming Multiprocessor) clock in MHz, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=clocks.current.sm
- memory_clock() int | NaType [source]
Current frequency of memory clock in MHz.
- Returns: Union[int, NaType]
The current frequency of memory clock in MHz, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=clocks.current.memory
- video_clock() int | NaType [source]
Current frequency of video encoder/decoder clock in MHz.
- Returns: Union[int, NaType]
The current frequency of video encoder/decoder clock in MHz, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=clocks.current.video
- max_graphics_clock() int | NaType [source]
Maximum frequency of graphics (shader) clock in MHz.
- Returns: Union[int, NaType]
The maximum frequency of graphics (shader) clock in MHz, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=clocks.max.graphics
- max_sm_clock() int | NaType [source]
Maximum frequency of SM (Streaming Multiprocessor) clock in MHz.
- Returns: Union[int, NaType]
The maximum frequency of SM (Streaming Multiprocessor) clock in MHz, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=clocks.max.sm
- max_memory_clock() int | NaType [source]
Maximum frequency of memory clock in MHz.
- Returns: Union[int, NaType]
The maximum frequency of memory clock in MHz, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=clocks.max.memory
- max_video_clock() int | NaType [source]
Maximum frequency of video encoder/decoder clock in MHz.
- Returns: Union[int, NaType]
The maximum frequency of video encoder/decoder clock in MHz, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=clocks.max.video
- fan_speed() int | NaType [source]
The fan speed value is the percent of the product’s maximum noise tolerance fan speed that the device’s fan is currently intended to run at.
This value may exceed 100% in certain cases. Note: The reported speed is the intended fan speed. If the fan is physically blocked and unable to spin, this output will not match the actual fan speed. Many parts do not report fan speeds because they rely on cooling via fans in the surrounding enclosure.
- Returns: Union[int, NaType]
The fan speed value in percentage, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=fan.speed
- temperature() int | NaType [source]
Core GPU temperature in degrees C.
- Returns: Union[int, NaType]
The core GPU temperature in Celsius degrees, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=temperature.gpu
- power_usage() int | NaType [source]
The last measured power draw for the entire board in milliwatts.
- Returns: Union[int, NaType]
The power draw for the entire board in milliwatts, or
nvitop.NA
when not applicable.
Command line equivalent:
$(( "$(nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=power.draw)" * 1000 ))
- power_draw() int | NaType
The last measured power draw for the entire board in milliwatts.
- Returns: Union[int, NaType]
The power draw for the entire board in milliwatts, or
nvitop.NA
when not applicable.
Command line equivalent:
$(( "$(nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=power.draw)" * 1000 ))
- power_limit() int | NaType [source]
The software power limit in milliwatts.
Set by software like nvidia-smi.
- Returns: Union[int, NaType]
The software power limit in milliwatts, or
nvitop.NA
when not applicable.
Command line equivalent:
$(( "$(nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=power.limit)" * 1000 ))
- power_status() str [source]
The string of power usage over power limit in watts.
- Returns: str
The string of power usage over power limit in watts, or
'N/A / N/A'
when not applicable.
- pcie_throughput() ThroughputInfo [source]
The current PCIe throughput in KiB/s.
This function queries a byte counter over a 20 ms interval, so the result is the PCIe throughput over that interval.
- Returns: ThroughputInfo(tx, rx)
A named tuple with current PCIe throughput in KiB/s; each item could be
nvitop.NA
when not applicable.
- pcie_tx_throughput() int | NaType [source]
The current PCIe transmit throughput in KiB/s.
This function queries a byte counter over a 20 ms interval, so the result is the PCIe throughput over that interval.
- Returns: Union[int, NaType]
The current PCIe transmit throughput in KiB/s, or
nvitop.NA
when not applicable.
- pcie_rx_throughput() int | NaType [source]
The current PCIe receive throughput in KiB/s.
This function queries a byte counter over a 20 ms interval, so the result is the PCIe throughput over that interval.
- Returns: Union[int, NaType]
The current PCIe receive throughput in KiB/s, or
nvitop.NA
when not applicable.
- pcie_tx_throughput_human() str | NaType [source]
The current PCIe transmit throughput in human readable format.
This function queries a byte counter over a 20 ms interval, so the result is the PCIe throughput over that interval.
- Returns: Union[str, NaType]
The current PCIe transmit throughput in human readable format, or
nvitop.NA
when not applicable.
- pcie_rx_throughput_human() str | NaType [source]
The current PCIe receive throughput in human readable format.
This function queries a byte counter over a 20 ms interval, so the result is the PCIe throughput over that interval.
- Returns: Union[str, NaType]
The current PCIe receive throughput in human readable format, or
nvitop.NA
when not applicable.
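The counter-over-interval idea behind these queries can be sketched with a stand-in counter. read_counter below is hypothetical; NVML performs the 20 ms sampling internally:

```python
import time

# Sketch: sample a monotonically increasing byte counter over a short
# interval and divide by the elapsed time to get a throughput estimate.
def sample_throughput(read_counter, interval: float = 0.02) -> float:
    start = read_counter()
    time.sleep(interval)  # NVML uses a 20 ms window internally
    return (read_counter() - start) / interval  # bytes per second

# Hypothetical counter that advances 1 MiB per read, for demonstration.
state = {'bytes': 0}
def read_counter() -> int:
    state['bytes'] += 1 << 20
    return state['bytes']

rate = sample_throughput(read_counter)
print(f'{rate / 1024:.0f} KiB/s')  # 51200 KiB/s
```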
- nvlink_link_count() int [source]
The number of NVLinks that the GPU has.
- Returns: int
The number of NVLinks that the GPU has.
- nvlink_throughput(interval: float | None = None) list[ThroughputInfo] [source]
The current NVLink throughput for each NVLink in KiB/s.
This function queries data counters between method calls, so the result is the NVLink throughput over that interval. On the first call, the function blocks for 20 ms to collect the first data counters.
- Parameters:
interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).
- Returns: List[ThroughputInfo(tx, rx)]
A list of named tuples with current NVLink throughput for each NVLink in KiB/s; each item could be
nvitop.NA
when not applicable.
- nvlink_mean_throughput(interval: float | None = None) ThroughputInfo [source]
The mean NVLink throughput for all NVLinks in KiB/s.
This function queries data counters between method calls, so the result is the NVLink throughput over that interval. On the first call, the function blocks for 20 ms to collect the first data counters.
- Parameters:
interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).
- Returns: ThroughputInfo(tx, rx)
A named tuple with the mean NVLink throughput for all NVLinks in KiB/s; an item could be
nvitop.NA
when not applicable.
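The mean variants aggregate the per-link readings. A sketch of averaging while skipping nvitop.NA entries (illustrative only; NA is stood in by a plain string here):

```python
NA = 'N/A'  # stand-in for nvitop.NA in this sketch

def mean_throughput(per_link):
    """Mean of the available per-link readings, or NA if none are available."""
    values = [v for v in per_link if v != NA]
    if not values:
        return NA
    return sum(values) / len(values)

print(mean_throughput([1024, NA, 3072]))  # 2048.0
print(mean_throughput([NA, NA]))          # N/A
```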
- nvlink_tx_throughput(interval: float | None = None) list[int | NaType] [source]
The current NVLink transmit data throughput in KiB/s for each NVLink.
This function queries data counters between method calls, so the result is the NVLink throughput over that interval. On the first call, the function blocks for 20ms to collect the initial data counters.
- Parameters:
interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).
- Returns: List[Union[int, NaType]]
The current NVLink transmit data throughput in KiB/s for each NVLink, or
nvitop.NA
when not applicable.
- nvlink_mean_tx_throughput(interval: float | None = None) int | NaType [source]
The mean NVLink transmit data throughput for all NVLinks in KiB/s.
This function queries data counters between method calls, so the result is the NVLink throughput over that interval. On the first call, the function blocks for 20ms to collect the initial data counters.
- Parameters:
interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).
- Returns: Union[int, NaType]
The mean NVLink transmit data throughput for all NVLinks in KiB/s, or
nvitop.NA
when not applicable.
- nvlink_rx_throughput(interval: float | None = None) list[int | NaType] [source]
The current NVLink receive data throughput for each NVLink in KiB/s.
This function queries data counters between method calls, so the result is the NVLink throughput over that interval. On the first call, the function blocks for 20ms to collect the initial data counters.
- Parameters:
interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).
- Returns: List[Union[int, NaType]]
The current NVLink receive data throughput for each NVLink in KiB/s, or
nvitop.NA
when not applicable.
- nvlink_mean_rx_throughput(interval: float | None = None) int | NaType [source]
The mean NVLink receive data throughput for all NVLinks in KiB/s.
This function queries data counters between method calls, so the result is the NVLink throughput over that interval. On the first call, the function blocks for 20ms to collect the initial data counters.
- Parameters:
interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).
- Returns: Union[int, NaType]
The mean NVLink receive data throughput for all NVLinks in KiB/s, or
nvitop.NA
when not applicable.
- nvlink_tx_throughput_human(interval: float | None = None) list[str | NaType] [source]
The current NVLink transmit data throughput for each NVLink in human readable format.
This function queries data counters between method calls, so the result is the NVLink throughput over that interval. On the first call, the function blocks for 20ms to collect the initial data counters.
- Parameters:
interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).
- Returns: List[Union[str, NaType]]
The current NVLink transmit data throughput for each NVLink in human readable format, or
nvitop.NA
when not applicable.
- nvlink_mean_tx_throughput_human(interval: float | None = None) str | NaType [source]
The mean NVLink transmit data throughput for all NVLinks in human readable format.
This function queries data counters between method calls, so the result is the NVLink throughput over that interval. On the first call, the function blocks for 20ms to collect the initial data counters.
- Parameters:
interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).
- Returns: Union[str, NaType]
The mean NVLink transmit data throughput for all NVLinks in human readable format, or
nvitop.NA
when not applicable.
- nvlink_rx_throughput_human(interval: float | None = None) list[str | NaType] [source]
The current NVLink receive data throughput for each NVLink in human readable format.
This function queries data counters between method calls, so the result is the NVLink throughput over that interval. On the first call, the function blocks for 20ms to collect the initial data counters.
- Parameters:
interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).
- Returns: List[Union[str, NaType]]
The current NVLink receive data throughput for each NVLink in human readable format, or
nvitop.NA
when not applicable.
- nvlink_mean_rx_throughput_human(interval: float | None = None) str | NaType [source]
The mean NVLink receive data throughput for all NVLinks in human readable format.
This function queries data counters between method calls, so the result is the NVLink throughput over that interval. On the first call, the function blocks for 20ms to collect the initial data counters.
- Parameters:
interval (Optional[float]) – The interval in seconds between two calls to get the NVLink throughput. If interval is a positive number, compares throughput counters before and after the interval (blocking). If interval is 0.0 or None, compares throughput counters since the last call, returning immediately (non-blocking).
- Returns: Union[str, NaType]
The mean NVLink receive data throughput for all NVLinks in human readable format, or
nvitop.NA
when not applicable.
- display_active() str | NaType [source]
A flag that indicates whether a display is initialized on the GPU (e.g. memory is allocated on the device for display).
Display can be active even when no monitor is physically attached. “Enabled” indicates an active display. “Disabled” indicates otherwise.
- Returns: Union[str, NaType]
'Disabled'
: if not an active display device.'Enabled'
: if an active display device.nvitop.NA
: if not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=display_active
- display_mode() str | NaType [source]
A flag that indicates whether a physical display (e.g. monitor) is currently connected to any of the GPU’s connectors.
“Enabled” indicates an attached display. “Disabled” indicates otherwise.
- Returns: Union[str, NaType]
'Disabled'
: if the display mode is disabled.'Enabled'
: if the display mode is enabled.nvitop.NA
: if not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=display_mode
- current_driver_model() str | NaType [source]
The driver model currently in use.
Always “N/A” on Linux, which does not support multiple driver models. On Windows, the TCC (WDM) and WDDM driver models are supported. The TCC driver model is optimized for compute applications, i.e. kernel launch times will be quicker with TCC. The WDDM driver model is designed for graphics applications and is not recommended for compute applications.
- Returns: Union[str, NaType]
'WDDM'
: for WDDM driver model on Windows.'WDM'
: for TCC (WDM) driver model on Windows.nvitop.NA
: if not applicable, e.g. on Linux.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=driver_model.current
- driver_model() str | NaType
The driver model currently in use.
Always “N/A” on Linux, which does not support multiple driver models. On Windows, the TCC (WDM) and WDDM driver models are supported. The TCC driver model is optimized for compute applications, i.e. kernel launch times will be quicker with TCC. The WDDM driver model is designed for graphics applications and is not recommended for compute applications.
- Returns: Union[str, NaType]
'WDDM'
: for WDDM driver model on Windows.'WDM'
: for TCC (WDM) driver model on Windows.nvitop.NA
: if not applicable, e.g. on Linux.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=driver_model.current
- persistence_mode() str | NaType [source]
A flag that indicates whether persistence mode is enabled for the GPU. Value is either “Enabled” or “Disabled”.
When persistence mode is enabled the NVIDIA driver remains loaded even when no active clients, such as X11 or nvidia-smi, exist. This minimizes the driver load latency associated with running dependent apps, such as CUDA programs. Linux only.
- Returns: Union[str, NaType]
'Disabled'
: if the persistence mode is disabled.'Enabled'
: if the persistence mode is enabled.nvitop.NA
: if not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=persistence_mode
- performance_state() str | NaType [source]
The current performance state for the GPU. States range from P0 (maximum performance) to P12 (minimum performance).
- Returns: Union[str, NaType]
The current performance state in format
P<int>
, ornvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=pstate
- total_volatile_uncorrected_ecc_errors() int | NaType [source]
Total errors detected across the entire chip.
- Returns: Union[int, NaType]
The total number of uncorrected errors in volatile ECC memory, or
nvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=ecc.errors.uncorrected.volatile.total
- compute_mode() str | NaType [source]
The compute mode flag indicates whether individual or multiple compute applications may run on the GPU.
- Returns: Union[str, NaType]
'Default'
: means multiple contexts are allowed per device.'Exclusive Thread'
: deprecated, use 'Exclusive Process' instead.'Prohibited'
: means no contexts are allowed per device (no compute apps).'Exclusive Process'
: means only one context is allowed per device, usable from multiple threads at a time.nvitop.NA
: if not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=compute_mode
- cuda_compute_capability() tuple[int, int] | NaType [source]
The CUDA compute capability for the device.
- Returns: Union[Tuple[int, int], NaType]
The CUDA compute capability version in format
(major, minor)
, ornvitop.NA
when not applicable.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=compute_cap
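Since the capability is a (major, minor) tuple, it compares lexicographically, which makes feature gating concise. For example (the Volta threshold below is a well-known CUDA fact, not taken from nvitop):

```python
def supports_tensor_cores(compute_capability):
    """Tensor Cores arrived with the Volta architecture, compute capability 7.0."""
    # Tuples compare element-wise, so (8, 6) >= (7, 0) and (6, 1) < (7, 0).
    return compute_capability >= (7, 0)

print(supports_tensor_cores((8, 6)))  # True  (Ampere)
print(supports_tensor_cores((6, 1)))  # False (Pascal)
```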
- mig_mode() str | NaType [source]
The MIG mode that the GPU is currently operating under.
- Returns: Union[str, NaType]
'Disabled'
: if the MIG mode is disabled.'Enabled'
: if the MIG mode is enabled.nvitop.NA
: if not applicable, e.g. the GPU does not support MIG mode.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=mig.mode.current
- is_mig_mode_enabled() bool [source]
Test whether the MIG mode is enabled on the device.
Return
False
if MIG mode is disabled or the device does not support MIG mode.
- max_mig_device_count() int [source]
Return the maximum number of MIG instances the device supports.
This method will return 0 if the device does not support MIG mode.
- mig_devices() list[MigDevice] [source]
Return a list of child MIG devices of the current device.
This method will return an empty list if the MIG mode is disabled or the device does not support MIG mode.
- is_leaf_device() bool [source]
Test whether the device is a physical device with MIG mode disabled or a MIG device.
Return
True
if the device is a physical device with MIG mode disabled or a MIG device, and False
if the device is a physical device with MIG mode enabled.
- to_leaf_devices() list[PhysicalDevice] | list[MigDevice] | list[CudaDevice] | list[CudaMigDevice] [source]
Return a list of leaf devices.
Note that a CUDA device is always a leaf device.
- processes() dict[int, GpuProcess] [source]
Return a dictionary of processes running on the GPU.
- Returns: Dict[int, GpuProcess]
A dictionary mapping PID to GPU process instance.
- as_snapshot() Snapshot [source]
Return a one-time snapshot of the device.
The attributes are defined in
SNAPSHOT_KEYS
.
- SNAPSHOT_KEYS: ClassVar[list[str]] = ['name', 'uuid', 'bus_id', 'memory_info', 'memory_used', 'memory_free', 'memory_total', 'memory_used_human', 'memory_free_human', 'memory_total_human', 'memory_percent', 'memory_usage', 'utilization_rates', 'gpu_utilization', 'memory_utilization', 'encoder_utilization', 'decoder_utilization', 'clock_infos', 'max_clock_infos', 'clock_speed_infos', 'sm_clock', 'memory_clock', 'fan_speed', 'temperature', 'power_usage', 'power_limit', 'power_status', 'pcie_throughput', 'pcie_tx_throughput', 'pcie_rx_throughput', 'pcie_tx_throughput_human', 'pcie_rx_throughput_human', 'display_active', 'display_mode', 'current_driver_model', 'persistence_mode', 'performance_state', 'total_volatile_uncorrected_ecc_errors', 'compute_mode', 'cuda_compute_capability', 'mig_mode']
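Conceptually, as_snapshot() evaluates each method named in SNAPSHOT_KEYS and records the results. A simplified sketch of that pattern with a dummy device class (not nvitop's actual Snapshot type):

```python
class DummyDevice:
    """Stand-in for a Device with a couple of snapshot-able methods."""

    def name(self):
        return 'Dummy GPU'

    def memory_total(self):
        return 8 * 1024**3

def take_snapshot(device, keys):
    """Call each named method once and collect the results into a dict."""
    return {key: getattr(device, key)() for key in keys}

snapshot = take_snapshot(DummyDevice(), ['name', 'memory_total'])
print(snapshot)  # {'name': 'Dummy GPU', 'memory_total': 8589934592}
```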
- oneshot() Generator[None, None, None] [source]
A utility context manager that considerably speeds up retrieving multiple pieces of device information at the same time.
Internally, different device info (e.g. memory_info, utilization_rates, …) may be fetched by the same routine, but only one piece of information is returned and the others are discarded. When using this context manager, the internal routine is executed once (in the example below, on memory_info()) and the other pieces of information are cached.
The cache is cleared when exiting the context manager block. The advice is to use this every time you retrieve more than one piece of information about the device.
Examples
>>> from nvitop import Device
>>> device = Device(0)
>>> with device.oneshot():
...     device.memory_info()        # collect multiple info
...     device.memory_used()        # return cached value
...     device.memory_free_human()  # return cached value
...     device.memory_percent()     # return cached value
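The caching behavior described above can be sketched as a context manager that memoizes an expensive query and clears the cache on exit (an illustrative toy, not nvitop's implementation):

```python
from contextlib import contextmanager

class Sensor:
    """Toy device whose query() stands in for an expensive driver call."""

    def __init__(self):
        self.driver_calls = 0
        self._caching = False
        self._cache = None

    def query(self):
        if self._caching and self._cache is not None:
            return self._cache            # served from the oneshot cache
        self.driver_calls += 1            # simulated expensive driver call
        result = {'used': 123, 'free': 456}
        if self._caching:
            self._cache = result
        return result

    @contextmanager
    def oneshot(self):
        self._caching = True
        try:
            yield
        finally:
            self._caching = False
            self._cache = None            # the cache is cleared on exit

sensor = Sensor()
with sensor.oneshot():
    sensor.query()
    sensor.query()                        # second call hits the cache
print(sensor.driver_calls)                # 1
```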
- class nvitop.PhysicalDevice(index: int | tuple[int, int] | str | None = None, *, uuid: str | None = None, bus_id: str | None = None)[source]
Bases:
Device
Class for physical devices.
This is the real GPU installed in the system.
- property physical_index: int
Zero-based index of the GPU. Can change at each boot.
Command line equivalent:
nvidia-smi --id=<IDENTIFIER> --format=csv,noheader,nounits --query-gpu=index
- max_mig_device_count() int [source]
Return the maximum number of MIG instances the device supports.
This method will return 0 if the device does not support MIG mode.
- mig_device(mig_index: int) MigDevice [source]
Return a child MIG device of the given index.
- Raises:
libnvml.NVMLError – If the device does not support MIG mode or the given MIG device does not exist.
- class nvitop.MigDevice(index: int | tuple[int, int] | str | None = None, *, uuid: str | None = None, bus_id: str | None = None)[source]
Bases:
Device
Class for MIG devices.
- classmethod count() int [source]
The total number of MIG devices aggregated over all physical devices.
- classmethod all() list[MigDevice] [source]
Return a list of MIG devices aggregated over all physical devices.
- classmethod from_indices(indices: Iterable[tuple[int, int]]) list[MigDevice] [source]
Return a list of MIG devices of the given indices.
- Parameters:
indices (Iterable[Tuple[int, int]]) – Indices of the MIG devices. Each index is a tuple of two integers.
- Returns: List[MigDevice]
A list of
MigDevice
instances of the given indices.
- Raises:
libnvml.NVMLError_LibraryNotFound – If the NVML library cannot be found, usually because the NVIDIA driver is not installed.
libnvml.NVMLError_DriverNotLoaded – If the NVIDIA driver is not loaded.
libnvml.NVMLError_LibRmVersionMismatch – If RM detects a driver/library version mismatch, usually after upgrading the NVIDIA driver without reloading the kernel module.
libnvml.NVMLError_NotFound – If the device is not found for the given NVML identifier.
- __init__(index: tuple[int, int] | str | None = None, *, uuid: str | None = None) None [source]
Initialize the instance created by
__new__()
.- Raises:
libnvml.NVMLError_LibraryNotFound – If the NVML library cannot be found, usually because the NVIDIA driver is not installed.
libnvml.NVMLError_DriverNotLoaded – If the NVIDIA driver is not loaded.
libnvml.NVMLError_LibRmVersionMismatch – If RM detects a driver/library version mismatch, usually after upgrading the NVIDIA driver without reloading the kernel module.
libnvml.NVMLError_NotFound – If the device is not found for the given NVML identifier.
- property parent: PhysicalDevice
The parent physical device.
- gpu_instance_id() int | NaType [source]
The gpu instance ID of the MIG device.
- Returns: Union[int, NaType]
The gpu instance ID of the MIG device, or
nvitop.NA
when not applicable.
- compute_instance_id() int | NaType [source]
The compute instance ID of the MIG device.
- Returns: Union[int, NaType]
The compute instance ID of the MIG device, or
nvitop.NA
when not applicable.
- as_snapshot() Snapshot [source]
Return a one-time snapshot of the device.
The attributes are defined in
SNAPSHOT_KEYS
.
- SNAPSHOT_KEYS: ClassVar[list[str]] = ['name', 'uuid', 'bus_id', 'memory_info', 'memory_used', 'memory_free', 'memory_total', 'memory_used_human', 'memory_free_human', 'memory_total_human', 'memory_percent', 'memory_usage', 'utilization_rates', 'gpu_utilization', 'memory_utilization', 'encoder_utilization', 'decoder_utilization', 'clock_infos', 'max_clock_infos', 'clock_speed_infos', 'sm_clock', 'memory_clock', 'fan_speed', 'temperature', 'power_usage', 'power_limit', 'power_status', 'pcie_throughput', 'pcie_tx_throughput', 'pcie_rx_throughput', 'pcie_tx_throughput_human', 'pcie_rx_throughput_human', 'display_active', 'display_mode', 'current_driver_model', 'persistence_mode', 'performance_state', 'total_volatile_uncorrected_ecc_errors', 'compute_mode', 'cuda_compute_capability', 'mig_mode', 'gpu_instance_id', 'compute_instance_id']
- class nvitop.CudaDevice(cuda_index: int | None = None, *, nvml_index: int | tuple[int, int] | None = None, uuid: str | None = None)[source]
Bases:
Device
Class for devices enumerated over the CUDA ordinal.
The enumeration order may vary with the
CUDA_VISIBLE_DEVICES
environment variable.
- See also for CUDA Device Enumeration:
CudaDevice.__new__()
returns different types depending on the given arguments.- (cuda_index: int) -> Union[CudaDevice, CudaMigDevice] # depending on `CUDA_VISIBLE_DEVICES` - (uuid: str) -> Union[CudaDevice, CudaMigDevice] # depending on `CUDA_VISIBLE_DEVICES` - (nvml_index: int) -> CudaDevice - (nvml_index: (int, int)) -> CudaMigDevice
Examples
>>> import os
>>> os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
>>> os.environ['CUDA_VISIBLE_DEVICES'] = '3,2,1,0'
>>> CudaDevice.count()                     # number of NVIDIA GPUs visible to CUDA applications
4
>>> Device.cuda.count()                    # use alias in class `Device`
4
>>> CudaDevice.all()                       # all CUDA visible devices (or `Device.cuda.all()`)
[
    CudaDevice(cuda_index=0, nvml_index=3, ...),
    CudaDevice(cuda_index=1, nvml_index=2, ...),
    ...
]
>>> cuda0 = CudaDevice(cuda_index=0)       # use CUDA ordinal (or `Device.cuda(0)`)
>>> cuda1 = CudaDevice(nvml_index=2)       # use NVML ordinal
>>> cuda2 = CudaDevice(uuid='GPU-xxxxxx')  # use UUID string
>>> cuda0.memory_free()                    # total free memory in bytes
11550654464
>>> cuda0.memory_free_human()              # total free memory in human readable format
'11016MiB'
>>> cuda1.as_snapshot()                    # take a one-time snapshot of the device
CudaDeviceSnapshot(
    real=CudaDevice(cuda_index=1, nvml_index=2, ...),
    ...
)
- Raises:
libnvml.NVMLError_LibraryNotFound – If the NVML library cannot be found, usually because the NVIDIA driver is not installed.
libnvml.NVMLError_DriverNotLoaded – If the NVIDIA driver is not loaded.
libnvml.NVMLError_LibRmVersionMismatch – If RM detects a driver/library version mismatch, usually after upgrading the NVIDIA driver without reloading the kernel module.
libnvml.NVMLError_NotFound – If the device is not found for the given NVML identifier.
libnvml.NVMLError_InvalidArgument – If the NVML index is out of range.
TypeError – If the number of non-None arguments is not exactly 1.
TypeError – If the given NVML index is a tuple but does not consist of two integers.
RuntimeError – If the index is out of range for the given
CUDA_VISIBLE_DEVICES
environment variable.
- classmethod is_available() bool [source]
Test whether there are any CUDA-capable devices available.
- classmethod all() list[CudaDevice] [source]
All CUDA visible devices.
Note
The result could be empty if the
CUDA_VISIBLE_DEVICES
environment variable is invalid.
- classmethod from_indices(indices: int | Iterable[int] | None = None) list[CudaDevice] [source]
Return a list of CUDA devices of the given CUDA indices.
The CUDA ordinals are enumerated from the
CUDA_VISIBLE_DEVICES
environment variable.- See also for CUDA Device Enumeration:
- Parameters:
indices (Optional[Union[int, Iterable[int]]]) – The indices of the GPUs in CUDA ordinal. If not given, returns all visible CUDA devices.
- Returns: List[CudaDevice]
A list of
CudaDevice
of the given CUDA indices.
- Raises:
libnvml.NVMLError_LibraryNotFound – If the NVML library cannot be found, usually because the NVIDIA driver is not installed.
libnvml.NVMLError_DriverNotLoaded – If the NVIDIA driver is not loaded.
libnvml.NVMLError_LibRmVersionMismatch – If RM detects a driver/library version mismatch, usually after upgrading the NVIDIA driver without reloading the kernel module.
RuntimeError – If the index is out of range for the given
CUDA_VISIBLE_DEVICES
environment variable.
- static __new__(cls, cuda_index: int | None = None, *, nvml_index: int | tuple[int, int] | None = None, uuid: str | None = None) Self [source]
Create a new instance of CudaDevice.
The type of the result is determined by the given argument.
- (cuda_index: int) -> Union[CudaDevice, CudaMigDevice] # depending on `CUDA_VISIBLE_DEVICES`
- (uuid: str) -> Union[CudaDevice, CudaMigDevice] # depending on `CUDA_VISIBLE_DEVICES`
- (nvml_index: int) -> CudaDevice
- (nvml_index: (int, int)) -> CudaMigDevice
Note: This method takes exactly one non-None argument.
- Returns: Union[CudaDevice, CudaMigDevice]
A
CudaDevice
instance or aCudaMigDevice
instance.
- Raises:
TypeError – If the number of non-None arguments is not exactly 1.
TypeError – If the given NVML index is a tuple but does not consist of two integers.
RuntimeError – If the index is out of range for the given
CUDA_VISIBLE_DEVICES
environment variable.
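The dispatch rules above can be pictured as a check on the shape of the NVML index (a schematic sketch, not the actual __new__ implementation):

```python
def cuda_device_class_name(nvml_index):
    """Pick the class by index shape: int -> CudaDevice, (int, int) -> CudaMigDevice."""
    if isinstance(nvml_index, int):
        return 'CudaDevice'
    if (isinstance(nvml_index, tuple) and len(nvml_index) == 2
            and all(isinstance(i, int) for i in nvml_index)):
        return 'CudaMigDevice'
    # Mirrors the documented TypeError for malformed tuple indices.
    raise TypeError(f'invalid NVML index: {nvml_index!r}')

print(cuda_device_class_name(2))       # CudaDevice
print(cuda_device_class_name((1, 0)))  # CudaMigDevice
```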
- __init__(cuda_index: int | None = None, *, nvml_index: int | tuple[int, int] | None = None, uuid: str | None = None) None [source]
Initialize the instance created by
__new__()
.- Raises:
libnvml.NVMLError_LibraryNotFound – If the NVML library cannot be found, usually because the NVIDIA driver is not installed.
libnvml.NVMLError_DriverNotLoaded – If the NVIDIA driver is not loaded.
libnvml.NVMLError_LibRmVersionMismatch – If RM detects a driver/library version mismatch, usually after upgrading the NVIDIA driver without reloading the kernel module.
libnvml.NVMLError_NotFound – If the device is not found for the given NVML identifier.
libnvml.NVMLError_InvalidArgument – If the NVML index is out of range.
RuntimeError – If the given device is not visible to CUDA applications (i.e. not listed in the
CUDA_VISIBLE_DEVICES
environment variable or the environment variable is invalid).
- class nvitop.CudaMigDevice(cuda_index: int | None = None, *, nvml_index: int | tuple[int, int] | None = None, uuid: str | None = None)[source]
Bases:
CudaDevice
,MigDevice
Class for CUDA devices that are MIG devices.
- nvitop.parse_cuda_visible_devices(cuda_visible_devices: str | None = <VALUE OMITTED>) list[int] | list[tuple[int, int]] [source]
Parse the given
CUDA_VISIBLE_DEVICES
value into a list of NVML device indices.This function is aliased by
Device.parse_cuda_visible_devices()
.Note
The result could be empty if the
CUDA_VISIBLE_DEVICES
environment variable is invalid.- See also for CUDA Device Enumeration:
- Parameters:
cuda_visible_devices (Optional[str]) – The value of the
CUDA_VISIBLE_DEVICES
variable. If not given, the value from the environment will be used. If explicitly given as None,
the CUDA_VISIBLE_DEVICES
environment variable will be unset before parsing.
- Returns: Union[List[int], List[Tuple[int, int]]]
A list of integers (physical devices) or a list of two-integer tuples (MIG devices) giving the corresponding real device indices.
Examples
>>> import os
>>> os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
>>> os.environ['CUDA_VISIBLE_DEVICES'] = '6,5'
>>> parse_cuda_visible_devices()       # parse the `CUDA_VISIBLE_DEVICES` environment variable to NVML indices
[6, 5]
>>> parse_cuda_visible_devices('0,4')  # pass the `CUDA_VISIBLE_DEVICES` value explicitly
[0, 4]
>>> parse_cuda_visible_devices('GPU-18ef14e9,GPU-849d5a8d')  # accept abbreviated UUIDs
[5, 6]
>>> parse_cuda_visible_devices(None)   # get all devices when the `CUDA_VISIBLE_DEVICES` environment variable unset
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> parse_cuda_visible_devices('MIG-d184f67c-c95f-5ef2-a935-195bd0094fbd')           # MIG device support (MIG UUID)
[(0, 0)]
>>> parse_cuda_visible_devices('MIG-GPU-3eb79704-1571-707c-aee8-f43ce747313d/13/0')  # MIG device support (GPU UUID)
[(0, 1)]
>>> parse_cuda_visible_devices('MIG-GPU-3eb79704/13/0')  # MIG device support (abbreviated GPU UUID)
[(0, 1)]
>>> parse_cuda_visible_devices('')     # empty string
[]
>>> parse_cuda_visible_devices('0,0')  # invalid `CUDA_VISIBLE_DEVICES` (duplicate device ordinal)
[]
>>> parse_cuda_visible_devices('16')   # invalid `CUDA_VISIBLE_DEVICES` (device ordinal out of range)
[]
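The all-or-nothing validation shown in the last examples (duplicates or out-of-range ordinals yield an empty list) can be sketched for the plain integer case (a simplification; the real parser also handles UUIDs and MIG identifiers):

```python
def parse_visible_devices(value, device_count):
    """Parse integer ordinals, mimicking the all-or-nothing validation above."""
    if not value:
        return []
    indices = []
    for part in value.split(','):
        try:
            index = int(part)
        except ValueError:
            return []          # non-integer entry invalidates the whole list
        if not 0 <= index < device_count or index in indices:
            return []          # out-of-range or duplicate ordinal
        indices.append(index)
    return indices

print(parse_visible_devices('6,5', 10))  # [6, 5]
print(parse_visible_devices('0,0', 10))  # []
print(parse_visible_devices('16', 10))   # []
```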
- nvitop.normalize_cuda_visible_devices(cuda_visible_devices: str | None = <VALUE OMITTED>) str [source]
Parse the given
CUDA_VISIBLE_DEVICES
value and convert it into a comma-separated string of UUIDs.This function is aliased by
Device.normalize_cuda_visible_devices()
.Note
The result could be an empty string if the
CUDA_VISIBLE_DEVICES
environment variable is invalid.- See also for CUDA Device Enumeration:
- Parameters:
cuda_visible_devices (Optional[str]) – The value of the
CUDA_VISIBLE_DEVICES
variable. If not given, the value from the environment will be used. If explicitly given as None,
the CUDA_VISIBLE_DEVICES
environment variable will be unset before parsing.
- Returns: str
The comma-separated string (GPU UUIDs) of the
CUDA_VISIBLE_DEVICES
environment variable.
Examples
>>> import os
>>> os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
>>> os.environ['CUDA_VISIBLE_DEVICES'] = '6,5'
>>> normalize_cuda_visible_devices()      # normalize the `CUDA_VISIBLE_DEVICES` environment variable to UUID strings
'GPU-849d5a8d-610e-eeea-1fd4-81ff44a23794,GPU-18ef14e9-dec6-1d7e-1284-3010c6ce98b1'
>>> normalize_cuda_visible_devices('4')   # pass the `CUDA_VISIBLE_DEVICES` value explicitly
'GPU-96de99c9-d68f-84c8-424c-7c75e59cc0a0'
>>> normalize_cuda_visible_devices('GPU-18ef14e9,GPU-849d5a8d')  # normalize abbreviated UUIDs
'GPU-18ef14e9-dec6-1d7e-1284-3010c6ce98b1,GPU-849d5a8d-610e-eeea-1fd4-81ff44a23794'
>>> normalize_cuda_visible_devices(None)  # get all devices when the `CUDA_VISIBLE_DEVICES` environment variable unset
'GPU-<GPU0-UUID>,GPU-<GPU1-UUID>,...'  # all GPU UUIDs
>>> normalize_cuda_visible_devices('MIG-d184f67c-c95f-5ef2-a935-195bd0094fbd')           # MIG device support (MIG UUID)
'MIG-d184f67c-c95f-5ef2-a935-195bd0094fbd'
>>> normalize_cuda_visible_devices('MIG-GPU-3eb79704-1571-707c-aee8-f43ce747313d/13/0')  # MIG device support (GPU UUID)
'MIG-37b51284-1df4-5451-979d-3231ccb0822e'
>>> normalize_cuda_visible_devices('MIG-GPU-3eb79704/13/0')  # MIG device support (abbreviated GPU UUID)
'MIG-37b51284-1df4-5451-979d-3231ccb0822e'
>>> normalize_cuda_visible_devices('')    # empty string
''
>>> normalize_cuda_visible_devices('0,0') # invalid `CUDA_VISIBLE_DEVICES` (duplicate device ordinal)
''
>>> normalize_cuda_visible_devices('16')  # invalid `CUDA_VISIBLE_DEVICES` (device ordinal out of range)
''
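The abbreviated-UUID handling shown above amounts to a unique prefix match against the full UUIDs. A hypothetical helper (the UUID list is taken from the examples above):

```python
def expand_uuid(abbrev, known_uuids):
    """Expand an abbreviated GPU UUID by unique prefix match (hypothetical helper)."""
    matches = [uuid for uuid in known_uuids if uuid.startswith(abbrev)]
    if len(matches) != 1:
        raise ValueError(f'unknown or ambiguous UUID: {abbrev!r}')
    return matches[0]

KNOWN_UUIDS = [
    'GPU-18ef14e9-dec6-1d7e-1284-3010c6ce98b1',
    'GPU-849d5a8d-610e-eeea-1fd4-81ff44a23794',
]
value = 'GPU-18ef14e9,GPU-849d5a8d'
print(','.join(expand_uuid(part, KNOWN_UUIDS) for part in value.split(',')))
```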