QBoard » Artificial Intelligence & ML » AI and ML - PyTorch » How to check if pytorch is using the GPU?

How to check if pytorch is using the GPU?

  • I would like to know if pytorch is using my GPU. It's possible to detect with nvidia-smi if there is any activity from the GPU during the process, but I want something written in a python script.

    Is there a way to do so?
      December 28, 2020 11:59 AM IST
    0
  • From a practical standpoint, one minor digression:

    import torch
    dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

    This dev now records whether you are running on CUDA or on the CPU.

    There is also a difference between how models and tensors are moved to CUDA, which can be a bit surprising at first.

    import torch
    import torch.nn as nn
    dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
    t1 = torch.randn(1,2)
    t2 = torch.randn(1,2).to(dev)
    print(t1)  # tensor([[-0.2678,  1.9252]])
    print(t2)  # tensor([[ 0.5117, -3.6247]], device='cuda:0')
    t1.to(dev)  # .to() returns a moved copy; t1 itself is unchanged
    print(t1)  # tensor([[-0.2678,  1.9252]])
    print(t1.is_cuda) # False
    t1 = t1.to(dev)
    print(t1)  # tensor([[-0.2678,  1.9252]], device='cuda:0')
    print(t1.is_cuda) # True
    
    class M(nn.Module):
        def __init__(self):        
            super().__init__()        
            self.l1 = nn.Linear(1,2)
    
        def forward(self, x):                      
            x = self.l1(x)
            return x
    model = M()   # not on cuda
    model.to(dev) # is on cuda (all parameters)
    print(next(model.parameters()).is_cuda) # True

    This is all a bit tricky, but understanding it once saves a lot of debugging later.
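Putting the pieces above together, here is a minimal device-agnostic sketch (layer size and input shape are arbitrary choices for illustration) that moves both the model and the input tensor to the same device before the forward pass:

```python
import torch
import torch.nn as nn

# Pick the device once, then reuse it everywhere.
dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(1, 2).to(dev)   # modules are moved in place
x = torch.randn(4, 1).to(dev)     # tensors return a moved copy
out = model(x)                    # model and input are on the same device

print(out.device)  # cuda:0 on a GPU machine, cpu otherwise
```

The key point is that the same script runs unchanged on both GPU and CPU machines, because the device is decided once at the top.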

      November 26, 2021 12:08 PM IST
    0
  • This is going to work:
    In [1]: import torch
    
    In [2]: torch.cuda.current_device()
    Out[2]: 0
    
    In [3]: torch.cuda.device(0)
    Out[3]: <torch.cuda.device at 0x7efce0b03be0>
    
    In [4]: torch.cuda.device_count()
    Out[4]: 1
    
    In [5]: torch.cuda.get_device_name(0)
    Out[5]: 'GeForce GTX 950M'
    
    In [6]: torch.cuda.is_available()
    Out[6]: True

    This tells me that CUDA is available and that PyTorch can see one GPU, a GeForce GTX 950M.
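Since the original question asked about detecting GPU activity from a Python script (rather than watching nvidia-smi), a small sketch like the one below can report CUDA memory usage from inside the process. The tensor size is arbitrary; the calls are guarded because the torch.cuda memory queries are only meaningful when a GPU is present:

```python
import torch

if torch.cuda.is_available():
    # Allocating a tensor on the GPU makes the memory counters move,
    # which confirms the GPU is actually being used by this process.
    t = torch.randn(1000, 1000, device="cuda")
    print(torch.cuda.get_device_name(0))
    print(f"allocated: {torch.cuda.memory_allocated(0)} bytes")
    print(f"reserved:  {torch.cuda.memory_reserved(0)} bytes")
else:
    print("CUDA not available; running on CPU")
```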

      December 29, 2020 12:12 PM IST
    0
  • This should work:

    import torch
    
    torch.cuda.is_available()
    >>> True
    
    torch.cuda.current_device()
    >>> 0
    
    torch.cuda.device(0)
    >>> <torch.cuda.device at 0x7efce0b03be0>
    
    torch.cuda.device_count()
    >>> 1
    
    torch.cuda.get_device_name(0)
    >>> 'GeForce GTX 950M'

    This tells me CUDA is available and can be used by one of your devices (GPUs), and that device 0, the GeForce GTX 950M, is the one PyTorch will use by default.
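One caveat: calls such as torch.cuda.current_device() and torch.cuda.get_device_name() raise an error on CPU-only installs, so it is safer to check torch.cuda.is_available() first. A small helper along these lines (the function name is just a suggestion) wraps the checks above into a single device summary:

```python
import torch

def describe_device() -> str:
    """Return a one-line summary of the device PyTorch will use."""
    if torch.cuda.is_available():
        idx = torch.cuda.current_device()
        return f"cuda:{idx} ({torch.cuda.get_device_name(idx)})"
    return "cpu"

print(describe_device())  # e.g. "cuda:0 (GeForce GTX 950M)" or "cpu"
```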

      September 2, 2021 1:33 PM IST
    0