
How to use AMD GPU for fastai/pytorch?

  • I'm using a laptop that has an Intel Corporation HD Graphics 5500 (rev 09) iGPU and an AMD Radeon R5 M255 graphics card.
    Does anyone know how to set it up for deep learning, specifically fastai/PyTorch?
      September 8, 2021 11:48 PM IST
    0
  • Update :

    As of PyTorch 1.8 (March 4, 2021), ROCm builds are available from PyTorch's official website. You can now install them on Linux the same way you used to install the CUDA/CPU versions.

    Currently, only pip packages are provided. Also, macOS and Windows are still not supported (I haven't tested with WSL2, though!).
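
    For reference, the install command for the ROCm wheels at the time of PyTorch 1.8 looked something like the following. The index URL and ROCm version tag are illustrative; the selector at pytorch.org/get-started/locally/ generates the exact command for the current release:

    ```shell
    # Install the ROCm build of PyTorch via pip (Linux only).
    # The rocm4.0.1 tag below matches the PyTorch 1.8 era; newer releases
    # use a different tag, so copy the command from the pytorch.org selector.
    pip3 install torch torchvision -f https://download.pytorch.org/whl/rocm4.0.1/torch_stable.html
    ```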

    Old answer:

    You need to install the ROCm version. The official AMD instructions for building PyTorch are here.

    There was previously a wheel package for ROCm, but AMD doesn't seem to distribute it anymore; instead, you need to build PyTorch from source, as the guide I linked to explains.

    However, you may consult this page to build the latest PyTorch version: the unofficial ROCm/PyTorch page.
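
    Whichever install route you take, you can check which backend your build was compiled against. A minimal sketch: on ROCm builds `torch.version.hip` is a version string (and the GPU is exposed through the familiar `torch.cuda` API), while on CPU-only or CUDA builds it is `None`:

    ```python
    import torch

    def rocm_backend_info():
        # torch.version.hip is a HIP version string on ROCm wheels and
        # None on CPU-only or CUDA builds; ROCm devices are reached
        # through the regular torch.cuda API (HIP masquerades as CUDA).
        return {
            "torch": torch.__version__,
            "hip": torch.version.hip,
            "gpu_available": torch.cuda.is_available(),
        }

    print(rocm_backend_info())
    ```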

      September 9, 2021 12:39 PM IST
    0
  • Update: In March 2021, PyTorch added support for AMD GPUs; you can install and configure it like any other CUDA-based GPU. Here is the link.
    I don't know about PyTorch, but even though Keras is now integrated with TF, you can use Keras on an AMD GPU via PlaidML (link!), a library made by Intel. It's pretty cool and easy to set up, plus it's handy to be able to switch Keras backends between projects.
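    A minimal sketch of the backend switch, assuming the plaidml-keras package is installed and `plaidml-setup` has been run once to pick a device; the key point is that `KERAS_BACKEND` must be set before keras is imported:

    ```python
    import os

    # Select PlaidML as the Keras backend *before* the first keras import;
    # Keras reads this environment variable at import time.
    os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

    # import keras  # keras now routes ops through PlaidML
    #               # (requires the plaidml-keras package to be installed)
    ```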
      September 9, 2021 4:39 PM IST
    0
  • With the PyTorch 1.8 release, we are delighted to announce a new installation option for users of PyTorch on the ROCm™ open software platform. An installable Python package is now hosted on pytorch.org, along with instructions for local installation in the same simple, selectable format as PyTorch packages for CPU-only configurations and other GPU platforms. PyTorch on ROCm includes full capability for mixed-precision and large-scale training using AMD’s MIOpen & RCCL libraries. This provides a new option for data scientists, researchers, students, and others in the community to get started with accelerated PyTorch using AMD GPUs.

    THE ROCM ECOSYSTEM

    ROCm is AMD’s open source software platform for GPU-accelerated high performance computing and machine learning. Since the original ROCm release in 2016, the ROCm platform has evolved to support additional libraries and tools, a wider set of Linux® distributions, and a range of new GPUs. This includes the AMD Instinct™ MI100, the first GPU based on AMD CDNA™ architecture.

    The ROCm ecosystem has an established history of support for PyTorch, which was initially implemented as a fork of the PyTorch project, and more recently through ROCm support in the upstream PyTorch code. PyTorch users can install PyTorch for ROCm using AMD’s public PyTorch docker image, and can of course build PyTorch for ROCm from source. With PyTorch 1.8, these existing installation options are now complemented by the availability of an installable Python package.
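
    For the docker route mentioned above, a sketch of pulling and running AMD's public image (the tag is illustrative; the device flags follow AMD's published ROCm container instructions and may need adjusting for your setup):

    ```shell
    # Pull AMD's public ROCm PyTorch image from Docker Hub.
    docker pull rocm/pytorch:latest

    # ROCm containers need access to the kernel GPU interfaces
    # (/dev/kfd and /dev/dri) and the video group to see the GPU.
    docker run -it --device=/dev/kfd --device=/dev/dri \
        --group-add video rocm/pytorch:latest
    ```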

    The primary focus of ROCm has always been high performance computing at scale. The combined capabilities of ROCm and AMD’s Instinct family of data center GPUs are particularly suited to the challenges of HPC at data center scale. PyTorch is a natural fit for this environment, as HPC and ML workflows become more intertwined.

    Getting started with PyTorch for ROCm

    The scope for this build of PyTorch is AMD GPUs with ROCm support, running on Linux. The GPUs supported by ROCm include all of AMD’s Instinct family of compute-focused data center GPUs, along with some other select GPUs. A current list of supported GPUs can be found in the ROCm Github repository. After confirming that the target system includes supported GPUs and the current 4.0.1 release of ROCm, installation of PyTorch follows the same simple Pip-based installation as any other Python package. As with PyTorch builds for other platforms, the configurator at https://pytorch.org/get-started/locally/ provides the specific command line to be run.

    PyTorch for ROCm is built from the upstream PyTorch repository, and is a full featured implementation. Notably, it includes support for distributed training across multiple GPUs and supports accelerated mixed precision training.
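
    As a sketch of the mixed-precision support mentioned above: the same `torch.cuda.amp` API used on CUDA works unchanged on ROCm builds, since ROCm devices are exposed through the `torch.cuda` namespace. The model and shapes here are illustrative; with `enabled=False` (no GPU) the autocast and scaler calls degrade to plain FP32 training, so the same script runs anywhere:

    ```python
    import torch

    # Toy model and data; on a ROCm build, .cuda() would move these to the AMD GPU.
    model = torch.nn.Linear(16, 4)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    use_gpu = torch.cuda.is_available()
    scaler = torch.cuda.amp.GradScaler(enabled=use_gpu)

    x = torch.randn(8, 16)
    y = torch.randn(8, 4)

    # autocast runs eligible ops in reduced precision when enabled.
    with torch.cuda.amp.autocast(enabled=use_gpu):
        loss = torch.nn.functional.mse_loss(model(x), y)

    # GradScaler scales the loss to avoid FP16 gradient underflow;
    # with enabled=False these calls are pass-throughs.
    scaler.scale(loss).backward()
    scaler.step(opt)
    scaler.update()
    ```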

      September 14, 2021 1:46 PM IST
    0