
What's the difference between reshape and view in pytorch?

  • In numpy, we use ndarray.reshape() for reshaping an array.

    I noticed that in PyTorch, people use torch.view() for the same purpose, but at the same time there is also a torch.reshape().

    So I am wondering: what are the differences between them, and when should I use each?

      July 19, 2021 3:57 PM IST
    0
  • torch.view will return a tensor with the new shape. The returned tensor will always share the underlying data with the original tensor.
    torch.reshape returns a tensor with the same data and number of elements as the input, but with the specified shape. When possible, the returned tensor will be a view of the input; otherwise, it will be a copy. Contiguous inputs and inputs with compatible strides can be reshaped without copying, but you should not depend on the copying vs. viewing behavior.
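
    A quick way to check which behavior you got (a minimal sketch, assuming a recent PyTorch; Tensor.data_ptr() returns the address of a tensor's first element, so equal addresses mean shared storage):

    >>> import torch
    >>> z = torch.zeros(3, 2)
    >>> z.view(2, 3).data_ptr() == z.data_ptr()       # view always shares storage
    True
    >>> z.reshape(2, 3).data_ptr() == z.data_ptr()    # contiguous input: reshape returns a view here
    True
    >>> z.t().reshape(6).data_ptr() == z.data_ptr()   # non-contiguous input: reshape has to copy
    False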
      August 3, 2021 10:26 PM IST
    0
  • I would say the answers here are technically correct, but there's another reason for reshape's existence: PyTorch is usually considered more convenient than other frameworks because it stays close to Python and numpy. It's fitting that the question brings up numpy.

    Take size and shape in PyTorch. size() is a method, so you call it as x.size(). shape is not a method but an attribute, exactly like numpy's ndarray.shape, so you write x.shape. PyTorch gives you both, which is handy: if you come from numpy, you can keep using the spelling you already know. reshape() exists for the same reason - it mirrors numpy's ndarray.reshape().
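
    For instance (a small sketch, assuming a recent PyTorch and numpy):

    >>> import torch
    >>> x = torch.zeros(3, 2)
    >>> x.size()     # PyTorch-style method call
    torch.Size([3, 2])
    >>> x.shape      # numpy-style attribute, same result
    torch.Size([3, 2])
    >>> import numpy as np
    >>> np.zeros((3, 2)).shape
    (3, 2)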
      January 3, 2022 1:52 PM IST
    0
  • Tensor.reshape() is more robust. It will work on any tensor, while Tensor.view() works only when the requested shape is compatible with the tensor's strides; in practice, that means a tensor t with t.is_contiguous() == True.

    Explaining contiguous vs. non-contiguous memory layout is another story, but you can always make the tensor t contiguous by calling t.contiguous() (which copies the data if needed), and then view() will work without the error.
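
    A minimal sketch of the failure and the fix (assuming a recent PyTorch; the error message is abbreviated):

    >>> import torch
    >>> t = torch.zeros(3, 2).t()    # the transpose makes t non-contiguous
    >>> t.is_contiguous()
    False
    >>> t.view(6)
    Traceback (most recent call last):
      ...
    RuntimeError: view size is not compatible with input tensor's size and stride ...
    >>> t.contiguous().view(6)       # copy into contiguous memory first, then view
    tensor([0., 0., 0., 0., 0., 0.])
    >>> t.reshape(6)                 # reshape does the copy-if-needed for you
    tensor([0., 0., 0., 0., 0., 0.])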

      January 14, 2022 2:06 PM IST
    0
  • Although both torch.view and torch.reshape are used to reshape tensors, here are the differences between them.

    As the name suggests, torch.view merely creates a view of the original tensor. The new tensor will always share its data with the original tensor. This means that if you change the original tensor, the reshaped tensor will change and vice versa.

    >>> z = torch.zeros(3, 2)
    >>> x = z.view(2, 3)
    >>> z.fill_(1)
    tensor([[1., 1.],
            [1., 1.],
            [1., 1.]])
    >>> x
    tensor([[1., 1., 1.],
            [1., 1., 1.]])


    To ensure that the new tensor always shares its data with the original, torch.view imposes some contiguity constraints on the shapes of the two tensors [docs]. More often than not this is not a concern, but sometimes torch.view throws an error even if the shapes of the two tensors are compatible. Here's a famous counter-example.

    >>> z = torch.zeros(3, 2)
    >>> y = z.t()
    >>> y.size()
    torch.Size([2, 3])
    >>> y.view(6)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    RuntimeError: invalid argument 2: view size is not compatible with input tensor's
    size and stride (at least one dimension spans across two contiguous subspaces).
    Call .contiguous() before .view().

    torch.reshape doesn't impose any contiguity constraints, but also doesn't guarantee data sharing. The new tensor may be a view of the original tensor, or it may be a new tensor altogether.

    >>> z = torch.zeros(3, 2)
    >>> y = z.reshape(6)
    >>> x = z.t().reshape(6)
    >>> z.fill_(1)
    tensor([[1., 1.],
            [1., 1.],
            [1., 1.]])
    >>> y
    tensor([1., 1., 1., 1., 1., 1.])
    >>> x
    tensor([0., 0., 0., 0., 0., 0.])


    TL;DR:

    If you just want to reshape tensors, use torch.reshape. If you're also concerned about memory usage and want to ensure that the two tensors share the same data, use torch.view.

      August 16, 2021 3:02 PM IST
    0