The answer to the first question is that a reshaped tensor sometimes triggers a copy of the underlying data and sometimes doesn't. The documentation says the following about the `reshape` method:

> Returns a tensor with the same data and number of elements as input, but with the specified shape. When possible, the returned tensor will be a view of input. Contiguous inputs and inputs with compatible strides can be reshaped without copying, but you should not depend on the copying vs. viewing behavior.

This quote introduces three things: the view of a tensor, the stride of a tensor, and contiguous tensors.

Stride is a tuple of integers, each of which represents the jump that needs to be performed on the underlying data to get to the next element within the corresponding dimension. The tensor's underlying data is just a one-dimensional physical array that is stored sequentially in memory. If we have, for example, an arbitrary 3-dimensional tensor, then the elements at indices (x, y, z) and (x+1, y, z) live at a distance of stride(0) from each other within the underlying data:

```python
import torch

tens_A = torch.arange(24).reshape(2, 3, 4)
x, y, z = 0, 1, 2
# translation from the n-dimensional index (x, y, z)
# to its offset in the underlying data
offset = x * tens_A.stride(0) + y * tens_A.stride(1) + z * tens_A.stride(2)
jump = tens_A.stride(0)  # tens_A.stride() returns the whole tuple
# add the jump to the offset of index (x, y, z)
# and compare against the element at index (x+1, y, z)
print(tens_A.storage()[offset + jump] == tens_A[x + 1, y, z])  # it is True
```

Using all the stride entries we can translate any given n-dimensional index to its offset within the physical array by ourselves. Here's the graphical interpretation of a stride:

![]()
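The index-to-offset translation described above can be generalized to any number of dimensions with a small helper; here is a minimal sketch (the function name `index_to_offset` is my own, not from the original post):

```python
import torch

def index_to_offset(tensor, index):
    # multiply each index component by the stride of its dimension,
    # then add the tensor's own offset into its storage
    return sum(i * s for i, s in zip(index, tensor.stride())) + tensor.storage_offset()

t = torch.arange(24).reshape(2, 3, 4)
idx = (1, 2, 3)
# reading the flat storage at the computed offset
# yields the same element as normal indexing
print(t.storage()[index_to_offset(t, idx)] == t[idx])
```

Because the helper only relies on strides and the storage offset, it also works for non-contiguous views such as transposes, where the strides are permuted but the storage is unchanged.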
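The copy-versus-view behavior of `reshape` mentioned in the quoted documentation can be observed directly by comparing data pointers; a short sketch under assumed example shapes (the variable names are mine):

```python
import torch

a = torch.arange(6).reshape(2, 3)
v = a.reshape(3, 2)   # contiguous input: reshape returns a view
print(v.data_ptr() == a.data_ptr())  # same storage, no copy

b = a.t()             # transpose: strides are permuted, b is non-contiguous
c = b.reshape(6)      # strides are incompatible, so reshape must copy
print(c.data_ptr() == b.data_ptr())  # different storage, data was copied
```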