- torch.cat((x, y), dim=0)
Concatenates the given sequence of tensors along the specified dimension.
The output tensor has the same shape as the inputs except in the dimension along which they are concatenated, where the sizes add up.
All input tensors must have the same shape except in the dimension along which the concatenation takes place.
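A quick sanity check of this shape rule (a minimal sketch, not from the original notes; the tensors here are arbitrary examples): sizes add up along the concatenation dimension, and any mismatch in another dimension is an error.

```python
import torch

# Inputs must match in every dimension except the one being concatenated;
# along that dimension the sizes simply add up.
a = torch.zeros(2, 3)
b = torch.zeros(5, 3)                  # differs from a only in dim 0
print(torch.cat((a, b), dim=0).shape)  # torch.Size([7, 3])

# A mismatch in any other dimension raises a RuntimeError.
c = torch.zeros(2, 4)
try:
    torch.cat((a, c), dim=0)           # dim 1 differs: 3 vs 4
except RuntimeError as e:
    print("shape mismatch:", e)
```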
x = torch.randn(1,1,2,3)
tensor([[[[ 0.1735, 0.9757, 0.8116],
[ 0.4838, 0.9459, -0.3021]]]])
torch.cat((x,x),0) # along the 0th (outermost) dimension
tensor([
[[[ 0.1735, 0.9757, 0.8116],
[ 0.4838, 0.9459, -0.3021]]],
[[[ 0.1735, 0.9757, 0.8116],
[ 0.4838, 0.9459, -0.3021]]]
])
torch.cat((x,x),0).shape
torch.Size([2, 1, 2, 3])
torch.cat((x,x),1) # along the 1st dimension
tensor([[
[[ 0.1735, 0.9757, 0.8116],
[ 0.4838, 0.9459, -0.3021]],
[[ 0.1735, 0.9757, 0.8116],
[ 0.4838, 0.9459, -0.3021]]
]])
torch.cat((x,x),1).shape
torch.Size([1, 2, 2, 3])
torch.cat((x,x),2)
tensor([[
[
[ 0.1735, 0.9757, 0.8116],
[ 0.4838, 0.9459, -0.3021],
[ 0.1735, 0.9757, 0.8116],
[ 0.4838, 0.9459, -0.3021]
]
]])
torch.cat((x,x),2).shape
torch.Size([1, 1, 4, 3])
torch.cat((x,x),3)
tensor([[[
[ 0.1735, 0.9757, 0.8116, 0.1735, 0.9757, 0.8116],
[ 0.4838, 0.9459, -0.3021, 0.4838, 0.9459, -0.3021]
]]])
torch.cat((x,x),3).shape
torch.Size([1, 1, 2, 6])
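One more detail worth noting (an addition to the notes, verified against PyTorch's behaviour): negative dimension indices count from the end, so for a 4-D tensor `dim=-1` is the same as `dim=3`.

```python
import torch

x = torch.randn(1, 1, 2, 3)

# Negative dims count from the end: for a 4-D tensor, dim=-1 is dim 3.
assert torch.cat((x, x), -1).shape == torch.cat((x, x), 3).shape
print(torch.cat((x, x), -1).shape)  # torch.Size([1, 1, 2, 6])
```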
- torch.chunk(input, chunks, dim)
Splits the tensor into the specified number of chunks along the given dimension and returns them as a tuple of tensors.
a = torch.tensor([[[[1,2],[3,4]],[[5,6],[7,8]]],[[[9,10],[11,12]],[[13,14],[15,16]]]])
tensor([[[[ 1, 2],
[ 3, 4]],
[[ 5, 6],
[ 7, 8]]],
[[[ 9, 10],
[11, 12]],
[[13, 14],
[15, 16]]]])
a.size()
torch.Size([2, 2, 2, 2])
The tensor can be visualised with the last dimension (3) as width, the second-last (2) as height, the third-last (1) as channels, and the zeroth dimension as the number of images.
torch.chunk(a,2,0)
(tensor([[[[1, 2],
[3, 4]],
[[5, 6],
[7, 8]]]]), tensor([[[[ 9, 10],
[11, 12]],
[[13, 14],
[15, 16]]]]))
torch.chunk(a,2,0)[0].size()
torch.Size([1, 2, 2, 2])
Chunking along the zeroth dimension simply splits the batch into individual image tensors. Visualise this as taking a knife and cutting the tensor pictured above down the middle: the result is two tensors, each with the zeroth dimension halved. This is analogous to splitting a batch of two images into single images.
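A point the notes don't cover (added here as a hedged aside, verified against PyTorch's documented behaviour): when the size along `dim` is not divisible by the number of chunks, `torch.chunk` does not fail; it returns chunks of size `ceil(size/chunks)` with a smaller trailing chunk.

```python
import torch

# 10 elements split into 3 chunks along dim 0: chunk size is
# ceil(10/3) = 4, so the last chunk is smaller.
t = torch.arange(10)
parts = torch.chunk(t, 3, 0)
print([p.shape[0] for p in parts])  # [4, 4, 2]
```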
torch.chunk(a,2,1)
(tensor([[[[ 1, 2],
[ 3, 4]]],
[[[ 9, 10],
[11, 12]]]]), tensor([[[[ 5, 6],
[ 7, 8]]],
[[[13, 14],
[15, 16]]]]))
torch.chunk(a,2,1)[0].size()
torch.Size([2, 1, 2, 2])
The tensor is cut between the channels, leaving one channel per image. The number of dimensions stays the same; only the dimension being chunked is halved. If the initial tensor were two 4x4 images with 2 channels, this chunked tuple would hold two tensors: the first containing the first channel of both images, the second containing the second channel of both images.
torch.chunk(a,2,2)
(tensor([[[[ 1, 2]],
[[ 5, 6]]],
[[[ 9, 10]],
[[13, 14]]]]), tensor([[[[ 3, 4]],
[[ 7, 8]]],
[[[11, 12]],
[[15, 16]]]]))
torch.chunk(a,2,2)[0].size()
torch.Size([2, 2, 1, 2])
The tensor here is cut along the height of the images, from top to bottom. The output tuple consists of a first tensor holding the top portion of both images and a second tensor holding the bottom portion of both images.
torch.chunk(a,2,3)
(tensor([[[[ 1],
[ 3]],
[[ 5],
[ 7]]],
[[[ 9],
[11]],
[[13],
[15]]]]), tensor([[[[ 2],
[ 4]],
[[ 6],
[ 8]]],
[[[10],
[12]],
[[14],
[16]]]]))
torch.chunk(a,2,3)[0].size()
torch.Size([2, 2, 2, 1])
The tensor here is cut along the width of the images, from left to right. The output tuple consists of a first tensor holding the left half of both images and a second tensor holding the right half of both images.
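The examples above suggest that torch.cat and torch.chunk are inverses of each other. A small round-trip check (a sketch added to these notes, using an equivalent 2x2x2x2 tensor built with arange rather than the exact values above):

```python
import torch

a = torch.arange(16).reshape(2, 2, 2, 2)

# Re-concatenating the chunks along the same dimension
# reconstructs the original tensor, for every dimension.
for dim in range(4):
    pieces = torch.chunk(a, 2, dim)
    assert torch.equal(torch.cat(pieces, dim), a)
print("round trip holds for every dimension")
```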