@paultsw
Created August 7, 2017 22:56
A simple way of computing a causal conv1d using a pad+shift over the whole sequence, implemented as a PyTorch module. (Currently untested; no CUDA-specific support.)
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """
    A causal 1D convolution: the input is left-padded so that the output
    at each timestep depends only on that timestep and earlier ones.
    """
    def __init__(self, kernel_size, in_channels, out_channels, dilation):
        super(CausalConv1d, self).__init__()
        # attributes:
        self.kernel_size = kernel_size
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.dilation = dilation
        # a dilated kernel of size k spans (k-1)*dilation past timesteps,
        # so that is how much padding (and trailing trim) we need:
        self.padding = (kernel_size - 1) * dilation
        # modules:
        self.conv1d = nn.Conv1d(in_channels, out_channels,
                                kernel_size, stride=1,
                                padding=self.padding,
                                dilation=dilation)

    def forward(self, seq):
        """
        Note that Conv1d expects (batch, in_channels, in_length).
        We assume that seq ~ (len(seq), batch, in_channels), so we permute first.
        Returns a tensor of shape (len(seq), batch, out_channels).
        """
        seq_ = seq.permute(1, 2, 0)
        conv1d_out = self.conv1d(seq_).permute(2, 0, 1)
        # Conv1d pads both ends of the sequence, so remove the padding
        # values from the end to keep the convolution causal:
        if self.padding > 0:
            conv1d_out = conv1d_out[:-self.padding]
        return conv1d_out
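
For reference, a minimal usage sketch. The sizes below are hypothetical, and it assumes a PyTorch version (0.4 or later) where plain tensors can be passed to modules without Variable wrappers:

# hypothetical sizes for a quick shape check:
seq_len, batch, in_channels, out_channels = 100, 8, 16, 32
layer = CausalConv1d(kernel_size=3, in_channels=in_channels,
                     out_channels=out_channels, dilation=2)
x = torch.randn(seq_len, batch, in_channels)
y = layer(x)
print(y.shape)  # torch.Size([100, 8, 32]) -- same length as the input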
@kdgutier commented Oct 3, 2020

I think you forgot to pad using
padding = (kernel_size-1) * dilation
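
The factor matters because a dilated kernel of size k reaches (k-1)*dilation timesteps into the past, not k-1, so both the padding and the trailing trim need that factor. A quick causality check against the module above (a sketch; the numbers are arbitrary):

# changing a future timestep must not affect earlier outputs:
layer = CausalConv1d(kernel_size=2, in_channels=1, out_channels=1, dilation=4)
x = torch.randn(20, 1, 1)
x2 = x.clone()
x2[10] += 1.0  # perturb timestep 10 only
y1, y2 = layer(x), layer(x2)
print(torch.allclose(y1[:10], y2[:10]))  # True: outputs before t=10 are unchanged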
