- `ModuleList`: https://pytorch.org/docs/stable/nn.html#torch.nn.ModuleList

```python
class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.linears = nn.ModuleList([nn.Linear(10, 10) for i in range(10)])

    def forward(self, x):
        # ModuleList can act as an iterable, or be indexed using ints
        for i, l in enumerate(self.linears):
            x = self.linears[i // 2](x) + l(x)
        return x
```
- `ParameterList` (also an example to understand `Parameter`): https://pytorch.org/docs/stable/nn.html#torch.nn.ParameterList

```python
class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.params = nn.ParameterList([nn.Parameter(torch.randn(10, 10)) for i in range(10)])

    def forward(self, x):
        # ParameterList can act as an iterable, or be indexed using ints
        for i, p in enumerate(self.params):
            x = self.params[i // 2].mm(x) + p.mm(x)
        return x
```
- `Sequential` (with `OrderedDict`): https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential

```python
# Example of using Sequential
model = nn.Sequential(
    nn.Conv2d(1, 20, 5),
    nn.ReLU(),
    nn.Conv2d(20, 64, 5),
    nn.ReLU()
)

# Example of using Sequential with OrderedDict
model = nn.Sequential(OrderedDict([
    ('conv1', nn.Conv2d(1, 20, 5)),
    ('relu1', nn.ReLU()),
    ('conv2', nn.Conv2d(20, 64, 5)),
    ('relu2', nn.ReLU())
]))
```
- `state_dict`: https://pytorch.org/docs/stable/nn.html#torch.nn.Module.state_dict
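A minimal sketch (not from the original gist) of what `state_dict` returns and the usual save/load round trip; an in-memory buffer stands in for a file path here:

```python
import io
import torch
import torch.nn as nn

# state_dict maps each parameter/buffer name to its tensor, keyed by the
# module hierarchy (Sequential uses the child index as the name).
model = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 2))
sd = model.state_dict()
print(sorted(sd.keys()))  # ['0.bias', '0.weight', '2.bias', '2.weight']

# Save/load round trip (a BytesIO buffer stands in for a file on disk):
buf = io.BytesIO()
torch.save(sd, buf)
buf.seek(0)
model.load_state_dict(torch.load(buf))
```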
- `register_buffer`: https://pytorch.org/docs/stable/nn.html#torch.nn.Module.register_buffer
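A quick sketch of why buffers exist (the `RunningMean` module is a made-up example): a buffer is saved in `state_dict` and moves with `.to()`/`.cuda()`, but it is not a `Parameter`, so the optimizer never updates it. BatchNorm's `running_mean` works this way.

```python
import torch
import torch.nn as nn

class RunningMean(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # Registered as a buffer: persisted in state_dict, but not a Parameter.
        self.register_buffer('mean', torch.zeros(dim))

    def forward(self, x):
        # Update the running statistic without creating a trainable parameter.
        self.mean = 0.9 * self.mean + 0.1 * x.mean(dim=0)
        return x - self.mean

m = RunningMean(5)
print('mean' in m.state_dict())  # True: the buffer is saved
print(list(m.parameters()))      # []: nothing for an optimizer to update
```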
- `modules`/`named_modules` vs. `children`/`named_children`: https://pytorch.org/docs/stable/nn.html#torch.nn.Module.modules
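A small sketch (my example, not from the gist) of the difference: `modules()` walks the whole tree recursively, including the module itself, while `children()` yields only the immediate submodules.

```python
import torch.nn as nn

# A nested Sequential: one inner container plus one top-level layer.
net = nn.Sequential(
    nn.Sequential(nn.Linear(2, 2), nn.ReLU()),
    nn.Linear(2, 1),
)

# modules(): net itself, the inner Sequential, Linear, ReLU, and the outer Linear.
print(len(list(net.modules())))   # 5

# children(): only the two direct submodules.
print(len(list(net.children())))  # 2
```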
- `apply` (apply a function to all submodules and self): https://pytorch.org/docs/stable/nn.html#torch.nn.Module.apply
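A sketch of the common use of `apply`, weight initialization (the `init_weights` function is illustrative, not from the gist):

```python
import torch
import torch.nn as nn

def init_weights(m):
    # apply() calls this on every submodule (and on the module itself),
    # so we filter by type to touch only the Linear layers.
    if isinstance(m, nn.Linear):
        nn.init.constant_(m.weight, 1.0)
        nn.init.zeros_(m.bias)

net = nn.Sequential(nn.Linear(3, 3), nn.Linear(3, 3))
net.apply(init_weights)
print(net[0].weight)  # all ones after initialization
```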
- freeze base model parameters (excluding subgraphs from backward): https://pytorch.org/docs/stable/notes/autograd.html#excluding-subgraphs-from-backward
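A minimal sketch of the freezing pattern from that autograd note, using a toy two-layer model (the layer split is my assumption): set `requires_grad=False` on the base, then hand the optimizer only the parameters that still require grad.

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(nn.Linear(10, 10), nn.Linear(10, 2))

# Freeze the "base" layer: autograd will not track it or accumulate grads.
for p in model[0].parameters():
    p.requires_grad = False

# Only pass trainable parameters to the optimizer.
optimizer = optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.1)

loss = model(torch.randn(4, 10)).sum()
loss.backward()
print(model[0].weight.grad)  # None: no gradient flows into the frozen layer
```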
- inspect input/output using hooks: https://pytorch.org/tutorials/beginner/former_torchies/nn_tutorial.html#forward-and-backward-function-hooks
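A small sketch of a forward hook (the model and the `captured` dict are my own illustration): the hook receives `(module, input, output)` after each forward pass, which makes it handy for inspecting intermediate activations.

```python
import torch
import torch.nn as nn

captured = {}

def hook(module, inputs, output):
    # Record the shape of the intermediate activation.
    captured['shape'] = tuple(output.shape)

model = nn.Sequential(nn.Linear(8, 4), nn.ReLU())
handle = model[0].register_forward_hook(hook)

model(torch.randn(2, 8))
print(captured['shape'])  # (2, 4)

handle.remove()  # remove the hook when done to avoid leaking callbacks
```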
- `torch.optim.lr_scheduler`: https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html

```python
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
```
- `torch.set_grad_enabled`: https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html

```python
# forward
# track history only if in train
with torch.set_grad_enabled(phase == 'train'):
    outputs = model(inputs)
    _, preds = torch.max(outputs, 1)
    loss = criterion(outputs, labels)

    # backward + optimize only if in training phase
    if phase == 'train':
        loss.backward()
        optimizer.step()
```
Pytorch references (gist last active July 5, 2018 02:23)