

@morganmcg1
Created May 4, 2021 15:35
saturn error
/srv/conda/envs/saturn/lib/python3.7/site-packages/torch/nn/functional.py:1204: UserWarning: Output 0 of BackwardHookFunctionBackward is a view and is being modified inplace. This view was created inside a custom Function (or because an input was returned as-is) and the autograd logic to handle view+inplace would override the custom backward associated with the custom Function, leading to incorrect gradients. This behavior is deprecated and will be forbidden starting version 1.6. You can remove this warning by cloning the output of the custom Function. (Triggered internally at /pytorch/torch/csrc/autograd/variable.cpp:547.)
result = torch.relu_(input)
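
The warning above points at the usual trigger: an in-place activation (torch.relu_, i.e. nn.ReLU(inplace=True) or F.relu(..., inplace=True)) modifying the output of a module that has a full backward hook registered, which overwrites the tensor the hook machinery wrapped. A minimal sketch of the workaround the message hints at, assuming a torchvision-style pretrained classifier (hypothetical; the actual model is not shown in this gist):

    import torch.nn as nn
    import torchvision.models as models

    # Hypothetical model: torchvision ResNets use nn.ReLU(inplace=True) throughout.
    model = models.resnet50(pretrained=True)

    # Disable the in-place update so the output wrapped by any full backward hook
    # is not modified in place (alternatively, clone the hooked output first).
    for module in model.modules():
        if isinstance(module, nn.ReLU):
            module.inplace = False
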
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-7-8393222d813a> in <module>
----> 1 simple_train_single(**model_params)
<ipython-input-6-29dd15a1ccdf> in simple_train_single(bucket, prefix, batch_size, downsample_to, n_epochs, base_lr, pretrained_classes)
75 # zero the parameter gradients
76 optimizer.zero_grad()
---> 77 loss.backward()
78 scheduler.step()
79
/srv/conda/envs/saturn/lib/python3.7/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
243 create_graph=create_graph,
244 inputs=inputs)
--> 245 torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
246
247 def register_hook(self, hook):
/srv/conda/envs/saturn/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
145 Variable._execution_engine.run_backward(
146 tensors, grad_tensors_, retain_graph, create_graph, inputs,
--> 147 allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
148
149
/srv/conda/envs/saturn/lib/python3.7/site-packages/torch/utils/hooks.py in hook(grad_input, _)
101 def hook(grad_input, _):
102 if self.grad_outputs is None:
--> 103 raise RuntimeError("Module backward hook for grad_input is called before "
104 "the grad_output one. This happens because the gradient "
105 "in your nn.Module flows to the Module's input without "
RuntimeError: Module backward hook for grad_input is called before the grad_output one. This happens because the gradient in your nn.Module flows to the Module's input without passing through the Module's output. Make sure that the output depends on the input and that the loss is computed based on the output.
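
This RuntimeError is the downstream effect of the warning above: once the in-place op rewrites the output that the full backward hook wrapped, the loss no longer flows through the recorded output, so during backward the grad_input hook fires before any grad_output was captured. A minimal sketch of the same failure mode, using hypothetical names since the real training code is not included in the gist:

    import torch
    import torch.nn as nn

    layer = nn.Linear(4, 4)

    def log_grads(module, grad_input, grad_output):
        # Stand-in for whatever registered the hook (e.g. a gradient logger).
        pass

    layer.register_full_backward_hook(log_grads)

    x = torch.randn(2, 4)
    out = torch.relu_(layer(x))   # in-place op on the hooked output -> UserWarning above
    out.sum().backward()          # -> RuntimeError: grad_input hook called before grad_output

    # Keeping the hooked output intact avoids both messages, e.g.:
    #     out = torch.relu(layer(x))

If the hooks were registered indirectly (for example by a gradient-logging utility), the same fix applies: make sure no in-place op mutates the hooked module's output before the loss is computed.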