@kylemcdonald
Last active March 12, 2023 18:37
PyTorch ACAI (1807.07543).
@mohazzam1

The show_array and make_mosaic modules do not exist in the shared link.

@enicolin

> The show_array and make_mosaic modules do not exist in the shared link.

make_mosaic is defined in mosaic.py:

from utils.mosaic import make_mosaic

Not sure about show_array, however.

@somepago

You can see the code for show_array.py here - kylemcdonald/python-utils@9528bfc

@kikatuso

Hello,
Thanks for providing this code.
I am, however, encountering an error that I don't know how to fix: the loss_disc.backward() call breaks with

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 16, 3, 3]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
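
For context, this class of error usually means a parameter was updated in place (for example by an optimizer step) between two backward passes that share a graph. A minimal sketch, purely illustrative and not the gist's code, that raises the same RuntimeError:

import torch

# A toy layer and optimizer; any module whose weights are saved for backward works.
lin = torch.nn.Linear(2, 2)
opt = torch.optim.SGD(lin.parameters(), lr=0.1)

x = torch.randn(1, 2, requires_grad=True)
y = lin(x)
loss_a = y.sum()
loss_b = (y ** 2).sum()

loss_a.backward(retain_graph=True)
opt.step()         # in-place update of lin.weight bumps its version counter
loss_b.backward()  # RuntimeError: ... modified by an inplace operation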

@The-D-lab

> The loss_disc.backward() call breaks with
> RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 16, 3, 3]] is at version 2; expected version 1 instead.

I think it's an error in the code. Interestingly, it seems to run fine on older versions of PyTorch, but it looks like the autoencoder L2 loss and the discriminator L2 loss are swapped: ae_l2 should take disc, while disc_l2 should take disc_mix. Swapping them makes the error go away.
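
If that diagnosis is right, the corrected terms would look something like the sketch below. This is a hypothetical reconstruction, not the gist's code: disc and disc_mix follow the naming in the comment above and stand for the critic's outputs on the interpolated reconstructions and on the gamma-blended inputs, and the exact names in the gist may differ.

import torch

# Hypothetical sketch of the suggested swap. In the ACAI paper (1807.07543),
# the autoencoder's regularizer penalizes the critic's output on interpolated
# reconstructions, while the discriminator's regularizer penalizes its output
# on the gamma-blended inputs.
def l2_regularizers(disc: torch.Tensor, disc_mix: torch.Tensor):
    ae_l2 = disc.pow(2).mean()        # autoencoder term: takes disc
    disc_l2 = disc_mix.pow(2).mean()  # discriminator term: takes disc_mix
    return ae_l2, disc_l2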

@stamate

stamate commented Apr 16, 2022

In such a pipeline:

import torch.optim as optim

# Build the models before creating their optimizers.
G = Model1()
D = Model2()
optim1 = optim.Adam(G.parameters())
optim2 = optim.Adam(D.parameters())

recons, z = G(input)
loss1 = loss_func1(recons)
diff = D(z)
loss2 = loss_func2(diff)
loss3 = loss_func3(diff)
loss_G = loss1 + loss2  # we don't want to update D's parameters here
loss_D = loss3

Solution #1
optim1.zero_grad()
loss_G.backward(retain_graph=True)  # keep the graph alive for loss_D's backward
optim2.zero_grad()
loss_D.backward()
optim1.step()  # step both optimizers only after all backward passes
optim2.step()

Solution #2
optim1.zero_grad()
loss_G.backward(retain_graph=True, inputs=list(G.parameters()))
optim1.step()  # safe: the next backward is restricted to D's parameters
optim2.zero_grad()
loss_D.backward(inputs=list(D.parameters()))
optim2.step()
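
Solution #1 works because both optimizer steps are deferred until after every backward pass, so no parameter is modified in place while a retained graph still needs it. Solution #2 works because the inputs= argument (added around PyTorch 1.8) restricts each backward pass to the listed parameters, so loss_D.backward() never traverses G's already-updated weights.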

Both of the solutions come from here.
