@Lyken17
Last active April 9, 2024 11:51
Deep Leakage from Gradients.ipynb
@ProfessorX

Leaving my name here before this blows up.

@dexhunter

interesting work, thanks for sharing

@Wan-Yifei

This is going to take off.

@LYF14020510036

Hi, I have a question about your paper Deep Leakage from Gradients (great work). Can your method recover graph data? And if the label-space size (|Y|) of each local user's graph data is different, can graph data still be recovered? In my training, each user's batch size is also set differently; can graph data be recovered in that case?

@TheSunWillRise

TheSunWillRise commented Dec 18, 2019

Hi, with this code the loss suddenly blows up after a few dozen iterations every time. Have you run into this?
Below is a log from one run:

0 117.4059
10 4.3706
20 0.2128
30 0.0191
40 0.0050
50 0.0022
60 0.0030
70 0.0008
80 0.0004
90 213.8976
100 213.8976
110 213.8976
120 213.8976
130 213.8976
140 213.8976
150 213.8976
160 213.8976
170 213.8976
180 213.8976
190 213.8976
200 213.8976
210 213.8976
220 213.8976
230 213.8976
240 213.8976
250 213.8976
260 213.8976
270 213.8976
280 213.8976
290 213.8976

@Lyken17
Author

Lyken17 commented Dec 18, 2019

@LYF14020510036 Haven't tested on graph data yet. I think DLG should work, since it only requires the model to be twice-differentiable, as long as the batch size is known.
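For anyone unsure what "twice-differentiable" means in practice, a minimal illustration (not from the notebook): the gradients taken with `create_graph=True` must themselves be differentiable, because DLG backpropagates through the gradient-matching loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy model with a smooth activation, as DLG prefers.
net = nn.Sequential(nn.Linear(8, 16), nn.Sigmoid(), nn.Linear(16, 4))
x = torch.randn(1, 8)
y = torch.tensor([2])

loss = F.cross_entropy(net(x), y)
# First-order gradients, kept in the graph so they can be differentiated again.
grads = torch.autograd.grad(loss, net.parameters(), create_graph=True)
grad_norm = sum(g.pow(2).sum() for g in grads)
grad_norm.backward()  # succeeds only if second-order gradients exist
print(all(p.grad is not None for p in net.parameters()))  # True
```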

@Lyken17
Author

Lyken17 commented Dec 18, 2019

@815961618 I think that is an issue with L-BFGS. I am now testing Adam (which is more robust) and will update a new version soon.
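Until the updated version lands, a minimal sketch of that swap, reusing `net`, `criterion`, `origin_grad` (the leaked gradients), `dummy_data`, and `dummy_label` from the notebook's setup (and assuming `criterion` accepts soft labels): replace the L-BFGS closure with plain Adam steps on the gradient-matching loss. This is not the author's updated code, just one way to try it.

```python
import torch

# Adam instead of L-BFGS; often less prone to the sudden divergence reported above.
optimizer = torch.optim.Adam([dummy_data, dummy_label], lr=0.1)

for iters in range(300):
    optimizer.zero_grad()
    pred = net(dummy_data)
    dummy_loss = criterion(pred, torch.softmax(dummy_label, dim=-1))
    # Gradients w.r.t. the model weights, kept differentiable via create_graph=True.
    dummy_grad = torch.autograd.grad(dummy_loss, net.parameters(), create_graph=True)
    # L2 distance between the dummy gradients and the leaked gradients.
    grad_diff = sum(((dg - og) ** 2).sum() for dg, og in zip(dummy_grad, origin_grad))
    grad_diff.backward()
    optimizer.step()
    if iters % 10 == 0:
        print(iters, "%.4f" % grad_diff.item())
```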

@slowbull

Great work! In the paper the label variable is also trainable, but in the code it is fixed to the ground truth. It looks like the paper's version requires more iterations to converge. Could you also give hints or code for how to train when both the data and the label are trainable?
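Not an answer from the author, but a hedged sketch of the label-trainable setup: treat the dummy label as a trainable logit vector and compute the task loss against its softmax with a soft-label cross-entropy. The shapes below (one 32x32 RGB image, 100 classes) and the names `net` / `origin_grad` are placeholders standing in for the notebook's setup; the gradient-matching loop itself stays the same.

```python
import torch
import torch.nn.functional as F

def cross_entropy_soft(pred, target):
    # Soft-label cross-entropy: target is a probability vector, not a class index.
    return torch.mean(torch.sum(-target * F.log_softmax(pred, dim=-1), dim=1))

# Placeholder shapes: one 32x32 RGB image and 100 classes.
dummy_data = torch.randn(1, 3, 32, 32, requires_grad=True)
dummy_label = torch.randn(1, 100, requires_grad=True)  # trainable label logits

# Both variables go to the optimizer; inside the matching loop (unchanged otherwise),
# the task loss becomes:
#   dummy_loss = cross_entropy_soft(net(dummy_data), F.softmax(dummy_label, dim=-1))
optimizer = torch.optim.LBFGS([dummy_data, dummy_label])
```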

@Manchery

Hi, great work! Is there any code for batched data?

@Harxis

Harxis commented Mar 15, 2020

I have experimented with ResNet based on the code shared at https://github.com/mit-han-lab/dlg. Following the paper, I removed the strides and replaced the ReLU activations with Sigmoid, but DLG still fails on ResNet. Could you please share the ResNet code?
Or, if anyone has experimented with ResNet, please show me how you did it.
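Not a fix from the authors, but a hedged sketch of applying the paper's suggested modifications to a torchvision ResNet (Sigmoid instead of ReLU, striding removed); even with these changes, reconstruction on deep ResNets is not guaranteed to converge.

```python
import torch.nn as nn
from torchvision.models import resnet18

def patch_for_dlg(module: nn.Module) -> None:
    """Recursively replace ReLU with Sigmoid and remove striding."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.Sigmoid())
        elif isinstance(child, nn.Conv2d):
            child.stride = (1, 1)  # keep spatial resolution
        elif isinstance(child, nn.MaxPool2d):
            child.stride = 1
        patch_for_dlg(child)

net = resnet18(num_classes=100)  # num_classes is a placeholder
patch_for_dlg(net)
```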

@SiHE-master

(Quoting @TheSunWillRise's report above: the loss suddenly blows up after a few dozen iterations; same loss log as above.)

@815961618 Not sure whether you have solved this yet, but commenting out the manual seed (torch.manual_seed) fixed it for me.

@mil137

mil137 commented Oct 26, 2022

(Quoting @Harxis's question above about DLG failing on ResNet even with strides removed and ReLU replaced by Sigmoid.)

Have you resolved this? I'm also getting stuck here @Harxis @Lyken17
