Deep Leakage from Gradients.ipynb
@ProfessorX commented Dec 11, 2019

Leaving a comment before this blows up.


@dexhunter commented Dec 13, 2019

interesting work, thanks for sharing


@Wan-Yifei commented Dec 14, 2019

This is going to blow up.


@LYF14020510036 commented Dec 16, 2019

Hello, I'd like to ask about your paper Deep Leakage from Gradients (great work). Can your method recover graph data? And if the label-space size (|Y|) of each local user's graph data is different, can the graph data still be recovered? In my training, every user has a different batch size; can graph data be recovered in that case?


@815961618 commented Dec 18, 2019

Hello, with this code the loss suddenly explodes after a few dozen iterations every time I run it. Have you encountered this? Here is a log from one run:

0 117.4059
10 4.3706
20 0.2128
30 0.0191
40 0.0050
50 0.0022
60 0.0030
70 0.0008
80 0.0004
90 213.8976
100 213.8976
110 213.8976
120 213.8976
130 213.8976
140 213.8976
150 213.8976
160 213.8976
170 213.8976
180 213.8976
190 213.8976
200 213.8976
210 213.8976
220 213.8976
230 213.8976
240 213.8976
250 213.8976
260 213.8976
270 213.8976
280 213.8976
290 213.8976


@Lyken17 (Author) commented Dec 18, 2019

@LYF14020510036 I haven't tested on graph data yet. I think DLG should work, since it only requires the model to be twice differentiable, as long as the batch size is known.
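The twice-differentiability comes from differentiating through the dummy gradients themselves. Roughly, it's this step (a minimal sketch with illustrative names such as `net`, `origin_grads`, and `dummy_onehot`, not the notebook's exact code):

```python
import torch
import torch.nn.functional as F

# One gradient-matching evaluation. create_graph=True builds a graph over the
# dummy gradients themselves, so the L2 distance below can be backpropagated
# to dummy_data -- this is where twice-differentiability of the model matters.
def gradient_match_loss(net, origin_grads, dummy_data, dummy_onehot):
    pred = net(dummy_data)
    ce = torch.mean(torch.sum(-dummy_onehot * F.log_softmax(pred, dim=-1), dim=-1))
    dummy_grads = torch.autograd.grad(ce, net.parameters(), create_graph=True)
    return sum(((dg - og) ** 2).sum() for dg, og in zip(dummy_grads, origin_grads))
```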


@Lyken17 (Author) commented Dec 18, 2019

@815961618 I think that is an issue with L-BFGS. I am now testing Adam, which is more robust, and will update with a new version soon.
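Roughly, the swap looks like this (a sketch, assuming trainable tensors `dummy_data` / `dummy_label` and a gradient-matching objective like the one sketched above; names are illustrative):

```python
import torch

# L-BFGS (what the notebook uses) takes a closure and converges quickly,
# but can blow up once the loss is already near zero:
#   optimizer = torch.optim.LBFGS([dummy_data, dummy_label])
#   optimizer.step(closure)

# Adam alternative: usually needs more iterations, but is more stable.
optimizer = torch.optim.Adam([dummy_data, dummy_label], lr=0.1)
for it in range(300):
    optimizer.zero_grad()
    loss = gradient_match_loss(net, origin_grads, dummy_data,
                               torch.softmax(dummy_label, dim=-1))
    loss.backward()
    optimizer.step()
```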


@slowbull commented Dec 24, 2019

Great work! In the paper the label variable is also trainable, but in the code the label is fixed to the ground truth. It looks like that earlier version needs more iterations to converge. Could you give hints or code for how to train when both the data and the label are trainable?
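From the paper, I would guess it looks roughly like the following, where the label logits are optimized and their softmax is used as a soft label. This is my own sketch (it assumes a model `net` and leaked gradients `origin_grads`; shapes are illustrative), so please correct me if this is not what the paper means:

```python
import torch
import torch.nn.functional as F

# Both the input and the label logits are optimized; the softmax of the
# logits acts as a soft, differentiable label.
dummy_data = torch.randn(1, 3, 32, 32, requires_grad=True)
dummy_label = torch.randn(1, 100, requires_grad=True)  # logits over the classes
optimizer = torch.optim.LBFGS([dummy_data, dummy_label])

def closure():
    optimizer.zero_grad()
    pred = net(dummy_data)
    soft_label = F.softmax(dummy_label, dim=-1)
    ce = torch.mean(torch.sum(-soft_label * F.log_softmax(pred, dim=-1), dim=-1))
    dummy_grads = torch.autograd.grad(ce, net.parameters(), create_graph=True)
    grad_diff = sum(((dg - og) ** 2).sum()
                    for dg, og in zip(dummy_grads, origin_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(300):
    optimizer.step(closure)
```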


@Manchery commented Jan 16, 2020

Hi, great work! Is there any code for batched data?
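My guess, for what it's worth, is that only the dummy tensors need a batch dimension while the objective stays the same (my own sketch, shapes illustrative and untested):

```python
import torch

batch_size = 8  # must match the batch size the leaked gradients came from
dummy_data = torch.randn(batch_size, 3, 32, 32, requires_grad=True)
dummy_label = torch.randn(batch_size, 100, requires_grad=True)
optimizer = torch.optim.LBFGS([dummy_data, dummy_label])
# The closure is unchanged: the cross-entropy is averaged over the batch and
# the gradient-matching loss compares against gradients that were computed on
# the whole batch. Larger batches seem to need noticeably more iterations.
```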


@Harxis commented Mar 15, 2020

I have experimented with ResNet based on the code shared at https://github.com/mit-han-lab/dlg, and I followed the paper's suggestions: remove the strides and replace the ReLU activation with Sigmoid. Still, I found that DLG fails on ResNet. Could you please share the code for ResNet?
Or if anyone has gotten DLG working on ResNet, please show me how you did it.


@SiHE-master commented Nov 3, 2021

> Hello, with this code the loss suddenly explodes after a few dozen iterations every time I run it. Have you encountered this?

@815961618 Not sure whether you ever solved this, but commenting out the manual seed (torch.manual_seed) fixed it for me.

