| PyTorch | MXNet |
|---|---|
| `.item()` | `.asscalar()` |
| `.numpy()` | `.asnumpy()` |
| `.from_numpy()` | `.array()` |
| `device = torch.device("cuda")` | `ctx = mx.gpu()` |
| `x = x.to(device)` | `x = x.as_in_context(ctx)` |
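A minimal sketch of the conversion calls on the PyTorch side of the table (the MXNet counterparts are noted in comments; this assumes NumPy and PyTorch are installed):

```python
import numpy as np
import torch

a = np.arange(4.0)            # plain NumPy array: [0., 1., 2., 3.]
t = torch.from_numpy(a)       # NumPy -> tensor   (MXNet: mx.nd.array(a))
b = t.numpy()                 # tensor -> NumPy   (MXNet: t.asnumpy())
s = t.sum().item()            # scalar tensor -> Python float (MXNet: .asscalar())
print(s)                      # 6.0
```

Note that `torch.from_numpy` shares memory with the source array, while `mx.nd.array` copies it, so in-place edits to `a` are visible through `t` only in PyTorch.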
Specifying that a variable needs autograd in PyTorch:

```python
x = torch.ones(2, 2, requires_grad=True)
```
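The flag above can be exercised with a small gradient computation; the MXNet counterpart sketched in the comment uses `attach_grad`, as an assumption based on MXNet's autograd API:

```python
import torch

# Mark the tensor so autograd records operations on it
x = torch.ones(2, 2, requires_grad=True)
y = (x * x).sum()   # y = sum of x_ij^2 = 4.0
y.backward()        # dy/dx = 2*x, so every entry of x.grad is 2.0

# MXNet sketch: x = mx.nd.ones((2, 2)); x.attach_grad()
# then compute y inside a `with mx.autograd.record():` block and call y.backward()
print(x.grad)
```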