In Git you can add a submodule to a repository: essentially a repository embedded inside your main repository. This can be very useful. A couple of use cases for submodules:
- Splitting a large codebase into multiple repositories.
- Reusing a shared library or component across several projects.
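As a minimal sketch of the workflow, here is a submodule being added end to end. The repository names (`shared-lib`, `main-app`) and paths are made up for illustration; the example uses throwaway local repositories so it is self-contained, but with a real remote you would pass its URL to `git submodule add`.

```shell
# Minimal sketch: create two throwaway local repositories (names are made up).
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q shared-lib
git -C shared-lib -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "initial commit"
git init -q main-app && cd main-app
# Embed shared-lib inside main-app as a submodule under libs/.
# (protocol.file.allow=always is only needed because the "remote" here is a
# local path; with a real URL you would just run `git submodule add <url>`.)
git -c protocol.file.allow=always submodule add "$tmp/shared-lib" libs/shared-lib
cat .gitmodules   # shows the recorded submodule path and URL
```

When someone else clones `main-app`, the submodule contents are fetched with `git clone --recurse-submodules`, or after the fact with `git submodule update --init`.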
import scipy.io
import numpy as np

# Load the MATLAB file; loadmat returns a dict mapping variable names to arrays.
data = scipy.io.loadmat("subject.mat")

# Skip the metadata keys ('__header__', '__version__', '__globals__') and any
# 'readme' entry, then write each remaining variable out as a CSV file.
for key in data:
    if '__' not in key and 'readme' not in key:
        np.savetxt("filesforyou/" + key + ".csv", data[key], delimiter=',')
import torch
import torch.nn as nn
from torch.autograd import Variable

# Do this to display the PyTorch version.
# The version used in this gist is 0.3.0.post4.
print(torch.__version__)

# There are three steps to demonstrate a multi-head network:
# 1. Build the network
my.vennDiagram <- function(object, include = "both", names = NULL,
    mar = rep(1, 4), cex = c(1.5, 1, 0.7), lwd = 1, circle.col = NULL,
    counts.col = NULL, show.include = NULL, ...)
{
    include <- as.character(include)
    LenInc <- min(length(include), 2)
    if (is(object, "VennCounts")) {
        include <- include[1]
        LenInc <- 1
    }
They all suggest that apparent improvements to the state of the art in ML and related fields are often not real, or at least stem from factors other than what the authors claim:
- The State of Sparsity in Deep Neural Networks
- What Is the State of Neural Network Pruning?
- On the State of the Art of Evaluation in Neural Language Models
- Do Transformer Modifications Transfer Across Implementations and Applications?