import torch

torch.cuda.is_available()
# output:
# False

# use of cuda on a machine with no GPU
print(torch.Tensor(1, 2).cuda())
# ---------------------------------------------------------------------------
# RuntimeError                    Traceback (most recent call last)
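A safe pattern here (a minimal sketch of my own, not part of the original snippet) is to select the device at runtime, so the same code runs on both CPU-only and GPU machines:

# fall back to the CPU when CUDA is unavailable
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.rand(1, 2).to(device)  # never raises, whichever machine it runs on
print(x)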
x = torch.rand(1, 2)
print(x)
y = torch.rand(1, 2)
print(y)
# output:
# tensor([[ 0.9781,  0.0128]])
# tensor([[ 0.9404,  0.6528]])
# In-place / out-of-place
value_1 = 0
value_2 = 2

# out-of-place: the addition creates a new object
value_3 = value_1 + value_2
print(value_3)

# in-place
value_2 += value_1
print(value_2)
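One caveat worth flagging (my addition, not in the original gist): Python ints are immutable, so += on an int actually rebinds the name to a new object; id() makes the difference visible with a genuinely mutable type:

nums = [1, 2]
print(id(nums))
nums += [3]      # lists support true in-place modification
print(id(nums))  # same id: the same object was mutated

n = 2
print(id(n))
n += 1           # ints are immutable: += rebinds n to a new object
print(id(n))     # different id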
a = torch.rand(3)
print(a)
a.fill_(2.0)  # in-place: methods ending in an underscore modify the tensor itself
print(a)
b = a.add(3)  # out-of-place: returns a new tensor, a is unchanged
print(b)
# output:
# tensor([ 0.0025,  0.9584,  0.7258])
# tensor([ 2.,  2.,  2.])
# tensor([ 5.,  5.,  5.])
# converting a torch tensor to numpy
a = torch.Tensor(1, 5)  # uninitialized memory, so the values are arbitrary
print(a)
print(a.numpy())
# output:
# tensor(1.00000e-19 *
#        [[ 0.0000,  1.0842,  0.0000,  1.0842,  0.0000]])
# [[ 0.00000000e+00  1.08420217e-19  0.00000000e+00  1.08420217e-19
#    0.00000000e+00]]
# cool experiment: torch.from_numpy shares memory with the numpy array
import numpy as np

var_1 = np.random.randint(1, 3, [2, 3])
new_ = torch.from_numpy(var_1)
print(var_1)
print(new_)

np.add(var_1, 1, out=var_1)  # modify the numpy array in place
print(var_1)
print(new_)  # the tensor changes automatically: both views share one buffer
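The sharing works in the other direction too; a quick sketch (my addition) mutating the tensor in place:

new_.add_(10)  # in-place op on the tensor
print(var_1)   # the numpy array reflects the change as well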
a = torch.Tensor(3, 8)
print(a)
b = a.view(-1, 2)  # reshape to (12, 2); -1 tells torch to infer that dimension
print(b)
# output (uninitialized values, truncated):
# tensor([[ 0.0000e+00,  1.0842e-19,  5.1128e+35,  3.6902e+19,  3.3927e-26,
#           1.4013e-45,  3.2906e-26,  1.4013e-45],
#         [ 3.2668e-26,  1.4013e-45,  3.3719e-26,  1.4013e-45, -1.7837e-37,
#          -1.6074e+07,  0.0000e+00,  0.0000e+00],
#         ...
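view only succeeds when the total number of elements is preserved; a small sketch (my addition) of the common flattening idiom:

a = torch.rand(3, 8)
flat = a.view(-1)  # a single inferred dimension: 24 elements
print(flat.shape)  # torch.Size([24])
# a.view(5, 5) would raise a RuntimeError: 24 elements cannot fill a 5x5 view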
import tensorflow as tf  # TF 1.x style API

def multilayer_perceptron(x, weights, biases, keep_prob):
    # hidden layer: affine transform + ReLU, regularized with dropout
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    layer_1 = tf.nn.dropout(layer_1, keep_prob)
    # linear output layer (the logits)
    out_layer = tf.matmul(layer_1, weights['out']) + biases['out']
    return out_layer

n_hidden_1 = 38
n_input = train_x.shape[1]   # train_x / train_y are assumed defined earlier
n_classes = train_y.shape[1]
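For completeness, one way the surrounding graph might be wired up (a sketch under my own assumptions; the weight shapes follow from the layer sizes above, but the original code for this part isn't shown):

weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_hidden_1, n_classes])),
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_classes])),
}
x = tf.placeholder("float", [None, n_input])
keep_prob = tf.placeholder("float")
predictions = multilayer_perceptron(x, weights, biases, keep_prob)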
In neural networks we need to normalize the data (features) when they have very different ranges,
for example when one feature ranges from 1000 to 30000 while another ranges from 0.01 to 0.99.
We cast both of them into one unified range, for example (-1 to +1) or (0 to 1). Why do we do that? Two reasons:
first, to eliminate the influence of one factor over another (i.e. to give the features equal chances);
second, the gradient descent with momentum (GDM) algorithm used for backpropagation converges
faster with normalized data than with un-normalized data. So if all of your features already share the same range,
you don't need normalization.
Read the link I provided; it contains the equations required for normalizing to both the [0, 1] and [-1, +1] ranges.
Standardization is changing the data in such a way that the new set has mean = 0 and standard deviation = 1. This kind of
scaling is useful when the data contain outliers (anomalies), because unlike normalization it imposes no fixed boundaries.
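As a quick numpy sketch of both scalings (my addition; the sample values are made up for illustration):

import numpy as np

feature = np.array([1000., 5000., 12000., 30000.])

# min-max normalization to [0, 1]: (x - min) / (max - min)
normalized = (feature - feature.min()) / (feature.max() - feature.min())
print(normalized)  # [0.         0.13793103 0.37931034 1.        ]

# standardization: (x - mean) / std, giving mean 0 and standard deviation 1
standardized = (feature - feature.mean()) / feature.std()
print(standardized)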
c = np.array([[3., 4], [5., 6], [6., 7]])
print(np.mean(c, 1))  # mean along axis=1, i.e. across each row
print((5. + 6) / 2)   # the second row's mean, computed by hand
print([float(sum(l)) / len(l) for l in zip(*c)])  # column means (axis=0)
+------------+----------+----------+
|            |    A     |    B     |
+------------+----------+----------+
| 0          | 0.626386 | 1.52325  |  ----axis=1----->
+------------+----------+----------+
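To make the axis directions concrete with the array above (my addition):

print(np.mean(c, axis=0))  # down the columns: [4.66666667 5.66666667]
print(np.mean(c, axis=1))  # across the rows:  [3.5 5.5 6.5]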