import numpy as np
from distfit import distfit

# Generate test data
data1 = np.random.normal(loc=5.0, scale=10, size=1000)
# Initialize model
dist1 = distfit(bins=25, alpha=0.02, stats='ks')
# Fit
dist1.fit_transform(data1, verbose=1)
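With stats='ks', distfit ranks candidate distributions by the Kolmogorov–Smirnov statistic: the largest gap between the empirical CDF of the data and the fitted model's CDF. A stdlib-only sketch of that statistic for a fitted normal (the helper name is illustrative, not a distfit internal):

```python
import math
import random

def ks_statistic(data, mu, sigma):
    """Max distance between the empirical CDF and the N(mu, sigma) CDF."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))
        # The ECDF jumps from i/n to (i+1)/n at x; check both sides
        d = max(d, abs(cdf - i / n), abs(cdf - (i + 1) / n))
    return d

random.seed(0)
sample = [random.gauss(5.0, 10.0) for _ in range(1000)]
mu = sum(sample) / len(sample)
sigma = (sum((x - mu) ** 2 for x in sample) / len(sample)) ** 0.5
print(round(ks_statistic(sample, mu, sigma), 4))  # small value -> good fit
```

The smaller the statistic, the better the candidate distribution tracks the data, which is how distfit orders its leaderboard of fitted distributions.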
object comb {
  def nCr(n: Int, r: Int): Int = {
    // Factorial (iterative; note an Int overflows for n > 12)
    def fact(i: Int): Int = {
      var res = 1
      for (e <- i to 2 by -1)
        res *= e
      res
    } // End 'fact' function
    fact(n) / (fact(r) * fact(n - r))
  }
}
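Since the rest of this post's snippets are Python, the same combination count can be cross-checked there; the factorial-ratio formula is shown alongside the built-in math.comb for comparison:

```python
from math import comb, factorial

def nCr(n, r):
    # Same factorial-ratio formula as the Scala snippet above
    return factorial(n) // (factorial(r) * factorial(n - r))

print(nCr(5, 2))    # 10
print(comb(52, 5))  # 2598960 possible five-card hands
```

Python integers are arbitrary precision, so unlike the Scala Int version this does not overflow for large n.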
p1 = drug_user(prob_th=0.5, sensitivity=0.97, specificity=0.95, prevelance=0.005)
print("Probability of the test-taker being a drug user, in the first round of test, is:", round(p1, 3))
print()
p2 = drug_user(prob_th=0.5, sensitivity=0.97, specificity=0.95, prevelance=p1)
print("Probability of the test-taker being a drug user, in the second round of test, is:", round(p2, 3))
print()
p3 = drug_user(prob_th=0.5, sensitivity=0.97, specificity=0.95, prevelance=p2)
print("Probability of the test-taker being a drug user, in the third round of test, is:", round(p3, 3))
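Each call feeds the previous posterior back in as the new prior, which is why repeated positive tests drive the probability up so quickly. A self-contained sketch of that arithmetic (the posterior helper is illustrative, standing in for drug_user):

```python
def posterior(prior, sensitivity=0.97, specificity=0.95):
    """P(user | positive test) via Bayes' rule."""
    num = sensitivity * prior
    den = num + (1 - specificity) * (1 - prior)
    return num / den

p = 0.005  # population prevalence as the initial prior
for round_no in range(1, 4):
    p = posterior(p)
    print(f"Round {round_no}: {p:.3f}")
# Round 1: 0.089
# Round 2: 0.654
# Round 3: 0.973
```

A single positive test on a 0.5% prior still leaves under a 9% chance of being a user; it is the chaining of posteriors that pushes the third round above 97%.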
ps = []
sens = []
for sen in [i*0.001 + 0.95 for i in range(1, 50, 2)]:  # sensitivity from 0.951 to 0.999
    sens.append(sen)
    p = drug_user(prob_th=0.5, sensitivity=sen, specificity=0.95, prevelance=0.005, verbose=False)
    ps.append(p)
plt.figure(figsize=(10, 5))
plt.title("Probability of user with test sensitivity", fontsize=15)
plt.plot(sens, ps, color='k', marker='o', markersize=8)
ps = []
pres = []
for pre in [i*0.001 for i in range(1, 51, 2)]:  # prevalence from 0.1% to 4.9%
    pres.append(pre*100)
    p = drug_user(prob_th=0.5, sensitivity=0.97, specificity=0.95, prevelance=pre, verbose=False)
    ps.append(p)
plt.figure(figsize=(10, 5))
plt.title("Probability of user with prevalence rate", fontsize=15)
plt.plot(pres, ps, color='k', marker='o', markersize=8)
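The sweep above stops at 4.9% prevalence, and that is not an accident: the posterior crosses 0.5 exactly where the true-positive mass equals the false-positive mass, which can be solved in closed form. A quick check of that break-even point:

```python
sensitivity, specificity = 0.97, 0.95
# Posterior = 0.5 when sensitivity * p == (1 - specificity) * (1 - p)
p_breakeven = (1 - specificity) / (sensitivity + 1 - specificity)
print(f"{p_breakeven:.4f}")  # ~0.0490, i.e. about 4.9% prevalence
```

Below roughly 4.9% prevalence, a single positive result from this test leaves the subject more likely a non-user than a user.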
def drug_user(
    prob_th=0.5,
    sensitivity=0.99,
    specificity=0.99,
    prevelance=0.01,
    verbose=True):
    """
    Computes the posterior using Bayes' rule
    """
    p_user = prevelance
    # P(user | positive test) by Bayes' rule
    num = sensitivity * p_user
    den = sensitivity * p_user + (1 - specificity) * (1 - p_user)
    prob = num / den
    if verbose and prob > prob_th:
        print("The test-taker could be a user")
    return prob
for i in range(epochs):
    optimizer.zero_grad()
    output = model(X)
    loss = criterion(output, y)
    loss.backward()
    optimizer.step()
    running_loss.append(loss.item())
    if i != 0 and (i+1) % 20 == 0:
        logits = model(X).detach().numpy().flatten()
        plt.figure(figsize=(15, 3))
epochs = 10
for i in range(epochs):
    optimizer.zero_grad()                              # Reset the grads
    output = model(X)                                  # Forward pass
    loss = criterion(output.view(output.shape[0]), y)  # Calculate loss
    print(f"Epoch - {i+1}, Loss - {round(loss.item(), 3)}")  # Print loss
    loss.backward()                                    # Backpropagation
    optimizer.step()                                   # Optimizer one step
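The zero-grad / forward / loss / backward / step cycle above is the canonical PyTorch recipe. For intuition about what each step does, the same cycle can be written by hand for a one-weight linear model in plain NumPy (illustrative only; the gradient is computed manually instead of by autograd):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100)
y = 3.0 * X + rng.normal(scale=0.1, size=100)  # true weight is 3.0

w, lr = 0.0, 0.1
for epoch in range(50):
    output = w * X                         # forward pass
    loss = np.mean((output - y) ** 2)      # MSE criterion
    grad = np.mean(2 * (output - y) * X)   # "backward": dLoss/dw, fresh each epoch
    w -= lr * grad                         # "step": gradient descent update
print(round(w, 2))  # w converges near 3.0
```

Here zero_grad has no analogue because grad is recomputed from scratch each epoch; in PyTorch, .backward() accumulates into .grad, which is exactly why the explicit reset is needed.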
# Reset the gradients i.e. do not accumulate over passes
optimizer.zero_grad()
# Forward pass
output = model(X)
# Calculate loss
loss = criterion(output, y)
# Backward pass (AutoGrad)
loss.backward()
# One step of the optimizer
optimizer.step()
model = Network()
print(model)
Network(
  (hidden1): Linear(in_features=5, out_features=8, bias=True)
  (hidden2): Linear(in_features=8, out_features=4, bias=True)
  (relu): ReLU()
  (output): Linear(in_features=4, out_features=1, bias=True)
  (sigmoid): Sigmoid()
)
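The printed architecture implies a parameter count that is easy to verify by hand: each Linear(in, out) layer holds in*out weights plus out biases, while ReLU and Sigmoid hold none. A quick check, independent of PyTorch:

```python
layers = [(5, 8), (8, 4), (4, 1)]  # (in_features, out_features) per Linear layer
total = sum(i * o + o for i, o in layers)
print(total)  # 89 trainable parameters
```

Summing per-layer counts like this is a handy sanity check against what a framework reports for small networks.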