Bhabani mohapatras

  • Bangalore, India
@mohapatras
mohapatras / sensivity_specifity_cutoff.py
Created May 13, 2022 23:33 — forked from twolodzko/sensivity_specifity_cutoff.py
Use Youden's index to determine the cut-off for classification
import numpy as np
from sklearn.metrics import roc_curve

def sensivity_specifity_cutoff(y_true, y_score):
    '''Find data-driven cut-off for classification

    Cut-off is determined using Youden's index, defined as sensitivity + specificity - 1.

    Parameters
    ----------
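The preview ends inside the docstring. A minimal self-contained sketch of the same idea (my completion, not necessarily identical to the forked gist; the name youden_cutoff is hypothetical) picks the ROC threshold that maximizes Youden's J = TPR - FPR:

import numpy as np
from sklearn.metrics import roc_curve

def youden_cutoff(y_true, y_score):
    '''Return the score threshold that maximizes Youden's J = sensitivity + specificity - 1.'''
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return thresholds[np.argmax(tpr - fpr)]

# Toy example: true labels and predicted scores
y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])
print(youden_cutoff(y_true, y_score))  # prints the data-driven cut-off (0.8 for this toy data)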
@mohapatras
mohapatras / install_nvidia_driver.md
Created February 15, 2022 05:23 — forked from espoirMur/install_nvidia_driver.md
How I fixed the issue "NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running."

I had multiple drivers installed on my virtual machine, which was actually the reason I was getting the error.

To fix it, I first had to remove all the drivers I had installed before, using:

  • sudo apt-get purge nvidia-*
  • sudo apt-get update
  • sudo apt-get autoremove

After that I went ahead and installed the latest version of the NVIDIA driver.

I did:

@mohapatras
mohapatras / installing_nvidia_driver_cuda_cudnn_linux.md
Created October 15, 2021 03:31 — forked from kmhofmann/installing_nvidia_driver_cuda_cudnn_linux.md
Installing the NVIDIA driver, CUDA and cuDNN on Linux

Installing the NVIDIA driver, CUDA and cuDNN on Linux (Ubuntu 20.04)

This is a companion piece to my instructions on building TensorFlow from source. In particular, the aim is to install the following pieces of software on an Ubuntu Linux system, specifically Ubuntu 20.04.

@mohapatras
mohapatras / check_cuda_cudnn.md
Created October 15, 2021 03:30 — forked from Jongbhin/check_cuda_cudnn.md
[Cuda cudnn version check] #cuda #cudnn #nvidia

To check the NVIDIA driver:

modinfo nvidia

To check the CUDA version:

cat /usr/local/cuda/version.txt
$ ls -l /usr/local/cuda-10.1/lib64/libcurand*
lrwxrwxrwx 1 root root 15 Aug 22 02:49 /usr/local/cuda-10.1/lib64/libcurand.so -> libcurand.so.10
lrwxrwxrwx 1 root root 21 Aug 22 02:49 /usr/local/cuda-10.1/lib64/libcurand.so.10 -> libcurand.so.10.1.168
-rwxr-xr-x 1 root root 59812280 Aug 22 02:49 /usr/local/cuda-10.1/lib64/libcurand.so.10.1.168
-rw-r--r-- 1 root root 59842274 Aug 22 02:49 /usr/local/cuda-10.1/lib64/libcurand_static.a
$ echo $LD_LIBRARY_PATH
/usr/local/cuda-10.1/lib64:
@mohapatras
mohapatras / repo_push_github
Created February 15, 2020 07:17
Push local data to GitHub using git.
# Create a new repository on the command line
touch README.md
git init
git add README.md
git commit -m "first commit"
git remote add origin https://github.com/mohapatras/python.git
git push -u origin master
# Push an existing repository from the command line
@mohapatras
mohapatras / quickSort.py
Created June 21, 2018 05:01
Quick Sort implementation in Python 3 with the last element as pivot.
def partition(A, start, end):
    # Lomuto partition: the last element is the pivot
    pivot = A[end]
    pIndex = start
    for i in range(start, end):  # scan only the sub-array being partitioned
        if A[i] < pivot:
            A[i], A[pIndex] = A[pIndex], A[i]
            pIndex += 1
    A[pIndex], A[end] = A[end], A[pIndex]
    return pIndex  # final pivot position, needed by the recursive calls
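The preview stops at the final swap, so the recursive driver is not shown. A minimal sketch of how partition is typically used (my addition, assuming the version above that returns pIndex):

def quick_sort(A, start, end):
    # Sort A[start..end] in place by partitioning around the last element
    if start < end:
        p = partition(A, start, end)
        quick_sort(A, start, p - 1)
        quick_sort(A, p + 1, end)

data = [7, 2, 1, 6, 8, 5, 3, 4]
quick_sort(data, 0, len(data) - 1)
print(data)  # [1, 2, 3, 4, 5, 6, 7, 8]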
@mohapatras
mohapatras / gpu_tf_keras_memory_limit.py
Last active April 4, 2018 13:56
Assign this before doing any Keras operation.
# keras example imports
from keras.models import load_model
## extra imports to set GPU options
import tensorflow as tf
from keras import backend as k
###################################
# TensorFlow wizardry
config = tf.ConfigProto()
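The preview ends at the ConfigProto line. A typical way this pattern is completed (a sketch assuming the TF 1.x / standalone Keras API implied by the imports; the 0.5 memory fraction is an arbitrary example):

# Don't pre-allocate all GPU memory; grow the allocation as needed
config.gpu_options.allow_growth = True

# Optionally cap the fraction of GPU memory this process may use
config.gpu_options.per_process_gpu_memory_fraction = 0.5

# Register a session built with these options as the Keras backend session
k.tensorflow_backend.set_session(tf.Session(config=config))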
@mohapatras
mohapatras / resnet.py
Last active June 15, 2022 15:14
# Resnet50 with grayscale images.
import numpy as np
import warnings
import os
import tensorflow as tf
from keras.layers import Input
from keras import layers
from keras.layers import Dense
from keras.layers import Activation
from keras.layers import Flatten
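The preview shows only the imports; the gist apparently rebuilds ResNet50 layer by layer so it can accept single-channel input. A lighter-weight alternative (my sketch, not the approach taken in this gist) is to repeat the grayscale channel three times and feed the stock keras.applications ResNet50:

from keras.applications.resnet50 import ResNet50
from keras.layers import Input, Concatenate, Dense, GlobalAveragePooling2D
from keras.models import Model

# Single-channel (grayscale) input
gray_in = Input(shape=(224, 224, 1))

# Repeat the channel so the tensor matches the 3-channel input ResNet50 expects
rgb = Concatenate(axis=-1)([gray_in, gray_in, gray_in])

# Reuse ImageNet weights on the stacked input and add a small custom head
backbone = ResNet50(weights='imagenet', include_top=False, input_tensor=rgb)
x = GlobalAveragePooling2D()(backbone.output)
out = Dense(10, activation='softmax')(x)  # 10 classes is an arbitrary example

model = Model(inputs=gray_in, outputs=out)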
@mohapatras
mohapatras / cuda_cudnn_version.sh
Created March 23, 2018 05:45
Check CUDA and cuDNN versions on Ubuntu.
# CUDA version
nvcc --version
which nvcc
# cuDNN version
# Use the output of which nvcc to locate your CUDA installation
cat /usr/include/x86_64-linux-gnu/cudnn_v*.h | grep CUDNN_MAJOR -A 2
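If TensorFlow is installed, the CUDA and cuDNN versions it was built against can also be read from Python (a sketch assuming TensorFlow 2.3+ with GPU support):

import tensorflow as tf

# Build-time CUDA/cuDNN versions baked into this TensorFlow binary
info = tf.sysconfig.get_build_info()
print('CUDA:', info.get('cuda_version'))
print('cuDNN:', info.get('cudnn_version'))

# GPUs that TensorFlow can actually see at runtime
print(tf.config.list_physical_devices('GPU'))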