@farhanhubble
farhanhubble / cosine-similarity-matrices.py
Created December 31, 2016 17:41
Vectorized implementation of cosine similarity between every row of one matrix and every row of another. I wrote this code for finding duplicate images between two image datasets.
import numpy as np

## Computes pairwise cosine similarity between the rows of two matrices.
## similarity[i][j] = (A[i,:] . B[j,:]) / (|A[i,:]| * |B[j,:]|)
def cosine_similarity(A, B):
    product = np.matmul(A, B.T)                        # product[i][j] = A[i,:] . B[j,:]
    normA = np.linalg.norm(A, axis=1).reshape(-1, 1)   # column vector of row norms of A
    normB = np.linalg.norm(B, axis=1).reshape(1, -1)   # row vector of row norms of B
    return product / (normA * normB)                   # broadcasts to the full similarity matrix
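A minimal usage sketch for the duplicate-image use case mentioned in the description; the array shapes and the 0.999 threshold below are illustrative assumptions, not from the gist:

# Hypothetical data: each dataset is an array of flattened images, shape (n_images, n_pixels).
dataset_a = np.random.rand(100, 784)
dataset_b = np.random.rand(200, 784)
sim = cosine_similarity(dataset_a, dataset_b)
duplicates = np.argwhere(sim > 0.999)   # index pairs whose rows are near-identical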
@farhanhubble
farhanhubble / kalman1d.cpp
Created July 30, 2017 13:35
1 Dimensional Kalman Filter
void filter(VectorXd &x, MatrixXd &P) {
  // Assumes measurements, F, u, Q, H, R and I are defined elsewhere in the gist (not shown in this preview).
  for (unsigned int n = 0; n < measurements.size(); ++n) {
    VectorXd z = measurements[n];
    // KF prediction step
    VectorXd x_ = F * x + u;
    MatrixXd P_ = F * P * F.transpose() + Q;
    // KF measurement update step (standard Kalman filter equations)
    VectorXd y = z - H * x_;
    MatrixXd K = P_ * H.transpose() * (H * P_ * H.transpose() + R).inverse();
    x = x_ + K * y;
    P = (I - K * H) * P_;
  }
}
@farhanhubble
farhanhubble / mpc.py
Created May 24, 2018 06:46
Python code for connecting to the Udacity Self-Driving Car Simulator (tested with Term 2, Project 5: Model Predictive Control).
import json
import asyncio
import websockets

def _check_mode(msg):
    # Telemetry events from the simulator arrive with a leading '42' code.
    return "Auto" if msg and msg[:2] == '42' else "Manual"

def _parse_telemetry(msg):
    msg_json = msg[2:]              # strip the leading '42'
    parsed = json.loads(msg_json)
    return parsed
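A minimal sketch of how these helpers might be wired into a server loop; the port 4567 and the handler body are my assumptions (the Udacity term 2 simulator normally connects to a local websocket server), not the gist's actual code:

async def handler(ws, path):
    async for msg in ws:
        if _check_mode(msg) == "Auto":
            telemetry = _parse_telemetry(msg)
            # ... compute steering/throttle from telemetry and send a reply here ...

start_server = websockets.serve(handler, "0.0.0.0", 4567)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()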
TypeError Traceback (most recent call last)
<ipython-input-44-a553646a889c> in <module>
22 for i, samples in enumerate([samples_1, samples_10, samples_100]):
23 results[clf_name][i] = \
---> 24 train_predict(clf, samples, X_train, y_train, X_test, y_test)
25
26 # Run metrics visualization for the three supervised learning models chosen
<ipython-input-36-2fd67492e721> in train_predict(learner, sample_size, X_train, y_train, X_test, y_test)
19 # TODO: Fit the learner to the training data using slicing with ‘sample_size’ using .fit(training_features[:], training_labels[:])
@farhanhubble
farhanhubble / gol.py
Created July 28, 2020 14:49
Game of Life (GoL)
world = [['▢', '▣', '▢'],
         ['▢', '▣', '▢'],
         ['▢', '▣', '▢']]

## Code to update the world one step.
nb_rows = len(world)
nb_cols = len(world[0])
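The preview cuts off here. A sketch of one full update step under the standard Conway rules, assuming '▣' marks a live cell and '▢' a dead one; the helper names and rule set are my assumptions, not necessarily what the gist does:

def count_live_neighbors(world, r, c):
    count = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                rr, cc = r + dr, c + dc
                if 0 <= rr < len(world) and 0 <= cc < len(world[0]):
                    count += world[rr][cc] == '▣'
    return count

def step(world):
    new_world = [row[:] for row in world]
    for r in range(len(world)):
        for c in range(len(world[0])):
            n = count_live_neighbors(world, r, c)
            if world[r][c] == '▣':
                new_world[r][c] = '▣' if n in (2, 3) else '▢'   # survival needs 2 or 3 live neighbors
            else:
                new_world[r][c] = '▣' if n == 3 else '▢'        # birth needs exactly 3 live neighbors
    return new_world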
@farhanhubble
farhanhubble / instructions.md
Created December 31, 2021 12:48
Clear Up Root Partition (/) on Linux (Fedora)
  1. List the space used by the root partition (/) and its top-level directories, excluding other partitions:
du -h -x -d 1 /
  2. Get rid of trash in common cleanable folders as described here.
  3. If the /var directory has a huge size, try cleaning up the PackageKit cache:
sudo pkcon refresh force -c -1
@farhanhubble
farhanhubble / instructions.md
Created January 1, 2022 08:44
Install CUDA on Fedora 35
  1. Install the correct drivers from within the system (Software): 495.46 as of Jan 1, 2022. Compatible Devices
  2. Reboot.
  3. Check the latest CUDA version available here.
  4. Install an appropriate version of CUDA using the instructions here.
    • The CUDA installer bundles Nvidia drivers. Deselect these drivers if the driver version installed earlier is higher.
  5. The installer will recommend adding certain directories to PATH and LD_LIBRARY_PATH. Edit ~/.bashrc to add the paths.
  6. Run source ~/.bashrc to pull the changes into the current terminal.
  7. Reboot [optional].
  8. Go to the samples directory (will be inside a folder named something like NVIDIA_CUDA-11.5_Samples in the home folder,
@farhanhubble
farhanhubble / auto-nbconvert.md
Created January 19, 2022 10:26 — forked from fa-ahmad/auto-nbconvert.md
Trigger Nbconvert on Jupyter Notebook

Use the entr command to watch the Notebook for changes.

ls myfile.ipynb | entr -r jupyter nbconvert myfile.ipynb --to slides --post serve --ServePostProcessor.open_in_browser=False

The URL of the slides gets printed in the terminal:

Serving your slides at http://127.0.0.1:8000/myfile.slides.html

Sources:

@farhanhubble
farhanhubble / Classification Losses.md
Created December 3, 2022 04:46
Disambiguating Classification Losses

Neural networks that perform classification (predict the class of an input) produce a vector of C raw numbers for every input, where C is the total number of classes, for example 10 for MNIST. This vector is called the logits (each entry is a logit).

If we feed an MNIST image of the digit 3 to a classification model, it may produce a logit vector like this (top row added to show the class IDs):

0 1 2 3 4 5 6 7 8 9
-121.2 -212.3 81.1 171.1 -55.0 132.5 -13.2 63.5 99.2 -10.9
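A small illustration (my addition, not part of the gist) of how such a logit vector is usually turned into class probabilities with a numerically stable softmax; the predicted class is the one with the largest logit:

import numpy as np

logits = np.array([-121.2, -212.3, 81.1, 171.1, -55.0, 132.5, -13.2, 63.5, 99.2, -10.9])

def softmax(z):
    z = z - z.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

probs = softmax(logits)
print(probs.argmax())        # 3, matching the digit in the example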