Christian Hotz-Behofsits (inkrement)

@jarhill0
jarhill0 / prawbject_to_json.py
Created April 30, 2019 03:55
Sample code for converting a PRAW Comment to JSON
import praw
from json import dumps, JSONEncoder
reddit = praw.Reddit(username='',
                     password='',
                     client_id='',
                     client_secret='',
                     user_agent='')
comment = reddit.comment('em3rygg')
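The preview cuts off before the JSON step; below is a minimal, hedged sketch of one way it could be finished. The encoder class name is made up here (mirroring the filename), and it only serializes whatever plain attributes the lazy Comment currently carries.

# Hedged sketch (not the gist's exact code): serialize the Comment's simple attributes.
class PrawbjectEncoder(JSONEncoder):   # hypothetical name, echoing the filename
    def default(self, obj):
        # Fall back to the object's plain attributes; skip private ones and
        # anything that is not already JSON-friendly.
        return {k: v for k, v in vars(obj).items()
                if not k.startswith('_')
                and isinstance(v, (str, int, float, bool, type(None)))}

print(dumps(comment, cls=PrawbjectEncoder, indent=2))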
@Mahedi-61
Mahedi-61 / cuda_11.8_installation_on_Ubuntu_22.04
Last active May 4, 2024 14:18
Instructions for CUDA v11.8 and cuDNN 8.9.7 installation on Ubuntu 22.04 for PyTorch 2.1.2
#!/bin/bash
### steps ###
# Verify the system has a CUDA-capable GPU
# Download and install the NVIDIA CUDA toolkit and cuDNN
# Set up environment variables
# Verify the installation
###
### to verify your gpu is cuda enabled, check
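The script itself is truncated above; as a quick sanity check after following it, a short PyTorch-side verification might look like this (assumes PyTorch 2.1.2 is installed, as the gist targets):

# Hedged sketch: post-install check from Python with PyTorch.
import torch

print(torch.__version__)                  # expect 2.1.2 for this guide
print(torch.version.cuda)                 # CUDA runtime PyTorch was built against
print(torch.backends.cudnn.version())     # cuDNN version, e.g. 8907 for 8.9.7
print(torch.cuda.is_available())          # True if driver and toolkit are set up
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the detected GPU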
@jovianlin
jovianlin / load_glove_embeddings.py
Created January 11, 2018 08:26
load_glove_embeddings
# coding: utf-8
import numpy as np
def load_glove_embeddings(fp, embedding_dim, include_empty_char=True):
    """
    Loads pre-trained word embeddings (GloVe embeddings)
    Inputs: - fp: filepath of pre-trained glove embeddings
            - embedding_dim: dimension of each vector embedding
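The docstring is cut off above; a hedged usage sketch, assuming the function returns a word-to-index mapping plus a NumPy embedding matrix, might look like this (the GloVe file path is a placeholder):

# Hedged usage sketch; the return value is an assumption, since the preview is truncated.
word2index, embedding_matrix = load_glove_embeddings(
    fp='glove.6B.100d.txt',      # hypothetical path to a GloVe file
    embedding_dim=100,
    include_empty_char=True,
)
print(len(word2index), embedding_matrix.shape)   # vocab size and (vocab, 100)

# Typical downstream use: look up a word's vector by its index
# (assuming the token is in the GloVe vocabulary).
vec = embedding_matrix[word2index['king']]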
@rspeare
rspeare / p_values_for_logreg.py
Last active February 4, 2024 02:50
P values for sklearn logistic regression
from sklearn import linear_model
import numpy as np
import scipy.stats as stat
class LogisticReg:
    """
    Wrapper class for logistic regression which has the usual sklearn instance
    in an attribute self.model, and p-values, z-scores and estimated
    errors for each coefficient in
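The class body is not shown above; as a rough illustration of the same idea, here is a hedged, standalone sketch of the usual Wald-test recipe for logistic-regression p-values (not the gist's own implementation):

# Hedged sketch: Wald p-values for a fitted sklearn LogisticRegression.
# The approximation assumes little or no regularization (e.g. a large C).
import numpy as np
import scipy.stats as stat

def wald_p_values(model, X):
    """Two-sided p-values for the intercept and coefficients of a fitted model."""
    p = model.predict_proba(X)[:, 1]                 # predicted probabilities
    X1 = np.hstack([np.ones((X.shape[0], 1)), X])    # add intercept column
    W = np.diag(p * (1 - p))                         # per-observation variance
    fisher_info = X1.T @ W @ X1                      # observed Fisher information
    cov = np.linalg.inv(fisher_info)                 # asymptotic covariance
    coefs = np.concatenate([model.intercept_, model.coef_[0]])
    se = np.sqrt(np.diag(cov))                       # standard errors
    z = coefs / se                                   # z-scores
    return 2 * (1 - stat.norm.cdf(np.abs(z)))        # Wald p-values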
@lwiklendt
lwiklendt / pystan_vb_extract.py
Last active November 11, 2020 18:58
Extract parameter samples from PyStan's vb method, so that it resembles extract() from the sampling method
import numpy as np
from collections import OrderedDict
def pystan_vb_extract(results):
    param_specs = results['sampler_param_names']
    samples = results['sampler_params']
    n = len(samples[0])

    # first pass, calculate the shape
    param_shapes = OrderedDict()
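The preview stops partway through the function; a hedged usage sketch, assuming PyStan 2.x and that the full function returns a dict-like mapping of parameter names to sample arrays, could look like this (model_code and stan_data are placeholders):

# Hedged usage sketch for PyStan 2.x; names marked below are assumptions.
import pystan

sm = pystan.StanModel(model_code=model_code)   # model_code is assumed to be defined
vb_results = sm.vb(data=stan_data)             # stan_data is assumed to be defined
samples = pystan_vb_extract(vb_results)        # parameter name -> array of VB draws
for name, draws in samples.items():
    print(name, draws.shape)                   # first axis is the number of draws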
@giuseppebonaccorso
giuseppebonaccorso / twitter_sentiment_analysis_convnet.py
Last active March 16, 2020 19:26
Twitter Sentiment Analysis with Gensim Word2Vec and Keras Convolutional Networks
import keras.backend as K
import multiprocessing
import tensorflow as tf
from gensim.models.word2vec import Word2Vec
from keras.callbacks import EarlyStopping
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Flatten
from keras.layers.convolutional import Conv1D
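Only the imports are shown above; a hedged sketch of a minimal Conv1D classifier over pre-computed Word2Vec vectors, in the spirit of the gist but not its actual model, might be:

# Hedged sketch (not the gist's full model); sequence length and vector size are assumptions.
max_tweet_length = 15      # tokens per tweet (assumed)
vector_size = 512          # Word2Vec dimensionality (assumed)

model = Sequential([
    Conv1D(32, kernel_size=3, activation='relu',
           input_shape=(max_tweet_length, vector_size)),
    Conv1D(32, kernel_size=3, activation='relu'),
    Dropout(0.25),
    Flatten(),
    Dense(256, activation='relu'),
    Dropout(0.5),
    Dense(2, activation='softmax'),            # positive / negative
])
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()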
@Tushar-N
Tushar-N / pad_packed_demo.py
Last active December 27, 2022 06:35
How to use pad_packed_sequence in PyTorch < 1.1.0
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
seqs = ['gigantic_string','tiny_str','medium_str']
# make <pad> idx 0
vocab = ['<pad>'] + sorted(set(''.join(seqs)))
# make model
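The preview ends before the model is used; a hedged continuation showing the pad, pack, LSTM, unpack round trip (embedding and hidden sizes are illustrative) could look like this:

# Hedged sketch continuing the preview above.
vectorized = [[vocab.index(tok) for tok in seq] for seq in seqs]
lengths = torch.LongTensor([len(v) for v in vectorized])

# pad to the longest sequence with <pad> (index 0)
padded = torch.zeros(len(vectorized), int(lengths.max()), dtype=torch.long)
for i, v in enumerate(vectorized):
    padded[i, :len(v)] = torch.LongTensor(v)

embed = nn.Embedding(len(vocab), 4, padding_idx=0)   # sizes are illustrative
lstm = nn.LSTM(4, 5, batch_first=True)

# pack_padded_sequence expects sequences sorted by decreasing length in torch < 1.1.0
lengths, sort_idx = lengths.sort(descending=True)
padded = padded[sort_idx]

packed = pack_padded_sequence(embed(padded), lengths, batch_first=True)
out, _ = lstm(packed)
unpacked, out_lengths = pad_packed_sequence(out, batch_first=True)
print(unpacked.shape, out_lengths)   # (batch, max_len, 5) and the original lengths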
@mizvol
mizvol / Tags LDA topic analysis.ipynb
Created January 18, 2017 10:03
LDA topic analysis of Instagram hashtags for clustering. Analysis + Visualization in D3JS
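The notebook cannot be previewed here. Purely as an illustration of the kind of hashtag LDA it describes (this is not the notebook's code, and gensim is an assumption), a minimal sketch:

# Hedged sketch: LDA over per-post hashtag lists with gensim.
from gensim import corpora, models

# each "document" is the list of hashtags on one Instagram post (assumed input)
hashtag_docs = [
    ['sunset', 'beach', 'travel'],
    ['food', 'dinner', 'travel'],
    ['beach', 'surf', 'sunset'],
]

dictionary = corpora.Dictionary(hashtag_docs)
corpus = [dictionary.doc2bow(doc) for doc in hashtag_docs]

lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2, passes=10)
for topic_id, words in lda.show_topics(num_topics=2, num_words=5, formatted=False):
    print(topic_id, [w for w, _ in words])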
@jovianlin
jovianlin / clustering_cosine_similarity_matrix.py
Last active December 21, 2020 07:53
Clustering cosine similarity matrix
"""
### Problem Statement ###
Let's say you have a square matrix which consists of cosine similarities (values between 0 and 1).
This square matrix can be of any size.
You want to get clusters which maximize the similarity values between elements in the cluster.
For example, for the following matrix:
  | A   | B   | C   | D
A | 1.0 | 0.1 | 0.6 | 0.4
B | 0.1 | 1.0 | 0.1 | 0.2
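The docstring is truncated above; one standard way to cluster straight from a precomputed similarity matrix, which may or may not match the gist's own approach, is scikit-learn's AffinityPropagation:

# Hedged sketch (not necessarily the gist's solution): cluster a precomputed similarity matrix.
import numpy as np
from sklearn.cluster import AffinityPropagation

S = np.array([[1.0, 0.1, 0.6, 0.4],
              [0.1, 1.0, 0.1, 0.2],
              [0.6, 0.1, 1.0, 0.7],   # rows C and D are made up to complete the example
              [0.4, 0.2, 0.7, 1.0]])

clusterer = AffinityPropagation(affinity='precomputed', random_state=0)
labels = clusterer.fit_predict(S)
print(labels)   # elements with the same label form one cluster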
@mbollmann
mbollmann / attention_lstm.py
Last active June 26, 2023 10:08
My attempt at creating an LSTM with attention in Keras
class AttentionLSTM(LSTM):
    """LSTM with attention mechanism
    This is an LSTM incorporating an attention mechanism into its hidden states.
    Currently, the context vector calculated from the attended vector is fed
    into the model's internal states, closely following the model by Xu et al.
    (2016, Sec. 3.1.2), using a soft attention model following
    Bahdanau et al. (2014).
    The layer expects two inputs instead of the usual one: