Minesh A. Jethva (minesh1291)
🚖 On The Journey to Neverland
minesh1291 / SampleWeights_Regression.py
Last active May 18, 2024 16:07
SampleWeights_Regression.py
import pandas as pd
import numpy as np

# Example DataFrame with random target values
df = pd.DataFrame({
    'label': np.random.normal(size=1000)  # 1000 values drawn from a standard normal
})

# Step 1: Bin the target values to create a frequency distribution
df['label_bin'] = pd.cut(df['label'], bins=10)
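The preview ends after step 1. A minimal sketch of a likely step 2 — inverse-frequency weighting is an assumption here, the gist's actual scheme may differ:
# Step 2 (assumed continuation): weight each sample inversely to its bin's
# frequency, so rare target values contribute more to the loss
bin_counts = df.groupby('label_bin', observed=True)['label'].transform('count')
df['sample_weight'] = len(df) / (df['label_bin'].nunique() * bin_counts)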
minesh1291 / vae_pytorch_lightning.ipynb
Created May 16, 2024 04:42
Vae_pytorch_lightning.ipynb
(Notebook preview not available.)
minesh1291 / MoE.py
Created May 4, 2024 11:55 — forked from ruvnet/MoE.py
A PyTorch implementation of a Mixture of Experts (MoE) model resembling the Mixtral 8x7B architecture, with detailed inline comments. This model combines transformer layers with an MoE layer consisting of 8 experts, aiming for high efficiency by activating only 2 experts per token. It's configured with dimensions reflecting the operational effic…
"""
This model integrates the MoE concept within a Transformer architecture. Each token's
representation is processed by a subset of experts, determined by the gating mechanism.
This architecture allows for efficient and specialized handling of different aspects of the
data, aiming for the adaptability and efficiency noted in the Mixtral 8x7B model's design
philosophy. The model activates only a fraction of the available experts for each token,
significantly reducing the computational resources needed compared to activating all experts
for all tokens.
"""
minesh1291 / ecg_forecast.py
Created April 27, 2024 11:45
Distribution Forecasting 
# %%
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.signal
from gluonts.dataset.repository import get_dataset, dataset_names
from gluonts.dataset.util import to_pandas
from gluonts.dataset.common import ListDataset
from gluonts.torch import SimpleFeedForwardEstimator
from lightning.pytorch.callbacks.early_stopping import EarlyStopping
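The preview ends at the imports. A plausible continuation under the usual gluonts torch API — the dataset name and hyperparameters are assumptions:
dataset = get_dataset("electricity")  # assumed choice; any name from dataset_names works
estimator = SimpleFeedForwardEstimator(
    prediction_length=dataset.metadata.prediction_length,
    trainer_kwargs={"max_epochs": 5, "callbacks": [EarlyStopping(monitor="train_loss")]},
)
predictor = estimator.train(dataset.train)  # yields distribution forecasts at predict time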
minesh1291 / load_m5_gluonts.py
Created April 24, 2024 05:47 — forked from lostella/load_m5_gluonts.py
Loading M5 competition data into a gluonts PandasDataset
# Works on gluonts dev branch as of May 30th, 2023
# Assumes "m5-forecasting-accuracy" folder with data next to the script
# Data is obtained from https://www.kaggle.com/c/m5-forecasting-accuracy
import pandas as pd
from pathlib import Path
from gluonts.dataset.pandas import PandasDataset
# Load data from csv files
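The preview stops at this comment. A sketch of what the loading could look like — the melt logic is an assumption (the fork's code may differ), though the file name and the 2011-01-29 series start are standard for M5:
data_path = Path("m5-forecasting-accuracy")
sales = pd.read_csv(data_path / "sales_train_validation.csv")
day_cols = [c for c in sales.columns if c.startswith("d_")]
# Wide (one column per day) to long (one row per item-day)
long_df = sales.melt(id_vars="id", value_vars=day_cols, var_name="day", value_name="target")
long_df["timestamp"] = pd.Timestamp("2011-01-29") + pd.to_timedelta(
    long_df["day"].str[2:].astype(int) - 1, unit="D"
)
dataset = PandasDataset.from_long_dataframe(
    long_df, item_id="id", timestamp="timestamp", target="target", freq="D"
)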
minesh1291 / bokeh_subprocess_test.py
Created November 29, 2023 15:26 — forked from ruoyu0088/bokeh_subprocess_test.py
A demo of a ZMQ subprocess with a Bokeh server
from os import path
from bokeh.models import Button, Div
from bokeh.layouts import column
from bokeh.document import without_document_lock
from bokeh.io import curdoc
from zmq_subprocess import ZmqSubProcessClient
ok_button = Button(label="ok")
div = Div()
minesh1291 / cuda_install.md
Created October 29, 2023 02:46 — forked from denguir/cuda_install.md
Installation procedure for CUDA & cuDNN

How to install CUDA & cuDNN on Ubuntu 22.04

Install NVIDIA drivers

Update & upgrade

sudo apt update && sudo apt upgrade

Remove previous NVIDIA installation
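The preview is cut off after this heading; the command usually shown for this step (an assumption — the gist's exact line is truncated here) is:
sudo apt autoremove nvidia* --purge   # assumed; purges previously installed NVIDIA packages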

minesh1291 / train.py
Created August 14, 2023 17:23 — forked from codeKgu/train.py
Tutorial for multimodal_transformers
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./logs/model_name",
    logging_dir="./logs/runs",
    overwrite_output_dir=True,
    do_train=True,
    per_device_train_batch_size=32,
    num_train_epochs=1,
    evaluate_during_training=True,
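The preview truncates mid-call. A hedged sketch of how these arguments are typically consumed — model and dataset names are placeholders taken from the companion gists below:
# Assumed continuation: model / train_dataset come from the model_loading.py
# and data_loading.py gists in this same tutorial
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
)
trainer.train()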
minesh1291 / model_loading.py
Created August 14, 2023 17:22 — forked from codeKgu/model_loading.py
Tutorial for multimodal_transformers
import numpy as np
from multimodal_transformers.model import AutoModelWithTabular, TabularConfig
from transformers import AutoConfig

num_labels = len(np.unique(torch_dataset.labels))
config = AutoConfig.from_pretrained('bert-base-uncased')
tabular_config = TabularConfig(
    num_labels=num_labels,
    cat_feat_dim=torch_dataset.cat_feats.shape[1],
    numerical_feat_dim=torch_dataset.numerical_feats.shape[1],
    combine_feat_method='weighted_feature_sum_on_transformer_cat_and_numerical_feats',
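The snippet cuts off inside the TabularConfig call. A plausible completion following multimodal_transformers' documented usage — the tutorial's exact lines may differ:
)  # close TabularConfig
config.tabular_config = tabular_config
model = AutoModelWithTabular.from_pretrained('bert-base-uncased', config=config)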
minesh1291 / data_loading.py
Created August 14, 2023 17:22 — forked from codeKgu/data_loading.py
Tutorial for multimodal_transformers
import pandas as pd
from multimodal_transformers.data import load_data
from transformers import AutoTokenizer
data_df = pd.read_csv('Womens Clothing E-Commerce Reviews.csv')
text_cols = ['Title', 'Review Text']
# The label col is expected to contain integers from 0 to N_classes - 1
label_col = 'Recommended IND'
categorical_cols = ['Clothing ID', 'Division Name', 'Department Name', 'Class Name']
numerical_cols = ['Rating', 'Age', 'Positive Feedback Count']
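With the columns declared, the tutorial presumably tokenizes the text and wraps everything via load_data. A sketch assuming the library's documented signature:
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
torch_dataset = load_data(
    data_df,
    text_cols=text_cols,
    tokenizer=tokenizer,
    label_col=label_col,
    categorical_cols=categorical_cols,
    numerical_cols=numerical_cols,
)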