Maziyar Panahi (maziyarpanahi)
😎 Building a private medical ChatGPT!
return inner_training_loop(
...
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/trl/trainer/utils.py", line 338, in __call__
    to_pad = [torch.LongTensor(ex[k]) for ex in features]
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/trl/trainer/utils.py", line 338, in <listcomp>
    to_pad = [torch.LongTensor(ex[k]) for ex in features]
TypeError: 'NoneType' object cannot be interpreted as an integer
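The duplicated worker tracebacks boil down to one failure: `torch.LongTensor` is handed a list containing `None`, which usually means a tokenized field in the dataset was never filled in for some rows. A minimal, self-contained sketch (hypothetical field names, plain Python) of screening such rows before they reach the collator:

```python
def has_none(example, keys=("input_ids", "attention_mask", "labels")):
    """Return True if any expected field is missing or contains a None token."""
    for k in keys:
        v = example.get(k)
        if v is None or any(tok is None for tok in v):
            return True
    return False

rows = [
    {"input_ids": [1, 2, 3], "attention_mask": [1, 1, 1], "labels": [1, 2, 3]},
    {"input_ids": [1, None], "attention_mask": [1, 1], "labels": [1, 2]},  # bad row
]
clean = [r for r in rows if not has_none(r)]  # drops the row with None
```

Filtering like this (e.g. via `dataset.filter`) avoids the `TypeError` at collation time; the real fix is usually upstream, in whatever preprocessing step produced the `None`.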

Let's check out the PR:

git fetch origin pull/625/head:dbrx
git switch dbrx
pip install -vvv --no-build-isolation -e .

Download the model:

maziyarpanahi / miqu-upload-hf.py
Created February 9, 2024 22:54 — forked from 152334H/miqu-upload-hf.py
upload miqu ckpt to hf
# Build an empty Llama-2 70B model (miqu-flavoured config) to receive the checkpoint.
from transformers import LlamaConfig as LC, LlamaForCausalLM as LLM, LlamaTokenizer as LT
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
import torch

lt = LT.from_pretrained("NousResearch/Llama-2-7b-hf")   # tokenizer (shared across sizes)
c = LC.from_pretrained("NousResearch/Llama-2-70b-hf")   # base 70B config
c.max_position_embeddings = 32764                       # miqu's extended context window
c.rope_theta = 1000000                                  # matching RoPE base frequency
with init_empty_weights():                              # allocate meta tensors, no real memory
    m = LLM(c)
m = m.half().eval()
m.requires_grad_(False)                                 # inference only, no gradients
maziyarpanahi / gguf-merge.sh
Created February 2, 2024 08:30 — forked from crasm/gguf-merge.sh
Shell script for merging TheBloke's .gguf-split model files
#!/bin/sh
log() {
    format="$1"; shift
    # shellcheck disable=SC2059
    >&2 printf "$format\n" "$@"
}
usage() {
    >&2 cat <<EOF
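The script above (truncated here) automates what is, at its core, a plain concatenation: TheBloke's `-split-a`/`-split-b` files are raw byte slices of one `.gguf`. A minimal sketch with stand-in filenames:

```shell
# Hypothetical filenames; the real parts are multi-GB downloads.
printf 'part-a' > model.Q4_K_M.gguf-split-a   # stand-in for the first slice
printf 'part-b' > model.Q4_K_M.gguf-split-b   # stand-in for the second slice

# Merging is a straight, order-sensitive concatenation:
cat model.Q4_K_M.gguf-split-a model.Q4_K_M.gguf-split-b > model.Q4_K_M.gguf
```

Order matters (`-split-a` before `-split-b`); the full script mostly adds argument checking and logging around this one `cat`.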
maziyarpanahi / PVE-HP-ssacli-smart-storage-admin.md
Created March 31, 2023 09:26 — forked from mrpeardotnet/PVE-HP-ssacli-smart-storage-admin.md
HP Smart Storage Admin CLI (ssacli) installation and usage on Proxmox PVE (6.x)


Why use HP Smart Storage Admin CLI?

You can use the ssacli (Smart Storage Administrator Command Line Interface) tool to manage any of the supported HP Smart Array Controllers in your Proxmox host without needing to reboot the server into the BIOS-level Smart Storage Administrator. That means no host downtime when managing your storage.

The CLI is not as convenient as the GUI provided by the BIOS or by desktop utilities, but it still lets you fully manage your controller, physical disks, and logical drives on the fly, with no Proxmox host downtime.

ssacli replaces the older hpssacli; it shares the same syntax and adds support for newer servers and controllers.

Installation

maziyarpanahi / tours.json
Created January 19, 2023 14:49
tours.json
[
  {
    "tourBlurb" : "Big Sur is big country. The Big Sur Retreat takes you to the most majestic part of the Pacific Coast and shows you the secret trails.",
    "tourName" : "Big Sur Retreat",
    "tourPackage" : "Backpack Cal",
    "tourBullets" : "\"Accommodations at the historic Big Sur River Inn, Privately guided hikes through any of the 5 surrounding national parks, Picnic lunches prepared by the River Inn kitchen, Complimentary country breakfast, Admission to the Henry Miller Library and the Point Reyes Lighthouse \"",
    "tourRegion" : "Central Coast",
    "tourDifficulty" : "Medium",
    "tourLength" : 3,
    "tourPrice" : 750,
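Once the full file is available, records like this are straightforward to load and query with the standard library. A quick sketch, assuming the complete tours.json is a valid JSON array of objects shaped like the excerpt above (only a few fields reproduced here):

```python
import json

# Inline stand-in for the full tours.json contents.
tours = json.loads("""
[
  {"tourName": "Big Sur Retreat", "tourRegion": "Central Coast",
   "tourDifficulty": "Medium", "tourLength": 3, "tourPrice": 750}
]
""")

# Filter tours by region, keeping just the names.
central_coast = [t["tourName"] for t in tours if t["tourRegion"] == "Central Coast"]
```

With the real file you would use `json.load(open("tours.json"))` instead of the inline string.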
from sparknlp.annotator import *
from sparknlp.base import *
from pyspark.ml import Pipeline

imageAssembler = ImageAssembler() \
    .setInputCol("image") \
    .setOutputCol("image_assembler")

# Loads the default pretrained ViT model; pass a model name to .pretrained() to override.
imageClassifier = ViTForImageClassification \
    .pretrained() \
    .setInputCols(["image_assembler"]) \
    .setOutputCol("class")

pipeline = Pipeline(stages=[imageAssembler, imageClassifier])
from transformers import ViTFeatureExtractor, ViTForImageClassification, pipeline
import torch
from tqdm import tqdm

device = "cuda:0" if torch.cuda.is_available() else "cpu"
print(device)

feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224')
model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224')
model = model.to(device)

# `dataset` is assumed to be defined earlier (e.g. an image dataset from the
# `datasets` library). device=0 targets the first GPU; -1 falls back to CPU.
pipe = pipeline("image-classification", model=model, feature_extractor=feature_extractor,
                device=0 if torch.cuda.is_available() else -1)

for batch_size in [1, 8, 32, 64, 128]:
    print("-" * 30)
    print(f"Streaming batch_size={batch_size}")
    for out in tqdm(pipe(dataset, batch_size=batch_size), total=len(dataset)):
        pass