@carlfm01
Carlos Fonseca (carlfm01) · Costa Rica
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
@MihailCosmin
MihailCosmin / cuda_11.8_installation_on_Ubuntu_22.04
Last active July 14, 2024 07:39 — forked from primus852/cuda_11.7_installation_on_Ubuntu_22.04
Instructions for CUDA v11.8 and cuDNN 8.7 installation on Ubuntu 22.04 for PyTorch 2.0.0
#!/bin/bash
### steps ####
# verify the system has a cuda-capable gpu
# download and install the nvidia cuda toolkit and cudnn
# set up environment variables
# verify the installation
###
### to verify your gpu is cuda enabled, check
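As a quick sanity check once the steps above are done, a minimal Python snippet (assuming a CUDA-enabled PyTorch 2.0.0 build, e.g. the cu118 wheels) can confirm that PyTorch actually sees the toolkit and the GPU:

import torch

# Hedged post-install check: prints the CUDA runtime and cuDNN versions that
# PyTorch was built against and runs a tiny matmul on the GPU.
print("PyTorch:", torch.__version__)              # e.g. 2.0.0+cu118
print("CUDA available:", torch.cuda.is_available())
print("CUDA runtime:", torch.version.cuda)        # expected: 11.8
print("cuDNN:", torch.backends.cudnn.version())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    x = torch.randn(1024, 1024, device="cuda")
    print("Matmul OK:", (x @ x).shape)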
@X-TRON404
X-TRON404 / cuda_11.7_installation_on_Ubuntu_22.04
Last active May 21, 2024 04:09 — forked from primus852/cuda_11.7_installation_on_Ubuntu_22.04
Instructions for CUDA v11.7 and cuDNN 8.5 installation on Ubuntu 22.04 for PyTorch 1.12.1
#!/bin/bash
### steps ####
# verify the system has a cuda-capable gpu
# download and install the nvidia cuda toolkit and cudnn
# set up environment variables
# verify the installation
###
### to verify your gpu is cuda enabled, check
@mht-sharma
mht-sharma / onnx_trocr_inference.py
Created December 16, 2022 10:46
ONNX TrOCR Inference
import os
import time
from typing import Optional, Tuple
import torch
from PIL import Image
import onnxruntime as onnxrt
import requests
from transformers import AutoConfig, AutoModelForVision2Seq, TrOCRProcessor, VisionEncoderDecoderModel
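The preview cuts off at the imports; a rough sketch of how the encoder half of the inference plausibly runs, assuming the TrOCR encoder has already been exported to ONNX (the file name trocr_encoder.onnx and the pixel_values input name are assumptions based on a typical transformers export, not the gist's actual code):

import requests
import onnxruntime as onnxrt
from PIL import Image
from transformers import TrOCRProcessor

# Preprocess a handwriting sample exactly as the PyTorch model would.
url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg"  # IAM sample used in the TrOCR docs
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
pixel_values = processor(images=image, return_tensors="np").pixel_values

# Run the exported encoder with ONNX Runtime; decoding would then loop over
# an exported decoder in the same fashion.
session = onnxrt.InferenceSession("trocr_encoder.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {"pixel_values": pixel_values})
print(outputs[0].shape)  # encoder hidden states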
@primus852
primus852 / cuda_11.7_installation_on_Ubuntu_22.04
Last active July 13, 2024 10:25 — forked from Mahedi-61/cuda_11.8_installation_on_Ubuntu_22.04
Instructions for CUDA v11.7 and cuDNN 8.5 installation on Ubuntu 22.04 for PyTorch 1.12.1
#!/bin/bash
### steps ####
# verify the system has a cuda-capable gpu
# download and install the nvidia cuda toolkit and cudnn
# set up environment variables
# verify the installation
###
### to verify your gpu is cuda enabled, check
@eldrin
eldrin / my_melspec.py
Last active January 17, 2024 15:28
A quick dig into what causes the mel-spectrogram discrepancy between torchaudio and librosa
import math
from typing import Callable, Optional
from warnings import warn
import torch
from torch import Tensor
from torchaudio import functional as F
from torchaudio.compliance import kaldi
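A small sketch of the kind of comparison the gist digs into; the parameters are illustrative, and the point is that torchaudio defaults to the HTK mel scale with an unnormalized filterbank while librosa defaults to the Slaney scale with "slaney" normalization, so aligning those options closes most of the gap:

import numpy as np
import torch
import torchaudio
import librosa

sr, n_fft, hop, n_mels = 16000, 400, 160, 80
wave = np.random.randn(sr).astype(np.float32)  # one second of noise as a stand-in signal

# torchaudio: HTK mel scale, no filterbank normalization by default.
ta_mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=sr, n_fft=n_fft, hop_length=hop, n_mels=n_mels, power=2.0
)(torch.from_numpy(wave)).numpy()

# librosa with its own defaults (Slaney scale, "slaney" norm) disagrees...
lr_default = librosa.feature.melspectrogram(
    y=wave, sr=sr, n_fft=n_fft, hop_length=hop, n_mels=n_mels, power=2.0)

# ...but matching the filterbank options brings it much closer to torchaudio.
lr_htk = librosa.feature.melspectrogram(
    y=wave, sr=sr, n_fft=n_fft, hop_length=hop, n_mels=n_mels, power=2.0,
    htk=True, norm=None)

print("default max abs diff:", np.abs(ta_mel - lr_default).max())
print("htk/no-norm max abs diff:", np.abs(ta_mel - lr_htk).max())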
@jlgabriel
jlgabriel / get_productos_jumpseller.py
Last active January 6, 2024 23:10
Python script that reads product JSON from a Jumpseller store via the API and exports it as a table in Excel and CSV formats
import requests
import math
import pandas as pd
import flatten_json
# install flatten_json with: pip install flatten_json
# Reference: https://github.com/amirziai/flatten
# parameters
url_api_productos_contar = "https://api.jumpseller.com/v1/products/count.json"
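The preview stops at the count endpoint; a sketch of how the rest of the script plausibly proceeds. The login/authtoken parameters, the per-page limit, and the response shapes ({"count": ...} and a list of {"product": {...}} objects) are assumptions about the Jumpseller v1 API, not code taken from the gist:

import math
import requests
import pandas as pd
from flatten_json import flatten

LOGIN, AUTHTOKEN = "your-login", "your-authtoken"  # placeholder store credentials
PER_PAGE = 50

# 1. Count the products to know how many pages to request.
count = requests.get("https://api.jumpseller.com/v1/products/count.json",
                     params={"login": LOGIN, "authtoken": AUTHTOKEN}).json()["count"]

# 2. Page through the products and flatten each nested dict into one flat row.
rows = []
for page in range(1, math.ceil(count / PER_PAGE) + 1):
    products = requests.get("https://api.jumpseller.com/v1/products.json",
                            params={"login": LOGIN, "authtoken": AUTHTOKEN,
                                    "limit": PER_PAGE, "page": page}).json()
    rows.extend(flatten(item["product"]) for item in products)

# 3. Export the flat table to Excel and CSV.
df = pd.DataFrame(rows)
df.to_excel("productos_jumpseller.xlsx", index=False)
df.to_csv("productos_jumpseller.csv", index=False)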
@mauri870
mauri870 / tensorflow_audio_to_mfcc.py
Last active October 12, 2022 13:21
WAV audio to MFCC features in TensorFlow 1.15
import tensorflow as tf
# FIXME: audio_ops.decode_wav is deprecated, use tensorflow_io.IOTensor.from_audio
from tensorflow.contrib.framework.python.ops import audio_ops
# Enable eager execution for a more interactive frontend.
# If using the default graph mode, you'll probably need to run in a session.
tf.enable_eager_execution()
@tf.function
def audio_to_mfccs(
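The preview cuts off at the function signature; for context, a hedged sketch of the usual WAV-to-MFCC pipeline with tf.signal (frame sizes, mel range, and coefficient count are illustrative defaults, not necessarily the gist's values):

import tensorflow as tf  # written against the TF 1.15-era tf.signal API

def wav_to_mfccs(wav_path, sample_rate=16000, n_mels=40, n_mfcc=13):
    # Decode a 16-bit PCM WAV file into mono floats in [-1, 1].
    audio = tf.audio.decode_wav(tf.io.read_file(wav_path), desired_channels=1).audio[:, 0]
    # Short-time Fourier transform -> magnitude spectrogram.
    fft_length = 512
    stft = tf.signal.stft(audio, frame_length=400, frame_step=160, fft_length=fft_length)
    spectrogram = tf.abs(stft)
    # Warp the linear frequency bins onto a mel filterbank, then take the log.
    mel_matrix = tf.signal.linear_to_mel_weight_matrix(
        num_mel_bins=n_mels, num_spectrogram_bins=fft_length // 2 + 1,
        sample_rate=sample_rate, lower_edge_hertz=20.0, upper_edge_hertz=8000.0)
    log_mel = tf.math.log(tf.tensordot(spectrogram, mel_matrix, 1) + 1e-6)
    # Keep the first few cepstral coefficients as the MFCC features.
    return tf.signal.mfccs_from_log_mel_spectrograms(log_mel)[..., :n_mfcc]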
@feroult
feroult / convert_to_pb.py
Last active March 3, 2024 23:22
Convert a trained OpenSeq2Seq model to a .pb file
import tensorflow as tf
from open_seq2seq.utils.utils import get_base_config, check_logdir, create_model
# Change with your configs here
args_S2T = ["--config_file=/data/training/v5/config-J5x3.py",
            "--mode=interactive_infer",
            "--logdir=/data/training/v5/models",
            "--batch_size_per_gpu=10",
            ]
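The gist presumably goes on to build the model from that config and freeze the graph; a generic sketch of the freeze step in TF 1.x (the session, output node name, and output paths below are placeholders, not the gist's actual values):

import tensorflow as tf

def freeze_to_pb(sess, output_node_names, out_dir="/data/training/v5", out_name="model.pb"):
    # Fold the session's variables into constants and write a single .pb file.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names)
    tf.train.write_graph(frozen, out_dir, out_name, as_text=False)

# Hypothetical usage once the OpenSeq2Seq model has been restored into `sess`:
# freeze_to_pb(sess, ["output_logits"])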
@zhanwenchen
zhanwenchen / export_tf_model.py
Last active August 31, 2021 22:51
Minimal code to load a trained TensorFlow model from a checkpoint and export it with SavedModelBuilder
import os
import tensorflow as tf
trained_checkpoint_prefix = 'checkpoints/dev'
export_dir = os.path.join('models', '0') # IMPORTANT: each model folder must be named '0', '1', ... Otherwise it will fail!
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Restore from checkpoint
    loader = tf.train.import_meta_graph(trained_checkpoint_prefix + '.meta')
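    # (A hedged sketch of how the export presumably continues; the SERVING tag
    #  and the bare add_meta_graph_and_variables call are assumptions, not the
    #  gist's exact code.)
    loader.restore(sess, trained_checkpoint_prefix)

    # Export the restored graph and variables in the SavedModel format.
    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING], strip_default_attrs=True)
    builder.save()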