Ryuichi Yamamoto (r9y9)

Quick-start.ipynb
model.py
# coding: utf-8
import torch
from torch import nn
from torch.nn import functional as F
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
from torch.nn.utils import weight_norm
from nnsvs.base import BaseModel, PredictionType
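The preview stops at the imports. As a rough illustration of how they fit together, here is a minimal sketch of a feed-forward model built on nnsvs.base.BaseModel; the prediction_type/forward signatures are assumptions, not taken from the gist.

# Minimal sketch of a model using the imports above (signatures are assumptions).
class FFN(BaseModel):
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def prediction_type(self):
        # Deterministic regression output (assumed enum member).
        return PredictionType.DETERMINISTIC

    def forward(self, x, lengths=None):
        # x: (batch, time, in_dim) input features.
        return self.net(x)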
r9y9 / Musical context features-v2.ipynb
Last active Nov 2, 2020
Musical context features-v2
a.py
# Pitch extraction fragment: YAAPT (amfm_decompy) is called here, while the
# pyworld.harvest call is left commented out at the end of the snippet.
if self.use_harvest:
    import amfm_decompy.pYAAPT as pYAAPT
    import amfm_decompy.basic_tools as basic
    import numpy as np

    signal = basic.SignalObj(wav_path)
    print(min_f0, max_f0)
    min_f0 = min(150, min_f0)  # TODO: Fix this property
    # 25 ms frame length, 5 ms frame shift
    pitch = pYAAPT.yaapt(signal, f0_min=min_f0, f0_max=max_f0,
                         frame_length=25, frame_space=5)
    f0 = pitch.samp_values.astype(np.float64)
    # 5 ms (0.005 s) per frame
    timeaxis = np.linspace(0, (pitch.samp_values.shape[0] - 1) * 0.005, len(pitch.samp_values))
    # f0, timeaxis = pyworld.harvest(x, fs, frame_period=self.frame_period,
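The trailing comment points at the WORLD harvest path. For reference, a minimal standalone sketch of that alternative, assuming a 5 ms frame period and a wav file loaded with soundfile (both assumptions, not from the gist):

# Sketch of the commented-out pyworld.harvest path (assumed 5 ms frame period).
import numpy as np
import pyworld
import soundfile as sf

x, fs = sf.read("sample.wav")    # hypothetical input file
x = x.astype(np.float64)         # pyworld expects float64 samples
f0, timeaxis = pyworld.harvest(x, fs, frame_period=5.0)
print(f0.shape, timeaxis.shape)  # one F0 value per 5 ms frame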
F0 analysis.ipynb
r9y9 / Kiritan singing voice synthesis demo.ipynb
Created May 3, 2020
Neural network based singing voice synthesis demo using the kiritan_singing database (Japanese)
prep_lab.sh
#!/bin/bash
# Prepare full and mono context label directories from MusicXML scores.
NEUTRINO_DIR=~/sp/NEUTRINO
dst_dir=./sinsy_lab
mkdir -p $dst_dir/full
mkdir -p $dst_dir/mono
for f in musicxml/*.xml
do
    name=$(basename "$f")
    # The gist preview is truncated here; presumably each score is converted to
    # $dst_dir/full and $dst_dir/mono labels using NEUTRINO's tools (assumption).
done
a.py
import torch
from torch import nn

torch.manual_seed(1234)

# model1: the list comprehension constructs two independent Linear layers,
# each with its own randomly initialized weights.
model1 = nn.Sequential(*[nn.Linear(1, 1) for _ in range(2)])

# model2: the same Linear instance is reused, so both entries share one weight.
layer = nn.Linear(1, 1)
model2 = nn.Sequential(*[layer for _ in range(2)])

print("Model1 (two different linear layers):")
assert not torch.equal(model1[0].weight, model1[1].weight)
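A natural counterpart check, not in the gist, confirms that model2's two entries really are one shared layer:

# model2[0] and model2[1] refer to the same nn.Linear instance, so weights are shared.
print("Model2 (one linear layer reused twice):")
assert model2[0] is model2[1]
assert torch.equal(model2[0].weight, model2[1].weight)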
r9y9 / Text processing (ja) for DNN TTS.ipynb
Created Nov 7, 2018
Text processing (ja) for DNN TTS.ipynb
r9y9 / Dockerfile
Created Jun 13, 2018
Dockerfile for Tacotron2
Dockerfile
FROM ubuntu:16.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential \
        git \
        curl \
        ca-certificates \
        libjpeg-dev \
        libpng-dev && \
    rm -rf /var/lib/apt/lists/*