
@tomekr
Last active May 1, 2020 00:02
$ python jukebox/sample.py --model=5b_lyrics --name=sample_5b --levels=3 --sample_length_in_seconds=20 --total_sample_length_in_seconds=180 --sr=44100 --n_samples=6 --hop_fraction=0.5,0.5,0.125
Using cuda True
{'name': 'sample_5b', 'levels': 3, 'sample_length_in_seconds': 20, 'total_sample_length_in_seconds': 180, 'sr': 44100, 'n_samples': 6, 'hop_fraction': (0.5, 0.5, 0.125)}
Setting sample length to 881920 (i.e. 19.998185941043083 seconds) to be multiple of 128
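For reference, the rounded value above follows directly from the command's hyperparameters: the requested duration is floored to a multiple of 128 samples (the top level's raw-to-tokens factor reported further down). A minimal sketch of that arithmetic, using illustrative variable names rather than Jukebox's own:

# Sketch of the sample-length rounding reported above (illustrative names only).
sr = 44100                        # --sr
seconds = 20                      # --sample_length_in_seconds
multiple = 128                    # top-level raw-to-tokens factor

requested = seconds * sr                            # 882000 samples
sample_length = (requested // multiple) * multiple
print(sample_length)                                # 881920
print(sample_length / sr)                           # 19.998185941043083 seconds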
Downloading from gce
Restored from /home/ubuntu/.cache/jukebox-assets/models/5b/vqvae.pth.tar
0: Loading vqvae in eval mode
Using apex FusedLayerNorm
Conditioning on 1 above level(s)
Checkpointing convs
Checkpointing convs
Loading artist IDs from /home/ubuntu/jukebox/jukebox/data/ids/v2_artist_ids.txt
Loading artist IDs from /home/ubuntu/jukebox/jukebox/data/ids/v2_genre_ids.txt
Level:0, Cond downsample:4, Raw to tokens:8, Sample length:65536
Downloading from gce
Restored from /home/ubuntu/.cache/jukebox-assets/models/5b/prior_level_0.pth.tar
0: Loading prior in eval mode
Conditioning on 1 above level(s)
Checkpointing convs
Checkpointing convs
Loading artist IDs from /home/ubuntu/jukebox/jukebox/data/ids/v2_artist_ids.txt
Loading artist IDs from /home/ubuntu/jukebox/jukebox/data/ids/v2_genre_ids.txt
Level:1, Cond downsample:4, Raw to tokens:32, Sample length:262144
Downloading from gce
Restored from /home/ubuntu/.cache/jukebox-assets/models/5b/prior_level_1.pth.tar
0: Loading prior in eval mode
Loading artist IDs from /home/ubuntu/jukebox/jukebox/data/ids/v2_artist_ids.txt
Loading artist IDs from /home/ubuntu/jukebox/jukebox/data/ids/v2_genre_ids.txt
Level:2, Cond downsample:None, Raw to tokens:128, Sample length:1048576
0: Converting to fp16 params
Downloading from gce
Restored from /home/ubuntu/.cache/jukebox-assets/models/5b_lyrics/prior_level_2.pth.tar
0: Loading prior in eval mode
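A quick sanity check on the three level configurations above, using only numbers already printed in the log: each prior's window is sample_length / raw_to_tokens = 8192 tokens, and the 881920-sample request corresponds to 881920 / 128 = 6890 top-level tokens, i.e. less than one full level-2 window. This is illustrative arithmetic, not Jukebox code:

# Illustrative arithmetic only; all values are copied from the log lines above.
levels = {0: (8, 65536), 1: (32, 262144), 2: (128, 1048576)}  # level: (raw_to_tokens, sample_length)
for level, (raw_to_tokens, sample_length) in levels.items():
    print(level, sample_length // raw_to_tokens)              # 8192 tokens of context per level
print(881920 // 128)                                          # 6890 top-level tokens to sample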
Sampling level 2
Sampling 6890 tokens for [0,6890]. Conditioning on 0 tokens
Ancestral sampling 3 samples with temp=0.99, top_k=0, top_p=0.0
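For context, temp=0.99 with top_k=0 and top_p=0.0 typically means plain temperature sampling over the full softmax, with no top-k or nucleus truncation. A generic sketch of that step (not Jukebox's internals; sizes are illustrative):

# Generic temperature sampling over logits (not Jukebox code).
import torch

def sample_from_logits(logits, temperature=0.99):
    probs = torch.softmax(logits / temperature, dim=-1)   # scale logits, then normalize
    return torch.multinomial(probs, num_samples=1)        # draw one token per row

logits = torch.randn(6, 2048)       # e.g. batch of 6 samples, 2048-way codebook
tokens = sample_from_logits(logits)
print(tokens.shape)                 # torch.Size([6, 1])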
Traceback (most recent call last):
  File "jukebox/sample.py", line 237, in <module>
    fire.Fire(run)
  File "/home/ubuntu/anaconda3/envs/jukebox/lib/python3.7/site-packages/fire/core.py", line 127, in Fire
    component_trace = _Fire(component, args, context, name)
  File "/home/ubuntu/anaconda3/envs/jukebox/lib/python3.7/site-packages/fire/core.py", line 366, in _Fire
    component, remaining_args)
  File "/home/ubuntu/anaconda3/envs/jukebox/lib/python3.7/site-packages/fire/core.py", line 542, in _CallCallable
    result = fn(*varargs, **kwargs)
  File "jukebox/sample.py", line 234, in run
    save_samples(model, device, hps, sample_hps)
  File "jukebox/sample.py", line 215, in save_samples
    ancestral_sample(labels, sampling_kwargs, priors, hps)
  File "jukebox/sample.py", line 123, in ancestral_sample
    zs = _sample(zs, labels, sampling_kwargs, priors, sample_levels, hps)
  File "jukebox/sample.py", line 101, in _sample
    zs = sample_level(zs, labels[level], sampling_kwargs[level], level, prior, total_length, hop_length, hps)
  File "jukebox/sample.py", line 86, in sample_level
    zs = sample_partial_window(zs, labels, sampling_kwargs, level, prior, total_length, hps)
  File "jukebox/sample.py", line 27, in sample_partial_window
    return sample_single_window(zs, labels, sampling_kwargs, level, prior, start, hps)
  File "jukebox/sample.py", line 68, in sample_single_window
    z_samples_i = prior.sample(n_samples=z_i.shape[0], z=z_i, z_conds=z_conds_i, y=y_i, **sampling_kwargs)
  File "/home/ubuntu/jukebox/jukebox/prior/prior.py", line 252, in sample
    x_cond, y_cond, prime = self.get_cond(z_conds, y)
  File "/home/ubuntu/jukebox/jukebox/prior/prior.py", line 232, in get_cond
    y_cond, y_pos = self.y_emb(y) if self.y_cond else (None, None)
  File "/home/ubuntu/anaconda3/envs/jukebox/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/jukebox/jukebox/prior/conditioners.py", line 150, in forward
    pos_emb = self.total_length_emb(total_length) + self.absolute_pos_emb(start, end) + self.relative_pos_emb(start/total_length, end/total_length)
  File "/home/ubuntu/anaconda3/envs/jukebox/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/jukebox/jukebox/prior/conditioners.py", line 111, in forward
    return self.emb(bins)
  File "/home/ubuntu/anaconda3/envs/jukebox/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/anaconda3/envs/jukebox/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 117, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "/home/ubuntu/anaconda3/envs/jukebox/lib/python3.7/site-packages/torch/nn/functional.py", line 1506, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: CUDA out of memory. Tried to allocate 450.00 MiB (GPU 0; 11.17 GiB total capacity; 10.69 GiB already allocated; 31.31 MiB free; 172.81 MiB cached)
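The failure itself is a plain CUDA out-of-memory during top-level sampling: per the error, 10.69 of the GPU's 11.17 GiB were already allocated when a further 450 MiB was requested. A small diagnostic sketch using standard torch.cuda calls (nothing Jukebox-specific) to check headroom before sampling:

# Standard PyTorch GPU-memory diagnostics (not part of Jukebox).
import torch

props = torch.cuda.get_device_properties(0)
total_gib = props.total_memory / 2**30
allocated_gib = torch.cuda.memory_allocated(0) / 2**30
print(f"GPU 0: {allocated_gib:.2f} GiB allocated of {total_gib:.2f} GiB total")
torch.cuda.empty_cache()   # release cached, unused blocks back to the driver

On a card this size the usual mitigations are to lower --n_samples or to switch --model to the smaller 1b_lyrics checkpoint; both flags already appear in the command at the top, so this only changes their values.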