Mutian He (mutiann)
mutiann / stepwise.py
Last active August 29, 2022 08:44
Stepwise Monotonic Attention
'''
Implementation for https://arxiv.org/abs/1906.00672
Tips: This code can be used as a drop-in replacement for BahdanauMonotonicAttention in TensorFlow code. As with its
base class in the TensorFlow seq2seq codebase, you may use "hard" mode for hard inference, or "parallel" mode for
training or soft inference. The "recurrent" mode of BahdanauMonotonicAttention is not supported.
If you have already trained another model using BahdanauMonotonicAttention, that model can be reused; otherwise you
may have to tune score_bias_init, which, similar to that in Raffel et al., 2017, is determined a priori to
suit the moving speed of the alignments, i.e. the speaking rate of your training corpus in TTS cases. So