template="""
input_features:
-
name: tokens
type: text
encoder: bert
config_path: uncased_L-12_H-768_A-12/bert_config.json
checkpoint_path: uncased_L-12_H-768_A-12/bert_model.ckpt
reduce_output: null
preprocessing:
word_tokenizer: bert
word_vocab_file: uncased_L-12_H-768_A-12/vocab.txt
padding_symbol: '[PAD]'
unknown_symbol: '[UNK]'
output_features:
-
name: intent
type: category
reduce_input: sum
num_fc_layers: 1
fc_size: 64
-
name: slots
type: sequence
decoder: tagger
text:
word_sequence_length_limit: 128
training:
batch_size: 32
learning_rate: 0.00002
"""
with open("model_definition.yaml", "w") as f:
f.write(template)
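
Once model_definition.yaml is on disk, training can be started from Ludwig's Python API (or its CLI). The sketch below is a minimal example, assuming the Ludwig 0.2.x API that this definition format targets and a hypothetical dataset.csv with tokens, intent, and slots columns; adjust the file and column names to your own data.

# Minimal training sketch, assuming Ludwig 0.2.x and a hypothetical
# dataset.csv containing 'tokens', 'intent', and 'slots' columns.
from ludwig.api import LudwigModel

# Build the model from the definition file written above.
model = LudwigModel(model_definition_file="model_definition.yaml")

# Train on the CSV dataset; returns training/validation statistics.
train_stats = model.train(data_csv="dataset.csv")

# Predict intents and slot tags (here, on the same hypothetical CSV).
predictions = model.predict(data_csv="dataset.csv")
print(predictions.head())

model.close()

The same run can be done with the 0.2.x command line, roughly: ludwig train --data_csv dataset.csv --model_definition_file model_definition.yaml.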