@aswalin
Last active June 8, 2022 04:47
Machine Translation Metric - BLEU Score
from nltk.translate.bleu_score import sentence_bleu

# Example 1: the candidate shares many words with the reference, but the
# word order is scrambled, so higher-order n-gram matches are rare and the
# score is low.
reference = [["the", "cat", "is", "sitting", "on", "the", "mat"]]
candidate = ["on", "the", "mat", "is", "a", "cat"]
score = sentence_bleu(reference, candidate)
print(score)

# Example 2: a shorter candidate with a repeated word; the brevity penalty
# and missing n-grams push the score down further.
candidate = ["there", "is", "cat", "sitting", "cat"]
score = sentence_bleu(reference, candidate)
print(score)
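
With candidates this short, NLTK typically warns that 3-gram and 4-gram counts are zero and the score collapses toward 0. A minimal sketch of two common workarounds, using NLTK's SmoothingFunction and the weights argument of sentence_bleu (the specific choice of method1 and of equal unigram/bigram weights is just an example, not the gist author's recommendation):

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "is", "sitting", "on", "the", "mat"]]
candidate = ["there", "is", "cat", "sitting", "cat"]

# Option 1: smooth zero n-gram counts so short candidates get a non-zero score.
smooth = SmoothingFunction().method1
print(sentence_bleu(reference, candidate, smoothing_function=smooth))

# Option 2: score only unigrams and bigrams by reweighting the n-gram orders.
print(sentence_bleu(reference, candidate, weights=(0.5, 0.5)))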
@meseretfetenebiru

Thank you very much, this really eased my worry about how to evaluate the NMT model I trained for my MSc thesis.
But how can I compute an average BLEU score? Should I take a sample, or run the code some number of times?
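
For a single score over a whole test set, NLTK also provides corpus_bleu, which aggregates n-gram counts across all sentence pairs rather than averaging per-sentence scores. A minimal sketch with placeholder sentences (the example data below is illustrative, not from the gist or the thesis in question):

from nltk.translate.bleu_score import corpus_bleu

# One list of references per hypothesis; each hypothesis may have several references.
list_of_references = [
    [["the", "cat", "is", "sitting", "on", "the", "mat"]],
    [["he", "reads", "a", "book"]],
]
hypotheses = [
    ["the", "cat", "is", "sitting", "on", "a", "mat"],
    ["he", "is", "reading", "a", "book"],
]

# Corpus-level BLEU over the whole test set.
print(corpus_bleu(list_of_references, hypotheses))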
