Skip to content

Instantly share code, notes, and snippets.

@CookieBox26
Last active June 13, 2020 07:21
Show Gist options
  • Star 0 You must be signed in to star a gist
  • Fork 0 You must be signed in to fork a gist
  • Save CookieBox26/9e8bc07509cfda8536370609c8d9f014 to your computer and use it in GitHub Desktop.
Save CookieBox26/9e8bc07509cfda8536370609c8d9f014 to your computer and use it in GitHub Desktop.
アテンション機構付きの seq2seq モデルで機械翻訳する(PyTorch チュートリアル)
Display the source blob
Display the rendered blob
Raw
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# アテンション機構付きの seq2seq モデルで機械翻訳する(PyTorch チュートリアル)\n",
"\n",
"参考文献の 1 つ目のチュートリアルをやります。コードの順序を前後させていたりデバッグプリントを入れていることがあります。自分の誤りは自分に帰属します。何か問題がありましたら以下からご連絡いただけますと幸いです。 \n",
"https://github.com/CookieBox26/ToyBox/issues\n",
"\n",
"### 参考文献\n",
"\n",
"- [NLP FROM SCRATCH: TRANSLATION WITH A SEQUENCE TO SEQUENCE NETWORK AND ATTENTION](https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html)\n",
" - 本記事でなぞるチュートリアル。seq2seq モデルでフランス語を英語に機械翻訳する。\n",
"- [[1409.3215] Sequence to Sequence Learning with Neural Networks](https://arxiv.org/abs/1409.3215)\n",
" - チュートリアル中の随所の sequence to sequence network という文字列からリンクがある論文。この論文では英仏翻訳している。\n",
"- [フランス語の否定文 - Wikipedia](https://ja.wikipedia.org/wiki/%E3%83%95%E3%83%A9%E3%83%B3%E3%82%B9%E8%AA%9E%E3%81%AE%E5%90%A6%E5%AE%9A%E6%96%87)\n",
" - 「ne ... pas に代表されるように否定が ne を含む 2 語で表される」(ことからも仏英翻訳は単語を単語に翻訳すればいいのではないことがわかる)。\n",
"- [[1406.1078] Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation](https://arxiv.org/abs/1406.1078)\n",
" - これもチュートリアル冒頭で2回繰り返し紹介されている論文。この論文で GRU が提案、導入された。\n",
"- [[1409.0473] Neural Machine Translation by Jointly Learning to Align and Translate](https://arxiv.org/abs/1409.0473)\n",
" - これもチュートリアル冒頭で2回繰り返し紹介されている論文。\n",
"\n",
"<h3>目次</h3>\n",
"<ul>\n",
" <li><a href=\"#s1\">データの準備</a></li>\n",
" <li><a href=\"#s2\">seq2seq モデル</a></li>\n",
" <li><a href=\"#s3\">アテンションデコーダの導入</a></li>\n",
" <li><a href=\"#s4\">モデルの訓練</a></li>\n",
" <li><a href=\"#s5\">訓練結果</a></li>\n",
"</ul>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2 id=\"s1\" style=\"background: black; padding: 0.7em 1em 0.5em;color:white;\">データの準備</h2>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table width=\"100%\">\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">このチュートリアルではフランス語を英語に機械翻訳するんですね。フランス語がわからないので結果がすごいのかすごくないのかわかりにくそうですが…自分がわかる言語にカスタマイズしてみるのはまずチュートリアルをなぞった後の方がよさそうですね。素直に https://download.pytorch.org/tutorial/data.zip から data/eng-fra.txt をダウンロードします。135842 行ありますね。早速中身をみてみましょう。</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"◆ データの冒頭\n",
"Go.\tVa !\n",
"Run!\tCours !\n",
"Run!\tCourez !\n",
"Wow!\tÇa alors !\n",
"Fire!\tAu feu !\n",
"Help!\tÀ l'aide !\n",
"Jump.\tSaute.\n",
"Stop!\tÇa suffit !\n",
"Stop!\tStop !\n",
"Stop!\tArrête-toi !\n"
]
}
],
"source": [
"print('◆ データの冒頭')\n",
"with open('./data/eng-fra.txt', mode='r') as ifile:\n",
" for i, line in enumerate(ifile):\n",
" if i == 10:\n",
" break\n",
" print(line.strip())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table width=\"100%\">\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">…ファイルは1語文から始まって徐々に語数の多い文章になっているようですが、Run! や Stop! に対応するフランス語文が複数あるのが気になりますね…。まあいいです、それで、各単語を one-hot ベクトルにするんですね。今回のチュートリアルでは各言語ごとに語彙を数千単語のみに絞るようです。「ちょっとごまかします」とあるように、現実的には単語数はこれでは足りないということですね。one-hot ベクトル化する前に各文章をプレ処理するんですが(以下)、やることとしては「小文字化」「アスキーコード化」「句点と感嘆符と疑問符の切り離し」「アルファベットと句点と感嘆符と疑問符以外の除去」でしょうか。…最終的に単語ベクトルを使用するのならば、文字種をアスキーに限定する必要があるんでしょうか? まあどうでもいいですが…。</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"◆ データの冒頭にプレ処理を適用\n",
"------------------------------\n",
"オリジナル Go. \t Va !\n",
"ASCII文字化 Go. \t Va !\n",
"記号トリム go . \t va !\n",
"------------------------------\n",
"オリジナル Run! \t Cours !\n",
"ASCII文字化 Run! \t Cours !\n",
"記号トリム run ! \t cours !\n",
"------------------------------\n",
"オリジナル Run! \t Courez !\n",
"ASCII文字化 Run! \t Courez !\n",
"記号トリム run ! \t courez !\n",
"------------------------------\n",
"オリジナル Wow! \t Ça alors !\n",
"ASCII文字化 Wow! \t Ca alors !\n",
"記号トリム wow ! \t ca alors !\n",
"------------------------------\n",
"オリジナル Fire! \t Au feu !\n",
"ASCII文字化 Fire! \t Au feu !\n",
"記号トリム fire ! \t au feu !\n",
"------------------------------\n",
"オリジナル Help! \t À l'aide !\n",
"ASCII文字化 Help! \t A l'aide !\n",
"記号トリム help ! \t a l aide !\n",
"------------------------------\n",
"オリジナル Jump. \t Saute.\n",
"ASCII文字化 Jump. \t Saute.\n",
"記号トリム jump . \t saute .\n",
"------------------------------\n",
"オリジナル Stop! \t Ça suffit !\n",
"ASCII文字化 Stop! \t Ca suffit !\n",
"記号トリム stop ! \t ca suffit !\n",
"------------------------------\n",
"オリジナル Stop! \t Stop !\n",
"ASCII文字化 Stop! \t Stop !\n",
"記号トリム stop ! \t stop !\n",
"------------------------------\n",
"オリジナル Stop! \t Arrête-toi !\n",
"ASCII文字化 Stop! \t Arrete-toi !\n",
"記号トリム stop ! \t arrete toi !\n"
]
}
],
"source": [
"import unicodedata\n",
"import re\n",
"\n",
"\n",
"# ユニコード文字集合をアスキー文字集合だけで表現する関数\n",
"# ここでは、元文字列の各文字を NFD という正規化形式で分解することで\n",
"# アクセント記号を分離し、アクセント記号は除去する\n",
"# 元コメント:\n",
"# Turn a Unicode string to plain ASCII, thanks to\n",
"# https://stackoverflow.com/a/518232/2809427\n",
"def unicodeToAscii(s):\n",
" return ''.join(\n",
" c for c in unicodedata.normalize('NFD', s)\n",
" if unicodedata.category(c) != 'Mn'\n",
" )\n",
"\n",
"\n",
"# 小文字化、アスキーコード化、句点と感嘆符と疑問符の切り離し、記号除去する関数\n",
"def normalizeString(s):\n",
" s = unicodeToAscii(s.lower().strip()) # 小文字化、アスキーコード化\n",
" s = re.sub(r\"([.!?])\", r\" \\1\", s) # 任意の文字に続く . ! ? の前に空白を挟む\n",
" s = re.sub(r\"[^a-zA-Z.!?]+\", r\" \", s) # アルファベット . ! ? 以外は除去\n",
" return s\n",
"\n",
"\n",
"print('◆ データの冒頭にプレ処理を適用')\n",
"with open('./data/eng-fra.txt', mode='r') as ifile:\n",
" for i, line in enumerate(ifile):\n",
" if i == 10:\n",
" break\n",
" pair = line.strip().split('\\t')\n",
" print('-'*30)\n",
" print('オリジナル ', pair[0], '\\t', pair[1])\n",
" print('ASCII文字化 ', unicodeToAscii(pair[0]), '\\t', unicodeToAscii(pair[1]))\n",
" print('記号トリム ', normalizeString(pair[0]), '\\t', normalizeString(pair[1]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table width=\"100%\">\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">そして今回は簡単のために、全ての文章を学習するのではなく、「10単語未満」「特定のフレーズで始まる」文章に絞るようですね。</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# 今回は10単語未満の文章に絞る\n",
"MAX_LENGTH = 10\n",
"\n",
"# 今回は英文側が以下のフレーズで始まる文章に絞る\n",
"eng_prefixes = (\n",
" \"i am \", \"i m \",\n",
" \"he is\", \"he s \",\n",
" \"she is\", \"she s \",\n",
" \"you are\", \"you re \",\n",
" \"we are\", \"we re \",\n",
" \"they are\", \"they re \"\n",
")\n",
"\n",
"\n",
"def filterPair(p):\n",
" return len(p[0].split(' ')) < MAX_LENGTH and \\\n",
" len(p[1].split(' ')) < MAX_LENGTH and \\\n",
" p[1].startswith(eng_prefixes)\n",
"\n",
"\n",
"# 文章ペアのリストをフィルタする関数\n",
"def filterPairs(pairs):\n",
" return [pair for pair in pairs if filterPair(pair)]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table width=\"100%\">\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">ここまでで用意した関数を利用してデータを生成する処理が以下ですね。まずすべての文章ペアをロードし、対象の文章ペアに絞り込んだ上で、全ての単語の頻度をカウントしながらインデックスをふっています。</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"135842 ペアを読み込みました\n",
"10599 ペアに絞り込みました\n",
"単語数は以下でした\n",
"fra 4345\n",
"eng 2803\n"
]
}
],
"source": [
"SOS_token = 0\n",
"EOS_token = 1\n",
"\n",
"\n",
"# ある語の語彙を管理するクラス\n",
"# 文章を流し込んでいくことで単語にインデックスをふり各単語の頻度もカウントする\n",
"class Lang:\n",
" def __init__(self, name):\n",
" self.name = name\n",
" self.word2index = {}\n",
" self.word2count = {}\n",
" self.index2word = {0: \"SOS\", 1: \"EOS\"}\n",
" self.n_words = 2 # Count SOS and EOS\n",
"\n",
" def addSentence(self, sentence):\n",
" for word in sentence.split(' '):\n",
" self.addWord(word)\n",
"\n",
" def addWord(self, word):\n",
" if word not in self.word2index:\n",
" self.word2index[word] = self.n_words\n",
" self.word2count[word] = 1\n",
" self.index2word[self.n_words] = word\n",
" self.n_words += 1\n",
" else:\n",
" self.word2count[word] += 1\n",
"\n",
"\n",
"# X語 Y語 のタグ区切り文章ペアがあるファイルから文章を正規化しながら読み取り、\n",
"# 文章ペアのリストを取り出す関数\n",
"# 順序を Y語 X語 に入れ替えて取り出すこともできる\n",
"def readLangs(lang1, lang2, reverse=False):\n",
" lines = open('./data/%s-%s.txt' % (lang1, lang2), encoding='utf-8').\\\n",
" read().strip().split('\\n')\n",
" pairs = [[normalizeString(s) for s in l.split('\\t')] for l in lines]\n",
"\n",
" if reverse:\n",
" pairs = [list(reversed(p)) for p in pairs]\n",
" input_lang = Lang(lang2)\n",
" output_lang = Lang(lang1)\n",
" else:\n",
" input_lang = Lang(lang1)\n",
" output_lang = Lang(lang2)\n",
"\n",
" return input_lang, output_lang, pairs\n",
"\n",
"\n",
"# ファイルから文章を読み込み、対象の文章にフィルタし、語彙を作成\n",
"def prepareData(lang1, lang2, reverse=False):\n",
" input_lang, output_lang, pairs = readLangs(lang1, lang2, reverse) # ファイルから文章取得\n",
" print(\"%s ペアを読み込みました\" % len(pairs))\n",
" pairs = filterPairs(pairs) # 10単語未満で特定のフレーズから始まる文章に絞り込み\n",
" print(\"%s ペアに絞り込みました\" % len(pairs))\n",
" # 語彙作成\n",
" for pair in pairs:\n",
" input_lang.addSentence(pair[0])\n",
" output_lang.addSentence(pair[1])\n",
" print(\"単語数は以下でした\")\n",
" print(input_lang.name, input_lang.n_words)\n",
" print(output_lang.name, output_lang.n_words)\n",
" return input_lang, output_lang, pairs\n",
"\n",
"\n",
"input_lang, output_lang, pairs = prepareData('eng', 'fra', True)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"◆ データの冒頭\n",
"['j ai ans .', 'i m .']\n",
"['je vais bien .', 'i m ok .']\n",
"['ca va .', 'i m ok .']\n",
"['je suis gras .', 'i m fat .']\n",
"['je suis gros .', 'i m fat .']\n",
"['je suis en forme .', 'i m fit .']\n",
"['je suis touche !', 'i m hit !']\n",
"['je suis touchee !', 'i m hit !']\n",
"['je suis malade .', 'i m ill .']\n",
"['je suis triste .', 'i m sad .']\n"
]
}
],
"source": [
"print('◆ データの冒頭')\n",
"for i, pair in enumerate(pairs):\n",
" if i == 10:\n",
" break\n",
" print(pair)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table width=\"100%\">\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">データが用意できましたが、これは単に文章のペアたちですから PyTorch の機械学習モデルに入れることはできませんね。単語に split して各単語をインデックスに直して PyTorch のテンソルにする必要があります。そのための関数を先に用意しておきましょう。文末には文末トークンを付けるようですね。</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"◆ 生データ\n",
"['j ai ans .', 'i m .']\n",
"\n",
"◆ インプットデータ\n",
"torch.Size([5, 1])\n",
"tensor([[2],\n",
" [3],\n",
" [4],\n",
" [5],\n",
" [1]])\n",
"\n",
"◆ ターゲットデータ\n",
"torch.Size([4, 1])\n",
"tensor([[2],\n",
" [3],\n",
" [4],\n",
" [1]])\n"
]
}
],
"source": [
"import torch\n",
"\n",
"\n",
"def indexesFromSentence(lang, sentence):\n",
" return [lang.word2index[word] for word in sentence.split(' ')]\n",
"\n",
"\n",
"def tensorFromSentence(lang, sentence):\n",
" indexes = indexesFromSentence(lang, sentence)\n",
" indexes.append(EOS_token)\n",
" return torch.tensor(indexes, dtype=torch.long, device='cpu').view(-1, 1)\n",
"\n",
"\n",
"def tensorsFromPair(pair):\n",
" input_tensor = tensorFromSentence(input_lang, pair[0])\n",
" target_tensor = tensorFromSentence(output_lang, pair[1])\n",
" return (input_tensor, target_tensor)\n",
"\n",
"\n",
"print('◆ 生データ')\n",
"print(pairs[0])\n",
"(input_tensor, target_tensor) = tensorsFromPair(pairs[0])\n",
"print('\\n◆ インプットデータ')\n",
"print(input_tensor.size())\n",
"print(input_tensor)\n",
"print('\\n◆ ターゲットデータ')\n",
"print(target_tensor.size())\n",
"print(target_tensor)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2 id=\"s2\" style=\"background: black; padding: 0.7em 1em 0.5em;color:white;\">seq2seq モデル</h2>\n",
"<table width=\"100%\">\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">肝心のモデルの話に入っていきますね。ここでは seq2seq モデルといって、エンコーダの RNN とデコーダの RNN をもつモデルを採用するのでしょうか。エンコーダが入力された文章を1つの特徴ベクトル(コンテクストベクトル)に変換し、デコーダがそれを出力文章に変換するようです。エンコードした特徴の空間の1点1点が入力された文章の「意味」なのだというようにもありますね。…というかこのようなモデルを seq2seq モデルというのですね。てっきり入力も出力もシーケンスなら何でも seq2seq モデルとよぶのかと。</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/2.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">チュートリアルが Sequence to Sequence network からリンクしているのは以下の論文だね。\n",
"<ul style=\"margin:0.3em 0\">\n",
"<li><a href=\"https://arxiv.org/abs/1409.3215\">[1409.3215] Sequence to Sequence Learning with Neural Networks</a></li>\n",
"</ul>\n",
"2014年の論文で、シーケンスをシーケンスにマッピングする汎用的なアプローチを提案すると。具体的には、入力シーケンスを多層LSTMで特徴ベクトルにエンコードし、それをまた別の多層LSTMで出力シーケンスにデコードするみたい。</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">はあ。その seq2seq モデルの方が RNN より機械翻訳に適しているとありますね。翻訳は単語を受け取る度に単語を出すという類のものではないからです。フランス語と英語では chat noir と black cat のように形容詞の位置も違いますし、それにフランス語には ne/pas 構造もありますし…ne/pas 構造とは?</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/2.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">フランス語の否定文は、多くの言語と違って、動詞を ne と pas で挟むみたいだね。\n",
"<ul style=\"margin:0.3em 0\">\n",
"<li><a href=\"https://ja.wikipedia.org/wiki/%E3%83%95%E3%83%A9%E3%83%B3%E3%82%B9%E8%AA%9E%E3%81%AE%E5%90%A6%E5%AE%9A%E6%96%87\">フランス語の否定文 - Wikipedia</a></li>\n",
"</ul>\n",
"だから必然的に単語数自体がずれる。先の前置修飾と後置修飾の違いもあるし、特に英仏翻訳では seq2seq モデルが適しているのかもしれない。\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">ええ…フランス語は否定文の形式が変わっているんですね…しかしその記事の言語学的なサイクルは理解できるような。当初は動詞の前に否定表現を付けて、強調のために動詞の後にも否定表現を付けて、やがて後者のみが残るという。まあそれで、そのように翻訳というのはまず入力文を全て読み取った後出力文にするのがよいので、エンコーダとデコーダを構築するわけですが、エンコーダとデコーダに含まれている GRU なる層は何でしょう?\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/2.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">チュートリアル冒頭で紹介があった以下の論文で導入されていて、LSTMより少しシンプルになっている再帰ニューラルネットワークだね。LSTM よりも自由度は少ないけどタスクの種類やデータサイズによってはLSTMと同等かそれ以上の性能が見込めるみたい。<ul style=\"margin:0.3em 0\">\n",
"<li><a href=\"https://arxiv.org/abs/1406.1078\">[1406.1078] Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation</a></li>\n",
"</ul>\n",
"もっとも、torch.nn.GRU は Fully Gated Unit なんだけど arxiv:1406.1078 の定式化とは少し違っている。後の論文で改良されたとかかな? 参考文献がなかったからわからないや。\n",
"<ul style=\"margin:0.3em 0\">\n",
"<li><a href=\"https://pytorch.org/docs/master/generated/torch.nn.GRU.html\">https://pytorch.org/docs/master/generated/torch.nn.GRU.html</a></li>\n",
"</ul>\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">シンプルにしたLSTM? いや確かに LSTM よりすっきりしていますが、記憶セル c がないんですね。むしろ出力 h とされているものが記憶セルに近そうな…。\n",
"</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h4>vanilla RNN と LSTM</h4>\n",
"<img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/lstm_gru_1.png\" width=\"600px\">\n",
"<h4>GRU</h4>\n",
"<img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/lstm_gru_2.png\" width=\"600px\">\n",
"<h4>torch.nn.GRU</h4>\n",
"<img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/lstm_gru_3.png\" width=\"600px\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table width=\"100%\">\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">…まあそれで、エンコーダとデコーダは具体的には以下のコードですね。単語インデックスを高次元に埋め込んで GRU するだけです。エンコード時は特徴をどんどん積み重ねていき、デコード時は特徴をどんどん紐解いていくとイメージすればよいのでしょうか。コードの下に図も描いておきました。\n",
"</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"◆ エンコーダの訓練対象パラメータ\n",
"embedding.weight torch.Size([4345, 256])\n",
"gru.weight_ih_l0 torch.Size([768, 256])\n",
"gru.weight_hh_l0 torch.Size([768, 256])\n",
"gru.bias_ih_l0 torch.Size([768])\n",
"gru.bias_hh_l0 torch.Size([768])\n",
"\n",
"◆ デコーダの訓練対象パラメータ\n",
"embedding.weight torch.Size([2803, 256])\n",
"gru.weight_ih_l0 torch.Size([768, 256])\n",
"gru.weight_hh_l0 torch.Size([768, 256])\n",
"gru.bias_ih_l0 torch.Size([768])\n",
"gru.bias_hh_l0 torch.Size([768])\n",
"out.weight torch.Size([2803, 256])\n",
"out.bias torch.Size([2803])\n"
]
}
],
"source": [
"import torch.nn as nn\n",
"from torch import optim\n",
"import torch.nn.functional as F\n",
"\n",
"class EncoderRNN(nn.Module):\n",
" def __init__(self, input_size, hidden_size):\n",
" super(EncoderRNN, self).__init__()\n",
" self.hidden_size = hidden_size\n",
" self.embedding = nn.Embedding(input_size, hidden_size)\n",
" self.gru = nn.GRU(hidden_size, hidden_size)\n",
"\n",
" def forward(self, input, hidden, debug=False):\n",
" if debug:\n",
" print('入力単語: ', input.size(), input)\n",
" print('入力特徴: ', hidden.size(), hidden[:,:,:3])\n",
" embedded = self.embedding(input).view(1, 1, -1)\n",
" if debug:\n",
" print('埋め込み後  : ', embedded.size(), embedded[:,:,:3])\n",
" output = embedded\n",
" output, hidden = self.gru(output, hidden)\n",
" if debug:\n",
" print('GRUの出力  : ', output.size(), output[:,:,:3])\n",
" print('GRUの隠れ状態: ', hidden.size(), hidden[:,:,:3])\n",
" print('(単語を1つずつ流しているので出力と隠れ状態は一致)')\n",
" return output, hidden\n",
"\n",
" def initHidden(self):\n",
" return torch.zeros(1, 1, self.hidden_size, device='cpu')\n",
"\n",
"class DecoderRNN(nn.Module):\n",
" def __init__(self, hidden_size, output_size):\n",
" super(DecoderRNN, self).__init__()\n",
" self.hidden_size = hidden_size\n",
"\n",
" self.embedding = nn.Embedding(output_size, hidden_size)\n",
" self.gru = nn.GRU(hidden_size, hidden_size)\n",
" self.out = nn.Linear(hidden_size, output_size)\n",
" self.softmax = nn.LogSoftmax(dim=1)\n",
"\n",
" def forward(self, input, hidden, debug=False):\n",
" if debug:\n",
" print('入力単語: ', input.size(), input)\n",
" print('入力特徴: ', hidden.size(), hidden[:,:,:3])\n",
" output = self.embedding(input).view(1, 1, -1)\n",
" output = F.relu(output)\n",
" output, hidden = self.gru(output, hidden)\n",
" if debug:\n",
" print('GRUの出力  : ', output.size(), output[:,:,:3])\n",
" print('GRUの隠れ状態: ', hidden.size(), hidden[:,:,:3])\n",
" print('(単語を1つずつ流しているので出力と隠れ状態は一致)')\n",
" output = self.softmax(self.out(output[0]))\n",
" if debug:\n",
" print('最終出力   : ', output.size(), output[:,:3])\n",
" return output, hidden\n",
"\n",
" def initHidden(self):\n",
" return torch.zeros(1, 1, self.hidden_size, device='cpu')\n",
"\n",
"hidden_size = 256\n",
"encoder = EncoderRNN(input_lang.n_words, hidden_size).to('cpu')\n",
"decoder = DecoderRNN(hidden_size, output_lang.n_words).to('cpu')\n",
"\n",
"print('◆ エンコーダの訓練対象パラメータ')\n",
"for name, param in encoder.named_parameters():\n",
" print(name.ljust(14), param.size())\n",
" \n",
"print('\\n◆ デコーダの訓練対象パラメータ')\n",
"for name, param in decoder.named_parameters():\n",
" print(name.ljust(14), param.size())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h4>エンコーダとデコーダ</h4>\n",
"<img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/encoder_decoder_simple.png\" width=\"720px\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table width=\"100%\">\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/2.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">仏英翻訳だから、エンコーダにはフランス語の文章を1単語ずつ入れていくことになるね。1単語入れる度にエンコーダから特徴が出力される。1つの文章を入れ終わった最終的な特徴がコンテクストベクトルだね。コンテクストが得られたら、デコーダに文頭トークンとコンテクストベクトルを入れることで1語ずつ単語を取り出す。文末トークンが出てきたらデコードは終了って感じなのかな。\n",
"</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h4>翻訳の流れ</h4>\n",
"<img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/encoder_decoder_simple_example.png\" width=\"720px\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table width=\"100%\">\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">実際に翻訳をシミュレーションしてみましょう。無論、まだエンコーダもデコーダも学習していませんので、エンコードとデコードはでたらめです。その状態でデコーダが文末トークンを出すまでデコードを続けることはできませんので、デコーダから3単語まで取り出してみましょう。\n",
"</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"◆ エンコーダに1つ目のデータを流してみる\n",
"\n",
"◇ インプットデータ\n",
"['j', 'ai', 'ans', '.', '<EOS>']\n",
"tensor([[2],\n",
" [3],\n",
" [4],\n",
" [5],\n",
" [1]])\n",
"\n",
"◇ 流す単語: j\n",
"入力単語: torch.Size([1]) tensor([2])\n",
"入力特徴: torch.Size([1, 1, 256]) tensor([[[0., 0., 0.]]])\n",
"埋め込み後  : torch.Size([1, 1, 256]) tensor([[[ 2.2760, -0.3344, -1.0179]]], grad_fn=<SliceBackward>)\n",
"GRUの出力  : torch.Size([1, 1, 256]) tensor([[[ 0.3953, 0.1218, -0.2691]]], grad_fn=<SliceBackward>)\n",
"GRUの隠れ状態: torch.Size([1, 1, 256]) tensor([[[ 0.3953, 0.1218, -0.2691]]], grad_fn=<SliceBackward>)\n",
"(単語を1つずつ流しているので出力と隠れ状態は一致)\n",
"\n",
"◇ 流す単語: ai\n",
"入力単語: torch.Size([1]) tensor([3])\n",
"入力特徴: torch.Size([1, 1, 256]) tensor([[[ 0.3953, 0.1218, -0.2691]]], grad_fn=<SliceBackward>)\n",
"埋め込み後  : torch.Size([1, 1, 256]) tensor([[[ 1.1942, 0.1348, -0.2386]]], grad_fn=<SliceBackward>)\n",
"GRUの出力  : torch.Size([1, 1, 256]) tensor([[[ 0.2106, 0.3436, -0.1310]]], grad_fn=<SliceBackward>)\n",
"GRUの隠れ状態: torch.Size([1, 1, 256]) tensor([[[ 0.2106, 0.3436, -0.1310]]], grad_fn=<SliceBackward>)\n",
"(単語を1つずつ流しているので出力と隠れ状態は一致)\n",
"\n",
"◇ 流す単語: ans\n",
"入力単語: torch.Size([1]) tensor([4])\n",
"入力特徴: torch.Size([1, 1, 256]) tensor([[[ 0.2106, 0.3436, -0.1310]]], grad_fn=<SliceBackward>)\n",
"埋め込み後  : torch.Size([1, 1, 256]) tensor([[[ 0.9443, -2.1165, 0.0393]]], grad_fn=<SliceBackward>)\n",
"GRUの出力  : torch.Size([1, 1, 256]) tensor([[[0.1656, 0.1648, 0.0926]]], grad_fn=<SliceBackward>)\n",
"GRUの隠れ状態: torch.Size([1, 1, 256]) tensor([[[0.1656, 0.1648, 0.0926]]], grad_fn=<SliceBackward>)\n",
"(単語を1つずつ流しているので出力と隠れ状態は一致)\n",
"\n",
"◇ 流す単語: .\n",
"入力単語: torch.Size([1]) tensor([5])\n",
"入力特徴: torch.Size([1, 1, 256]) tensor([[[0.1656, 0.1648, 0.0926]]], grad_fn=<SliceBackward>)\n",
"埋め込み後  : torch.Size([1, 1, 256]) tensor([[[-0.1602, 1.2487, -0.6368]]], grad_fn=<SliceBackward>)\n",
"GRUの出力  : torch.Size([1, 1, 256]) tensor([[[ 0.2673, 0.3713, -0.1989]]], grad_fn=<SliceBackward>)\n",
"GRUの隠れ状態: torch.Size([1, 1, 256]) tensor([[[ 0.2673, 0.3713, -0.1989]]], grad_fn=<SliceBackward>)\n",
"(単語を1つずつ流しているので出力と隠れ状態は一致)\n",
"\n",
"◇ 流す単語: <EOS>\n",
"入力単語: torch.Size([1]) tensor([1])\n",
"入力特徴: torch.Size([1, 1, 256]) tensor([[[ 0.2673, 0.3713, -0.1989]]], grad_fn=<SliceBackward>)\n",
"埋め込み後  : torch.Size([1, 1, 256]) tensor([[[-1.7645, -0.4975, -0.2195]]], grad_fn=<SliceBackward>)\n",
"GRUの出力  : torch.Size([1, 1, 256]) tensor([[[ 0.5425, 0.0323, -0.4908]]], grad_fn=<SliceBackward>)\n",
"GRUの隠れ状態: torch.Size([1, 1, 256]) tensor([[[ 0.5425, 0.0323, -0.4908]]], grad_fn=<SliceBackward>)\n",
"(単語を1つずつ流しているので出力と隠れ状態は一致)\n",
"\n",
"◇ コンテクストベクトル\n",
"torch.Size([1, 1, 256]) tensor([[[ 0.5425, 0.0323, -0.4908, 0.2291]]], grad_fn=<SliceBackward>)\n",
"\n",
"\n",
"◆ コンテクストベクトルをデコードしてみる\n",
"\n",
"◇ 1単語目を取り出す\n",
"入力単語: torch.Size([1, 1]) tensor([[0]])\n",
"入力特徴: torch.Size([1, 1, 256]) tensor([[[ 0.5425, 0.0323, -0.4908]]], grad_fn=<SliceBackward>)\n",
"GRUの出力  : torch.Size([1, 1, 256]) tensor([[[ 0.1666, 0.0783, -0.2216]]], grad_fn=<SliceBackward>)\n",
"GRUの隠れ状態: torch.Size([1, 1, 256]) tensor([[[ 0.1666, 0.0783, -0.2216]]], grad_fn=<SliceBackward>)\n",
"(単語を1つずつ流しているので出力と隠れ状態は一致)\n",
"最終出力   : torch.Size([1, 2803]) tensor([[-7.8665, -7.9430, -7.9727]], grad_fn=<SliceBackward>)\n",
"デコード結果: 2462 --> space\n",
"\n",
"◇ 2単語目を取り出す\n",
"入力単語: torch.Size([]) tensor(2462)\n",
"入力特徴: torch.Size([1, 1, 256]) tensor([[[ 0.1666, 0.0783, -0.2216]]], grad_fn=<SliceBackward>)\n",
"GRUの出力  : torch.Size([1, 1, 256]) tensor([[[0.0473, 0.1050, 0.2641]]], grad_fn=<SliceBackward>)\n",
"GRUの隠れ状態: torch.Size([1, 1, 256]) tensor([[[0.0473, 0.1050, 0.2641]]], grad_fn=<SliceBackward>)\n",
"(単語を1つずつ流しているので出力と隠れ状態は一致)\n",
"最終出力   : torch.Size([1, 2803]) tensor([[-7.8839, -7.9633, -7.9360]], grad_fn=<SliceBackward>)\n",
"デコード結果: 2779 --> attitude\n",
"\n",
"◇ 3単語目を取り出す\n",
"入力単語: torch.Size([]) tensor(2779)\n",
"入力特徴: torch.Size([1, 1, 256]) tensor([[[0.0473, 0.1050, 0.2641]]], grad_fn=<SliceBackward>)\n",
"GRUの出力  : torch.Size([1, 1, 256]) tensor([[[0.0404, 0.1833, 0.4872]]], grad_fn=<SliceBackward>)\n",
"GRUの隠れ状態: torch.Size([1, 1, 256]) tensor([[[0.0404, 0.1833, 0.4872]]], grad_fn=<SliceBackward>)\n",
"(単語を1つずつ流しているので出力と隠れ状態は一致)\n",
"最終出力   : torch.Size([1, 2803]) tensor([[-7.9407, -8.0491, -7.8913]], grad_fn=<SliceBackward>)\n",
"デコード結果: 773 --> fairly\n"
]
}
],
"source": [
"print('◆ エンコーダに1つ目のデータを流してみる')\n",
"input_words = pairs[0][0].split(' ') + ['<EOS>']\n",
"(input_tensor, target_tensor) = tensorsFromPair(pairs[0])\n",
"print('\\n◇ インプットデータ')\n",
"print(input_words)\n",
"print(input_tensor)\n",
"input_length = input_tensor.size(0)\n",
"hidden = encoder.initHidden()\n",
"for ei in range(input_length):\n",
" print('\\n◇ 流す単語: ' + input_words[ei])\n",
" output, hidden = encoder.forward(input_tensor[ei], hidden, debug=True)\n",
"print('\\n◇ コンテクストベクトル')\n",
"print(output.size(), output[:,:,:4])\n",
"\n",
"print('\\n\\n◆ コンテクストベクトルをデコードしてみる')\n",
"input = torch.tensor([[SOS_token]], device='cpu')\n",
"for di in range(3):\n",
" print('\\n◇ {}単語目を取り出す'.format(di + 1))\n",
" output, hidden = decoder.forward(input, hidden, debug=True)\n",
" topv, topi = output.data.topk(1)\n",
" print('デコード結果: {} --> {}'.format(topi.item(), output_lang.index2word[topi.item()]))\n",
" input = topi.squeeze().detach()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table width=\"100%\">\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">デコーダから順次単語が取り出されますが、無論意味のある文章にはみえませんね…。\n",
"</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2 id=\"s3\" style=\"background: black; padding: 0.7em 1em 0.5em;color:white;\">アテンションデコーダの導入</h2>\n",
"<table width=\"100%\">\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">でたらめのエンコーダとデコーダではまったく何もうれしくないですね。早く学習させたいです。どのように学習させるんです?\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/2.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">あ、実際には上で導入したデコーダじゃなくてアテンション付きデコーダをつかうんだって。\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">アテンション?\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/2.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">さっき導入したエンコーダとデコーダだと、エンコーダの最終ステップでの出力であるコンテキストベクトルが文章の情報を一身に背負わなければならなくて、負担が大きいらしい。だから、エンコーダの毎ステップの出力をすべてつかうことにする。\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">ええ…負担が大きいなら最初からそうすればよかったじゃないですか。…毎ステップの出力をすべてデコーダに突っ込むなら、もう GRU で再帰させなくてもいいのでは? 単に個々の単語をエンコードしてデコーダに渡せばいいでしょう?\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/2.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">それは違うと思うかな。ある単語がどんな意味的な特徴をもつかは、やっぱり文脈に依存するよ。だから、単語を個別にエンコードするんじゃなくて、GRU で再帰させながら各ステップの特徴をつくるのは理に適っていると思う(このチュートリアルでは一方向だけど、逆方向からも再帰させたくなってくるね)。無論、ニューラルネットはどんな関数も表現してくれることが期待されるけど、だからといって各単語をばらばらにエンコードしたものをデコーダに丸投げじゃそれこそデコーダの負担が大きすぎると思うよ。\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">それは確かにそんな気も…では、アテンションとは何です? 日本語に訳すと「注意」ですか? 何に注意する必要があるというんです?\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/2.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">デコード時に常にエンコーダの全てのステップの出力をつかうんだけど、実際は、最初の単語を出したいときに、文章全体の特徴をすべてつかいたいわけでもないってことだと思う。X 語から Y 語に翻訳するとき、もしかしたらこの2つの言語は文法が違って語順が違うかもしれないけど、Y 語に翻訳した文章の文頭に来るべき単語の意味は、元の X 語の文章の2番目の単語の意味に対応するとか、なんかそんな意味的な対応はあるはずなんだよね。その場合、最初の単語のデコード時には、エンコード時に2番目に吐き出された特徴だけに特に「注意」したい。その注意すべき箇所を指示するのがアテンション機構だね。注意すべきなのは1箇所とは限らないかもね。もしフランス語から翻訳するなら ne と pas の2箇所に注意することもあるかもしれないから。\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">なるほど…わかりましたよ! アテンションがうれしいのは、フランス語も英語も人間の自然言語だからですね? だって、フランス語を宇宙語に翻訳するのだったら、言葉の体系が違いすぎて、フランス語の何単語目に対応するかなんていうのが意味を成さないかもしれません。そのような場合は、やはり常に全てのステップの特徴を利用した方がいいでしょう?\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/2.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">え、うん…そこまで概念が違う宇宙人の言葉だったら翻訳という行為が意味を成すのかもあやしいんじゃないかな…。\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">下図でいうと、いま何番目の単語に注意すべきかが「アテンションの重み」ですね。もしこのベクトル第1成分と第3成分が大きくなっていたら元の文章の1ステップ目と3ステップ目の特徴に注意せよということですか。それでその重みにしたがって特徴を抜き出して、それをここでは attn_combine と名付けた層で前回デコードした単語の表現に混ぜ込んでいますね? …これ、attn_combine せずに抜き出した特徴をそのまま GRU に突っ込むのでは駄目なんですか? GRU は結局「前回デコードした単語」「元文章の注目すべき位置の特徴」「現在未デコードのコンテクスト」を受け取って出力する特徴をつくるのでしょう? attn_combine しなくても同じであるように思うんですが。\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/2.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">attn_combine した上で活性化しているからモデルとして等価じゃないよ。…そうだな、これは全くあやしいイメージだけど、「前回デコードした単語」が「水」だったとして、「水」はいま「飲むもの」という特徴と「浴びるもの」という特徴をもっているとするよ。それで、「元文章の注目すべき位置の特徴」が、「飲み食いする」という特徴をもっているとする。このとき、「水」の「飲むもの」という特徴の方だけを活性化した状態で GRU に流したいんじゃないかな。\n",
"</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h4>アテンション付きデコーダ</h4>\n",
"<img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/attndecoder.png\" width=\"720px\">\n",
"<h4>アテンション付きデコーダの場合の翻訳の流れ</h4>\n",
"<img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/encoder_attndecoder_example.png\" width=\"720px\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table width=\"100%\">\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/2.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">まあそれで、アテンション付きデコーダのコードは以下だね。元の文章のどこに注意するかをいつも計算するから、必然的に MAX_LENGTH を指定することが必要になるよ。forward メソッドが attn_weights も返しているけどこれは後々どこに注目しているか可視化したいからってだけだね。attn_weights を取り出しても次のステップでこれをまた入力するってことはないから。翻訳のシミュレーションもしてみるね。\n",
"</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"◆ アテンションデコーダの訓練対象パラメータ\n",
"embedding.weight torch.Size([2803, 256])\n",
"attn.weight torch.Size([10, 512])\n",
"attn.bias torch.Size([10])\n",
"attn_combine.weight torch.Size([256, 512])\n",
"attn_combine.bias torch.Size([256])\n",
"gru.weight_ih_l0 torch.Size([768, 256])\n",
"gru.weight_hh_l0 torch.Size([768, 256])\n",
"gru.bias_ih_l0 torch.Size([768])\n",
"gru.bias_hh_l0 torch.Size([768])\n",
"out.weight torch.Size([2803, 256])\n",
"out.bias torch.Size([2803])\n",
"\n",
"\n",
"◆ エンコーダに1つ目のデータを流してみる\n",
"\n",
"◇ インプットデータ\n",
"['j', 'ai', 'ans', '.', '<EOS>']\n",
"tensor([[2],\n",
" [3],\n",
" [4],\n",
" [5],\n",
" [1]])\n",
"\n",
"◇ 流す単語: j\n",
"入力単語: torch.Size([1]) tensor([2])\n",
"入力特徴: torch.Size([1, 1, 256]) tensor([[[0.0404, 0.1833, 0.4872]]], grad_fn=<SliceBackward>)\n",
"埋め込み後  : torch.Size([1, 1, 256]) tensor([[[ 2.2760, -0.3344, -1.0179]]], grad_fn=<SliceBackward>)\n",
"GRUの出力  : torch.Size([1, 1, 256]) tensor([[[0.4548, 0.2750, 0.0232]]], grad_fn=<SliceBackward>)\n",
"GRUの隠れ状態: torch.Size([1, 1, 256]) tensor([[[0.4548, 0.2750, 0.0232]]], grad_fn=<SliceBackward>)\n",
"(単語を1つずつ流しているので出力と隠れ状態は一致)\n",
"\n",
"◇ 流す単語: ai\n",
"入力単語: torch.Size([1]) tensor([3])\n",
"入力特徴: torch.Size([1, 1, 256]) tensor([[[0.4548, 0.2750, 0.0232]]], grad_fn=<SliceBackward>)\n",
"埋め込み後  : torch.Size([1, 1, 256]) tensor([[[ 1.1942, 0.1348, -0.2386]]], grad_fn=<SliceBackward>)\n",
"GRUの出力  : torch.Size([1, 1, 256]) tensor([[[0.2415, 0.4507, 0.0896]]], grad_fn=<SliceBackward>)\n",
"GRUの隠れ状態: torch.Size([1, 1, 256]) tensor([[[0.2415, 0.4507, 0.0896]]], grad_fn=<SliceBackward>)\n",
"(単語を1つずつ流しているので出力と隠れ状態は一致)\n",
"\n",
"◇ 流す単語: ans\n",
"入力単語: torch.Size([1]) tensor([4])\n",
"入力特徴: torch.Size([1, 1, 256]) tensor([[[0.2415, 0.4507, 0.0896]]], grad_fn=<SliceBackward>)\n",
"埋め込み後  : torch.Size([1, 1, 256]) tensor([[[ 0.9443, -2.1165, 0.0393]]], grad_fn=<SliceBackward>)\n",
"GRUの出力  : torch.Size([1, 1, 256]) tensor([[[0.1783, 0.2456, 0.2012]]], grad_fn=<SliceBackward>)\n",
"GRUの隠れ状態: torch.Size([1, 1, 256]) tensor([[[0.1783, 0.2456, 0.2012]]], grad_fn=<SliceBackward>)\n",
"(単語を1つずつ流しているので出力と隠れ状態は一致)\n",
"\n",
"◇ 流す単語: .\n",
"入力単語: torch.Size([1]) tensor([5])\n",
"入力特徴: torch.Size([1, 1, 256]) tensor([[[0.1783, 0.2456, 0.2012]]], grad_fn=<SliceBackward>)\n",
"埋め込み後  : torch.Size([1, 1, 256]) tensor([[[-0.1602, 1.2487, -0.6368]]], grad_fn=<SliceBackward>)\n",
"GRUの出力  : torch.Size([1, 1, 256]) tensor([[[ 0.2808, 0.4214, -0.1628]]], grad_fn=<SliceBackward>)\n",
"GRUの隠れ状態: torch.Size([1, 1, 256]) tensor([[[ 0.2808, 0.4214, -0.1628]]], grad_fn=<SliceBackward>)\n",
"(単語を1つずつ流しているので出力と隠れ状態は一致)\n",
"\n",
"◇ 流す単語: <EOS>\n",
"入力単語: torch.Size([1]) tensor([1])\n",
"入力特徴: torch.Size([1, 1, 256]) tensor([[[ 0.2808, 0.4214, -0.1628]]], grad_fn=<SliceBackward>)\n",
"埋め込み後  : torch.Size([1, 1, 256]) tensor([[[-1.7645, -0.4975, -0.2195]]], grad_fn=<SliceBackward>)\n",
"GRUの出力  : torch.Size([1, 1, 256]) tensor([[[ 0.5484, 0.0680, -0.4703]]], grad_fn=<SliceBackward>)\n",
"GRUの隠れ状態: torch.Size([1, 1, 256]) tensor([[[ 0.5484, 0.0680, -0.4703]]], grad_fn=<SliceBackward>)\n",
"(単語を1つずつ流しているので出力と隠れ状態は一致)\n",
"\n",
"◇ 特徴ベクトル(全ステップ分)\n",
"torch.Size([10, 256])\n",
"\n",
"\n",
"◆ アテンションデコーダでデコードしてみる\n",
"\n",
"◇ 1単語目を取り出す\n",
"入力単語: torch.Size([1, 1]) tensor([[0]])\n",
"入力特徴: torch.Size([1, 1, 256]) tensor([[[ 0.5484, 0.0680, -0.4703]]], grad_fn=<SliceBackward>)\n",
"埋め込み後  : torch.Size([1, 1, 256]) tensor([[[ 0.1673, 0.0688, -1.0163]]], grad_fn=<SliceBackward>)\n",
"アテンションの重み: torch.Size([1, 10]) tensor([[0.1398, 0.0967, 0.0651]], grad_fn=<SliceBackward>)\n",
"アテンションを整形: torch.Size([1, 1, 10])\n",
"エンコーダの全ステップの特徴を整形: torch.Size([1, 10, 256])\n",
"アテンション適用後特徴: torch.Size([1, 1, 256]) tensor([[[ 0.1741, 0.1527, -0.0307]]], grad_fn=<SliceBackward>)\n",
"中間特徴: torch.Size([1, 1, 256]) tensor([[[ 0.1741, 0.1527, -0.0307]]], grad_fn=<SliceBackward>)\n",
"GRUの出力  : torch.Size([1, 1, 256]) tensor([[[ 0.3746, 0.0825, -0.2456]]], grad_fn=<SliceBackward>)\n",
"GRUの隠れ状態: torch.Size([1, 1, 256]) tensor([[[ 0.3746, 0.0825, -0.2456]]], grad_fn=<SliceBackward>)\n",
"(単語を1つずつ流しているので出力と隠れ状態は一致)\n",
"デコード結果: 475 --> london\n",
"\n",
"◇ 2単語目を取り出す\n",
"入力単語: torch.Size([]) tensor(475)\n",
"入力特徴: torch.Size([1, 1, 256]) tensor([[[ 0.3746, 0.0825, -0.2456]]], grad_fn=<SliceBackward>)\n",
"埋め込み後  : torch.Size([1, 1, 256]) tensor([[[-1.0136, -0.0224, -0.3247]]], grad_fn=<SliceBackward>)\n",
"アテンションの重み: torch.Size([1, 10]) tensor([[0.1452, 0.1399, 0.0683]], grad_fn=<SliceBackward>)\n",
"アテンションを整形: torch.Size([1, 1, 10])\n",
"エンコーダの全ステップの特徴を整形: torch.Size([1, 10, 256])\n",
"アテンション適用後特徴: torch.Size([1, 1, 256]) tensor([[[ 0.1713, 0.1520, -0.0162]]], grad_fn=<SliceBackward>)\n",
"中間特徴: torch.Size([1, 1, 256]) tensor([[[ 0.1713, 0.1520, -0.0162]]], grad_fn=<SliceBackward>)\n",
"GRUの出力  : torch.Size([1, 1, 256]) tensor([[[ 0.2943, 0.0746, -0.1321]]], grad_fn=<SliceBackward>)\n",
"GRUの隠れ状態: torch.Size([1, 1, 256]) tensor([[[ 0.2943, 0.0746, -0.1321]]], grad_fn=<SliceBackward>)\n",
"(単語を1つずつ流しているので出力と隠れ状態は一致)\n",
"デコード結果: 1595 --> body\n",
"\n",
"◇ 3単語目を取り出す\n",
"入力単語: torch.Size([]) tensor(1595)\n",
"入力特徴: torch.Size([1, 1, 256]) tensor([[[ 0.2943, 0.0746, -0.1321]]], grad_fn=<SliceBackward>)\n",
"埋め込み後  : torch.Size([1, 1, 256]) tensor([[[ 1.9083, 0.1592, -0.1315]]], grad_fn=<SliceBackward>)\n",
"アテンションの重み: torch.Size([1, 10]) tensor([[0.0522, 0.0777, 0.1844]], grad_fn=<SliceBackward>)\n",
"アテンションを整形: torch.Size([1, 1, 10])\n",
"エンコーダの全ステップの特徴を整形: torch.Size([1, 10, 256])\n",
"アテンション適用後特徴: torch.Size([1, 1, 256]) tensor([[[ 0.1601, 0.1209, -0.0242]]], grad_fn=<SliceBackward>)\n",
"中間特徴: torch.Size([1, 1, 256]) tensor([[[ 0.1601, 0.1209, -0.0242]]], grad_fn=<SliceBackward>)\n",
"GRUの出力  : torch.Size([1, 1, 256]) tensor([[[ 0.0315, 0.0151, -0.1499]]], grad_fn=<SliceBackward>)\n",
"GRUの隠れ状態: torch.Size([1, 1, 256]) tensor([[[ 0.0315, 0.0151, -0.1499]]], grad_fn=<SliceBackward>)\n",
"(単語を1つずつ流しているので出力と隠れ状態は一致)\n",
"デコード結果: 2741 --> diplomat\n"
]
}
],
"source": [
"import torch\n",
"import torch.nn as nn\n",
"from torch import optim\n",
"import torch.nn.functional as F\n",
"\n",
"class AttnDecoderRNN(nn.Module):\n",
" def __init__(self, hidden_size, output_size, dropout_p=0.1, max_length=MAX_LENGTH):\n",
" super(AttnDecoderRNN, self).__init__()\n",
" self.hidden_size = hidden_size\n",
" self.output_size = output_size\n",
" self.dropout_p = dropout_p\n",
" self.max_length = max_length\n",
"\n",
" self.embedding = nn.Embedding(self.output_size, self.hidden_size)\n",
" self.attn = nn.Linear(self.hidden_size * 2, self.max_length)\n",
" self.attn_combine = nn.Linear(self.hidden_size * 2, self.hidden_size)\n",
" self.dropout = nn.Dropout(self.dropout_p)\n",
" self.gru = nn.GRU(self.hidden_size, self.hidden_size)\n",
" self.out = nn.Linear(self.hidden_size, self.output_size)\n",
"\n",
" def forward(self, input, hidden, encoder_outputs, debug=False):\n",
" if debug:\n",
" print('入力単語: ', input.size(), input)\n",
" print('入力特徴: ', hidden.size(), hidden[:,:,:3])\n",
" embedded = self.embedding(input).view(1, 1, -1)\n",
" embedded = self.dropout(embedded)\n",
" if debug:\n",
" print('埋め込み後  : ', embedded.size(), embedded[:,:,:3])\n",
"\n",
" attn_weights = F.softmax(\n",
" self.attn(torch.cat((embedded[0], hidden[0]), 1)), dim=1)\n",
" if debug:\n",
" print('アテンションの重み: ', attn_weights.size(), attn_weights[:,:3])\n",
" print('アテンションを整形: ', attn_weights.unsqueeze(0).size())\n",
" print('エンコーダの全ステップの特徴を整形: ', encoder_outputs.unsqueeze(0).size())\n",
" attn_applied = torch.bmm(attn_weights.unsqueeze(0),\n",
" encoder_outputs.unsqueeze(0))\n",
" if debug:\n",
" print('アテンション適用後特徴: ', attn_applied.size(), attn_applied[:,:, :3])\n",
"\n",
" output = torch.cat((embedded[0], attn_applied[0]), 1)\n",
" output = self.attn_combine(output).unsqueeze(0)\n",
" output = F.relu(output)\n",
" if debug:\n",
" print('中間特徴: ', attn_applied.size(), attn_applied[:,:, :3])\n",
" \n",
" output, hidden = self.gru(output, hidden)\n",
" if debug:\n",
" print('GRUの出力  : ', output.size(), output[:,:,:3])\n",
" print('GRUの隠れ状態: ', hidden.size(), hidden[:,:,:3])\n",
" print('(単語を1つずつ流しているので出力と隠れ状態は一致)')\n",
"\n",
" output = F.log_softmax(self.out(output[0]), dim=1)\n",
" return output, hidden, attn_weights\n",
"\n",
" def initHidden(self):\n",
" return torch.zeros(1, 1, self.hidden_size, device='cpu')\n",
"\n",
"\n",
"hidden_size = 256\n",
"del decoder\n",
"decoder = AttnDecoderRNN(hidden_size, output_lang.n_words).to('cpu')\n",
"\n",
"\n",
"print('\\n◆ アテンションデコーダの訓練対象パラメータ')\n",
"for name, param in decoder.named_parameters():\n",
" print(name.ljust(14), param.size())\n",
" \n",
"print('\\n\\n◆ エンコーダに1つ目のデータを流してみる')\n",
"input_words = pairs[0][0].split(' ') + ['<EOS>']\n",
"(input_tensor, target_tensor) = tensorsFromPair(pairs[0])\n",
"print('\\n◇ インプットデータ')\n",
"print(input_words)\n",
"print(input_tensor)\n",
"input_length = input_tensor.size(0)\n",
"encoder_hidden = encoder.initHidden()\n",
"encoder_outputs = torch.zeros(MAX_LENGTH, encoder.hidden_size, device='cpu')\n",
"for ei in range(input_length):\n",
" print('\\n◇ 流す単語: ' + input_words[ei])\n",
" output, hidden = encoder.forward(input_tensor[ei], hidden, debug=True)\n",
" encoder_outputs[ei] += output[0, 0]\n",
"print('\\n◇ 特徴ベクトル(全ステップ分)')\n",
"print(encoder_outputs.size())\n",
"\n",
"print('\\n\\n◆ アテンションデコーダでデコードしてみる')\n",
"input = torch.tensor([[SOS_token]], device='cpu')\n",
"for di in range(3):\n",
" print('\\n◇ {}単語目を取り出す'.format(di + 1))\n",
" output, hidden, decoder_attention = decoder.forward(input, hidden, encoder_outputs, debug=True)\n",
" topv, topi = output.data.topk(1)\n",
" print('デコード結果: {} --> {}'.format(topi.item(), output_lang.index2word[topi.item()]))\n",
" input = topi.squeeze().detach()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2 id=\"s4\" style=\"background: black; padding: 0.7em 1em 0.5em;color:white;\">モデルの訓練</h2>\n",
"<table width=\"100%\">\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">では、エンコーダとアテンション付きデコーダはどうやって訓練するのでしょう? まあ、翻訳をシミュレーションしてみたのでこうやって出てくる文章を正解の文章に寄らせていけばいいのはわかりますが、今回の場合、損失は何になるんでしょうか?\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/2.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">損失としては単に各ステップの出力の交差エントロピー(F.log_softmax + nn.NLLLoss)を足し上げてるね。あと、訓練時に“Teacher forcing”ということもするみたい。\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">何ですかそれは?\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/2.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">上のシミュレーションでもやったように、最初はでたらめな単語がデコードされてきちゃうよね。そうするとデコードの2ステップ目以降、前回の単語がでたらめな状態でデコードしていくことになっちゃって、これじゃなかなか学習が進まない。だから、確率的に前回の単語として正解の単語を入れちゃうってことらしい。そうすると収束が速いと。ただ反面、モデルが不安定になりやすいってある。カンニングしながら訓練しちゃってるようなものだしね…以下のチュートリアルのコードでは、正解の単語を入れる確率 teacher_forcing_ratio が 0.5 になっているけど、本当は徐々に下げていくものなんじゃないのかな…? 実際にはどうなんだろう…。\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">なるほど。「教師あり学習」というか、「教師がときどき代わりに回答を書き込んでくる学習」ですか。それで、以下の train が1対の文章ペアを入れてモデルを更新する関数ですね。\n",
"</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"\n",
"\n",
"teacher_forcing_ratio = 0.5\n",
"\n",
"# 1対の文章ペアを入れてモデルを更新する関数\n",
"def train(input_tensor, target_tensor, encoder, decoder, \n",
" encoder_optimizer, decoder_optimizer, criterion, \n",
" max_length=MAX_LENGTH):\n",
" encoder_hidden = encoder.initHidden()\n",
"\n",
" encoder_optimizer.zero_grad()\n",
" decoder_optimizer.zero_grad()\n",
"\n",
" input_length = input_tensor.size(0)\n",
" target_length = target_tensor.size(0)\n",
"\n",
" encoder_outputs = torch.zeros(max_length, encoder.hidden_size, device='cpu')\n",
"\n",
" loss = 0\n",
"\n",
" for ei in range(input_length):\n",
" encoder_output, encoder_hidden = encoder(\n",
" input_tensor[ei], encoder_hidden)\n",
" encoder_outputs[ei] = encoder_output[0, 0]\n",
"\n",
" decoder_input = torch.tensor([[SOS_token]], device='cpu')\n",
"\n",
" decoder_hidden = encoder_hidden\n",
"\n",
" use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False\n",
"\n",
" if use_teacher_forcing:\n",
" # デコードの2ステップ目以降、前ステップの単語として正解の単語を利用する\n",
" for di in range(target_length):\n",
" decoder_output, decoder_hidden, decoder_attention = decoder(\n",
" decoder_input, decoder_hidden, encoder_outputs)\n",
" loss += criterion(decoder_output, target_tensor[di])\n",
" decoder_input = target_tensor[di] # Teacher forcing\n",
"\n",
" else:\n",
" # デコードの2ステップ目以降、前ステップの単語としてモデルが予測した単語を利用する\n",
" for di in range(target_length):\n",
" decoder_output, decoder_hidden, decoder_attention = decoder(\n",
" decoder_input, decoder_hidden, encoder_outputs)\n",
" topv, topi = decoder_output.topk(1)\n",
" decoder_input = topi.squeeze().detach() # detach from history as input\n",
"\n",
" loss += criterion(decoder_output, target_tensor[di])\n",
" if decoder_input.item() == EOS_token:\n",
" break\n",
"\n",
" loss.backward()\n",
"\n",
" encoder_optimizer.step()\n",
" decoder_optimizer.step()\n",
"\n",
" return loss.item() / target_length"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table width=\"100%\">\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">以下の trainIters がたくさんの文章ペアに対して学習を回す関数ですね。ここでは n_iters を 75000 にしていますが、いま訓練対象の文章ペアは 10599 ですから、1つの文章が 7, 8 回選ばれている計算になりますね。…訓練する文章はランダムに選ばれていますが、短い文章から長い文章に向かって学習すると上手くいくなどはないのでしょうか? 人間の幼児も2語文、3語文から覚え始めると思いますし…しかし、それでは学習の序盤に短い文章にフィットしてしまうのでしょうか??\n",
"</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"import math\n",
"\n",
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"plt.switch_backend('agg')\n",
"import matplotlib.ticker as ticker\n",
"import numpy as np\n",
"\n",
"\n",
"def asMinutes(s):\n",
" m = math.floor(s / 60)\n",
" s -= m * 60\n",
" return '%dm %ds' % (m, s)\n",
"\n",
"\n",
"def timeSince(since, percent):\n",
" now = time.time()\n",
" s = now - since\n",
" es = s / (percent)\n",
" rs = es - s\n",
" return '%s (- %s)' % (asMinutes(s), asMinutes(rs))\n",
"\n",
"\n",
"def showPlot(points):\n",
" plt.figure()\n",
" fig, ax = plt.subplots()\n",
" # this locator puts ticks at regular intervals\n",
" loc = ticker.MultipleLocator(base=0.2)\n",
" ax.yaxis.set_major_locator(loc)\n",
" plt.plot(points)\n",
"\n",
"\n",
"def trainIters(encoder, decoder, n_iters, print_every=1000, plot_every=100, learning_rate=0.01):\n",
" start = time.time()\n",
" plot_losses = []\n",
" print_loss_total = 0 # Reset every print_every\n",
" plot_loss_total = 0 # Reset every plot_every\n",
"\n",
" encoder_optimizer = optim.SGD(encoder.parameters(), lr=learning_rate)\n",
" decoder_optimizer = optim.SGD(decoder.parameters(), lr=learning_rate)\n",
" training_pairs = [tensorsFromPair(random.choice(pairs))\n",
" for i in range(n_iters)]\n",
" criterion = nn.NLLLoss()\n",
"\n",
" for iter in range(1, n_iters + 1):\n",
" training_pair = training_pairs[iter - 1]\n",
" input_tensor = training_pair[0]\n",
" target_tensor = training_pair[1]\n",
"\n",
" loss = train(input_tensor, target_tensor, encoder,\n",
" decoder, encoder_optimizer, decoder_optimizer, criterion)\n",
" print_loss_total += loss\n",
" plot_loss_total += loss\n",
"\n",
" if iter % print_every == 0:\n",
" print_loss_avg = print_loss_total / print_every\n",
" print_loss_total = 0\n",
" print('%s (%d %d%%) %.4f' % (timeSince(start, iter / n_iters),\n",
" iter, iter / n_iters * 100, print_loss_avg))\n",
" torch.save(encoder.state_dict(), 'eng-fra-encoder')\n",
" torch.save(decoder.state_dict(), 'eng-fra-decoder')\n",
"\n",
" if iter % plot_every == 0:\n",
" plot_loss_avg = plot_loss_total / plot_every\n",
" plot_losses.append(plot_loss_avg)\n",
" plot_loss_total = 0\n",
"\n",
" showPlot(plot_losses)\n",
"\n",
"\n",
"hidden_size = 256\n",
"encoder1 = EncoderRNN(input_lang.n_words, hidden_size).to('cpu')\n",
"attn_decoder1 = AttnDecoderRNN(hidden_size, output_lang.n_words, dropout_p=0.1).to('cpu')\n",
"\n",
"\n",
"import os\n",
"\n",
"# 既に学習済みの場合はモデルをロード(学習には cpu で1時間かかった)\n",
"if os.path.isfile('eng-fra-encoder') and os.path.isfile('eng-fra-decoder'):\n",
" encoder1.load_state_dict(torch.load('eng-fra-encoder'))\n",
" attn_decoder1.load_state_dict(torch.load('eng-fra-decoder'))\n",
"else:\n",
" trainIters(encoder1, attn_decoder1, 75000, print_every=5000)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2 id=\"s5\" style=\"background: black; padding: 0.7em 1em 0.5em;color:white;\">訓練結果</h2>\n",
"<table width=\"100%\">\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">訓練したモデルにフランス語の文章を読み込ませてみると、そこそこの確率でぴったり正解の英文に翻訳してくれますね。しかし、今回は全てのデータをランダムに利用して学習していますから、これらは訓練データに含まれていた可能性も高いですね…。\n",
"</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"> je suis plutot heureux .\n",
"= i m fairly happy .\n",
"< i m fairly happy . <EOS>\n",
"\n",
"> elle est inapte pour le poste .\n",
"= she s unfit for the job .\n",
"< she is working for the job job . <EOS>\n",
"\n",
"> je n en suis pas entierement sure .\n",
"= i m not entirely sure .\n",
"< i m not entirely sure . <EOS>\n",
"\n",
"> nous ne sommes pas ici pour t arreter .\n",
"= we are not here to arrest you .\n",
"< we are not here to arrest you . <EOS>\n",
"\n",
"> c est un employe de bureau .\n",
"= he is an office worker .\n",
"< he is an office worker . <EOS>\n",
"\n",
"> elle est chanteuse .\n",
"= she is a singer .\n",
"< she is a singer . <EOS>\n",
"\n",
"> vous etes un bon client .\n",
"= you are a good customer .\n",
"< you are a good customer . <EOS>\n",
"\n",
"> vous vous etes trompe de numero .\n",
"= i m afraid you have the wrong number .\n",
"< you re the responsible love . you . <EOS>\n",
"\n",
"> vous n etes pas cense fumer ici .\n",
"= you are not supposed to smoke here .\n",
"< you are not supposed to smoke here . <EOS>\n",
"\n",
"> je suis le professeur .\n",
"= i m the teacher .\n",
"< i m the teacher . <EOS>\n",
"\n"
]
}
],
"source": [
"def evaluate(encoder, decoder, sentence, max_length=MAX_LENGTH):\n",
" with torch.no_grad():\n",
" input_tensor = tensorFromSentence(input_lang, sentence)\n",
" input_length = input_tensor.size()[0]\n",
" encoder_hidden = encoder.initHidden()\n",
" encoder_outputs = torch.zeros(max_length, encoder.hidden_size, device='cpu')\n",
"\n",
" for ei in range(input_length):\n",
" encoder_output, encoder_hidden = encoder(input_tensor[ei], encoder_hidden)\n",
" encoder_outputs[ei] += encoder_output[0, 0]\n",
"\n",
" decoder_input = torch.tensor([[SOS_token]], device='cpu') # SOS\n",
" decoder_hidden = encoder_hidden\n",
" decoded_words = []\n",
" decoder_attentions = torch.zeros(max_length, max_length)\n",
"\n",
" for di in range(max_length):\n",
" decoder_output, decoder_hidden, decoder_attention = decoder(\n",
" decoder_input, decoder_hidden, encoder_outputs)\n",
" decoder_attentions[di] = decoder_attention.data\n",
" topv, topi = decoder_output.data.topk(1)\n",
" if topi.item() == EOS_token:\n",
" decoded_words.append('<EOS>')\n",
" break\n",
" else:\n",
" decoded_words.append(output_lang.index2word[topi.item()])\n",
" decoder_input = topi.squeeze().detach()\n",
"\n",
" return decoded_words, decoder_attentions[:di + 1]\n",
"\n",
"\n",
"def evaluateRandomly(encoder, decoder, n=10):\n",
" for i in range(n):\n",
" pair = random.choice(pairs)\n",
" print('>', pair[0])\n",
" print('=', pair[1])\n",
" output_words, attentions = evaluate(encoder, decoder, pair[0])\n",
" output_sentence = ' '.join(output_words)\n",
" print('<', output_sentence)\n",
" print('')\n",
"\n",
"\n",
"evaluateRandomly(encoder1, attn_decoder1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table width=\"100%\">\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/2.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">このチュートリアルではアテンションの可視化もしているね(以下)。\n",
"</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"input = elle a cinq ans de moins que moi .\n",
"output = she is five years younger than me . <EOS>\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAWIAAAEZCAYAAACtuS94AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAfUklEQVR4nO3deZwdZZ3v8c+XgKDAiBpchi3goBiUNcQFVFT0RgdhRkFA0FHROCqKC3pxuaio9w4y6uAVlYC4b4ioXIxGQRzcSUJCIEEkl2UIbgQRAWVJ93f+qGo4NJ3u06TqVJ1zvm9e9eo6VXWe56nm5NfPeepZZJuIiGjORk0XICJi2CUQR0Q0LIE4IqJhCcQREQ1LII6IaFgCcUREwxKIIyIalkAcEdGwBOKIiIYlEEdET6jwbUlPaLosbZNAHBG98jxgH+DVTRekbRKII6JXjqYIwi+UtHHThWmTBOKIqJ2kmcCutr8HnA/8U8NFapUE4ojohZcBXy33P0uaJ+4jgTgieuFVFAEY24uBx0jartkitUcCcQw8SftK2rzcP0rSRyXt0HS5hoWkrYBP2L6h4/BxwMyGitQ6ysTwMegkrQB2B3YDPgecAbzE9jObLFfEmNSIYxisc1HjOJiiZnYqsGXDZRoKkl4jaedyX5I+K+kvklZI2rPp8rVFAnEMg1slvRM4CviupI2ATRou07A4Fri23D+C4lvJjsBbgY83VKbWSSCOYXAYcCdwtO3fA9sCJzdbpKGxzvbd5f6BwBds32T7fGDzBsvVKmkjjojaSLoE+EfgZuA64Nm2V5bnrrCd4c6kRhxDQNKLJF0l6ZayffJWSX9pulxD4gRgCUXzxLkdQfiZwNUNlqtVUiOOgSdpNfBC21c0XZZhVA5n3tL2zR3HNqeIP7c1V7L2yHjvGAZ/SBBu1MOBN0jatXy9Evik7T80WKZWSY24z0naa7Lzti/pVVnaStIpwKOBb1M8tAPA9jmNFWpISNoX+ApF/+2l5eG9gX8BjrT9s4aK1ioJxH1O0i+BvYAVgIAnUXzg7wBs+9kNFq8VJH12gsO2/aqeF2bIlJ/P19leNu74HsBptp/cTMnaJU0T/e+3wGtsXwYg6YnA+2wf0myx2sP2K+vOoxwyvbPt8yU9GNjY9q1159sH/m58EAawvVxSBtWUEogBSftR/CP6rKStgS1sX9N0ubr0+LEgDGD78jpXQJD0MGA72yvqyqMqkt5h+8OS/i9wv69+tt9UUT6vAeZTtIU+lqKf8qeB51SQdk/uoUaS9LDOB3XlwYeTXlv3GPpALOm9wBzg8RSzQ20CfAnYt8lyTcMKSWdQlBngSIpmispI+jFwEMXnZSnwR0k/s/3WKvOpwdgDuiU15/MGYC7wKwDbV0l6ZEVp9+oe6vIx4AeSjgPGnlfsDZxUngvSRoyk5cCewCW29yyPrbC9W7Ml646kzYDXAc8oD10EfMr2HRXmscz2npJeTVEbfm8//Y7GSNoCoOouU5J+ZfvJHb+njSk+T5X/fuq6hzpJOhB4B7ArRa1+FXCy7f/XaMFaZOhrxMBdti3JcE//xr5RBtyPUW/tYmNJjwFeAry7xnxqUbabf5Gi6UCSbgRePja4oAL/KeldwIMlPRd4PVBpkOnBPdTG9nnAeU2Xo83SRgNnSToN2Kps6zsfOL3hMnWtnGv3h5J+I+nqsa3ibE4EFgGrbS+WtBNwVcV51GkB8FbbO9jeHngb1f4/Ph64EbgMeC2wEHhPhelD/fdQC0lndeyfNO7cD3pfonYa+qYJgLIW8zyK7l+LbP+w4SJ1TdKvgbdQtN2OjB23fVNjhWoZSZfa3n2qY23Wr/cw1lxT7l9ie6+Jzg27NE0AZeDtm+A7zi3lgoy1KXuSvAaYRcdnpqp+uJIeB3wKeJTtJ0raDTjI9gerSB+4WtL/ovhqD8V0mJV9a5B0DRP3aNipqjyo+R5qNFlNL7XA0tAGYkm3MvEHQRSd/f+ux0V6oC6UdDJwDvcdNVbliLrvAD+haLYZmeLaB+J04O3AaQC2V0j6ClBVIH4V8H7gm+XrnwBV9i2e07G/GXAoRVtuleq+h7o8pJwAfiOKNvQ9Kf6NCXhwoyVrkTRN9DlJF05wuNIRdZKW296jqvQmSH+x7X3GfY2tLE9JcygeMs7i3sqH6+z1IWmp7b0rTK/n91CF9Xw+72H7Wb0qS5sNc4140hqL7T/1qiwbokcf5PMkvcD2wprSXyvpsZTfUCQdAvyuwvS/TLFY5eXAaIXpAveb72Mjihpy1f+2ar2HuiTQdmdoa8Qd7Xri3iYKlT9dcfte5SQdZftLkiYcVGH7oxXmdSvFagp3AndTcfNN2QtjAfA0ignEr6GYEOa6itL/qe39qkhrPelfyL2foXUUc+/+u+3fVJhHrfdQp3LI9+NsX9pxbHtgZNzKzkNraGvEtncEKNcvOxLY0faJ5QfkMVXmVQ4L3pmi/XAs/4s2MNmx/s4Tjdev9K+r7S3LbxD3uYcNNe6PyELgQooa5e3Ai4Gq/pi8txx9eAH1zL52Hvf+UafcP1DSWD5V3Efd91CndcA5knazfXt57AzgXUACMUMciDucSvFV79kU/WVvpXggsk8ViZej0Y6lmH9gOfAU4Bdlfg+Y7dPK3Z2AY23/uczvYcBHNiTt8dZzDz9nw+dSGPsj8niK3/d3KILZy4CLNzDtTq8EdqEYvj72td4UDzirsDf3Lf8LKcpfZV/ruu+hNrbvlvQtigFBny0rO1vb7tdh29WzPdQbxVBUgGUdxy6tMP3LKGqRy8vXuwDnVJj+sm6OtfweLqJYwWHs9ZbARRWmf2XNn6Fay9+Le6h7Kz8zF5X77wHe1HSZ2rRlZB3cLWkG9z4o2ppqH4bc4XLeB0mb2v41RQ2wKhuVtWDKPB5O9d906r6HRwF3dby+qzxWlZ9Lml1heuPVXX6o/x5qVX5mVPYZP5x7+0MHaZoA+DjwLeCRkj4EHEK1w1PXSNqKYnWIH0oaW822Kh8BfiHpG+XrQ4EPVZg+1H8PXwAuLr++AvwTxYoOVXkKsLx8QHsn9z5srKrrV93lh/rv4X4kPdr27ytM8jMUbcOXedy0mMNuaHtNdJK0C0V7p4ALXNP6ZipWrn0o8H3bd011/TTSnc29bc4/sr2qqrQnyKuue9gLeHr58iJPMJn4BqS9w0THXVGvjDKP2spfpl/7PUyQ53dt/2OF6T2Eolvii22fX1W6gyCBOCKiYWkjjohoWALxOJLmJ/2k39b0e5FHv6ffjxKI76/uD0nST/ptz6Pf0+87CcQREQ0b6Id1M2fO9KxZs6b1nhtvvJGtt966q2uXLl36AEoVEdO01nZ3/yjXY968eV67dm1X1y5dunSR7Xkbkt90DXQ/4lmzZrFkSX2jKMfmEoiIWm1wF721a9d2HQskzdzQ/KZroANxRMSYNn/7TyCOiIFnYGS0vdM4JxBHxBAwbvESeQnEETH4DKPtjcMJxBExHNJGHBHRIAOjCcQREc1KjXgaJF0LzLHdXe/riIgp2E6viYiIprW5RtzoXBOSNpf0XUmXSrpc0mHlqTdKukTSZeWk7WPXninpYknLJB3cYNEjos+4y/+a0PSkP/OA39re3fYTge+Xx9fa3gv4FHBceezdFKtPzAWeBZwsafPxCUqaL2mJpCU33nhjD24hItqueFjX3daEpgPxZcBzJZ0k6em2bymPjy0RvhSYVe4/Dzhe0nLgxxSrCm8/PkHbC2zPsT2n28l7ImLwTWPF6Z5rtI3Y9m/Ktb5eAHxQ0gXlqTvLnyPcW0ZRrHV1ZY+LGRH9ruUP65puI/574K+2vwScDOw1yeWLKNqOVb53zx4UMSIGgEmNeDJPomjrHQXuBl4HnL2eaz8A/AewQtJGwDXAgT0pZUT0vQzoWA/biyhqup1mdZxfAuxf7v8NeG2vyhYRg6XN3dearhFHRPRAZl+LiGiUM/taRETzRlvcayKBOCIGXmZfi4hogTysi4hokp0acVOWLl3a10ve9+IveD//fiKmIzXiiIgGGRhJII6IaFZqxBERDUsgjohokPOwLiKieakRR0Q0LIE4IqJBRa+JDHGOiGhUmyf9aXrNuq5I+nnTZYiIPtbl6hzDukJHV2w/rekyRET/Glsqqa36pUZ8W/nzMZIukrRc0uWSnt502SKiP4yWXdim2prQFzXiDi8FFtn+kKQZwEOaLlBE9Ic214j7LRAvBs6UtAnwbdvLx18gaT4wv+cli4jWss1IiyeG74umiTG2LwKeAdwAfE7Syye4ZoHtObbn9LyAEdFa7vK/JvRVjVjSDsAa26dL2hTYC/hCw8WKiD6Q7mvV2R+4VNIy4DDglGaLExH9YKzXRBXd1yTNk3SlpNWSjp/g/PaSLpS0TNIKSS+YKs2+qBHb3qL8+Xng8w0XJyL6UBUP68pOAqcCzwXWAIslnWt7Vcdl7wHOsv0pSbOBhcCsydLti0AcEbFBqntYNxdYbftqAElfAw4GOgOxgb8r9x8K/HaqRBOII2LgVTigYxvg+o7Xa4Anj7vmfcAPJL0R2Bw4YKpE+62NOCLiAZnGgI6ZkpZ0bNPtDnsE8Dnb2wIvAL4oadJYmxpxRAyFaXRNWztJ99cbgO06Xm9bHut0NDAPwPYvJG0GzAT+uL4MUyOOiKFgd7dNYTGws6QdJT0IOBw4d9w1/wU8B0DSE4DNgBsnSzQ14ogYeIZK5pGwvU7SMcAiYAZwpu2Vkk4Eltg+F3gbcLqkt5RZv8JTNFAnELeYpNrzqHv8fS/uIWJKFQ5xtr2Qokta57ETOvZXAftOJ80E4ogYeG2fBjOBOCKGQgJxRETDmppruBsJxBExBJqbWa0bCcQRMfC67JrWmATiiBgKbZ4YPoE4IgZeVf2I65JAHBFDoc29Jhod4izpTZKukHTzRBMsR0RUostJ4ZsK1k3XiF8PHGB7TcPliIhBlxrx/Un6NLAT8D1Jb5H0CUkPlXTd2JRxkjaXdL2kTSQ9VtL3JS2V9BNJuzRV9ojoP6Mj7mprQmOB2Pa/Usxc/yzg5vLYLcBy4JnlZQcCi2zfDSwA3mh7b+A44JM9L3RE9KWi+1qaJqbj6xQLg15IMcXcJyVtATwN+EbHJDKbTvTmchLn6U7kHBEDrs0P69oYiM8F/rekhwN7Az+iWG7kz7b3mOrNthdQ1J6R1N7ffET0UHO13W60bmJ427dRTL58CnCe7RHbfwGukXQogAq7N1nOiOgvHnVXWxNaF4hLXweOKn+OORI4WtKlwEqKlVMjIqaUNuJJ2J5V7n6u3MaOnw1o3LXXUK4DFRExXc4Q54iIZrW4iTiBOCKGgJtr/+1GAnFEDIU295pIII6IgZc16yIiWiCBOCKiSTYeSa+JiIhGpUYcrdUxd0ct6v7w113+GBwtjsMJxBEx+PKwLiKiaU4gjohomBnNw7qIiGalRhwR0SCnaSIiogUSiCMimuX2NhEnEEfEcEjTREREk2xGMzH8hpM0w/ZI0+WIiP7T9gEdtaxZJ+lESW/ueP0hScdKerukxZJWSHp/x/lvS1oqaaWk+R3Hb5P0kXKduqdK+jdJq8r3/3sdZY+IAeTqFg+VNE/SlZJWSzp+Pde8pIxVKyV9Zao061o89Ezg5WWBNgIOB34P7AzMBfYA9pb0jPL6V9neG5gDvEnSI8rjmwO/sr07cAXwz8CutncDPjhRxpLmS1oiaUk9txYRfanowzb1NglJM4BTgecDs4EjJM0ed83OwDuBfW3vCrz5fgmNU0sgtn0tcJOkPYHnAcuAfTr2LwF2oQjMUATfS4FfAtt1HB8Bvlnu3wLcAXxG0ouAv64n7wW259ieU/V9RUS/6m4F5y6aL+YCq21fbfsu4Gvcf0X51wCn2r4ZwPYfp0q0zjbiM4BXAI+mqCE/B/g/tk/rvEjS/sABwFNt/1XSj4HNytN3jLUL214naW6ZziHAMcCzayx/RAyQ0e7XrJs57hv1AtsLyv1tgOs7zq0Bnjzu/Y8DkPQzYAbwPtvfnyzDOgPxt4ATgU2AlwLrgA9I+rLt2yRtA9wNPBS4uQzCuwBPmSgxSVsAD7G9sLzBq2sse0QMEJdtxF1au4HfqDem+Fa/P7AtcJGkJ9n+82RvqIXtuyRdCPy5rNX+QNITgF+Uc8jeBhwFfB/4V0lXAFdSNE9MZEvgO5I2AwS8ta6yR8TgqajXxA0Uzadjti2PdVpD8WzrbuAaSb+hCMyL15dobYG4fEj3FODQsWO2TwFOmeDy50+Uhu0tOvZ/R9E+ExExbRUF4sXAzpJ2pAjAh1N84+/0beAI4LOSZlI0VUz6Db6u7muzgdXABbavqiOPiIjuVfOwzvY6iudTiyh6cp1le2XZZfeg8rJFFJ0VVgEXAm+3fdNk6dZSI7a9CtipjrQjIqatwtnXbC8EFo47dkLHvimaTrtuPu2bkXUREQ+UAY+0d2RdAnFEDIU2D3FOII6IwdfdYI3GJBBHxFCYRj/inksgjlqVfcZrU3ctp+7yR++kRhwR0aC2T4OZQBwRg8/GmRg+IqJZWbMuIqJhaZqIiGhShSPr6pBAHBEDLw/rIiIaZ0ZH2ttInEAcEYMvTRMRES2QQFwPSTPG1rSLiJhMi+Pw5BPDl5Mdv7nj9YckHSvpZEmXS7pM0mHluf0lnddx7SckvaLcv1bS+yVdUr5nl/L41pJ+KGmlpDMkXVfOaI+koyRdLGm5pNPKZayRdJukj5SrPj+16l9IRAyesYd1FaziXIupVug4E3g53LP00eEU6zHtAexOsfryyZIe00Vea23vBXwKOK489l7gR7Z3Bc4Gti/zegJwGLCv7T2AEeDI8j2bU6wHtbvtn47PRNJ8SUvGrcIaEcOsXDy0m60JkzZN2L5W0k2S9gQeBSwD9gO+WjYJ/EHSfwL7AH+ZIq9zyp9LgReV+/sB/1zm9X1JN5fHnwPsDSwuJ115MPDH8twI8M1JyrwAWAAgqcVfRiKid8xonw9xPgN4BfBoihryc9dz3TruW8PebNz5O8ufI13kK+Dztt85wbk70i4cEdPV5l4T3Swe+i1gHkWtdxHwE+AwSTMkbQ08A7gYuA6YLWlTSVtR1Gqn8jPgJQCSngc8rDx+AXCIpEeW5x4uaYfubysiYhy7u60BU9aIbd8l6ULgz7ZHJH2L4iHZpRRt4O+w/XsASWcBlwPXUDRjTOX9wFclvQz4BfB74FbbayW9B/hB2TZ9N/AGimAfETEtdp9PDF8GwqcAh8I9K5S+vdzuw/Y7gHdMcHxWx/4SYP/y5S3A/7C9TtJTgX1s31le93Xg6xOktcVUZY6IGK/FLROTB2JJs4HzgG/ZvqqG/LcHziqD/V3Aa2rIIyKGXh+vWWd7FbBTXZmXwX3PutKPiADA9H2viYiIvmb6vI04ImIQ9G3TRETEYGiua1o3EogjYvBlGsyI+pRD4GtT9z/eussf9xodSSCOiGhMlkqKiGhamiYiIprWxwM6IiIGRQJxRETD2jygo5tpMCMi+trY7GtVrNAhaZ6kKyWtlnT8JNe9WJIlzZkqzQTiiBgKVaxZV66deSrwfGA2cEQ5Odr467YEjgV+1U3ZEogjYgh0F4S7aEeeC6y2fbXtu4CvAQdPcN0HgJOAO7opXc8CsaStJL2+3L/Pis8REbWqrmliG+D6jtdrymP3kLQXsJ3t73ZbvF7WiLcCXt/D/CIi7jGNGvHMsZXgy21+t3mUc6t/FHjbdMrWy14T/wY8VtJyiqWPbpd0NvBEipWdj7JtSScAL6RYufnnwGvL4z+maG95FkVQP9r2T3pY/ojoU9McWbfW9voesN0AbNfxetvy2JgtKWLaj8vh648GzpV0ULk60YR6WSM+Hvj/tvegWGZpT+DNFA3eOwH7ltd9wvY+tp9IEYwP7EhjY9tzy/e9d6JMJM0f+0tW031ERN8xHh3tapvCYmBnSTtKehBwOHDuPbnYt9ieaXtWuUTcL4FJgzA0+7DuYttrbI8Cy4FZ5fFnSfqVpMuAZwO7drznnPLn0o7r78P2AttzJvmLFhHDxuDR7rZJk7HXAcdQrGh/BXCW7ZWSTpR00AMtXpMDOu7s2B8BNpa0GfBJYI7t6yW9D9hsgveMkMEoETENVY2ss70QWDju2AnruXb/btLsZY34Vor2k8mMBd21krYADqm3SBExLCrqvlaLntUqbd8k6WeSLgf+Bvxhgmv+LOl04HLg9xTtMRERGyTTYHaw/dL1HD+mY/89wHsmuGb/jv21rKeNOCLifmxGR7KKc0REs1IjjoholkkgjohojLNCR0RE04yn6iTcoATiiBgKqRFHRDRsdOrhy41JII6YRDlxS216UUur+x76QTFYI4E4IqJZaZqIiGhWuq9FRDQsD+siIhplRkdHmi7EeiUQR8TAy4COiIgWSCCOiGhYAnFERKOc7msREU0zGdAREdEYu91DnJtcxfk+JM2S9GtJn5P0G0lflnRAubzSVZLmStpc0pmSLpa0TNLBTZc7IvpBd+vVDfyadV36B+BQ4FUU69W9FNgPOAh4F7AK+JHtV0naCrhY0vm2bx9LQNJ8YH7PSx4RrZa5Jrp3je3LACStBC6wbUmXUaxRty1wkKTjyus3A7YHrhhLwPYCYEGZRntb5yOip9Jront3duyPdrwepSjrCPBi21f2umAR0d/aHIhb00bcpUXAG1XO6ydpz4bLExH9wO5+a0DbasRT+QDwH8AKSRsB1wAHNlukiGg7A6POXBNTsn0t8MSO169Yz7nX9rJcETEImusR0Y3WBOKIiDolEEdENCyBOCKiQcVzuPQjjohokHGLhzgnEEfEUMiadRERDUsbcUREo5w24oh+VYwbqs+dd99da/pQ/z20OcCNafuadf02xDki4gGpahpMSfMkXSlptaTjJzj/VkmrJK2QdIGkHaZKM4E4IobC6OhoV9tkJM0ATgWeD8wGjpA0e9xly4A5tncDzgY+PFXZEogjYggYPNrdNrm5wGrbV9u+C/gacJ8FKmxfaPuv5ctfUkzfO6kE4ogYCu7yP2CmpCUdW+dCE9sA13e8XlMeW5+jge9NVbY8rIuIgTfNh3Vrbc/Z0DwlHQXMAZ451bUJxBExFCrqNXEDsF3H623LY/ch6QDg3cAzbd85/vx4CcQRMQQq60e8GNhZ0o4UAfhwirU171EuWHEaMM/2H7tJNIE4IobCVD0iumF7naRjKFYLmgGcaXulpBOBJbbPBU4GtgC+US4m9F+2D5os3QTiiBh4VQ7osL0QWDju2Akd+wdMN80E4ogYAs2tR9eNBOKIGAqmvUOxBy4Ql33+5k95YUQMlTbPNTFwgdj2AmABgKT2/uYjoodcycO6ugxcII6IGK/tSyX17RBnSQsl/X3T5YiI/lDV7Gt16Nsase0XNF2GiOgfaSOOiGhUuq9FRDQui4dGRDTIhtHRkaaLsV4JxBExBJp7ENeNBOKIGAoJxBERDUsgjohoWJsHdCQQR0yi7n+8m26ySa3pQ7sDUM843dciIhplYLTFf5ASiCNiKLT5m0ECcUQMgXRfi4hoXAJxRESDqlyzrg4JxBExBIwzxDkiolltnvSnlonhJf1Y0pWSlpfb2R3n5kv6dbldLGm/jnMHSlom6VJJqyS9to7yRcTwGYqJ4SU9CNjE9u3loSNtLxl3zYHAa4H9bK+VtBfwbUlzgZso1pqba3uNpE2BWeX7Hmb75qrKGhHDp81txBtcI5b0BEkfAa4EHjfF5f8TeLvttQC2LwE+D7wB2JLiD8NN5bk7bV9Zvu8wSZdLepukrTe0zBExXIra7mhXWxMeUCCWtLmkV0r6KXA6sArYzfayjsu+3NE0cXJ5bFdg6bjklgC72v4TcC5wnaSvSjpS0kYAtj8NPB94CHCRpLMlzRs7HxExlUFsmvgdsAJ4te1fr+ea+zVNTMX2qyU9CTgAOA54LvCK8tz1wAckfZAiKJ9JEcQP6kxD0nxg/nTyjYjBNzra3pF1D7RGeQhwA3COpBMk7dDl+1YBe487tjewcuyF7ctsf4wiCL+488KyLfmTwMeBs4B3js/A9gLbc2zP6fZmImIIjE38M9XWgAcUiG3/wPZhwNOBW4DvSDpf0qwp3vph4CRJjwCQtAdFjfeTkraQtH/HtXsA15XXPU/SCuCDwIXAbNtvtr2SiIgpGTPa1daEDeo1Yfsm4BTglLK22tlj+suS/lbur7V9gO1zJW0D/FySgVuBo2z/TtKWwDsknQb8DbidslmC4gHeC21ftyHljYjh1PaRdWpz4TZUGewjWqsX//4k1Z5HzZZuaFPjRhvN8KabPrira++44/YNzm+6MrIuIoZCmyudCcQRMQTMaOaaiIhoTtvbiDMgIiKGQ0Xd18rBZFdKWi3p+AnObyrp6+X5X3XRmyyBOCKGgbv+bzKSZgCnUgwqmw0cIWn2uMuOBm62/Q/Ax4CTpipdAnFEDIWK5pqYC6y2fbXtu4CvAQePu+Zgijl0AM4GnqMpuq6kjTgihkJFQ5y3Aa7veL0GePL6rrG9TtItwCOAtetLdNAD8VrK0XnTMJNJfmEVSPpJ/x4PsI9vq+6hB+l3O4XCZBaV+XZjM0md8+QssL2ggjKs10AHYtvTnjJT0pI6O3Mn/aTf9jz6Pf2J2J5XUVI3ANt1vN62PDbRNWskbQw8lHJ63/VJG3FERPcWAztL2rFcDONwiul7O50L/Eu5fwjwI0/Rd26ga8QREVUq23yPoWjqmAGcaXulpBOBJbbPBT4DfFHSauBPFMF6UgnE91drW1DST/p9kEe/p18r2wuBheOOndCxfwdw6HTSHOhJfyIi+kHaiCMiGpZAHBHRsATiiIiGJRBHRDQsgTgiomEJxBERDUsgjoho2H8DHWJDPH6DZ+YAAAAASUVORK5CYII=\n",
"text/plain": [
"<Figure size 432x288 with 2 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"input = elle est trop petit .\n",
"output = she is too drunk . <EOS>\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAXgAAAD9CAYAAAC2l2x5AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAYfklEQVR4nO3de7hdVX3u8e9LuAp4KZsehUCDp0GNilwiaIVKK9BgFeiDRy7V1haJpy19bClUOPpwfMDWolWLLXiM3PGClArkKDRUK6VilexwTwSfPCgSwGqQAxEskL3e88ecm6xs9t5r73Wbc8+8nzzz2WvOOdYYYxHyW2OPMeYYsk1ERDTPVlVXICIiBiMBPiKioRLgIyIaKgE+IqKhEuAjIhoqAT4ioqES4CMiGioBPiKioRLgIyIaKgE+oo0K10p6VdV1iehVAnzE5o4AXg+8t+qKRPQqAT5icydRBPe3S9q66spE9CIBPqIkaQR4te0bgK8Dx1RcpYieJMBHbPJu4Evl60tIN03McQnwEZv8IUVgx/ZK4GWS9qi2ShHdS4CPACS9GPgH2w+1XT4NGKmoShE9Uzb8iIhoprTgY4sn6WRJC8vXknSJpCck3SVpv6rrF9GtBPgIeD/ww/L1CcA+wF7AqcCnK6pTRM8S4CNgo+1ny9dvAy63/ajtrwM7VliviJ4kwEdAS9LLJG0PvIViDvy4HSqqU0TP8qReBJwFjALzgOW2VwNIejNwf5UVi3pbsmSJ169f3zHdqlWrVtheMoQqbSazaKIrkq6w/e5O1+aKclmCnW0/1nZtR4p/Iz+vrmZRZ4sXL/bo6GjHdJJW2V48hCptJi346Nar208kzQMOqKgu/fBLwJ9IGv9cq4ELbP9nhXWKOaDOjeT0wcesSDpT0gZgn3Iq4RPl+U+A6yquXlckvQlYWZ5eXh4A3y3vRUzKwFir1fGoSlrwDVEOEP4xcDDF/3ffAj5j+7/6WY7tjwIflfRR22f2M+8KfQI4xvbtbdeWS7oG+CxwUDXVivozpr4t+AT45rgc2AD8fXl+InAF8D/6WYikV9q+F/hHSftPvG/7tn6WNyQvnBDcAbB9h6Sdq6hQzBGGVn3j+5Yb4CUdDCy0fYmkXYGdbP+g6nr14DW2F7Wdf1PSmgGUcyqwlKLVO5GB3xxAmYMmSS9pH2AtL/4S6caMDurcB79FBnhJ/xtYDLyCYvXAbYDPA3O5v/U2SW+w/R0ASQdRTP3rK9tLy5dHTuz+KbuJ5qJPATdKOg0Y/w3kAODc8l7EpAy0EuBr53eA/Sj/Mdt+eFC/ikvazvbTna71wQHAtyX9qDzfE7hP0t2Abe/T5/K+DUzsopnsWu3ZXibpYeAcitlBBtYAH7H9fyutXNReWvD184xtSzI8N995UP6D5we9ya71aigPUUh6KbA7sEO5EJfKWy8EXjCMOgyC7a8CX626HjG32K50lkwnW2qAv0rSZ4EXSzqZYqOHz/WzgGEHQtsPSHodcEh56d9t39nvcoDfAt4DzAc+2Xb9CeB/DaC8gZN0le13lq/Ptf2Btns32j6iutpF3aUFXzO2/1bS4RRB6RXAWbb/pc/FtAfCT7ApwG9gAIFQ0vuBk4GvlJc+L2mZ7b+f5m2zZvsy4DJJx9r+p37mXaGFba8PBz7Qdr7rkOsSc0ymSdZQGdD7HdTb8x92IDwJOMj2k1C0RCm6gvoa4NvcIukiYDfbR0paBLzR9kUDKm+QpvsXWt9/vVG5YpC16lpMbYuaAiZpQ9vTl+3HBklPDKjY+ZJeWG4kcaGk2yQN4ld+AWNt52Ns+q1hEC4BVgC7leffB/5sgOUN0gsk7SfpAMouNUn7j59XXbmoN9sdj6psUS1421U8tPKHts+T9FvALsC7KR5AurHP5VxC8Wj9NeX5McAgW9Mjtq+SdCaA7Y2Sxjq9qaYeYdN4wo/ZfGzhx8OvTswZGWStj/LBlSnZ/tkgii1//jbFRhKrJfW9ZW37k5JuoliqAOAPJns6s4+elLQLZReGpDcAjw+wvIGx/RtV1yHmJpNB1jpZRfF3Ijb1rY4HWwMvH0SZklaUeZ9Rzrfv61d+uZLjatuvZNODOoN2KrAceLmkWygGI98xpLL7TtIOwN7tM48k7QmM2X6ouppF3eVBp5qwvReApK2A3wX2sn12+Q/5ZQMq9iTgQ8Aa20+VZfW1r9r2mKT7JO1p+0ed39EXa4BrgKcoZgZdS9EPP1dtBL4iaZ/xgWrgQooZTwnwMaU6t+C3qEHWNucDb6DYYBmKAPUPAyzrv7HpQaQNbN7H2y8vAVZL+oak5ePHAMoZdznwSuCvKWbq7E0xtjAnlXuyXgOMz4ffE9jVdt+Xe4gm8Yz+VGWLasG3Ocj2/pJuB7D9mKRt53hZ21NsGD1OFGupDMqwFjcbpguBZRQD1r9X/oyYkrOaZC09W/Zbjw8Q7kqf+8UrKGtr2//WfqHsVx6UoSxuNky27y2ns+4NHM+mp4IjptTKLJra+TTFr+O/LOmvKAYHPzQXy5L0RxQbfbxc0l1tt3YGbulXOZMY9uJmm5H0UtuDmMJ4EUVL/u6JywdHTJTVJGvI9hckrQLeQtGVcYzt783Rsr4I3AB8FDij7fqGAU37HDf0HeInuIhi6mm/XQWcB5w9gLyjgeo8yLpFBngofh0H7p3rZdl+nGL++Qmd0va53AeGWd4k5Q8iuGP7KeBFg8g7GshOCz4ioqnSgo+IaCADYzUO8FvqPPjnSFraOVXKqkNZTfxMKWvulDOVOi82tsUHeIoNpFPW3CiriZ8pZc2dciZV5wCfLpqIiC45g6zDMzIy4gULFszqPXvuuSeLFy+e9d/QqlWrZvsWAMb3gR2GJpbVxM+UsiorZ73tnnfsyiDrkCxYsIDR0eE8TDmAFX8jYrj6MtU3AT4iooGKWTRZqiAiopGy2FhERBNVPEumkwT4iIguZcu+iIgGyzTJiIiGSgs+IqKBbDNW4w0/Kl+qQNIPJY1UXY+IiG5kT9aIiIaq8zTJobbgJe0o6WuS7pR0j6Tjylt/Kuk2SXdLemVb2osl3SrpdklHD7OuERGdjM+i6cdiY5KWSLpP0lpJZ0xyf09J3yzj4V2S3topz2F30SwBHrb9OtuvAf65vL7e9v7AZ4DTymsfBP7V9oHAbwAfl7TjkOsbETGtfgR4SfOA84EjgUXACZIWTUj2IeAq2/tRbAp/Qad8hx3g7wYOl3SupEPK7eYAvlL+XAUsKF8fAZwh6Q7gJmB7io2dNyNpqaRRSaM//elPB1r5iIjNlIOsnY4ZOBBYa/t+288AVwITey0MvLB8/SLg4U6ZDrUP3vb3Je0PvBX4iKRvlLeeLn+OtdVJwLG27+uQ5zJgGdDVqpAREd3q44NOuwMPtp2vAw6akObDwI2S/hTYETisU6bD7oPfDXjK9ueBjwP7T5N8BUXfvMr37jeEKkZEzEqrXBN+ugMYGe9pKI9uNik5AbjU9nyKRvIVkqaN4cOeRfNair70FvAs8EfA1VOkPQf4O+Cu8kP8AHjbUGoZETFDM5wGud724mnuPwTs0XY+v7zW7iSKcUxs/4ek7YER4CdTZTrsLpoVFC3zdgva7o8Ch5avfwG8b1h1i4joRp8eZF0JLJS0F0VgPx44cUKaHwFvAS6V9CqKcclpBx4zDz4iokumP2vR2N4o6RSKBvA84GLbqyWdDYzaXg78BfA5SX9eFv0edxgASICPiOhWH5cqsH09cP2Ea2e1vV4DvGk2eSbAR0R0KcsFR0Q0WAJ8RERDZT34iIhGqna1yE4S4CMiumT3bZrkQCTAR0T0oM4bfjQqwK9atYpyZYNGGdYgThP/20UMUr/mwQ9KowJ8RMSwZRZNREQTzWJDjyokwEdE9CIBPiKimVpjCfAREY1TTJNMgI+IaKQE+IiIRsoga0REY7mVAB8R0Th174Mf6qbbMyXp21XXISJiJtxqdTyqUssWvO1fq7oOEREzUeMGfG1b8D8vf75M0s2S7pB0j6RDqq5bRMRzbNzqfFSlli34NicCK2z/laR5wAuqrlBERLs698HXPcCvBC6WtA1wre07JiaQtBRYOvSaRcQWr+57stayi2ac7ZuBXwceAi6V9HuTpFlme7HtxUOvYERs8VwuODbdUZVat+Al/QqwzvbnJG0H7A9cXnG1IiIKNh7Lhh/dOhQ4XdKzwM+B57XgIyKqVOcumloGeNs7lT8vAy6ruDoREVOqcXyvZ4CPiJgL6j7ImgAfEdGtmi9VkAAfEdE108oga0REM6UFHxHRQHVfTTIBPiKiFwnwERHN5Pp2wSfAR0T0Il000RNJQylnmP+jDuszRQyUTavCDT06SYCPiOhS3R90qvVqkhERtWb6tuGHpCWS7pO0VtIZU6R5p6Q1klZL+mKnPNOCj4joRR9a8OWGRucDhwPrgJWSltte05ZmIXAm8Cbbj0n65U75pgUfEdG1zmvBz7AL50Bgre37bT8DXAkcPSHNycD5th8DsP2TTpkmwEdE9KDVcscDGJE02nZM3IVud+DBtvN15bV2ewN7S7pF0nckLelUt3TRRER0yWUf/Ays78Ouc1sDCyn2yZgP3Czptbb/31RvSAs+IqIHfeqieQjYo+18fnmt3Tpgue1nbf8A+D5FwJ9SAnxERA/6FOBXAgsl7SVpW+B4YPmENNdStN6RNELRZXP/dJmmiyYiomv92VTb9kZJpwArgHnAxbZXSzobGLW9vLx3hKQ1wBhwuu1Hp8t36AFe0ouBE21fMOyyIyL6qo+rSdq+Hrh+wrWz2l4bOLU8ZqSKLpoXA39cQbkREX1lwGPueFSligD/N8B/l3SHpI+Xxz2S7pZ0HIAKz7seEVE3feqDH4gq+uDPAF5je19JxwL/E3gdMELx9NbNwK8B+068bvuRCuobETG5igN4J1XPojkY+JLtMdv/Cfwb8Ppprj+PpKXjDw8MrdYREaV+rUUzCHN+Fo3tZcAyAEn1/SqNiEZKC35zG4Cdy9f/DhwnaZ6kXYFfB26d5npERG2MLxecPviS7UfLtRTuAW4A7gLupPhv9Ze2fyzpGuCNE68Pu64REdOycTb82JztEydcOn3CfZfXTiciosayJ2tEREPVuQ8+AT4iolt9fJJ1EBLgIyK6VPc9WRPgIyK6Zlpj9e2ET4CPiOhWumgiIhosAT4ioplqHN8T4GMTSUMra5i/1g7zc8WWJYOsERFNNfNNtyuRAB8R0TXTylIFERHNlC6aiIimSoCPiGgepw8+IqK5atyAT4CPiOhevfdkTYCPiOiWySyaiIgmMumDj4horDp30fR1021JH5Z0Wp/y+qGkkX7kFRExGC6n0nQ4KjLwFrykrW1vHHQ5ERFDV/PlgntuwUv6oKTvS/oW8Iry2k2S/k7SKPB+SZdKekfbe35e/jy0THu1pHslfUETVoaStIOkGySd3GtdIyL6rTXmjkdVemrBSzoAOB7Yt8zrNmBVeXtb24vLdJdOk81+wKuBh4FbgDcB3yrv7QRcCVxu+/Ip6rAUWNrL54iI6EbdV5PstQV/CHCN7adsPwEsb7v35RnmcavtdbZbwB3AgrZ71wGXTBXcAWwvs714/MskImJoyi6aTkdV+jrIOsGTba83jpclaStg27Z7T7e9HmPz3ypuAZZM7LaJiKiHzsF9Lgf4m4Fjyn7ynYG3T5Huh8AB5eujgG1mmP9ZwGPA+b1UMiJiUBob4G3fRtEVcydwA7ByiqSfA94s6U7gjWzeuu/k/cAOkj7WS10jIgbBLXc8qqI6DxDMlqTmfJiGy5Z9UQOreh2722VkN//2Ue/tmO6KS87pWJakJcB5wDzgQtt/M0W6Y4GrgdfbHp0uz0H2wUdENF4/umgkzaPoij4SWAScIGnRJOl2pujV+O5M6pYAHxHRtb4Nsh4IrLV9v+1nKKaHHz1JunOAc4H/mkmmCfAREd1y3/rgdwcebDtfV157jqT9gT1sf22m1ctiYxERPZhhC32kfLJ/3DLby2ZaRjm9/JPAe2ZTtwT4iIguzeJJ1vUdBlkfAvZoO59fXhu3M/Aa4KZy0sBLgeWSjppuoDUBPiKia8b92fBjJbBQ0l4Ugf144MTnSrEfB55bXVfSTcBpmUUTETEoBrc6Hx2zKVbcPQVYAXwPuMr2aklnSzqq2+qlBR+VGObc9LEhbqk2b6t5Qyur6CCIqvXrmQ7b1wPXT7h21hRpD51JngnwERE9qPPDognwERFdqvtywQnwERHdsmmNDa8LcLYS4CMiepEWfEREM7nGg90J8BERXXLNN91OgI+I6JrxTCa6VyQBPiKiB2nBR0Q0VGuID9LNVgJ8RESXivXeE+AjIpopXTQREc2UaZIREQ2VQdYBkrQUWFp1PSJiS2RarbGqKzGlOR/gy22vlgFIqu9XaUQ0Th50iohosDoH+Dmzo5Ok6yXtVnU9IiLaFVMlpz+qMmda8LbfWnUdIiI250yTjIhoKpMHnSIiGsfOUgUREQ1VbR97JwnwERE9yFo0ERENlRZ8RERDJcBHRDSRM00yIqKRDLSctWgiKjNvq+E9sP2pL14ztLJu+vI3h1LOddd9eijlzE2ZRRMR0VgJ8BERDZUAHxHRQMUYa+bBR0Q0kHGWKoiIaKbsyRoR0VDpg4+IaCSnDz4ioonqvifrnNmyLyKijvq1ZZ+kJZLuk7RW0hmT3D9V0hpJd0n6hqRf6ZRnzwFe0k1lpe4oj6vb7i2VdG953Crp4LZ7b5N0u6Q7y0q/r9e6REQMW6vV6nh0ImkecD5wJLAIOEHSognJbgcW294HuBr4WKd8u+qikbQtsI3tJ8tLv2t7dEKatwHvAw62vV7S/sC1kg4EHgWWAQfaXidpO2BB+b6X2H6sm3pFRAyXoT998AcCa23fDyDpSuBoYM1zJdnta1N8B3hXp0xn1YKX9CpJnwDuA/bukPwDwOm215eVuw24DPgTYGeKL5dHy3tP276vfN9xku6R9BeSdp1N/SIihs0z+AOMSBptO5ZOyGZ34MG283XltamcBNzQqW4dW/CSdgTeWWYIcAnwYdsb2pJ9QdIvytf/Yvt04NXAqgnZjQK/b/tnkpYDD0j6BvBV4Eu2W7b/j6SvAe8Bbpa0GrgQuNF1Hq6OiC3OLAZZ19te3I8yJb0LWAy8uVPamXTRPALcBbzX9r1TpHleF00ntt8r6bXAYcBpwOEUQR3bDwLnSPoIRZ/UxRRfDkdNzKf8Jpz4bRgRMRR9mkXzELBH2/n88tpmJB0GfBB4s+2nO2U6ky6ad5QFfUXSWTMZuS2tAQ6YcO0AYPX4ie27bX+KIrgf256w7Ku/APg0cBVw5mSF2F5me3G/vh0jImaumAff6ZiBlcBCSXuVY5zHA8vbE0jaD/gscJTtn8wk044B3vaNto8DDgEeB66T9HVJCzq89WPAuZJ2KSu3L0UL/QJJO0k6tC3tvsADZbojJN0FfAT4JrDI9p/ZXk1ERM30YxaN7Y3AKcAK4HvAVbZXSzpb0njPxceBnYB/LGcsLp8iu+fMeBaN7UeB84DzytZ1+zYm7X3w620fZnu5pN2Bb0sysAF4l+1HJO0M/KWkzwK/AJ6k7J6hGHh9u+0HZlq3iIgq9PNBJ9vXA9dPuHZW2+vDZptnV9Mkbd/a9vrQadJ9BvjMJNc3AG+d4j0TB2YjImoqe7JGRDSWqe/kvgT4iIge1HktmgT4iIiueUaDqFVJgI+I6FK27IuIaLB00URENFQCfEREI2WaZEREY2XT7eFZT7nkwSyMlO8bhpQ1N8rpuqw/P/F3hlZWl5pYVrflzHRdrSnZ0GqNdU5YkUYFeNuzXj9e0uiwFipLWXOjnJQ1t8oa5md6vplvyVeFRgX4iIhhS4CPiGioBPh6W5ay5kxZTfxMKWvulDOpOj/opDp/+0RE1Nm222znkZH5HdM98uP7V1UxTpAWfERElwy0atyCT4CPiOhBnbtoEuAjIrqWaZIREY2VAB8R0UD93JN1EBLgIyK6ZpylCiIimimLjUVENFS6aCIiGioBPiKigWxnHnxERFOlBR8R0VCtVlrwERHNlBZ8REQTGZMWfERE4+RJ1oiIBkuAj4hoqAT4iIhGMq2sRRMR0Tx174PfquoKRETMaUWUn/6YAUlLJN0naa2kMya5v52kL5f3vytpQac8E+AjIrrmGf3pRNI84HzgSGARcIKkRROSnQQ8ZvtXgU8B53bKNwE+IqIHdqvjMQMHAmtt32/7GeBK4OgJaY4GLitfXw28RZKmyzQBPiKiB61Wq+MxA7sDD7adryuvTZrG9kbgcWCX6TLNIGtERPdWACMzSLe9pNG282W2lw2oTs9JgI+I6JLtJX3K6iFgj7bz+eW1ydKsk7Q18CLg0ekyTRdNRET1VgILJe0laVvgeGD5hDTLgd8vX78D+Fd3mKOZFnxERMVsb5R0CkWXzzzgYturJZ0NjNpeDlwEXCFpLfAzii+BaanOk/QjIqJ76aKJiGioBPiIiIZKgI+IaKgE+IiIhkqAj4hoqAT4iIiGSoCPiGioBPiIiIb6/xKmONx4YNhMAAAAAElFTkSuQmCC\n",
"text/plain": [
"<Figure size 432x288 with 2 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"input = je ne crains pas de mourir .\n",
"output = i m not scared to die . <EOS>\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAXgAAAEYCAYAAABWae38AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAb50lEQVR4nO3de5xdZX3v8c+XgBIkipgci9zCUUADIpAAXkBtBRsBwfMS5eoRq6YX9dgqKrYetBRfPUiplwKV8RxALQoIVVPBgjdERSUJ96REU1AuUiGIgIBcsr/nj7UGN+PM7Jk9e++19prvO6/1mr0u+3l+e5L85plnPet5ZJuIiGiejaoOICIi+iMJPiKioZLgIyIaKgk+IqKhkuAjIhoqCT4ioqGS4CMiGioJPiKioZLgIyIaKgk+YhZQ4SuSXlB1LDE4SfARs8Orgb2At1UdSAxOEnwET7Rwt606jj56K0Vyf62kjasOJgYjCT4CcDHr3iVVx9EPkuYDu9j+OvBN4HUVhxQDkgQf8TtXS9qr6iD64E3AF8vXZ5NumllDmS44oiDpJuB5wM+BBwFRNO53qzSwGZJ0A7DU9h3l/nXAwbZvqzay6Lf0xTWEpDcA/277AUkfAvYETrJ9dcWhDZM/rjqAXpO0BXDaaHIvHQfMB5LgGy4t+IaQdL3t3STtC5wEnAKcYHufikOrPUlPt32/pC3HO2/7V4OOKaIX0gffHBvKrwcBI7YvBp5SYTzD5Avl11XAyvLrqrb9oSTp7ZJ2LF9L0tmS7pd0vaQ9qo4v+i8t+IaQ9DXgDuAAiu6Zh4GrbL+o0sCGhCQB29q+tepYekXSjcAeth+TdBTwXorx8HsAH7a9X6UBRt+lBd8cbwQuBf7Y9q+BLYH3VRvS8CiHSV5cdRw99rjtx8rXBwOfs32P7W8CT6swrhiQJPiGsP0Q8FXgQUnbAZsAN1Ub1dBp2jDJlqStJG0KvIpiDPyouRXFFAOUUTQNIeldwIeBXwKt8rCBoR7iN2D7AEdLasowyRMo7iHMAZbbXg0g6RXAzVUGFoORPviGkLQO2Mf2PVXHMqwkbT/ecds/H3QsvVJOSzDP9r1tx55G8X//N9VFFoOQFnxz3AbcV3UQQ66JrZ0tgXdI2qXcXw2cYfuXFcYUA5IE3xw3A5dLuhh4ZPSg7X+sLqShczFFkhewKbADsBbYZbI31ZWkl1EMAT0H+Fx5eDHwY0lH2/5BVbHFYCTBN8et5fYUMv69K7Zf2L4vaU/gLyoKpxdOBV5n+5q2Y8slfRk4k+KeQzRY+uAjJiHphrGJf1hIWmN70XTPRXOkBT/kJH3C9l9K+jfG6UO2fcgAYtgI2Nz2/f2uq58kvadtdyOKB8Z+UVE4vSBJz2y/wVoe3JIMkZ4VkuCH3+fLr/8wyEolfQH4M4opElYAT5f0SdunDDKOHpvX9vpxij75iyqKpRc+Dlwm6ThgdNK5xcDJ5blouHTRRFckXWt7d0lHU7R0jwdWDfGY8SdI2hygCcMIJR0MvJ/iRrGBNcAptv+t0sBiINKCb4hyUqm/BxZRjAABwPZ/71OVm0jahGJ1oNPK+U6GurUgaVeK34i2LPfXA2+2fWOlgc2A7a8BX6s6jqhG+uGa42zgnym6Fv6QYljcv/SxvjOBn1HMaXJF+ZDQUPfBAyPAe2xvb3t7ism5RiqOqWuSLmh7ffKYc5cNPqIYtHTRNISkVbYXt4/6GD02wBg2tv34oOrrNUnXjZ19c7xjw0LSNbb3KF9fbXvP8c5Fc6WLps8k7UTRsn627V0l7QYcYvukHlf1SDma5aeS3kkxdfDmPa7jSSQdRNG3u2nb4RN7XMegvn8AN0v63/zuxvUxDPecLZO13tKymwXSRdN/nwE+CDwGYPt64Ig+1PNuYDPgf1GMlDgGeHMf6gFA0qeBw4F3UTz5+QZg3LlcZmhQ3z+APwEWUIycuYhiWbu39KmuQdhM0h6SFgNzy9d7ju5XHVz036xswZfL2u1o+2xJCyjGcN/Sp+o2s31VsZ7EE3rajSFpDnC47eOA3zCYpPTSconA623/raRTga/3oZ6+f//aPBfYlqLhszHFFLt/xPDOyHknMDpVxX+1vR7dj4abdQle0oeBJcDOFDcmN6G4GfmyPlW5XtJzKX8llnQYxX+8nrG9ofyhNUgPl18fkvQc4B5gqz7U0/fvX5tzKRakvpHfTbk8tGz/YdUxRLVmXYIH/gfFkmVXA9j+haR5k79lRt5BMRLj+ZLuAG4Bju5DPddIWg58iWIucwBs/2sf6gL4mqQtgI9RrF0K8H/7UM+gvn8AdzdtfLikucBOtq9rO7YdsMH2HdVFFoMw60bRSLrK9t6jowrKubF/2K8HdCQ9FTgMWEgxvvp+ikUken0z8uxxDtv2n/Synrb65gJ/DuxH0br+HvDPtn/bo/LfM+bQXIqukwehP7NkSnoVcCTwLZ48I2e/fkj2Xfmswk3AbrYfLI9dBvy17aFdUDymZja24C+QdCawhaS3A2+lPy3PUV8Ffk3xG0M/5zXZCHh3uR4rkp5JMZtgv3wWeAD4VLl/FMXY+zf2qPzR36p2Bvai+D4KeBNwVY/qGOstwPMpuu3aV8Ua2gRfPoD2ZYq/l7PL1vuCJPfZYda14AEkHUCxujzApeUixP2q60bbu/ar/LZ6fm9ccz/HOo83G2E/ZiiUdAVwkO0Hyv15wMW2X97Lesqy19reudflVk3S84ER2y+X9CHgftuf6vS+GH6zZpikpO+XXx+gGAL3Z+X2ZUn3SbpFUj/m/r5S0iCmm92obLUDT8wY2M/f0K6W9OK2+vahWP+z154NPNq2/2h5rB+ulNS4KXRt30Qxs+ROFENMP9/hLdEQs6aLxva+5ddxb6hKehZwJXBGj6veFzhW0i0U/br9Wsj5VOCHkr5U7r8B+GiP62i3mCIh3lrubweslXQDvf18nwOuKrsZoJj75pwelT3Wi4FrB/B3NS5Jf2C7X8MX/x9FV+QNY6cPjuaalV00E5G0le2eDsEb5ELOZevzj8rdb9te0+s62uqa9KGmXn6+cmWl/crdK8asUNQzg/y7mqD+i20f1KeyN6MYXvr6fnZJRr0kwUdENNSs6YOPiJhtkuAjIhpq1id4SctS13DU1cTPlLqGp55hNOv74CWttL0kddW/riZ+ptQ1PPWMZ+nSpV6/fn3H61atWnWp7aUDCOlJZs0wyYiIXlu/fj0rV3Z+/EPS/AGE83saleDnz5/vhQsXTus92223HUuWLJn2rzGrVq3qfNE4BrluaRPrauJnSl2V1bPe9oKZ1l3nXpBGJfiFCxdO6adpL4yZnzwihs+Mn28wsKFV35mlG5XgIyIGy7jGqx8mwUdEdMvQqm9+T4KPiJiJ9MFHRDSQgVYSfEREM6UFHxHRQLYziiYioqnq3IIfmrloJF1ZdQwREWN5Cn+qMjQteNsvrTqGiIh2xU3WqqOY2NAkeEm/sb151XFERLSrcxfN0CT4iIjayU3W/irngl4GxcRhERGDYurdgh+am6wTsT1ie4ntJQsWzHhiuIiIaWnZHbeqDH0LPiKiSnVuwSfBR0R0LbNJ9kRG0ERE3TizSUZENFcro2giIpons0lGRDRYbrJGRDRRxcMgO0mCj4iYgbTgIyIayMCGJPiIiGZKCz4ioqGS4Adk1apVSKo6jJ4b1D+gJn7vIvrJuckaEdFcacFHRDRUEnxERAMVo2gyVUFERCNlsrGIiCay00UTEdFEdV+yLwk+ImIGMkwyIqKh0oKPiGgg22zIgh8REc2UNVkjIhqqzsMkN6o6gHaSFkq6SdI5kn4i6VxJ+0v6gaSfStq76hgjIkaNjqLptE2FpKWS1kpaJ+n4cc5vJ+k7kq6RdL2kAzuVWasEX3oecCrw/HI7CtgXOA746wrjioj4Pb1I8JLmAKcDrwEWAUdKWjTmsg8BF9jeAzgCOKNTuXXsornF9g0AklYD37JtSTcAC8deLGkZsGywIUZEAL27ybo3sM72zQCSzgMOBda01wY8vXz9DOAXnQqtY4J/pO11q22/xTjx2h4BRgAk1bg3LCKapocPOm0N3Na2fzuwz5hrPgJcJuldwNOA/TsVWscumoiIodEq54SfbAPmS1rZtnXT63AkcI7tbYADgc9LmjSH17EFHxExNKY4THK97SWTnL8D2LZtf5vyWLu3AksBbP9Q0qbAfOCuiQqtVQve9s9s79q2f6ztC8c7FxFRB3bnbQpWADtK2kHSUyhuoi4fc82twKsAJL0A2BS4e7JC04KPiOiS6c1cNLYfl/RO4FJgDnCW7dWSTgRW2l4OvBf4jKS/Kqs+1h1uACTBR0R0q4dTFdi+BLhkzLET2l6vAV42nTKT4CMiupTpgiMiGiwJPiKioTIffEREIzmzSUZENNE0hkFWIgk+ImIGsuBHzIikgdQzyJtFg/pMEf3Uq3Hw/ZIEHxExAxlFExHRRNNY0KMKSfARETORBB8R0UytDUnwERGNUwyTTIKPiGikJPiIiEbKTdaIiMZyKwk+IqJx6t4HX6sl+yYi6VhJz6k6joiIsdxqddyqMhQJHjgWSIKPiNrp0ZqsfVFJgpe0UNJ/SPqMpNWSLpM0V9Lukn4k6XpJX5b0TEmHAUuAcyVdK2luFTFHRPweG7c6b1WpsgW/I3C67V2AXwOvBz4HfMD2bsANwIdtXwisBI62vbvthyuLOCJiDJfTFUy2VaXKm6y32L62fL0KeC6whe3vlsc+C3ypUyGSlgHL+hNiRMTEsibrxB5pe70B2KKbQmyPACMAkur7nY6IRqpzgq/TTdb7gHsl7VfuvwkYbc0/AMyrJKqIiInYeEOr41aVuo2DfzPwaUmbATcDbymPn1Mefxh4SfrhI6Iu6tyCryTB2/4ZsGvb/j+0nX7xONdfBFzU/8giIqanxvm9di34iIihkZusERFNVfOpCpLgIyK6ZloV3kTtJAk+ImIG0oKPiGigus8mmQQfETETSfAREc3k+nbBJ8FHRMxEumhiKEiqOoS+GOR/wKZ+D2MCNq0KF/TopE5z0UREDJXRB516MV2wpKWS1kpaJ+n4Ca55o6Q15ToaX+hUZlrwERHdcm8W3ZY0BzgdOAC4HVghabntNW3X7Ah8EHiZ7Xsl/bdO5aYFHxExE71Zs29vYJ3tm20/CpwHHDrmmrdTLJJ0b1Gt7+pUaBJ8RETXOnfPTLGLZmvgtrb928tj7XYCdpL0g3Jp06WdCk0XTUTEDLSm1kUzX9LKtv2RcrGi6diYYqnTVwLbAFdIeqHtX0/2hoiI6IKn3ge/3vaSSc7fAWzbtr9Neazd7cCPbT8G3CLpJxQJf8VEhaaLJiJiBnrURbMC2FHSDpKeAhwBLB9zzVcoWu9Imk/RZXPzZIWmBR8RMQO9eM7C9uOS3glcCswBzrK9WtKJwErby8tzr5a0hmId6/fZvmeycpPgIyK6NvVx7h1Lsi8BLhlz7IS21wbeU25TMhRdNJKOlXRa1XFERDyJe/egUz9U1oKXtLHtx6uqPyJipgx4Q33noplWC17S0yRdLOk6STdKOlzSXpKuLI9dJWmepIWSvifp6nJ7afn+V5bHlwNrymPHlO+7VtKZ5RNdSHqLpJ9Iugp4Wa8/eERELzSpBb8U+IXtgwAkPQO4Bjjc9gpJTwceBu4CDrD92/Lx2i8Co0OE9gR2tX2LpBcAh1M8evuYpDOAoyV9A/hbYDFwH/Cdsp6IiPqoOIF3Mt0EfwNwqqSTga8BvwbutL0CwPb9ULT0gdMk7U5xt3entjKusn1L+fpVFEl8RTkL31yKHw77AJfbvrss7/wxZTxB0jJg2TQ/R0RET/RiLpp+mVaCt/0TSXsCBwInAd+e4NK/An4JvIiiG+i3becebHst4LO2P9j+Zkmvm0ZMI8BI+b76fqcjopHq3IKfbh/8c4CHbP8LcApFS3srSXuV5+dJ2hh4BkXLvgW8iWJc53i+BRw2OiuapC0lbQ/8GHiFpGdJ2gR4QxefLSKir3o5XXA/TLeL5oXAKZJawGPAn1O0wv9J0lyK/vf9gTOAiyT9T+DfeXKr/Qm210j6EHCZpI3KMt9h+0eSPgL8kKIb6Nppf7KIiH6zcY0X/JhuF82lFE9TjfXiMfs/BXZr2/9A+f7LgcvHlHk+cP44dZ0NnD2d+CIiBi1rskZENFSd++CT4CMiuuUk+IiIRhq9yVpXSfAREV0zrQ317YRPgo+I6Fa6aCIiGiwJPiKimWqc35Pgo/n+8667BlbXggXbDayuu+++dWB1xfhykzUioqmmvuh2JZLgIyK6ZlpNmaogIiKeLF00ERFNlQQfEdE8Th98RERz1bgBnwQfEdG9Zq3JGhERo0xG0URENJGpdx/8tNZk7SdJW0j6i6rjiIiYjjqvyVqbBA9sASTBR8QQcTmUpsNWkTp10fwf4LmSrgW+UR57DcVvQSeVa7dGRNRHzacLrlML/njgP23vDvwI2B14EbA/cIqkraoMLiJiPK0N7rhVpU4Jvt2+wBdtb7D9S+C7wF7jXShpmaSVklYONMKImPVGZ5Osax98nbpoumJ7BBgBkFTf35UionnSRTNlDwDzytffAw6XNEfSAuDlwFWVRRYRMa7Orfe04AHb90j6gaQbga8D1wPXUfwW9H7b/1VpgBER46hzC742CR7A9lFjDr2vkkAiIqYoDzpFRDTQ6GySnbapkLRU0lpJ6yQdP8l1r5dkSUs6lZkEHxExA73og5c0Bzid4tmfRcCRkhaNc9084N3Aj6cSWxJ8RETXenaTdW9gne2bbT8KnAccOs51fwecDPx2KoUmwUdEdKt3XTRbA7e17d9eHnuCpD2BbW1fPNXwanWTNSJi2EyxhT5/zMOYI+UzPFMiaSPgH4FjpxNbEnxERJdGn2SdgvW2J7spegewbdv+NuWxUfOAXYHLJQH8AbBc0iG2J3yKPwk+IqJrxr1Z8GMFsKOkHSgS+xHAE8PGbd8HzB/dl3Q5cNxkyR3SBx8R0T2DW523jsXYjwPvBC4F/gO4wPZqSSdKOqTb8NKCj8Z73rOfXXUIfTGoJyjLLoGYQK/+HmxfAlwy5tgJE1z7yqmUmQQfETEDmaogIqKBpnGTtRJJ8BER3bJpbejJTda+SIKPiJiJtOAjIprJJMFHRDSOa76iUxJ8RETXjKcy0L0iSfARETOQFnxEREO1ejNVQV/UIsFL+gjwG+DpwBW2v1ltRBERnRXzvSfBT8lEj+VGRNRWjbtoKptsTNLfSPqJpO8DO5fHzpF0WPl6saTvSlol6VJJW1UVa0TERDyFP1WpJMFLWkwxHebuwIHAXmPObwL8E3CY7cXAWcBHBx1nREQnPVqyry+q6qLZD/iy7YcAJC0fc35nisntv1HOZDcHuHO8giQtA5b1L9SIiImYVmtD1UFMqFZ98G0ErLb9kk4XlstejQBIqm9nWEQ0Tt0fdKqqD/4K4HWS5kqaB7x2zPm1wAJJL4Giy0bSLoMOMiKik3TRjGH7aknnA9cBd1EsV9V+/tHyZuunJD2DIs5PAKsHHmxExCTq3IKvrIvG9keZ5Map7WuBlw8uooiI6XKth0nWtQ8+ImIomDzoFBHROHamKoiIaKhqb6J2kgQfETEDmYsmIqKh0oKPiGioJPiIiCZyhklGRDSSgZYzF01ERANlFE1ERGMlwUdENFQSfEREAxX3WDMOPiKigYwzVUFERDNVueZqJ0nwEREzkD74iIhGcq374Ktasi8iYuiNrsnaiyX7JC2VtFbSOknHj3P+PZLWSLpe0rckbd+pzCT4iIgZ6EWClzQHOB14DbAIOFLSojGXXQMssb0bcCHwsU7lJsFHRMxAq9XquE3B3sA62zfbfhQ4Dzi0/QLb37H9ULn7I2CbToUmwUdEdM3gVuets62B29r2by+PTeStwNc7FZqbrBERMzDFYZLzJa1s2x+xPdJNfZKOAZYAr+h0bRJ8RESXRm+yTsF620smOX8HsG3b/jblsSeRtD/wN8ArbD/SqdKhT/CSlgHLqo4jImanHo2DXwHsKGkHisR+BHBU+wWS9gDOBJbavmsqhQ59gi9/zRkBkFTfJw4iooF6Mw7e9uOS3glcCswBzrK9WtKJwErby4FTgM2BL0kCuNX2IZOVO/QJPiKiSlMcJdOR7UuAS8YcO6Ht9f7TLXNoRtFIukTSc6qOIyJiVC8fdOqHoWnB2z6w6hgiIp4sa7JGRDSWqe9cNEnwEREzkNkkIyIayT27ydoPSfAREV3Kkn0REQ2WLpqIiIZKgo+IaKQMk4yIaKwsuh0R0UA2tFobqg5jQknwERFdq3Yqgk6S4CMiZiAJPiKioZLgIyIaKg86RUQ0kTNMMiKikQy00oKPiGimdNFERDRShklGRDRWnRP8jNdklXS5pLWSri23C9vOLZN0U7ldJWnftnMHS7pG0nWS1kj605nGEhExSI1ck1XSU4BNbD9YHjra9sox1xwM/Cmwr+31kvYEviJpb+AeYATY2/btkp4KLCzf90zb93b3cSIiBsm4xlMVTKsFL+kFkk4F1gI7dbj8A8D7bK8HsH018FngHcA8ih8u95TnHrG9tnzf4ZJulPReSQumE19ExKB5Cn+q0jHBS3qapLdI+j7wGWANsJvta9ouO7eti+aU8tguwKoxxa0EdrH9K2A58HNJX5R0tKSNAGx/GngNsBlwhaQLJS0dPR8RUSfD3kVzJ3A98DbbN01wze910XRi+22SXgjsDxwHHAAcW567Dfg7SSdRJPuzKH44HDK2HEnLgGXTqTsioleG/SbrYcAdwL9KOkHS9lMsew2weMyxxcDq0R3bN9j+OEVyf337hWVf/RnAp4ALgA+OV4ntEdtLbC+ZYlwRET1RtNBbHbeqdEzwti+zfTiwH3Af8FVJ35S0sMNbPwacLOlZAJJ2p2ihnyFpc0mvbLt2d+Dn5XWvlnQ9cBLwHWCR7b+0vZqIiJoZ9i4aAGzfA3wS+GTZum6/dXyupIfL1+tt7297uaStgSslGXgAOMb2nZLmAe+XdCbwMPAgZfcMxY3X19r++Yw+WUTEALRa9X2SVXXuP5qu8gdJxKwwqP+7kgZSTwVWzbRrd86cjT130807XvfgQ/fNuK5u5EnWiIiuGVPfFnwSfEREl0afZK2rJPiIiBlIgo+IaKgk+IiIRjKtGs9FkwQfEdGluvfBZ36XiIiZGF2XdbJtCso5t9ZKWifp+HHOP1XS+eX5H0/hYdMk+IiI7k1lLsnOCV7SHOB0irm3FgFHSlo05rK3Avfafh7wceDkTuU2rYtmPeWUB9Mwv3zfIKSu4ahnKOrq8gGk2n+uAdYz1Xm1JtWjuWb2BtbZvhlA0nnAoRRzeo06FPhI+fpC4DRJ8iR9RI1K8LanPX+8pJWDesIsdQ1HPalruOoa5GcaT4+mKtgauK1t/3Zgn4musf24pPuAZzHJD7dGJfiIiAG7lOI3iE42ldQ+pfqI7ZE+xfSEJPiIiC7ZXtqjou4Atm3b36Y8Nt41t0vaGHgG5ap4E8lN1mJt2NQ1HHU18TOlruGpp59WADtK2qFc8/oIilXv2i0H3ly+Pgz49mT979Cw2SQjIoaVpAOBTwBzgLNsf1TSicDKcvr1TYHPA3sAvwKOGL0pO2GZSfAREc2ULpqIiIZKgo+IaKgk+IiIhkqCj4hoqCT4iIiGSoKPiGioJPiIiIZKgo+IaKj/D/FNUhxyr9dqAAAAAElFTkSuQmCC\n",
"text/plain": [
"<Figure size 432x288 with 2 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"input = c est un jeune directeur plein de talent .\n",
"output = he s a talented writer . <EOS>\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAX0AAAETCAYAAADah9Z7AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAfVklEQVR4nO3de5hcVZ3u8e+byP2qBrwQMBwMIiBCkgERHAGBCV5gPDACojMIGkfB68iIc3wYB/EoIsPoGTwSFERFEfGWM6IgiIIiSMIlEJA5OSACopxwE0GBpN75Y+82laLT1Z3sqr2r6/3k2U/X3rVrr1Wd7l+tXnut35JtIiJiOEypuwIREdE/CfoREUMkQT8iYogk6EdEDJEE/YiIIZKgHxExRBL0IyKGSIJ+RMQQSdCPgSJpqqT31V2PiEGVoB8DxfYK4Mi66xExqJQ0DDFoJJ0BrAN8HXhs5Ljt62urVMSASNCPgSPpilEO2/Z+fa9MxIBJ0I+IGCLPqLsCERMl6aTRjts+ud91iRg0CfoxiB5re7w+8FrgtprqEjFQ0r0TA0/SesAltvepuy4RTZchmzEZbAhMr7sSEYMg3TsxcCTdDIz8iToV2AJIf37EOKR7JwaOpBe07S4Hfmd7eV31ieaQJODbwIds5z7PKNK9EwPH9l3A1sB+tu8FNpe0bc3VimY4EPgL4K11V6SpEvRj4Ej6Z+CDwIfKQ+sCX6mvRtEgx1IE/NdJSvf1KBL0YxC9HjiYcuim7d8Am9Rao6idpGnATra/D1wG/HXNVWqkBP0YRE+6uBllAEkb1VyfaIY3A18rH59LunhGlaAfg+hCSWdR9OW/jaJV9/ma6xT1O4Yi2GP7OuB5kraut0rNk9E7MZAkHUBx004UE7N+WHOVBoak9Ww/0e3YIJG0OXC47bPajh0ALLN9Q301a54E/ahMOZRypu3LJG0APMP2oz0o51TbH+x2LEYn6Xrbs7odi8kp3TtRibKb5SJgpKU1HfhOj4o7YJRjB/WorElD0nMlzQY2kLSbpFnltg/FrOaBJOltkmaWjyXpXEm/l7RY0m51169pMqQpqnIcsDtwLYDt/ytpyyoLkPQO4J3AdpIWtz21CXB1lWVNUn8FHE3xgfyvbccfBf6pjgpV5D3AF8vHRwK7ANsCuwGfAV5RT7WaKUE/qvKE7SeLCZFQjpGuuu/wq8D3gY8DJ7Ydf9T2gxWXNenYPg84T9Khtr9Zd30qtNz2U+Xj1wJfsv0AcJmkT9ZYr0ZK0I+q/ETSP1F0HRxA0SL/P1UWYPsR4BFJnwYeHLlfIGlTSXvYvrbK8iax/5D0RmAGbTFggNcjaEl6HvAQ8CrgY23PbVBPlZorQT+qciLFbMibgbcDF9O7YZT/G2i/6fiHUY5VQtIWwNt4eoA8puqy+ui7wCPAImBgR+y0OQlYSJF8b4HtJQCSXgncUWfFmiijd2LgSLrR9q4dxxbb3qUHZV0NXEURIFeMHB/k7hFJt9jeue56VKnsTtzE9kNtxzaiiHF/qK9mzZOWflRC0l7AR4AXUPxciWKx8v/Wg+LukPRuitY9FF1JvWrRbTgJh4JeLekltm+uuyIVehZwnKSdyv0lwGdt/67GOjVSWvpRCUm/BN7H01vED/SgrC0pRmXsR3Gz+HLgvbbv70FZpwBX27646mvXRdKtwAuBOym6d0Y+oCv/S6kfygbHVylG8CwqD88G/g44yvbPaqpaIyXoRyUkXWt7j7rrUTVJjwIbUQTHp1gZIDettWJroWM9gj8rU1YPHEnXAO/onHkraVfgrMn4c7k2MjkrqnKFpNMk7dk26acnMzwlbS/pckm3lPu7SPpwL8qyvYntKbY3sL1puT+wAR+eth7BXcDjDHYs2HS0VAu2byTZV58mLf2ohKQrRjls2/v1oKyfACdQtOJ2K49VenNS0g62f7m6Dy7b11dVVr+V6xHMAV5ke3tJzwe+YXuvmqu2RiTdBry8/SZuefxZFF1zO9RTs2bKjdyohO19+1jchrZ/MTIRrFT1convB+YBp4/ynCnuJwyq11PMVr0eivUIJA1yi/gM4FJJH6B8TxR9+qeWz0WbBP0aTNIshyeNdrxHE36WSdqOlfn0DwPuq7IA2/PKr/38MOuXJ21b0qRYj8D2fEm/AT4K7ETxc3ErcIrtSicITgYJ+vX4OU+fSDTasUHyWNvj9Smmw/dqYerjgPnADpLupRiFclQvCpK0IUWrfxvb88rEXi+y/R+9KK9POtcjOAY4u+Y6rZXy/2OQ/0/6JkG/jyQ9F9iKMsshxUgQgE0Z4CyHALZX6QaR9CngkqrLkTQFmGN7/7KFOqUX6ZvbnEsxDPDl5f69wDcY4ABj+1NlqozfAy8CThrk9QgkXWj7DeXjVVJsS7rU9oH11W7tSDqHogF1/2j3rFT0cX4aeDXFDfmju91vStDvr/Ysh6ezMugPepbD0WxI8T4rZbsl6R+BC20/1vUFa28724dLOrIs/3F13EwYRGWQH9hA32Fm2+MDgPbJdFv0uS5V+yLw78CXVvP8QRTvfyawB8WExTGHqCbo91EdWQ77NVNW0s2szKo5leKXrVcJvC4rb9p9nbZupR5l2nyyXBBmpP97O3qUr0bS9hS/tM+xvbOkXYCDbZ9S0fUfZfTMp4M+92CsIYgDPTzR9pWSZoxxyiEUWUUNXCNpc0nPs73ae1wJ+vWYLmlTihb+2RR9+SfavrQHZX2BUWbK9sBr2x4vB35nu+oRNSMOL78e13bMQC9SPvwz8ANga0nnA3tR/LXWC2dTDkUFsL1Y0leBSoK+7UEeoTOWDcvu0ims2nUqasiyOXfuXC9btqzreYsWLVoC/Knt0Hzb8ydY3FbA3W3795THEvQb5hjbn5b0V8CzgTcDXwZ6EfQfsf39Hlx3FbbvkrQ3xXKJ50qaJmkT23f2oKxtq77mGGX9UNL1wMsogsh7bHf/jV4z/RiK+mdlOov1R/Zt/7pXZfXYfaxcFOa3rLpAzG/7XZlly5axcOHCrudJ+pPtOX2o0ioS9Osx8lv9Goo/zZb0sJ/4CkmnAd+irVui6slF7RN+KG5+rgt8haJlXFUZ+9n+kaT/Ptrztr9VYVmdI6lGWk7bSNqmR5Ozej4UtbzuwRT3lJ4P3E/R9XcbxXDHgdPEYbV9nPR6L8Xs6hHTy2OrlaBfj0WSLqHojjixnBjT6lFZIzd1ZpdfRW8mF/Vjws9fAj8CXkfxHtTxtbKgz6qTstp/g3v1/YP+DUX9KMVfLpfZ3k3SvsCbelBO35T3Xba3fVPbsW2AFbbHDIJVM7Ci1atf56dZABwv6QKK3/VHxurPhwT9uhwLfBi4tRwNsg3w3h6V9eNRjvWiGdKPCT+PSno/cAsrgz304P2MtB7LYPJOYO+ynKtYmdK5EuV7GnExcAVF//RjwKGs2l1RhadsPyBpiqQptq+Q9G8Vl9Fvy4FvSdqlbVTX5ylGxfU16INxRT+Skr4G7ANMk3QPxT2mdQBsf47i5+XVwFKKIZtv6XbNBP2SpPMo+msfLvefCZzu3qyQdCZFy34/ipusj1L8Yv9FD8pqX0Cil5OmOif8HEv1K2dtXH59EcX36rsUgf91wC8qLmvEeRTj2T9T7r+RYvjcGyosY+Qvos739WZ6874elrQxcCVwvqT7WfXnZODYfkrStyn+X84tG1Jb2O7euV55ZaBVUTPE9pFdnjerDmjoKkF/pV1GAj6A7YfKUQC9sIftWZJuaCtr3V4U1K9JUx0TfrYHPmz7sorL+BcASVcCs7xyjdyPAN+rsqw2O9vesW3/ChX56CtTw/u6iaJV+D6K7qPNWPmBOsg+T9E9di7wt+XXWjQ5kWWC/kpTJD1zJFNfmaGvV9+fpyRNZeUNuy3oXZ9+p0onTUn6qe2928aAj3S5/L2kFvAgcJrtz1ZVJvAc4Mm2/SfLY71wvaSX2b4GQNIeFOux9kK/3te+tlsUP3PnQbHcZA/K6asyK6rK+Q5HAK+opR5AK0F/IJwO/FzSN8r9vwE+1qOyPgN8G9hS0seAwyj6+CvX60lTtvcuv45601bSs4GrgSqD/peAX5R/zgP8NcXMxV6YTbG84Mhwxm2A20e+r652tamevi9J76C4P7FdR5DfBOjr6lKSnmu7F8Mpv0DR4r+5M9VyPzW5pZ98+m0k7cjKURk/sl3pn/EdZe0AvIqiZXy57Z4kJ9OqqyT1etLU6uow5gzBNbzmLFa25K70KItoVFTOqKtMjXDFq0318n1J2gx4JvBx4MS2px7t0WzmseryPduv6cF1N6QY5npo1d2L47XbrFn+yc+6f4ZutuGGi+oYp5+gHxFRod1mzfKPf/rTrudtvtFGtQT9dO9ERFSsqiGbvTDI62L2jKR5KWswypqM7yllDU45oylu5Hbf6pKgP7p+/sCkrMEoJ2UNVlm1BX0obuR22+qS7p2IiCrZ/UzDMGGTPuhPmzbNM2bMmNBrttlmG+bMmTPhj+JFixZN9CUAjKQu6IfJWNZkfE8pq7Zyltleq4VXTLOHbE76oD9jxoxxpTmtQu8SZUZEn1QyBDeTsyIihkha+hERQ6O6LJu9kKAfEVEh1zwks5sE/YiIirUyeiciYjgky2ZExJDJjdyIiGFhp6UfETFMmtzSb2TuHUkzJN1Sdz0iIibKwAq761aXtPQjIiqWlv6amSrpbElLJF0qaQNJ20n6gaRFkq4qV5+KiGiUJmfZbHLQnwmcaXsn4GHgUIqV7t9lezbwAapddzUiYq25vJHbbatLk7t37rR9Y/l4ETADeDnwjbbEZuuN9sJyAYV5UGTMjIjopyZ37zQ56D/R9ngF8BzgYdu7dnuh7fkUfxWsUYrkiIi10eSg3+TunU6/B+6U9DcAKry05jpFRKyiGL3T6rrVZZCCPsBRwLGSbgKWAIfUXJ+IiKdp8hq5jezesf0rYOe2/U+1PT237xWKiBivmkfndNPIoB8RMaiyXGJExJBJ7p2IiCGSln5ExJCwzYosohIRMTyyRm5ExBBp8hq5gzZOPyKi0UZG71SRcE3SXEm3S1oq6cRRnt9G0hWSbpC0WNKru10zQT8iomJVBH1JU4EzgYOAHYEjJe3YcdqHgQtt7wYcwTiSUE767p1FixbRlqBt0ujX6IDJ+L2L6KnqbuTuDiy1fQeApAsoshDc2l4asGn5eDPgN90uOumDfkREP1U4OWsr4O62/XuAPTrO+QhwqaR3ARsB+3e7aLp3IiIqNs58+tMkLWzb5q1BUUcCX7Q9HXg18GVJY8b1tPQjIio2ziGby2zPGeP5e4Gt2/anl8faHUuZj8z2zyWtD0wD7l/dRdPSj4iomN19G4frgJmStpW0LsWN2gUd5/waeBWApBcD6wP/f6yLpqUfEVEhU03uHdvLJR0PXAJMBc6xvUTSycBC2wuAfwDOlvS+suij3eWGQoJ+RESVKkzDYPti4OKOYye1Pb4V2Gsi10zQj4ioUFIrR0QMmQT9iIghknz6ERFDw8myGRExLCYwJLMWAzdOX9JGkr4n6SZJt0g6vO46RUS0W9Fqdd3qMogt/bnAb2y/BkDSZjXXJyLiz6oap98rA9fSB24GDpB0qqRX2H6k8wRJ80byWdRQv4gYclXl0++FgQv6tv8TmEUR/E+RdNIo58y3PadLXouIiOqNI+DXGfQHrntH0vOBB21/RdLDwFvrrlNExCoa3L0zcEEfeAlwmqQW8BTwjprrExGxitaKBP3K2L6EIgFRRETjFEM2E/QjIoZGgn5ExNCo90ZtNwn6EREVcytBPyJiKKRPPyJiyLjGNAvdJOhHRFSswQ39BP2IiErZ6dOPiBgm6dOPyknqSzn9/OHt13uK6KWskRsRMWQS9CMihoWNV2T0TkTE0EhLPyJiiDQ45ifoR0RUKTdyIyKGSdIwREQME9PKjdyIiOGRln5ExJBIls2IiGGToB8RMTzc3C59ptRdgYmS9B1JiyQtkTSv7vpERHSy3XWryyC29I+x/aCkDYDrJH3T9gPtJ5QfBvlAiIj+s2llEZVKvVvS68vHWwMzgVWCvu35wHwASc3tXIuISafpk7MGqntH0j7A/sCetl8K3ACsX2ulIiLauVgYvds2HpLmSrpd0lJJJ67mnDdIurXs8v5qt2sOWkt/M+Ah249L2gF4Wd0Vioh4mgpa+pKmAmcCBwD3UHRnL7B9a9s5M4EPAXvZfkjSlt2uO1AtfeAHwDMk3QZ8Arim5vpERHTofhN3nN0/uwNLbd9h+0ngAuCQjnPeBpxp+yEA2/d3u+hAtfRtPwEcVHc9IiLG0hpf9800SQvb9ueX9yNHbAXc3bZ/D7BHxzW2B5D0M2Aq8BHbPxir0IEK+hERTeeyT38cltmes5bFPYNiMMs+wHTgSkkvsf3w6l4waN07ERGNV1H3zr0UIxRHTC+PtbsHWGD7Kdt3Av9J8SGwWgn6EREVqyjoXwfMlLStpHWBI4AFHed8h6KVj6RpFN09d4x10XTvRERUqpoZt7aXSzoeuISiv/4c20sknQwstL2gfO5ASbcCK4ATOierdkrQj4ioUoVZNm1fDFzcceyktscG3l9u45KgHxFRIQNe0dwZuQn6EREVa3IahgT9iIgq1ZxFs5sE/RiTpL6V1c9flH6+rxg+482tU4cE/YiIiqWlHxExJJqeWjlBPyKiSjbOIioREcOjyWvkJuhHRFQs3TsREcOiwhm5vZCgHxFRodzIjYgYKqa1ormd+mOmVpa0uaR3druIpD+saQUkHS3p+RN8zQxJt6xpmRERPePKUiv3RLd8+psDXYP+WjoamFDQj4hoNLv7VpNuQf8TwHaSbpR0hqTLJV0v6WZJnQv0AiDpBEnXSVos6V/KYzMk3SbpbElLJF0qaQNJhwFzgPPLMjaQNFvSTyQtknSJpOeV15gt6SZJNwHHVfg9iIioVINjftegfyLw/2zvCpwAvN72LGBf4HR1JDCRdCDFUl27A7sCsyX9Zfn0TIpV23cCHgYOtX0RsBA4qixjOfC/gMNszwbOAT5Wvv5c4F22X7pW7zgioodGbuQ2tXtnIjdyBfzPMoi3KFZqfw7w27ZzDiy3G8r9jSmC/a+BO23fWB5fBMwYpYwXATsDPyw/T6YC90naHNjc9pXleV8GDlptRaV5wLwJvLeIiGqMf2H0Wkwk6B8FbAHMtv2UpF8B63ecI+Djts9a5aA0A3ii7dAKYINRyhCwxPaeHa/ffAL1xPZ8YH752uZ+9yNiEjKtBqdh6Na98yiwSfl4M+D+MuDvC7xglPMvAY6RtDGApK0kbTmBMm4HtpC0Z/n6dSTtZPth4GFJe5fnHdXlmhERtRnY7h3bD0j6WTk88jpgB0k3U/TD/3KU8y+V9GLg52X3zB+AN1G07Ffni8DnJP0R2BM4DPiMpM3K+v0bsAR4C3BO2XK/dELvMiKinxo8OUtNnjlWhXTvDI4sohINsMj2nLW5wJbP3dqH/+17up7376edsNZlrYnMyI2IqFiT29IJ+hERlcoauRERw8M0evROgn5ERIXM5BmnHxER45DunYiIoVFzcp0uEvQjIqqUlbMiIoZLa0WCfkRX/ZwwlYlg0StZLjEiYpikeyciYphkclZExFBJ0I+IGCJNnpzVLZ9+RERMgMuVs7pt4yFprqTbJS2VdOIY5x0qyZK6Zu1M0I+IqFgVi6hImgqcSbE07I7AkZJ2HOW8TYD3ANeOp24J+hERleoe8MfZ5787sNT2HbafBC4ADhnlvI8CpwJ/Gs9FE/QjIqpUXffOVsDdbfv3lMf+TNIsYGvb3xtv9XIjNyKiYuNsyU+TtLBtf77t+eMtQ9IU4F+BoydSt9pb+pI+P9JPJemf6q5PRMTaGJmRO47unWW257RtnQH/XmDrtv3p5bERmwA7Az+W9CvgZcCCbjdzaw36kqbafqvtW8tDEw765c2OiIiGMG61um7jcB0wU9K2ktYFjgAW/LkU+xHb02zPsD0DuAY42PbC0S9XqDzoSzpB0rvLx2dI+lH5eD9J50v6g6TTJd0E7Cnpx5LmSPoEsIGkGyWdX77mTZJ+UR47ayTAd16j6vcQEbHGDG5137pexl4OHA9cAtwGXGh7iaSTJR28ptXrRUv/KuAV5eM5wMaS1imPXQlsBFxr+6W2fzryItsnAn+0vavtoyS9GDgc2Mv2rsAK4Kjy9FGvERHRBBWN3sH2xba3t72d7Y+Vx06yvWCUc/fp1sqH3tzIXQTMlrQp8ARwPUXwfwXwborg/c1xXOdVwGzgujJL4QbA/eVzY15D0jxg3hrWPyJirQxVGgbbT0m6k+KO8tXAYmBf4IUUf6L8yfaKcVxKwHm2PzTKc2Neo7whMh9AUnO/+xEx6TQ9tXKvbuReBXyAojvnKuDvgRvc/TvxVNkVBHA5cJikLQEkPUvSC3pU34iIati0VrS6bnXpZdB/HvBz27+jmCl21TheNx9YLOn8ckTPh4FLJS0GflheMyKi2ezuW016MjnL9uXAOm3727c93rjj3H3aHn8Q+GDb/teBr49y/Y07j0VENIVpbvdOZuRGRFTIWTkrImKYGI9nIH5NEvQjIiqWln5ExBBpjS/NQi0S9CMiKlTMuE3Qj4gYHuneiYgYHhmyGRExRHIjN6JhyiR+sYZW9PFG5dQpta/1NEGm1RpPerF6JOhHRFQok7MiIoZMgn5ExBBJ0I+IGBr1ZtHsJkE/IqJiJpOzIiKGgp00DBERQ2T8C5/XIUE/IqJiyb0TETFE0tKPiBgiCfoREcOi5oXPu0nQj4iokIGWk3snImJIZPRO30maB8yrux4RMZwS9PvM9nxgPoCk5n73I2JSStCPiBgSxX3c5o7TH7TVCVYh6WJJz6+7HhERKxm3Wl23ugx0S9/2q+uuQ0REp6yRGxExRNKnHxExNNzoPv0E/YiICjV9jdyBvpEbEdFEtrtu4yFprqTbJS2VdOIoz79f0q2SFku6XNILul0zQT8iomKtVqvr1o2kqcCZwEHAjsCRknbsOO0GYI7tXYCLgE92u26CfkREpQxudd+62x1YavsO208CFwCHrFKSfYXtx8vda4Dp3S6aoB8RUTGP4x8wTdLCtq0zdcxWwN1t+/eUx1bnWOD73eqWG7kRERWawI3cZbbnVFGmpDcBc4BXdjs3QT9iEunXqBFJfSlnUFX0/3AvsHXb/vTy2Cok7Q/8D+CVtp/odtEE/YiISlU2Tv86YKakbSmC/RHAG9tPkLQbcBYw1/b947logn5ERMXGMzqnG9vLJR0PXAJMBc6xvUTSycBC2wuA04CNgW+Uf3392vbBY103QT8iokJVTs6yfTFwccexk9oe7z/RayboR0RUKmvkRkQMFZPcOxERQ6PJuXcS9CMiKuVKbuT2SoJ+RESFmr5cYoJ+RETFmty905PcO5J+XKYDvbHcLmp7bp6kX5bbLyTt3fbcayXdIOmmMl3o23tRv4iIXqoqtXIvVNbSl7QusI7tx8pDR9le2HHOa4G3A3vbXiZpFvAdSbsDDwDzgd1t3yNpPWBG+bpn2n6oqrpGRPROs4dsrnVLX9KLJZ0O3A5s3+X0DwIn2F4GYPt64DzgOGATig+hB8rnnrB9e/m6wyXdIukfJG2xtnWOiOilcWbZrMUaBX1JG0l6i6SfAmcDtwK72L6h7bTz27p3TiuP7QQs6rjcQmAn2w8CC4C7JH1N0lGSpgDY/hzFQgIbAldKuqhcUWbU+pddSAslLRzt+YiIXrGh1VrRdavLmnbv3AcsBt5q+5erOedp3Tvd2H6rpJcA+wMfAA4Aji6fuxv4qKRTKD4AzqH4wHhangnb8ym6ipDU3L+zImISqrfPvps17d45jCLr27cknTSedRlLtwKzO47NBpaM7Ni+2fYZFAH/0PYTy77/zwKfAS4EPrRm1Y+I6J0m38hdo6Bv+1LbhwOvAB4BvivpMkkzurz0k8Cpkp4NIGlXipb8ZyVtLGmftnN3Be4qzztQ0mLgFOAKYEfb77W9hIiIhmly0F+r0Tu2HwA+DXy6bIW3d1SdL+mP5eNltve3vUDSVsDVZbfLo8CbbN8naRPgHyWdBfwReIyya4fi5u7rbN+1NvWNiOiHJk/OUpP7nqqQPv0YJlk5a60tWtslDNddZz1Pm9Z1fXLu++0da13WmsiM3IiIChloNbiln6AfEVGxJnfvJOhHRFSq2UM2E/QjIiqWoB8RMSSqXCO3FxL0IyIqZVxjmoVuEvQjIipWZ0K1bhL0IyIqlu6dei2jTOcwAdPK1/VDyhqMcgairDWcNNX499XHcsabR2xMCfo1sj3h/PuSFvZrplzKGoxyUtZgldXP99SpyK2TcfoREUMjLf2IiCHSaqWlP2jmp6yBKWsyvqeUNTjljK7BLf1Jn2UzIqKfpk6d6vXX36jreY8//miybEZEDLrMyI2IGDIJ+hERQyRBPyJiaJhWcu9ERAyHpvfpT6m7AhERk04R+cfexkHSXEm3S1oq6cRRnl9P0tfL56+VNKPbNRP0IyIq5XH960bSVOBM4CBgR+BISTt2nHYs8JDtFwJnAKd2u26CfkRExexW120cdgeW2r7D9pPABcAhHeccApxXPr4IeJW6ZN1L0I+IqFir1eq6jcNWwN1t+/eUx0Y9x/Zy4BHg2WNdNDdyIyKqdQlFaudu1pe0sG1/vu2ep49I0I+IqJDtuRVd6l5g67b96eWx0c65R9IzgM2AB8a6aLp3IiKa6TpgpqRtJa0LHAEs6DhnAfB35ePDgB+5y3jRtPQjIhrI9nJJx1N0F00FzrG9RNLJwELbC4AvAF+WtBR4kOKDYUzJshkRMUTSvRMRMUQS9CMihkiCfkTEEEnQj4gYIgn6ERFDJEE/ImKIJOhHRAyRBP2IiCHyX9njRpdku39jAAAAAElFTkSuQmCC\n",
"text/plain": [
"<Figure size 432x288 with 2 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"def showAttention(input_sentence, output_words, attentions):\n",
" # Set up figure with colorbar\n",
" fig = plt.figure()\n",
" ax = fig.add_subplot(111)\n",
" cax = ax.matshow(attentions.numpy(), cmap='bone')\n",
" fig.colorbar(cax)\n",
"\n",
" # Set up axes\n",
" ax.set_xticklabels([''] + input_sentence.split(' ') +\n",
" ['<EOS>'], rotation=90)\n",
" ax.set_yticklabels([''] + output_words)\n",
"\n",
" # Show label at every tick\n",
" ax.xaxis.set_major_locator(ticker.MultipleLocator(1))\n",
" ax.yaxis.set_major_locator(ticker.MultipleLocator(1))\n",
"\n",
" plt.show()\n",
"\n",
"\n",
"def evaluateAndShowAttention(input_sentence):\n",
" output_words, attentions = evaluate(\n",
" encoder1, attn_decoder1, input_sentence)\n",
" print('input =', input_sentence)\n",
" print('output =', ' '.join(output_words))\n",
" showAttention(input_sentence, output_words, attentions)\n",
"\n",
"\n",
"evaluateAndShowAttention(\"elle a cinq ans de moins que moi .\")\n",
"\n",
"evaluateAndShowAttention(\"elle est trop petit .\")\n",
"\n",
"evaluateAndShowAttention(\"je ne crains pas de mourir .\")\n",
"\n",
"evaluateAndShowAttention(\"c est un jeune directeur plein de talent .\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table width=\"100%\">\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/2.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">このチュートリアルではアテンションの可視化もしているね(以下)。…これをみると、not を読みだすときに注目しているのは ne/pas の両方じゃなくて pas だけだなあ…。形容詞の前置修飾と後置修飾はどうだろう。\n",
"</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"input = c est un employe de bureau .\n",
"output = he is an office worker . <EOS>\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAXgAAAEQCAYAAAC6Om+RAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAc4klEQVR4nO3de7QdZZ3m8e+TqEDCxUuQtgkQ1DAaaIQQAS8oKmC0EXTQBsTuRpHY3V5bQbHbxXJA1hpUtHUalaBocBBUVEyPwdAoVxVCQrgliGZAJbQjBkFBFEjOM39UHdgcTrL32beqU3k+WbVSVbuq3t8+65zffvdbb72vbBMREc0zpeoAIiJiMJLgIyIaKgk+IqKhkuAjIhoqCT4ioqGS4CMiGioJPiKioZLgIyIaKgk+uiJpF0kHletbSdqm6pgi4vGS4GPCJB0PXAicVe6aCVxUXUQRMZ4k+OjGO4GXAH8AsP1z4JmVRhQRT5AEH914yPbDoxuSngRkUKOImkmCj25cIelfgK0kHQx8E/iPimOKiDGU0SRjoiRNAY4DDgEELAW+6PwyRdRKEnxMmKTXAd+zPVJ1LBGxcWmiiW4cCfxc0sclPa/qYKI9FS6S9PyqY4nhSQ0+uiJpW+Bo4K0UN1i/DJxv+/5KA+uBpJPH22/7lGHH0m+SXg2cA1xg+wNVxxPDkRp8dMX2Hyj6wl8APAt4A3C9pHdXGlhv/tiybABeA8yqMqA+Og54O/C6stdTbAZSg48Jk3QYRc39ucC5wCLbd0uaBqy2PavK+PpF0hbAUtsHVh1LLyTNAK6wvbukzwE/tH1h1XHF4OWTPLpxBPBp21e27rT9oKTjKoppEKZRPKU72f0tcH65/mXgVIpvX9FwqcFHVyTtALyw3Fxm++4q4+kHSTfz2ANbU4HtgVNs/3t1UfWufF/zbd9Vbt8IHGr7zmoji0FLgo8Jk/Qm4JPA5RT94A8ATpzsX/sl7dKyuR74je31VcXTD5KeChxp+6yWfQcD62yvrC6yGIYk+JiwsgZ48GitXdL2wKW2X1BtZP0h6ZnAlqPbtn9VYTgRXUsvmujGlDFNMvfQgN8lSYdJ+jlwB3AF8Avg4kqD6oGk4yXNLtcl6cuS/iDpJkl7Vx1fDN6k/6OMSnxf0lJJx0o6FvgesKTimPrhVGB/4Ge2dwVeBVxTbUg9eS/FhxQUzyzsCewKvB/4bEUxxRAlwceE2T4RWEiRMPYEFtr+ULVR9cUjtu8BpkiaYvsyYF7VQfVgve1HyvVDgXNt32P7UmB6hXHFkKSbZHTF9reAb1UdR5/dJ2lr4CrgPEl3Uzz0NFmNSHoWcC/Ft5HTWl7bqpqQYpiS4KNjku5n/HHfBdj2tkMOqd8OB/4EvA84BtgOmMzDFJwMLKfo8rnY9ioASS8Hbq8ysBiO9KKJaFF2lZxt+9Lyydypk3x8nScB29i+t2XfdIq//QeqiyyGITX46IqkucBLKWr0VzehT3U51+wC4OnAc4AdgS9QNG9MVk8H3ilp93J7FfA527+pMKYYktxkjQkrR11cBDwDmAF8RdJHqo2qLxo116yklwDXlZvnlgvAteVr0XBpookJk3Qb8ALbfy63twJusP3fqo2sN5Kutb2fpJW29y6bN663vWfVsXVD0jXAP479diVpL+As2/tVE1kMS2rw0Y3/ouVJT2AL4K6KYumnps01u+14TWe2bwC2qSCeGLLU4GPCJF1EMdDYf1K0wR8MLAPWAth+T3XRda9pc81KuhV4cesN1nL/04Ef285sXA2XBB8TJunvN/W67UXDiqVfJE2leBDomKpj6RdJC4DjgROA68vd+wCnA+e0DkAWzZQEP2CStrD9ULt9UT1JVwOvtP1w1bH0i6RDgQ8Cu1N821oNfML2ZG56ig4lwQ+YpOttz223bzIpk8apwC4UXW0b8aCTpHOB5wOLaXmC1fanKgsqogfpBz8gkv6Coh/1VuXIfSpf2pZipqDJ7N+A/w7cPFnbpzfi/5bLFBpwE1LSN2z/Tbl+eut4QZIusX1IddHFMCTBD86rgWMppnw7g8cS/P3Av1QUU7/cCdzSsOSO7f9RdQx9Nrtl/WCgdUC47YccS1QgCX5AyhuNiyQdUQ7MNXDlwysf5YlNJ8/uc1EfBJZIugJ49F7CIJoyJO0GfB7YwfYekvYEDrP9sQGUdRnjjLVj+5X9LmtINvUB3KgP5xhfEvzgzZS0LUXN/WxgLnCS7UsGUNaXgH8GVgAbBnD9UacBD1D0hX/KAMuB4md2InAWgO2bJH0N6HuCp+htMmpLisnFJ/OUfdPK5sEpPL6pUGQ0yb6YP3++161b1/a4FStWLLU9fwghPU4S/OC9zfZnJL2a4tH+vwW+Cgwiwf/e9jBmIPpL23sMoRyAabaXSWrdN5Cka3vFmF0/krRsEGUNya+B0W9V/69lfXQ7erRu3TqWL1/e9jhJM4YQzhMkwQ/eaGb6a4p+1qs0Jlv10WWSPgF8m8c3nVy/8VO6skTSIQP6FjLWOknPoWxSkPRGisTVd+UDQKOmUEz2sd0gyhoG26+oOobNQZ1vRSXBD94KSUuBZwMnSdoGGBlQWaNji+xT/i+KxNjvNuR/BD4g6WHgEQbbTfKdFLNHPU/SXRTzpQ7qYaQVPNY2vZ5iurvjBlTWUJTjBO1m+8aWfTsDG2w3YXiJShnYMDKoP+feJcEP3nHAR4DVth8s/7jeN6CyLh9n3yCqF9tRJNldbZ9Svqdn9bMASe9v2VwCXEZRq/4jRdv4IPqmzwH+iceGQb6KYsKMyWw98G1Je9oe7dv/RYqeXEnwPTOu8f3qzXKwMUmLJD21Zftpks4ZUHFnAjsAozdY7mcwyQmKG5+jy/qyzFkDKOdMismpjy637wf+vc9lbFMu8yi+MTwNeCrwDxQ3qgdhEcWDTp8F/hdFwv/qgMoainJO1u8Ao/3hdwa2tz3ZP7jqwTDSwVKVzbUGv6ft+0Y3bN9b9jAYhP1sz5W0sqWsgfQ8sX1G67akT1IMmNVvA39Po33SJV0JzB2dVUnSR4Hv9bOsFnvYntOyfZmk1QMqa5i+SNHM9WXg78r/o0/q3Aa/WdbggSmSnja6Ud5cG9SH3SPlQFajNwm3Z3Bt8GNNo3jQqt+G+Z52AFrHhnm43DcI10vaf3RD0n5M/iYabP8UUPlMwVFM8m8ldWJgxG67VGVzrcGfAfxE0jfL7Tfx+Bnn++mzFF+RnynpNOCNFG3yfSfpZh5rc59K8bTiICaNHtp7opiFaJmk75Tbrwe+0s8CWn5uTwZ+LOlX5fYuwE/7WVabOP7C9qC6L36JoiZ/89jhg6M3da7Bb7aDjUmaw2O9S35oe2BfxSU9j2JeTwE/sH3rgMrZpWVzPfAb2wPpMz6s91SWNRc4oNy8st/zv475uT2B7V/2s7xNxPE92389oGtPo+heeoTtSwdRxuZo77lzfcWPftT2uO2mTVthe94QQnqczTbBR0T0au+5c3351Ve3Pe6p06dXkuA31yaaiIi+SDfJGitnvUlZk6CsJr6nlDV5yhlPcZO1vt0kN/sEDwzzlyNlTY5yUtbkKquyBA/FTdZ2S1XSRBMR0S07QxUMy4wZMzxr1qwJnbPzzjszb968CX/ErlgxduDBzkga2sd5E8tq4ntKWZWVs852TxOfmHp3k2xUgp81a1ZHQ3f2w+AGhIyIIelL99cqH2Rqp1EJPiJi2FKDj4hopHqPJpkEHxHRJVfcDbKdJPiIiB6MpBdNRETzjI4mWVdJ8BERPchN1oiIJqp4vPd2kuAjInpQ5xp85WPRSJol6Zaq44iImCgDG+y2S1VSg4+I6EFq8O1NlXS2pFWSLpG0laTnSPq+pBWSripnEIqIqJU6jyZZlwQ/GzjT9u7AfcARFLPAv9v2PsAJwOcqjC8i4gncwYTbmXQb7rB9Q7m+ApgFvBj4ZsugXluMd2I52P8CKEaGjIgYpjo30dQlwT/Usr4B2AG4z/Ze7U60vZCitt/VsL8REb2oc4KvSxPNWH8A7pD0JgAVXlBxTBERj1P0ohlpu1Slrgke4BjgOEk3AquAwyuOJyLiCeo8J2vlTTS2fwHs0bL9yZaX5w89oIiITlXcS6adyhN8RMRklSn7IiIaLGPRREQ0VGrwERENZJsNmfAjIqKZMidrRERDZU7WiIgGqnsvmjo/6BQRUXv9Gk1S0nxJt0laI+mkcV7fWdJlklZKuknSa9tds1E1+BUrVtAyOFljDKuG0MSfXcRA9ekmq6SpwJnAwcBa4DpJi22vbjnsI8A3bH9e0hxgCcXAjBuVGnxERJdGm2j6UIPfF1hj+3bbDwMX8MThWQxsW65vB/xXu4s2qgYfETFsHT7oNEPS8pbtheVIuKN2BO5s2V4L7DfmGh8FLpH0bmA6cFC7QpPgIyJ60GE3yXW25/VY1NHAV2yfIelFwFcl7WFvfLjKJPiIiB706RbZXcBOLdszy32tjqMcgNH2TyRtCcwA7t7YRdMGHxHRJUO/puy7DpgtaVdJTwGOAhaPOeZXwKsAJD0f2BL47aYumhp8RES3+tSLxvZ6Se8ClgJTgXNsr5J0CrDc9mLgA8DZkv6Z4rPlWLe5g5sEHxHRpX4+6GR7CUXXx9Z9J7esrwZeMpFrJsFHRPSgzk+yJsFHRPSgzuPB1/Imq6QfVx1DRER77uhfVWpZg7f94qpjiIhox+5bN8mBqGsN/oHy/2dJulLSDZJukXRA1bFFRLTaMDLSdqlKLWvwLd4MLLV9WjkYz7SqA4qIGDXaD76u6p7grwPOkfRk4CLbN4w9QNICYMHQI4uIoN69aGrZRDPK9pXAyyge2f2KpL8b55iFtuf1YZyHiIiJ6WAkySo/AGpdg5e0C7DW9tmStgDmAudWHFZExGNqXIOvdYIHDgROlPQI8ADwhBp8RESVRjYkwU+I7a3L/xcBiyoOJyJiXEU3yST4iIhGSoKPiGikam+itpMEHxHRA48kwUdENE7a4CMiGswVDkXQThJ8REQPalyBT4KPiOianTb4iIimSht89ETSUMoZ5i/qsN5TxCD1c07WQUiCj4joQRJ8REQT2XhDetFERDRSavAREQ1V4/yeBB8R0a3cZI2IaKoMVRAR0VRmJDdZIyKaKTX4iIgGymiSERFNlgQfEdFMrm8TPFOqDgBA0kWSVkhaJWlBue8BSadJulHSNZJ2qDrOiIixbLddqlKLBA+8zfY+wDzgPZKeAUwHrrH9AuBK4PjxTpS0QNJyScuHF25EBGAzMjLSdqlKXZpo3iPpDeX6TsBs4GHg/5T7VgAHj3ei7YXAQgBJ9W0Mi4jGqfuDTpXX4CUdCBwEvKisra8EtgQe8WM/uQ3U58MoIqLgYtLtdksnJM2XdJukNZJO2sgxfyNpddmc/bV216xD0twOuNf2g5KeB+xfdUARER3rQw1e0lTgTIqWirXAdZIW217dcsxs4MPAS2zfK+mZ7a5beQ0e+D7wJEm3Av8TuKbieCIiOtT+BmuHTTj7Amts3277YeAC4PAxxxwPnGn7XgDbd7e7aOU1eNsPAa8Z56WtW465ELhwaEFFRHRopLMmmBljOoIsLO8fjtoRuLNley2w35hr7AYg6UfAVOCjtr+/qUIrT/AREZOVyzb4DqyzPa/H4p5E0QHlQGAmcKWkv7J938ZOqEMTTUTEpNWnJpq7KHoQjppZ7mu1Flhs+xHbdwA/o0j4G5UEHxHRgz4l+OuA2ZJ2lfQU4Chg8ZhjLqKovSNpBkWTze2bumiaaCIiutafJ1Vtr5f0LmApRfv6ObZXSToFWG57cfnaIZJWU3QdP9H2PZu6bhJ8RES3+jiapO0lwJIx+05uWTfw/nLpSBJ8RESXDHhDfZ9kTYKPiOhBnYcqSIKPiOhWxaNFtpMEH4+SNLSyhvlHMcz3FZufTseaqUISfERED1KDj4hooLoPF5wEHxHRLRtXOKFHO0nwERE9qPOcrEnwERE9SBNNREQT9fFJ1kFIgo+I6FJuskZENJYZ2VDfRvgk+IiIbtW8iWag48FLeo+kWyWdJ2kLSZdKukHSkZK+KGnOIMuPiBg4u/1SkUHX4P8JOMj2Wkn7A9jeq3zt6wMuOyJi4Gpcge9fDV7S+yXdUi7vk/QF4NnAxZI+BPxv4IVlDf45ki6XNK88d76k6yXdKOkH5b7pks6RtEzSSkljZxiPiKjU6E3WPszoNBB9qcFL2gd4K8Us4AKuBd4CzAdeYXudpGuBE2wfWp4zeu72wNnAy2zfIenp5WX/Ffih7bdJeiqwTNKltv84puwFwIJ+vI+IiAnpfNLtSvSriealwHdGk6+kbwMHdHju/sCV5SSy2P5duf8Q4DBJJ5TbWwI7A7e2nmx7IbCwLLe+P+mIaCAzkqEKuiLgCNu3VR1IRMTGbA69aK4CXi9pmqTpwBvKfZ24BniZpF0BWppolgLvVtmWI2nvPsUaEdE/Te9FY/t6SV8BlpW7vmh7ZScTLdj+bdmO/m1JU4C7gYOBU4F/A24q998BHNqPeCMi+sGbSRs8tj8FfGrMvlkt65cDl7dsH9iyfjFw8Zhz/wS8o1/xRUQMQo1baGrdBh8RUXOZkzUioplMetFERDSR2Uza4CMiNkdpoomIaKRqu0G2kwQfEdGtmg8XnAQfEdGDkQ1J8BGP08lDcP0yzBrWMN9XVC9T9kVENFWaaCIimioPOkVENFYSfEREQ+VBp4iIBqr7aJJ9m5M1ImJz1K85Wcu5qW+TtEbSSZs47ghJHp3TelOS4CMiutY+uXeS4CVNBc4EXgPMAY6WNGec47YB3ksx73VbSfAREd0qm2jaLR3YF1hj+3bbDwMXAIePc9ypwOnAnzu5aBJ8REQPOqzBz5C0vGVZMOYyOwJ3tmyvLfc9StJcYCfb3+s0ttxkjYjo0gSeZF1nu22b+caU05Z+Cjh2IucNvQYv6ReSZgy73IiI/jMeGWm7dOAuYKeW7ZnlvlHbAHsAl0v6BbA/sLjdjdah1uDLGwk9nW97Q7/iiYjoicH9mdDpOmC2pF0pEvtRwJsfLcb+PfBoxVjS5cAJtpdv6qId1+AlnSjpPeX6pyX9sFx/paTzJB0t6WZJt0g6veW8BySdIelG4EUt+7eSdLGk48vtt0haJukGSWeNfhhs7PyIiDroRy8a2+uBdwFLgVuBb9heJekUSYd1G9tEmmiuAg4o1+cBW0t6crnvZxR3dl8J7AW8UNLry2OnA9fafoHtq8t9WwP/AZxv+2xJzweOBF5iey9gA3DMJs5/lKQFozcuJvBeIiL6ol/94G0vsb2b7efYPq3cd7LtxeMce2C72jtMLMGvAPaRtC3wEPATikR/AHAfcLnt35afROcBLyvP2wB8a8y1vgt82fa55fargH2A6yTdUG4/exPnP8r2QtvzermBERHRjdGbrP1I8IPQcRu87Uck3UFxF/fHwE3AK4DnAr+gSNDj+fM47eY/AuZL+pqLdy9gke0Pd3h+RET1bEY29KcRfhAm2ovmKuAE4Mpy/R+AlcAy4OWSZpRt50cDV2ziOicD91I8uQXwA+CNkp4JIOnpknaZYGwREcNnt18q0k2CfxbwE9u/oXia6irbvwZOAi4DbgRW2P5um2u9F9hK0sdtrwY+Alwi6SbgP8tyIiJqzR38q8qEukna/gHw5Jbt3VrWzwfOH+ecrcdsz2rZfGvL/q8DX293fkREXTgzOkVENJVxnzrCD0ISfERED1KDj4hoqJHOhiKoRBJ8RESXin7uSfAREc2UJpqIiGaqshtkO0nwERE9yE3WiApJGlpZw/xjH+b7io0xIyP1HUklCT4iokt50CkiosGS4CMiGioJPiKikaodLbKdJPiIiB6YPOgUEdE4doYqiIhoqGqn5GsnCT4iogcZiyYioqFSg4+IaKgk+IiIJqp4Uu12kuAjIrpkYMQZiyYiooHSi2agJC0AFlQdR0RsnpLgB8j2QmAhgKT6/qQjopGS4CMiGqi4x1rffvBTqg6gU5KWSPrLquOIiHiM8chI26Uqk6YGb/u1VccQETFW5mSNiGiotMFHRDSSa90GnwQfEdGlus/JOmluskZE1JHttksnJM2XdJukNZJOGuf190taLekmST+QtEu7aybBR0T0YGRkpO3SjqSpwJnAa4A5wNGS5ow5bCUwz/aewIXAx9tdNwk+IqJrBo+0X9rbF1hj+3bbDwMXAIc/riT7MtsPlpvXADPbXTQJPiKiB+7gHzBD0vKWZezwKjsCd7Zsry33bcxxwMXtYstN1oiILk3gJus62/P6UaaktwDzgJe3OzYJPqKPZs7cbWhl3XLnne0P6oM9dtppKOVMVn3qRXMX0PqDnlnuexxJBwH/Crzc9kPtLpoEHxHRtb71g78OmC1pV4rEfhTw5tYDJO0NnAXMt313JxdNgo+I6EEnvWTasb1e0ruApcBU4BzbqySdAiy3vRj4BLA18E1JAL+yfdimrpsEHxHRpX4+6GR7CbBkzL6TW9YPmug1k+AjIrqWOVkjIhrLZCyaiIhGqvNYNEnwERFdc19usg5KEnxERJfqPmVfEnxERA/q3ETT81g0ki4vh7i8oVwubHltgaSflssySS9tee1QSSsl3VgOgfmOXmOJiBi2fg0XPAhd1eAlPQV4su0/lruOsb18zDGHAu8AXmp7naS5wEWS9gXuARYC+9peK2kLYFZ53tNs39vd24mIGKZ6d5OcUA1e0vMlnQHcBrQbdONDwIm21wHYvh5YBLwT2Ibiw+We8rWHbN9WnnekpFskfUDS9hOJLyJi2DocTbISbRO8pOmS3irpauBsYDWwp+2VLYed19JE84ly3+7AijGXWw7sbvt3wGLgl5LOl3SMpCkAtr9AMej9NOBKSReWM52MG2vZDLRc0vLxXo+IGBQbRkY2tF2q0kkTza+Bm4C32/7pRo55QhNNO7bfLumvgIOAE4CDgWPL1+4ETpX0MYpkfw7Fh8MTxl2wvZCiuQdJ9f2uFBENVG0bezudNNG8kWJ0s29LOrmTeQBLq4F9xuzbB1g1umH7ZtufpkjuR7QeWLbVfw74LPAN4MMdlhsRMTR1vsnaNsHbvsT2kcABwO+B70q6VNKsNqd+HDhd0jMAJO1FUUP/nKStJR3YcuxewC/L4w6RdBPwMeAyYI7t99leRUREzdQ5wXfci8b2PcBngM+UtevWhqXzJP2pXF9n+yDbiyXtCPy4bDq5H3iL7V9L2gb4oKSzgD8Bf6RsnqG48fo627/s6Z1FRAxB4x50sr2sZf3ATRz3eeDz4+y/H3jtRs4Ze2M2IqKeXO9uknmSNSKiSwZGmlaDj4iIQuOaaCIiAureTTIJPiKiB0nwEREN1M85WQchCT4iomvGFQ5F0E4SfERED6ocTKydJPiIiB6kiWZ41lEOeTABM8rzhiFlTY5yui7rrrt+PrSy9thpp6GV1aW6/150Oq7WJiXBD4ntCY8fL2m57XmDiCdlTc5yUtbkKmuY72msYqyZ9IOPiGik1OAjIhpqZCQ1+DpbmLImTVlNfE8pa/KUM74a1+BV568XERF1NnXqVG+55fS2xz344P0rqrhPkBp8RESX8iRrRESDJcFHRDRUEnxERCOZkYxFExHRPGmDj4hoshon+ClVBxARMXm5o3+dkDRf0m2S1kg6aZzXt5D09fL1ayXNanfNJPiIiB7YI22XdiRNBc4EXgPMAY6WNGfMYccB99p+LvBp4PR2102Cj4jowcjISNulA/sCa2zfbvth4ALg8DHHHA4sKtcvBF4lSZu6aNrgIyK6t5RiuOJ2tpS0vGV7oe3WIRZ2BO5s2V4L7DfmGo8eY3u9pN8Dz2ATQyUnwUdEdMn2/Kpj2JQ00UREVO8uoHUGl5nlvnGPkfQkYDvgnk1dNAk+IqJ61wGzJe0q6SnAUcDiMccsBv6+XH8j8EO36YSfJpqIiIqVbervomjTnwqcY3uVpFOA5bYXA18CvippDfA7ig+BTcpwwRERDZUmmoiIhkqCj4hoqCT4iIiGSoKPiGioJPiIiIZKgo+IaKgk+IiIhvr/36H96Xklu2wAAAAASUVORK5CYII=\n",
"text/plain": [
"<Figure size 432x288 with 2 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"evaluateAndShowAttention(\"c est un employe de bureau .\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table width=\"100%\">\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/2.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">これでもアテンションの重みが大きいマスは概ね対角線上に並んでいるなあ…。アテンションは意味の対応する語に注目するわけではないのか、今回が訓練データに過学習なのかなあ…。\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/1.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">ええ…。\n",
"</td>\n",
"</tr>\n",
"<tr>\n",
"<td width=\"80px\" style=\"vertical-align:top;\"><img src=\"https://cookiebox26.github.io/ToyBox/20200407_100ML/2.png\"></td>\n",
"<td style=\"vertical-align:top;text-align:left;\">チュートリアルの最後には Exercises として、別の言語間の翻訳機にしてみようとか、会話応答などを学んでみようとか、学習済みの単語埋め込みをつかってみようとか、層数を変えてみようとか、入出力を同一にしてオートエンコーダを学習した後デコーダだけ新しく学習しようとかあるね。最後のはどう変わるんだろう?\n",
"</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"おわり"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment