@ddugovic
Last active May 13, 2016 12:13
Parsey McParseface versus 2007 Miss Teen South Carolina
echo "I personally believe that U.S. Americans are unable to do so because uh some uh people out there in our nation don't have maps and uh I believe that our ed- education like such as in South Africa and uh the- the Iraq everywhere like such as and I believe that they should uh our education over here in the U.S. should help the U.S. or- or- should help South Africa and should help the Iraq and the Asian countries so we will be able to build up our future for our children." | ./syntaxnet/demo.sh
I syntaxnet/term_frequency_map.cc:101] Loaded 46 terms from syntaxnet/models/parsey_mcparseface/label-map.
I syntaxnet/embedding_feature_extractor.cc:35] Features: stack.child(1).label stack.child(1).sibling(-1).label stack.child(-1).label stack.child(-1).sibling(1).label stack.child(2).label stack.child(-2).label stack(1).child(1).label stack(1).child(1).sibling(-1).label stack(1).child(-1).label stack(1).child(-1).sibling(1).label stack(1).child(2).label stack(1).child(-2).label; input.token.tag input(1).token.tag input(2).token.tag input(3).token.tag stack.token.tag stack.child(1).token.tag stack.child(1).sibling(-1).token.tag stack.child(-1).token.tag stack.child(-1).sibling(1).token.tag stack.child(2).token.tag stack.child(-2).token.tag stack(1).token.tag stack(1).child(1).token.tag stack(1).child(1).sibling(-1).token.tag stack(1).child(-1).token.tag stack(1).child(-1).sibling(1).token.tag stack(1).child(2).token.tag stack(1).child(-2).token.tag stack(2).token.tag stack(3).token.tag; input.token.word input(1).token.word input(2).token.word input(3).token.word stack.token.word stack.child(1).token.word stack.child(1).sibling(-1).token.word stack.child(-1).token.word stack.child(-1).sibling(1).token.word stack.child(2).token.word stack.child(-2).token.word stack(1).token.word stack(1).child(1).token.word stack(1).child(1).sibling(-1).token.word stack(1).child(-1).token.word stack(1).child(-1).sibling(1).token.word stack(1).child(2).token.word stack(1).child(-2).token.word stack(2).token.word stack(3).token.word
I syntaxnet/embedding_feature_extractor.cc:36] Embedding names: labels;tags;words
I syntaxnet/embedding_feature_extractor.cc:37] Embedding dims: 32;32;64
I syntaxnet/term_frequency_map.cc:101] Loaded 49 terms from syntaxnet/models/parsey_mcparseface/tag-map.
I syntaxnet/term_frequency_map.cc:101] Loaded 64036 terms from syntaxnet/models/parsey_mcparseface/word-map.
INFO:tensorflow:Building training network with parameters: feature_sizes: [12 20 20] domain_sizes: [ 49 51 64038]
INFO:tensorflow:Created variable step:0 with shape () and init <function OnesInitializer at 0x7f3e6ef9b578>
INFO:tensorflow:Created variable embedding_matrix_0:0 with shape (49, 32) and init <function _initializer at 0x7f3e6ef9b938>
I syntaxnet/term_frequency_map.cc:101] Loaded 46 terms from syntaxnet/models/parsey_mcparseface/label-map.
I syntaxnet/embedding_feature_extractor.cc:35] Features: input.digit input.hyphen; input.prefix(length="2") input(1).prefix(length="2") input(2).prefix(length="2") input(3).prefix(length="2") input(-1).prefix(length="2") input(-2).prefix(length="2") input(-3).prefix(length="2") input(-4).prefix(length="2"); input.prefix(length="3") input(1).prefix(length="3") input(2).prefix(length="3") input(3).prefix(length="3") input(-1).prefix(length="3") input(-2).prefix(length="3") input(-3).prefix(length="3") input(-4).prefix(length="3"); input.suffix(length="2") input(1).suffix(length="2") input(2).suffix(length="2") input(3).suffix(length="2") input(-1).suffix(length="2") input(-2).suffix(length="2") input(-3).suffix(length="2") input(-4).suffix(length="2"); input.suffix(length="3") input(1).suffix(length="3") input(2).suffix(length="3") input(3).suffix(length="3") input(-1).suffix(length="3") input(-2).suffix(length="3") input(-3).suffix(length="3") input(-4).suffix(length="3"); input.token.word input(1).token.word input(2).token.word input(3).token.word input(-1).token.word input(-2).token.word input(-3).token.word input(-4).token.word
I syntaxnet/embedding_feature_extractor.cc:36] Embedding names: other;prefix2;prefix3;suffix2;suffix3;words
I syntaxnet/embedding_feature_extractor.cc:37] Embedding dims: 8;16;16;16;16;64
INFO:tensorflow:Created variable embedding_matrix_1:0 with shape (51, 32) and init <function _initializer at 0x7f3e6ef9b938>
INFO:tensorflow:Created variable embedding_matrix_2:0 with shape (64038, 64) and init <function _initializer at 0x7f3e6d68bc08>
I syntaxnet/term_frequency_map.cc:101] Loaded 64036 terms from syntaxnet/models/parsey_mcparseface/word-map.
I syntaxnet/term_frequency_map.cc:101] Loaded 49 terms from syntaxnet/models/parsey_mcparseface/tag-map.
INFO:tensorflow:Building training network with parameters: feature_sizes: [2 8 8 8 8 8] domain_sizes: [ 5 10665 10665 8970 8970 64038]
INFO:tensorflow:Created variable step:0 with shape () and init <function OnesInitializer at 0x7ffbb44c1578>
INFO:tensorflow:Created variable embedding_matrix_0:0 with shape (5, 8) and init <function _initializer at 0x7ffbb44c18c0>
INFO:tensorflow:Created variable weights_0:0 with shape (2304, 512) and init <function _initializer at 0x7f3e6d69be60>
INFO:tensorflow:Created variable bias_0:0 with shape (512,) and init <function _initializer at 0x7f3e6ef9b668>
INFO:tensorflow:Created variable embedding_matrix_1:0 with shape (10665, 16) and init <function _initializer at 0x7ffbb44c18c0>
INFO:tensorflow:Created variable weights_1:0 with shape (512, 512) and init <function _initializer at 0x7f3e6d5775f0>
INFO:tensorflow:Created variable bias_1:0 with shape (512,) and init <function _initializer at 0x7f3e6ef9b668>
INFO:tensorflow:Created variable embedding_matrix_2:0 with shape (10665, 16) and init <function _initializer at 0x7ffbb43b5c08>
INFO:tensorflow:Created variable softmax_weight:0 with shape (512, 93) and init <function _initializer at 0x7f3e6d5775f0>
INFO:tensorflow:Created variable embedding_matrix_3:0 with shape (8970, 16) and init <function _initializer at 0x7ffbb43c4e60>
INFO:tensorflow:Created variable softmax_bias:0 with shape (93,) and init <function zeros_initializer at 0x7f3e7081da28>
INFO:tensorflow:Created variable embedding_matrix_4:0 with shape (8970, 16) and init <function _initializer at 0x7ffbb42efde8>
INFO:tensorflow:Created variable embedding_matrix_5:0 with shape (64038, 64) and init <function _initializer at 0x7ffbb428b938>
INFO:tensorflow:Created variable weights_0:0 with shape (1040, 64) and init <function _initializer at 0x7ffbb421cb90>
INFO:tensorflow:Created variable bias_0:0 with shape (64,) and init <function _initializer at 0x7ffbb44c1668>
INFO:tensorflow:Created variable softmax_weight:0 with shape (64, 49) and init <function _initializer at 0x7ffbb4164e60>
INFO:tensorflow:Created variable softmax_bias:0 with shape (49,) and init <function zeros_initializer at 0x7ffbb5d43a28>
I syntaxnet/term_frequency_map.cc:101] Loaded 46 terms from syntaxnet/models/parsey_mcparseface/label-map.
I syntaxnet/embedding_feature_extractor.cc:35] Features: stack.child(1).label stack.child(1).sibling(-1).label stack.child(-1).label stack.child(-1).sibling(1).label stack.child(2).label stack.child(-2).label stack(1).child(1).label stack(1).child(1).sibling(-1).label stack(1).child(-1).label stack(1).child(-1).sibling(1).label stack(1).child(2).label stack(1).child(-2).label; input.token.tag input(1).token.tag input(2).token.tag input(3).token.tag stack.token.tag stack.child(1).token.tag stack.child(1).sibling(-1).token.tag stack.child(-1).token.tag stack.child(-1).sibling(1).token.tag stack.child(2).token.tag stack.child(-2).token.tag stack(1).token.tag stack(1).child(1).token.tag stack(1).child(1).sibling(-1).token.tag stack(1).child(-1).token.tag stack(1).child(-1).sibling(1).token.tag stack(1).child(2).token.tag stack(1).child(-2).token.tag stack(2).token.tag stack(3).token.tag; input.token.word input(1).token.word input(2).token.word input(3).token.word stack.token.word stack.child(1).token.word stack.child(1).sibling(-1).token.word stack.child(-1).token.word stack.child(-1).sibling(1).token.word stack.child(2).token.word stack.child(-2).token.word stack(1).token.word stack(1).child(1).token.word stack(1).child(1).sibling(-1).token.word stack(1).child(-1).token.word stack(1).child(-1).sibling(1).token.word stack(1).child(2).token.word stack(1).child(-2).token.word stack(2).token.word stack(3).token.word
I syntaxnet/embedding_feature_extractor.cc:36] Embedding names: labels;tags;words
I syntaxnet/embedding_feature_extractor.cc:37] Embedding dims: 32;32;64
I syntaxnet/term_frequency_map.cc:101] Loaded 49 terms from syntaxnet/models/parsey_mcparseface/tag-map.
I syntaxnet/term_frequency_map.cc:101] Loaded 64036 terms from syntaxnet/models/parsey_mcparseface/word-map.
I syntaxnet/term_frequency_map.cc:101] Loaded 49 terms from syntaxnet/models/parsey_mcparseface/tag-map.
I syntaxnet/term_frequency_map.cc:101] Loaded 46 terms from syntaxnet/models/parsey_mcparseface/label-map.
I syntaxnet/embedding_feature_extractor.cc:35] Features: input.digit input.hyphen; input.prefix(length="2") input(1).prefix(length="2") input(2).prefix(length="2") input(3).prefix(length="2") input(-1).prefix(length="2") input(-2).prefix(length="2") input(-3).prefix(length="2") input(-4).prefix(length="2"); input.prefix(length="3") input(1).prefix(length="3") input(2).prefix(length="3") input(3).prefix(length="3") input(-1).prefix(length="3") input(-2).prefix(length="3") input(-3).prefix(length="3") input(-4).prefix(length="3"); input.suffix(length="2") input(1).suffix(length="2") input(2).suffix(length="2") input(3).suffix(length="2") input(-1).suffix(length="2") input(-2).suffix(length="2") input(-3).suffix(length="2") input(-4).suffix(length="2"); input.suffix(length="3") input(1).suffix(length="3") input(2).suffix(length="3") input(3).suffix(length="3") input(-1).suffix(length="3") input(-2).suffix(length="3") input(-3).suffix(length="3") input(-4).suffix(length="3"); input.token.word input(1).token.word input(2).token.word input(3).token.word input(-1).token.word input(-2).token.word input(-3).token.word input(-4).token.word
I syntaxnet/embedding_feature_extractor.cc:36] Embedding names: other;prefix2;prefix3;suffix2;suffix3;words
I syntaxnet/embedding_feature_extractor.cc:37] Embedding dims: 8;16;16;16;16;64
I syntaxnet/term_frequency_map.cc:101] Loaded 64036 terms from syntaxnet/models/parsey_mcparseface/word-map.
INFO:tensorflow:Processed 1 documents
INFO:tensorflow:Total processed documents: 1
INFO:tensorflow:num correct tokens: 0
INFO:tensorflow:total tokens: 95
INFO:tensorflow:Seconds elapsed in evaluation: 0.15, eval metric: 0.00%
INFO:tensorflow:Processed 1 documents
INFO:tensorflow:Total processed documents: 1
INFO:tensorflow:num correct tokens: 1
INFO:tensorflow:total tokens: 91
INFO:tensorflow:Seconds elapsed in evaluation: 0.92, eval metric: 1.10%
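The eval metric in the two log passes above is simply num correct tokens divided by total tokens, expressed as a percentage (the sentence carries no gold annotation, which is why the scores are near zero). A minimal sketch reproducing the two figures from the log:

```python
def eval_metric(num_correct: int, total: int) -> float:
    """Token accuracy as a percentage, matching the eval log lines above."""
    return 100.0 * num_correct / total

# The two evaluation passes from the log:
print(f"{eval_metric(0, 95):.2f}%")  # first pass:  0.00%
print(f"{eval_metric(1, 91):.2f}%")  # second pass: 1.10%
```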
INFO:tensorflow:Read 1 documents
Input: I personally believe that U.S. Americans are unable to do so because uh some uh people out there in our nation do n't have maps and uh I believe that our ed- education like such as in South Africa and uh the- the Iraq everywhere like such as and I believe that they should uh our education over here in the U.S. should help the U.S. or- or- should help South Africa and should help the Iraq and the Asian countries so we will be able to build up our future for our children .
Parse:
believe VBP ROOT
+-- I PRP nsubj
+-- personally RB advmod
+-- unable JJ ccomp
| +-- that IN mark
| +-- Americans NNPS nsubj
| | +-- U.S. NNP nn
| +-- are VBP cop
| +-- do VB xcomp
| | +-- to TO aux
| | +-- so RB advmod
| +-- have VB advcl
| +-- because IN mark
| +-- uh UH discourse
| +-- people NNS nsubj
| | +-- some DT det
| | +-- uh UH dep
| | +-- there RB advmod
| | | +-- out RB advmod
| | +-- in IN prep
| | +-- nation NN pobj
| | +-- our PRP$ poss
| +-- do VBP aux
| +-- n't RB neg
| +-- maps NNS dobj
+-- and CC cc
+-- believe VBP conj
| +-- I PRP nsubj
| +-- help VB ccomp
| +-- should MD advcl
| | +-- that IN mark
| | +-- they PRP nsubj
| | +-- uh UH neg
| +-- education NN nsubj
| | +-- our PRP$ poss
| | +-- here RB advmod
| | +-- over IN advmod
| | +-- in IN prep
| | +-- U.S. NNP pobj
| | +-- the DT det
| +-- should MD aux
| +-- help VB ccomp
| +-- U.S. NNP nsubj
| | +-- the DT det
| | +-- or- , punct
| | +-- or- , conj
| +-- should MD aux
| +-- Africa NNP dobj
| | +-- South NNP nn
| +-- and CC cc
| +-- help VB conj
| +-- should MD aux
| +-- Iraq NNP dobj
| | +-- the DT det
| | +-- and CC cc
| | +-- countries NNS conj
| | +-- the DT det
| | +-- Asian JJ amod
| +-- able JJ advcl
| +-- so IN mark
| +-- we PRP nsubj
| +-- will MD aux
| +-- be VB cop
| +-- build VB xcomp
| +-- to TO aux
| +-- up RP prt
| +-- future NN dobj
| +-- our PRP$ poss
| +-- for IN prep
| +-- children NNS pobj
| +-- our PRP$ poss
+-- . . punct
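In the tree above, each line is `token POS-tag dependency-label`, and depth is encoded by the `|`/`+--` prefix: the root has no connector, and every leading `|` marks one more enclosing level. A minimal sketch, assuming that fixed prefix convention, which recovers each token's depth:

```python
def depth(line: str) -> int:
    """Depth of a node in the ASCII parse tree: 0 for the root,
    one level per leading '|' plus one for the '+--' connector."""
    if "+--" not in line:
        return 0  # root line, e.g. "believe VBP ROOT"
    prefix = line[: line.find("+--")]
    return prefix.count("|") + 1

# A few lines copied from the parse above:
tree = [
    "believe VBP ROOT",
    "+-- I PRP nsubj",
    "+-- unable JJ ccomp",
    "| +-- Americans NNPS nsubj",
    "| | +-- U.S. NNP nn",
]
print([depth(line) for line in tree])  # [0, 1, 1, 2, 3]
```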