@agostini01
Last active April 20, 2020 21:54

Variable shapes, dtypes, and names of the Tacotron model were extracted with:

# Print shape, dtype, and name for every variable in the model
for var in synthesizer._model.model.all_vars:
    print('{:15} {} {}'.format(
        str(var.get_shape().as_list()),
        var.dtype,
        var.name))
[66, 512]       <dtype: 'float32_ref'> Tacotron_model/inference/inputs_embedding:0
[5, 512, 512]   <dtype: 'float32_ref'> Tacotron_model/inference/encoder_convolutions/conv_layer_1_encoder_convolutions/conv1d/kernel:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/encoder_convolutions/conv_layer_1_encoder_convolutions/conv1d/bias:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/encoder_convolutions/conv_layer_1_encoder_convolutions/batch_normalization/gamma:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/encoder_convolutions/conv_layer_1_encoder_convolutions/batch_normalization/beta:0
[5, 512, 512]   <dtype: 'float32_ref'> Tacotron_model/inference/encoder_convolutions/conv_layer_2_encoder_convolutions/conv1d/kernel:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/encoder_convolutions/conv_layer_2_encoder_convolutions/conv1d/bias:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/encoder_convolutions/conv_layer_2_encoder_convolutions/batch_normalization/gamma:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/encoder_convolutions/conv_layer_2_encoder_convolutions/batch_normalization/beta:0
[5, 512, 512]   <dtype: 'float32_ref'> Tacotron_model/inference/encoder_convolutions/conv_layer_3_encoder_convolutions/conv1d/kernel:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/encoder_convolutions/conv_layer_3_encoder_convolutions/conv1d/bias:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/encoder_convolutions/conv_layer_3_encoder_convolutions/batch_normalization/gamma:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/encoder_convolutions/conv_layer_3_encoder_convolutions/batch_normalization/beta:0
[768, 1024]     <dtype: 'float32_ref'> Tacotron_model/inference/encoder_LSTM/bidirectional_rnn/fw/encoder_fw_LSTM/kernel:0
[1024]          <dtype: 'float32_ref'> Tacotron_model/inference/encoder_LSTM/bidirectional_rnn/fw/encoder_fw_LSTM/bias:0
[768, 1024]     <dtype: 'float32_ref'> Tacotron_model/inference/encoder_LSTM/bidirectional_rnn/bw/encoder_bw_LSTM/kernel:0
[1024]          <dtype: 'float32_ref'> Tacotron_model/inference/encoder_LSTM/bidirectional_rnn/bw/encoder_bw_LSTM/bias:0
[768, 128]      <dtype: 'float32_ref'> Tacotron_model/inference/memory_layer/kernel:0
[80, 256]       <dtype: 'float32_ref'> Tacotron_model/inference/decoder/decoder_prenet/dense_1/kernel:0
[256]           <dtype: 'float32_ref'> Tacotron_model/inference/decoder/decoder_prenet/dense_1/bias:0
[256, 256]      <dtype: 'float32_ref'> Tacotron_model/inference/decoder/decoder_prenet/dense_2/kernel:0
[256]           <dtype: 'float32_ref'> Tacotron_model/inference/decoder/decoder_prenet/dense_2/bias:0
[2048, 4096]    <dtype: 'float32_ref'> Tacotron_model/inference/decoder/decoder_LSTM/multi_rnn_cell/cell_0/decoder_LSTM_1/kernel:0
[4096]          <dtype: 'float32_ref'> Tacotron_model/inference/decoder/decoder_LSTM/multi_rnn_cell/cell_0/decoder_LSTM_1/bias:0
[2048, 4096]    <dtype: 'float32_ref'> Tacotron_model/inference/decoder/decoder_LSTM/multi_rnn_cell/cell_1/decoder_LSTM_2/kernel:0
[4096]          <dtype: 'float32_ref'> Tacotron_model/inference/decoder/decoder_LSTM/multi_rnn_cell/cell_1/decoder_LSTM_2/bias:0
[1024, 128]     <dtype: 'float32_ref'> Tacotron_model/inference/decoder/Location_Sensitive_Attention/query_layer/kernel:0
[31, 1, 32]     <dtype: 'float32_ref'> Tacotron_model/inference/decoder/Location_Sensitive_Attention/location_features_convolution/kernel:0
[32]            <dtype: 'float32_ref'> Tacotron_model/inference/decoder/Location_Sensitive_Attention/location_features_convolution/bias:0
[32, 128]       <dtype: 'float32_ref'> Tacotron_model/inference/decoder/Location_Sensitive_Attention/location_features_layer/kernel:0
[128]           <dtype: 'float32_ref'> Tacotron_model/inference/decoder/Location_Sensitive_Attention/attention_variable_projection:0
[128]           <dtype: 'float32_ref'> Tacotron_model/inference/decoder/Location_Sensitive_Attention/attention_bias:0
[1792, 160]     <dtype: 'float32_ref'> Tacotron_model/inference/decoder/linear_transform_projection/projection_linear_transform_projection/kernel:0
[160]           <dtype: 'float32_ref'> Tacotron_model/inference/decoder/linear_transform_projection/projection_linear_transform_projection/bias:0
[1792, 2]       <dtype: 'float32_ref'> Tacotron_model/inference/decoder/stop_token_projection/projection_stop_token_projection/kernel:0
[2]             <dtype: 'float32_ref'> Tacotron_model/inference/decoder/stop_token_projection/projection_stop_token_projection/bias:0
[5, 80, 512]    <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_1_postnet_convolutions/conv1d/kernel:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_1_postnet_convolutions/conv1d/bias:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_1_postnet_convolutions/batch_normalization/gamma:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_1_postnet_convolutions/batch_normalization/beta:0
[5, 512, 512]   <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_2_postnet_convolutions/conv1d/kernel:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_2_postnet_convolutions/conv1d/bias:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_2_postnet_convolutions/batch_normalization/gamma:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_2_postnet_convolutions/batch_normalization/beta:0
[5, 512, 512]   <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_3_postnet_convolutions/conv1d/kernel:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_3_postnet_convolutions/conv1d/bias:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_3_postnet_convolutions/batch_normalization/gamma:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_3_postnet_convolutions/batch_normalization/beta:0
[5, 512, 512]   <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_4_postnet_convolutions/conv1d/kernel:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_4_postnet_convolutions/conv1d/bias:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_4_postnet_convolutions/batch_normalization/gamma:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_4_postnet_convolutions/batch_normalization/beta:0
[5, 512, 512]   <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_5_postnet_convolutions/conv1d/kernel:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_5_postnet_convolutions/conv1d/bias:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_5_postnet_convolutions/batch_normalization/gamma:0
[512]           <dtype: 'float32_ref'> Tacotron_model/inference/postnet_convolutions/conv_layer_5_postnet_convolutions/batch_normalization/beta:0
[512, 80]       <dtype: 'float32_ref'> Tacotron_model/inference/postnet_projection/projection_postnet_projection/kernel:0
[80]            <dtype: 'float32_ref'> Tacotron_model/inference/postnet_projection/projection_postnet_projection/bias:0
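From the shapes above we can estimate the parameter count and the memory that int8 quantization would save over float32. A minimal sketch; the shape list below is a representative subset copied from the listing (extend it with the full dump for an exact total):

```python
from functools import reduce
from operator import mul

# Subset of the variable shapes listed above (illustrative, not the full model)
shapes = [
    [66, 512],      # inputs_embedding
    [5, 512, 512],  # encoder conv1d kernel
    [768, 1024],    # encoder fw LSTM kernel
    [2048, 4096],   # decoder LSTM cell_0 kernel
    [512, 80],      # postnet projection kernel
]

def num_params(shape):
    """Element count of a tensor with the given shape."""
    return reduce(mul, shape, 1)

total = sum(num_params(s) for s in shapes)
fp32_bytes = total * 4  # float32: 4 bytes per element
int8_bytes = total * 1  # qint8:   1 byte per element

print(total, fp32_bytes, int8_bytes)
```

Quantizing to qint8 cuts weight storage to a quarter, ignoring the two per-tensor float32 range scalars that QuantizeV2 also emits.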

Trying per-variable quantization to qint8:

with tf.Session() as sess:
    for var in synthesizer._model.model.all_vars:
        # Builds a QuantizeV2 op per variable over the range [-1, 1].
        # Note: rebinding the loop variable only creates new graph ops;
        # it does not replace the variables inside the model's graph.
        quantized = tf.quantization.quantize(
            var, -1, 1, tf.qint8, mode='MIN_COMBINED',
            round_mode='HALF_AWAY_FROM_ZERO')
        print(quantized)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_116:0' shape=(66, 512) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_116:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_116:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_117:0' shape=(5, 512, 512) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_117:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_117:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_118:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_118:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_118:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_119:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_119:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_119:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_120:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_120:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_120:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_121:0' shape=(5, 512, 512) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_121:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_121:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_122:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_122:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_122:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_123:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_123:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_123:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_124:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_124:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_124:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_125:0' shape=(5, 512, 512) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_125:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_125:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_126:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_126:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_126:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_127:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_127:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_127:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_128:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_128:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_128:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_129:0' shape=(768, 1024) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_129:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_129:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_130:0' shape=(1024,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_130:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_130:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_131:0' shape=(768, 1024) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_131:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_131:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_132:0' shape=(1024,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_132:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_132:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_133:0' shape=(768, 128) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_133:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_133:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_134:0' shape=(80, 256) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_134:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_134:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_135:0' shape=(256,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_135:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_135:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_136:0' shape=(256, 256) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_136:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_136:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_137:0' shape=(256,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_137:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_137:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_138:0' shape=(2048, 4096) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_138:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_138:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_139:0' shape=(4096,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_139:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_139:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_140:0' shape=(2048, 4096) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_140:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_140:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_141:0' shape=(4096,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_141:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_141:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_142:0' shape=(1024, 128) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_142:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_142:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_143:0' shape=(31, 1, 32) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_143:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_143:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_144:0' shape=(32,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_144:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_144:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_145:0' shape=(32, 128) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_145:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_145:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_146:0' shape=(128,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_146:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_146:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_147:0' shape=(128,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_147:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_147:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_148:0' shape=(1792, 160) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_148:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_148:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_149:0' shape=(160,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_149:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_149:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_150:0' shape=(1792, 2) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_150:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_150:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_151:0' shape=(2,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_151:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_151:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_152:0' shape=(5, 80, 512) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_152:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_152:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_153:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_153:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_153:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_154:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_154:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_154:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_155:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_155:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_155:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_156:0' shape=(5, 512, 512) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_156:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_156:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_157:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_157:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_157:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_158:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_158:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_158:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_159:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_159:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_159:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_160:0' shape=(5, 512, 512) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_160:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_160:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_161:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_161:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_161:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_162:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_162:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_162:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_163:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_163:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_163:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_164:0' shape=(5, 512, 512) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_164:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_164:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_165:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_165:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_165:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_166:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_166:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_166:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_167:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_167:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_167:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_168:0' shape=(5, 512, 512) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_168:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_168:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_169:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_169:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_169:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_170:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_170:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_170:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_171:0' shape=(512,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_171:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_171:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_172:0' shape=(512, 80) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_172:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_172:2' shape=() dtype=float32>)
QuantizeV2(output=<tf.Tensor 'QuantizeV2_173:0' shape=(80,) dtype=qint8>, output_min=<tf.Tensor 'QuantizeV2_173:1' shape=() dtype=float32>, output_max=<tf.Tensor 'QuantizeV2_173:2' shape=() dtype=float32>)
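Each QuantizeV2 tuple above is just an unevaluated graph node (the int8 output plus the two float32 range scalars); nothing is computed until `sess.run`. The linear mapping MIN_COMBINED performs can be sketched in plain Python to see the round-trip error it introduces. This is a hand-written approximation, not TF's exact kernel (in particular, Python's `round` uses banker's rounding, not HALF_AWAY_FROM_ZERO):

```python
def quantize(x, min_range=-1.0, max_range=1.0):
    """Linearly map [min_range, max_range] onto int8's [-128, 127]."""
    scale = 255.0 / (max_range - min_range)
    q = round((x - min_range) * scale) - 128
    return max(-128, min(127, q))  # clamp to the int8 range

def dequantize(q, min_range=-1.0, max_range=1.0):
    """Map an int8 code back to a float in [min_range, max_range]."""
    scale = (max_range - min_range) / 255.0
    return (q + 128) * scale + min_range

weights = [-1.0, -0.5, 0.0, 0.25, 1.0]
roundtrip = [dequantize(quantize(w)) for w in weights]
errors = [abs(w - r) for w, r in zip(weights, roundtrip)]
print(roundtrip)
print(max(errors))
```

With the [-1, 1] range used above, the step size is 2/255, so the round-trip error is bounded by half a step (about 0.004) for in-range values.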