@wj-Mcat
Created October 18, 2022 07:32
Example code for Paddle dynamic-to-static conversion
# 1. print the paddle version
import paddle
from paddle import __version__
from paddlenlp.transformers import AutoModel

print("Paddle Version: " + __version__)

# 2. load the dynamic (eager-mode) model and run a forward pass
input_ids = paddle.randint(10, 20, shape=[1, 20], dtype="int64")
model = AutoModel.from_pretrained("albert-base-v1")
dynamic_output = model(input_ids)[0]

# 3. convert to static, then save & load the static model
inputs = [paddle.static.InputSpec(shape=[None, None], dtype="int64")]
model = paddle.jit.to_static(model, input_spec=inputs)
path = "sss/static_model"
paddle.jit.save(model, path)
model = paddle.jit.load(path)
static_output = model(input_ids)[0]

# 4. compare the dynamic output with the static output
print(dynamic_output[:, 1:4, 1:4])
print("=======")
print(static_output[:, 1:4, 1:4])
assert paddle.allclose(dynamic_output[:, 1:4, 1:4], static_output[:, 1:4, 1:4], atol=1e-4)
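The comparison in step 4 relies on an `allclose`-style tolerance check rather than exact equality, since small numeric drift between the dynamic and static graphs is expected. A minimal, framework-free sketch of the usual rule (the `allclose` helper here is hypothetical, mirroring the standard `|x - y| <= atol + rtol * |y|` elementwise test):

```python
def allclose(xs, ys, rtol=1e-5, atol=1e-4):
    # Elementwise tolerance test: |x - y| <= atol + rtol * |y|
    return all(abs(x - y) <= atol + rtol * abs(y) for x, y in zip(xs, ys))

dynamic = [0.123456, -1.000000, 2.500000]
static  = [0.123460, -1.000050, 2.500100]  # tiny numeric drift is fine

print(allclose(dynamic, static))            # True: differences within tolerance
print(allclose(dynamic, [0.2, -1.0, 2.5]))  # False: 0.2 is off by ~0.08
```

With a genuinely mismatched output (as in the failing experiments below), even a loose `atol` like 1e-4 will not mask the difference.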
wj-Mcat commented Oct 18, 2022

I ran the following experiments:

  • paddlepaddle==2.4.0-rc0
Paddle Version: 2.4.0-rc0
W1018 07:33:21.694754 68740 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.2, Runtime API Version: 10.2
W1018 07:33:21.700225 68740 gpu_resources.cc:91] device: 0, cuDNN Version: 7.6.
[2022-10-18 07:33:24,043] [    INFO] - We are using <class 'paddlenlp.transformers.albert.modeling.AlbertModel'> to load 'albert-base-v1'.
[2022-10-18 07:33:24,044] [    INFO] - Already cached /root/.paddlenlp/models/albert-base-v1/albert-base-v1.pdparams
[2022-10-18 07:33:24,687] [    INFO] - Weights from pretrained model not used in AlbertModel: ['predictions.bias', 'predictions.layer_norm.weight', 'predictions.layer_norm.bias', 'predictions.dense.weight', 'predictions.dense.bias', 'predictions.decoder.weight', 'predictions.decoder.bias']
Traceback (most recent call last):
  File "b.py", line 662, in <module>
    test_albert_to_static()
  File "b.py", line 660, in test_albert_to_static
    assert paddle.allclose(dynamic_output[:, 1:4, 1:4], static_output[:, 1:4, 1:4], atol=1e-4)
AssertionError
  • paddlepaddle==2.3.0
Paddle Version: 2.3.0
W1018 07:34:53.149269 69043 gpu_context.cc:278] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.2, Runtime API Version: 10.2
W1018 07:34:53.154621 69043 gpu_context.cc:306] device: 0, cuDNN Version: 7.6.
[2022-10-18 07:34:59,241] [    INFO] - We are using <class 'paddlenlp.transformers.albert.modeling.AlbertModel'> to load 'albert-base-v1'.
[2022-10-18 07:34:59,243] [    INFO] - Already cached /root/.paddlenlp/models/albert-base-v1/albert-base-v1.pdparams
[2022-10-18 07:34:59,887] [    INFO] - Weights from pretrained model not used in AlbertModel: ['predictions.bias', 'predictions.layer_norm.weight', 'predictions.layer_norm.bias', 'predictions.dense.weight', 'predictions.dense.bias', 'predictions.decoder.weight', 'predictions.decoder.bias']
Traceback (most recent call last):
  File "b.py", line 662, in <module>
    test_albert_to_static()
  File "b.py", line 654, in test_albert_to_static
    paddle.jit.save(model, path)
  File "/root/miniconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/dygraph/jit.py", line 629, in wrapper
    func(layer, path, input_spec, **configs)
  File "/root/miniconda3/envs/paddle/lib/python3.7/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/root/miniconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
    return wrapped_func(*args, **kwargs)
  File "/root/miniconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/dygraph/base.py", line 51, in __impl__
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/dygraph/jit.py", line 857, in save
    inner_input_spec, with_hook=with_hook)
  File "/root/miniconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 528, in concrete_program_specify_input_spec
    *desired_input_spec, with_hook=with_hook)
  File "/root/miniconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 436, in get_concrete_program
    concrete_program, partial_program_layer = self._program_cache[cache_key]
  File "/root/miniconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 801, in __getitem__
    self._caches[item_id] = self._build_once(item)
  File "/root/miniconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 790, in _build_once
    **cache_key.kwargs)
  File "/root/miniconda3/envs/paddle/lib/python3.7/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/root/miniconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
    return wrapped_func(*args, **kwargs)
  File "/root/miniconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/dygraph/base.py", line 51, in __impl__
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 740, in from_func_spec
    error_data.raise_new_exception()
  File "/root/miniconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/error.py", line 336, in raise_new_exception
    six.exec_("raise new_exception from None")
  File "<string>", line 1, in <module>
ValueError: In transformed code:


    File "/tmp/tmpe8_l0hzf.py", line 164, in forward
        (attention_mask,))
    File "/root/miniconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 211, in convert_ifelse
        out = _run_py_ifelse(pred, true_fn, false_fn, true_args, false_args)
    File "/root/miniconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 257, in _run_py_ifelse
        return true_fn(*true_args) if pred else false_fn(*false_args)
    File "/tmp/tmpe8_l0hzf.py", line 157, in true_fn_3
        globals())))
    File "/root/miniconda3/envs/paddle/lib/python3.7/site-packages/paddle/tensor/creation.py", line 291, in ones
        return fill_constant(value=1.0, shape=shape, dtype=dtype, name=name)
    File "/root/miniconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/layers/tensor.py", line 797, in fill_constant
        check_shape(shape)
    File "/root/miniconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/layers/utils.py", line 387, in check_shape
        "All elements in ``shape`` must be positive when it's a list or tuple"

    ValueError: All elements in ``shape`` must be positive when it's a list or tuple


gongel commented Oct 18, 2022

You could try calling model.eval() before the conversion.

wj-Mcat commented Oct 18, 2022

Oh, good catch: after calling model.eval(), the static model outputs the same logits as the dynamic one. Thanks @gongel!
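The reason model.eval() matters: in train mode, layers such as dropout draw fresh random masks on every forward pass, so two runs over the same input (dynamic vs. traced static) generally disagree; eval mode makes the forward pass deterministic. A framework-free sketch of this effect (the TinyDropout class is hypothetical, for illustration only):

```python
import random

class TinyDropout:
    """Toy dropout layer: zeroes elements with probability p in train
    mode, passes inputs through unchanged in eval mode."""
    def __init__(self, p=0.5):
        self.p = p
        self.training = True

    def eval(self):
        self.training = False

    def __call__(self, xs):
        if self.training:
            # Each call draws a fresh random mask, so repeated forward
            # passes over the same input generally differ.
            return [0.0 if random.random() < self.p else x / (1 - self.p) for x in xs]
        return list(xs)  # deterministic in eval mode

layer = TinyDropout(p=0.5)
x = [1.0, 2.0, 3.0, 4.0]

random.seed(0)
train_a = layer(x)
train_b = layer(x)   # a second run of the same "model" on the same input

layer.eval()
eval_a = layer(x)
eval_b = layer(x)

print(train_a == train_b)  # False with this seed: masks differ per call
print(eval_a == eval_b)    # True: eval mode is deterministic
```

This is why the dynamic/static comparison above only passes once the model is switched to eval mode before paddle.jit.to_static.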
