Created March 22, 2023 13:28
Gist: SomeoneSerge/7c4c3f7ac5c18f85bb93726a13d2e3de
nix log /nix/store/5khy3vr59v193zla77izslymvn1w2l3c-python3.10-torchinfo-1.64.drv
Sourcing python-remove-tests-dir-hook
Sourcing python-catch-conflicts-hook.sh
Sourcing python-remove-bin-bytecode-hook.sh
Sourcing setuptools-build-hook
Using setuptoolsBuildPhase
Using setuptoolsShellHook
Sourcing pip-install-hook
Using pipInstallPhase
Sourcing python-imports-check-hook.sh
Using pythonImportsCheckPhase
Sourcing python-namespaces-hook
Sourcing python-catch-conflicts-hook.sh
Sourcing setuptools-check-hook
Using setuptoolsCheckPhase
Sourcing pytest-check-hook
Using pytestCheckPhase
Removing setuptoolsCheckPhase
@nix { "action": "setPhase", "phase": "unpackPhase" }
unpacking sources
unpacking source archive /nix/store/q7xh02zqmqgibvcpbv74qkdbf96zb8jh-source
source root is source
setting SOURCE_DATE_EPOCH to timestamp 315619200 of file source/torchinfo/torchinfo.py
@nix { "action": "setPhase", "phase": "patchPhase" }
patching sources
@nix { "action": "setPhase", "phase": "configurePhase" }
configuring
no configure script, doing nothing
@nix { "action": "setPhase", "phase": "buildPhase" }
building
Executing setuptoolsBuildPhase
/nix/store/8zx4h7r5mn1b913hgi8rwjfynwg1wgdi-python3.10-setuptools-67.4.0/lib/python3.10/site-packages/setuptools/config/setupcfg.py:520: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
  warnings.warn(msg, warning_class)
running bdist_wheel
running build
running build_py
creating build
creating build/lib
creating build/lib/torchinfo
copying torchinfo/layer_info.py -> build/lib/torchinfo
copying torchinfo/formatting.py -> build/lib/torchinfo
copying torchinfo/torchinfo.py -> build/lib/torchinfo
copying torchinfo/__init__.py -> build/lib/torchinfo
copying torchinfo/model_statistics.py -> build/lib/torchinfo
copying torchinfo/enums.py -> build/lib/torchinfo
running egg_info
creating torchinfo.egg-info
writing torchinfo.egg-info/PKG-INFO
writing dependency_links to torchinfo.egg-info/dependency_links.txt
writing top-level names to torchinfo.egg-info/top_level.txt
writing manifest file 'torchinfo.egg-info/SOURCES.txt'
reading manifest file 'torchinfo.egg-info/SOURCES.txt'
adding license file 'LICENSE'
writing manifest file 'torchinfo.egg-info/SOURCES.txt'
copying torchinfo/py.typed -> build/lib/torchinfo
/nix/store/8zx4h7r5mn1b913hgi8rwjfynwg1wgdi-python3.10-setuptools-67.4.0/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
installing to build/bdist.linux-x86_64/wheel
running install
running install_lib
creating build/bdist.linux-x86_64
creating build/bdist.linux-x86_64/wheel
creating build/bdist.linux-x86_64/wheel/torchinfo
copying build/lib/torchinfo/layer_info.py -> build/bdist.linux-x86_64/wheel/torchinfo
copying build/lib/torchinfo/formatting.py -> build/bdist.linux-x86_64/wheel/torchinfo
copying build/lib/torchinfo/torchinfo.py -> build/bdist.linux-x86_64/wheel/torchinfo
copying build/lib/torchinfo/__init__.py -> build/bdist.linux-x86_64/wheel/torchinfo
copying build/lib/torchinfo/model_statistics.py -> build/bdist.linux-x86_64/wheel/torchinfo
copying build/lib/torchinfo/py.typed -> build/bdist.linux-x86_64/wheel/torchinfo
copying build/lib/torchinfo/enums.py -> build/bdist.linux-x86_64/wheel/torchinfo
running install_egg_info
Copying torchinfo.egg-info to build/bdist.linux-x86_64/wheel/torchinfo-1.6.4-py3.10.egg-info
running install_scripts
creating build/bdist.linux-x86_64/wheel/torchinfo-1.6.4.dist-info/WHEEL
creating 'dist/torchinfo-1.6.4-py3-none-any.whl' and adding 'build/bdist.linux-x86_64/wheel' to it
adding 'torchinfo/__init__.py'
adding 'torchinfo/enums.py'
adding 'torchinfo/formatting.py'
adding 'torchinfo/layer_info.py'
adding 'torchinfo/model_statistics.py'
adding 'torchinfo/py.typed'
adding 'torchinfo/torchinfo.py'
adding 'torchinfo-1.6.4.dist-info/LICENSE'
adding 'torchinfo-1.6.4.dist-info/METADATA'
adding 'torchinfo-1.6.4.dist-info/WHEEL'
adding 'torchinfo-1.6.4.dist-info/top_level.txt'
adding 'torchinfo-1.6.4.dist-info/RECORD'
removing build/bdist.linux-x86_64/wheel
Finished executing setuptoolsBuildPhase
@nix { "action": "setPhase", "phase": "installPhase" }
installing
Executing pipInstallPhase
/build/source/dist /build/source
Processing ./torchinfo-1.6.4-py3-none-any.whl
Installing collected packages: torchinfo
Successfully installed torchinfo-1.6.4
/build/source
Finished executing pipInstallPhase
@nix { "action": "setPhase", "phase": "pythonOutputDistPhase" }
pythonOutputDistPhase
Executing pythonOutputDistPhase
Finished executing pythonOutputDistPhase
@nix { "action": "setPhase", "phase": "fixupPhase" }
post-installation fixup
shrinking RPATHs of ELF executables and libraries in /nix/store/ccqikiigwrx31c75v7kd85ysz38pd7yz-python3.10-torchinfo-1.64
checking for references to /build/ in /nix/store/ccqikiigwrx31c75v7kd85ysz38pd7yz-python3.10-torchinfo-1.64...
patching script interpreter paths in /nix/store/ccqikiigwrx31c75v7kd85ysz38pd7yz-python3.10-torchinfo-1.64
stripping (with command strip and flags -S) in /nix/store/ccqikiigwrx31c75v7kd85ysz38pd7yz-python3.10-torchinfo-1.64/lib
shrinking RPATHs of ELF executables and libraries in /nix/store/dh9sswl0xhhw91vp048k1w58vw1hrxv9-python3.10-torchinfo-1.64-dist
checking for references to /build/ in /nix/store/dh9sswl0xhhw91vp048k1w58vw1hrxv9-python3.10-torchinfo-1.64-dist...
patching script interpreter paths in /nix/store/dh9sswl0xhhw91vp048k1w58vw1hrxv9-python3.10-torchinfo-1.64-dist
Executing pythonRemoveTestsDir
Finished executing pythonRemoveTestsDir
@nix { "action": "setPhase", "phase": "installCheckPhase" }
running install tests
no Makefile or custom installCheckPhase, doing nothing
@nix { "action": "setPhase", "phase": "pythonCatchConflictsPhase" }
pythonCatchConflictsPhase
@nix { "action": "setPhase", "phase": "pythonRemoveBinBytecodePhase" }
pythonRemoveBinBytecodePhase
@nix { "action": "setPhase", "phase": "pythonImportsCheckPhase" }
pythonImportsCheckPhase
Executing pythonImportsCheckPhase
Check whether the following modules can be imported: torchvision
@nix { "action": "setPhase", "phase": "pytestCheckPhase" }
pytestCheckPhase
Executing pytestCheckPhase
============================= test session starts ==============================
platform linux -- Python 3.10.10, pytest-7.2.1, pluggy-1.0.0
rootdir: /build/source
collecting ... collected 64 items / 2 deselected / 62 selected

tests/exceptions_test.py ...F.                                           [  8%]
tests/gpu_test.py ss                                                     [ 11%]
tests/half_precision_test.py sss                                         [ 16%]
tests/torchinfo_test.py ............................................     [ 87%]
tests/torchinfo_xl_test.py .....s..                                      [100%]

=================================== FAILURES ===================================
________________________ test_input_size_half_precision ________________________
model = Linear(in_features=2, out_features=5, bias=True)
x = [tensor([[0.7124, 0.1774],
             [0.6299, 0.7700],
             [0.0745, 0.4951],
             [0.0251, 0.7275],
             [0.86... [0.0835, 0.0176],
             [0.0797, 0.2688],
             [0.7529, 0.1649],
             [0.9116, 0.8252]], dtype=torch.float16)]
batch_dim = None, cache_forward_pass = False, device = 'cpu'
mode = <Mode.EVAL: 'eval'>, kwargs = {}, model_name = 'Linear'
all_layers = [Linear: 0], summary_list = [Linear: 0]
hooks = {140728432927184: (<torch.utils.hooks.RemovableHandle object at 0x7ffde44134f0>, <torch.utils.hooks.RemovableHandle object at 0x7ffde4411240>)}
named_module = ('Linear', Linear(in_features=2, out_features=5, bias=True))
saved_model_mode = True
def forward_pass(
    model: nn.Module,
    x: CORRECTED_INPUT_DATA_TYPE,
    batch_dim: int | None,
    cache_forward_pass: bool,
    device: torch.device | str,
    mode: Mode,
    **kwargs: Any,
) -> list[LayerInfo]:
    """Perform a forward pass on the model using forward hooks."""
    global _cached_forward_pass  # pylint: disable=global-variable-not-assigned
    model_name = model.__class__.__name__
    if cache_forward_pass and model_name in _cached_forward_pass:
        return _cached_forward_pass[model_name]
    all_layers: list[LayerInfo] = []
    summary_list: list[LayerInfo] = []
    hooks: dict[int, tuple[RemovableHandle, RemovableHandle]] | None = (
        None if x is None else {}
    )
    named_module = (model_name, model)
    apply_hooks(named_module, model, batch_dim, summary_list, hooks, all_layers)
    if x is None:
        if not summary_list or summary_list[0].var_name != model_name:
            summary_list.insert(0, LayerInfo("", model, 0))
        set_depth_index(summary_list)
        return summary_list
    kwargs = set_device(kwargs, device)
    saved_model_mode = model.training
    try:
        if mode == Mode.TRAIN:
            model.train()
        elif mode == Mode.EVAL:
            model.eval()
        else:
            raise RuntimeError(
                f"Specified model mode ({list(Mode)}) not recognized: {mode}"
            )
        with torch.no_grad():  # type: ignore[no-untyped-call]
            if isinstance(x, (list, tuple)):
>               _ = model.to(device)(*x, **kwargs)
torchinfo/torchinfo.py:294:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Linear(in_features=2, out_features=5, bias=True)
args = (tensor([[0.7124, 0.1774],
              [0.6299, 0.7700],
              [0.0745, 0.4951],
              [0.0251, 0.7275],
              [0.86...[0.0835, 0.0176],
              [0.0797, 0.2688],
              [0.7529, 0.1649],
              [0.9116, 0.8252]], dtype=torch.float16),)
kwargs = {}
forward_call = <bound method Linear.forward of Linear(in_features=2, out_features=5, bias=True)>
full_backward_hooks = [], non_full_backward_hooks = [], backward_pre_hooks = []
hook_id = 24, hook = <function apply_hooks.<locals>.pre_hook at 0x7ffde45e67a0>
result = None, bw_hook = None
    def _call_impl(self, *args, **kwargs):
        forward_call = (self._slow_forward if torch._C._get_tracing_state() else self.forward)
        # If we don't have any hooks, we want to skip the rest of the logic in
        # this function, and just call forward.
        if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
                or _global_backward_pre_hooks or _global_backward_hooks
                or _global_forward_hooks or _global_forward_pre_hooks):
            return forward_call(*args, **kwargs)
        # Do not call functions when jit is used
        full_backward_hooks, non_full_backward_hooks = [], []
        backward_pre_hooks = []
        if self._backward_pre_hooks or _global_backward_pre_hooks:
            backward_pre_hooks = self._get_backward_pre_hooks()
        if self._backward_hooks or _global_backward_hooks:
            full_backward_hooks, non_full_backward_hooks = self._get_backward_hooks()
        if _global_forward_pre_hooks or self._forward_pre_hooks:
            for hook_id, hook in (
                *_global_forward_pre_hooks.items(),
                *self._forward_pre_hooks.items(),
            ):
                if hook_id in self._forward_pre_hooks_with_kwargs:
                    result = hook(self, args, kwargs)  # type: ignore[misc]
                    if result is not None:
                        if isinstance(result, tuple) and len(result) == 2:
                            args, kwargs = result
                        else:
                            raise RuntimeError(
                                "forward pre-hook must return None or a tuple "
                                f"of (new_args, new_kwargs), but got {result}."
                            )
                else:
                    result = hook(self, args)
                    if result is not None:
                        if not isinstance(result, tuple):
                            result = (result,)
                        args = result
        bw_hook = None
        if full_backward_hooks or backward_pre_hooks:
            bw_hook = hooks.BackwardHook(self, full_backward_hooks, backward_pre_hooks)
            args = bw_hook.setup_input_hook(args)
>       result = forward_call(*args, **kwargs)

/nix/store/x2s2mb5i6skm7galc1y72w1q2329mjqr-python3.10-torch-2.0.0/lib/python3.10/site-packages/torch/nn/modules/module.py:1538:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Linear(in_features=2, out_features=5, bias=True)
input = tensor([[0.7124, 0.1774],
               [0.6299, 0.7700],
               [0.0745, 0.4951],
               [0.0251, 0.7275],
               [0.866... [0.0835, 0.0176],
               [0.0797, 0.2688],
               [0.7529, 0.1649],
               [0.9116, 0.8252]], dtype=torch.float16)

    def forward(self, input: Tensor) -> Tensor:
>       return F.linear(input, self.weight, self.bias)
E       RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'

/nix/store/x2s2mb5i6skm7galc1y72w1q2329mjqr-python3.10-torch-2.0.0/lib/python3.10/site-packages/torch/nn/modules/linear.py:114: RuntimeError

The above exception was the direct cause of the following exception:
    def test_input_size_half_precision() -> None:
        test = torch.nn.Linear(2, 5).half()
        with pytest.warns(
            UserWarning,
            match=(
                "Half precision is not supported with input_size parameter, and "
                "may output incorrect results. Try passing input_data directly."
            ),
        ):
>           summary(test, dtypes=[torch.float16], input_size=(10, 2), device="cpu")

tests/exceptions_test.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
torchinfo/torchinfo.py:215: in summary
    summary_list = forward_pass(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
model = Linear(in_features=2, out_features=5, bias=True)
x = [tensor([[0.7124, 0.1774],
             [0.6299, 0.7700],
             [0.0745, 0.4951],
             [0.0251, 0.7275],
             [0.86... [0.0835, 0.0176],
             [0.0797, 0.2688],
             [0.7529, 0.1649],
             [0.9116, 0.8252]], dtype=torch.float16)]
batch_dim = None, cache_forward_pass = False, device = 'cpu'
mode = <Mode.EVAL: 'eval'>, kwargs = {}, model_name = 'Linear'
all_layers = [Linear: 0], summary_list = [Linear: 0]
hooks = {140728432927184: (<torch.utils.hooks.RemovableHandle object at 0x7ffde44134f0>, <torch.utils.hooks.RemovableHandle object at 0x7ffde4411240>)}
named_module = ('Linear', Linear(in_features=2, out_features=5, bias=True))
saved_model_mode = True
def forward_pass(
    model: nn.Module,
    x: CORRECTED_INPUT_DATA_TYPE,
    batch_dim: int | None,
    cache_forward_pass: bool,
    device: torch.device | str,
    mode: Mode,
    **kwargs: Any,
) -> list[LayerInfo]:
    """Perform a forward pass on the model using forward hooks."""
    global _cached_forward_pass  # pylint: disable=global-variable-not-assigned
    model_name = model.__class__.__name__
    if cache_forward_pass and model_name in _cached_forward_pass:
        return _cached_forward_pass[model_name]
    all_layers: list[LayerInfo] = []
    summary_list: list[LayerInfo] = []
    hooks: dict[int, tuple[RemovableHandle, RemovableHandle]] | None = (
        None if x is None else {}
    )
    named_module = (model_name, model)
    apply_hooks(named_module, model, batch_dim, summary_list, hooks, all_layers)
    if x is None:
        if not summary_list or summary_list[0].var_name != model_name:
            summary_list.insert(0, LayerInfo("", model, 0))
        set_depth_index(summary_list)
        return summary_list
    kwargs = set_device(kwargs, device)
    saved_model_mode = model.training
    try:
        if mode == Mode.TRAIN:
            model.train()
        elif mode == Mode.EVAL:
            model.eval()
        else:
            raise RuntimeError(
                f"Specified model mode ({list(Mode)}) not recognized: {mode}"
            )
        with torch.no_grad():  # type: ignore[no-untyped-call]
            if isinstance(x, (list, tuple)):
                _ = model.to(device)(*x, **kwargs)
            elif isinstance(x, dict):
                _ = model.to(device)(**x, **kwargs)
            else:
                # Should not reach this point, since process_input_data ensures
                # x is either a list, tuple, or dict
                raise ValueError("Unknown input type")
    except Exception as e:
        executed_layers = [layer for layer in summary_list if layer.executed]
>       raise RuntimeError(
            "Failed to run torchinfo. See above stack traces for more details. "
            f"Executed layers up to: {executed_layers}"
        ) from e
E       RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to: []

torchinfo/torchinfo.py:303: RuntimeError
=============================== warnings summary ===============================
tests/exceptions_test.py: 1 warning
tests/torchinfo_test.py: 39 warnings
tests/torchinfo_xl_test.py: 7 warnings
  /build/source/torchinfo/torchinfo.py:455: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
    action_fn=lambda data: sys.getsizeof(data.storage()),

tests/exceptions_test.py: 1 warning
tests/torchinfo_test.py: 39 warnings
tests/torchinfo_xl_test.py: 7 warnings
  /nix/store/x2s2mb5i6skm7galc1y72w1q2329mjqr-python3.10-torch-2.0.0/lib/python3.10/site-packages/torch/storage.py:665: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
    return super().__sizeof__() + self.nbytes()

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
FAILED tests/exceptions_test.py::test_input_size_half_precision - RuntimeError: Failed to run torchinfo. See above stack traces for more deta...
=== 1 failed, 55 passed, 6 skipped, 2 deselected, 94 warnings in 138.34s (0:02:18) ===
/nix/store/pw17yc3mwmsci4jygwalj8ppg0drz31v-stdenv-linux/setup: line 1593: pop_var_context: head of shell_variables not a function context
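
The single failure above is `test_input_size_half_precision`, which drives a float16 `Linear` on CPU; with the torch 2.0.0 in this closure, half-precision `addmm` has no CPU kernel, so the forward pass raises before the expected `UserWarning` can be checked. One way to unblock the build, assuming the torchinfo derivation uses `buildPythonPackage` with `pytestCheckHook` (the overlay shape below is an illustrative sketch, not the actual nixpkgs expression), is to deselect that one test via `disabledTests`:

```nix
# Hypothetical overlay sketch: attribute names follow common nixpkgs
# conventions, but verify them against the real torchinfo expression.
final: prev: {
  python3 = prev.python3.override {
    packageOverrides = pyfinal: pyprev: {
      torchinfo = pyprev.torchinfo.overridePythonAttrs (old: {
        # pytestCheckHook translates disabledTests into --deselect flags,
        # so only the CPU half-precision test is skipped.
        disabledTests = (old.disabledTests or [ ]) ++ [
          "test_input_size_half_precision"
        ];
      });
    };
  };
}
```

This keeps the remaining 55 passing tests active rather than disabling the check phase wholesale.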