nix log /nix/store/5khy3vr59v193zla77izslymvn1w2l3c-python3.10-torchinfo-1.64.drv
Sourcing python-remove-tests-dir-hook
Sourcing python-catch-conflicts-hook.sh
Sourcing python-remove-bin-bytecode-hook.sh
Sourcing setuptools-build-hook
Using setuptoolsBuildPhase
Using setuptoolsShellHook
Sourcing pip-install-hook
Using pipInstallPhase
Sourcing python-imports-check-hook.sh
Using pythonImportsCheckPhase
Sourcing python-namespaces-hook
Sourcing python-catch-conflicts-hook.sh
Sourcing setuptools-check-hook
Using setuptoolsCheckPhase
Sourcing pytest-check-hook
Using pytestCheckPhase
Removing setuptoolsCheckPhase
@nix { "action": "setPhase", "phase": "unpackPhase" }
unpacking sources
unpacking source archive /nix/store/q7xh02zqmqgibvcpbv74qkdbf96zb8jh-source
source root is source
setting SOURCE_DATE_EPOCH to timestamp 315619200 of file source/torchinfo/torchinfo.py
@nix { "action": "setPhase", "phase": "patchPhase" }
patching sources
@nix { "action": "setPhase", "phase": "configurePhase" }
configuring
no configure script, doing nothing
@nix { "action": "setPhase", "phase": "buildPhase" }
building
Executing setuptoolsBuildPhase
/nix/store/8zx4h7r5mn1b913hgi8rwjfynwg1wgdi-python3.10-setuptools-67.4.0/lib/python3.10/site-packages/setuptools/config/setupcfg.py:520: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
  warnings.warn(msg, warning_class)
running bdist_wheel
running build
running build_py
creating build
creating build/lib
creating build/lib/torchinfo
copying torchinfo/layer_info.py -> build/lib/torchinfo
copying torchinfo/formatting.py -> build/lib/torchinfo
copying torchinfo/torchinfo.py -> build/lib/torchinfo
copying torchinfo/__init__.py -> build/lib/torchinfo
copying torchinfo/model_statistics.py -> build/lib/torchinfo
copying torchinfo/enums.py -> build/lib/torchinfo
running egg_info
creating torchinfo.egg-info
writing torchinfo.egg-info/PKG-INFO
writing dependency_links to torchinfo.egg-info/dependency_links.txt
writing top-level names to torchinfo.egg-info/top_level.txt
writing manifest file 'torchinfo.egg-info/SOURCES.txt'
reading manifest file 'torchinfo.egg-info/SOURCES.txt'
adding license file 'LICENSE'
writing manifest file 'torchinfo.egg-info/SOURCES.txt'
copying torchinfo/py.typed -> build/lib/torchinfo
/nix/store/8zx4h7r5mn1b913hgi8rwjfynwg1wgdi-python3.10-setuptools-67.4.0/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
installing to build/bdist.linux-x86_64/wheel
running install
running install_lib
creating build/bdist.linux-x86_64
creating build/bdist.linux-x86_64/wheel
creating build/bdist.linux-x86_64/wheel/torchinfo
copying build/lib/torchinfo/layer_info.py -> build/bdist.linux-x86_64/wheel/torchinfo
copying build/lib/torchinfo/formatting.py -> build/bdist.linux-x86_64/wheel/torchinfo
copying build/lib/torchinfo/torchinfo.py -> build/bdist.linux-x86_64/wheel/torchinfo
copying build/lib/torchinfo/__init__.py -> build/bdist.linux-x86_64/wheel/torchinfo
copying build/lib/torchinfo/model_statistics.py -> build/bdist.linux-x86_64/wheel/torchinfo
copying build/lib/torchinfo/py.typed -> build/bdist.linux-x86_64/wheel/torchinfo
copying build/lib/torchinfo/enums.py -> build/bdist.linux-x86_64/wheel/torchinfo
running install_egg_info
Copying torchinfo.egg-info to build/bdist.linux-x86_64/wheel/torchinfo-1.6.4-py3.10.egg-info
running install_scripts
creating build/bdist.linux-x86_64/wheel/torchinfo-1.6.4.dist-info/WHEEL
creating 'dist/torchinfo-1.6.4-py3-none-any.whl' and adding 'build/bdist.linux-x86_64/wheel' to it
adding 'torchinfo/__init__.py'
adding 'torchinfo/enums.py'
adding 'torchinfo/formatting.py'
adding 'torchinfo/layer_info.py'
adding 'torchinfo/model_statistics.py'
adding 'torchinfo/py.typed'
adding 'torchinfo/torchinfo.py'
adding 'torchinfo-1.6.4.dist-info/LICENSE'
adding 'torchinfo-1.6.4.dist-info/METADATA'
adding 'torchinfo-1.6.4.dist-info/WHEEL'
adding 'torchinfo-1.6.4.dist-info/top_level.txt'
adding 'torchinfo-1.6.4.dist-info/RECORD'
removing build/bdist.linux-x86_64/wheel
Finished executing setuptoolsBuildPhase
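[Editor's note: the two SetuptoolsDeprecationWarnings above (license_file and "setup.py install is deprecated") come from setuptools 67 flagging the legacy path the Nix hooks still drive: setup.py bdist_wheel followed by a setup.py-style install into the wheel staging directory. A rough standards-based equivalent of the build-then-install sequence, sketched in Python; this assumes the PyPA "build" package is available and is not what the hooks literally run:

# Sketch of the standards-based flow the deprecation warning points at
# (assumption: PyPA "build" is installed; the wheel name is the one built above).
import subprocess
import sys

subprocess.run([sys.executable, "-m", "build", "--wheel"], check=True)
subprocess.run(
    [sys.executable, "-m", "pip", "install",
     "dist/torchinfo-1.6.4-py3-none-any.whl"],
    check=True,
)
]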
@nix { "action": "setPhase", "phase": "installPhase" }
installing
Executing pipInstallPhase
/build/source/dist /build/source
Processing ./torchinfo-1.6.4-py3-none-any.whl
Installing collected packages: torchinfo
Successfully installed torchinfo-1.6.4
/build/source
Finished executing pipInstallPhase
@nix { "action": "setPhase", "phase": "pythonOutputDistPhase" }
pythonOutputDistPhase
Executing pythonOutputDistPhase
Finished executing pythonOutputDistPhase
@nix { "action": "setPhase", "phase": "fixupPhase" }
post-installation fixup
shrinking RPATHs of ELF executables and libraries in /nix/store/ccqikiigwrx31c75v7kd85ysz38pd7yz-python3.10-torchinfo-1.64
checking for references to /build/ in /nix/store/ccqikiigwrx31c75v7kd85ysz38pd7yz-python3.10-torchinfo-1.64...
patching script interpreter paths in /nix/store/ccqikiigwrx31c75v7kd85ysz38pd7yz-python3.10-torchinfo-1.64
stripping (with command strip and flags -S) in /nix/store/ccqikiigwrx31c75v7kd85ysz38pd7yz-python3.10-torchinfo-1.64/lib
shrinking RPATHs of ELF executables and libraries in /nix/store/dh9sswl0xhhw91vp048k1w58vw1hrxv9-python3.10-torchinfo-1.64-dist
checking for references to /build/ in /nix/store/dh9sswl0xhhw91vp048k1w58vw1hrxv9-python3.10-torchinfo-1.64-dist...
patching script interpreter paths in /nix/store/dh9sswl0xhhw91vp048k1w58vw1hrxv9-python3.10-torchinfo-1.64-dist
Executing pythonRemoveTestsDir
Finished executing pythonRemoveTestsDir
@nix { "action": "setPhase", "phase": "installCheckPhase" }
running install tests
no Makefile or custom installCheckPhase, doing nothing
@nix { "action": "setPhase", "phase": "pythonCatchConflictsPhase" }
pythonCatchConflictsPhase
@nix { "action": "setPhase", "phase": "pythonRemoveBinBytecodePhase" }
pythonRemoveBinBytecodePhase
@nix { "action": "setPhase", "phase": "pythonImportsCheckPhase" }
pythonImportsCheckPhase
Executing pythonImportsCheckPhase
Check whether the following modules can be imported: torchvision
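[Editor's note: the imports check probes torchvision, a test dependency, rather than the packaged torchinfo module itself, so this phase passes whenever torchvision is importable. The phase roughly amounts to importing each listed module in the build's interpreter; a minimal Python approximation, not the hook's literal implementation:

# Approximate Python equivalent of pythonImportsCheckPhase for this build;
# note the list is ["torchvision"], not ["torchinfo"].
import importlib
import sys

for module in ["torchvision"]:
    try:
        importlib.import_module(module)
    except ImportError as err:
        sys.exit(f"pythonImportsCheckPhase failed on {module}: {err}")
]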
@nix { "action": "setPhase", "phase": "pytestCheckPhase" }
pytestCheckPhase
Executing pytestCheckPhase
============================= test session starts ==============================
platform linux -- Python 3.10.10, pytest-7.2.1, pluggy-1.0.0
rootdir: /build/source
collecting ...  collected 64 items / 2 deselected / 62 selected 
tests/exceptions_test.py ...F. [ 8%]
tests/gpu_test.py ss [ 11%]
tests/half_precision_test.py sss [ 16%]
tests/torchinfo_test.py ............................................ [ 87%]
tests/torchinfo_xl_test.py .....s.. [100%]
=================================== FAILURES ===================================
________________________ test_input_size_half_precision ________________________
model = Linear(in_features=2, out_features=5, bias=True)
x = [tensor([[0.7124, 0.1774],
             [0.6299, 0.7700],
             [0.0745, 0.4951],
             [0.0251, 0.7275],
             [0.86... [0.0835, 0.0176],
             [0.0797, 0.2688],
             [0.7529, 0.1649],
             [0.9116, 0.8252]], dtype=torch.float16)]
batch_dim = None, cache_forward_pass = False, device = 'cpu'
mode = <Mode.EVAL: 'eval'>, kwargs = {}, model_name = 'Linear'
all_layers = [Linear: 0], summary_list = [Linear: 0]
hooks = {140728432927184: (<torch.utils.hooks.RemovableHandle object at 0x7ffde44134f0>, <torch.utils.hooks.RemovableHandle object at 0x7ffde4411240>)}
named_module = ('Linear', Linear(in_features=2, out_features=5, bias=True))
saved_model_mode = True
def forward_pass(
    model: nn.Module,
    x: CORRECTED_INPUT_DATA_TYPE,
    batch_dim: int | None,
    cache_forward_pass: bool,
    device: torch.device | str,
    mode: Mode,
    **kwargs: Any,
) -> list[LayerInfo]:
    """Perform a forward pass on the model using forward hooks."""
    global _cached_forward_pass  # pylint: disable=global-variable-not-assigned
    model_name = model.__class__.__name__
    if cache_forward_pass and model_name in _cached_forward_pass:
        return _cached_forward_pass[model_name]
    all_layers: list[LayerInfo] = []
    summary_list: list[LayerInfo] = []
    hooks: dict[int, tuple[RemovableHandle, RemovableHandle]] | None = (
        None if x is None else {}
    )
    named_module = (model_name, model)
    apply_hooks(named_module, model, batch_dim, summary_list, hooks, all_layers)
    if x is None:
        if not summary_list or summary_list[0].var_name != model_name:
            summary_list.insert(0, LayerInfo("", model, 0))
        set_depth_index(summary_list)
        return summary_list
    kwargs = set_device(kwargs, device)
    saved_model_mode = model.training
    try:
        if mode == Mode.TRAIN:
            model.train()
        elif mode == Mode.EVAL:
            model.eval()
        else:
            raise RuntimeError(
                f"Specified model mode ({list(Mode)}) not recognized: {mode}"
            )
        with torch.no_grad():  # type: ignore[no-untyped-call]
            if isinstance(x, (list, tuple)):
>               _ = model.to(device)(*x, **kwargs)
torchinfo/torchinfo.py:294:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Linear(in_features=2, out_features=5, bias=True)
args = (tensor([[0.7124, 0.1774],
                [0.6299, 0.7700],
                [0.0745, 0.4951],
                [0.0251, 0.7275],
                [0.86...[0.0835, 0.0176],
                [0.0797, 0.2688],
                [0.7529, 0.1649],
                [0.9116, 0.8252]], dtype=torch.float16),)
kwargs = {}
forward_call = <bound method Linear.forward of Linear(in_features=2, out_features=5, bias=True)>
full_backward_hooks = [], non_full_backward_hooks = [], backward_pre_hooks = []
hook_id = 24, hook = <function apply_hooks.<locals>.pre_hook at 0x7ffde45e67a0>
result = None, bw_hook = None
def _call_impl(self, *args, **kwargs):
    forward_call = (self._slow_forward if torch._C._get_tracing_state() else self.forward)
    # If we don't have any hooks, we want to skip the rest of the logic in
    # this function, and just call forward.
    if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
            or _global_backward_pre_hooks or _global_backward_hooks
            or _global_forward_hooks or _global_forward_pre_hooks):
        return forward_call(*args, **kwargs)
    # Do not call functions when jit is used
    full_backward_hooks, non_full_backward_hooks = [], []
    backward_pre_hooks = []
    if self._backward_pre_hooks or _global_backward_pre_hooks:
        backward_pre_hooks = self._get_backward_pre_hooks()
    if self._backward_hooks or _global_backward_hooks:
        full_backward_hooks, non_full_backward_hooks = self._get_backward_hooks()
    if _global_forward_pre_hooks or self._forward_pre_hooks:
        for hook_id, hook in (
            *_global_forward_pre_hooks.items(),
            *self._forward_pre_hooks.items(),
        ):
            if hook_id in self._forward_pre_hooks_with_kwargs:
                result = hook(self, args, kwargs)  # type: ignore[misc]
                if result is not None:
                    if isinstance(result, tuple) and len(result) == 2:
                        args, kwargs = result
                    else:
                        raise RuntimeError(
                            "forward pre-hook must return None or a tuple "
                            f"of (new_args, new_kwargs), but got {result}."
                        )
            else:
                result = hook(self, args)
                if result is not None:
                    if not isinstance(result, tuple):
                        result = (result,)
                    args = result
    bw_hook = None
    if full_backward_hooks or backward_pre_hooks:
        bw_hook = hooks.BackwardHook(self, full_backward_hooks, backward_pre_hooks)
        args = bw_hook.setup_input_hook(args)
>   result = forward_call(*args, **kwargs)
/nix/store/x2s2mb5i6skm7galc1y72w1q2329mjqr-python3.10-torch-2.0.0/lib/python3.10/site-packages/torch/nn/modules/module.py:1538:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Linear(in_features=2, out_features=5, bias=True)
input = tensor([[0.7124, 0.1774],
                [0.6299, 0.7700],
                [0.0745, 0.4951],
                [0.0251, 0.7275],
                [0.866... [0.0835, 0.0176],
                [0.0797, 0.2688],
                [0.7529, 0.1649],
                [0.9116, 0.8252]], dtype=torch.float16)
def forward(self, input: Tensor) -> Tensor:
>   return F.linear(input, self.weight, self.bias)
E   RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
/nix/store/x2s2mb5i6skm7galc1y72w1q2329mjqr-python3.10-torch-2.0.0/lib/python3.10/site-packages/torch/nn/modules/linear.py:114: RuntimeError
The above exception was the direct cause of the following exception:
def test_input_size_half_precision() -> None:
    test = torch.nn.Linear(2, 5).half()
    with pytest.warns(
        UserWarning,
        match=(
            "Half precision is not supported with input_size parameter, and "
            "may output incorrect results. Try passing input_data directly."
        ),
    ):
>       summary(test, dtypes=[torch.float16], input_size=(10, 2), device="cpu")
tests/exceptions_test.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
torchinfo/torchinfo.py:215: in summary
    summary_list = forward_pass(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
model = Linear(in_features=2, out_features=5, bias=True)
x = [tensor([[0.7124, 0.1774],
             [0.6299, 0.7700],
             [0.0745, 0.4951],
             [0.0251, 0.7275],
             [0.86... [0.0835, 0.0176],
             [0.0797, 0.2688],
             [0.7529, 0.1649],
             [0.9116, 0.8252]], dtype=torch.float16)]
batch_dim = None, cache_forward_pass = False, device = 'cpu'
mode = <Mode.EVAL: 'eval'>, kwargs = {}, model_name = 'Linear'
all_layers = [Linear: 0], summary_list = [Linear: 0]
hooks = {140728432927184: (<torch.utils.hooks.RemovableHandle object at 0x7ffde44134f0>, <torch.utils.hooks.RemovableHandle object at 0x7ffde4411240>)}
named_module = ('Linear', Linear(in_features=2, out_features=5, bias=True))
saved_model_mode = True
def forward_pass(
    model: nn.Module,
    x: CORRECTED_INPUT_DATA_TYPE,
    batch_dim: int | None,
    cache_forward_pass: bool,
    device: torch.device | str,
    mode: Mode,
    **kwargs: Any,
) -> list[LayerInfo]:
    """Perform a forward pass on the model using forward hooks."""
    global _cached_forward_pass  # pylint: disable=global-variable-not-assigned
    model_name = model.__class__.__name__
    if cache_forward_pass and model_name in _cached_forward_pass:
        return _cached_forward_pass[model_name]
    all_layers: list[LayerInfo] = []
    summary_list: list[LayerInfo] = []
    hooks: dict[int, tuple[RemovableHandle, RemovableHandle]] | None = (
        None if x is None else {}
    )
    named_module = (model_name, model)
    apply_hooks(named_module, model, batch_dim, summary_list, hooks, all_layers)
    if x is None:
        if not summary_list or summary_list[0].var_name != model_name:
            summary_list.insert(0, LayerInfo("", model, 0))
        set_depth_index(summary_list)
        return summary_list
    kwargs = set_device(kwargs, device)
    saved_model_mode = model.training
    try:
        if mode == Mode.TRAIN:
            model.train()
        elif mode == Mode.EVAL:
            model.eval()
        else:
            raise RuntimeError(
                f"Specified model mode ({list(Mode)}) not recognized: {mode}"
            )
        with torch.no_grad():  # type: ignore[no-untyped-call]
            if isinstance(x, (list, tuple)):
                _ = model.to(device)(*x, **kwargs)
            elif isinstance(x, dict):
                _ = model.to(device)(**x, **kwargs)
            else:
                # Should not reach this point, since process_input_data ensures
                # x is either a list, tuple, or dict
                raise ValueError("Unknown input type")
    except Exception as e:
        executed_layers = [layer for layer in summary_list if layer.executed]
>       raise RuntimeError(
            "Failed to run torchinfo. See above stack traces for more details. "
            f"Executed layers up to: {executed_layers}"
        ) from e
E       RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to: []
torchinfo/torchinfo.py:303: RuntimeError
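[Editor's note: the root cause is two frames down. As the error message says, this torch build has no CPU kernel for half-precision addmm, so F.linear on a .half() Linear raises "addmm_impl_cpu_" not implemented for 'Half', and torchinfo wraps that in the RuntimeError above instead of finishing with only the UserWarning the test expects. The failure reproduces without torchinfo at all; a minimal sketch, assuming a CPU-only torch 2.0.x like the one in this closure:

# Minimal reproduction of the underlying PyTorch limitation, independent of
# torchinfo (assumption: torch 2.0.x on a machine without CUDA).
import torch

model = torch.nn.Linear(2, 5).half()        # float16 weights on CPU
x = torch.rand(10, 2, dtype=torch.float16)  # float16 input, as in the test
with torch.no_grad():
    model(x)  # RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
]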
=============================== warnings summary ===============================
tests/exceptions_test.py: 1 warning
tests/torchinfo_test.py: 39 warnings
tests/torchinfo_xl_test.py: 7 warnings
/build/source/torchinfo/torchinfo.py:455: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
    action_fn=lambda data: sys.getsizeof(data.storage()),
tests/exceptions_test.py: 1 warning
tests/torchinfo_test.py: 39 warnings
tests/torchinfo_xl_test.py: 7 warnings
/nix/store/x2s2mb5i6skm7galc1y72w1q2329mjqr-python3.10-torch-2.0.0/lib/python3.10/site-packages/torch/storage.py:665: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
    return super().__sizeof__() + self.nbytes()
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
FAILED tests/exceptions_test.py::test_input_size_half_precision - RuntimeError: Failed to run torchinfo. See above stack traces for more deta...
= 1 failed, 55 passed, 6 skipped, 2 deselected, 94 warnings in 138.34s (0:02:18) =
/nix/store/pw17yc3mwmsci4jygwalj8ppg0drz31v-stdenv-linux/setup: line 1593: pop_var_context: head of shell_variables not a function context
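[Editor's note: until the test is adjusted upstream, the usual nixpkgs workaround is to deselect it through pytestCheckHook's disabledTests list, which assembles a -k "not ..." filter for pytest. A sketch of the equivalent invocation from a source checkout; this mirrors, not reproduces, the hook's exact command line:

# Roughly what disabledTests = [ "test_input_size_half_precision" ] turns into
# when pytestCheckHook builds the pytest command (run from the source root).
import pytest

raise SystemExit(pytest.main(["-k", "not test_input_size_half_precision"]))
]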