@mlazos
Created June 5, 2024 18:47
--> TORCH_LOGS="dynamo" python nvembed.py
setup passages
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████| 4/4 [00:03<00:00, 1.07it/s]
downloaded model
[INFO]:Step 1: torchdynamo start tracing fn /data/users/mlazos/empathy_day/nvembed.py:31
[INFO]:Restarting analysis due to _dynamo/symbolic_convert.py:148 in fail_and_restart_analysis
[INFO]:Step 1: torchdynamo start tracing fn /data/users/mlazos/empathy_day/nvembed.py:31
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing encode /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:403
[INFO]:Restarting analysis due to _dynamo/symbolic_convert.py:148 in fail_and_restart_analysis
[INFO]:Step 1: torchdynamo start tracing encode /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:403
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing torch_dynamo_resume_in_encode_at_406 /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:406
[INFO]:Restarting analysis due to _dynamo/symbolic_convert.py:148 in fail_and_restart_analysis
[INFO]:Step 1: torchdynamo start tracing torch_dynamo_resume_in_encode_at_406 /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:406
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing as_tensor /data/users/mlazos/transformers/src/transformers/tokenization_utils_base.py:718
[INFO]:Step 1: torchdynamo done tracing as_tensor (RETURN_VALUE)
[INFO]:Step 2: calling compiler function inductor
[INFO]:Step 2: done compiler function inductor
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing as_tensor /data/users/mlazos/transformers/src/transformers/tokenization_utils_base.py:718
[INFO]:Step 1: torchdynamo done tracing as_tensor (RETURN_VALUE)
[INFO]:Step 2: calling compiler function inductor
[INFO]:Step 2: done compiler function inductor
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing torch_dynamo_resume_in_encode_at_411 /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:411
[INFO]:Restarting analysis due to _dynamo/symbolic_convert.py:148 in fail_and_restart_analysis
[INFO]:Step 1: torchdynamo start tracing torch_dynamo_resume_in_encode_at_411 /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:411
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing prepare_kwargs_from_batch /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:335
[INFO]:WON'T CONVERT prepare_kwargs_from_batch /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py line 335
due to:
Traceback (most recent call last):
File "/data/users/mlazos/pytorch/torch/_dynamo/convert_frame.py", line 949, in __call__
result = self._inner_convert(
File "/data/users/mlazos/pytorch/torch/_dynamo/convert_frame.py", line 473, in __call__
return _compile(
File "/data/users/mlazos/pytorch/torch/_utils_internal.py", line 83, in wrapper_function
return StrobelightCompileTimeProfiler.profile_compile_time(
File "/data/users/mlazos/pytorch/torch/_strobelight/compile_time_profiler.py", line 129, in profile_compile_time
return func(*args, **kwargs)
File "/home/mlazos/.conda/envs/pytorch-3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/data/users/mlazos/pytorch/torch/_dynamo/convert_frame.py", line 818, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/data/users/mlazos/pytorch/torch/_dynamo/utils.py", line 232, in time_wrapper
r = func(*args, **kwargs)
File "/data/users/mlazos/pytorch/torch/_dynamo/convert_frame.py", line 637, in compile_inner
out_code = transform_code_object(code, transform)
File "/data/users/mlazos/pytorch/torch/_dynamo/bytecode_transformation.py", line 1184, in transform_code_object
transformations(instructions, code_options)
File "/data/users/mlazos/pytorch/torch/_dynamo/convert_frame.py", line 177, in _fn
return fn(*args, **kwargs)
File "/data/users/mlazos/pytorch/torch/_dynamo/convert_frame.py", line 583, in transform
tracer.run()
File "/data/users/mlazos/pytorch/torch/_dynamo/symbolic_convert.py", line 2450, in run
super().run()
File "/data/users/mlazos/pytorch/torch/_dynamo/symbolic_convert.py", line 892, in run
while self.step():
File "/data/users/mlazos/pytorch/torch/_dynamo/symbolic_convert.py", line 804, in step
self.dispatch_table[inst.opcode](self, inst)
File "/data/users/mlazos/pytorch/torch/_dynamo/symbolic_convert.py", line 1904, in CONTAINS_OP
self.push(right.call_method(self, "__contains__", [left], {}))
File "/data/users/mlazos/pytorch/torch/_dynamo/variables/user_defined.py", line 641, in call_method
return UserMethodVariable(method, self, source=source).call_function(
File "/data/users/mlazos/pytorch/torch/_dynamo/variables/functions.py", line 342, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/mlazos/pytorch/torch/_dynamo/variables/functions.py", line 294, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/mlazos/pytorch/torch/_dynamo/variables/functions.py", line 91, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/data/users/mlazos/pytorch/torch/_dynamo/symbolic_convert.py", line 748, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/mlazos/pytorch/torch/_dynamo/symbolic_convert.py", line 2665, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/users/mlazos/pytorch/torch/_dynamo/symbolic_convert.py", line 2709, in inline_call_
result = InliningInstructionTranslator.check_inlineable(func)
File "/data/users/mlazos/pytorch/torch/_dynamo/symbolic_convert.py", line 2686, in check_inlineable
unimplemented(
File "/data/users/mlazos/pytorch/torch/_dynamo/exc.py", line 220, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: 'inline in skipfiles: UserDict.__contains__ | __contains__ /home/mlazos/.conda/envs/pytorch-3.10/lib/python3.10/collections/__init__.py, skipped according trace_rules.lookup SKIP_DIRS'
from user code:
File "/home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py", line 337, in prepare_kwargs_from_batch
attention_mask = batch_dict['attention_mask'].clone() if 'attention_mask' in batch_dict else None
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
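The graph break above comes from `'attention_mask' in batch_dict`: the tokenizer output subclasses `collections.UserDict`, and Dynamo refuses to inline `UserDict.__contains__` because `collections/__init__.py` is on its skip list (SKIP_DIRS). A minimal sketch of one possible workaround, not taken from the log: `BatchLike` is a hypothetical stand-in for the real batch object, and the idea is to convert to a plain `dict` before the membership test so `CONTAINS_OP` dispatches to `dict.__contains__`, which Dynamo handles natively.

```python
from collections import UserDict

# Hypothetical minimal stand-in for a UserDict-based batch container.
# Dynamo skips collections/__init__.py, so tracing
# `'attention_mask' in batch` on this type triggers the graph break above.
class BatchLike(UserDict):
    pass

batch = BatchLike({"input_ids": [1, 2], "attention_mask": [1, 1]})

# Possible workaround (an assumption, not verified against this model):
# copy into a plain dict before entering the compiled region, so the
# `in` check uses dict.__contains__ instead of UserDict.__contains__.
plain = dict(batch)
assert "attention_mask" in plain
```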
/home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:345: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
'input_ids': torch.tensor(batch_dict.get('input_ids').to(batch_dict.get('input_ids')).long()),
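The UserWarning above names its own fix. A minimal sketch of the rewrite it recommends (assuming PyTorch is installed; the tensor here is illustrative, not the model's actual input):

```python
import torch

src = torch.arange(4)

# Discouraged: torch.tensor() on an existing tensor, which is what the
# warning in modeling_nvembed.py line 345 flags.
bad = torch.tensor(src)

# Recommended by the warning: explicit clone + detach, then cast.
good = src.clone().detach().long()

assert torch.equal(bad, good)
```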
[INFO]:Step 1: torchdynamo start tracing torch_dynamo_resume_in_encode_at_417 /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:417
[INFO]:Restarting analysis due to _dynamo/symbolic_convert.py:148 in fail_and_restart_analysis
[INFO]:Step 1: torchdynamo start tracing torch_dynamo_resume_in_encode_at_417 /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:417
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing forward /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:388
[INFO]:Restarting analysis due to _dynamo/symbolic_convert.py:148 in fail_and_restart_analysis
[INFO]:Step 1: torchdynamo start tracing forward /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:388
[INFO]:Step 2: calling compiler function inductor
[INFO]:Step 2: done compiler function inductor
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing forward /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:300
[INFO]:Restarting analysis due to _dynamo/symbolic_convert.py:148 in fail_and_restart_analysis
[INFO]:Step 1: torchdynamo start tracing forward /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:300
[INFO]:Step 2: calling compiler function inductor
[INFO]:Step 2: done compiler function inductor
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing forward /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:229
[INFO]:Restarting analysis due to _dynamo/symbolic_convert.py:148 in fail_and_restart_analysis
[INFO]:Step 1: torchdynamo start tracing forward /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:229
[INFO]:Step 2: calling compiler function inductor
[INFO]:Step 2: done compiler function inductor
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing torch_dynamo_resume_in_forward_at_234 /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:234
/home/mlazos/.conda/envs/pytorch-3.10/lib/python3.10/contextlib.py:103: FutureWarning: `torch.backends.cuda.sdp_kernel()` is deprecated. In the future, this context manager will be removed. Please see `torch.nn.attention.sdpa_kernel()` for the new context manager, with updated signature.
self.gen = func(*args, **kwds)
[INFO]:Restarting analysis due to _dynamo/symbolic_convert.py:148 in fail_and_restart_analysis
[INFO]:Step 1: torchdynamo start tracing torch_dynamo_resume_in_forward_at_234 /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:234
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing forward /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:271
/home/mlazos/.conda/envs/pytorch-3.10/lib/python3.10/contextlib.py:103: FutureWarning: `torch.backends.cuda.sdp_kernel()` is deprecated. In the future, this context manager will be removed. Please see `torch.nn.attention.sdpa_kernel()` for the new context manager, with updated signature.
self.gen = func(*args, **kwds)
[INFO]:Restarting analysis due to _dynamo/symbolic_convert.py:148 in fail_and_restart_analysis
[INFO]:Step 1: torchdynamo start tracing forward /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:271
[INFO]:Step 2: calling compiler function inductor
[INFO]:Step 2: done compiler function inductor
[INFO]:produce_guards
/home/mlazos/.conda/envs/pytorch-3.10/lib/python3.10/contextlib.py:103: FutureWarning: `torch.backends.cuda.sdp_kernel()` is deprecated. In the future, this context manager will be removed. Please see `torch.nn.attention.sdpa_kernel()` for the new context manager, with updated signature.
self.gen = func(*args, **kwds)
[INFO]:Step 1: torchdynamo start tracing torch_dynamo_resume_in_forward_at_277 /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:277
[INFO]:WON'T CONVERT torch_dynamo_resume_in_forward_at_277 /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py line 277
due to:
Traceback (most recent call last):
File "/data/users/mlazos/pytorch/torch/_dynamo/convert_frame.py", line 949, in __call__
result = self._inner_convert(
File "/data/users/mlazos/pytorch/torch/_dynamo/convert_frame.py", line 473, in __call__
return _compile(
File "/data/users/mlazos/pytorch/torch/_utils_internal.py", line 83, in wrapper_function
return StrobelightCompileTimeProfiler.profile_compile_time(
File "/data/users/mlazos/pytorch/torch/_strobelight/compile_time_profiler.py", line 129, in profile_compile_time
return func(*args, **kwargs)
File "/home/mlazos/.conda/envs/pytorch-3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/data/users/mlazos/pytorch/torch/_dynamo/convert_frame.py", line 818, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/data/users/mlazos/pytorch/torch/_dynamo/utils.py", line 232, in time_wrapper
r = func(*args, **kwargs)
File "/data/users/mlazos/pytorch/torch/_dynamo/convert_frame.py", line 637, in compile_inner
out_code = transform_code_object(code, transform)
File "/data/users/mlazos/pytorch/torch/_dynamo/bytecode_transformation.py", line 1184, in transform_code_object
transformations(instructions, code_options)
File "/data/users/mlazos/pytorch/torch/_dynamo/convert_frame.py", line 177, in _fn
return fn(*args, **kwargs)
File "/data/users/mlazos/pytorch/torch/_dynamo/convert_frame.py", line 583, in transform
tracer.run()
File "/data/users/mlazos/pytorch/torch/_dynamo/symbolic_convert.py", line 2450, in run
super().run()
File "/data/users/mlazos/pytorch/torch/_dynamo/symbolic_convert.py", line 892, in run
while self.step():
File "/data/users/mlazos/pytorch/torch/_dynamo/symbolic_convert.py", line 804, in step
self.dispatch_table[inst.opcode](self, inst)
File "/data/users/mlazos/pytorch/torch/_dynamo/symbolic_convert.py", line 1175, in SETUP_WITH
self.setup_or_before_with(inst)
File "/data/users/mlazos/pytorch/torch/_dynamo/symbolic_convert.py", line 2084, in setup_or_before_with
unimplemented(f"{inst.opname} {ctx}")
File "/data/users/mlazos/pytorch/torch/_dynamo/exc.py", line 220, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: SETUP_WITH UserDefinedObjectVariable(_GeneratorContextManager)
from user code:
File "/home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py", line 277, in torch_dynamo_resume_in_forward_at_277
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_mem_efficient=True):
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
[INFO]:Step 1: torchdynamo start tracing rearrange /home/mlazos/.conda/envs/pytorch-3.10/lib/python3.10/site-packages/einops/einops.py:536
[INFO]:Step 1: torchdynamo done tracing rearrange (RETURN_VALUE)
[INFO]:Step 2: calling compiler function inductor
[INFO]:Step 2: done compiler function inductor
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing torch_dynamo_resume_in_forward_at_305 /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:305
[INFO]:Step 1: torchdynamo done tracing torch_dynamo_resume_in_forward_at_305 (RETURN_VALUE)
[INFO]:Step 2: calling compiler function inductor
[INFO]:Step 2: done compiler function inductor
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing torch_dynamo_resume_in_forward_at_397 /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:397
[INFO]:Step 1: torchdynamo done tracing torch_dynamo_resume_in_forward_at_397 (RETURN_VALUE)
[INFO]:Step 2: calling compiler function inductor
[INFO]:Step 2: done compiler function inductor
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing torch_dynamo_resume_in_fn_at_33 /data/users/mlazos/empathy_day/nvembed.py:33
[INFO]:Restarting analysis due to _dynamo/symbolic_convert.py:148 in fail_and_restart_analysis
[INFO]:Step 1: torchdynamo start tracing torch_dynamo_resume_in_fn_at_33 /data/users/mlazos/empathy_day/nvembed.py:33
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing encode /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:403
[INFO]:Restarting analysis due to _dynamo/symbolic_convert.py:148 in fail_and_restart_analysis
[INFO]:Step 1: torchdynamo start tracing encode /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:403
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing as_tensor /data/users/mlazos/transformers/src/transformers/tokenization_utils_base.py:718
[INFO]:create_symbol s0 = 4577 for L['value'][0][1] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0"
[INFO]:create_symbol s1 = 368 for L['value'][0][2] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s1"
[INFO]:create_symbol s2 = 28742 for L['value'][0][3] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s2"
[INFO]:create_symbol s3 = 267 for L['value'][0][4] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s3"
[INFO]:create_symbol s4 = 4865 for L['value'][0][5] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s4"
[INFO]:create_symbol s5 = 456 for L['value'][0][6] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s5"
[INFO]:create_symbol s6 = 368 for L['value'][0][8] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s6"
[INFO]:create_symbol s7 = 460 for L['value'][0][9] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s7"
[INFO]:create_symbol s8 = 3049 for L['value'][0][10] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s8"
[INFO]:create_symbol s9 = 2493 for L['value'][0][11] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s9"
[INFO]:create_symbol s10 = 477 for L['value'][0][12] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s10"
[INFO]:create_symbol s11 = 264 for L['value'][0][13] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s11"
[INFO]:create_symbol s12 = 4612 for L['value'][0][14] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s12"
[INFO]:create_symbol s13 = 28709 for L['value'][0][15] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s13"
[INFO]:create_symbol s14 = 5414 for L['value'][0][16] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s14"
[INFO]:create_symbol s15 = 442 for L['value'][0][17] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s15"
[INFO]:create_symbol s16 = 2493 for L['value'][0][18] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s16"
[INFO]:create_symbol s17 = 693 for L['value'][0][19] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s17"
[INFO]:create_symbol s18 = 349 for L['value'][0][20] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s18"
[INFO]:create_symbol s19 = 776 for L['value'][0][21] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s19"
[INFO]:create_symbol s20 = 12785 for L['value'][0][22] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s20"
[INFO]:create_symbol s21 = 910 for L['value'][0][23] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s21"
[INFO]:create_symbol s22 = 4612 for L['value'][0][24] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s22"
[INFO]:create_symbol s23 = 28709 for L['value'][0][25] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s23"
[INFO]:create_symbol s24 = 9804 for L['value'][0][26] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s24"
[INFO]:create_symbol s25 = 541 for L['value'][0][27] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s25"
[INFO]:create_symbol s26 = 347 for L['value'][0][28] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s26"
[INFO]:create_symbol s27 = 7589 for L['value'][0][29] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s27"
[INFO]:create_symbol s28 = 916 for L['value'][0][30] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s28"
[INFO]:create_symbol s29 = 17619 for L['value'][0][31] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s29"
[INFO]:create_symbol s30 = 20811 for L['value'][1][1] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s30"
[INFO]:create_symbol s31 = 460 for L['value'][1][2] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s31"
[INFO]:create_symbol s32 = 272 for L['value'][1][3] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s32"
[INFO]:create_symbol s33 = 6471 for L['value'][1][4] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s33"
[INFO]:create_symbol s34 = 5944 for L['value'][1][5] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s34"
[INFO]:create_symbol s35 = 298 for L['value'][1][6] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s35"
[INFO]:create_symbol s36 = 7888 for L['value'][1][7] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s36"
[INFO]:create_symbol s37 = 264 for L['value'][1][8] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s37"
[INFO]:create_symbol s38 = 16893 for L['value'][1][9] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s38"
[INFO]:create_symbol s39 = 504 for L['value'][1][10] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s39"
[INFO]:create_symbol s40 = 1711 for L['value'][1][11] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s40"
[INFO]:create_symbol s41 = 28730 for L['value'][1][12] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s41"
[INFO]:create_symbol s42 = 2345 for L['value'][1][13] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s42"
[INFO]:create_symbol s43 = 23661 for L['value'][1][14] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s43"
[INFO]:create_symbol s44 = 28730 for L['value'][1][15] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s44"
[INFO]:create_symbol s45 = 12413 for L['value'][1][16] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s45"
[INFO]:create_symbol s46 = 11408 for L['value'][1][17] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s46"
[INFO]:create_symbol s47 = 2412 for L['value'][1][18] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s47"
[INFO]:create_symbol s48 = 12418 for L['value'][1][19] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s48"
[INFO]:create_symbol s49 = 297 for L['value'][1][20] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s49"
[INFO]:create_symbol s50 = 13642 for L['value'][1][21] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s50"
[INFO]:create_symbol s51 = 28747 for L['value'][1][22] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s51"
[INFO]:create_symbol s52 = 28749 for L['value'][1][23] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s52"
[INFO]:create_symbol s53 = 1331 for L['value'][1][24] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s53"
[INFO]:create_symbol s54 = 264 for L['value'][1][25] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s54"
[INFO]:create_symbol s55 = 1486 for L['value'][1][26] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s55"
[INFO]:create_symbol s56 = 2052 for L['value'][1][27] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s56"
[INFO]:create_symbol s57 = 890 for L['value'][1][28] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s57"
[INFO]:create_symbol s58 = 452 for L['value'][1][29] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s58"
[INFO]:create_symbol s59 = 6943 for L['value'][1][30] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s59"
[INFO]:create_symbol s60 = 28723 for L['value'][1][31] [-9223372036854775808, 9223372036854775807] at transformers/src/transformers/tokenization_utils_base.py:721 in as_tensor (_dynamo/variables/builder.py:1474 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s60"
[INFO]:Step 1: torchdynamo done tracing as_tensor (RETURN_VALUE)
[INFO]:Step 2: calling compiler function inductor
[INFO]:Step 2: done compiler function inductor
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing as_tensor /data/users/mlazos/transformers/src/transformers/tokenization_utils_base.py:718
[INFO]:Step 1: torchdynamo done tracing as_tensor (RETURN_VALUE)
[INFO]:Step 2: calling compiler function inductor
[INFO]:Step 2: done compiler function inductor
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing torch_dynamo_resume_in_encode_at_411 /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:411
[INFO]:Restarting analysis due to _dynamo/symbolic_convert.py:148 in fail_and_restart_analysis
[INFO]:Step 1: torchdynamo start tracing torch_dynamo_resume_in_encode_at_411 /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:411
[INFO]:produce_guards
/home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:345: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
'input_ids': torch.tensor(batch_dict.get('input_ids').to(batch_dict.get('input_ids')).long()),
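The UserWarning above fires because `modeling_nvembed.py` wraps an existing tensor in `torch.tensor(...)`. A minimal sketch of the pattern the warning recommends instead (the `input_ids` name mirrors the warned line; the source tensor here is a stand-in):

```python
import torch

# Stand-in for the tensor already held in batch_dict["input_ids"].
src = torch.tensor([1, 2, 3])

# Warned pattern: torch.tensor(src) re-wraps an existing tensor.
# Recommended pattern from the warning: clone then detach, then cast.
input_ids = src.clone().detach().long()

# Same values, new storage, detached from any autograd graph.
assert torch.equal(input_ids, src)
```

This avoids the warning and makes the copy-vs-view semantics explicit.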
[INFO]:Step 1: torchdynamo start tracing forward /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:388
[INFO]:create_symbol s0 = 114 for L['features']['input_ids'].size()[1] [2, 9223372036854775806] at ome/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:62 in forward (_dynamo/variables/builder.py:2276 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0"
[INFO]:create_symbol s1 = 114 for L['features']['attention_mask'].size()[1] [2, 9223372036854775806] at ome/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:98 in forward (_dynamo/variables/builder.py:2276 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s1"
[INFO]:eval s0 <= 32768 [guard added] at transformers/src/transformers/models/mistral/modeling_mistral.py:120 in forward (_dynamo/variables/tensor.py:1041 in evaluate_expr), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="s0 <= 32768"
[INFO]:set_replacement s1 = s0 (solve) ValueRanges(lower=2, upper=32768, is_bool=False)
[INFO]:eval Eq(s1, s0) [guard added] at _dynamo/polyfill.py:54 in list_cmp (_dynamo/variables/tensor.py:1041 in evaluate_expr), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="Eq(s1, s0)"
[INFO]:Restarting analysis due to _dynamo/symbolic_convert.py:148 in fail_and_restart_analysis
[INFO]:Step 1: torchdynamo start tracing forward /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:388
[INFO]:create_symbol s0 = 114 for L['features']['input_ids'].size()[1] [2, 9223372036854775806] at ome/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:62 in forward (_dynamo/variables/builder.py:2276 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0"
[INFO]:create_symbol s1 = 114 for L['features']['attention_mask'].size()[1] [2, 9223372036854775806] at ome/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:98 in forward (_dynamo/variables/builder.py:2276 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s1"
[INFO]:eval s0 <= 32768 [guard added] at transformers/src/transformers/models/mistral/modeling_mistral.py:120 in forward (_dynamo/variables/tensor.py:1041 in evaluate_expr), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="s0 <= 32768"
[INFO]:set_replacement s1 = s0 (solve) ValueRanges(lower=2, upper=32768, is_bool=False)
[INFO]:eval Eq(s1, s0) [guard added] at _dynamo/polyfill.py:54 in list_cmp (_dynamo/variables/tensor.py:1041 in evaluate_expr), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="Eq(s1, s0)"
[INFO]:create_symbol s2 = 114 for L['features']['pool_mask'].size()[1] [2, 9223372036854775806] at ome/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:397 in forward (_dynamo/variables/builder.py:2276 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s2"
[INFO]:Step 2: calling compiler function inductor
[INFO]:Step 2: done compiler function inductor
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing forward /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:300
[INFO]:create_symbol s0 = 114 for L['hiddens'].size()[1] [2, 9223372036854775806] at ome/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:303 in forward (_dynamo/variables/builder.py:2276 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0"
[INFO]:Restarting analysis due to _dynamo/symbolic_convert.py:148 in fail_and_restart_analysis
[INFO]:Step 1: torchdynamo start tracing forward /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:300
[INFO]:create_symbol s0 = 114 for L['hiddens'].size()[1] [2, 9223372036854775806] at ome/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:303 in forward (_dynamo/variables/builder.py:2276 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0"
[INFO]:Step 2: calling compiler function inductor
[INFO]:Step 2: done compiler function inductor
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing forward /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:229
[INFO]:create_symbol s0 = 114 for L['x'].size()[1] [2, 9223372036854775806] at ome/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:230 in forward (_dynamo/variables/builder.py:2276 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0"
[INFO]:Restarting analysis due to _dynamo/symbolic_convert.py:148 in fail_and_restart_analysis
[INFO]:Step 1: torchdynamo start tracing forward /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:229
[INFO]:create_symbol s0 = 114 for L['x'].size()[1] [2, 9223372036854775806] at ome/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:230 in forward (_dynamo/variables/builder.py:2276 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0"
[INFO]:Step 2: calling compiler function inductor
[INFO]:Step 2: done compiler function inductor
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing torch_dynamo_resume_in_forward_at_234 /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:234
[INFO]:create_symbol s0 = 114 for L['x'].size()[1] [2, 9223372036854775806] at ome/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:273 in forward (_dynamo/variables/builder.py:2276 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0"
/home/mlazos/.conda/envs/pytorch-3.10/lib/python3.10/contextlib.py:103: FutureWarning: `torch.backends.cuda.sdp_kernel()` is deprecated. In the future, this context manager will be removed. Please see `torch.nn.attention.sdpa_kernel()` for the new context manager, with updated signature.
self.gen = func(*args, **kwds)
[INFO]:Restarting analysis due to _dynamo/symbolic_convert.py:148 in fail_and_restart_analysis
[INFO]:Step 1: torchdynamo start tracing torch_dynamo_resume_in_forward_at_234 /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:234
[INFO]:create_symbol s0 = 114 for L['x'].size()[1] [2, 9223372036854775806] at ome/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:235 in torch_dynamo_resume_in_forward_at_234 (_dynamo/variables/builder.py:2276 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0"
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing forward /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:271
[INFO]:create_symbol s0 = 114 for L['x'].size()[1] [2, 9223372036854775806] at ome/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:273 in forward (_dynamo/variables/builder.py:2276 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0"
/home/mlazos/.conda/envs/pytorch-3.10/lib/python3.10/contextlib.py:103: FutureWarning: `torch.backends.cuda.sdp_kernel()` is deprecated. In the future, this context manager will be removed. Please see `torch.nn.attention.sdpa_kernel()` for the new context manager, with updated signature.
self.gen = func(*args, **kwds)
[INFO]:Restarting analysis due to _dynamo/symbolic_convert.py:148 in fail_and_restart_analysis
[INFO]:Step 1: torchdynamo start tracing forward /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:271
[INFO]:create_symbol s0 = 114 for L['x'].size()[1] [2, 9223372036854775806] at ome/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:273 in forward (_dynamo/variables/builder.py:2276 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0"
[INFO]:Step 2: calling compiler function inductor
[INFO]:Step 2: done compiler function inductor
[INFO]:produce_guards
/home/mlazos/.conda/envs/pytorch-3.10/lib/python3.10/contextlib.py:103: FutureWarning: `torch.backends.cuda.sdp_kernel()` is deprecated. In the future, this context manager will be removed. Please see `torch.nn.attention.sdpa_kernel()` for the new context manager, with updated signature.
self.gen = func(*args, **kwds)
[INFO]:Step 1: torchdynamo start tracing rearrange /home/mlazos/.conda/envs/pytorch-3.10/lib/python3.10/site-packages/einops/einops.py:536
[INFO]:create_symbol s0 = 114 for L['tensor'].size()[1] [2, 9223372036854775806] at ome/mlazos/.conda/envs/pytorch-3.10/lib/python3.10/site-packages/einops/einops.py:591 in rearrange (_dynamo/variables/builder.py:2276 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0"
[INFO]:Step 1: torchdynamo done tracing rearrange (RETURN_VALUE)
[INFO]:Step 2: calling compiler function inductor
[INFO]:Step 2: done compiler function inductor
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing torch_dynamo_resume_in_forward_at_305 /home/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:305
[INFO]:create_symbol s0 = 114 for L['___stack0'].size()[1] [2, 9223372036854775806] at ome/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:305 in torch_dynamo_resume_in_forward_at_305 (_dynamo/variables/builder.py:2276 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0"
[INFO]:create_symbol s1 = 114 for L['hiddens'].size()[1] [2, 9223372036854775806] at ome/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:305 in torch_dynamo_resume_in_forward_at_305 (_dynamo/variables/builder.py:2276 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s1"
[INFO]:set_replacement s1 = s0 (solve) ValueRanges(lower=2, upper=9223372036854775806, is_bool=False)
[INFO]:eval Eq(s0, s1) [guard added] at ome/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:305 in torch_dynamo_resume_in_forward_at_305 (_subclasses/fake_impls.py:1016 in infer_size), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="Eq(s0, s1)"
[INFO]:create_symbol s2 = 114 for L['attention_mask'].size()[1] [2, 9223372036854775806] at ome/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:307 in torch_dynamo_resume_in_forward_at_305 (_dynamo/variables/builder.py:2276 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s2"
[INFO]:set_replacement s2 = s0 (solve) ValueRanges(lower=2, upper=9223372036854775806, is_bool=False)
[INFO]:eval Eq(s0, s2) [guard added] at ome/mlazos/.cache/huggingface/modules/transformers_modules/nvidia/NV-Embed-v1/97aefcdd69565404f4a24de8ca4eb8114cb25ff0/modeling_nvembed.py:307 in torch_dynamo_resume_in_forward_at_305 (_subclasses/fake_impls.py:1016 in infer_size), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="Eq(s0, s2)"
[INFO]:Step 1: torchdynamo done tracing torch_dynamo_resume_in_forward_at_305 (RETURN_VALUE)
[INFO]:Step 2: calling compiler function inductor
[INFO]:Step 2: done compiler function inductor
[INFO]:produce_guards
[INFO]:Step 1: torchdynamo start tracing torch_dynamo_resume_in_fn_at_34 /data/users/mlazos/empathy_day/nvembed.py:34
created embeddings
Traceback (most recent call last):
File "/data/users/mlazos/empathy_day/nvembed.py", line 39, in <module>
query_embeddings = F.normalize(query_embeddings, p=2, dim=1)
NameError: name 'query_embeddings' is not defined
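The crash is a plain Python NameError in the driver script, not a compile failure: `nvembed.py` line 39 normalizes `query_embeddings`, but that name was never bound before use. A hypothetical reconstruction of the failing lines, with a stand-in tensor in place of the real encode output:

```python
import torch
import torch.nn.functional as F

# Stand-in for the embeddings the script produces before line 39; in the
# real script this name must be bound to the encode output first.
query_embeddings = torch.randn(2, 4096)

# The line that raised NameError once the name is actually defined:
query_embeddings = F.normalize(query_embeddings, p=2, dim=1)

# Each row now has unit L2 norm.
assert torch.allclose(query_embeddings.norm(dim=1), torch.ones(2), atol=1e-5)
```

The fix in the real script is to assign the encode result to `query_embeddings` (or normalize whatever name it was actually stored under) before this line runs.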
[INFO]:TorchDynamo compilation metrics:
Function, Runtimes (s)
_compile.<locals>.compile_inner, 172.2320
OutputGraph.call_user_compiler, 142.7964
create_aot_dispatcher_function, 141.1614
compile_fx.<locals>.fw_compiler_base, 112.2122
compile_fx_inner, 107.1212
GraphLowering.run, 9.7853
GraphLowering.compile_to_module, 86.5707
Scheduler.__init__, 7.4592
Scheduler.codegen, 45.7911
WrapperCodeGen.generate, 0.3617
compile_file, 107.9520