test_auto_register_torchops.py::TestFallbackToTorch::test_alexnet
Note: If you have a model or program that is not supported yet but should be, please use the program coverage template.
🐛 Bug

As per title.
To Reproduce

Use pjnl-20240919, or nightly pytorch, torchvision, and lightning-thunder, then run:

```
pytest thunder/tests/test_auto_register_torchops.py -v -k test_alexnet
```
```
[2024-09-20 21:35:17] thunder/tests/test_auto_register_torchops.py::TestFallbackToTorch::test_alexnet FAILED [100%]

================================================================= FAILURES =================================================================
_____________________________________________________ TestFallbackToTorch.test_alexnet _____________________________________________________

self = <thunder.tests.test_auto_register_torchops.TestFallbackToTorch object at 0x79d937320f40>

    @requiresCUDA
    def test_alexnet(self):
        torchvision = pytest.importorskip("torchvision")
        for op in _skip_ops_alexnet:
            register_default_torch_op(op.torch_reference, torch.nn.functional)
            self._tmp_update_jit_lookup(op.torch_reference)
        tdtype = torch.float32
        device = torch.device("cuda")
        model = torchvision.models.alexnet(weights=None).to(device=device, dtype=tdtype)
        model = model.train()
        executor = TorchExecutor
        jitted = executor.make_callable(model)
        x = make_tensor((1, 3, 224, 224), dtype=tdtype, device=device)
>       cache_entry, _, _ = thunder.compile_data(jitted).get_computation_and_inputs(x)

thunder/tests/test_auto_register_torchops.py:214:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
thunder/core/langctxs.py:136: in _fn
    result = fn(*args, **kwargs)
thunder/__init__.py:219: in cache_info_wrapper
    res = fn(*args, **kwargs)
thunder/__init__.py:630: in get_computation_and_inputs
    computation_trc, backward_trc = split_forward_backward(computation_trc, cd, cs, *inps)
thunder/executors/torch_autograd.py:137: in split_forward_backward
    fw_trace, bw_trace = forward_and_backward_from_trace(primal_trace, torch_autograd=True)
thunder/core/transforms.py:2987: in forward_and_backward_from_trace
    forward_trace = construct_trace()(augmented_forward_fn, *trace.args, **trace.kwargs)
thunder/core/interpreter.py:1317: in fn_
    return fn(*args, **kwargs)
thunder/common.py:576: in _trace
    result = fn(*proxyargs, **proxykwargs)
thunder/core/transforms.py:2965: in augmented_forward_fn
    result, env = augmented_forward_pass(*args, trace=trace, **kwargs)
thunder/core/transforms.py:2568: in augmented_forward_pass
    result, env = eval_trace(
thunder/core/trace_interpreter.py:60: in interpret_trace
    prim_func = symbol_mapper(symbol) if symbol_mapper is not None else symbol.sym
thunder/core/transforms.py:2492: in vjp_symbol_mapper
    vjp_impl, backward_fn = make_aug_forward_and_backward(symbol)
thunder/core/vjp_utils.py:94: in make_aug_forward_and_backward
    bw_outputs = {name: bw_output for name, bw_output in utils.safe_zip(meta_parameters, bw_outputs_args)}
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

.0 = <zip object at 0x79d9357fd8c0>

>   bw_outputs = {name: bw_output for name, bw_output in utils.safe_zip(meta_parameters, bw_outputs_args)}
E   ValueError: zip() argument 2 is longer than argument 1

thunder/core/vjp_utils.py:94: ValueError
========================================================= short test summary info ==========================================================
FAILED thunder/tests/test_auto_register_torchops.py::TestFallbackToTorch::test_alexnet - ValueError: zip() argument 2 is longer than argument 1
============================================== 1 failed, 1529 deselected, 1 warning in 7.03s ===============================================
```