We suffer more in imagination than in reality. - Seneca
Right now
- fear:
  - prevent by:
  - repair by:
- fear:
module {
  func.func @torch_jit(%arg0: !torch.vtensor<[3,300,400],f32>, %arg1: !torch.vtensor<[3,500,400],f32>) -> (!torch.vtensor<[?,4],f32>, !torch.vtensor<[?,?,?],f32>, !torch.vtensor<[?,?],f32>, !torch.vtensor<[?],si64>, !torch.vtensor<[?],f32>, !torch.vtensor<[?,4],f32>, !torch.vtensor<[?,?,?],f32>, !torch.vtensor<[?,?],f32>, !torch.vtensor<[?],si64>, !torch.vtensor<[?],f32>) attributes {torch.onnx_meta.ir_version = 8 : si64, torch.onnx_meta.opset_version = 17 : si64, torch.onnx_meta.producer_name = "pytorch", torch.onnx_meta.producer_version = "1.13.1"} {
    %none = torch.constant.none
    %0 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<_> : tensor<f32>} : () -> !torch.vtensor<[],f32>
    %1 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__1> : tensor<f32>} : () -> !torch.vtensor<[],f32>
    %2 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__2> : tensor<f32>} : () -> !torch.vtensor<[],f32>
    %3 = torch.operator "onnx.Constant"
/home/azureuser/iree-build/tools/iree-compile: /home/azureuser/miniconda/lib/libtinfo.so.6: no version information available (required by /home/azureuser/iree-build/lib/libIREECompiler.so)
deit-small-distilled-patch16-224.default.pytorch.torch.mlir:1029:12: error: failed to legalize operation 'torch.aten.squeeze' that was explicitly marked illegal
%999 = torch.aten.squeeze %998 : !torch.vtensor<[1,1,384],f32> -> !torch.vtensor<[1,384],f32>
       ^
deit-small-distilled-patch16-224.default.pytorch.torch.mlir:1029:12: note: see current operation: %13245 = "torch.aten.squeeze"(%13244) : (!torch.vtensor<[1,1,384],f32>) -> !torch.vtensor<[1,384],f32>
iree-compile: /home/azureuser/iree/third_party/llvm-project/mlir/include/mlir/IR/UseDefLists.h:198: mlir::IRObjectWithUseList<mlir::OpOperand>::~IRObjectWithUseList() [OperandType = mlir::OpOperand]: Assertion `use_empty() && "Cannot destroy a value that still has uses!"' failed.
Please report issues to https://github.com/openxla/iree/issues and include the crash backtrace.
# Description: This script is used to test the model deit-small-distilled-patch16-224.default.pytorch.torch.stripped.mlir
# run original model and print ir after failure
/home/azureuser/iree-build/tools/iree-compile --iree-input-demote-i64-to-i32 --iree-hal-target-backends=llvm-cpu stripped/deit-small-distilled-patch16-224.default.pytorch.torch.stripped.mlir -o deit-small-distilled-patch16-224.default.stripped.vmfb --mlir-print-debuginfo --mlir-print-ir-after-failure |& gh gist create - -d "native_layer_norm ir dump after failure"
# run again with --debug and grep for `(tensor<198xf32>) -> tensor<?x198xf32>` and pass names
# grep patterns:
# `(tensor<198xf32>) -> tensor<?x198xf32>`
# `IR Dump After`
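The grep workflow above can be sketched as a small shell session. The dump file name and its contents here are invented for illustration; the real file is the stderr of the iree-compile run with the IR-printing flags.

```shell
# Stand-in for the --mlir-print-ir-after-failure dump (contents invented).
cat > /tmp/ir_dump_sample.txt <<'EOF'
// -----// IR Dump After ConvertTorchOnnxToTorch Failed //----- //
%1 = "some.op"(%0) : (tensor<198xf32>) -> tensor<?x198xf32>
EOF

# Which pass was running when the dump was taken:
grep -n 'IR Dump After' /tmp/ir_dump_sample.txt

# Where the suspicious shape cast appears (-F matches the parentheses
# and angle brackets literally instead of as regex metacharacters):
grep -nF '(tensor<198xf32>) -> tensor<?x198xf32>' /tmp/ir_dump_sample.txt
```

Running the same two greps over the real dump narrows the failure down to the pass whose `IR Dump After` header immediately follows the bad cast.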
OVERVIEW: MLIR modular optimizer driver
Available Dialects: builtin, chlo, complex, func, linalg, memref, ml_program, quant, scf, sparse_tensor, stablehlo, tensor, tm_tensor, torch, torch_c, tosa, vhlo
USAGE: torch-mlir-opt [options] <input file>
OPTIONS:
  Color Options:
    --color - Use colors in output (default=autodetect)
OVERVIEW: IREE compilation driver
USAGE: iree-compile [options] <input file or '-' for stdin>
OPTIONS:
  CUDA HAL Target:
    --iree-hal-cuda-dump-ptx - Dump ptx to the debug stream.
    --iree-hal-cuda-llvm-target-arch=<string> - LLVM target chip.
~/torch-mlir/build/bin/torch-mlir-opt --convert-torch-to-linalg --convert-torch-to-tmtensor --debug -mlir-disable-threading -mlir-print-ir-after-all ./stripped-opt-125M.fp32.onnx.torch.mlir &> /tmp/torchopt.out
/home/azureuser/torch-mlir/build/bin/torch-mlir-opt: /home/azureuser/miniconda/lib/libtinfo.so.6: no version information available (required by /home/azureuser/torch-mlir/build/bin/torch-mlir-opt)
Args: /home/azureuser/torch-mlir/build/bin/torch-mlir-opt --convert-torch-to-linalg --convert-torch-to-tmtensor --debug -mlir-disable-threading -mlir-print-ir-after-all ./stripped-opt-125M.fp32.onnx.torch.mlir
Load new dialect in Context builtin
ImplicitTypeIDRegistry::lookupOrInsert(mlir::ShapedType)
ImplicitTypeIDRegistry::lookupOrInsert(mlir::MemRefLayoutAttrInterface)
ImplicitTypeIDRegistry::lookupOrInsert(mlir::TypedAttr)
ImplicitTypeIDRegistry::lookupOrInsert(mlir::ElementsAttr)
ImplicitTypeIDRegistry::lookupOrInsert(mlir::DistinctAttr)
/home/azureuser/iree-build/tools/iree-compile: /home/azureuser/miniconda/lib/libtinfo.so.6: no version information available (required by /home/azureuser/iree-build/lib/libIREECompiler.so)
iree-compile: iree/third_party/llvm-project/llvm/include/llvm/Support/Casting.h:566: decltype(auto) llvm::cast(const From &) [To = mlir::DenseElementsAttr, From = mlir::Attribute]: Assertion `isa<To>(Val) && "cast<Ty>() argument of incompatible type!"' failed.
Please report issues to https://github.com/openxla/iree/issues and include the crash backtrace.
Stack dump:
0. Program arguments: /home/azureuser/iree-build/tools/iree-compile --iree-hal-target-backends=llvm-cpu opt-125M.fp32.onnx.torch.mlir
Stack dump without symbol names (ensure you have llvm-symbolizer in your PATH or set the environment var `LLVM_SYMBOLIZER_PATH` to point to it):
0 libIREECompiler.so 0x00007fed01436997 llvm::sys::PrintStackTrace(llvm::raw_ostream&, int) + 39
1 libIREECompiler.so 0x00007fed01434bc0 llvm::sys::RunSignalHandlers() + 80
2 libIREECo
/home/azureuser/iree-build/tools/iree-compile: /home/azureuser/miniconda/lib/libtinfo.so.6: no version information available (required by /home/azureuser/iree-build/lib/libIREECompiler.so)
Args: /home/azureuser/iree-build/tools/iree-compile --iree-hal-target-backends=llvm-cpu -o output.vmfb stripped-opt-125M.fp32.onnx.torch.mlir --debug
Load new dialect in Context builtin
ImplicitTypeIDRegistry::lookupOrInsert(mlir::ShapedType)
ImplicitTypeIDRegistry::lookupOrInsert(mlir::MemRefLayoutAttrInterface)
ImplicitTypeIDRegistry::lookupOrInsert(mlir::TypedAttr)
ImplicitTypeIDRegistry::lookupOrInsert(mlir::ElementsAttr)
ImplicitTypeIDRegistry::lookupOrInsert(mlir::DistinctAttr)
ImplicitTypeIDRegistry::lookupOrInsert(mlir::BytecodeOpInterface)
ImplicitTypeIDRegistry::lookupOrInsert(mlir::SymbolOpInterface)
import onnx
import numpy as np
from onnx import numpy_helper, TensorProto, save_model
from onnx.helper import make_model, make_node, make_graph, make_tensor_value_info
from onnx.checker import check_model

# condition has to be a float tensor
condition = make_tensor_value_info('condition', TensorProto.FLOAT, [1])