@llandsmeer
Created October 14, 2023 18:31
ONNX model zoo operator use

Operators ranked by how often they appear across the zoo's 190 models (longer bar = more frequent):

'Conv' #############################################################################################################################################################
'MaxPool' #################################################################################################################################################
'Relu' #############################################################################################################################################
'Reshape' ##################################################################################################################################
'Softmax' ##########################################################################################################
'Concat' ##################################################################################################
'Add' #################################################################################################
'Gemm' #########################################################################################
'Mul' ########################################################
'Unsqueeze' ########################################################
'Dropout' ########################################################
'BatchNormalization' ######################################################
'Transpose' ##################################################
'Shape' #################################################
'Gather' ################################################
'LRN' ###############################################
'Cast' ##########################################
'Slice' ##########################################
'Sub' ##########################################
'DequantizeLinear' #########################################
'AveragePool' #########################################
'QuantizeLinear' #######################################
'MatMul' ################################
'Squeeze' ################################
'Div' ################################
'Constant' #############################
'GlobalAveragePool' #############################
'Flatten' ##########################
'Exp' ########################
'ConstantOfShape' ######################
'QLinearConv' ######################
'Split' ###################
'Sigmoid' ###################
'Sqrt' ##################
'QLinearAdd' ##################
'Resize' ##################
'NonMaxSuppression' ################
'ReduceMin' ################
'Clip' ###############
'Sum' ###############
'ReduceMean' ##############
'QLinearMatMul' ##############
'Log' #############
'Floor' #############
'Expand' ############
'TopK' ############
'NonZero' ###########
'Upsample' ###########
'Pow' ##########
'Tile' ##########
'Less' ##########
'InstanceNormalization' ##########
'Pad' ##########
'Tanh' #########
'Equal' #########
'Loop' #########
'Greater' ########
'RoiAlign' ########
'LeakyRelu' #######
'Identity' ######
'Ceil' ######
'ScatterElements' ######
'Not' #####
'QLinearConcat' #####
'Reciprocal' ####
'And' ####
'ConvTranspose' ####
'QLinearGlobalAveragePool' ####
'OneHot' ###
'Min' ###
'Where' ###
'Abs' ###
'QLinearAveragePool' ###
'DynamicQuantizeLinear' ##
'MatMulInteger' ##
'Neg' ##
'Range' ##
'CumSum' ##
'Erf' ##
'ArgMax' ##
'CategoryMapper' ##
'Compress' ##
'Hardmax' ##
'ReduceMax' ##
'ReduceSum' ##
'Scan' ##
'QLinearSigmoid' ##
'Scatter' ##
'PRelu' ##
'FusedMatMul' #
'LessOrEqual' #
'Max' #
'LSTM' #
'ConvInteger' #
'DynamicQuantizeLSTM' #
'Round' #
'QLinearLeakyRelu' #
'preprocess' #
'QLinearMul' #
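A tally like the one above could be produced with a short script. The sketch below is a minimal, hypothetical reconstruction: it assumes graphs with the shape of `onnx.GraphProto` (a `node` list whose entries carry an `op_type` string); in practice they would come from `onnx.load(path).graph` over the zoo's `.onnx` files, and the exact bar scaling used here is a guess.

```python
from collections import Counter

def op_histogram(graphs):
    """Count operator types across a collection of ONNX graphs.

    Each graph is assumed to expose a `node` list whose entries
    carry an `op_type` string, as in onnx.GraphProto.
    """
    counts = Counter()
    for graph in graphs:
        for node in graph.node:
            counts[node.op_type] += 1
    return counts

def render(counts, width=160):
    """Print one line per operator, most frequent first.

    Bar length is scaled so the most frequent operator gets
    `width` '#' characters (scaling here is an assumption).
    """
    top = max(counts.values())
    for op, n in counts.most_common():
        print(f"{op!r} " + "#" * max(1, round(n / top * width)))
```

Called as `render(op_histogram(graphs))`, this emits output in the same `'OpName' ###...` shape as the listing above.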
Greedy coverage sweep: adding operators in descending frequency order, how many of the 190 models become fully supported after each addition:

with 1 ops we support 0 / 190 = 0 % by adding Conv
with 2 ops we support 0 / 190 = 0 % by adding MaxPool
with 3 ops we support 0 / 190 = 0 % by adding Relu
with 4 ops we support 0 / 190 = 0 % by adding Reshape
with 5 ops we support 0 / 190 = 0 % by adding Softmax
with 6 ops we support 0 / 190 = 0 % by adding Concat
with 7 ops we support 0 / 190 = 0 % by adding Add
with 8 ops we support 0 / 190 = 0 % by adding Gemm
with 9 ops we support 0 / 190 = 0 % by adding Mul
with 10 ops we support 0 / 190 = 0 % by adding Unsqueeze
with 11 ops we support 8 / 190 = 4 % by adding Dropout
with 12 ops we support 8 / 190 = 4 % by adding BatchNormalization
with 13 ops we support 8 / 190 = 4 % by adding Transpose
with 14 ops we support 8 / 190 = 4 % by adding Shape
with 15 ops we support 8 / 190 = 4 % by adding Gather
with 16 ops we support 31 / 190 = 16 % by adding LRN
with 17 ops we support 31 / 190 = 16 % by adding Cast
with 18 ops we support 31 / 190 = 16 % by adding Slice
with 19 ops we support 31 / 190 = 16 % by adding Sub
with 20 ops we support 31 / 190 = 16 % by adding DequantizeLinear
with 21 ops we support 51 / 190 = 27 % by adding AveragePool
with 22 ops we support 52 / 190 = 27 % by adding QuantizeLinear
with 23 ops we support 59 / 190 = 31 % by adding MatMul
with 24 ops we support 60 / 190 = 32 % by adding Squeeze
with 25 ops we support 62 / 190 = 33 % by adding Div
with 26 ops we support 65 / 190 = 34 % by adding Constant
with 27 ops we support 84 / 190 = 44 % by adding GlobalAveragePool
with 28 ops we support 98 / 190 = 52 % by adding Flatten
with 29 ops we support 100 / 190 = 53 % by adding Exp
with 30 ops we support 100 / 190 = 53 % by adding ConstantOfShape
with 31 ops we support 100 / 190 = 53 % by adding QLinearConv
with 32 ops we support 100 / 190 = 53 % by adding Split
with 33 ops we support 100 / 190 = 53 % by adding Sigmoid
with 34 ops we support 100 / 190 = 53 % by adding Sqrt
with 35 ops we support 100 / 190 = 53 % by adding QLinearAdd
with 36 ops we support 105 / 190 = 55 % by adding Resize
with 37 ops we support 105 / 190 = 55 % by adding NonMaxSuppression
with 38 ops we support 105 / 190 = 55 % by adding ReduceMin
with 39 ops we support 108 / 190 = 57 % by adding Clip
with 40 ops we support 121 / 190 = 64 % by adding Sum
with 41 ops we support 124 / 190 = 65 % by adding ReduceMean
with 42 ops we support 132 / 190 = 69 % by adding QLinearMatMul
with 43 ops we support 132 / 190 = 69 % by adding Log
with 44 ops we support 132 / 190 = 69 % by adding Floor
with 45 ops we support 132 / 190 = 69 % by adding Expand
with 46 ops we support 136 / 190 = 72 % by adding TopK
with 47 ops we support 136 / 190 = 72 % by adding NonZero
with 48 ops we support 137 / 190 = 72 % by adding Upsample
with 49 ops we support 137 / 190 = 72 % by adding Pow
with 50 ops we support 137 / 190 = 72 % by adding Tile
with 51 ops we support 137 / 190 = 72 % by adding Less
with 52 ops we support 137 / 190 = 72 % by adding InstanceNormalization
with 53 ops we support 147 / 190 = 77 % by adding Pad
with 54 ops we support 148 / 190 = 78 % by adding Tanh
with 55 ops we support 148 / 190 = 78 % by adding Equal
with 56 ops we support 151 / 190 = 79 % by adding Loop
with 57 ops we support 151 / 190 = 79 % by adding Greater
with 58 ops we support 151 / 190 = 79 % by adding RoiAlign
with 59 ops we support 155 / 190 = 82 % by adding LeakyRelu
with 60 ops we support 155 / 190 = 82 % by adding Identity
with 61 ops we support 157 / 190 = 83 % by adding Ceil
with 62 ops we support 159 / 190 = 84 % by adding ScatterElements
with 63 ops we support 159 / 190 = 84 % by adding Not
with 64 ops we support 160 / 190 = 84 % by adding QLinearConcat
with 65 ops we support 161 / 190 = 85 % by adding Reciprocal
with 66 ops we support 161 / 190 = 85 % by adding And
with 67 ops we support 163 / 190 = 86 % by adding ConvTranspose
with 68 ops we support 166 / 190 = 87 % by adding QLinearGlobalAveragePool
with 69 ops we support 168 / 190 = 88 % by adding OneHot
with 70 ops we support 169 / 190 = 89 % by adding Min
with 71 ops we support 170 / 190 = 89 % by adding Where
with 72 ops we support 170 / 190 = 89 % by adding Abs
with 73 ops we support 172 / 190 = 91 % by adding QLinearAveragePool
with 74 ops we support 172 / 190 = 91 % by adding DynamicQuantizeLinear
with 75 ops we support 172 / 190 = 91 % by adding MatMulInteger
with 76 ops we support 172 / 190 = 91 % by adding Neg
with 77 ops we support 173 / 190 = 91 % by adding Range
with 78 ops we support 174 / 190 = 92 % by adding CumSum
with 79 ops we support 176 / 190 = 93 % by adding Erf
with 80 ops we support 176 / 190 = 93 % by adding ArgMax
with 81 ops we support 176 / 190 = 93 % by adding CategoryMapper
with 82 ops we support 176 / 190 = 93 % by adding Compress
with 83 ops we support 176 / 190 = 93 % by adding Hardmax
with 84 ops we support 176 / 190 = 93 % by adding ReduceMax
with 85 ops we support 176 / 190 = 93 % by adding ReduceSum
with 86 ops we support 176 / 190 = 93 % by adding Scan
with 87 ops we support 178 / 190 = 94 % by adding QLinearSigmoid
with 88 ops we support 180 / 190 = 95 % by adding Scatter
with 89 ops we support 182 / 190 = 96 % by adding PRelu
with 90 ops we support 183 / 190 = 96 % by adding FusedMatMul
with 91 ops we support 183 / 190 = 96 % by adding LessOrEqual
with 92 ops we support 184 / 190 = 97 % by adding Max
with 93 ops we support 185 / 190 = 97 % by adding LSTM
with 94 ops we support 185 / 190 = 97 % by adding ConvInteger
with 95 ops we support 186 / 190 = 98 % by adding DynamicQuantizeLSTM
with 96 ops we support 187 / 190 = 98 % by adding Round
with 97 ops we support 188 / 190 = 99 % by adding QLinearLeakyRelu
with 98 ops we support 189 / 190 = 99 % by adding preprocess
with 99 ops we support 190 / 190 = 100 % by adding QLinearMul
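The sweep above can be reproduced with a few lines: maintain a growing set of supported operators and, after each addition, count the models whose entire op set is a subset of it. This is a self-contained sketch; the model op sets below are toy stand-ins, not the zoo's actual data.

```python
def coverage_sweep(model_ops, ranked_ops):
    """For each prefix of `ranked_ops`, count fully covered models.

    model_ops:  list of sets, one per model, of the op types it uses.
    ranked_ops: op types ordered by descending usage frequency.
    Yields (num_ops, num_supported, op_added) per step.
    """
    supported = set()
    for i, op in enumerate(ranked_ops, start=1):
        supported.add(op)
        # A model is supported once every op it uses is in `supported`.
        n = sum(ops <= supported for ops in model_ops)
        yield i, n, op

# Toy example (three hypothetical models):
models = [{'Conv', 'Relu'}, {'Conv', 'Gemm'}, {'Relu'}]
for i, n, op in coverage_sweep(models, ['Conv', 'Relu', 'Gemm']):
    pct = round(n / len(models) * 100)
    print(f"with {i} ops we support {n} / {len(models)} = {pct} % by adding {op}")
```

Feeding it the zoo's per-model op sets and the frequency ranking from the histogram would regenerate the table above line for line.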