1. Operator support list

This section describes AX620A/U support for ONNX operators.

| Operator | CPU or NPU | Constraints | ONNX OpSet |
| --- | --- | --- | --- |
| Abs | NPU | | v13 |
| Add | NPU | Only two inputs are supported; the shapes need to be the same; automatic broadcasting is supported (see the broadcasting sketch after this table) | v13 |
| ArgMax | NPU | The input must be a 4-dimensional tensor, and the operator must follow a Conv operator | v13 |
| ArgMin | NPU | The input must be a 4-dimensional tensor, and the operator must follow a Conv operator | v13 |
| AveragePool | NPU | The auto_pad and count_include_pad attributes are not supported; pads must be symmetrical | |
| BatchNormalization | NPU | Must follow a Conv operator | |
| Clip | NPU | | |
| Concat | NPU | | |
| Conv | NPU | in_channels % groups == 0 and out_channels % groups == 0; kernel size must be in [1, 18]; strides generally take [1, 1], [1, 2], [2, 1], [2, 2], or [3, 3], and when strides equal kernel_size the patterns (n, n), (n, 1), and (1, n) are supported, where n must not be 2 or 3 for (n, 1) / (1, n); width-direction padding (before/after the feature map) must be <= 8191 and top/bottom padding must be <= 16384; the kernel size after dilation must also be <= 18, otherwise an error is likely; the output width must be divisible by stride_w, i.e. output_shape_width % stride_w == 0; kernel_size_width * kernel_size_height * input_channel <= 65535 (see the validation sketch after this table) | |
| ConvTranspose | NPU | The auto_pad attribute is not supported; pads must be symmetrical; dilation currently only supports 1 | |
| DepthToSpace | NPU | | |
| Div | NPU | Only two inputs are supported; the shapes need to be the same; automatic broadcasting is supported | |
| Flatten | NPU | Flatten is generally used together with Linear (fully connected) layers | |
| GRU | CPU | | |
| Gemm | NPU | | |
| GlobalAveragePool | NPU | | |
| GlobalMaxPool | NPU | | |
| HardSigmoid | NPU | Only supported when it follows a Conv operator | |
| Identity | NPU | | |
| LRN | CPU | | |
| LSTM | CPU / NPU | | |
| LeakyRelu | NPU | | |
| MatMul | NPU | | |
| MaxPool | NPU | The storage_order attribute is not supported; dilation currently only supports 1 | |
| MaxRoiPool | NPU | | |
| Mul | NPU | Only two inputs are supported; the shapes need to be the same; automatic broadcasting is supported | |
| PRelu | NPU | | |
| Pad | NPU | Only constant mode is supported, with a pad value of 0 or -inf; reflect and edge modes are not supported | |
| ReduceL2 | NPU | Only 4D tensor input is supported | |
| ReduceMax | CPU / NPU | Only 4D tensor input is supported | |
| ReduceMean | CPU / NPU | Only 4D tensor input is supported | |
| ReduceSum | CPU / NPU | Only 4D tensor input is supported | |
| ReLU | NPU | | |
| Reshape | NPU | Only 3D / 4D tensor inputs are supported | |
| Resize | CPU / NPU | | |
| Shape | CPU / NPU | | |
| Sigmoid | NPU | | |
| Slice | NPU | Only step = 1 is supported | |
| Softmax | CPU / NPU | | |
| Softplus | CPU / NPU | Only supported when it follows a Conv operator | |
| SpaceToDepth | NPU | | |
| Sub | NPU | Only two inputs are supported; the shapes need to be the same; automatic broadcasting is supported | |
| Tanh | NPU | | |
| Tile | CPU / NPU | | |
| Transpose | CPU / NPU | | |
| Unsqueeze | CPU / NPU | | |
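
The Conv constraints above are mostly simple arithmetic checks on a layer's hyperparameters. The following is a minimal Python sketch of how they could be validated before model export; `check_conv_constraints` and its parameter names are hypothetical helpers written for illustration, not part of the AX620A/U toolchain, and the sketch assumes a 2-D NCHW convolution.

```python
# Minimal sketch: check a Conv layer's hyperparameters against the
# constraints listed in the table above. All names are hypothetical.

def check_conv_constraints(in_channels, out_channels, group,
                           kernel_shape, strides, dilations,
                           pads, output_width):
    """Return a list of human-readable constraint violations (empty = OK).

    pads is (top, left, bottom, right), following ONNX Conv's
    [x1_begin, x2_begin, x1_end, x2_end] layout for 2-D convolution.
    """
    problems = []

    # Grouped convolution: channels must divide evenly into the groups.
    if in_channels % group != 0:
        problems.append("in_channels % group != 0")
    if out_channels % group != 0:
        problems.append("out_channels % group != 0")

    kh, kw = kernel_shape
    sh, sw = strides
    dh, dw = dilations

    # Kernel size must stay within [1, 18], including the dilated size.
    eff_kh = dh * (kh - 1) + 1
    eff_kw = dw * (kw - 1) + 1
    if not (1 <= kh <= 18 and 1 <= kw <= 18):
        problems.append("kernel size outside [1, 18]")
    if eff_kh > 18 or eff_kw > 18:
        problems.append("dilated kernel size exceeds 18")

    # Common stride patterns; when strides equal kernel_shape, (n, n),
    # (n, 1) and (1, n) are allowed, but n must not be 2 or 3 for the
    # (n, 1) / (1, n) cases.
    common = {(1, 1), (1, 2), (2, 1), (2, 2), (3, 3)}
    if (sh, sw) not in common:
        if (sh, sw) == (kh, kw):
            if (sh == 1 or sw == 1) and max(sh, sw) in (2, 3):
                problems.append("(n, 1) / (1, n) stride with n == 2 or 3")
        else:
            problems.append(f"unusual strides {(sh, sw)}")

    # Padding limits: width direction <= 8191, top/bottom <= 16384.
    top, left, bottom, right = pads
    if left > 8191 or right > 8191:
        problems.append("width padding exceeds 8191")
    if top > 16384 or bottom > 16384:
        problems.append("top/bottom padding exceeds 16384")

    # Output width must be divisible by the width stride.
    if output_width % sw != 0:
        problems.append("output width not divisible by stride_w")

    # Limit on kernel_w * kernel_h * in_channels.
    if kw * kh * in_channels > 65535:
        problems.append("kernel_w * kernel_h * in_channels > 65535")

    return problems


if __name__ == "__main__":
    # Example: a 3x3, stride-2 convolution with 64 input channels.
    print(check_conv_constraints(
        in_channels=64, out_channels=128, group=1,
        kernel_shape=(3, 3), strides=(2, 2), dilations=(1, 1),
        pads=(1, 1, 1, 1), output_width=112))
```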
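
For the elementwise operators (Add, Sub, Mul, Div), the table requires exactly two inputs whose shapes are the same or can be handled by automatic broadcasting. The short sketch below shows one way to pre-check shape compatibility, assuming ONNX's standard multidirectional (NumPy-style) broadcasting rules apply; `elementwise_inputs_supported` is a hypothetical helper, not a toolchain API.

```python
# Minimal sketch: check whether two input shapes satisfy the listed
# constraint for Add / Sub / Mul / Div (same shape, or broadcastable).

def elementwise_inputs_supported(shape_a, shape_b):
    """Return True if the two shapes are identical or broadcastable."""
    if shape_a == shape_b:
        return True
    # Align from the trailing dimension, as multidirectional (NumPy-style)
    # broadcasting does; each pair must match or one side must be 1.
    for a, b in zip(reversed(shape_a), reversed(shape_b)):
        if a != b and a != 1 and b != 1:
            return False
    return True


print(elementwise_inputs_supported((1, 64, 56, 56), (1, 64, 1, 1)))    # True
print(elementwise_inputs_supported((1, 64, 56, 56), (1, 32, 56, 56)))  # False
```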