Input feature map size (Width × Height): No limit
Precision: INT8
Normal and Depth-wise Convolution Operations
Non-normal Convolution Operations (such as transposed
convolution and dilated convolution)
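As an illustration of the convolution variants listed above, the sketch below expresses them as PyTorch layers; the channel counts and tensor sizes are arbitrary examples, not part of the NNE50 toolchain, and the tensors are kept in floating point for clarity even though the engine itself computes in INT8.

```python
# Illustrative sketch of the supported convolution variants (PyTorch layers
# used as stand-ins; not the NNE50 toolchain API).
import torch
import torch.nn as nn

x = torch.randn(1, 16, 64, 64)  # NCHW input; sizes chosen arbitrarily

normal     = nn.Conv2d(16, 32, kernel_size=3, padding=1)             # normal convolution
depthwise  = nn.Conv2d(16, 16, kernel_size=3, padding=1, groups=16)  # depth-wise: groups == channels
transposed = nn.ConvTranspose2d(16, 16, kernel_size=2, stride=2)     # "non-normal": transposed convolution
dilated    = nn.Conv2d(16, 32, kernel_size=3, padding=2, dilation=2) # "non-normal": dilated convolution

print(normal(x).shape, depthwise(x).shape, transposed(x).shape, dilated(x).shape)
```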
Pooling Operations
Average Pooling and Maximum Pooling algorithms
Window size (Width × Height): 2 × 2, 4 × 4
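The pooling configurations above can likewise be sketched with PyTorch modules; the kernel sizes match the listed 2 × 2 and 4 × 4 windows, everything else is an arbitrary example.

```python
# Illustrative sketch of the supported pooling configurations.
import torch
import torch.nn as nn

x = torch.randn(1, 16, 64, 64)

avg_2x2 = nn.AvgPool2d(kernel_size=2)  # 64x64 -> 32x32
max_2x2 = nn.MaxPool2d(kernel_size=2)
avg_4x4 = nn.AvgPool2d(kernel_size=4)  # 64x64 -> 16x16
max_4x4 = nn.MaxPool2d(kernel_size=4)

print(avg_2x2(x).shape, max_4x4(x).shape)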
Element-wise Operations
Element-wise addition and multiplication are both supported
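In CNN graphs, element-wise addition typically appears in residual shortcuts and element-wise multiplication in gating or channel re-scaling; a minimal illustration with arbitrary tensor shapes:

```python
# Illustrative element-wise addition and multiplication (arbitrary shapes).
import torch

a = torch.randn(1, 16, 32, 32)
b = torch.randn(1, 16, 32, 32)

residual = a + b                  # element-wise addition, e.g. a residual shortcut
gated    = a * torch.sigmoid(b)   # element-wise multiplication, e.g. a gating pattern
```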
Activation Operations
ReLU, Leaky ReLU (LReLU), Tanh, Sigmoid, and other types of non-linear activation
functions are all supported
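A minimal illustration of the listed activations, again as PyTorch calls; the Leaky ReLU negative slope used here is an arbitrary example value.

```python
# Illustrative use of the listed non-linear activation functions.
import torch
import torch.nn as nn

x = torch.randn(1, 16, 32, 32)

relu  = nn.ReLU()(x)
lrelu = nn.LeakyReLU(negative_slope=0.1)(x)  # Leaky ReLU / LReLU
tanh  = torch.tanh(x)
sig   = torch.sigmoid(x)
```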
Fully-Connected Operations
There is no limit on input feature map size; however, large connections may degrade
NNE50 performance because of the limited on-chip SRAM size
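To make the SRAM caveat concrete, the sketch below estimates the INT8 weight footprint of a large fully-connected layer against a hypothetical on-chip SRAM size; the 512 KB figure is a placeholder assumption, not an NNE50 specification.

```python
# Rough weight-footprint estimate for a large fully-connected layer.
import torch.nn as nn

fc = nn.Linear(in_features=4096, out_features=1000)  # example large FC layer

# With INT8 weights, each connection costs one byte; a large layer can easily
# exceed on-chip SRAM and force off-chip traffic, which is the performance
# caveat noted above.
weight_bytes       = fc.in_features * fc.out_features  # ~4 MB at INT8
assumed_sram_bytes = 512 * 1024                        # hypothetical SRAM size, not a real spec
print(weight_bytes, weight_bytes > assumed_sram_bytes)
```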