Intel® Math Kernel Library 2019 Developer Reference - C
Creates sum layers.

Note: The Deep Neural Network (DNN) component in Intel MKL is deprecated and will be removed in a future release. You can continue to use optimized functions for deep neural networks through Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN).
dnnError_t dnnSumCreate_F32 (dnnPrimitive_t *pSum, dnnPrimitiveAttributes_t attributes, const size_t nSummands, dnnLayout_t dataLayout, float *coefficients);
dnnError_t dnnSumCreate_F64 (dnnPrimitive_t *pSum, dnnPrimitiveAttributes_t attributes, const size_t nSummands, dnnLayout_t dataLayout, double *coefficients);
Input Parameters

attributes | The set of attributes for the primitive.
nSummands | The number of input tensors (summands).
dataLayout | The layout of the input tensors.
coefficients | Coefficients of the input tensors in the weighted sum.

Output Parameters

pSum | Pointer to the primitive to create.
Each dnnSumCreate function creates a sum primitive. The weighted sum of N tensors of the same size is defined element-wise as:

y = c_1 * x_1 + c_2 * x_2 + ... + c_N * x_N

where x_1, ..., x_N are the input tensors and c_1, ..., c_N are the corresponding entries of the coefficients array.
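The sketch below shows one way the single-precision variant might be used end to end: create a plain layout for the summands, create the sum primitive, bind resources, and execute. The tensor dimensions, strides, coefficient values, and the resource binding (summands at dnnResourceMultipleSrc + i, result at dnnResourceDst) are assumptions made for illustration, not taken from this page; error handling is reduced to a simple check macro.

#include <stdio.h>
#include <stdlib.h>
#include "mkl_dnn.h"

#define DIM 4
#define CHECK(call)                                                        \
    do {                                                                   \
        dnnError_t e = (call);                                             \
        if (e != E_SUCCESS) {                                              \
            fprintf(stderr, "MKL DNN error %d in %s\n", (int)e, #call);    \
            exit(EXIT_FAILURE);                                            \
        }                                                                  \
    } while (0)

int main(void) {
    /* Hypothetical 4-D tensor: width x height x channels x batch. */
    size_t size[DIM]    = {16, 16, 8, 1};
    size_t strides[DIM] = {1, 16, 16 * 16, 16 * 16 * 8};
    size_t nElements    = 16 * 16 * 8 * 1;

    /* Two summands with coefficients 1.0 and 0.5: y = 1.0*x0 + 0.5*x1. */
    const size_t nSummands = 2;
    float coefficients[2]  = {1.0f, 0.5f};

    float *x0 = (float *)malloc(nElements * sizeof(float));
    float *x1 = (float *)malloc(nElements * sizeof(float));
    float *y  = (float *)malloc(nElements * sizeof(float));
    for (size_t i = 0; i < nElements; i++) { x0[i] = 1.0f; x1[i] = 2.0f; }

    dnnLayout_t    layout = NULL;
    dnnPrimitive_t sum    = NULL;

    /* Plain layout shared by all summands and the result. */
    CHECK(dnnLayoutCreate_F32(&layout, DIM, size, strides));

    /* Create the sum primitive; NULL attributes select the defaults. */
    CHECK(dnnSumCreate_F32(&sum, NULL, nSummands, layout, coefficients));

    /* Bind resources and execute (assumed resource mapping). */
    void *resources[dnnResourceNumber] = {0};
    resources[dnnResourceMultipleSrc]     = x0;
    resources[dnnResourceMultipleSrc + 1] = x1;
    resources[dnnResourceDst]             = y;
    CHECK(dnnExecute_F32(sum, resources));

    printf("y[0] = %f (expected 2.0)\n", y[0]);  /* 1.0*1.0 + 0.5*2.0 */

    dnnDelete_F32(sum);
    dnnLayoutDelete_F32(layout);
    free(x0); free(x1); free(y);
    return 0;
}

The double-precision path is analogous: replace the _F32 suffixes with _F64 and use double buffers and double coefficients.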