Intel® Math Kernel Library 2019 Developer Reference - C
Creates concatenation layers.

Note: The Deep Neural Network (DNN) component in Intel MKL is deprecated and will be removed in a future release. You can continue to use optimized functions for deep neural networks through Intel Math Kernel Library for Deep Neural Networks.
dnnError_t dnnConcatCreate_F32 (dnnPrimitive_t *pConcat, dnnPrimitiveAttributes_t attributes, size_t nSrcTensors, const dnnLayout_t *src);
dnnError_t dnnConcatCreate_F64 (dnnPrimitive_t *pConcat, dnnPrimitiveAttributes_t attributes, size_t nSrcTensors, const dnnLayout_t *src);
Input Parameters

attributes      The set of attributes for the primitive.
nSrcTensors     The number of input tensors.
src             Pointer to the array of layouts of the input tensors.

Output Parameters

pConcat         Pointer to the concatenation primitive to create.
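The following is a minimal sketch of a typical call sequence; it is illustrative rather than taken from this reference, and the tensor sizes, the CHECK macro, and passing NULL to request default attributes are assumptions made for the example. It creates plain 4-dimensional layouts for two input tensors that differ only in channel count and creates the concatenation primitive from them.

#include <stdio.h>
#include <stdlib.h>
#include "mkl.h"

/* Abort on any MKL DNN error (illustrative helper, not part of the API). */
#define CHECK(call)                                                      \
    do {                                                                 \
        dnnError_t err__ = (call);                                       \
        if (err__ != E_SUCCESS) {                                        \
            fprintf(stderr, "MKL DNN call failed: %d\n", (int)err__);    \
            exit(EXIT_FAILURE);                                          \
        }                                                                \
    } while (0)

int main(void) {
    /* Two 4-D tensors with sizes ordered {W, H, C, N}: same width,
       height, and batch size, but 16 and 32 channels (example values). */
    size_t size0[4]    = { 13, 13, 16, 1 };
    size_t size1[4]    = { 13, 13, 32, 1 };
    size_t strides0[4] = { 1, 13, 13 * 13, 13 * 13 * 16 };
    size_t strides1[4] = { 1, 13, 13 * 13, 13 * 13 * 32 };

    dnnLayout_t src[2] = { NULL, NULL };
    CHECK(dnnLayoutCreate_F32(&src[0], 4, size0, strides0));
    CHECK(dnnLayoutCreate_F32(&src[1], 4, size1, strides1));

    /* Create the concatenation primitive; NULL requests default
       attributes, as in the Intel MKL DNN sample code. */
    dnnPrimitive_t concat = NULL;
    CHECK(dnnConcatCreate_F32(&concat, NULL, 2, src));

    /* To run the primitive, bind buffers to dnnResourceMultipleSrc + i
       and dnnResourceDst and call dnnExecute_F32 (not shown here). */

    CHECK(dnnDelete_F32(concat));
    CHECK(dnnLayoutDelete_F32(src[0]));
    CHECK(dnnLayoutDelete_F32(src[1]));
    return 0;
}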
Each dnnConcatCreate function creates a concatenation primitive. The concatenation tensor for M input tensors with C_0, C_1, ..., C_{M-1} channels, respectively, and the same remaining dimensions is defined by

dst[x][c][n] = src_i[x][c - S_i][n]   for S_i <= c < S_{i+1},

where x denotes the spatial coordinates, n the batch index, and S_i = C_0 + C_1 + ... + C_{i-1} is the channel offset of the i-th input (S_0 = 0). The resulting tensor therefore has S_M = C_0 + C_1 + ... + C_{M-1} channels.
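For example, concatenating M = 3 inputs with C_0 = 16, C_1 = 32, and C_2 = 64 channels (values chosen only for illustration) gives the offsets S_0 = 0, S_1 = 16, and S_2 = 48, and an output tensor with 16 + 32 + 64 = 112 channels: output channels 0 through 15 come from the first input, 16 through 47 from the second, and 48 through 111 from the third.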