Intel® Math Kernel Library 2019 Developer Reference - C
Creates conversion operations.

Note: The Deep Neural Network (DNN) component in Intel MKL is deprecated and will be removed in a future release. You can continue to use optimized functions for deep neural networks through the Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN).
dnnError_t dnnConversionCreate_F32 (dnnPrimitive_t *pConversion, const dnnLayout_t from, const dnnLayout_t to);
dnnError_t dnnConversionCreate_F64 (dnnPrimitive_t *pConversion, const dnnLayout_t from, const dnnLayout_t to);
Input Parameters

from
    The layout to convert from.
to
    The layout to convert to.

Output Parameters

pConversion
    Pointer to the primitive to create.
Each dnnConversionCreate function creates an operation that converts data from the layout from to the layout to; such conversions are used in both forward and backward propagation.
If both layouts are plain, they must have the same number of dimensions and the same size along each dimension. In this case, the conversion applies the formula

dst[(x, stridesTo)] = src[(x, stridesFrom)],

where stridesFrom and stridesTo are the strides of the from and to layouts, respectively, and (. , .) denotes the scalar product of two vectors.
If one of the layouts is custom, dst is the matrix-vector product C * src for a matrix C = C(from, to) such that every Cij is 0 or 1, the sum over j of Cij is at most 1 (each destination element is taken from at most one source element), and the sum over i of Cij is at least 1 (each source element is copied to at least one destination element).