Intel® Math Kernel Library 2019 Developer Reference - C
Creates scale layers.

Note: The Deep Neural Network (DNN) component in Intel MKL is deprecated and will be removed in a future release. You can continue to use optimized functions for deep neural networks through Intel Math Kernel Library for Deep Neural Networks.
dnnError_t dnnScaleCreate_F32 (dnnPrimitive_t *pScale, dnnPrimitiveAttributes_t attributes, const dnnLayout_t dataLayout, float alpha);
dnnError_t dnnScaleCreate_F64 (dnnPrimitive_t *pScale, dnnPrimitiveAttributes_t attributes, const dnnLayout_t dataLayout, double alpha);
Input Parameters

attributes    The set of attributes for the primitive.
dataLayout    The layout of the input.
alpha         The scaling factor.

Output Parameters

pScale        Pointer to the primitive to create.
Each dnnScaleCreate function creates a scale primitive. A scaling operation is defined as follows:
dst[x] = alpha*src[x].
The primitive supports the following kinds of scaling:

- In-place: the src and dst pointers reference the same memory block.
- Out-of-place: the src and dst pointers reference different, non-overlapping memory blocks.