Intel® Math Kernel Library 2019 Developer Reference - C

dnnBatchNormalizationCreate_v2

Creates forward and backward propagation operations for batch normalization performed using the specified method.

NOTE: The Deep Neural Network (DNN) component in Intel MKL is deprecated and will be removed in a future release. You can continue to use optimized functions for deep neural networks through Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN).

Syntax

dnnError_t dnnBatchNormalizationCreateForward_v2_F32 (dnnPrimitive_t *pBatchNormalization, dnnPrimitiveAttributes_t attributes, const dnnLayout_t dataLayout, float eps, unsigned int flags);

dnnError_t dnnBatchNormalizationCreateBackward_v2_F32 (dnnPrimitive_t *pBatchNormalization, dnnPrimitiveAttributes_t attributes, const dnnLayout_t dataLayout, float eps, unsigned int flags);

dnnError_t dnnBatchNormalizationCreateForward_v2_F64 (dnnPrimitive_t *pBatchNormalization, dnnPrimitiveAttributes_t attributes, const dnnLayout_t dataLayout, double eps, unsigned int flags);

dnnError_t dnnBatchNormalizationCreateBackward_v2_F64 (dnnPrimitive_t *pBatchNormalization, dnnPrimitiveAttributes_t attributes, const dnnLayout_t dataLayout, double eps, unsigned int flags);

Include Files

mkl.h

Input Parameters

dataLayout

The layout of the input.

attributes

The set of attributes for the primitive.

eps

The constant added to the mini-batch variance to improve numerical stability (see Description).

flags

The set of flags defining the computation method for the primitive (for example, dnnUseScaleShift; see Description).

Output Parameters

pBatchNormalization

Pointer to the primitive to create:

dnnBatchNormalizationCreateForward_v2: Forward propagation

dnnBatchNormalizationCreateBackward_v2: Backward propagation

Description

Each dnnBatchNormalizationCreate_v2 function creates a forward or backward propagation operation for batch normalization to be performed using the computation method specified by flags.
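For illustration only (this sketch is not part of the reference entry), the following shows one way to create and release a single-precision forward primitive for a dense W x H x K x N input; the sizes, strides, eps value, and the choice of the dnnUseScaleShift flag are assumptions:

#include <stdio.h>
#include "mkl.h"

/* Illustrative sketch only: creates and releases a single-precision forward
   batch normalization primitive for a dense W x H x K x N layout. The sizes,
   strides, eps value, and use of dnnUseScaleShift are assumptions. */
int main(void) {
    size_t size[4]    = {13, 13, 256, 32};               /* W, H, K, N       */
    size_t strides[4] = {1, 13, 13 * 13, 13 * 13 * 256}; /* dense WHKN data  */

    dnnLayout_t dataLayout = NULL;
    dnnPrimitive_t bnFwd = NULL;
    dnnError_t err;

    err = dnnLayoutCreate_F32(&dataLayout, 4, size, strides);
    if (err != E_SUCCESS) { printf("layout creation failed: %d\n", err); return 1; }

    /* Scale (gamma) and shift (beta) are supplied by the application
       because the dnnUseScaleShift flag is set. */
    err = dnnBatchNormalizationCreateForward_v2_F32(&bnFwd, NULL, dataLayout,
                                                    1e-5f, dnnUseScaleShift);
    if (err != E_SUCCESS) { printf("primitive creation failed: %d\n", err); return 1; }

    /* ... allocate and bind resources, then call dnnExecute_F32 here ... */

    dnnDelete_F32(bnFwd);
    dnnLayoutDelete_F32(dataLayout);
    return 0;
}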

Batch normalization is defined as:

y[w][h][k][n] = γ[k] · (x[w][h][k][n] − μ[k]) / √(σ²[k] + eps) + β[k],

where

w ∈ [1, W], h ∈ [1, H], n ∈ [1, N], k ∈ [1, K],

μ[k] and σ²[k] are the mean and variance of the input computed over the mini-batch for channel k,

γ[k] is the weight (scale) and β[k] is the bias (shift) for channel k,

and eps is the constant added to the variance to improve numerical stability.
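As an illustrative numeric example (not part of the reference entry): if for some channel k the mini-batch statistics are μ[k] = 2.0 and σ²[k] = 0.25, with γ[k] = 1.5, β[k] = 0.1, and eps = 0.01, then an input value x = 3.0 is transformed to 1.5 · (3.0 − 2.0) / √(0.25 + 0.01) + 0.1 ≈ 3.04.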

A dnnBatchNormalizationCreate_v2 function called with the dnnUseScaleShift value of the flags parameter does the same as the corresponding dnnBatchNormalizationCreate function, except that dnnBatchNormalizationCreate_v2 stores the mean and variance in separate output buffers instead of the workspace buffer.
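A minimal sketch of what this means at execution time, assuming a forward primitive created with dnnUseScaleShift as above, src, dst, and scaleShift buffers that already match the layouts the primitive expects, and the dnnResourceMean and dnnResourceVariance resource types of the companion execution API; the helper name run_bn_forward_v2 is an illustration:

#include "mkl.h"

/* Illustrative sketch only: executes an already created forward v2 primitive.
   bnFwd is assumed to have been created with dnnUseScaleShift; src, dst, and
   scaleShift are assumed to already match the layouts the primitive expects. */
static dnnError_t run_bn_forward_v2(dnnPrimitive_t bnFwd,
                                    void *src, void *dst, void *scaleShift)
{
    void *resources[dnnResourceNumber] = {0};
    void *mean = NULL, *variance = NULL;
    dnnLayout_t meanLayout = NULL, varLayout = NULL;
    dnnError_t err;

    /* With the v2 primitive, the batch mean and variance are bound as
       separate resources rather than packed into dnnResourceWorkspace. */
    err = dnnLayoutCreateFromPrimitive_F32(&meanLayout, bnFwd, dnnResourceMean);
    if (err == E_SUCCESS)
        err = dnnLayoutCreateFromPrimitive_F32(&varLayout, bnFwd, dnnResourceVariance);
    if (err == E_SUCCESS)
        err = dnnAllocateBuffer_F32(&mean, meanLayout);
    if (err == E_SUCCESS)
        err = dnnAllocateBuffer_F32(&variance, varLayout);

    if (err == E_SUCCESS) {
        resources[dnnResourceSrc]        = src;        /* input                    */
        resources[dnnResourceDst]        = dst;        /* output                   */
        resources[dnnResourceScaleShift] = scaleShift; /* gamma and beta           */
        resources[dnnResourceMean]       = mean;       /* per-channel mean output  */
        resources[dnnResourceVariance]   = variance;   /* per-channel variance out */
        err = dnnExecute_F32(bnFwd, resources);
    }

    if (mean)       dnnReleaseBuffer_F32(mean);
    if (variance)   dnnReleaseBuffer_F32(variance);
    if (meanLayout) dnnLayoutDelete_F32(meanLayout);
    if (varLayout)  dnnLayoutDelete_F32(varLayout);
    return err;
}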