Intel® Math Kernel Library 2019 Developer Reference - C

dnnBatchNormalizationCreate

Creates propagation operations for batch normalization layers.

Note: The Deep Neural Network (DNN) component of Intel MKL is deprecated and will be removed in a future release. You can continue to use optimized functions for deep neural networks through Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN).

Syntax

dnnError_t dnnBatchNormalizationCreateForward_F32 (dnnPrimitive_t *pBatchNormalization, dnnPrimitiveAttributes_t attributes, const dnnLayout_t dataLayout, float eps);

dnnError_t dnnBatchNormalizationCreateBackwardScaleShift_F32 (dnnPrimitive_t *pBatchNormalization, dnnPrimitiveAttributes_t attributes, const dnnLayout_t dataLayout, float eps);

dnnError_t dnnBatchNormalizationCreateBackwardData_F32 (dnnPrimitive_t *pBatchNormalization, dnnPrimitiveAttributes_t attributes, const dnnLayout_t dataLayout, float eps);

dnnError_t dnnBatchNormalizationCreateForward_F64 (dnnPrimitive_t *pBatchNormalization, dnnPrimitiveAttributes_t attributes, const dnnLayout_t dataLayout, double eps);

dnnError_t dnnBatchNormalizationCreateBackwardScaleShift_F64 (dnnPrimitive_t *pBatchNormalization, dnnPrimitiveAttributes_t attributes, const dnnLayout_t dataLayout, double eps);

dnnError_t dnnBatchNormalizationCreateBackwardData_F64 (dnnPrimitive_t *pBatchNormalization, dnnPrimitiveAttributes_t attributes, const dnnLayout_t dataLayout, double eps);

Include Files

mkl.h

Input Parameters

dataLayout

The layout of the input data.

attributes

The set of attributes for the primitive.

eps

The constant added to the variance to improve numerical stability.

Output Parameters

pBatchNormalization

Pointer to the primitive to create:

dnnBatchNormalizationCreateForward: forward propagation

dnnBatchNormalizationCreateBackwardData: backward propagation with respect to data

dnnBatchNormalizationCreateBackwardScaleShift: backward propagation with respect to scale and shift

Description

Each dnnBatchNormalizationCreate function creates a forward or backward propagation operation for batch normalization. Batch normalization is defined as:

y[n][k][h][w] = γ[k] * (x[n][k][h][w] - μ[k]) / sqrt(σ²[k] + eps) + β[k],

where

w ∈ [1, W], h ∈ [1, H], n ∈ [1, N], k ∈ [1, K],

μ[k] and σ²[k] are the mean and variance of channel k computed over the mini-batch,

γ[k] is the weight (scale) and β[k] is the bias (shift) of channel k,

and eps is the constant added to the variance to improve numerical stability.

This primitive behaves the same as the dnnBatchNormalizationCreate_v2 primitive called with the dnnUseScaleShift flag, except that dnnBatchNormalizationCreate stores the mean and variance in the workspace buffer instead of in output buffers.