Intel® Math Kernel Library 2019 Developer Reference - C

dnnReLUCreate

Creates propagation operations for rectified linear neuron activation layers.

Note: The Deep Neural Network (DNN) component in Intel MKL is deprecated and will be removed in a future release. You can continue to use optimized functions for deep neural networks through Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN).

Syntax

dnnError_t dnnReLUCreateForward_F32 (dnnPrimitive_t *pRelu, dnnPrimitiveAttributes_t attributes, const dnnLayout_t dataLayout, float negativeSlope);

dnnError_t dnnReLUCreateBackward_F32 (dnnPrimitive_t *pRelu, dnnPrimitiveAttributes_t attributes, const dnnLayout_t diffLayout, const dnnLayout_t dataLayout, float negativeSlope);

dnnError_t dnnReLUCreateForward_F64 (dnnPrimitive_t *pRelu, dnnPrimitiveAttributes_t attributes, const dnnLayout_t dataLayout, double negativeSlope);

dnnError_t dnnReLUCreateBackward_F64 (dnnPrimitive_t *pRelu, dnnPrimitiveAttributes_t attributes, const dnnLayout_t diffLayout, const dnnLayout_t dataLayout, double negativeSlope);

Include Files

mkl.h

Input Parameters

attributes

The set of attributes for the primitive.

dataLayout

The layout of the input.

diffLayout

The layout of the destination diff.

negativeSlope

The slope applied to negative input values. A value of 0 yields the standard ReLU; a nonzero value yields a leaky ReLU.

Output Parameters

pRelu

Pointer to the primitive to create:

dnnReLUCreateForward

Forward

dnnReLUCreateBackward

Backward

Description

Each dnnReLUCreate function creates a forward or backward propagation operation for batch rectified linear unit (ReLU) activation. The ReLU operation is defined as:

dst[x] = max(src[x], 0) + negativeSlope*min(src[x], 0).