Intel® Math Kernel Library 2018 Developer Reference - C

Deep Neural Network Functions

Intel® Math Kernel Library (Intel® MKL) functions for Deep Neural Networks (DNN functions) are a collection of performance primitives for Deep Neural Network (DNN) applications, optimized for Intel® architecture. The implementation of DNN functions includes a set of primitives necessary to accelerate popular image recognition topologies, such as AlexNet, Visual Geometry Group (VGG), GoogLeNet, and Residual Networks (ResNet).

The primitives implement forward and backward passes for the following operations:

  - Convolution and inner product
  - Pooling: maximum, minimum, and average
  - Normalization: local response normalization (LRN) across channels, and batch normalization
  - Activation: rectified linear unit (ReLU)
  - Data manipulation: split, concatenation, sum, scale, and conversion

Intel MKL DNN primitives implement a plain C application programming interface (API) that can be used in the existing C/C++ DNN frameworks, as well as in custom DNN applications.

In addition to input and output arrays of DNN applications, the DNN primitives work with special opaque data types to represent the following:

  - Layouts of processed arrays (dnnLayout_t)
  - Descriptions of DNN operations (dnnPrimitive_t)
  - Attributes of DNN operations (dnnPrimitiveAttributes_t)

Input and output arrays of DNN operations are called resources. Each DNN operation requires its resources to have certain data layouts. The application can query a DNN operation for the required data layouts and check whether the layouts of the existing resources match them.

An application that calls Intel MKL DNN functions typically involves the following stages:

  1. Setup

    Given a DNN topology, the application creates all DNN operations necessary to implement scoring, training, or other application-specific computations. To pass data from one DNN operation to the next one, some applications create intermediate conversions and allocate temporary arrays if the appropriate output and input data layouts do not match.

  2. Execution

    This stage consists of calls to the DNN primitives that apply the DNN operations, including necessary conversions, to the input, output, and temporary arrays.

This section describes Intel MKL DNN functions and the enumerated types they use, as well as the array layouts and attributes required to perform DNN operations.

The following list groups Intel MKL DNN functions according to their purpose.

Intel MKL DNN Functions

Handling Array Layouts

dnnLayoutCreate - Creates a plain layout.
dnnLayoutCreateFromPrimitive - Creates a custom layout.
dnnLayoutGetMemorySize - Returns the size of the array specified by a layout.
dnnLayoutSerializationBufferSize - Returns the buffer size required for layout serialization.
dnnLayoutSerialize - Serializes a layout to a buffer.
dnnLayoutDeserialize - Deserializes a layout from a buffer.
dnnLayoutCompare - Checks whether two layouts are equal.
dnnLayoutDelete - Deletes a layout.

Handling Attributes of DNN Operations

dnnPrimitiveAttributesCreate - Creates an attribute container.
dnnPrimitiveAttributesDestroy - Destroys an attribute container.
dnnPrimitiveGetAttributes - Returns the container with attributes set for an instance of a primitive.

DNN Operations

dnnConvolutionCreate, dnnGroupsConvolutionCreate - Create propagation operations for convolution layers.
dnnInnerProductCreate - Creates propagation operations for inner product layers.
dnnReLUCreate - Creates propagation operations for rectified linear neuron activation layers.
dnnLRNCreate - Creates propagation operations for layers performing local response normalization across channels.
dnnPoolingCreate - Creates propagation operations for pooling layers.
dnnBatchNormalizationCreate - Creates propagation operations for batch normalization layers.
dnnBatchNormalizationCreate_v2 - Creates propagation operations for batch normalization performed using the specified method.
dnnSplitCreate - Creates split layers.
dnnConcatCreate - Creates concatenation layers.
dnnSumCreate - Creates sum layers.
dnnScaleCreate - Creates scale layers.
dnnConversionCreate - Creates conversion operations.
dnnExecute - Performs DNN operations.
dnnConversionExecute - Performs a conversion operation.
dnnDelete - Deletes descriptions of DNN operations.
dnnAllocateBuffer - Allocates an array with a given layout.
dnnReleaseBuffer - Releases an array allocated by dnnAllocateBuffer.

Optimization Notice

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804