Developer Guide for Intel® Data Analytics Acceleration Library 2018 Update 2
Intel DAAL provides the following types of layers:
Fully-connected layers,
which compute the inner product of all weighted inputs plus a bias.
Activation layers,
which apply a transform to the input data. Each activation layer is available in forward and backward versions:
  Absolute Value (Abs) Layers
  Logistic Layers
  Parametric Rectifier Linear Unit (pReLU) Layers
  Rectifier Linear Unit (ReLU) Layers
  Smooth Rectifier Linear Unit (SmoothReLU) Layers
  Hyperbolic Tangent Layers
  Exponential Linear Unit (ELU) Layers
Normalization layers,
which normalize the input data.
Anti-overfitting layers,
which prevent the neural network from overfitting.
Pooling layers,
which apply a form of non-linear downsampling to the input data. Each pooling layer is available in forward and backward versions:
  1D Max Pooling Layers
  2D Max Pooling Layers
  3D Max Pooling Layers
  1D Average Pooling Layers
  2D Average Pooling Layers
  3D Average Pooling Layers
  2D Stochastic Pooling Layers
  2D Spatial Pyramid Pooling Layers
Convolutional and locally-connected layers,
which apply filters to input data.
Service layers,
which apply service operations to the input tensors.
Softmax layers,
which measure confidence of the output of the neural network.
Loss layers,
which measure the difference between the output of the neural network and ground truth.
In the descriptions of specific layers, the preceding layer for layer i is the layer whose results are used as the input to layer i.
When using Intel DAAL neural networks, be aware of the following assumptions:
In Intel DAAL, each data sample is numbered by a single scalar index.
For neural network layers, the first dimension of the input tensor represents the data samples.
While the actual layout of the data can be different, the access methods of the tensor return the data in the assumed layout. Therefore, for a tensor containing the input to the neural network, it is your responsibility to change the logical indexing of tensor dimensions so that the first dimension represents the data samples. To do this, use the shuffleDimensions() method of the Tensor class.
Several neural network layers listed below support in-place computation, which means that the result overwrites the input memory, under the following conditions:
The following layers support in-place computation: