Developer Guide for Intel® Data Analytics Acceleration Library 2019 Update 5
Intel DAAL provides the following types of layers:
Fully-connected layers, which compute the inner product of all weighted inputs plus a bias.
Activation layers, which apply a transform to the input data.
Normalization layers, which normalize the input data.
Anti-overfitting layers, which prevent the neural network from overfitting.
Pooling layers, which apply a form of non-linear downsampling to the input data.
Convolutional and locally-connected layers, which apply filters to the input data.
Service layers, which apply service operations to the input tensors.
Softmax layers, which measure the confidence of the output of the neural network.
Loss layers, which measure the difference between the output of the neural network and the ground truth.
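To make the first and last of these concrete, the sketch below illustrates the underlying math only (it is not the Intel DAAL API): a fully-connected layer computes, for each output unit, the inner product of the weighted inputs plus a bias, and a softmax layer turns raw outputs into confidences that sum to 1. All names here are hypothetical.

```python
import math

def fully_connected(x, W, b):
    # Inner product of all weighted inputs plus bias, per output unit.
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b_j
            for row, b_j in zip(W, b)]

def softmax(z):
    # Subtract max(z) for numerical stability; the result sums to 1.
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

x = [1.0, 2.0]
W = [[0.5, -1.0], [1.5, 0.25]]
b = [0.1, -0.2]
z = fully_connected(x, W, b)  # [-1.4, 1.8]
p = softmax(z)                # confidences, sum to 1
```

In the library itself these operations are performed by the corresponding layer objects; the sketch only shows what they compute.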
When using Intel DAAL neural networks, be aware of the following assumptions:
In Intel DAAL, data samples are indexed along a single (scalar) dimension.
For neural network layers, the first dimension of the input tensor represents the data samples.
While the actual memory layout of the data may differ, the access methods of the tensor return the data in the assumed layout. Therefore, for a tensor that contains the input to the neural network, it is your responsibility to change the logical indexing of the tensor dimensions so that the first dimension represents the data samples. To do this, use the shuffleDimensions() method of the Tensor class.
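The effect of reordering logical dimensions can be sketched as follows. This is an illustration of the idea only, not the DAAL API; in Intel DAAL you would call the shuffleDimensions() method of the Tensor class, and the helper name and shapes below are hypothetical.

```python
def shuffle_dims(shape, order):
    # Return the shape seen after reordering the logical dimensions
    # according to the given permutation.
    return tuple(shape[i] for i in order)

# Suppose the data arrives as (channels, samples, height, width),
# but the network expects the sample dimension first.
shape = (3, 10, 4, 4)
new_shape = shuffle_dims(shape, (1, 0, 2, 3))
# new_shape == (10, 3, 4, 4): samples now come first
```

Note that only the logical indexing changes; the underlying data does not need to be copied for the tensor to present the assumed layout.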
Several neural network layers listed below support in-place computation, which means the result overwrites the input memory, under the following conditions:
The following layers support in-place computation:
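To illustrate what in-place computation means, the sketch below applies a ReLU-style activation directly over its input buffer instead of allocating a separate output. This is a conceptual example only, not the DAAL API, and the function name is hypothetical.

```python
def relu_inplace(buf):
    # Overwrite each element of the input buffer with the activation
    # result; no second output buffer is allocated.
    for i, v in enumerate(buf):
        buf[i] = v if v > 0.0 else 0.0
    return buf

data = [-1.0, 0.5, -0.2, 2.0]
relu_inplace(data)
# data now holds [0.0, 0.5, 0.0, 2.0] in the original memory
```

In-place computation saves memory, but the original input values are lost, so it is only valid when no later step needs to read the layer's input again.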