C++ API Reference for Intel® Data Analytics Acceleration Library 2019

Batch Normalization Layer

Contains classes for the batch normalization layer. More...

References

 Backward Batch Normalization Layer
 Contains classes for the backward batch normalization layer.
 
 Forward Batch Normalization Layer
 Contains classes for the forward batch normalization layer.
 

Namespaces

 daal::algorithms::neural_networks::layers::batch_normalization
 Contains classes for the batch normalization layer.
 
 daal::algorithms::neural_networks::layers::batch_normalization::interface1
 Contains version 1.0 of the Intel® Data Analytics Acceleration Library (Intel® DAAL) interface.
 

Classes

class  Batch< algorithmFPType, method >
 Provides methods for the batch normalization layer in the batch processing mode. More...
 
struct  Parameter
 Parameters for the forward and backward batch normalization layers. More...
 

Enumerations

enum  Method { defaultDense = 0 }
 Computation methods for the batch normalization layer. More...
 
enum  LayerDataId {
  auxData, auxWeights, auxMean, auxStandardDeviation,
  auxPopulationMean, auxPopulationVariance
}
 Identifiers of input objects for the backward batch normalization layer and results for the forward batch normalization layer. More...
 

Enumeration Type Documentation

enum LayerDataId
Enumerator
auxData 

$p$-dimensional tensor that stores the input data of the forward batch normalization layer

auxWeights 

1-dimensional tensor of size $n_k$ that stores the input weights of the forward batch normalization layer

auxMean 

1-dimensional tensor of size $n_k$ that stores the mini-batch mean

auxStandardDeviation 

1-dimensional tensor of size $n_k$ that stores the mini-batch standard deviation

auxPopulationMean 

1-dimensional tensor of size $n_k$ that stores the resulting population mean

auxPopulationVariance 

1-dimensional tensor of size $n_k$ that stores the resulting population variance
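The quantities behind these enumerators can be illustrated with a plain, self-contained C++ sketch that does not use the DAAL API: for each feature $k$, the forward layer computes the mini-batch mean and standard deviation over the current batch and maintains running (population) estimates of the mean and variance. The names `BatchNormStats`, `updateStats`, and the smoothing factor `alpha` below are illustrative assumptions, not DAAL identifiers, and DAAL's own update rule for the population statistics may differ.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Per-feature batch normalization statistics for a mini-batch laid out as
// batch[i][k] (i = observation index, k = feature index). "alpha" is an
// exponential smoothing factor for the running (population) statistics;
// this name and update rule are illustrative, not DAAL parameters.
struct BatchNormStats {
    std::vector<double> mean;               // mini-batch mean (cf. auxMean)
    std::vector<double> stdDev;             // mini-batch std dev (cf. auxStandardDeviation)
    std::vector<double> populationMean;     // running mean (cf. auxPopulationMean)
    std::vector<double> populationVariance; // running variance (cf. auxPopulationVariance)
};

void updateStats(const std::vector<std::vector<double>>& batch,
                 double alpha, double epsilon, BatchNormStats& s) {
    const std::size_t n = batch.size();
    const std::size_t k = batch[0].size();
    s.mean.assign(k, 0.0);
    s.stdDev.assign(k, 0.0);
    s.populationMean.resize(k, 0.0);
    s.populationVariance.resize(k, 0.0);

    for (std::size_t j = 0; j < k; ++j) {
        // Mini-batch mean over the n observations of feature j.
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i) sum += batch[i][j];
        const double mu = sum / static_cast<double>(n);

        // Mini-batch variance; epsilon guards against division by zero
        // when the statistics are later used for normalization.
        double sq = 0.0;
        for (std::size_t i = 0; i < n; ++i) {
            const double d = batch[i][j] - mu;
            sq += d * d;
        }
        const double var = sq / static_cast<double>(n);

        s.mean[j]   = mu;
        s.stdDev[j] = std::sqrt(var + epsilon);

        // Exponential moving average of the population statistics.
        s.populationMean[j]     = (1.0 - alpha) * s.populationMean[j]     + alpha * mu;
        s.populationVariance[j] = (1.0 - alpha) * s.populationVariance[j] + alpha * var;
    }
}
```

In DAAL itself these tensors are produced by the forward layer and consumed by the backward layer through the `LayerDataId` identifiers above; the sketch only shows what each stored quantity means.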

enum Method

Enumerator
defaultDense 

Default: performance-oriented method.
