C++ API Reference for Intel® Data Analytics Acceleration Library 2019 Update 5


Provides methods for the forward local response normalization layer in the batch processing mode.
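For reference, local response normalization rescales each input value by a power of a local sum of squares over neighboring elements; a common formulation (matching the kappa, alpha, and beta parameters of this layer, with the window taken over nAdjust neighboring elements along the normalization dimension) is:

```latex
y_i = x_i \left(\kappa + \alpha \sum_{j \in N(i)} x_j^2\right)^{-\beta}
```

where N(i) denotes the local window around element i.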

Class Declaration

template<typename algorithmFPType = DAAL_ALGORITHM_FP_TYPE, Method method = defaultDense>
class daal::algorithms::neural_networks::layers::lrn::forward::interface1::Batch< algorithmFPType, method >

Template Parameters
algorithmFPType: Data type to use in intermediate computations for the forward local response normalization layer, double or float
method: Forward local response normalization layer method, Method
Enumerations
  • Method Computation methods for the forward local response normalization layer
  • forward::InputId Identifiers of input objects for the forward local response normalization layer
  • forward::ResultId Identifiers of result objects for the forward local response normalization layer
  • forward::ResultLayerDataId Identifiers of extra results computed by the forward local response normalization layer
  • LayerDataId Identifiers of collections in the result objects of the forward local response normalization layer

Constructor & Destructor Documentation

Batch ( )
inline

Default constructor

Batch ( ParameterType &  parameter)
inline

Constructs a forward local response normalization layer in the batch processing mode and initializes its parameter with the provided parameter

Parameters
[in] parameter: Parameter to initialize the parameter of the layer
Batch ( const Batch< algorithmFPType, method > &  other)
inline

Constructs a forward local response normalization layer by copying input objects and parameters of another forward local response normalization layer in the batch processing mode

Parameters
[in] other: Algorithm to use as the source to initialize the input objects and parameters of the layer

Member Function Documentation

virtual services::Status allocateResult ( )
inlinevirtual

Allocates memory to store the result of the forward local response normalization layer

Returns
Status of computations
services::SharedPtr<Batch<algorithmFPType, method> > clone ( ) const
inline

Returns a pointer to a newly allocated forward local response normalization layer with a copy of the input objects and parameters for this forward local response normalization layer in the batch processing mode

Returns
Pointer to the newly allocated layer
virtual InputType* getLayerInput ( )
inlinevirtual

Returns the structure that contains the input objects of the forward local response normalization layer

Returns
Structure that contains the input objects of the forward local response normalization layer
virtual ParameterType* getLayerParameter ( )
inlinevirtual

Returns the structure that contains the parameters of the forward local response normalization layer

Returns
Structure that contains the parameters of the forward local response normalization layer
layers::forward::ResultPtr getLayerResult ( )
inline

Returns the structure that contains the result of the forward local response normalization layer

Returns
Structure that contains the result of the forward local response normalization layer
virtual int getMethod ( ) const
inlinevirtual

Returns the method of the layer

Returns
Method of the layer
ResultPtr getResult ( )
inline

Returns the structure that contains the result of the forward local response normalization layer

Returns
Structure that contains the result of the forward local response normalization layer
services::Status setResult ( const ResultPtr &  result)
inline

Registers user-allocated memory to store the result of the forward local response normalization layer

Parameters
[in] result: Structure to store the result of the forward local response normalization layer
Returns
Status of computations

Member Data Documentation

InputType input

Forward local response normalization layer input

ParameterType& parameter

Forward local response normalization layer parameters



For more complete information about compiler optimizations, see our Optimization Notice.