C++ API Reference for Intel® Data Analytics Acceleration Library 2019 Update 5

Batch< algorithmFPType, method > Class Template Reference

Trains a model of the AdaBoost algorithm in batch mode.

Class Declaration

template<typename algorithmFPType = DAAL_ALGORITHM_FP_TYPE, Method method = defaultDense>
class daal::algorithms::adaboost::training::interface1::Batch< algorithmFPType, method >

Template Parameters
algorithmFPType  Data type to use in intermediate computations for the AdaBoost algorithm, double or float
method           AdaBoost computation method, daal::algorithms::adaboost::training::Method
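As a sketch of how this class template is typically instantiated (assuming the standard `daal.h` umbrella header shipped with the library and the `daal::algorithms` namespace):

```cpp
#include "daal.h"  // umbrella header shipped with Intel DAAL (assumed include path)

using namespace daal::algorithms;

int main()
{
    // Explicit template arguments: double-precision intermediate computations,
    // default dense computation method
    adaboost::training::Batch<double, adaboost::training::defaultDense> algorithmDouble;

    // Both parameters have defaults (DAAL_ALGORITHM_FP_TYPE, defaultDense),
    // so this shorthand selects the same common configuration
    adaboost::training::Batch<> algorithmDefault;

    return 0;
}
```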

Constructor & Destructor Documentation

Batch ( const Batch< algorithmFPType, method > &  other)
inline

Constructs an AdaBoost training algorithm by copying input objects and parameters of another AdaBoost training algorithm

Parameters
[in]  other  An algorithm to be used as the source to initialize the input objects and parameters of the algorithm

Member Function Documentation

services::SharedPtr<Batch<algorithmFPType, method> > clone ( ) const
inline

Returns a pointer to the newly allocated AdaBoost training algorithm with a copy of input objects and parameters of this AdaBoost training algorithm

Returns
Pointer to the newly allocated algorithm
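A minimal sketch contrasting the copy constructor with `clone()` (assumes `daal.h` and the DAAL namespaces):

```cpp
#include "daal.h"

using namespace daal::algorithms;

int main()
{
    adaboost::training::Batch<> algorithm;

    // Copy constructor: a stack-allocated duplicate of the input objects and parameters
    adaboost::training::Batch<> copy(algorithm);

    // clone(): the same duplication, but heap-allocated and returned as a shared pointer,
    // convenient when the copy must outlive the current scope
    daal::services::SharedPtr<adaboost::training::Batch<> > cloned = algorithm.clone();

    return 0;
}
```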
InputType* getInput ( )
inline

Returns the input objects for the AdaBoost training algorithm

Returns
Input objects for the AdaBoost training algorithm
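A sketch of supplying training data through `getInput()`; the table sizes and the use of `HomogenNumericTable` are illustrative assumptions:

```cpp
#include "daal.h"

using namespace daal::algorithms;
using namespace daal::data_management;

int main()
{
    // Hypothetical in-memory tables (100 rows, 20 features); a real application
    // would fill these from a data source
    NumericTablePtr trainData(new HomogenNumericTable<float>(20, 100, NumericTable::doAllocate));
    NumericTablePtr trainLabels(new HomogenNumericTable<float>(1, 100, NumericTable::doAllocate));

    adaboost::training::Batch<float> algorithm;

    // The input collection uses the standard classifier input identifiers
    algorithm.getInput()->set(classifier::training::data, trainData);
    algorithm.getInput()->set(classifier::training::labels, trainLabels);

    return 0;
}
```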
virtual int getMethod ( ) const
inlinevirtual

Returns the method of the algorithm

Returns
Method of the algorithm
ResultPtr getResult ( )
inline

Returns the structure that contains results of AdaBoost training

Returns
Structure that contains results of AdaBoost training
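Putting the pieces together, a sketch of the full batch-mode training flow; the tables here are empty placeholders, and a real application would populate them with actual observations and class labels:

```cpp
#include "daal.h"

using namespace daal::algorithms;
using namespace daal::data_management;

int main()
{
    // Hypothetical training tables (sizes are illustrative)
    NumericTablePtr trainData(new HomogenNumericTable<float>(20, 100, NumericTable::doAllocate));
    NumericTablePtr trainLabels(new HomogenNumericTable<float>(1, 100, NumericTable::doAllocate));

    adaboost::training::Batch<float> algorithm;
    algorithm.input.set(classifier::training::data, trainData);
    algorithm.input.set(classifier::training::labels, trainLabels);

    algorithm.compute();  // run AdaBoost training in batch mode

    // Retrieve the result structure and pull the trained model out of it
    adaboost::training::ResultPtr result = algorithm.getResult();
    daal::services::SharedPtr<adaboost::Model> model = result->get(classifier::training::model);

    return 0;
}
```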
services::Status resetResult ( )
inline

Resets the training results of the classification algorithm
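A sketch of where `resetResult()` helps: reusing one configured algorithm object across several datasets, clearing the stored result between runs (the loop bounds and containers are illustrative):

```cpp
#include "daal.h"
#include <vector>

using namespace daal::algorithms;
using namespace daal::data_management;

int main()
{
    // Hypothetical collection of per-dataset tables
    std::vector<NumericTablePtr> dataTables(3), labelTables(3);
    for (size_t i = 0; i < 3; ++i)
    {
        dataTables[i]  = NumericTablePtr(new HomogenNumericTable<float>(20, 100, NumericTable::doAllocate));
        labelTables[i] = NumericTablePtr(new HomogenNumericTable<float>(1, 100, NumericTable::doAllocate));
    }

    adaboost::training::Batch<float> algorithm;
    std::vector<daal::services::SharedPtr<adaboost::Model> > models;

    for (size_t i = 0; i < dataTables.size(); ++i)
    {
        algorithm.input.set(classifier::training::data, dataTables[i]);
        algorithm.input.set(classifier::training::labels, labelTables[i]);
        algorithm.compute();
        models.push_back(algorithm.getResult()->get(classifier::training::model));
        algorithm.resetResult();  // clear the stored result before the next run
    }

    return 0;
}
```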

Member Data Documentation

InputType input

Input data structure

ParameterType parameter

Parameters of the algorithm
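A sketch of using the public `input` and `parameter` members directly; the parameter field names (`maxIterations`, `accuracyThreshold`) are assumptions based on the interface1 `adaboost::Parameter`:

```cpp
#include "daal.h"

using namespace daal::algorithms;
using namespace daal::data_management;

int main()
{
    adaboost::training::Batch<> algorithm;

    // Hypothetical training tables (sizes are illustrative)
    NumericTablePtr trainData(new HomogenNumericTable<double>(20, 100, NumericTable::doAllocate));
    NumericTablePtr trainLabels(new HomogenNumericTable<double>(1, 100, NumericTable::doAllocate));

    // The public input member can be used instead of getInput()
    algorithm.input.set(classifier::training::data, trainData);
    algorithm.input.set(classifier::training::labels, trainLabels);

    // Parameter fields control the boosting procedure (field names assumed)
    algorithm.parameter.maxIterations = 100;
    algorithm.parameter.accuracyThreshold = 0.01;

    return 0;
}
```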


For more complete information about compiler optimizations, see our Optimization Notice.