Java* API Reference for Intel® Data Analytics Acceleration Library 2018 Update 2


Algorithm class for training the naive Bayes model in the batch processing mode.

Constructor Documentation

◆ TrainingBatch() [1/2]

TrainingBatch (DaalContext context, TrainingBatch other)

Constructs a multinomial naive Bayes training algorithm by copying the input objects and parameters of another multinomial naive Bayes training algorithm

Parameters
    context   Context to manage the multinomial naive Bayes training
    other     An algorithm to be used as the source to initialize the input objects and parameters of the algorithm

◆ TrainingBatch() [2/2]

TrainingBatch (DaalContext context, Class<? extends Number> cls, TrainingMethod method, long nClasses)

Constructs a multinomial naive Bayes training algorithm

Parameters
    context    Context to manage the multinomial naive Bayes training
    cls        Data type to use in intermediate computations of the multinomial naive Bayes training, Double.class or Float.class
    method     Multinomial naive Bayes training method, TrainingMethod
    nClasses   Number of classes
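
A minimal construction sketch. The package paths, the TrainingMethod.defaultDense value, and the DaalContext.dispose() call are assumptions based on typical DAAL Java examples and may differ between library versions:

    import com.intel.daal.algorithms.multinomial_naive_bayes.training.TrainingBatch;
    import com.intel.daal.algorithms.multinomial_naive_bayes.training.TrainingMethod;
    import com.intel.daal.services.DaalContext;

    public class CreateNaiveBayesTraining {
        public static void main(String[] args) {
            DaalContext context = new DaalContext();
            long nClasses = 20;

            /* Single-precision intermediate computations with the default dense training method */
            TrainingBatch algorithm =
                new TrainingBatch(context, Float.class, TrainingMethod.defaultDense, nClasses);

            /* ... set input data and labels, then call compute() ... */

            context.dispose();
        }
    }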


Member Function Documentation

◆ clone()

TrainingBatch clone (DaalContext context)

Returns a newly allocated multinomial naive Bayes training algorithm with a copy of the input objects and parameters of this algorithm

Parameters
    context   Context to manage the multinomial naive Bayes training
Returns
The newly allocated algorithm
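
A short usage sketch, assuming an algorithm and context constructed as in the example above; the clone shares no state with the original and can be computed independently:

    /* New algorithm initialized with copies of this algorithm's inputs and parameters */
    TrainingBatch copy = algorithm.clone(context);
    TrainingResult copyResult = copy.compute();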

◆ compute()

TrainingResult compute ()

Computes naive Bayes training results

Returns
Naive Bayes training results
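
A hedged end-to-end sketch of a typical training call. The classifier-style identifiers InputId.data, InputId.labels, and TrainingResultId.model, as well as the exact package paths, are assumptions taken from common DAAL Java examples and may vary by version:

    import com.intel.daal.algorithms.classifier.training.InputId;
    import com.intel.daal.algorithms.classifier.training.TrainingResultId;
    import com.intel.daal.algorithms.multinomial_naive_bayes.Model;
    import com.intel.daal.algorithms.multinomial_naive_bayes.training.TrainingBatch;
    import com.intel.daal.algorithms.multinomial_naive_bayes.training.TrainingMethod;
    import com.intel.daal.algorithms.multinomial_naive_bayes.training.TrainingResult;
    import com.intel.daal.data_management.data.NumericTable;
    import com.intel.daal.services.DaalContext;

    public class NaiveBayesTrainingSketch {
        /* Trains a multinomial naive Bayes model on the given data and labels */
        public static Model train(DaalContext context, NumericTable data,
                                  NumericTable labels, long nClasses) {
            TrainingBatch algorithm =
                new TrainingBatch(context, Double.class, TrainingMethod.defaultDense, nClasses);

            /* Register the training data set and the class labels */
            algorithm.input.set(InputId.data, data);
            algorithm.input.set(InputId.labels, labels);

            /* Run the computations and retrieve the trained model */
            TrainingResult result = algorithm.compute();
            return result.get(TrainingResultId.model);
        }
    }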

◆ setResult()

void setResult (TrainingResult result)

Registers user-allocated memory to store naive Bayes training results

Parameters
    result    Structure to store naive Bayes training results
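
A brief sketch continuing the training example above; the public TrainingResult(DaalContext) constructor is an assumption based on the usual DAAL result classes. After registration, compute() writes its output into the supplied object:

    /* Assumption: user-constructible result object */
    TrainingResult userResult = new TrainingResult(context);
    algorithm.setResult(userResult);

    /* compute() now stores the trained model in userResult */
    algorithm.compute();
    Model model = userResult.get(TrainingResultId.model);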

Member Data Documentation

◆ method

TrainingMethod method

Training method for the algorithm

◆ parameter

Parameter parameter

Parameters of the algorithm


