C++ API Reference for Intel® Data Analytics Acceleration Library 2019
Contains classes for training the neural network model.
References

| Batch |
| Distributed |
Namespaces

| daal::algorithms::neural_networks::training | Contains classes for training the neural network model. |
Classes

| class Topology | Defines a neural network topology, a set of layers and the connections between them, used at the training stage. |
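A Topology is populated by adding layer descriptors and then wiring them together. The following is a minimal sketch modeled on the neural network examples bundled with the library; the layer choice and the output size of 10 are illustrative assumptions, not part of this reference.

```cpp
#include "daal.h"

using namespace daal;
using namespace daal::algorithms::neural_networks;

/* Sketch: build a two-layer topology (fully connected -> softmax cross-entropy loss).
   The layer choice and the output size of 10 are illustrative assumptions. */
training::TopologyPtr configureNet()
{
    /* Fully connected layer with 10 output neurons */
    services::SharedPtr<layers::fullyconnected::Batch<> > fc(
        new layers::fullyconnected::Batch<>(10));

    /* Softmax cross-entropy loss layer terminating the network */
    services::SharedPtr<layers::loss::softmax_cross::Batch<> > loss(
        new layers::loss::softmax_cross::Batch<>());

    training::TopologyPtr topology(new training::Topology());

    /* add() returns the index of the inserted layer within the topology */
    const size_t fcId   = topology->add(fc);
    const size_t lossId = topology->add(loss);

    /* Record the directed connection from the fully connected layer to the loss layer */
    topology->get(fcId).addNext(lossId);

    return topology;
}
```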
Enumerations

| Enumeration | Description |
|---|---|
| enum Method { defaultDense = 0, feedforwardDense = 0 } | Computation methods for neural network model-based training. |
| enum InputId { data, groundTruth } | Available identifiers of input objects for neural network model-based training. |
| enum InputCollectionId { groundTruthCollection = lastInputId + 1 } | Available identifiers of input collection objects for neural network model-based training. |
| enum Step1LocalInputId { inputModel = lastInputCollectionId + 1 } | Available identifiers of input objects for neural network model-based training required by the first distributed step. |
| enum Step2MasterInputId { partialResults } | Partial results from the previous steps in the distributed processing mode, required by the second distributed step of the algorithm. |
| enum Step1LocalPartialResultId | Available identifiers of partial results of the neural network training algorithm required by the first distributed step. |
| enum Step2MasterPartialResultId | Available identifiers of partial results of the neural network training algorithm required by the second distributed step. |
| enum ResultId { model = 0 } | Available identifiers of results of neural network model-based training. |
Enumeration Type Documentation

enum InputCollectionId

Available identifiers of input collection objects for neural network model-based training.

enum InputId

Available identifiers of input objects for neural network model-based training.

| Enumerator | Description |
|---|---|
| data | Training data set |
| groundTruth | Ground-truth results for the training data set |
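To show how the data and groundTruth identifiers are used together with the model result identifier, here is a hedged batch-mode training sketch in the style of the library's bundled examples; configureNet() is the topology builder sketched earlier, and the two tensors are assumed to be prepared by the caller.

```cpp
#include "daal.h"

using namespace daal;
using namespace daal::algorithms::neural_networks;
using namespace daal::data_management;

/* Sketch: batch-mode training with the default (feed-forward dense) method.
   trainingData and trainingGroundTruth are tensors prepared by the caller;
   configureNet() is the topology builder sketched earlier. */
training::ModelPtr trainModel(const TensorPtr &trainingData,
                              const TensorPtr &trainingGroundTruth)
{
    training::Batch<> net;  /* algorithmFPType = float, method = defaultDense */

    /* Allocate network structures for the given data dimensions and topology */
    training::TopologyPtr topology = configureNet();
    net.initialize(trainingData->getDimensions(), *topology);

    /* Bind inputs through the InputId identifiers documented above */
    net.input.set(training::data, trainingData);
    net.input.set(training::groundTruth, trainingGroundTruth);

    net.compute();

    /* Retrieve the trained model through the ResultId identifier */
    return net.getResult()->get(training::model);
}
```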
enum Method

Computation methods for neural network model-based training.

enum Step2MasterInputId

Partial results from the previous steps in the distributed processing mode, required by the second distributed step of the algorithm.
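The distributed identifiers fit together as follows: each node runs a step-1 (local) algorithm on its data partition, and the master collects the nodes' partial results under the partialResults identifier before running step 2. Below is a rough sketch under those assumptions; solver configuration, topology initialization, and feeding the updated model back to the locals through the inputModel identifier are omitted.

```cpp
#include <vector>

#include "daal.h"

using namespace daal;
using namespace daal::algorithms::neural_networks;
using namespace daal::data_management;

/* Sketch: one iteration of distributed training across nNodes local nodes.
   dataParts and truthParts hold each node's data partition. Solver setup,
   topology initialization, and feeding the updated model back to the locals
   (through the inputModel identifier) are omitted. */
void distributedIteration(size_t nNodes,
                          const std::vector<TensorPtr> &dataParts,
                          const std::vector<TensorPtr> &truthParts)
{
    training::Distributed<step2Master> master;

    for (size_t node = 0; node < nNodes; ++node)
    {
        training::Distributed<step1Local> local;

        /* Step-1 inputs reuse the same data/groundTruth identifiers as batch mode */
        local.input.set(training::data, dataParts[node]);
        local.input.set(training::groundTruth, truthParts[node]);
        local.compute();

        /* Collect this node's partial result on the master under partialResults */
        master.input.add(training::partialResults, node, local.getPartialResult());
    }

    master.compute();
}
```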