Developer Guide for Intel® Data Analytics Acceleration Library 2018 Update 2
The neural network prediction algorithm in the batch processing mode accepts the input described below. Pass the Input ID as a parameter to the methods that provide input for your algorithm. For more details, see Algorithms.
| Input ID | Input |
| --- | --- |
| data | Pointer to the tensor of size n1 x n2 x ... x np that stores the neural network input data. This input can be an object of any class derived from Tensor. |
| model | Trained model with the optimum set of weights and biases. This input can only be an object of the Model class. |
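For illustration, below is a minimal C++ sketch of passing these two inputs to the prediction algorithm. The variable names predictionData and predictionModel and the helper function itself are assumptions for this sketch, not part of the library; the prediction model is presumed to have been obtained from an already trained network (for example, through the training model's getPredictionModel method, as the bundled examples do).

```cpp
#include "daal.h"

using namespace daal::algorithms::neural_networks;
using namespace daal::data_management;

/* Assumed to be prepared elsewhere:
   predictionData  - tensor of size n1 x n2 x ... x np with the input data
   predictionModel - trained prediction model with the optimum weights and biases */
void setPredictionInputs(prediction::Batch<> &net,
                         const TensorPtr &predictionData,
                         const prediction::ModelPtr &predictionModel)
{
    net.input.set(prediction::data,  predictionData);   /* Input ID: data  */
    net.input.set(prediction::model, predictionModel);  /* Input ID: model */
}
```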
The neural network prediction algorithm in the batch processing mode has the following parameters:
| Parameter | Default Value | Description |
| --- | --- | --- |
| algorithmFPType | float | The floating-point type that the algorithm uses for intermediate computations. Can be float or double. |
| method | defaultDense | Performance-oriented computation method. |
| nIterations | 1000 | The number of iterations. |
| batchSize | 1 | The number of samples simultaneously used for prediction. |
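As a sketch only (every parameter has the default shown above and does not have to be set explicitly), algorithmFPType and method are chosen through the template arguments of the prediction::Batch class, while batchSize is set on the algorithm's parameter member, as the bundled examples do. The nBatchRows variable below is a hypothetical value used purely for illustration.

```cpp
#include "daal.h"

using namespace daal::algorithms::neural_networks;

void configurePrediction(size_t nBatchRows)
{
    /* algorithmFPType = double, method = defaultDense via template arguments */
    prediction::Batch<double, prediction::defaultDense> net;

    /* Number of samples processed simultaneously during prediction */
    net.parameter.batchSize = nBatchRows;
}
```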
The neural network prediction algorithm in the batch processing mode calculates the result described below. Pass the Result ID as a parameter to the methods that access the results of your algorithm. For more details, see Algorithms.
| Result ID | Result |
| --- | --- |
| prediction | Pointer to the tensor of size n1 that stores the predicted result for each sample. This result can be an object of any class derived from Tensor. |
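The following is a minimal, hedged C++ sketch of running the computation and reading the prediction result back as a tensor. The names predictionData and predictionModel are assumed to be prepared as in the input sketch above, and setting batchSize to the full number of samples is just one possible choice.

```cpp
#include "daal.h"

using namespace daal::algorithms::neural_networks;
using namespace daal::data_management;

TensorPtr runPrediction(const TensorPtr &predictionData,
                        const prediction::ModelPtr &predictionModel)
{
    prediction::Batch<> net;

    /* Predict for all samples of the input tensor in one pass */
    net.parameter.batchSize = predictionData->getDimensionSize(0);

    net.input.set(prediction::data,  predictionData);
    net.input.set(prediction::model, predictionModel);

    net.compute();

    /* Result ID: prediction - tensor with the predicted result for each sample */
    return net.getResult()->get(prediction::prediction);
}
```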
Examples:
- C++: neural_net_predict_dense_batch.cpp
- Java*: NeuralNetPredicDenseBatch.java
- Python*: neural_net_predict_dense_batch.py