Java* API Reference for Intel® Data Analytics Acceleration Library 2018 Update 1
Class that computes the results of the forward 2D convolution layer in the batch processing mode.
Convolution2dForwardBatch(DaalContext context, Convolution2dForwardBatch other)
Constructs the forward 2D convolution layer by copying the input objects of another forward 2D convolution layer.
context | Context to manage the forward 2D convolution layer
other | A forward 2D convolution layer to use as the source for initializing the input objects of this layer
Convolution2dForwardBatch(DaalContext context, Class<? extends Number> cls, Convolution2dMethod method)
Constructs the forward 2D convolution layer.
context | Context to manage the layer
cls | Data type to use in intermediate computations for the layer, Double.class or Float.class
method | The layer computation method, Convolution2dMethod
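As a brief illustration of the two constructors above, the following sketch shows how the layer might be created and then copied. It assumes the DAAL jars are on the classpath and the standard `com.intel.daal` package layout; `Convolution2dMethod.defaultDense` is shown as the assumed default computation method name.

```java
import com.intel.daal.services.DaalContext;
import com.intel.daal.algorithms.neural_networks.layers.convolution2d.Convolution2dForwardBatch;
import com.intel.daal.algorithms.neural_networks.layers.convolution2d.Convolution2dMethod;

public class ConstructConv2d {
    public static void main(String[] args) {
        DaalContext context = new DaalContext();

        // Single-precision intermediate computations with the default dense method
        Convolution2dForwardBatch layer =
            new Convolution2dForwardBatch(context, Float.class,
                                          Convolution2dMethod.defaultDense);

        // Copy constructor: a second layer initialized from the first one's input objects
        Convolution2dForwardBatch copy =
            new Convolution2dForwardBatch(context, layer);

        context.dispose();
    }
}
```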
Convolution2dForwardBatch clone(DaalContext context)
Returns a newly allocated forward 2D convolution layer with a copy of the input objects of this forward 2D convolution layer.
context | Context to manage the layer
Convolution2dForwardResult compute()
Computes the result of the forward 2D convolution layer
Convolution2dForwardInput getLayerInput()
Returns the structure that contains the input objects of the forward layer
Convolution2dParameter getLayerParameter()
Returns the structure that contains the parameters of the forward layer
Convolution2dForwardResult getLayerResult()
Returns the structure that contains the result of the forward layer
void setResult(Convolution2dForwardResult result)
Registers user-allocated memory to store the result of the forward 2D convolution layer
result | Structure to store the result of the forward 2D convolution layer
Input data

Convolution2dMethod method
Computation method for the layer

Convolution2dParameter parameter
Parameters of the layer