Java* API Reference for Intel® Data Analytics Acceleration Library 2019 Update 4

Contains classes of the two-dimensional (2D) convolution layer. A short usage sketch follows the class list below.

Classes

class  Convolution2dBackwardBatch
 Class that computes the results of the backward 2D convolution layer in the batch processing mode. More...
 
class  Convolution2dBackwardInput
 Input object for the backward 2D convolution layer. More...
 
class  Convolution2dBackwardResult
 Provides methods to access results obtained with the compute() method of the backward 2D convolution layer. More...
 
class  Convolution2dBatch
 Provides methods for the 2D convolution layer in the batch processing mode. More...
 
class  Convolution2dForwardBatch
 Class that computes the results of the forward 2D convolution layer in the batch processing mode. More...
 
class  Convolution2dForwardInput
 Input object for the forward 2D convolution layer. More...
 
class  Convolution2dForwardResult
 Class that provides methods to access the result obtained with the compute() method of the forward 2D convolution layer. More...
 
class  Convolution2dIndices
 Data structure representing the indices of the two dimensions on which the convolution kernels are applied. More...
 
class  Convolution2dKernelSize
 Data structure representing the sizes of the two-dimensional kernel subtensor. More...
 
class  Convolution2dLayerDataId
 Identifiers of input objects for the backward 2D convolution layer and results for the forward 2D convolution layer. More...
 
class  Convolution2dMethod
 Available methods for the 2D convolution layer. More...
 
class  Convolution2dPadding
 Data structure representing the number of data elements implicitly added to each side of the 2D subtensor on which convolution is performed. More...
 
class  Convolution2dParameter
 Class that specifies parameters of the 2D convolution layer. More...
 
class  Convolution2dStride
 Data structure representing the intervals at which the kernel is applied to the input. More...
 
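Usage sketch

The following is a minimal, non-authoritative sketch of how the classes above might be wired together to run the forward 2D convolution layer on a single input tensor. It assumes the conventional DAAL Java layer pattern: a DaalContext, a public input member with a set(ForwardInputId.data, ...) method, a public parameter member with setters such as setKernelSizes, setStrides, and setPaddings, a compute() call returning the result, and a HomogenTensor(context, dims, data) constructor. These names and signatures are assumptions drawn from the class list on this page and the general DAAL API conventions; verify them against the Javadoc of your library version.

import com.intel.daal.algorithms.neural_networks.layers.ForwardInputId;
import com.intel.daal.algorithms.neural_networks.layers.convolution2d.*;
import com.intel.daal.data_management.data.HomogenTensor;
import com.intel.daal.services.DaalContext;

public class Convolution2dForwardSketch {
    public static void main(String[] args) {
        DaalContext context = new DaalContext();

        /* Input tensor: 1 sample, 1 channel, 8 x 8 spatial dimensions (zero-filled placeholder data) */
        long[] dims = {1, 1, 8, 8};
        float[] data = new float[1 * 1 * 8 * 8];
        HomogenTensor inputTensor = new HomogenTensor(context, dims, data);

        /* Forward 2D convolution layer with the default dense method */
        Convolution2dForwardBatch convLayer =
            new Convolution2dForwardBatch(context, Float.class, Convolution2dMethod.defaultDense);

        /* Optional configuration using the data structures listed above
           (setter and constructor names are assumed; check the Convolution2dParameter Javadoc) */
        convLayer.parameter.setKernelSizes(new Convolution2dKernelSize(3, 3));
        convLayer.parameter.setStrides(new Convolution2dStride(1, 1));
        convLayer.parameter.setPaddings(new Convolution2dPadding(0, 0));

        /* Set the input data and run the forward computation */
        convLayer.input.set(ForwardInputId.data, inputTensor);
        Convolution2dForwardResult result = convLayer.compute();

        context.dispose();
    }
}

In a full network, the value tensor and auxiliary data from Convolution2dForwardResult would typically be passed to the backward layer (Convolution2dBackwardBatch) through the identifiers in Convolution2dLayerDataId.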
