Developer Guide for Intel® Data Analytics Acceleration Library 2019 Update 5
The forward ReLU layer accepts the input described below. Pass the Input ID as a parameter to the methods that provide input for your algorithm. For more details, see Algorithms.
Input ID | Input
---|---
data | Pointer to the tensor of size n1 x n2 x ... x np that stores the input data for the forward ReLU layer. This input can be an object of any class derived from Tensor.
For common parameters of neural network layers, see Common Parameters.
In addition to the common parameters, the forward ReLU layer has the following parameters:
Parameter | Default Value | Description
---|---|---
algorithmFPType | float | The floating-point type that the algorithm uses for intermediate computations. Can be float or double.
method | defaultDense | Performance-oriented computation method, the only method supported by the layer.
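The forward computation itself is elementwise: the layer applies f(x) = max(0, x) to each element of the input tensor. The following NumPy sketch illustrates only the math, not the Intel DAAL API; the function name is illustrative.

```python
import numpy as np

def relu_forward(data):
    """Elementwise forward ReLU: value[i] = max(0, data[i]).

    `data` stands in for the n1 x n2 x ... x np input tensor;
    the returned array corresponds to the `value` result.
    """
    return np.maximum(data, 0.0)

x = np.array([[-1.5, 0.0], [2.0, -0.5]])
print(relu_forward(x))  # negative entries are replaced by zero
```

For the actual library calls, see the C++, Java, and Python examples listed at the end of this section.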
The forward ReLU layer calculates the result described below. Pass the Result ID as a parameter to the methods that access the results of your algorithm. For more details, see Algorithms.
Result ID | Result
---|---
value | Pointer to the tensor of size n1 x n2 x ... x np that stores the result of the forward ReLU layer. This result can be an object of any class derived from Tensor.
resultForBackward | Collection of data needed for the backward ReLU layer. This collection contains the following element:

Element ID | Element
---|---
auxData | Pointer to the tensor of size n1 x n2 x ... x np that stores the input data for the forward ReLU layer. This can be an object of any class derived from Tensor.
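The forward layer retains its input in auxData because the backward ReLU layer uses it to mask the incoming gradient: the derivative of max(0, x) is 1 where x > 0 and 0 elsewhere. A NumPy sketch of that relationship (illustrative names, not the DAAL interface):

```python
import numpy as np

def relu_backward(input_gradient, aux_data):
    """Backward ReLU: pass the gradient through only where the
    forward input (saved as auxData) was positive."""
    return input_gradient * (aux_data > 0)

aux = np.array([-1.0, 0.5, 2.0, -3.0])   # forward input saved in auxData
g = np.array([10.0, 10.0, 10.0, 10.0])   # gradient from the next layer
print(relu_backward(g, aux))  # gradient is zeroed where the input was non-positive
```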
C++: relu_layer_dense_batch.cpp
Java*: ReLULayerDenseBatch.java
Python*: relu_layer_dense_batch.py