Developer Guide for Intel® Data Analytics Acceleration Library 2019 Update 4
The backward softmax layer accepts the input described below. Pass the Input ID as a parameter to the methods that provide input for your algorithm. For more details, see Algorithms.
| Input ID | Input |
|---|---|
| inputGradient | Pointer to tensor G of size n1 x n2 x ... x np that stores the input gradient computed on the preceding layer. This input can be an object of any class derived from Tensor. |
| inputFromForward | Collection of input data needed for the backward softmax layer. The collection may contain objects of any class derived from Tensor. The collection contains the following element: |

| Element ID | Element |
|---|---|
| auxValue | Pointer to tensor Y of size n1 x n2 x ... x np that stores the result of the forward softmax layer. This input can be an object of any class derived from Tensor. |
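As a rough illustration, the sketch below wires up both inputs in C++, following the pattern of the softmax_layer_dense_batch.cpp example listed at the end of this section: the forward layer's resultForBackward collection (which carries auxValue) is passed as inputFromForward. The namespaces, class names, and identifiers reflect the Intel DAAL 2019 C++ API as best understood here; treat the exact signatures as assumptions to verify against your installed headers.

```cpp
#include "daal.h"

using namespace daal::algorithms::neural_networks;

/* Sketch: set both inputs of the backward softmax layer.
   tensorGradient is tensor G; forwardResult comes from the matching
   forward softmax layer and carries auxValue (tensor Y) inside its
   resultForBackward collection. */
void setBackwardSoftmaxInputs(const daal::data_management::TensorPtr &tensorGradient,
                              const layers::softmax::forward::ResultPtr &forwardResult,
                              layers::softmax::backward::Batch<> &softmaxBackward)
{
    /* inputGradient: gradient tensor G computed on the preceding layer */
    softmaxBackward.input.set(layers::backward::inputGradient, tensorGradient);

    /* inputFromForward: the collection saved by the forward softmax layer */
    softmaxBackward.input.set(layers::backward::inputFromForward,
                              forwardResult->get(layers::forward::resultForBackward));
}
```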
For common parameters of neural network layers, see Common Parameters.
In addition to the common parameters, the backward softmax layer has the following parameters:
| Parameter | Default Value | Description |
|---|---|---|
| algorithmFPType | float | The floating-point type that the algorithm uses for intermediate computations. Can be float or double. |
| method | defaultDense | Performance-oriented computation method, the only method supported by the layer. |
| dimension | 1 | Index of the dimension, of type size_t, along which the layer computes softmax backpropagation. |
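As a quick illustration of overriding these parameters, here is a minimal sketch. It assumes the Batch<algorithmFPType, method> template signature and a public parameter member, as in other DAAL layer classes; confirm both against your headers.

```cpp
#include "daal.h"

using namespace daal::algorithms::neural_networks::layers;

int main()
{
    /* Double-precision intermediate computations with the default dense method */
    softmax::backward::Batch<double, softmax::defaultDense> softmaxBackward;

    /* Compute softmax backpropagation along dimension 0 instead of the default 1 */
    softmaxBackward.parameter.dimension = 0;

    return 0;
}
```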
The backward softmax layer calculates the result described below. Pass the Result ID as a parameter to the methods that access the results of your algorithm. For more details, see Algorithms.
| Result ID | Result |
|---|---|
| gradient | Pointer to tensor Z of size n1 x n2 x ... x np that stores the result of the backward softmax layer. This result can be an object of any class derived from Tensor. |
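Retrieving the result after compute() might look like the following sketch, again assuming DAAL 2019 C++ names; softmax::backward::ResultPtr and the gradient result ID follow the convention of the library's other backward layers.

```cpp
#include "daal.h"

using namespace daal::algorithms::neural_networks::layers;

/* Sketch: run the backward softmax layer and fetch gradient tensor Z.
   Assumes the inputs and parameters were set as shown above. */
daal::data_management::TensorPtr computeSoftmaxGradient(softmax::backward::Batch<> &softmaxBackward)
{
    softmaxBackward.compute();

    softmax::backward::ResultPtr result = softmaxBackward.getResult();
    return result->get(backward::gradient);  /* tensor Z of size n1 x n2 x ... x np */
}
```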
Examples:

- C++: softmax_layer_dense_batch.cpp
- Java*: SoftmaxLayerDenseBatch.java
- Python*: softmax_layer_dense_batch.py