Python* API Reference for Intel® Data Analytics Acceleration Library 2019 Update 4

concat_layer_dense_batch.py

Deprecation Notice: With the introduction of daal4py, a package that supersedes PyDAAL, Intel is deprecating PyDAAL and will discontinue support starting with Intel® DAAL 2021 and Intel® Distribution for Python 2021. Until then, Intel will continue to provide compatible PyDAAL pip and conda packages for newer releases of Intel® DAAL and make them available in open source. However, Intel will not add new Intel® DAAL features to PyDAAL. Intel recommends that developers switch to and use daal4py.

Note: To find daal4py examples, refer to the daal4py documentation or browse the GitHub repository.
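As a taste of what migrated code looks like, below is a minimal daal4py sketch. It uses K-Means rather than the concat layer shown in this example, because daal4py does not mirror the PyDAAL neural-network layer API one-to-one; treat it as an illustration of the daal4py calling convention, not a port of this example.

import daal4py as d4p
import numpy as np

# Dense 2D input data; daal4py also accepts pandas DataFrames and CSR matrices
data = np.random.rand(1000, 10)

# Initialize 10 centroids with K-Means++, then run up to 300 Lloyd iterations
init_result = d4p.kmeans_init(10, method="plusPlusDense").compute(data)
result = d4p.kmeans(10, 300).compute(data, init_result.centroids)

print(result.centroids)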

# file: concat_layer_dense_batch.py
#===============================================================================
# Copyright 2014-2019 Intel Corporation.
#
# This software and the related documents are Intel copyrighted materials, and
# your use of them is governed by the express license under which they were
# provided to you (License). Unless the License provides otherwise, you may not
# use, modify, copy, publish, distribute, disclose or transmit this software or
# the related documents without Intel's prior written permission.
#
# This software and the related documents are provided as is, with no express
# or implied warranties, other than those that are expressly stated in the
# License.
#===============================================================================

#
# ! Content:
# ! Python example of forward and backward concatenation (concat) layer usage
# !
# !*****************************************************************************

#
## <a name="DAAL-EXAMPLE-PY-CONCAT_LAYER_BATCH"></a>
## \example concat_layer_dense_batch.py
#

import os
import sys

from daal.algorithms.neural_networks import layers

utils_folder = os.path.realpath(os.path.abspath(os.path.dirname(os.path.dirname(__file__))))
if utils_folder not in sys.path:
    sys.path.insert(0, utils_folder)
from utils import printNumericTable, printTensor, readTensorFromCSV

# Input data set parameters
datasetName = os.path.join("..", "data", "batch", "layer.csv")
concatDimension = 1
nInputs = 3

if __name__ == "__main__":

    # Retrieve the input data
    tensorData = readTensorFromCSV(datasetName)
    tensorDataCollection = layers.LayerData()

    for i in range(nInputs):
        tensorDataCollection[i] = tensorData

    # Create an algorithm to compute forward concatenation layer results using default method
    concatLayerForward = layers.concat.forward.Batch(concatDimension)

    # Set input objects for the forward concatenation layer
    concatLayerForward.input.setInputLayerData(layers.forward.inputLayerData, tensorDataCollection)

    # Compute forward concatenation layer results
    forwardResult = concatLayerForward.compute()

    printTensor(forwardResult.getResult(layers.forward.value),
                "Forward concatenation layer result value (first 5 rows):", 5)

    # Create an algorithm to compute backward concatenation layer results using default method
    concatLayerBackward = layers.concat.backward.Batch(concatDimension)

    # Set inputs for the backward concatenation layer
    concatLayerBackward.input.setInput(layers.backward.inputGradient, forwardResult.getResult(layers.forward.value))
    concatLayerBackward.input.setInputLayerData(layers.backward.inputFromForward,
                                                forwardResult.getResultLayerData(layers.forward.resultForBackward))

    printNumericTable(forwardResult.getLayerData(layers.concat.auxInputDimensions), "auxInputDimensions")

    # Compute backward concatenation layer results
    backwardResult = concatLayerBackward.compute()

    for i in range(tensorDataCollection.size()):
        printTensor(backwardResult.getResultLayerData(layers.backward.resultLayerData, i),
                    "Backward concatenation layer backward result (first 5 rows):", 5)
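The three inputs above are identical copies of one tensor, so the forward step is equivalent to concatenating them along dimension 1, and the backward step splits the incoming gradient back into blocks of the original shape (auxInputDimensions records those sizes). A minimal NumPy sketch of these semantics (NumPy and the small shape are illustrative assumptions; the example itself does not use NumPy):

import numpy as np

# Three identical inputs, mirroring the example above (shape is hypothetical)
t = np.arange(12.0).reshape(3, 4)
inputs = [t, t, t]

# Forward concat along dimension 1: three (3, 4) tensors -> one (3, 12) tensor
forward = np.concatenate(inputs, axis=1)

# Backward concat: split the incoming gradient back into the original shapes
sections = np.cumsum([a.shape[1] for a in inputs])[:-1]  # split points: [4, 8]
gradients = np.split(forward, sections, axis=1)          # three (3, 4) blocks

assert all(g.shape == t.shape for g in gradients)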
