Intel® C++ Compiler 18.0 Developer Guide and Reference

Initiating an Offload on Intel® Graphics Technology

This topic only applies when targeting Intel® Graphics Technology.

Intel® Cilk™ Plus is a deprecated feature in the Intel® C++ Compiler 18.0. An alternative for offloading to the processor graphics is planned for a future release. For more information see Migrate Your Application to use OpenMP* or Intel® TBB Instead of Intel® Cilk™ Plus.

The code inside _Cilk_for loops or _Cilk_for loop nests following #pragma offload target(gfx), and in functions qualified with the target(gfx) or target(gfx_kernel) attribute, is compiled for both the target and the CPU. The attribute is written as __declspec(target(gfx)) (Windows* and Linux*) or __attribute__((target(gfx))) (Linux* only). Using target(gfx_kernel) also produces both host and target versions, but the target version cannot be called from within an offload region. Rather, it must be passed as an argument to the asynchronous offload API, which is discussed in Asynchronous Offloading.

You can place #pragma offload target(gfx) only before a parallel loop, a perfect parallel loop nest, or an Intel® Cilk™ Plus array notation statement. The parallel loop must be expressed using a _Cilk_for loop.

#pragma offload can contain several clauses when programming for Intel® Graphics Technology, including if, out, pin, signal, and wait; the examples below illustrate their use.

#pragma offload_wait target(gfx) wait (address_expression) has the same effect as #pragma offload target(gfx) wait(address_expression), except that no offload code block follows the pragma.

#pragma offload_transfer enables you to transfer data between the host and target without offloading any computation. This pragma supports the same clauses as #pragma offload. You can use signal and wait clauses to organize asynchronous data transfer.

Note

Using pin substantially reduces the cost of offloading: instead of copying data to or from memory accessible by the target, the pin clause arranges for the host and the target to share the same physical memory area, which is much faster. For kernels that perform substantial work, such as O(N²) computation, on a relatively small data set, this optimization is less important.

However, pinning makes the operating system lock the pinned memory pages so that they cannot be swapped out, so excessive pinning may degrade overall system performance.

Although by default the compiler builds an application that runs on both the host CPU and target, you can also compile the same source code to run on just the CPU, using the negative form of the [Q]offload compiler option.

Example: Offloading to the Target

unsigned parArrayRHist[256][256],
     parArrayGHist[256][256], parArrayBHist[256][256];

#pragma offload target(gfx) if (do_offload) \
     pin(inputImage: length(imageSize)) \
     out(parArrayRHist, parArrayGHist, parArrayBHist)

     _Cilk_for (int ichunk = 0; ichunk < chunkCount; ichunk++) {
          …
     }

In the example above, the offload executes only if do_offload is nonzero; the inputImage array is pinned, so the host and target share its physical pages instead of copying imageSize elements; and the three histogram arrays are copied back from the target when the offload completes.

Example: Offloading Using Perfectly Nested _Cilk_for Loops

float (* A)[k] = (float (*)[k])matA;
float (* B)[n] = (float (*)[n])matB;
float (* C)[n] = (float (*)[n])matC;

#pragma offload target(gfx) if (do_offload) \
     pin(A: length(m*k)), pin(B: length(k*n)), pin(C: length(m*n))

     _Cilk_for (int r = 0; r < m; r += TILE_m) {
          _Cilk_for (int c = 0; c < n; c += TILE_n) {
               …
          }
     }

In the example above, the perfectly nested _Cilk_for loops are offloaded together as a single parallel loop nest, and the A, B, and C matrices are pinned, so the host and target share their physical pages instead of copying m*k, k*n, and m*n elements.

Example: Asynchronous Offloading to the Target

In this example, initialization of the in1 array is offloaded to the processor graphics while, in parallel, the host initializes in2. Then in1 and in2 are used in the computation of out, which is performed on the host.

To ensure that the initialization of in1 is complete, the omp taskwait pragma appears before computing out.

When you invoke the compiler, you must include the following compiler options to enable offloading to the processor graphics with the help of OpenMP* syntax: /Qopenmp /Qopenmp-offload=gfx (Windows*) or -qopenmp -qopenmp-offload=gfx (Linux*)


    int* in1 = (int*)malloc(SIZE * sizeof(int));
    int* in2 = (int*)malloc(SIZE * sizeof(int));
    int* out = (int*)malloc(SIZE * sizeof(int));

#pragma omp target map(tofrom: in1[0:SIZE]) nowait
#pragma omp parallel for
    for (int i = 0; i < SIZE; i++) {
        in1[i] = 1;
    }

#pragma omp parallel for
    for (int i = 0; i < SIZE; i++) {
        in2[i] = i;
    }

#pragma omp taskwait

#pragma omp parallel for
    for (int i = 0; i < SIZE; i++) {
        out[i] = in1[i] + in2[i];
    }

This example uses code that is roughly equivalent to the OpenMP example above, but it is written using #pragma offload syntax. To ensure that the initialization of in1 is complete, the offload_wait pragma appears before computing out.

    int* in1 = (int*)malloc(SIZE * sizeof(int));
    int* in2 = (int*)malloc(SIZE * sizeof(int));
    int* out = (int*)malloc(SIZE * sizeof(int));

#pragma offload target(gfx) pin(in1: length(SIZE)) signal(in1)
    _Cilk_for (int i = 0; i < SIZE; i++) {
        in1[i] = SIZE - i;
    }

    _Cilk_for (int i = 0; i < SIZE; i++) {
        in2[i] = i;
    }

#pragma offload_wait target(gfx) wait(in1)

    _Cilk_for (int i = 0; i < SIZE; i++) {
        out[i] = in1[i] + in2[i];
    }

See Also