Part 2: AVX Intrinsics

The AVX intrinsics are very similar to the SSE2 intrinsics, and follow a similar naming convention. They are defined in the header file immintrin.h, which is available whenever your compiler supports AVX, as indicated by the __AVX__ macro.

#ifdef __AVX__
  #include <immintrin.h>
#else
  #warning AVX is not available. Code will not compile!
#endif

You will need to pass a flag to your compiler to switch on AVX support. For GCC and Clang the flag is -mavx. This is needed because code compiled with AVX support will not run on processors that don’t support AVX (the program will just crash with an “unsupported instruction” or “invalid instruction” error).
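If you want a single executable that can run safely on processors both with and without AVX, you can also test for support at run time. As a minimal sketch (not one of the workshop files), GCC and Clang provide the __builtin_cpu_supports builtin for this;

#include <iostream>

int main(int argc, char **argv)
{
    // __builtin_cpu_supports queries the processor at run time,
    // returning non-zero if the named feature (here AVX) is available
    if (__builtin_cpu_supports("avx"))
    {
        std::cout << "This processor supports AVX" << std::endl;
    }
    else
    {
        std::cout << "This processor does not support AVX" << std::endl;
    }

    return 0;
}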

The immintrin.h header file defines a set of data types that represent different types of vectors. These are;

- __m256 : a 256-bit vector that holds eight floats
- __m256d : a 256-bit vector that holds four doubles
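As a quick check (a minimal sketch, not one of the workshop files), you can print the sizes of these types to confirm that each is 256 bits (32 bytes) wide;

#include <iostream>
#include <immintrin.h>

int main(int argc, char **argv)
{
    // both AVX vector types occupy 32 bytes
    std::cout << "sizeof(__m256) = " << sizeof(__m256) << std::endl;
    std::cout << "sizeof(__m256d) = " << sizeof(__m256d) << std::endl;

    return 0;
}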

Several functions are defined that operate on __m256 vectors, e.g.

- _mm256_set_ps(v7, v6, v5, v4, v3, v2, v1, v0) : returns an __m256 vector that contains the eight passed floats (note that the arguments are passed in reverse order),
- _mm256_set1_ps(v) : returns an __m256 vector with all eight floats set to the value v,
- _mm256_add_ps(a, b) : adds the two vectors a and b together, element by element,
- _mm256_sub_ps(a, b) : subtracts b from a, element by element,
- _mm256_mul_ps(a, b) : multiplies a and b together, element by element,
- _mm256_div_ps(a, b) : divides a by b, element by element,
- _mm256_sqrt_ps(a) : returns the element-by-element square root of a,
- _mm256_storeu_ps(d, a) : copies the eight floats of a into the float array d.

In addition to functions that operate on __m256 float vectors, there are equivalent functions that operate on __m256d double vectors. The functions are named similarly to the float vector functions, except _ps (which stands for “packed single”) is replaced by _pd (which stands for “packed double”). For example;

- _mm256_set_pd(v3, v2, v1, v0) : returns an __m256d vector that contains the four passed doubles,
- _mm256_add_pd(a, b) : adds the two double vectors a and b together, element by element,
- _mm256_sqrt_pd(a) : returns the element-by-element square root of a.
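For example, here is a minimal sketch (not one of the workshop files) that adds two __m256d double vectors, mirroring the float program below;

#include <iostream>
#include <immintrin.h>

int main(int argc, char **argv)
{
    // an __m256d holds four doubles - like _mm256_set_ps,
    // _mm256_set_pd takes its arguments in reverse order
    __m256d a = _mm256_set_pd(4.0, 3.0, 2.0, 1.0);
    __m256d b = _mm256_set_pd(14.0, 13.0, 12.0, 11.0);

    __m256d c = _mm256_add_pd(a, b);

    double d[4];
    _mm256_storeu_pd(d, c);

    std::cout << d[0] << "," << d[1] << ","
              << d[2] << "," << d[3] << std::endl;

    return 0;
}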

The above is a lot of information. Let’s now try a simple program that creates two __m256 vectors and calculates their sum. Create a new file called avx.cpp and copy into it;

#include <iostream>

#ifdef __AVX__
  #include <immintrin.h>
#else
  #warning No AVX support - will not compile
#endif

int main(int argc, char **argv)
{
    // note that _mm256_set_ps takes its arguments in reverse order,
    // so a holds [1,2,3,4,5,6,7,8] and b holds [11,12,...,18]
    __m256 a = _mm256_set_ps(8.0, 7.0, 6.0, 5.0,
                             4.0, 3.0, 2.0, 1.0);
    __m256 b = _mm256_set_ps(18.0, 17.0, 16.0, 15.0,
                             14.0, 13.0, 12.0, 11.0);

    // add the two vectors together, element by element
    __m256 c = _mm256_add_ps(a, b);

    // copy the eight floats in c out into the array d
    float d[8];
    _mm256_storeu_ps(d, c);

    std::cout << "result equals " << d[0] << "," << d[1]
              << "," << d[2] << "," << d[3] << ","
              << d[4] << "," << d[5] << "," << d[6] << ","
              << d[7] << std::endl;

    return 0;
}

Compile and run using

g++ --std=c++14 -O2 -mavx avx.cpp -o avx
./avx

(note the addition of -mavx to switch on AVX support)

You should see output

result equals 12,14,16,18,20,22,24,26

This is because we have loaded [1,2,3,4,5,6,7,8] into a and [11,12,13,14,15,16,17,18] into b. We calculated the sum, which is [12,14,16,18,20,22,24,26], which was then printed out.

Try editing avx.cpp to use the other arithmetic functions (e.g. _mm256_mul_ps). Is the result what you expect?
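For example, replacing the addition with multiplication is a one-line change (the expected output here is my own calculation);

    // multiply element by element: [1*11, 2*12, ..., 8*18]
    __m256 c = _mm256_mul_ps(a, b);

which should print result equals 11,24,39,56,75,96,119,144.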

Manually vectorising a loop

Now that you know how to create and use AVX intrinsics, the next step is to use them to vectorise code. We will look here at vectorising the loop.cpp file that we first met in part 1.

Create a new file called avxloop.cpp and copy into it;

#include "workshop.h"

#ifdef __AVX__
  #include <immintrin.h>
#else
  #warning AVX not supported. Code will not compile
#endif

int main(int argc, char **argv)
{
    const int size = 512;

    auto a = workshop::Array<float>(size);
    auto b = workshop::Array<float>(size);
    auto c = workshop::Array<float>(size);

    // each __m256 holds eight floats, so we only need size/8
    // vectors to hold all of the data (the AlignedArray ensures
    // that the vectors are correctly aligned in memory)
    auto avx_a = workshop::AlignedArray<__m256>(size/8);
    auto avx_b = workshop::AlignedArray<__m256>(size/8);
    auto avx_c = workshop::AlignedArray<__m256>(size/8);


    for (int i=0; i<size; ++i)
    {
        a[i] = 1.0*(i+1);
        b[i] = 2.5*(i+1);
        c[i] = 0.0;
    }

    // fill the AVX vectors eight floats at a time, remembering
    // that _mm256_set_ps takes its arguments in reverse order
    for (int i=0; i<size; i+=8)
    {
        avx_a[i/8] = _mm256_set_ps(1.0*(i+7+1),
                                   1.0*(i+6+1),
                                   1.0*(i+5+1),
                                   1.0*(i+4+1),
                                   1.0*(i+3+1),
                                   1.0*(i+2+1),
                                   1.0*(i+1+1),
                                   1.0*(i+0+1));

        avx_b[i/8] = _mm256_set_ps(2.5*(i+7+1),
                                   2.5*(i+6+1),
                                   2.5*(i+5+1),
                                   2.5*(i+4+1),
                                   2.5*(i+3+1),
                                   2.5*(i+2+1),
                                   2.5*(i+1+1),
                                   2.5*(i+0+1));

        // _mm256_set1_ps sets all eight floats in the vector to the same value
        avx_c[i/8] = _mm256_set1_ps(0.0);
    }

    auto timer = workshop::start_timer();

    // time 100000 repetitions of the scalar loop
    for (int j=0; j<100000; ++j)
    {
        for (int i=0; i<size; ++i)
        {
            c[i] = a[i] + b[i];
        }
    }

    auto duration = workshop::get_duration(timer);

    timer = workshop::start_timer();

    // now time 100000 repetitions of the vectorised loop
    for (int j=0; j<100000; ++j)
    {
        // each _mm256_add_ps performs eight additions at once,
        // so the inner loop only needs size/8 iterations
        for (int i=0; i<size/8; ++i)
        {
            avx_c[i] = _mm256_add_ps(avx_a[i], avx_b[i]);
        }
    }

    auto vector_duration = workshop::get_duration(timer);

    std::cout << "The standard loop took " << duration
              << " microseconds to complete." << std::endl;

    std::cout << "The vectorised loop took " << vector_duration
              << " microseconds to complete." << std::endl;

    return 0;
}

Compile and run using

g++ -mavx -O2 --std=c++14 -Iinclude avxloop.cpp -o avxloop
./avxloop

You should see that the manually vectorised loop is nearly eight times faster than the scalar loop. On my computer, it is about 7.9 times faster, taking 6.2 ms versus 49.1 ms.

To manually vectorise the loop, we have had to make some changes to the code;

- the data had to be packed into __m256 vectors, held in AlignedArrays of size/8 vectors (as each __m256 holds eight floats),
- the vectors had to be filled using _mm256_set_ps, remembering that the arguments are passed in reverse order,
- the loop itself had to iterate over the size/8 vectors rather than the size floats, performing the addition using _mm256_add_ps.

Note that the number of iterations of our loop (512) was evenly divisible by 8. If this were not the case, we would have had to manually add additional scalar iterations of the loop to make up the shortfall. For example, if our loop used 514 iterations, then 512 could be performed using the vector loop, and the remaining 2 iterations would be performed using a scalar loop, as sketched below.
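A minimal sketch of this pattern (the size of 514 and the variable names here are illustrative, not from the workshop files);

    const int size = 514;

    // number of complete 8-float vectors, and where the remainder starts
    const int vector_size = size / 8;              // 64 vectors = 512 floats
    const int remainder_start = 8 * vector_size;   // = 512

    // vectorised loop over the complete vectors
    for (int i=0; i<vector_size; ++i)
    {
        avx_c[i] = _mm256_add_ps(avx_a[i], avx_b[i]);
    }

    // scalar loop to mop up the remaining size % 8 iterations
    for (int i=remainder_start; i<size; ++i)
    {
        c[i] = a[i] + b[i];
    }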

Exercises

Edit avxloop.cpp so that both loops calculate the square root of the sum of a and b, i.e. c[i] = std::sqrt(a[i] + b[i]) in the scalar loop, with the vector loop using _mm256_sqrt_ps. Time both versions, as sketched below.
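A sketch of the changed timing loops (std::sqrt needs the <cmath> header for the scalar version);

    // scalar loop - square root of the sum
    for (int i=0; i<size; ++i)
    {
        c[i] = std::sqrt(a[i] + b[i]);
    }

    // vector loop - _mm256_sqrt_ps computes eight square roots at once
    for (int i=0; i<size/8; ++i)
    {
        avx_c[i] = _mm256_sqrt_ps(_mm256_add_ps(avx_a[i], avx_b[i]));
    }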

Hopefully, you should see the (perhaps surprising) result that vector square root is significantly faster than scalar square root. On my computer, scalar square root takes about 211 ms, while vector square root takes 11 ms. This is about a 19 times speed up, which goes beyond the eight times speed up normally associated with vectorisation. The reason is that, like SSE2, AVX has an implementation of square root that is built directly into the processor. This hardware square root is exceptionally fast, and much faster than the scalar square root that is implemented in software. In addition, the hardware square root is an actual processor instruction rather than a function call, meaning that there are no call overheads. However, unlike SSE2, the AVX square root does increase the cost of the loop compared to simple addition (11 ms for square root plus addition, compared to 6 ms for addition only). Despite this, AVX square root is still faster than SSE2 square root (11 ms versus 13.5 ms).

This shows that if your code uses a lot of square roots, you can get a big performance boost by using manual vectorisation with AVX intrinsics.

