What next?

You have now seen how you can write efficient parallel C++ programs by expressing your algorithm in terms of collective operations such as map/reduce.

You have also been introduced to Intel’s Threading Building Blocks (TBB), which provides a free, portable and high-level framework for writing efficient task-based parallel programs. You have learned enough of TBB to see how to write a parallel implementation of map/reduce.
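As a brief recap, here is a minimal sketch of what such a map/reduce looks like using tbb::parallel_reduce. The sum-of-squares calculation and the variable names are purely illustrative, and you will need TBB installed to compile it.

```c++
#include <tbb/parallel_reduce.h>
#include <tbb/blocked_range.h>

#include <iostream>
#include <vector>

int main()
{
    // Illustrative data: the numbers 0 to 999
    std::vector<double> values(1000);
    for (size_t i = 0; i < values.size(); ++i)
    {
        values[i] = double(i);
    }

    // "map" each value to its square, then "reduce" the squares by
    // summing them, with TBB splitting the range across tasks
    double total = tbb::parallel_reduce(
        tbb::blocked_range<size_t>(0, values.size()),
        0.0,
        [&](const tbb::blocked_range<size_t> &r, double sum)
        {
            for (size_t i = r.begin(); i != r.end(); ++i)
            {
                sum += values[i] * values[i];   // the "map" step
            }
            return sum;                         // partial "reduce" for this range
        },
        [](double a, double b) { return a + b; }  // combine the partial sums
    );

    std::cout << "Sum of squares = " << total << std::endl;

    return 0;
}
```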

If you would like to learn more about TBB, then please check out the documentation on the website, or the much more useful book, Intel Threading Building Blocks, by James Reinders.

The concepts of map/reduce are common to other parallel programming libraries and languages. You can learn how to write map/reduce using OpenMP in my OpenMP course, and how to write it using MPI in my MPI course. There are also parallel map/reduce libraries available in Python, which are described in my parallel Python course.
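For comparison, here is a rough sketch of the same kind of map/reduce expressed with an OpenMP reduction clause. Again, the sum-of-squares example is just illustrative, and you would compile it with an OpenMP flag such as -fopenmp.

```c++
#include <iostream>
#include <vector>

int main()
{
    // Illustrative data: the numbers 0 to 999
    std::vector<double> values(1000);
    for (int i = 0; i < 1000; ++i)
    {
        values[i] = double(i);
    }

    double total = 0.0;

    // The loop body is the "map" (square each value), while the
    // reduction(+:total) clause performs the "reduce" (summing the squares)
    #pragma omp parallel for reduction(+:total)
    for (int i = 0; i < 1000; ++i)
    {
        total += values[i] * values[i];
    }

    std::cout << "Sum of squares = " << total << std::endl;

    return 0;
}
```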

