At work recently, I found myself trying to explain the Work-Efficiency vs. Step-Efficiency tradeoff to a coworker, but when I searched for online resources to help, I couldn’t find any that I liked, so I decided to take a shot at writing my own. I first came across this idea in a video lecture series about programming for GPGPUs on YouTube a while ago. However, it’s just as applicable to any form of parallel processing, from SIMD instructions running on a single CPU core up to massive clusters of thousands of computers.
In short: when talking about parallel algorithms, there is an important difference between work efficiency (how many total operations an algorithm performs) and step efficiency (how many sequential steps it needs when those operations run in parallel).
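As a concrete illustration, here is a minimal sketch (my own, with made-up function names) comparing a sequential prefix sum against the classic Hillis-Steele parallel scan. The sequential version is work-efficient but every addition depends on the previous one; the parallel version finishes in logarithmically many steps but does asymptotically more total work:

```python
def sequential_scan(xs):
    """Work-efficient: n-1 additions, but each one depends on the
    previous result, so it also takes n-1 sequential steps."""
    out = [xs[0]]
    work = 0
    for x in xs[1:]:
        out.append(out[-1] + x)
        work += 1
    return out, work, work  # (result, work, steps); steps == work here

def hillis_steele_scan(xs):
    """Step-efficient: finishes in ceil(log2 n) steps because all
    additions within a pass are independent, but total work grows
    to O(n log n)."""
    xs = list(xs)
    n = len(xs)
    work = steps = 0
    d = 1
    while d < n:
        # All n - d additions in this pass could run in parallel.
        xs = [xs[i] + (xs[i - d] if i >= d else 0) for i in range(n)]
        work += n - d
        steps += 1
        d *= 2
    return xs, work, steps

data = list(range(1, 17))  # 16 elements
_, w_seq, s_seq = sequential_scan(data)
_, w_par, s_par = hillis_steele_scan(data)
print(f"sequential:    work={w_seq:2d}  steps={s_seq}")  # work=15, steps=15
print(f"hillis-steele: work={w_par:2d}  steps={s_par}")  # work=49, steps=4
```

With enough processors, the step-efficient version is much faster in wall-clock terms even though it burns roughly three times as many additions on these 16 elements.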
This is an application of Amdahl’s Law, which comes up all the time in parallel computing: the more parallel computing power is available, the more the work itself needs to be parallelizable, otherwise you will be leaving computing power on the table.
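To see that effect numerically, here is a short sketch of Amdahl’s Law: if a fraction p of the work can be parallelized, the speedup on n processors is capped at 1 / ((1 - p) + p/n), so the serial fraction dominates as n grows:

```python
def amdahl_speedup(p, n):
    """Amdahl's Law: maximum speedup on n processors when a
    fraction p of the work can be parallelized."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, adding processors
# gives rapidly diminishing returns:
for n in (2, 8, 64, 1024):
    print(f"{n:5d} processors -> {amdahl_speedup(0.95, n):6.2f}x speedup")
# Output approaches the 1 / (1 - p) = 20x ceiling:
#     2 processors ->   1.90x speedup
#     8 processors ->   5.93x speedup
#    64 processors ->  15.42x speedup
#  1024 processors ->  19.64x speedup
```

Past a certain point, the only way to keep those extra processors busy is to restructure the work so that more of it is parallelizable, which is exactly where the work-efficiency vs. step-efficiency tradeoff comes in.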