Amdahl's Law

  • There's a well-known equation, called Amdahl's law, for estimating the speedup that a parallel program can achieve; it's named after Gene Amdahl, the computer scientist who formulated it.

  • It's a way to estimate how much bang for your buck you'll actually get by parallelizing a program.

  • In this equation for Amdahl's law, P represents the portion of a program that can be made parallel, and S is the speedup for that parallelized portion of the program running on multiple processors.

    • overall speedup = 1 / ((1 - P) + P/S)
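
The formula maps directly onto a one-line function. Here's a minimal sketch (the name amdahl_speedup is just for this illustration):

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Amdahl's law: estimate overall speedup given the parallelizable
    portion p (0.0 to 1.0) and the speedup s of that portion."""
    return 1.0 / ((1.0 - p) + p / s)
```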

    • For example, if 95% of our cupcake decorating program can be executed in parallel, and doing so with two processors produces a speedup of two for that part of the program, then the theoretical speedup for the entire program is about 1.9, which is a little bit less than two. If we add a third processor to increase the speedup for the parallelized portion to three, then the overall speedup will be around 2.7. Using four processors gives us an overall speedup of about 3.5, and so on.

    • Now let's say we spend a lot of money and buy a computer with 1000 processing cores. Instinctively, I would expect that to give me a speedup at least somewhere close to 1000. That'd be great. But according to Amdahl's law, the overall speedup with 1000 processors will only be around 19.6.

    • If I go wild and bump it up to a million processors, the overall speedup only increases to slightly less than 20.
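
A short sketch reproducing the numbers above, assuming the same illustrative amdahl_speedup helper (redefined here so the snippet runs on its own):

```python
def amdahl_speedup(p, s):
    """Amdahl's law: overall speedup for parallel portion p sped up by s."""
    return 1.0 / ((1.0 - p) + p / s)

# 95% parallelizable program; s is the speedup of the parallel portion,
# which in this example we equate with the number of processors
for s in (2, 3, 4, 1000, 1_000_000):
    print(f"{s:>9} processors -> overall speedup {amdahl_speedup(0.95, s):.2f}")

# Prints roughly: 1.90, 2.73, 3.48, 19.63, 20.00
```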

    • 95% of our program is parallelizable, but the 5% that isn't, the part of the program that has to execute sequentially, creates an upper limit on the speedup we can achieve. A million processors might be able to execute the parallel portion of the program in the blink of an eye, but that sequential 5% will still take the same amount of time as it would with just one processor.

    • As the number of processors increases, the speedup of our 95% parallelizable program rises until it eventually maxes out at 20. If only 90% of a program can be parallelized, then at best we'll get a speedup of 10. A 75% parallelizable program has a maximum speedup of four, and if only 50% of a program can be executed in parallel, then even with an infinite number of processors, the best we can achieve is a measly speedup of two. If that's all we can get, then we might decide that it's not worth the effort to write the program so it will run in parallel.
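
Those ceilings come from taking the limit of the formula: as the parallel portion's speedup S grows without bound, the overall speedup approaches 1 / (1 - P). A quick sketch of that bound:

```python
# Upper limit on overall speedup as S -> infinity: 1 / (1 - P)
for p in (0.95, 0.90, 0.75, 0.50):
    print(f"P = {p:.2f} -> maximum speedup {1.0 / (1.0 - p):g}")

# P = 0.95 -> 20, P = 0.90 -> 10, P = 0.75 -> 4, P = 0.50 -> 2
```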

  • Amdahl's law illustrates why using multiple processors for parallel computing is only really useful for programs that are highly parallelizable.

  • When first learning about parallel programming, it's natural to be excited and want to parallelize everything. Computers have lots of cores nowadays, so why not make everything as parallel as possible? That's a common trap to fall into. Just because you can write programs to be parallel doesn't mean you always should, because the overhead costs associated with parallelization can sometimes outweigh the benefits.

    Amdahl's law is one handy tool for estimating the benefits of parallelizing a program and deciding whether or not it makes sense to do so.
