Key Concept: Parallel Computing
Written by Jeff Sale
Friday, 28 May 2010 13:27
Parallel computing is the concurrent use of multiple processors (CPUs) to do computational work. In traditional (serial) programming, a single processor executes the program instructions step by step. Some operations, however, consist of multiple steps with no time dependencies among them, and can therefore be broken up into tasks that execute simultaneously. For example, adding a number to every element of a matrix does not require that the sum for one element be completed before the sum for the next begins. The elements can be distributed among several processors and the sums performed simultaneously, producing the result much more quickly than if the operations had all been performed serially.
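As a minimal sketch of this idea on a shared-memory system, the matrix addition could be written in C with OpenMP, one common parallel library; the array size and the constant added are arbitrary choices for illustration, not taken from the article.

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000   /* illustrative array size */

    int main(void) {
        static double matrix[N];

        /* Initialize the matrix serially. */
        for (int i = 0; i < N; i++)
            matrix[i] = (double)i;

        /* Each iteration is independent of the others, so the loop
           can be split across threads; OpenMP divides the index
           range among the available processors. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            matrix[i] += 5.0;   /* add a constant to every element */

        printf("matrix[42] = %f\n", matrix[42]);
        return 0;
    }

Compiled with OpenMP support (for example, gcc -fopenmp), the additions run concurrently on however many cores are available; with a single core, the same code simply runs serially.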
Parallel computations can be performed on shared-memory systems with multiple CPUs, or on distributed-memory clusters made up of smaller shared-memory systems or single-CPU systems. Coordinating the concurrent work of the multiple processors and synchronizing the results are handled by program calls to parallel libraries, as sketched below; these tasks usually require parallel programming expertise. At Indiana University, the High Performance Applications team offers programmers help converting serial code to parallel code and optimizing the performance of parallel codes.
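For the distributed-memory case, a hedged sketch of the same element-wise addition using MPI (a widely used message-passing library) is shown below. Here each process owns its own block of the array, and library calls coordinate the processes and combine the results; the block size and the final reduction are illustrative assumptions, not details from the article.

    #include <stdio.h>
    #include <mpi.h>

    #define N 8   /* elements per process; illustrative */

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each process holds its own block of the distributed array. */
        double block[N];
        for (int i = 0; i < N; i++)
            block[i] = rank * N + i;

        /* The additions proceed concurrently, one block per process. */
        for (int i = 0; i < N; i++)
            block[i] += 5.0;

        /* Synchronize the results: combine per-process partial sums
           on process 0 via a library call. */
        double local_sum = 0.0, total = 0.0;
        for (int i = 0; i < N; i++)
            local_sum += block[i];
        MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of all elements = %f\n", total);

        MPI_Finalize();
        return 0;
    }

Run under an MPI launcher (for example, mpirun -np 4 ./a.out), each process computes on its own memory, and only the MPI_Reduce call moves data between them; this is the coordination and synchronization work that parallel libraries handle.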
Last Updated on Thursday, 15 September 2011 21:42