
19: Parallel Processing


    In the context of computing, parallel processing (For more information, refer to: http://en.Wikipedia.org/wiki/Parallel_processing) or, more generically, concurrency (For more information, refer to: http://en.Wikipedia.org/wiki/Concurr...puter_science)) refers to multiple processes appearing to execute simultaneously.

    In broad terms, concurrency implies multiple different (not necessarily related) processes making progress during the same period of time. This can be accomplished in multiple ways. For example, process A could be executing on core 0 while process B is simultaneously executing on core 1, so the two execute in parallel. Alternatively, the execution of processes A and B could be interleaved on a single core: process A executes for a period of time and is then paused while process B executes for a period of time, after which process A is resumed. This continues until one or both are completed. If the interleaving is performed often and fast enough, each process appears to execute simultaneously.
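    The single-core interleaved case can be sketched as a simulation: one loop plays the role of one core, alternating small "time slices" between two tasks until both finish. This is an illustrative assumption only (the `run_interleaved` function, the task labels, and the slice size are not from the text); a real system's scheduler performs this switching preemptively.

    ```c
    #include <stdio.h>

    /* Simulated interleaving on a single core: alternate "time slices"
       between task A and task B, recording which task ran each unit of
       work. (Illustrative sketch; names and sizes are assumptions.) */
    void run_interleaved(int work, int slice, char *log)
    {
        int a = 0, b = 0, n = 0;

        /* round-robin: give each task up to 'slice' units per turn */
        while (a < work || b < work) {
            for (int i = 0; i < slice && a < work; i++, a++)
                log[n++] = 'A';
            for (int i = 0; i < slice && b < work; i++, b++)
                log[n++] = 'B';
        }
        log[n] = '\0';
    }

    int main(void)
    {
        char log[32];
        run_interleaved(6, 2, log);
        printf("%s\n", log);   /* prints AABBAABBAABB */
        return 0;
    }
    ```

    The alternating `A`/`B` pattern in the log is the interleaving: neither task truly runs at the same instant as the other, yet both make steady progress.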

    The term parallel processing implies that processes are executing simultaneously. These processes may be unrelated or working together as a coordinated unit to solve a single complex problem.

    This chapter provides an overview of some basic approaches to parallel processing as applied to a single problem. Assume that a large problem can be divided into various sub-problems and that those sub-problems can be executed independently. The sub-solutions would then be combined to provide a final solution for the problem. If the sub-problems can be executed simultaneously, a final solution might be found significantly faster than if the sub-problems are executed in series (one after another). The potential rewards for implementing concurrency are significant, but so are the challenges. Unfortunately, not all problems can be easily divided into such sub-problems.
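    The divide/compute/combine idea above can be sketched with a small example, using POSIX threads as one possible mechanism. Summing an array is split into two independent sub-problems (the two halves), each computed by its own thread, and the partial sums are combined after both threads complete. The `parallel_sum` and `sum_range` names are illustrative assumptions, not part of the text.

    ```c
    #include <pthread.h>
    #include <stdio.h>

    /* One sub-problem: sum the elements data[lo..hi-1]. */
    struct range {
        const int *data;
        int lo, hi;
        long sum;        /* sub-solution, filled in by the thread */
    };

    static void *sum_range(void *arg)
    {
        struct range *r = arg;
        r->sum = 0;
        for (int i = r->lo; i < r->hi; i++)
            r->sum += r->data[i];
        return NULL;
    }

    /* Divide the array into two halves, compute each half in its own
       thread, and combine the two partial sums. (Illustrative sketch.) */
    long parallel_sum(const int *data, int n)
    {
        pthread_t t0, t1;
        struct range lo = { data, 0, n / 2, 0 };
        struct range hi = { data, n / 2, n, 0 };

        pthread_create(&t0, NULL, sum_range, &lo);  /* sub-problem 1 */
        pthread_create(&t1, NULL, sum_range, &hi);  /* sub-problem 2 */
        pthread_join(t0, NULL);                     /* wait for both */
        pthread_join(t1, NULL);
        return lo.sum + hi.sum;                     /* combine */
    }

    int main(void)
    {
        int data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        printf("sum = %ld\n", parallel_sum(data, 8));  /* sum = 36 */
        return 0;
    }
    ```

    The two halves share no data, so no synchronization beyond the final joins is needed; sub-problems that must update shared memory are where the technical challenges discussed later arise.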

    The basic approaches to parallel processing are distributed computing(For more information, refer to: http://en.Wikipedia.org/wiki/Distributed_computing) and multiprocessing(For more information, refer to: http://en.Wikipedia.org/wiki/Multiprocessing) (also referred to as threaded computations). Each approach is explained with an emphasis on the technical issues associated with shared memory multiprocessing. The larger topic of learning to create parallel algorithms is outside the scope of this text.


    This page titled 19: Parallel Processing is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by Ed Jorgensen.
