This is a brief introduction to parallel processing. In general, the term parallel processing (PP) can be defined as the simultaneous processing of a single algorithm by several processors in a computing platform.
If, on the other hand, a number of different algorithms are processed simultaneously by several processors in a computing platform, this is known as distributed processing. Parallel processing, distributed processing, or both together may be called multiprocessing.
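The idea of one algorithm executed simultaneously by several processors can be sketched in Python; this is a minimal illustration of my own, not taken from the text, and the function names and the choice of the standard `multiprocessing` library are assumptions:

```python
# Minimal sketch of parallel processing: a single algorithm (summing a list)
# is split across several worker processes, each running on its own processor.
from multiprocessing import Pool

def partial_sum(chunk):
    """Every worker runs the same algorithm on its own share of the data."""
    return sum(chunk)

def parallel_sum(data, workers=4):
    """Divide the data into chunks and sum them in parallel, then combine."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(processes=workers) as pool:
        # pool.map distributes the chunks over the worker processes.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same result as the sequential sum(range(1000)), computed in parallel.
    print(parallel_sum(list(range(1000))))
```

Distributed processing, by contrast, would correspond to the workers running several unrelated algorithms at once rather than cooperating on one.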
The hardware used for parallel processing is known as a parallel architecture. In a conventional parallel system, all the processors, or processing elements (PEs), are identical; such an architecture is described as homogeneous.
However, to meet the varied computational demands of many algorithms, architectures based on mixed PEs are becoming popular. A parallel architecture comprising a variety of PEs is described as a heterogeneous system.
Parallel processing has emerged as a key technology in modern computing to meet the increasing demand for higher performance, lower costs, and sustained productivity in real-life applications.
The concept of PP on different problems, or on different parts of the same problem, is not new. Discussions of parallel computing machines are found in the literature at least as far back as the 1920s, and there have been continuing research efforts to understand parallel computation.
Such efforts have intensified dramatically in the past few years, with thousands of projects around the world involving scores of different parallel architectures for all kinds of applications, including signal processing, control, artificial intelligence, pattern recognition, robotics, computer vision, computer-aided design, discrete event simulation, etc.