MEANING
Parallel processing is the use of more than one CPU to execute a program
simultaneously. Ideally, parallel processing makes a program run faster as
more CPUs are used.
GOAL
The main goal of parallel processing is to increase computing performance:
the more operations that can be done simultaneously, the more work can be
accomplished in the same amount of time.
PARALLEL PROCESSING
Parallel Computing
Parallel computing is a technique for performing computations simultaneously
using several computers. It is normally needed when the required capacity is
very large, either because large amounts of data must be processed or because
the computation itself demands a lot of processing.
To perform the various types of parallel computing, the necessary
infrastructure is a parallel machine: many computers connected by a network
and able to work in parallel to solve a problem. Supporting software,
commonly referred to as middleware, is also necessary; it regulates the
distribution of jobs among the nodes of a single parallel machine. Finally,
the user must write a parallel program to realize the computation.
Parallel programming itself is a computer programming technique that allows
commands/operations to be executed concurrently. When the computation is
carried out by separate computers connected in a network, the system is
usually called a distributed system. Popular programming standards used in
parallel programming are MPI (Message Passing Interface) and PVM (Parallel
Virtual Machine).
The thing to remember is that parallel computing is different from
multitasking. Multitasking means a computer with a single processor executes
multiple tasks seemingly simultaneously. Some people working in the area of
operating systems assume that a single computer cannot do multiple jobs at
once, but the operating system's scheduler switches between tasks so quickly
that the computer appears to perform them simultaneously. Parallel computing,
as described previously, uses multiple processors or computers. In addition,
parallel computing does not use the Von Neumann architecture.
To clarify the difference between single computing (using one processor) and
parallel computing (using multiple processors), we must first understand the
models of computation. There are 4 models of computation in use, namely:
1. SISD
2. SIMD
3. MISD
4. MIMD
1. SISD
SISD stands for Single Instruction, Single Data. It is the only model that
uses the Von Neumann architecture, because it uses only one processor.
Therefore, this model can be regarded as the model for single computing,
while the other three models are parallel computing using multiple
processors. Some examples of computers using the SISD model are the
UNIVAC 1, IBM 360, CDC 7600, Cray 1, and PDP-1.
2. SIMD
SIMD stands for Single Instruction, Multiple Data. SIMD uses many processors
executing the same instructions, but each processor processes different data.
For example, suppose we want to find the number 27 in a series of 100
numbers, and we use 5 processors. Each processor runs the same algorithm or
command, but processes different data: processor 1 processes the data from
position 1 to position 20, processor 2 from position 21 to position 40, and
so on for the other processors. Some examples of SIMD computers are the
ILLIAC IV, MasPar, Cray X-MP, Cray Y-MP, Thinking Machines CM-2, and the
Cell processor; GPUs also largely follow this model.
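The search example above can be sketched with Python's standard
multiprocessing module (a sketch only: Python processes stand in for SIMD
lanes here, illustrating the same-instruction/different-data idea rather
than real hardware SIMD, and the function names are ours):

```python
from multiprocessing import Pool

def search_chunk(args):
    """Same instruction on every worker: scan a chunk for the target."""
    chunk, offset, target = args
    # Return the absolute positions where the target occurs in this chunk.
    return [offset + i for i, value in enumerate(chunk) if value == target]

def parallel_search(data, target, workers=5):
    """Split the data into equal chunks, one per worker (SIMD-style).

    For simplicity this assumes len(data) is divisible by workers.
    """
    size = len(data) // workers
    chunks = [(data[i * size:(i + 1) * size], i * size, target)
              for i in range(workers)]
    with Pool(workers) as pool:
        results = pool.map(search_chunk, chunks)
    # Flatten the per-worker results into one list of positions.
    return [pos for sub in results for pos in sub]

if __name__ == "__main__":
    numbers = list(range(100))           # 100 numbers: 0..99
    print(parallel_search(numbers, 27))  # → [27]
```

Every worker executes the identical search_chunk instruction; only the slice
of data it receives differs, which is exactly the SIMD division of labor
described above.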
3. MISD
MISD stands for Multiple Instruction, Single Data. MISD uses many processors,
each executing different instructions but processing the same data; this is
the opposite of the SIMD model. We can use the same case as in the SIMD
example, but with a different way of solving it: in MISD, the first, second,
third, fourth, and fifth processors all process the data from positions 1 to
100, but each processor uses a different search algorithm. To date, no
computer implementing the MISD model exists.
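Although no MISD hardware exists, the idea (same data, different
instructions) can be mimicked in the same Python sketch style: different
search strategies, all scanning one shared dataset in parallel (the strategy
functions are invented for illustration; binary_search assumes the data is
sorted):

```python
from multiprocessing import Pool

def scan_forward(data, target):
    """Strategy 1: linear scan from the front."""
    for i, value in enumerate(data):
        if value == target:
            return i
    return -1

def scan_backward(data, target):
    """Strategy 2: linear scan from the back."""
    for i in range(len(data) - 1, -1, -1):
        if data[i] == target:
            return i
    return -1

def binary_search(data, target):
    """Strategy 3: binary search (requires sorted data)."""
    lo, hi = 0, len(data)
    while lo < hi:
        mid = (lo + hi) // 2
        if data[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo if lo < len(data) and data[lo] == target else -1

def misd_search(data, target):
    """MISD-style: different instructions, one and the same dataset."""
    strategies = [scan_forward, scan_backward, binary_search]
    with Pool(len(strategies)) as pool:
        jobs = [pool.apply_async(s, (data, target)) for s in strategies]
        return [job.get() for job in jobs]

if __name__ == "__main__":
    print(misd_search(list(range(100)), 27))  # → [27, 27, 27]
```

All three workers receive the identical data; only the instructions differ,
which is the defining property of the MISD model.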
4. MIMD
MIMD stands for Multiple Instruction, Multiple Data. MIMD uses many
processors, each with its own instructions and its own data to process.
However, many computers that use the MIMD model also include components of
the SIMD model. Some computers that use the MIMD model are the IBM POWER5,
HP/Compaq AlphaServer, Intel IA-32, AMD Opteron, Cray XT3, and IBM BG/L.
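MIMD, by contrast, gives every worker both its own instructions and its own
data. A minimal sketch in the same style (the task functions and datasets
here are invented purely for illustration):

```python
from multiprocessing import Pool

def total(data):
    """Task 1: sum its own dataset."""
    return sum(data)

def largest(data):
    """Task 2: maximum of a different dataset."""
    return max(data)

def count_even(data):
    """Task 3: count even values in yet another dataset."""
    return sum(1 for v in data if v % 2 == 0)

def mimd_demo():
    """Each worker runs a different instruction stream on different data."""
    tasks = [(total, [1, 2, 3, 4]),
             (largest, [7, 42, 5]),
             (count_even, list(range(10)))]
    with Pool(len(tasks)) as pool:
        jobs = [pool.apply_async(fn, (data,)) for fn, data in tasks]
        return [job.get() for job in jobs]

if __name__ == "__main__":
    print(mimd_demo())  # → [10, 42, 5]
```

Unlike the SIMD and MISD sketches, nothing is shared here: instructions and
data both vary per worker, which is why MIMD is the most general of the four
models.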
[Figure: problem solving in single computing]
[Figure: problem solving in parallel computing]
From the difference between the two images above, we can conclude that
parallel computing performs more effectively and saves data-processing time
compared to single computing.
From the explanations above, we can answer why and when we need to use
parallel computing. The answer is that parallel computing saves much more
time and is highly effective when we have to process large amounts of data.
However, that effectiveness is lost when we only process small amounts of
data; for a small amount of data, single computing is more effective.
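The point about small inputs can be demonstrated with a rough timing sketch
(absolute numbers vary by machine; for a tiny workload like this, the cost
of starting worker processes usually makes the parallel version slower, and
the gap reverses as the work per item grows):

```python
import time
from multiprocessing import Pool

def square(x):
    return x * x

def serial(data):
    """Single computing: one processor does everything."""
    return [square(x) for x in data]

def parallel(data, workers=4):
    """Parallel computing: the work is spread over several processes."""
    with Pool(workers) as pool:
        return pool.map(square, data)

if __name__ == "__main__":
    small = list(range(100))  # deliberately tiny workload
    t0 = time.perf_counter(); serial(small);   t1 = time.perf_counter()
    parallel(small);                           t2 = time.perf_counter()
    # Both produce identical results; only the elapsed time differs.
    print(f"serial: {t1 - t0:.6f}s  parallel: {t2 - t1:.6f}s")
```

Both versions compute the same answer; the fixed overhead of creating the
pool is what dominates at small input sizes.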
Parallel computing requires:
· an algorithm
· a programming language
· a compiler
Parallel programming is a computer programming technique that allows
commands/operations to be executed simultaneously, whether on a computer
with one CPU (single processor) or with multiple CPUs (parallel machines
with multiple processors).
The main goal of parallel programming
is to increase computing performance.
* Message Passing Interface (MPI)
MPI is a programming standard that allows programmers to create applications
that can run in parallel. MPI provides functions for exchanging messages
between processes. Other uses of MPI are to:
1. write portable parallel code,
2. achieve high performance in parallel programming, and
3. handle problems involving irregular or dynamic data relationships that do
not fit well into the data-parallel model.
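Real MPI programs are launched with an MPI runtime, but the send/receive
style that MPI standardizes can be sketched with Python's standard-library
Pipe (this is not MPI itself, only an illustration of the message-passing
model; names such as scatter_and_gather are ours):

```python
from multiprocessing import Process, Pipe

def worker(conn, rank):
    """Each worker receives its chunk, processes it, and sends back a result."""
    chunk = conn.recv()            # blocking receive, like MPI's Recv
    conn.send((rank, sum(chunk)))  # send the partial result back, like Send
    conn.close()

def scatter_and_gather(data, workers=4):
    """Master scatters chunks to the workers and gathers the partial sums.

    For simplicity this assumes len(data) is divisible by workers.
    """
    size = len(data) // workers
    procs, conns = [], []
    for rank in range(workers):
        parent_end, child_end = Pipe()
        p = Process(target=worker, args=(child_end, rank))
        p.start()
        procs.append(p)
        conns.append(parent_end)
        parent_end.send(data[rank * size:(rank + 1) * size])
    results = [conn.recv() for conn in conns]
    for p in procs:
        p.join()
    return sum(total for _, total in results)

if __name__ == "__main__":
    print(scatter_and_gather(list(range(100))))  # → 4950
```

The scatter/gather pattern shown here mirrors how MPI programs commonly
distribute work: explicit messages carry both the input chunks and the
partial results, with no shared memory between the processes.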
Relationship between Modern Computing and Parallel Processing
Modern computing and parallel processing are closely related, because
computing is now expected to solve problems much faster than they could be
solved manually. Improving computing performance therefore matters more and
more, and one way to achieve it is to increase the speed of the hardware,
whose main component is the processor. Parallel processing contributes to
this by using multiple processors (or multiprocessor computer architectures
with many processors) for faster computer performance.