INTRODUCTION
Paradigms of parallel computing: The evolution of parallel processing, though gradual, has given rise to a considerable variety of programming paradigms.
Whatever the specific paradigm, in order to execute a program that exploits parallelism, the programming language must supply the means to:
- identify parallelism, by recognizing the components of the program execution that will be (potentially) performed by different processors;
- start and stop parallel executions;
- coordinate the parallel executions (e.g., specify and implement interactions between concurrent components).
It is customary to separate approaches to parallel processing into explicit versus implicit parallelism.
Parallel Programming: Parallel computing refers to executing an application or computation on several processors simultaneously. Generally, it is a kind of computing architecture in which a large problem is broken into independent, smaller, usually similar parts that can be processed in one go. This is done by multiple CPUs communicating via shared memory, with results combined upon completion. It helps in performing large computations by dividing the problem among more than one processor.
Types of parallel computing
Across open-source and proprietary parallel computing platforms, there are generally three types of parallel computing, which are discussed below:
- Bit-level parallelism: The form of parallel computing in which performance depends on the processor word size. Increasing the word size reduces the number of instructions the processor must execute to operate on data larger than one word; otherwise the operation must be split into a series of instructions. For example, if an 8-bit processor has to add two 16-bit numbers, it must first operate on the 8 lower-order bits and then on the 8 higher-order bits, so two instructions are needed, whereas a 16-bit processor can perform the operation with a single instruction (see the first sketch after this list).
- Instruction-level parallelism: The processor decides how many instructions to execute within a single CPU clock cycle; a pipelined processor can have several instructions in different phases of execution at once. In the software approach, instruction-level parallelism is static: the compiler decides at compile time which instructions can safely execute simultaneously, while in the hardware approach the processor makes that decision dynamically at run time.
- Task parallelism: The form of parallelism in which a task is decomposed into subtasks. Each subtask is then allocated to a processor, and the subtasks execute concurrently (see the second sketch after this list).
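To make the bit-level example concrete, here is a minimal Python sketch that simulates adding two 16-bit numbers using only 8-bit operations; the function name add16_on_8bit and the test values are invented for this illustration (a real 8-bit processor performs these steps in hardware):

def add16_on_8bit(a, b):
    # First "instruction": add the 8 lower-order bits.
    lo = (a & 0xFF) + (b & 0xFF)
    carry = lo >> 8  # carry out of the low byte
    # Second "instruction": add the 8 higher-order bits plus the carry.
    hi = ((a >> 8) & 0xFF) + ((b >> 8) & 0xFF) + carry
    return ((hi & 0xFF) << 8) | (lo & 0xFF)  # 16-bit result (wraps on overflow)

print(add16_on_8bit(0x12FF, 0x0001) == 0x1300)  # True: one logical add took two 8-bit steps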
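And here is a minimal sketch of task parallelism using Python's standard concurrent.futures module: a task (summing squares) is decomposed into four subtasks that run concurrently in separate worker processes. The subtask function and the four-way split are choices made for this example.

from concurrent.futures import ProcessPoolExecutor

def subtask(chunk):
    # Each subtask squares and sums its own slice of the data.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]  # decompose the task into 4 subtasks
    with ProcessPoolExecutor(max_workers=4) as pool:
        # The subtasks execute concurrently, one per worker process.
        results = list(pool.map(subtask, chunks))
    print(sum(results))  # same answer as the serial computation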
Applications
of Parallel Computing
There
are various applications of Parallel Computing, which are as follows:
- One of the primary applications of parallel computing is databases and data mining.
- The real-time simulation of systems is another use of parallel computing.
- Technologies such as networked video and multimedia depend on it.
- Science and engineering computations rely on it heavily.
- Collaborative work environments also make use of it.
- The concept of parallel computing is used in augmented reality, advanced graphics, and virtual reality.
Advantages of Parallel Computing
The advantages of parallel computing are discussed below:
- In parallel computing, more resources are used to complete the task, which shortens completion time and can cut costs. Parallel clusters can also be built from inexpensive commodity components.
- Parallel
     computing, as opposed to serial computing, can solve larger problems in
     less time.
- For
     simulating, modeling, and understanding complex, real-world phenomena,
     parallel computing is much more appropriate than serial computing.
- When local resources are limited, parallel computing can draw on non-local resources, such as compute resources available over a network.
- Many problems are so large that it is impractical or impossible to solve them on a single computer; parallel computing makes such problems tractable.
- One of the
     best advantages of parallel computing is that it allows you to do several
     things at once by using multiple computing resources.
- Furthermore, parallel computing makes better use of the underlying hardware, whereas serial computing leaves much of a processor's potential computing power unused.
Disadvantages of Parallel Computing
There are many limitations to parallel computing, which are as follows:
- It requires a parallel architecture, which can be difficult to achieve.
- In the case
     of clusters, better cooling technologies are needed in parallel computing.
- It requires algorithms that are specifically designed and managed for parallel execution.
- Multi-core
     architectures use a lot of energy.
- The parallel
     computing system needs low coupling and high cohesion, which is difficult
     to create.
Synchronous vs. asynchronous: Synchronous execution means the first task in a program must finish processing before the next task begins executing, whereas asynchronous execution means a second task can begin executing in parallel, without waiting for an earlier task to finish.
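To make the distinction concrete, here is a minimal Python sketch using the standard asyncio module; the task names and one-second delays are placeholders chosen for this illustration. Awaiting the tasks one after another mimics synchronous execution (about two seconds total), while asyncio.gather runs them concurrently (about one second).

import asyncio
import time

async def task(name, seconds):
    await asyncio.sleep(seconds)  # stands in for real I/O or computation
    return name

async def main():
    # Synchronous style: the second task waits for the first to finish.
    t0 = time.perf_counter()
    await task("first", 1)
    await task("second", 1)
    print(f"sequential: {time.perf_counter() - t0:.1f}s")  # ~2.0s

    # Asynchronous style: the second task starts without waiting.
    t0 = time.perf_counter()
    await asyncio.gather(task("first", 1), task("second", 1))
    print(f"concurrent: {time.perf_counter() - t0:.1f}s")  # ~1.0s

asyncio.run(main())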
Real-World Examples
For those looking for another way to understand their differences, or for a creative way to explain them to a pal, here are two real-world analogies.
Synchronous: You want a burger and decide to go to
McDonald's. After you order the burger at the counter, you are told to wait as
your burger is prepared. In this synchronous situation, you are stuck at the
counter until you are served your burger.
Asynchronous: You want a burger and decide to go to
Five Guys. You go to the counter and order a burger. Five Guys gives you a
buzzer that will notify you once your burger is ready. In this asynchronous
situation, you have more freedom while you wait.
Disclaimer: just to make sure we're aligned, we're not judging the performance of McDonald's or Five Guys here; this is purely fictional.
One
type of programming is not inherently better than the other. They are just
different, each with their own unique advantages, and are used in different
scenarios. Depending on what you're building, you can and probably will use
both sync and async tasks.
Technical Examples
We have selected four common examples of when synchronous and asynchronous processing are used in applications.
Synchronous Processing
· User Interfaces: User interface (UI) designs are typically synchronous. Since UIs are spaces where humans and computers interact, it is ideal for them to replicate the communication standards and practices humans are familiar with. Humans expect an immediate response when they interact with a computer!
· HTTP APIs: HTTP APIs pass requests and responses in a synchronous fashion. Client programs sending HTTP requests usually expect a fast answer from the web server.
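As a sketch of this blocking request/response cycle, here is Python's standard urllib in action; the URL and timeout are placeholder values:

import urllib.request

# The call blocks until the server responds (or the timeout expires):
# the classic synchronous request/response cycle.
with urllib.request.urlopen("https://example.com", timeout=5) as resp:
    print(resp.status, len(resp.read()), "bytes")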
Asynchronous Processing
· Batch processing: Batch processing is a data-processing method for handling large amounts of data asynchronously. With asynchronous batch processing, large batches of data are processed at scheduled times to avoid blocking computing resources.
· Long-running tasks: Tasks such as fulfilling an order placed on an e-commerce site are best handled asynchronously; there is no need to block resources while such a task executes.
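One common way to implement this hand-off is sketched below in Python, using a queue and a background worker thread; the order IDs and the fulfil_orders worker are invented for this example:

import queue
import threading
import time

orders = queue.Queue()

def fulfil_orders():
    # Background worker: long-running work happens off the request path.
    while True:
        order = orders.get()
        if order is None:      # sentinel value tells the worker to stop
            break
        time.sleep(0.5)        # stands in for slow fulfilment work
        print("fulfilled", order)
        orders.task_done()

threading.Thread(target=fulfil_orders, daemon=True).start()

# The "request handler" returns immediately after enqueueing each order.
for order_id in ("A1", "A2", "A3"):
    orders.put(order_id)
    print("accepted", order_id)

orders.join()      # only so this demo script exits after the work is done
orders.put(None)   # stop the worker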