
Monday, 2 June 2025

Introduction to Computing Paradigms and HPA



  • The term paradigm refers to a set of practices to be followed to accomplish a task.  
  • In the domain of computing, paradigm refers to the standard practices to be followed for performing computing with the help of computing hardware and software. 

Evolution of Computing Paradigms (1960 - 2017)

  • Every aspect of the computer has undergone a sea change during the past few decades. However, with these changes came the problem of scalability.
  • As we grow or increase in one area, the other related areas must also be able to grow accordingly to accommodate the changes.
  • For example, as we increase the size of the registers used in a computer, the memory size also needs to grow and the software should be able to exploit this power effectively.
  • Therefore, changes in hardware, networking and operating systems caused paradigms of computing to evolve reflecting the need to grow.

Various Computing Paradigms:
  • Along with all computing devices and the Internet, computing paradigms were also developed keeping pace with these advancements. 
  • There are various computing paradigms available nowadays, including the following:
    • High Performance Computing (HPC)
    • Network Computing
    • Cluster Computing
    • Grid Computing
    • Cloud Computing
    • Bio Computing
    • Mobile Computing
    • Quantum Computing
    • Optical Computing and
    • Nano Computing


High Performance Computing (HPC)

  • In High Performance Computing, systems have a pool of processors (CPUs) connected (networked) with other resources such as memory, storage, and input and output devices.
  • The software deployed on a HPC system is normally enabled to run on the entire system of connected components.  
  • Examples of HPC include a small cluster of desktop computers or a supercomputer. HPC systems are generally used for solving scientific problems.
  • High Performance Computing makes use of any one of the following architectures for connecting devices in a network:

    1. Client/Server Architecture
    2. Parallel Architecture
    3. Distributed Architecture

  • While computing was initially perceived to be centralized, it became decentralized and distributed with the advent of the Internet, client/server and parallel architectures.
  • In centralized computing paradigm, all computer resources such as processors, memory, storage etc. are typically located in one physical system and controlled centrally.
  • In client/server architecture, computers are connected to a network that consists of one server system and multiple client systems.
  • In this architecture, the functionality of the system is split between two kinds of computers - servers and clients.  The server satisfies the requests generated by the client systems.

Client/Server Architecture

  • Systems that deal with large number of users adopt a three-tier architecture, in which the front end is a Web browser which talks to an application server running remotely.  
  • The application server, in turn, talks to the database server for storage and retrieval of data from a centralized database.
  • A computing system that makes use of client/server or multi-tier architecture is also called Networked Computing.
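The client/server split described above can be sketched with Python's standard socket module. This is a hypothetical, minimal exchange (not from the original text): a server satisfies a single request generated by a client, with the two sides running as separate threads in one process for illustration.

```python
import socket
import threading

def run_server(host="127.0.0.1"):
    """Minimal server: satisfies one client request, then exits."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        request = conn.recv(1024).decode()              # read the client's request
        conn.sendall(f"server saw: {request}".encode())  # send back a response
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return port

def client_request(port, message):
    """Client side: send a request and wait for the server's response."""
    cli = socket.create_connection(("127.0.0.1", port))
    cli.sendall(message.encode())
    reply = cli.recv(1024).decode()
    cli.close()
    return reply

port = run_server()
reply = client_request(port, "hello")
print(reply)  # server saw: hello
```

In a real three-tier deployment the client would be a Web browser and the server an application server talking to a database server, but the request/response pattern is the same.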


Parallel and Distributed Architecture

  • Parallel computing refers to the ability of a computer system to run computations simultaneously on multiple CPUs.
  • In parallel architecture, processing takes place in multiple CPUs of the same computer, or in processors of multiple computers running simultaneously.  The processors (CPUs) are mostly of a homogeneous type.
  • In a tightly coupled parallel computer, all the processors running simultaneously share a central memory and communicate through that shared memory.
  • A supercomputer is an example of a parallel computer, in which hundreds of thousands of processors are interconnected to solve a computational problem.
  • Parallel systems improve processing and I/O speed by using multiple CPUs and disks in parallel.  
  • In parallel processing, many operations are performed simultaneously as opposed to serial processing, in which the computational steps are performed sequentially.  
Given below are the major differences between a conventional computer (also called Serial computer) and a Parallel computer:

Characteristics of a Serial Computer:

  1. Runs on a single-processor machine having a single CPU
  2. Given problem is broken down into a discrete set of instructions executed by a single CPU
  3. Instructions are executed one after another

Characteristics of a Parallel Computer:

  1. Runs multiple processors (CPU) simultaneously for executing a complex task
  2. Given problem is broken down into discrete parts that can be processed concurrently by multiple CPUs
  3. Each discrete part is further broken down into a set of instructions that are executed simultaneously on different CPUs, coordinated by a control mechanism

  • There are two main measures of performance of a parallel computer that makes use of parallel processing: throughput and response time.
  • Throughput refers to the number of tasks that can be completed in a given time interval, while response time refers to the amount of time it takes to complete a single task from the time it is submitted.
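These two measures can be made concrete with a small worked example, using a hypothetical log of per-task submit and complete times (the numbers are illustrative, not from the original):

```python
# Hypothetical log: (submit_time, complete_time) for each task, in seconds
tasks = [(0.0, 2.0), (0.0, 3.0), (1.0, 3.5), (2.0, 4.0)]

# Throughput: number of tasks completed over the observed time interval
interval = max(c for _, c in tasks) - min(s for s, _ in tasks)  # 4.0 s window
throughput = len(tasks) / interval

# Response time: time from submission to completion, per task
response_times = [c - s for s, c in tasks]
avg_response = sum(response_times) / len(response_times)

print(throughput)    # 1.0 tasks per second
print(avg_response)  # 2.375 seconds on average
```

Note that the two measures can pull in opposite directions: batching tasks tends to raise throughput while lengthening the response time of any individual task.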


Parallel Architecture

There are several architecture models for parallel machines.  The following are four architectures in which multiple processors are running parallel, and the resources such as memory, processor and databases are shared among them in four different ways:

  1. Shared Memory Architecture:  In this architecture, all the processors share a common memory.
  2. Shared Disk Architecture:  All the processors share a common set of disks; systems built this way are often called clusters.
  3. Shared Nothing Architecture:  In this kind of architecture, the processors share neither a common memory nor common disk among themselves.
  4. Hierarchical Architecture:  In this model of parallel processing, a hybrid architecture that makes use of more than one of the above mentioned architecture is used.



Cluster computing is a type of parallel computing in which a cluster of homogeneous computers is connected and works cooperatively to accomplish a task that cannot easily be solved by a serial computer.


Distributed Architecture

  • The distributed computing paradigm consists of a collection of independent computers, each having its own memory and other capabilities, but cooperating with the other computers to solve a problem.
  • The computers connected to the distributed environment communicate with one another through various communication media, such as high-speed networks or telephone lines.
  • The participating computers communicate with each other through message passing and the group appears to be a single coherent system to the users.
  • The computers connected in a distributed system are referred to as sites or nodes.
  • Nodes do not share main memory or disks.  They may also vary in size and function, ranging from workstations up to mainframe systems.
  • Distributed architecture looks similar to that of Shared Nothing Architecture in parallel systems.
The main differences between distributed architecture and shared-nothing parallel architecture are the following:
  1. Distributed systems are typically geographically separated
  2. They are separately administered and
  3. They may have a slower interconnection between the systems connected in the network

Video Presentation on Computing Paradigms
