Book Details

A Study of Parallel Processing and Its Contemporary Relevance

IT Skills Show & International Conference on Advancements in Computing Resources (SSICACR-2017), 15 and 16 February 2017, Alagappa University, Karaikudi, Tamil Nadu, India. Published in the International Journal of Computer Science (IJCS) by SK Research Group of Companies (SKRGC).


Abstract

Parallel processing is a type of computation in which many calculations, or the execution of processes, are carried out simultaneously.[1] Serial memory processing is the act of attending to and processing one item at a time, while parallel memory processing is the act of attending to and processing all items simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing further frequency scaling.[2] As power consumption by computers has become a concern in recent years,[3] parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.[4] Parallel computing is closely related to concurrent computing; they are frequently used together and often conflated, though the two are distinct: it is possible to have parallelism without concurrency, and concurrency without parallelism.[5][6] Communication and synchronization between the different subtasks are typically some of the greatest obstacles to achieving good parallel program performance.
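To make the decomposition idea above concrete, the following is a minimal sketch in Python (illustrative only, not taken from the paper): a large summation is divided into independent chunks, and a pool of worker processes solves the chunks simultaneously. The chunk boundaries and worker count are assumptions chosen for the example.

from multiprocessing import Pool

def partial_sum(bounds):
    # Solve one subproblem: sum the integers in [lo, hi).
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000
    workers = 4  # assumed core count, chosen for illustration
    step = n // workers
    # Divide the large problem into smaller, independent chunks.
    chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
    # Solve the chunks simultaneously; the only communication and
    # synchronization needed is gathering the partial results.
    with Pool(processes=workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total == n * (n - 1) // 2)  # verify against the closed form

In this sketch the chunk size controls the trade-off between per-task overhead and load balance, echoing the abstract's point that communication and synchronization between subtasks are the main obstacles to good parallel performance.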

References

[1] Gottlieb, Allan; Almasi, George S. (1989). Highly Parallel Computing. Redwood City, Calif.: Benjamin/Cummings. ISBN 0-8053-0177-1.

[2] Adve, S.V. et al. (November 2008). "Parallel Computing Research at Illinois: The UPCRC Agenda" (PDF). Parallel@Illinois, University of Illinois at Urbana-Champaign. "The main techniques for these performance benefits—increased clock frequency and smarter but increasingly complex architectures—are now hitting the so-called power wall. The computer industry has accepted that future performance increases must largely come from increasing the number of processors (or cores) on a die, rather than making a single core go faster."

[3] Asanovic, Krste et al. "Old [conventional wisdom]: Power is free, but transistors are expensive. New [conventional wisdom] is [that] power is expensive, but transistors are 'free'."

[4] Asanovic, Krste et al. (December 18, 2006). "The Landscape of Parallel Computing Research: A View from Berkeley" (PDF). University of California, Berkeley. Technical Report No. UCB/EECS-2006-183. "Old [conventional wisdom]: Increasing clock frequency is the primary method of improving processor performance. New [conventional wisdom]: Increasing parallelism is the primary method of improving processor performance… Even representatives from Intel, a company generally associated with the 'higher clock-speed is better' position, warned that traditional approaches to maximizing performance through maximizing clock speed have been pushed to their limits."

[5] "Concurrency is not Parallelism", Waza conference Jan 11, 2012, Rob Pike

[6] "Parallelism vs. Concurrency"

[7] "The Microprocessor Ten Years From Now: What Are The Challenges, How Do We Meet Them? (wmv). Distinguished Lecturer talk at Carnegie Mellon University. Retrieved on November 7, 2007.

[8] Faraz, Ahmed; Zeya, Faiz Ul Haque; Kaleem, Majid. "A Survey of Paradigms for Building and Designing Parallel Computing Machines". Department of Computer and Software Engineering, Bahria University Karachi Campus, Stadium Road, Karachi, Pakistan.

Keywords

Parallel processing, Parallelism, Performance, SISD machine, SIMD machine, MISD machine, MIMD machine

Publication Details
  • Format: Volume 5, Issue 1, No 20, 2017
  • Copyright: All Rights Reserved ©2017
  • Year of Publication: 2017
  • Author: Mrs. P. Sudha, Mrs. S. Valli
  • Reference: IJCS-254
  • Page No: 1618-1626
