Shared-Memory Parallelism Can be Simple, Fast, and Scalable

Nonfiction, Computers, Programming, Parallel Programming, Advanced Computing, Engineering, Computer Architecture
Author: Julian Shun
ISBN: 9781970001907
Publisher: Association for Computing Machinery and Morgan & Claypool Publishers
Publication: June 1, 2017
Imprint: ACM Books
Language: English

Parallelism is the key to achieving high performance in computing. However, writing efficient and scalable parallel programs is notoriously difficult, and often requires significant expertise. To address this challenge, it is crucial to provide programmers with high-level tools that enable them to develop solutions easily, while at the same time emphasizing the theoretical and practical aspects of algorithm design so that the resulting solutions run efficiently in many different settings. This thesis addresses this challenge using a three-pronged approach consisting of the design of shared-memory programming techniques, frameworks, and algorithms for important problems in computing. The thesis provides evidence that with appropriate programming techniques, frameworks, and algorithms, shared-memory programs can be simple, fast, and scalable, both in theory and in practice. The results developed in this thesis serve to ease the transition into the multicore era.

The first part of this thesis introduces tools and techniques for deterministic parallel programming, including means for encapsulating nondeterminism via powerful commutative building blocks, as well as a novel framework for executing sequential iterative loops in parallel; together these lead to deterministic parallel algorithms that are efficient both in theory and in practice. The second part of this thesis introduces Ligra, the first high-level shared-memory framework for parallel graph traversal algorithms. The framework allows programmers to express graph traversal algorithms in short, concise code, delivers performance competitive with that of highly optimized code, and is up to orders of magnitude faster than existing systems designed for distributed memory. This part of the thesis also introduces Ligra+, which extends Ligra with graph compression techniques that reduce space usage and improve parallel performance at the same time; Ligra+ is also the first graph processing system to support in-memory graph compression.
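To make the programming model concrete, below is a condensed sketch of breadth-first search written against Ligra's vertexSubset/edgeMap interface, adapted from the BFS example in the Ligra paper (Shun and Blelloch, PPoPP 2013). The helper names (uintE, UINT_E_MAX, CAS, newA, parallel_for) come from the Ligra codebase and may differ slightly across versions; treat this as an illustrative sketch, not the exact code in the book.

// Sketch of BFS in Ligra's interface, adapted from the paper's example.
// A vertexSubset holds the current frontier; edgeMap applies a user-defined
// struct to every edge out of the frontier and returns the next frontier.
// Assumes Ligra's headers for uintE, UINT_E_MAX, CAS, newA, parallel_for.
struct BFS_F {
  uintE* Parents;
  BFS_F(uintE* _Parents) : Parents(_Parents) {}
  // Non-atomic update, used when targets are processed without contention.
  inline bool update(uintE s, uintE d) {
    if (Parents[d] == UINT_E_MAX) { Parents[d] = s; return true; }
    return false;
  }
  // Atomic update: compare-and-swap so each vertex gets exactly one parent.
  inline bool updateAtomic(uintE s, uintE d) {
    return CAS(&Parents[d], UINT_E_MAX, s);
  }
  // Only consider vertices that have not yet been assigned a parent.
  inline bool cond(uintE d) { return Parents[d] == UINT_E_MAX; }
};

template <class vertex>
void BFS(graph<vertex>& GA, long start) {
  long n = GA.n;
  uintE* Parents = newA(uintE, n);                 // parent of each vertex
  parallel_for(long i = 0; i < n; i++) Parents[i] = UINT_E_MAX;
  Parents[start] = start;
  vertexSubset Frontier(n, start);                 // initial frontier: {start}
  while (!Frontier.isEmpty()) {                    // standard level-by-level loop
    vertexSubset output = edgeMap(GA, Frontier, BFS_F(Parents));
    Frontier.del();
    Frontier = output;                             // advance to the next frontier
  }
  Frontier.del();
  free(Parents);
}

Here edgeMap uses cond to skip already-visited targets and updateAtomic to ensure each vertex receives exactly one parent; internally it switches between sparse (push) and dense (pull) traversal depending on frontier size, which is a key source of Ligra's performance.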

The third and fourth parts of this thesis bridge the gap between theory and practice in parallel algorithm design by introducing the first algorithms for a variety of important problems on graphs and strings that are efficient both in theory and in practice. For example, the thesis develops the first linear-work and polylogarithmic-depth algorithms for suffix tree construction and graph connectivity that are also practical, as well as a work-efficient, polylogarithmic-depth, and cache-efficient shared-memory algorithm for triangle computations that achieves a 2–5x speedup over the best existing algorithms on 40 cores.
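For a rough sense of the kind of computation involved, the following is a minimal sketch of the standard ranked-intersection approach that triangle-counting algorithms build on: orient each edge from its lower- to its higher-ranked endpoint (for example, by degree), then count, for each directed edge (u, v), the common out-neighbors of u and v. This is an illustration under those assumptions, parallelized here with an OpenMP reduction; it is not the thesis's work-efficient, cache-efficient algorithm.

#include <cstdint>
#include <vector>

using Adj = std::vector<std::vector<uint32_t>>;

// Count triangles in a "ranked" graph: adj[v] holds only the higher-ranked
// neighbors of v, sorted ascending. Each triangle is then counted exactly
// once, at its lowest-ranked vertex.
uint64_t count_triangles(const Adj& adj) {
  uint64_t total = 0;
  // The per-vertex work is independent, so a parallel loop with a sum
  // reduction parallelizes it directly (compile with -fopenmp; without
  // OpenMP the pragma is ignored and the code runs sequentially).
  #pragma omp parallel for reduction(+:total) schedule(dynamic)
  for (long u = 0; u < (long)adj.size(); u++) {
    for (uint32_t v : adj[u]) {
      // Merge-style intersection of two sorted adjacency lists.
      auto a = adj[u].begin(), b = adj[v].begin();
      while (a != adj[u].end() && b != adj[v].end()) {
        if (*a < *b) ++a;
        else if (*b < *a) ++b;
        else { ++total; ++a; ++b; }
      }
    }
  }
  return total;
}

The ranking step bounds the out-degrees and hence the intersection costs; the thesis's contribution lies in making this kind of computation simultaneously work-efficient, polylogarithmic in depth, and cache-efficient.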

This is a revised version of the thesis that won the 2015 ACM Doctoral Dissertation Award.

More books from Association for Computing Machinery and Morgan & Claypool Publishers

The VR Book
Smarter Than Their Machines
The Handbook of Multimodal-Multisensor Interfaces, Volume 1
Text Data Management and Analysis
An Architecture for Fast and General Data Processing on Large Clusters
Computational Prediction of Protein Complexes from Protein Interaction Networks
Trust Extension as a Mechanism for Secure Code Execution on Commodity Computers
Embracing Interference in Wireless Systems
Edmund Berkeley and the Social Responsibility of Computer Professionals
The Handbook of Multimodal-Multisensor Interfaces, Volume 3
The Sparse Fourier Transform
Frontiers of Multimedia Research
Verified Functional Programming in Agda
Declarative Logic Programming
The Handbook of Multimodal-Multisensor Interfaces, Volume 2