Show simple item record

dc.contributor Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.creator Lee, I-Ting Angelina
dc.creator Leiserson, Charles E.
dc.creator Schardl, Tao Benjamin
dc.creator Sukha, Jim
dc.creator Zhang, Zhunping
dc.date 2022-07-08T17:49:47Z
dc.date 2021-10-27T20:30:50Z
dc.date 2015
dc.date 2019-06-12T15:27:15Z
dc.date.accessioned 2023-02-17T20:01:19Z
dc.date.available 2023-02-17T20:01:19Z
dc.identifier https://hdl.handle.net/1721.1/136108.2
dc.description © 2015 ACM 2329-4949/2015/09-ART17 $15.00

Pipeline parallelism organizes a parallel program as a linear sequence of stages. Each stage processes elements of a data stream, passing each processed data element to the next stage, and then taking on a new element before the subsequent stages have necessarily completed their processing. Pipeline parallelism is used especially in streaming applications that perform video, audio, and digital signal processing. Three out of 13 benchmarks in PARSEC, a popular software benchmark suite designed for shared-memory multiprocessors, can be expressed as pipeline parallelism.

Whereas most concurrency platforms that support pipeline parallelism use a “construct-and-run” approach, this article investigates “on-the-fly” pipeline parallelism, where the structure of the pipeline emerges as the program executes rather than being specified a priori. On-the-fly pipeline parallelism allows the number of stages to vary from iteration to iteration and dependencies to be data dependent. We propose simple linguistics for specifying on-the-fly pipeline parallelism and describe a provably efficient scheduling algorithm, the PIPER algorithm, which integrates pipeline parallelism into a work-stealing scheduler, allowing pipeline and fork-join parallelism to be arbitrarily nested. The PIPER algorithm automatically throttles the parallelism, precluding “runaway” pipelines. Given a pipeline computation with T_1 work and T_∞ span (critical-path length), PIPER executes the computation on P processors in T_P ≤ T_1/P + O(T_∞ + lg P) expected time. PIPER also limits stack space, ensuring that it does not grow unboundedly with running time.

We have incorporated on-the-fly pipeline parallelism into a Cilk-based work-stealing runtime system. Our prototype Cilk-P implementation exploits optimizations such as “lazy enabling” and “dependency folding.” We have ported the three PARSEC benchmarks that exhibit pipeline parallelism to run on Cilk-P. One of these, x264, cannot readily be executed by systems that support only construct-and-run pipeline parallelism. Benchmark results indicate that Cilk-P has low serial overhead and good scalability. On x264, for example, Cilk-P exhibits a speedup of 13.87 over its respective serial counterpart when running on 16 processors.
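
The on-the-fly pipelining described above can be pictured with the pipe_while loop that the article's Cilk-P prototype adds to Cilk. The fragment below is a minimal sketch, not code from the paper: it assumes the pipe_while, pipe_wait, and pipe_continue keywords described for Cilk-P (not part of standard Cilk), and more_frames, next_frame, preprocess, encode, and write_output are hypothetical helpers standing in for real stage work.

    pipe_while (more_frames()) {      /* stage 0: the loop test and initial code run serially, in order */
        frame_t *f = next_frame();    /* still stage 0: serial input */
        pipe_continue(1);             /* begin stage 1 without waiting on the previous iteration */
        preprocess(f);                /* stage 1: iterations may overlap in parallel */
        pipe_wait(2);                 /* begin stage 2 only after the previous iteration finishes its stage 2 */
        encode(f);                    /* serializing stage, akin to x264's cross-frame dependency */
        pipe_wait(3);
        write_output(f);              /* stage 3: frames are emitted in order */
    }

Because the stage structure is expressed by ordinary control flow, an iteration can skip pipe_wait calls or name different stages depending on its data, which is what lets the pipeline's shape emerge on the fly; PIPER's automatic throttling then keeps the number of in-flight iterations bounded.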
dc.format application/octet-stream
dc.language en
dc.publisher Association for Computing Machinery (ACM)
dc.relation 10.1145/2809808
dc.relation ACM Transactions on Parallel Computing
dc.rights Creative Commons Attribution-Noncommercial-Share Alike
dc.rights http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source MIT web domain
dc.title On-the-Fly Pipeline Parallelism
dc.type Article
dc.type http://purl.org/eprint/type/ConferencePaper


Files in this item

Files Size Format
spaa030-lee.pdf 317.8Kb application/octet-stream

This item appears in the following Collection(s)

  • DSpace@MIT [2699]
    DSpace@MIT is a digital repository for MIT's research, including peer-reviewed articles, technical reports, working papers, theses, and more.
