Format Abstractions for the Compilation of Sparse Tensor Algebra

dc.contributor Amarasinghe, Saman
dc.contributor Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.creator Chou, Stephen
dc.date 2023-01-19T19:51:34Z
dc.date 2022-09
dc.date 2022-10-19T19:07:52.557Z
dc.date.accessioned 2023-03-01T07:20:55Z
dc.date.available 2023-03-01T07:20:55Z
dc.identifier https://hdl.handle.net/1721.1/147453
dc.identifier 0000-0002-5048-7131
dc.description Tensors are commonly used to represent data in many domains, including data analytics, machine learning, science, and engineering. Many highly optimized libraries and compilers have been developed for efficiently computing on dense tensors. However, existing libraries and compilers are limited in their ability to support real-world applications that work with sparse tensors, which contain mostly zeros. In particular, there exist countless specialized formats for storing sparse tensors in memory, each suited to specific types of applications and data. Because different formats often use very different data structures to store nonzeros, computing with sparse tensors stored in different formats can require vastly dissimilar code that is difficult to implement by hand and non-trivial to generate automatically. Existing libraries and compilers must therefore limit the set of computations and formats that they directly support, sacrificing usability and performance as a result. In this dissertation, I describe how to build a compiler that supports efficiently computing on sparse tensors that may be stored in a wide variety of formats. I first show how many commonly used sparse tensor formats, from array-based formats like CSR, COO, and DIA to formats that store nonzeros in pointer-based data structures like linked lists, BSTs, and C-trees, can all be expressed as compositions of per-dimension formats. I further show how such per-dimension formats can be precisely defined by implementing a common set of abstractions that capture how their underlying data structures store nonzeros in memory and how these data structures can be efficiently accessed or constructed. I then demonstrate how, with such specifications of per-dimension formats at hand, a compiler can generate code to efficiently compute on tensors stored in any of the aforementioned formats, and countless others. We have implemented our technique in the TACO sparse tensor algebra compiler, which is the first compiler to generate code that computes any basic tensor algebra expression with sparse tensors stored in arbitrary formats. Our technique generates code whose performance is competitive with, if not better than, equivalent code in hand-optimized libraries and frameworks.
dc.description Ph.D.
dc.format application/pdf
dc.publisher Massachusetts Institute of Technology
dc.rights In Copyright - Educational Use Permitted
dc.rights Copyright MIT
dc.rights http://rightsstatements.org/page/InC-EDU/1.0/
dc.title Format Abstractions for the Compilation of Sparse Tensor Algebra
dc.type Thesis
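
The per-dimension composition described in the abstract is what taco's public interface exposes: a tensor format is declared as an ordered list of per-dimension (level) formats, and the compiler specializes the generated kernel to that composition. The sketch below is a minimal illustration, assuming the C++ API shown in the taco README (Format, Tensor, IndexVar, and the Dense and Sparse mode formats); the matrix dimensions and values are invented for the example.

    #include <iostream>
    #include "taco.h"
    using namespace taco;

    int main() {
      // CSR expressed as a composition of per-dimension formats:
      // a dense row dimension followed by a compressed column dimension.
      Format csr({Dense, Sparse});
      Format dv({Dense});  // dense vector

      Tensor<double> A("A", {3, 3}, csr);
      Tensor<double> x("x", {3}, dv);
      Tensor<double> y("y", {3}, dv);

      // Insert a few nonzeros and pack them into the CSR data structures.
      A.insert({0, 1}, 2.0);
      A.insert({2, 0}, 3.0);
      A.pack();
      x.insert({0}, 1.0);
      x.insert({1}, 4.0);
      x.insert({2}, 5.0);
      x.pack();

      // Sparse matrix-vector product y(i) = A(i,j) * x(j); taco generates
      // a kernel specialized to the CSR composition declared above.
      IndexVar i, j;
      y(i) = A(i, j) * x(j);
      y.compile();
      y.assemble();
      y.compute();

      std::cout << y << std::endl;
      return 0;
    }

Changing only the format declaration, for example to Format({Sparse, Sparse}) for DCSR, leaves the expression untouched and causes the compiler to regenerate the kernel for the new underlying data structures; that separation of format specification from computation is the subject of the abstractions developed in the thesis.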


Files in this item

File Size Format
Chou-s3chou-PhD-EECS-2022-thesis.pdf 2.587 MB application/pdf
