Publications
Extending sparse tensor accelerators to support multiple compression formats
Qin, Eric; Jeong, Geonhwa; Won, William; Kao, Sheng C.; Kwon, Hyoukjun; Srinivasan, Sudarshan; Das, Dipankar; Moon, Gordon E.; Rajamanickam, Sivasankaran R.; Krishna, Tushar
Sparsity, which occurs in both scientific applications and Deep Learning (DL) models, has been a key optimization target in recent ASIC accelerators because of the potential memory and compute savings. These applications store their data in a variety of compression formats. We demonstrate that both the compactness of different compression formats and the compute efficiency of the algorithms they enable vary with tensor dimensions and sparsity level. Since DL and scientific workloads span all sparsity regions, there are numerous format combinations to consider when optimizing memory and compute efficiency. Unfortunately, many proposed accelerators operate on only one or two fixed format combinations. This work proposes hardware extensions that let accelerators support numerous format combinations seamlessly, and demonstrates a ~4× speedup over performing format conversions in software.
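The claim that format compactness varies with tensor dimensions and sparsity can be sketched with a back-of-the-envelope word count for two common formats, COO and CSR. This is an illustrative model with hypothetical helper functions, not the paper's cost model or hardware:

```python
# Rough storage cost (in machine words) for an m x n sparse matrix
# with nnz nonzeros. Illustrative only -- not the paper's model.

def coo_words(nnz: int) -> int:
    # COO stores (row, col, value) triples: 3 words per nonzero.
    return 3 * nnz

def csr_words(m: int, nnz: int) -> int:
    # CSR stores one value and one column index per nonzero,
    # plus a row-pointer array of length m + 1.
    return 2 * nnz + (m + 1)

if __name__ == "__main__":
    m = n = 1024
    for density in (0.0001, 0.01, 0.5):
        nnz = int(m * n * density)
        print(f"density={density}: COO={coo_words(nnz)} words, "
              f"CSR={csr_words(m, nnz)} words")
```

Under this simple model, CSR is more compact whenever nnz exceeds m + 1 and COO wins for extremely sparse matrices, so the best choice flips with sparsity level and row count, which is why a fixed one-format accelerator leaves efficiency on the table.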