Sandia National Laboratories, New Mexico, P.O. Box 5800, MS 1320, Albuquerque, NM 87185
Interests
Sivasankaran Rajamanickam is a distinguished member of technical staff at Sandia National Laboratories. His interests are broadly in high performance computing, specifically in machine learning for science, performance-portable algorithms for linear solvers and linear algebra / graph kernels, and co-design of algorithms and architectures. Most of his work lies at the intersection of these areas, where interesting opportunities arise to solve problems of importance to computational science applications.
Education
PhD in Computer Science and Engineering, 2009, University of Florida
BE in Computer Science and Engineering, 1999, Madurai Kamaraj University
Awards
R&D 100 Award for MALA (Materials Learning Algorithms)
With Attila Cangi, Lenz Fiedler, Normand Modine, Aidan Thompson, Jon Vogel, Adam Stephens
Best paper candidate, IEEE International Parallel and Distributed Processing Symposium, 2022
Garg, Raveesh, Eric Qin, Francisco Muñoz-Martínez, Robert Guirado, Akshay Jain, Sergi Abadal, José L. Abellán et al. “Understanding the design-space of sparse/dense multiphase GNN dataflows on spatial accelerators.” In 2022 IEEE International Parallel and Distributed Processing Symposium (IPDPS), pp. 571-582. IEEE, 2022.
IEEE service award for leading the workshop proceedings of the Supercomputing conference for five years, 2022
Defense Programs Award of Excellence, with recognition and a letter of gratitude from the Advanced Simulation and Computing program director, National Nuclear Security Administration, 2020
Best paper award, International Conference on Parallel Processing (ICPP), 2019
Bogle, Ian, Karen Devine, Mauro Perego, Sivasankaran Rajamanickam, and George M. Slota. “A parallel graph algorithm for detecting mesh singularities in distributed memory ice sheet simulations.” In Proceedings of the 48th International Conference on Parallel Processing, pp. 1-10. 2019.
3× IEEE/Amazon/DARPA Graph Challenge innovation awards, 2019, for distributed-memory / GPU triangle counting and sparse neural network inference
Yaşar, Abdurrahman, Sivasankaran Rajamanickam, Jonathan Berry, Michael Wolf, Jeffrey S. Young, and Ümit V. Çatalyürek. “Linear algebra-based triangle counting via fine-grained tasking on heterogeneous environments (update on static graph challenge).” In 2019 IEEE High Performance Extreme Computing Conference (HPEC), pp. 1-4. IEEE, 2019.
Acer, Seher, Abdurrahman Yaşar, Sivasankaran Rajamanickam, Michael Wolf, and Ümit V. Çatalyürek. “Scalable triangle counting on distributed-memory systems.” In 2019 IEEE High Performance Extreme Computing Conference (HPEC), pp. 1-5. IEEE, 2019.
Ellis, J. Austin, and Sivasankaran Rajamanickam. “Scalable inference for sparse deep neural networks using Kokkos kernels.” In 2019 IEEE High Performance Extreme Computing Conference (HPEC), pp. 1-7. IEEE, 2019.
2× IEEE/Amazon/DARPA Graph Challenge champion awards, 2017 and 2018, for triangle counting
Wolf, Michael M., Mehmet Deveci, Jonathan W. Berry, Simon D. Hammond, and Sivasankaran Rajamanickam. “Fast linear algebra-based triangle counting with KokkosKernels.” In 2017 IEEE High Performance Extreme Computing Conference (HPEC), pp. 1-7. IEEE, 2017.
Yaşar, Abdurrahman, Sivasankaran Rajamanickam, Michael Wolf, Jonathan Berry, and Ümit V. Çatalyürek. “Fast triangle counting using Cilk.” In 2018 IEEE High Performance Extreme Computing Conference (HPEC), pp. 1-7. IEEE, 2018.
Best paper award, AsHES workshop, IEEE International Parallel and Distributed Processing Symposium Workshops, 2017
Deveci, Mehmet, Christian Trott, and Sivasankaran Rajamanickam. “Performance-portable sparse matrix-matrix multiplication for many-core architectures.” In 2017 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), pp. 693-702. IEEE, 2017.
Publications
Fiedler, Lenz, Normand A. Modine, Steve Schmerler, Dayton J. Vogel, Gabriel A. Popoola, Aidan P. Thompson, Sivasankaran Rajamanickam, and Attila Cangi. “Predicting electronic structures at any length scale with machine learning.” npj Computational Materials 9, no. 1 (2023): 115.
Fiedler, Lenz, Nils Hoffmann, Parvez Mohammed, Gabriel A. Popoola, Tamar Yovell, Vladyslav Oles, J. Austin Ellis, Sivasankaran Rajamanickam, and Attila Cangi. “Training-free hyperparameter optimization of neural networks for electronic structures in matter.” Machine Learning: Science and Technology 3, no. 4 (2022): 045008.
Garg, Raveesh, Eric Qin, Francisco Muñoz-Martínez, Robert Guirado, Akshay Jain, Sergi Abadal, José L. Abellán et al. “Understanding the design-space of sparse/dense multiphase GNN dataflows on spatial accelerators.” In 2022 IEEE International Parallel and Distributed Processing Symposium (IPDPS), pp. 571-582. IEEE, 2022.
Fox, James, Bo Zhao, Beatriz Gonzalez Del Rio, Sivasankaran Rajamanickam, Rampi Ramprasad, and Le Song. “Concentric Spherical Neural Network for 3D Representation Learning.” In 2022 International Joint Conference on Neural Networks (IJCNN), pp. 1-8. IEEE, 2022.
Ellis, J. Austin, Lenz Fiedler, Gabriel A. Popoola, Normand A. Modine, John A. Stephens, Aidan P. Thompson, Attila Cangi, and Sivasankaran Rajamanickam. “Accelerating finite-temperature Kohn-Sham density functional theory with deep neural networks.” Physical Review B 104, no. 3 (2021): 035120.
Trott, Christian R., Damien Lebrun-Grandié, Daniel Arndt, Jan Ciesko, Vinh Dang, Nathan Ellingwood, Rahulkumar Gayatri et al. “Kokkos 3: Programming model extensions for the exascale era.” IEEE Transactions on Parallel and Distributed Systems 33, no. 4 (2021): 805-817.
Trott, Christian, Luc Berger-Vergiat, David Poliakoff, Sivasankaran Rajamanickam, Damien Lebrun-Grandie, Jonathan Madsen, Nader Al Awar, Milos Gligoric, Galen Shipman, and Geoff Womeldorff. “The Kokkos ecosystem: Comprehensive performance portability for high performance computing.” Computing in Science & Engineering 23, no. 5 (2021): 10-18.