Dr. Hang Liu joined the Department of Electrical and Computer Engineering at Rutgers, The State University of New Jersey, as an assistant professor in the spring of 2023. Before that, he was an assistant professor at the University of Massachusetts Lowell (2017 - 2019) and the Stevens Institute of Technology (2019 - 2022). His research exploits powerful hardware resources, e.g., Graphics Processing Units (GPUs), Field-Programmable Gate Arrays (FPGAs), and Solid-State Drives (SSDs), to build high-performance systems for graph analytics, machine learning, and numerical methods. He received his Ph.D. degree from George Washington University (2017) and his B.E. degree from Huazhong University of Science and Technology (2011).
He is the recipient of the following honors:

- IEEE CS TCHPC Early Career Researchers Award for Excellence in High-Performance Computing (2022)
- NSF Early CAREER Award (2021)
- Presidential Fellowship (Stevens, 2022 - 2025)
- Provost Early Career Award for Research Excellence (Stevens, 2022)
- 3rd Prize at the 2022 Ansary Entrepreneurship Competition (Senior Design, Role: Advisor)
- Champion of the AWS/MIT GraphChallenge Competition (2018 and 2019)
- ECE Outstanding Research Award (Stevens, 2021)
- Excellence Teaching Evaluation Award (Stevens, 2020 and 2022)
- Lawrence Berkeley National Laboratory SRP Fellowship (2019 and 2021)
- One of the Best Papers in VLDB 2020
- NSF CRII Award (2019)
- Best Dissertation Award, Electrical and Computer Engineering, George Washington University (2018)
- Phillip/Temofel Sprawcew Endowment Scholarship (2016)
- No. 1 Most Energy-Efficient Graph Traversal at GreenGraph 500 (Small Graph Category, 2015)
Ph.D., Electrical and Computer Engineering, George Washington University, 2017
B.E., Software Engineering, Huazhong University of Science and Technology, 2011
Chengying Huan, Shuaiwen Leon Song, Santosh Pandey, Hang Liu, Yongchao Liu, Baptiste Lepers, Charles He, Kang Chen, Jinlei Jiang, and Yongwei Wu. “TEA: A General-Purpose Temporal Graph Random Walk Engine.” In Proceedings of the European Conference on Computer Systems (EuroSys). ACM, 2023.
Santosh Pandey, Lingda Li, Thomas Flynn, Adolfy Hoisie and Hang Liu. “Scaling Deep Learning-based Microarchitecture Simulation on GPUs.” In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC). IEEE, 2022.
Heng Zhang, Lingda Li, Hang Liu, Donglin Zhuang, Rui Liu, Chengying Huan, Shuang Song et al. “Bring Orders into Uncertainty: Enabling Efficient Uncertain Graph Processing via Novel Path Sampling on Multi-Accelerator Systems.” In Proceedings of the 36th ACM International Conference on Supercomputing (ICS), 2022.
Lingda Li, Santosh Pandey, Thomas Flynn, Hang Liu, Noel Wheeler, and Adolfy Hoisie. “SimNet: Accurate and High-Performance Computer Architecture Simulation using Deep Learning.” In Proceedings of the ACM on Measurement and Analysis of Computing Systems (SIGMETRICS), 2022.
Shiyang Chen, Shaoyi Huang, Santosh Pandey, Bingbing Li, Guang Gao, Long Zheng, Caiwen Ding, and Hang Liu. “E.T.: Rethinking Transformer Models on GPUs.” In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC). ACM, 2021.
Anil Gaihre, Da Zheng, Scott Weitze, Lingda Li, Caiwen Ding, Shuaiwen Song, and Hang Liu. “Dr. Top-k: Delegate-Centric Top-k Computation on GPUs.” In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC). ACM, 2021.
Santosh Pandey, Lingda Li, Adolfy Hoisie, Xiaoye S. Li, and Hang Liu. “C-SAW: A Framework for Graph Sampling and Random Walk on GPUs.” In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC). IEEE, 2020.
Bolong Zheng, Xi Zhao, Lianggui Weng, Nguyen Quoc Viet Hung, Hang Liu, and Christian S. Jensen. “PM-LSH: A Fast and Accurate LSH Framework for High-Dimensional Approximate NN Search.” In Proceedings of the VLDB Endowment (VLDB), 2020. One of the best papers in VLDB ’20.