
  Hongyang Zhang   张弘扬


  Assistant Professor


David R. Cheriton School of Computer Science

Faculty of Mathematics

University of Waterloo & Vector Institute for AI

Office: DC 2641

Email: hongyang.zhang AT uwaterloo.ca

[Google Scholar] [Personal GitHub] [Lab GitHub] [DBLP]
   

I am a tenure-track Assistant Professor at the David R. Cheriton School of Computer Science, University of Waterloo, where I lead the SafeAI Lab. I am also a member of the AI Institute and the Cybersecurity and Privacy Institute, and a faculty affiliate of the Vector Institute for AI. I am interested in problems where beautiful theory and practical methodology meet, broadly spanning the theory and applications of machine learning and algorithms.

I completed my Ph.D. in 2019 at the Machine Learning Department, Carnegie Mellon University, where I was fortunate to be co-advised by Maria-Florina Balcan and David P. Woodruff. Before joining Waterloo, I was a postdoctoral fellow at the Toyota Technological Institute at Chicago (TTIC), hosted by Avrim Blum and Greg Shakhnarovich. I graduated from Peking University in 2015. I have had long-term visits at the Simons Institute and IPAM.

Software Projects

  • EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty [blog] [code] [paper 1] [paper 2]

  • zkDL: Zero-Knowledge Proofs of Deep Learning on CUDA [code]

  • AON-PRISMA: All-Or-Nothing Private Similarity Matching [website] [code]

    Research Areas

    Machine Learning, AI Security and Privacy, Inference Acceleration. My group's current research focus includes:

  • World Models and Agents: Developing world models that can imagine and generate data to train agents for real-world applications such as robotics and self-driving cars. We follow the path of AAA Games -> World Models -> Real World.

  • Efficient Inference: Pioneering advanced algorithms that accelerate AI inference and drastically lower the deployment costs of foundation models, making them more accessible and efficient. [e.g., EAGLE: fastest-known speculative sampling]

  • System-2 LLMs: Creating universal algorithms that leverage test-time compute in LLMs to boost reasoning and alignment. [e.g., RAIN: LLM alignment without finetuning]

  • AI Security and Verification: Building the foundations of defenses against out-of-distribution attacks, adversarial attacks, and privacy attacks. Developing watermarking techniques and prioritizing copyright and privacy protection in LLMs. [e.g., TRADES: state-of-the-art adversarial training methodology that has stood the test of time for five years]

    Competition Awards

  • 2021. 1st place (out of 1,559 teams) in CVPR 2021 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet [certificate]
  • 2020 - present. In the defense benchmark RobustBench, 10 of the top-10 methods use TRADES as the training algorithm
  • Jan. 2019 - Dec. 2019. 1st place in Unrestricted Adversarial Example Challenge (hosted by Google)
  • 2018. 1st place (out of 396 teams) in NeurIPS 2018 Adversarial Vision Challenge (Robust Model Track)
  • 2018. 1st place (out of 75 teams) in NeurIPS 2018 Adversarial Vision Challenge (Targeted Attacks Track)
  • 2018. 3rd place (out of 101 teams) in NeurIPS 2018 Adversarial Vision Challenge (Untargeted Attacks Track)

    News

  • 2024/9/26. One paper was accepted to NeurIPS 2024.
  • 2024/9/15. One paper was accepted to EMNLP 2024.
  • 2024/8/7. I will serve as an area chair for ICLR 2025, ALT 2025, and AISTATS 2025. I was promoted to IEEE Senior Member.
  • 2024/6/27. EAGLE-2 was released, with up to 1.4x speedup compared to EAGLE-1. [paper] [code] [机器之心]
  • 2024/5/27. One paper was accepted to ECML PKDD 2024.
  • 2024/5/6. I served as an area chair for NeurIPS 2024 and an oral session chair for ICLR 2024.
  • 2024/5/1. Three papers were accepted to ICML 2024.
  • 2024/4/3. One paper was accepted to ACM CCS 2024.
  • 2024/1/17. EAGLE v1.1 was released, with a 6.5x speedup of LLM decoding. [code]
  • 2024/1/16. Two papers were accepted to ICLR 2024. I served on the program committee for COLT 2024.
  • 2023/12/9. I was selected as AAAI New Faculty Highlights, and one paper was accepted to AAAI 2024.
  • 2023/12/8. EAGLE v1.0 was released, with a 3x speedup of LLM decoding. [blog] [code] [机器之心]
  • 2023/9/15. I served as an area chair for ICLR 2024 and AISTATS 2024, and an action editor for DMLR.
  • 2023/6/11. One paper was accepted to IEEE Transactions on Information Theory.
  • 2023/4/24. Two papers were accepted to ICML 2023.
  • 2023/3/7. I served as an area chair for NeurIPS 2023 and served on the technical program committee for ACM CCS 2023.
  • 2023/2/28. One paper was accepted to CVPR 2023.
  • 2023/2/22. One paper was accepted to Journal of Machine Learning Research.
  • 2023/1/21. One paper was accepted to ICLR 2023.
  • 2023/1/20. One paper was accepted to AISTATS 2023.
  • 2022/12/11. One paper was accepted to IEEE SaTML 2023.
  • 2022/10/30. One paper was accepted to ITCS 2023.
  • 2022/10/13. One paper was accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence.
  • 2022/10/6. One paper was accepted to EMNLP 2022.
  • 2022/9/14. One paper was accepted to NeurIPS 2022.
  • 2022/5/15. Two papers were accepted to ICML 2022.

    Selected Recent Publications [Full List of Publications]

    • Yuhui Li, Fangyun Wei, Chao Zhang, Hongyang Zhang. "EAGLE-2: Faster Inference of Language Models with Dynamic Draft Trees", EMNLP 2024, Miami, USA. [arXiv] [code]

    • Yuhui Li, Fangyun Wei, Chao Zhang, Hongyang Zhang. "EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty", ICML 2024, Vienna, Austria. [arXiv] [code]

    • Yu Du, Fangyun Wei, Hongyang Zhang. "AnyTool: Self-Reflective, Hierarchical Agents for Large-Scale API Calls", ICML 2024, Vienna, Austria. [arXiv] [code]

    • Haochen Sun, Jason Li, Hongyang Zhang. "zkLLM: Zero Knowledge Proofs for Large Language Models", ACM CCS 2024, Salt Lake City, USA. [pdf]

    • Yuhui Li, Fangyun Wei, Jinjing Zhao, Chao Zhang, Hongyang Zhang. "RAIN: Your Language Models Can Align Themselves without Finetuning", ICLR 2024, Vienna, Austria. [arXiv] [code]

    • Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, Michael I. Jordan. "Theoretically Principled Trade-off between Robustness and Accuracy", ICML 2019 (Long Talk), Long Beach, USA. [arXiv] [code] (Champion of NeurIPS 2018 Adversarial Vision Challenge, Among top-10 cited papers in ICML 2019-2023 according to Google Scholar Metrics)

    • BOOK: Zhouchen Lin, Hongyang Zhang. "Low Rank Models in Visual Analysis: Theories, Algorithms and Applications", Academic Press, Elsevier, 2017. [Press link]


      Table of Contents

    • Introduction
    • Linear Models (Single Subspace Models, Multiple-Subspace Models, Theoretical Analysis)
    • Non-Linear Models (Kernel Methods, Laplacian and Hyper-Laplacian Methods, Locally Linear Representation, Transformation Invariant Clustering)
    • Optimization Algorithms (Convex Algorithms, Non-Convex Algorithms, Randomized Algorithms)
    • Representative Applications (Video Denoising, Background Modeling, Robust Alignment by Sparse and Low-Rank Decomposition, Transform Invariant Low-Rank Textures, Motion and Image Segmentation, Image Saliency Detection, Partial-Duplicate Image Search, Image Tag Completion and Refinement, Other Applications)
    • Conclusions (Low-Rank Models for Tensorial Data, Nonlinear Manifold Clustering, Randomized Algorithms)

    Academic Activities

    Area Chair: AISTATS 2025, ICLR 2025, ALT 2025, NeurIPS 2024, ICML 2024 (TF2M), ICLR 2024, AISTATS 2024, PRCV 2024 (Senior Area Chair), NeurIPS 2023, AAAI 2022, AAAI 2021, VALSE 2021-2025.

    Action Editor: Data-centric Machine Learning Research (DMLR).

    Membership: IEEE Senior Member, ACM Member, AAAI Member.

    Journal Referee: Annals of Statistics, Journal of the American Statistical Association, Mathematical Reviews, Journal of Machine Learning Research, Machine Learning, International Journal of Computer Vision, Proceedings of the IEEE, IEEE Journal of Selected Topics in Signal Processing, IEEE Transactions on Information Theory, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Transactions on Signal Processing, IEEE Transactions on Information Forensics & Security, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Cybernetics, IEEE Transactions on Dependable and Secure Computing, IEEE Signal Processing Letters, IEEE Access, Neurocomputing, ACM Transactions on Knowledge Discovery from Data, Management Science.

    Conference Referee: AAAI 2016, ICML 2016, NIPS 2016, IJCAI 2017, STOC 2017, NIPS 2017, AAAI 2018, STOC 2018, ISIT 2018, ICML 2018, COLT 2018, NeurIPS 2018, APPROX 2018, ACML 2018, AISTATS 2019, ITCS 2019, NeurIPS 2019, AAAI 2020, STOC 2020, ICML 2020, NeurIPS 2020, FOCS 2020, IEEE Conference on Decision and Control 2021, NeurIPS 2021, ICLR 2021, MSML 2022, ALT 2022, ESA 2023, ACM CCS 2023, ALT 2023, IEEE SaTML 2023, STOC 2024, COLT 2024.

    Selected Talks

  • EAGLE v1 and v2, Weizmann Institute, UIUC, 2024. [website]

  • New Advances in Safe, Self-evolving, and Efficient Large Language Models, AAAI New Faculty Highlights, UBC CAIDA, Google, 2024. [video]

  • AI Safety by the People, for the People, HKU, HKUST, 2023.

  • zkDL: Efficient Zero-Knowledge Proofs of Deep Learning, Peking University, Tsinghua University (IIIS), Intellect Horizon, 2023.

  • New Advances in (Adversarially) Robust and Secure Machine Learning, Qualcomm, Peking University, UMN, Yale, Waterloo, UChicago, NUS, MPI, USC, GaTech, Duke, BAAI, 2021-2022. [slide]

  • Theoretically Principled Trade-off between Robustness and Accuracy, Simons Institute, IPAM, TTIC, Caltech, CMU, ICML 2019, ICML Workshop on the Security and Privacy of Machine Learning, Peking University. [slide] [video]

  • Testing Matrix Rank, Optimally, SODA 2019. [slide]

  • Testing and Learning from Big Data, Optimally, CMU AI Lunch 2018. [slide]

  • New Paradigms and Global Optimality in Non-Convex Optimization, CMU Theory Lunch 2017. [slide] [video]

  • Active Learning of Linear Separators under Asymmetric Noise, invited talk at Asilomar 2017. [slide]

  • Noise-Tolerant Life-Long Matrix Completion via Adaptive Sampling, CMU Machine Learning Lunch 2016. [slide]

    Teaching

  • CS480/680 Introduction to Machine Learning (at UWaterloo, instructor): Spring 2025.

  • CS480/680 Introduction to Machine Learning (at UWaterloo, instructor): Winter 2024.

  • CS480/680 Introduction to Machine Learning (at UWaterloo, instructor): Spring 2023.

  • CS858 Security and Privacy of Machine Learning (at UWaterloo, instructor): Fall 2022.

  • CS886 Robustness of Machine Learning (at UWaterloo, instructor): Spring 2022.

  • 10-702/36-702 Statistical Machine Learning (at CMU, TA for Larry Wasserman): Spring 2018.

  • 10-725/36-725 Convex Optimization (at CMU, TA for Pradeep Ravikumar and Aarti Singh): Fall 2017.

  • Image Processing (at PKU, TA for Chao Zhang): Spring 2014.

    Misc

    I enjoy traveling and photography. Check out some of the photos I have taken here.