Hongyang Zhang 张弘扬
I have multiple *funded* Master's and Ph.D. positions open. Students (from the CS, Math, or EE departments) with strong mathematical abilities are highly encouraged to apply and mention me as your potential advisor. I am also hiring strong URA and URF students from the University of Waterloo; drop me an email if you are interested. I am open to any form of (industrial or personal) collaboration as well. Let me know if you want to collaborate, especially if you are interested in AI security problems.
I am a tenure-track assistant professor at the David R. Cheriton School of Computer Science, University of Waterloo, a member of the AI Institute and the Cybersecurity and Privacy Institute, and a faculty affiliate of the Vector Institute for AI.
I am interested in problems where beautiful theory and practical methodology meet. Broadly, this includes the theory and applications of machine learning and algorithms, with an emphasis on robustness, security, and trustworthiness.
I completed my Ph.D. in 2019 after a wonderful 3.5 years at the Machine Learning Department, Carnegie Mellon University, where I was fortunate to be co-advised by Maria-Florina Balcan and David P. Woodruff. Before joining Waterloo, I was a postdoctoral fellow at the Toyota Technological Institute at Chicago (TTIC), hosted by Avrim Blum and Greg Shakhnarovich. I graduated from Peking University in 2015, working with Zhouchen Lin and Chao Zhang. I have also had long-term visits at the Simons Institute and IPAM.
AON-PRISMA: All-Or-Nothing Private Similarity Matching [website]
Trustworthy Machine Learning, AI Security, Privacy and Safety, Foundation Models. Current research foci include:
Reliability: Building theoretical foundations for defenses against out-of-distribution data, adversarial attacks, privacy attacks, random adversaries (random noise models), and semi-random adversaries (mixed random/adversarial corruption models). Developing practical, large-scale algorithms for real-world AI security problems in computer vision and natural language processing.
Optimization: Developing new algorithms enabling parameter-efficient fine-tuning or prompt tuning. Efficiently aligning various foundation models, possibly with different input and output modalities.
Sample Efficiency: Designing principled, practical, and sub-linear algorithms for (robust) learning problems with provable sample complexity. Understanding the generalization of deep neural networks and self-supervised learning.
2021. 1st place (out of 1,559 teams) in CVPR 2021 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet [certificate]
2020 - present. In the defense benchmark RobustBench, 10 out of the top-10 methods use TRADES as their training algorithm
2019. 1st place in the Unrestricted Adversarial Example Challenge (hosted by Google)
2018. 1st place (out of 396 teams) in NeurIPS 2018 Adversarial Vision Challenge (Robust Model Track)
2018. 1st place (out of 75 teams) in NeurIPS 2018 Adversarial Vision Challenge (Targeted Attacks Track)
2018. 3rd place (out of 101 teams) in NeurIPS 2018 Adversarial Vision Challenge (Untargeted Attacks Track)
2023/4/24. Two papers were accepted to ICML 2023.
2023/3/7. I will be serving as an area chair for NeurIPS 2023, and serving on the technical program committee for ACM CCS 2023.
2023/2/28. One paper was accepted to CVPR 2023.
2023/2/22. One paper was accepted to Journal of Machine Learning Research.
2023/1/21. One paper was accepted to ICLR 2023.
2023/1/20. One paper was accepted to AISTATS 2023.
2022/12/11. One paper was accepted to IEEE SaTML 2023.
2022/10/30. One paper was accepted to ITCS 2023.
2022/10/13. One paper was accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence.
2022/10/6. One paper was accepted to EMNLP 2022.
2022/9/14. One paper was accepted to NeurIPS 2022.
2022/5/15. Two papers were accepted to ICML 2022.
BOOK: With Zhouchen Lin (α-β order). “Low Rank Models in Visual Analysis: Theories, Algorithms and Applications”, Academic Press, Elsevier, 2017. [Press link]
- Yihan Wu, Heng Huang, Hongyang Zhang. "A Law of Robustness beyond Isoperimetry", ICML 2023, Hawaii, USA. [pdf]
- With Maria-Florina Balcan, Avrim Blum, Dravyansh Sharma (α-β order). "An Analysis of Robustness of Non-Lipschitz Networks", Journal of Machine Learning Research, 2023. [pdf]
- With Zhuangfei Hu, Xinda Li, David P. Woodruff, Shufan Zhang (α-β order). "Recovery from Non-Decomposable Distance Oracles", ITCS 2023, Cambridge, USA. [arXiv]
- Lang Huang, Chao Zhang, Hongyang Zhang. "Self-Adaptive Training: Bridging the Supervised and Self-Supervised Learning", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. [arXiv] [code]
- A preliminary version of this paper appears in NeurIPS 2020, Vancouver, Canada. [arXiv] [code]
- With Avrim Blum, Omar Montasser, Greg Shakhnarovich (α-β order). "Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness", NeurIPS 2022, New Orleans, USA. [arXiv]
- With Avrim Blum, Travis Dick, Naren Manoj (α-β order). "Random Smoothing Might be Unable to Certify L∞ Robustness for High-Dimensional Images", Journal of Machine Learning Research, 2020. [pdf] [code]
- Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, Michael I. Jordan. "Theoretically Principled Trade-off between Robustness and Accuracy", ICML 2019 (Long Talk), Long Beach, USA. [arXiv] [code] (Champion of NeurIPS 2018 Adversarial Vision Challenge out of 400+ teams and 1,995 submissions)
- On the dynamically updated third-party benchmark RobustBench, 10 out of the top-10 methods use TRADES as their training algorithm.
- A further study of the trade-off appears in "A Closer Look at Accuracy vs. Robustness" with Yao-Yuan Yang, Cyrus Rashtchian, Ruslan Salakhutdinov, Kamalika Chaudhuri, NeurIPS 2020, Vancouver, Canada. [blog] [arXiv] [code]
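For readers unfamiliar with TRADES, its core objective (natural loss plus a regularizer that pushes the decision boundary away from the data) can be sketched in PyTorch as follows. This is a minimal illustration, not the reference implementation: the helper name `trades_loss` and the hyperparameter values are illustrative, and the released code linked above should be consulted for the exact training recipe.

```python
import torch
import torch.nn.functional as F

def trades_loss(model, x, y, eps=8/255, step_size=2/255, steps=10, beta=6.0):
    """Natural cross-entropy loss + beta * KL 'boundary' loss on adversarial examples."""
    model.eval()
    # Natural prediction, treated as a fixed target during the inner maximization.
    p_nat = F.softmax(model(x), dim=1).detach()
    # Start from a small random perturbation around x.
    x_adv = x + 0.001 * torch.randn_like(x)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        # Inner step: maximize KL(p_nat || p_adv) within the L-infinity ball.
        kl = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_nat, reduction="sum")
        grad = torch.autograd.grad(kl, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    model.train()
    logits = model(x)
    loss_nat = F.cross_entropy(logits, y)
    # Robust regularizer: KL between natural and adversarial predictions.
    loss_rob = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                        F.softmax(logits, dim=1), reduction="batchmean")
    return loss_nat + beta * loss_rob
```

The hyperparameter `beta` trades off natural accuracy against robustness, which is exactly the tension the ICML 2019 paper analyzes theoretically.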
- With Maria-Florina Balcan, Yi Li, David P. Woodruff (α-β order). "Testing Matrix Rank, Optimally", SODA 2019, San Diego, USA. [pdf] [arXiv]
- With Maria-Florina Balcan, Yingyu Liang, David P. Woodruff (α-β order). "Non-Convex Matrix Completion and Related Problems via Strong Duality", Journal of Machine Learning Research, 2019. [pdf]
- A preliminary version of this paper appears in ITCS 2018, Cambridge, USA. [arXiv]
- With Pranjal Awasthi, Maria-Florina Balcan, Nika Haghtalab (α-β order). “Learning and 1-bit Compressed Sensing under Asymmetric Noise”, COLT 2016, New York, USA. [pdf]
Table of Contents
Linear Models (Single Subspace Models, Multiple-Subspace Models, Theoretical Analysis)
Non-Linear Models (Kernel Methods, Laplacian and Hyper-Laplacian Methods, Locally Linear Representation, Transformation Invariant Clustering)
Optimization Algorithms (Convex Algorithms, Non-Convex Algorithms, Randomized Algorithms)
Representative Applications (Video Denoising, Background Modeling, Robust Alignment by Sparse and Low-Rank Decomposition, Transform Invariant Low-Rank Textures, Motion and Image Segmentation, Image Saliency Detection, Partial-Duplicate Image Search, Image Tag Completion and Refinement, Other Applications)
Conclusions (Low-Rank Models for Tensorial Data, Nonlinear Manifold Clustering, Randomized Algorithms)
Area Chair (or equivalent): ACM CCS 2023, NeurIPS 2023, IEEE SaTML 2023, ALT 2023, VALSE 2023, ALT 2022, AAAI 2022, VALSE 2022, AAAI 2021, VALSE 2021.
Journal Referee: Journal of the American Statistical Association, Journal of Machine Learning Research, Machine Learning, International Journal of Computer Vision, Proceedings of the IEEE, IEEE Journal of Selected Topics in Signal Processing, IEEE Transactions on Information Theory, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Transactions on Signal Processing, IEEE Transactions on Information Forensics & Security, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Cybernetics, IEEE Transactions on Dependable and Secure Computing, IEEE Signal Processing Letters, IEEE Access, Neurocomputing, ACM Transactions on Knowledge Discovery from Data, Management Science.
Conference Referee: AAAI 2016, ICML 2016, NIPS 2016, IJCAI 2017, STOC 2017, NIPS 2017, AAAI 2018, STOC 2018, ISIT 2018, ICML 2018, COLT 2018, NeurIPS 2018, APPROX 2018, ACML 2018, AISTATS 2019, ITCS 2019, NeurIPS 2019, AAAI 2020, STOC 2020, ICML 2020, NeurIPS 2020, FOCS 2020, IEEE Conference on Decision and Control 2021, NeurIPS 2021, ICLR 2021, MSML 2022.
New Advances in (Adversarially) Robust and Secure Machine Learning, Qualcomm, Peking University, UMN, Yale, Waterloo, UChicago, NUS, MPI, USC, GaTech, Duke, BAAI, 2021-2022. [slide]
Theoretically Principled Trade-off between Robustness and Accuracy, Simons Institute, IPAM, TTIC, Caltech, CMU, ICML 2019, ICML Workshop on the Security and Privacy of Machine Learning, Peking University. [slide] [video]
Testing Matrix Rank, Optimally, SODA 2019. [slide]
Testing and Learning from Big Data, Optimally, CMU AI Lunch 2018. [slide]
New Paradigms and Global Optimality in Non-Convex Optimization, CMU Theory Lunch 2017. [slide] [video]
Active Learning of Linear Separators under Asymmetric Noise, invited talk at Asilomar 2017. [slide]
Noise-Tolerant Life-Long Matrix Completion via Adaptive Sampling, CMU Machine Learning Lunch 2016. [slide]
CS480/680 Introduction to Machine Learning (at UWaterloo, instructor): Spring 2023.
CS858 Security and Privacy of Machine Learning (at UWaterloo, instructor): Fall 2022.
CS886 Robustness of Machine Learning (at UWaterloo, instructor): Spring 2022.
10-702/36-702 Statistical Machine Learning (at CMU, TA for Larry Wasserman): Spring 2018.
10-725/36-725 Convex Optimization (at CMU, TA for Pradeep Ravikumar and Aarti Singh): Fall 2017.
Image Processing (at PKU, TA for Chao Zhang): Spring 2014.
I like traveling and photography. Check out some of the photos I have taken here.