Huan Zhang

Assistant Professor
University of Illinois Urbana-Champaign (UIUC)
Department of Electrical and Computer Engineering
Department of Computer Science (affiliated)
Coordinated Science Laboratory (affiliated)

Email: huan at huan-zhang dot com

Google Scholar
CV

Multiple openings available; see information below.


My research aims to build trustworthy AI systems that can be safely and reliably used in mission-critical tasks, with a focus on formal verification techniques that give provable worst-case performance guarantees. I proposed and advanced a novel verification framework for deep neural networks based on linear bound propagation, which enables formal verification of networks with millions of neurons. I lead the development of the α,β-CROWN neural network verifier, which won VNN-COMP 2021, VNN-COMP 2022, and VNN-COMP 2023. In addition, I study the security and safety of AI models, especially their adversarial robustness. I am a recipient of the IBM PhD Fellowship and the Schmidt Futures AI2050 Early Career Fellowship with a $300,000 research grant.

Before joining UIUC, I obtained my PhD in Computer Science from UCLA in 2020, advised by Prof. Cho-Jui Hsieh. I received my Bachelor's degree from Zhejiang University (ZJU) in 2012. From 2021 to 2023, I was a postdoctoral researcher at Carnegie Mellon University (CMU), working with Prof. Zico Kolter.

Openings

I am looking for passionate students with strong technical backgrounds in machine learning, artificial intelligence, and their applications. Relevant experience in trustworthy machine learning, formal verification/certification, or AI safety/security is preferred but not required. For PhD applicants: please submit your application to the ECE and/or CS PhD programs, and email me afterwards about your application. For postdocs, visiting students, or interns: please email me your CV and a brief research statement.

Awards

Schmidt Futures AI2050 Early Career Fellowship, with a $300,000 research grant for 2023 and 2024. See the list of awarded fellows here.

Adversarial Machine Learning (AdvML) Rising Star Award. Sponsored by MIT-IBM Watson AI Lab, 2021. See award details here.

First Place in the 2021 and 2022 International Verification of Neural Networks Competitions (VNN-COMP). I led a multi-institutional team (with members from CMU, UCLA, Northeastern University, Columbia University, and UIUC) that developed the α,β-CROWN (alpha-beta-CROWN) verification toolbox, which won VNN-COMP 2021 and 2022 with the highest total score. α,β-CROWN implements the linear bound propagation framework for neural network verification. More details about the competition can be found in this article and this article.

IBM PhD Fellowship, 2018.

Research

My research roughly falls into the following categories:

1. Formal verification of machine learning: CROWN (NeurIPS 2018) is a general theoretical framework for formal verification of neural networks through efficient linear bound propagation. β-CROWN (NeurIPS 2021) and GCP-CROWN (NeurIPS 2022) introduced branch-and-bound and cutting-plane methods into the bound propagation framework, greatly improving its strength and scalability. Building on these papers, I lead the development of the α,β-CROWN neural network verifier, winner of VNN-COMP 2021 and 2022, and auto_LiRPA, a PyTorch-based library for perturbation analysis on general computational graphs (NeurIPS 2020). I also studied the verification problem for tree ensembles (NeurIPS 2019). A minimal sketch of the core bound propagation idea appears after this list.

2. Training trustworthy machine learning models: CROWN-IBP (ICLR 2020) is an efficient training approach that enables verified robustness of large neural networks and has become a standard baseline; a sketch of the interval bound propagation (IBP) loss underlying such methods appears after this list. I also studied robust tree-based models (GBDT and random forests) (NeurIPS 2019, ICML 2020), as well as pruning-based (ICML 2022) and randomization-based (ECCV 2018, NeurIPS 2019, ICLR 2020) approaches to improving verifiable robustness. In addition, I studied the fairness of large language models (EMNLP 2020) and neural language classifiers (NAACL 2021).

3. Machine learning safety and adversarial attacks: My work on zeroth-order-optimization-based black-box attacks (CCS AISec 2017) was the first to demonstrate attacks on machine learning models in the black-box, query-based setting (see the gradient-estimation sketch after this list); it was later extended to non-smooth and non-differentiable settings (ICLR 2019). I also studied adversarial attacks on neural image captioning (ACL 2018), image classification (ECCV 2018), image super-resolution (ICCV 2019), and NLP classifiers (AAAI 2020), and proposed new formulations for finding adversarial examples, including branch-and-bound based attacks (ICML 2022), the EAD attack (AAAI 2018), and attacks on tree-based classifiers (NeurIPS 2020).

4. Reinforcement learning (RL): I proposed the SA-MDP framework (NeurIPS 2020) to study the robustness of RL under adversarial perturbations on observed states, and developed state-adversarial regularization and alternating training with learned optimal adversaries (ICLR 2021) for building robust deep RL agents. I also studied the robustness of AlphaZero agents playing Go (NeurIPS 2022), generalization issues in RL (ICLR 2023a), and safe RL (ICLR 2022, ICLR 2023b).

5. Optimization and scalable machine learning: My earlier work covers zeroth-order optimization (NeurIPS 2016), asynchronous gradient descent and coordinate descent (ICDM 2016a, ICDM 2016b), distributed and decentralized optimization of neural networks (NeurIPS 2017), extreme multi-label learning (ICML 2017), tensor decomposition (NeurIPS 2016), and GPU acceleration of gradient boosted decision trees (SysML 2018, part of LightGBM).

I also worked on computer architecture [ZAN+14] [SZN+13] and computer networks [YZZ+13] [KPZ+15] during the early years of my PhD.
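To make the bound propagation idea in category 1 concrete, below is a minimal sketch of CROWN-style backward linear bound propagation for a two-layer ReLU network under an L-inf input perturbation. It is illustrative only: all function and variable names are my own, and the real CROWN and α,β-CROWN implementations handle general architectures, optimizable relaxations, branch and bound, and GPU batching.

```python
import numpy as np

def crown_lower_bound(W1, b1, W2, b2, x0, eps):
    """Lower-bound the scalar output W2 @ relu(W1 @ x + b1) + b2
    over the L-inf ball {x : ||x - x0||_inf <= eps}."""
    # Cheap interval bounds on the pre-activations z = W1 x + b1.
    center, radius = W1 @ x0 + b1, eps * np.abs(W1).sum(axis=1)
    l, u = center - radius, center + radius

    # Linear relaxation of each ReLU neuron on [l, u]:
    # upper line u/(u-l) * (z - l); lower line with slope 0 or 1 (adaptive).
    unstable = (l < 0) & (u > 0)
    up_slope = np.where(u <= 0, 0.0,
                        np.where(l >= 0, 1.0, u / np.maximum(u - l, 1e-12)))
    up_bias = np.where(unstable, -l * up_slope, 0.0)
    low_slope = np.where(u <= 0, 0.0,
                         np.where(l >= 0, 1.0, (u >= -l).astype(float)))

    # Backward pass: for a lower bound, positive output coefficients take
    # the lower line and negative coefficients take the upper line.
    slope = np.where(W2 >= 0, low_slope, up_slope)
    bias = b2 + np.minimum(W2, 0.0) @ up_bias
    A = W2 * slope

    # Substitute z = W1 x + b1, then minimize the resulting linear function
    # in closed form: min_{||x-x0||_inf<=eps} a @ x = a @ x0 - eps * ||a||_1.
    a = A @ W1
    return a @ x0 + A @ b1 + bias - eps * np.abs(a).sum()

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
W2, b2 = rng.standard_normal(8), 0.5
print(crown_lower_bound(W1, b1, W2, b2, x0=np.zeros(4), eps=0.1))
```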
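For category 2, certified training methods such as CROWN-IBP build on interval bound propagation (IBP): propagate element-wise intervals through the network and train against the worst-case logits. Below is a simplified sketch under my own naming; CROWN-IBP itself additionally mixes in tighter CROWN bounds during training.

```python
import torch
import torch.nn.functional as F

def ibp_bounds(layers, x, eps):
    """Propagate element-wise intervals through Linear/ReLU layers."""
    lb, ub = x - eps, x + eps
    for layer in layers:
        if isinstance(layer, torch.nn.Linear):
            center, radius = (lb + ub) / 2, (ub - lb) / 2
            center = layer(center)                    # W c + b
            radius = radius @ layer.weight.abs().t()  # |W| r
            lb, ub = center - radius, center + radius
        else:  # ReLU is monotone, so intervals map through directly
            lb, ub = F.relu(lb), F.relu(ub)
    return lb, ub

def ibp_robust_loss(layers, x, y, eps, num_classes=10):
    lb, ub = ibp_bounds(layers, x, eps)
    # Worst-case logits: lower bound for the true class, upper for others.
    worst = torch.where(F.one_hot(y, num_classes).bool(), lb, ub)
    return F.cross_entropy(worst, y)

model = torch.nn.Sequential(
    torch.nn.Linear(784, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
print(ibp_robust_loss(list(model), x, y, eps=0.1))
```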
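For category 3, the core of the ZOO black-box attack is estimating gradients purely from loss queries via finite differences. Here is a hypothetical minimal sketch; the actual attack adds coordinate-wise Adam updates, an attack-specific loss, and dimension-reduction tricks.

```python
import numpy as np

def zoo_gradient_estimate(loss_fn, x, h=1e-4, num_coords=128, rng=None):
    """Estimate the gradient of a black-box loss by symmetric finite
    differences on a random subset of coordinates (two model queries
    per coordinate)."""
    rng = rng or np.random.default_rng()
    grad = np.zeros_like(x)
    for i in rng.choice(x.size, size=min(num_coords, x.size), replace=False):
        e = np.zeros_like(x)
        e.flat[i] = h
        grad.flat[i] = (loss_fn(x + e) - loss_fn(x - e)) / (2 * h)
    return grad

# An attack loop then takes projected steps using only these estimates, e.g.
#   x_adv = np.clip(x_adv - lr * zoo_gradient_estimate(loss, x_adv), 0, 1)
```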


Publications (“*” indicates equal contribution)

[YDS+24] Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation for Efficient Synthesis and Verification. Lujie Yang*, Hongkai Dai*, Zhouxing Shi, Cho-Jui Hsieh, Russ Tedrake, and Huan Zhang. ICML 2024. (paper) (code)

[GYZ+24] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability. Xingang Guo*, Fangxu Yu*, Huan Zhang, Lianhui Qin, and Bin Hu. ICML 2024. (paper) (code)

[HAZ+24] Fine-grained Local Sensitivity Analysis of Standard Dot-Product Self-Attention. Aaron J. Havens, Alexandre Araujo, Huan Zhang, Bin Hu. ICML 2024.

[TLLM24] TrustLLM: Trustworthiness in Large Language Models. ICML 2024. (The TrustLLM Team)

[KBK+23] Provably Bounding Neural Network Preimages. Suhas Kotha, Christopher Brix, Zico Kolter, Krishnamurthy Dvijotham*, Huan Zhang*. NeurIPS 2023 (Spotlight). (paper)

[ZCC+23] Robust Mixture-of-Expert Training for Convolutional Neural Networks. Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Huan Zhang, Pin-Yu Chen, Shiyu Chang, Zhangyang Wang, Sijia Liu. ICCV 2023 (Oral)

[ZCZ+23] DiffSmooth: Certifiably Robust Learning via Diffusion Models and Local Smoothing. Jiawei Zhang, Zhongzhu Chen, Huan Zhang, Chaowei Xiao, Bo Li. USENIX Security 2023.

[LZH23] Can Agents Run Relay Race with Strangers? Generalization of RL to Out-of-Distribution Trajectories, Li-Cheng Lan, Huan Zhang, Cho-Jui Hsieh. ICLR 2023.

[LGC+23] On the Robustness of Safe Reinforcement Learning under Observational Perturbations, Zuxin Liu, Zijian Guo, Zhepeng Cen, Huan Zhang, Jie Tan, Bo Li, Ding Zhao. ICLR 2023.

[ZWX+22b] General Cutting Planes for Bound-Propagation-Based Neural Network Verification, Huan Zhang*, Shiqi Wang*, Kaidi Xu*, Linyi Li, Bo Li, Suman Jana, Cho-Jui Hsieh, Zico Kolter. NeurIPS 2022. (code) (paper)

[LZH22] Are AlphaZero-like Agents Robust to Adversarial Perturbations?, Li-Cheng Lan, Huan Zhang, Ti-Rong Wu, Meng-Yu Tsai, I-Chen Wu, Cho-Jui Hsieh. NeurIPS 2022 (code) (paper).

[SWZ+22] Efficiently Computing Local Lipschitz Constants of Neural Networks via Bound Propagation, Zhouxing Shi, Yihan Wang, Huan Zhang, Zico Kolter, Cho-Jui Hsieh. NeurIPS 2022 (code) (paper).

[ZLZ+22] δ-SAM: Sharpness-Aware Minimization with Dynamic Reweighting. Wenxuan Zhou, Fangyu Liu, Huan Zhang, Muhao Chen. Findings of EMNLP, 2022.

[ZWX+22] A Branch and Bound Framework for Stronger Adversarial Attacks of ReLU Networks, Huan Zhang*, Shiqi Wang*, Kaidi Xu, Yihan Wang, Suman Jana, Cho-Jui Hsieh, Zico Kolter. ICML 2022. (code) (paper)

[CZZ+22] Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness, Tianlong Chen*, Huan Zhang*, Zhenyu Zhang, Shiyu Chang, Sijia Liu, Pin-Yu Chen, Zhangyang Wang. ICML 2022. (code) (paper)

[LZX22] ViP: Unified Certified Detection and Recovery for Patch Attack with Vision Transformers, Junbo Li, Huan Zhang, Cihang Xie. ECCV 2022.

[WLZ+22] COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks, Fan Wu, Linyi Li, Huan Zhang, Bhavya Kailkhura, Krishnaram Kenthapadi, Ding Zhao and Bo Li. ICLR 2022. (code) (paper)

[WZX+21] Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Verification, Shiqi Wang*, Huan Zhang*, Kaidi Xu*, Xue Lin, Suman Jana, Cho-Jui Hsieh and Zico Kolter (* Equal contribution). NeurIPS 2021. (code) (paper)

[HZS+21] Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds, Yujia Huang, Huan Zhang, Yuanyuan Shi, Zico Kolter and Anima Anandkumar. NeurIPS 2021.

[RBZ+21] Robustness between the worst and average case, Leslie Rice, Anna Bair, Huan Zhang and Zico Kolter. NeurIPS 2021.

[SWZ+21] Fast Certified Robust Training via Better Initialization and Shorter Warmup, Zhouxing Shi*, Yihan Wang*, Huan Zhang, Jinfeng Yi and Cho-Jui Hsieh. NeurIPS 2021. (code) (paper)

[ZZZ+21] Double Perturbation: On the Robustness of Robustness and Counterfactual Bias Evaluation, Chong Zhang, Jieyu Zhao, Huan Zhang, Kai-Wei Chang, and Cho-Jui Hsieh. NAACL 2021. (code) (paper)

[ZCB+21] Robust Reinforcement Learning on State Observations with Learned Optimal Adversary, Huan Zhang*, Hongge Chen*, Duane Boning, Cho-Jui Hsieh. ICLR 2021. (code) (pdf)

[XZW+21] Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers, Kaidi Xu*, Huan Zhang*, Shiqi Wang, Yihan Wang, Suman Jana, Xue Lin, Cho-Jui Hsieh. ICLR 2021. (code) (pdf)

[ZCX+20b] Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations. Huan Zhang*, Hongge Chen*, Chaowei Xiao, Bo Li, Duane Boning, Cho-Jui Hsieh. NeurIPS 2020 (spotlight). (code) (pdf)

[XSZ+20] Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond. Kaidi Xu*, Zhouxing Shi*, Huan Zhang*, Yihan Wang, Minlie Huang, Kai-Wei Chang, Bhavya Kailkhura, Xue Lin, Cho-Jui Hsieh. NeurIPS 2020. (*Equal contribution) (code) (pdf)

[ZZH+20] An Efficient Adversarial Attack for Tree Ensembles. Chong Zhang, Huan Zhang, Cho-Jui Hsieh. NeurIPS 2020. (code) (paper)

[HZJ+20] Reducing Sentiment Bias in Language Models via Counterfactual Evaluation. Po-Sen Huang*, Huan Zhang*, Ray Jiang, Robert Stanforth, Johannes Welbl, Jack Rae, Vishal Maini, Dani Yogatama, Pushmeet Kohli. Findings of EMNLP 2020. (pdf)

[WZC+20] On ℓp-norm Robustness of Ensemble Decision Stumps and Trees. Yihan Wang, Huan Zhang, Hongge Chen, Duane Boning and Cho-Jui Hsieh. ICML 2020. (code) (pdf)

[ZCX+20] Towards Stable and Efficient Training of Verifiably Robust Neural Networks. Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo Li, Duane Boning, Cho-Jui Hsieh. ICLR 2020. (code) (pdf)

[SZC+20] Robustness Verification for Transformers. Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Minlie Huang, Cho-Jui Hsieh. ICLR 2020. (pdf)

[ZDH+20] MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius. Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Liwei Wang. ICLR 2020. (pdf)

[CYZ+20] Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples. Minhao Cheng, Jinfeng Yi, Huan Zhang, Pin-Yu Chen, Cho-Jui Hsieh. AAAI 2020. (pdf)

[CZS+19] Robustness Verification of Tree-based Models. Hongge Chen*, Huan Zhang*, Si Si, Yang Li, Duane Boning and Cho-Jui Hsieh (*Equal contribution). NeurIPS 2019. (code). (pdf)

[SYZ+19] A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks, Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh and Pengchuan Zhang. NeurIPS 2019. (code) (pdf)

[SYL+19] Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers, Hadi Salman, Greg Yang, Jerry Li, Pengchuan Zhang, Huan Zhang, Ilya Razenshteyn, Sebastien Bubeck. NeurIPS 2019 (spotlight). (code) (pdf)

[CZK+19] Evaluating Robustness of Deep Image Super-Resolution Against Adversarial Attacks. Jun-Ho Choi, Huan Zhang, Jun-Hyuk Kim, Cho-Jui Hsieh and Jong-Seok Lee. ICCV 2019. (pdf)

[YXL+19] Second Rethinking of Network Pruning in the Adversarial Setting. Shaokai Ye, Kaidi Xu, Sijia Liu, Hao Cheng, Jan-Henrik Lambrechts, Huan Zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang and Xue Lin. ICCV 2019. (pdf)

[CZB+19] Robust Decision Trees Against Adversarial Examples, Hongge Chen, Huan Zhang, Duane Boning, Cho-Jui Hsieh. ICML 2019 (20-min long oral). (pdf)

[ZCS+19] The Limitations of Adversarial Training and the Blind-Spot Attack, Huan Zhang*, Hongge Chen*, Zhao Song, Duane Boning, Inderjit Dhillon, Cho-Jui Hsieh. ICLR 2019. (* Equal contribution) (pdf)

[CLC+19] Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach, Minhao Cheng, Thong Le, Pin-Yu Chen, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh. ICLR 2019. (pdf)

[XLZ+19] Structured Adversarial Attack: Towards General Implementation and Better Interpretability. Kaidi Xu*, Sijia Liu*, Pu Zhao*, Pin-Yu Chen, Huan Zhang, Quanfu Fan, Deniz Erdogmus, Yanzhi Wang, Xue Lin, ICLR 2019. (pdf)

[ZZH19] RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications, Huan Zhang, Pengchuan Zhang, Cho-Jui Hsieh. AAAI 2019. (pdf) (reference implementation) (slides)

[TTC+19] AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks, Chun-Chen Tu, Paishun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, Shin-Ming Cheng. AAAI 2019. (pdf)

[ZWC+18] Efficient Neural Network Robustness Certification with General Activation Functions, Huan Zhang*, Tsui-Wei Weng*, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel. (* Equal contribution). NeurIPS 2018. (pdf) (reference implementation)

[SZC+18] Is Robustness the Cost of Accuracy? Lessons Learned from 18 Deep Image Classifiers, Dong Su*, Huan Zhang*, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, Yupeng Gao. (* Equal contribution). ECCV 2018. (pdf) (code)

[LCZ+18] Towards Robust Neural Networks via Random Self-ensemble, Xuanqing Liu, Minhao Cheng, Huan Zhang, Cho-Jui Hsieh. ECCV 2018. (pdf)

[WZM+18] Realtime query completion via deep language models, Po-Wei Wang, Huan Zhang, Vijai Mohan, Inderjit S. Dhillon and J. Zico Kolter. SIGIR Workshop On eCommerce, 2018. (pdf) (code)

[WZC+18b] Towards Fast Computation of Certified Robustness for ReLU Networks, Tsui-Wei Weng*, Huan Zhang*, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Duane Boning, Inderjit S. Dhillon, Luca Daniel. (* Equal contribution). ICML 2018. (pdf) (reference implementation)

[CZC+18] Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning. Hongge Chen*, Huan Zhang*, Pin-Yu Chen, Jinfeng Yi and Cho-Jui Hsieh (* Equal contribution). ACL 2018 (pdf) (code).

[WZC+18a] Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach, Tsui-Wei Weng*, Huan Zhang*, Pin-Yu Chen, Jinfeng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel (* Equal contribution). ICLR 2018. (pdf) (code)

[CSZ+18] EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples, Pin-Yu Chen*, Yash Sharma*, Huan Zhang, Jinfeng Yi and Cho-Jui Hsieh. AAAI 2018. (pdf) (code)

[CZS+17] ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models, Pin-Yu Chen*, Huan Zhang*, Yash Sharma, Jinfeng Yi, Cho-Jui Hsieh. (* Equal contribution) ACM Conference on Computer and Communications Security (CCS) Workshop on Artificial Intelligence and Security (AISec), 2017. (pdf) (code)

[ZSH18] GPU-acceleration for Large-scale Tree Boosting, Huan Zhang, Si Si, Cho-Jui Hsieh. SysML Conference, 2018. (pdf) (code)

[LZZ+17] Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent, Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, and Ji Liu. NIPS 2017. (Oral paper) (pdf)

[SZK+17] Gradient Boosted Decision Trees for High Dimensional Sparse Output, Si Si, Huan Zhang, Sathiya Keerthi, Dhruv Mahajan, Inderjit Dhillon, Cho-Jui Hsieh. ICML 2017. (pdf)

[ZHA16] HogWild++: A New Mechanism for Decentralized Asynchronous Stochastic Gradient Descent, Huan Zhang, Cho-Jui Hsieh and Venkatesh Akella. ICDM 2016 (full-length paper). (pdf) (code)

[ZH16] Fixing the Convergence Problems in Parallel Asynchronous Dual Coordinate Descent, Huan Zhang, Cho-Jui Hsieh. ICDM 2016 (full-length paper). (pdf) (code)

[SWZ16] Sublinear Time Orthogonal Tensor Decomposition, Zhao Song, David P. Woodruff and Huan Zhang. NIPS 2016. (pdf) (code)

[LZH+16] A Comprehensive Linear Speedup Analysis for Asynchronous Stochastic Parallel Optimization from Zeroth-Order to First-Order, Xiangru Lian, Huan Zhang, Cho-Jui Hsieh, Yijun Huang, Ji Liu. NIPS 2016. (pdf)

[KPZ+15] Field demonstration of 100-Gb/s real-time coherent optical OFDM detection, by Noriaki Kaneda, Timo Pfau, Huan Zhang, Jeffrey Lee, Young-Kai Chen, Chun Ju Youn, Yong Hwan Kwon, Eun Soo Num, S. Chandrasekhar. Journal of Lightwave Technology, Vol. 33, No. 7, April 1 2015.

[ZAN+14] Burst Mode Processing: An Architectural Framework for Improving Performance in Future Chip Microprocessors, by Huan Zhang, Rajeevan Amirtharajah, Christopher Nitta, Matthew Farrens and Venkatesh Akella. Workshop on Managing Overprovisioned Systems, co-located with ASPLOS-19, 2014.

[SZN+13] HySIM: Towards a Scalable, Accurate and Fast Simulator for Manycore Processors, by Kramer Straube, Huan Zhang, Christopher Nitta, Matthew Farrens and Venkatesh Akella. 3rd Workshop on the Intersections of Computer Architecture and Reconfigurable Logic, co-located with MICRO-46, December 2013.

[YZZ+13] Spectral and Spatial 2D Fragmentation-Aware Routing and Spectrum Assignment Algorithms in Elastic Optical Networks, by Yawei Yin, Huan Zhang, Mingyang Zhang, Ming Xia, Zuqing Zhu, S. Dahlfort and S.J.B Yoo. IEEE/OSA Journal of Optical Communications and Networking, Vol. 5, No. 10, October 2013.

Teaching Experience

Guest lecture at Yale University: “Formal Verification and Adversarial Attacks of Neural Networks”, for CPSC 680: Trustworthy Machine Learning, Spring 2023.

Guest lecture at UIUC: “Formal Verification of Deep Neural Networks: Challenges and Recent Advances”, for CS 562: Advanced Topics in Security, Privacy and Machine Learning, Spring 2022.

Guest lecture at Stony Brook University, “Complete and Incomplete Neural Network Verification with Efficient Bound Propagations”, for CSE 510: Hybrid Systems, Spring 2021.

Guest lecture at University of Nebraska Lincoln, “CROWN: A Linear Relaxation Framework for Neural Network Verification”, for CSCE 990: Deep Learning and Assured Autonomy Analysis, Fall 2020.

Tutorial: “Formal Verification of Deep Neural Networks: Theory and Practice”, AAAI 2022.

Teaching assistant for STA 141C, Big Data & High Performance Statistical Computing, Spring 2017

Teaching assistant for ECS 132, Probability and Statistical Modeling for Computer Science, Fall 2015

Teaching assistant for EEC 171, Parallel Computer Architecture, Spring 2013

Students mentored: Suhas Kotha (CMU), Jinqi Chen (CMU), Leslie Rice (CMU), Zhouxing Shi (UCLA), Yihan Wang (UCLA), Lucas Tecot (UCLA), Mengyao Shi (UCLA), Jiawei Zhang (UIUC), Zhuolin Yang (UIUC), Qirui Jin (Columbia University).

Software

1. α,β-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier

I lead the development of α,β-CROWN (alpha-beta-CROWN), an efficient and scalable neural network verification toolbox that achieved the highest total score in the 2nd and 3rd International Verification of Neural Networks Competitions (VNN-COMP 2021 and 2022).

2. auto_LiRPA: Automatic Linear Relaxation based Perturbation Analysis for Neural Networks

I lead the development of auto_LiRPA, an easy-to-use library that automatically computes provable bounds under input or weight perturbations for complex neural networks and general computational graphs. A usage sketch appears at the end of this section.

3. LightGBM on GPU

LightGBM is a popular tree boosting package with high efficiency on large-scale datasets. I accelerated its decision tree construction on GPUs, achieving a 7 to 8 times speedup. My implementation is production quality and has been merged into the official LightGBM repository; a configuration sketch appears below.
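A minimal auto_LiRPA usage sketch, following the pattern of the library's published quick-start examples (class and argument names may vary slightly across versions; the toy model and numbers below are illustrative):

```python
import torch
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

# Any PyTorch model can be wrapped; this toy classifier is illustrative.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
x = torch.rand(1, 784)

bounded_model = BoundedModule(model, torch.empty_like(x))
ptb = PerturbationLpNorm(norm=float("inf"), eps=0.03)  # ||delta||_inf <= 0.03
lb, ub = bounded_model.compute_bounds(x=(BoundedTensor(x, ptb),),
                                      method="CROWN")
# lb and ub provably bracket every logit for any input in the eps-ball.
```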
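And a minimal sketch of enabling LightGBM's GPU tree learner (requires a LightGBM build with GPU support; the data and parameter values here are illustrative):

```python
import lightgbm as lgb
import numpy as np

X = np.random.rand(100_000, 50)
y = (X[:, 0] > 0.5).astype(int)

params = {
    "objective": "binary",
    "device": "gpu",   # use the GPU-accelerated tree learner
    "max_bin": 63,     # fewer histogram bins typically run faster on GPU
}
booster = lgb.train(params, lgb.Dataset(X, y), num_boost_round=100)
```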