
Keynote Speaker.

Xiaowei Huang
Department of Computer Science,
University of Liverpool, UK
Robustness Certification of AI and Generative AI
The popularity of deep learning has brought not only excitement about its wide-ranging applications, but also concerns regarding its suitability for safety-critical domains—especially following the discovery of vulnerabilities in areas such as robustness, security, and explainability. To enable the safe adoption of deep learning in such contexts, rigorous methods are needed to provide guarantees about system behaviour. In this talk, I will focus primarily on robustness—an essential property that ensures a system continues to perform reliably under small environmental perturbations—and discuss methods for its certification. Certification methods are techniques that can confirm or refute the satisfaction of a given property, backed by provable guarantees. In the context of deep learning, a variety of such methods have been developed to address the entire lifecycle of neural network development and deployment, aiming to quantify and mitigate risks. These include, for example, formal verification, regulated training, randomized smoothing, conformal prediction, and runtime monitoring. This talk will provide an overview of these approaches from the perspective of the guarantees they offer. I will also share a few work-in-progress demonstrations, intended to spark discussion and invite constructive criticism.
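
As a concrete illustration of one of the certification methods mentioned above, the following is a minimal sketch of randomized smoothing in Python, in the spirit of Cohen et al. (2019). It is not code from the talk: the function certify and the placeholder base_classifier are hypothetical names introduced here for illustration, and numpy and scipy are assumed to be available. The sketch estimates the smoothed classifier's top class under Gaussian input noise and derives a certified L2 radius from a one-sided lower confidence bound on the top-class probability.

import numpy as np
from scipy.stats import norm, binomtest

def certify(base_classifier, x, sigma=0.25, n=1000, alpha=0.001, rng=None):
    """Monte-Carlo certification of a smoothed classifier at input x.

    Returns (predicted_class, certified_l2_radius), or (None, 0.0)
    when the top class cannot be certified at confidence 1 - alpha.
    """
    rng = np.random.default_rng(rng)
    # Sample predictions of the base classifier under Gaussian noise.
    counts = {}
    for _ in range(n):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        label = base_classifier(noisy)
        counts[label] = counts.get(label, 0) + 1
    top_class, top_count = max(counts.items(), key=lambda kv: kv[1])
    # One-sided lower confidence bound on P(f(x + noise) = top_class),
    # obtained from an exact (Clopper-Pearson) binomial interval.
    p_lower = binomtest(top_count, n).proportion_ci(
        confidence_level=1 - 2 * alpha, method="exact").low
    if p_lower <= 0.5:
        return None, 0.0  # abstain: the top class cannot be certified
    # Certified L2 radius: sigma * Phi^{-1}(p_lower).
    return top_class, sigma * norm.ppf(p_lower)

# Toy usage with a hypothetical linear "classifier" on 2-D inputs.
if __name__ == "__main__":
    f = lambda z: int(z.sum() > 0.0)
    cls, radius = certify(f, np.array([0.8, 0.6]), rng=0)
    print(f"class={cls}, certified L2 radius={radius:.3f}")

The guarantee offered here is probabilistic: with probability at least 1 - alpha over the sampling, every perturbation of L2 norm below the returned radius leaves the smoothed prediction unchanged, which is the kind of provable statement the abstract contrasts with purely empirical robustness evaluation.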

© 2025 WAISE

waise2025@easychair.org
