
Keynote Speaker.

Bettina Könighofer
Graz University of Technology,
Austria
Bettina Könighofer will be an Assistant Professor at Graz University of Technology in Austria starting from October 2022. She currently leads the Trusted AI research group at Lamarr Security Research in Graz. Her research interests lie at the intersection of AI and formal methods. On the AI side, she focuses on reinforcement learning and on questions of explainability, accountability, and safety. On the formal methods side, she is especially interested in probabilistic model checking, hardware synthesis, and runtime verification and enforcement. Bettina's work on shielding was among the first to combine correct-by-construction formal methods with AI. She is a work package leader in the H2020 project FOCETA (Foundations for Continuous Engineering of Trustworthy Autonomy). At TU Graz, Bettina teaches an undergraduate course on "Logic and Computability" and co-teaches a graduate course on "Model Checking".
Formal Methods for Trusted AI
The enormous influence of systems deploying artificial intelligence (AI), particularly of systems that learn from data and experience, stands in contrast to growing concerns about their safety and to society's relative lack of trust in them. To enable the broader deployment of AI-based systems, we need new methods that guarantee safety and that address the explainability, predictability, and accountability of the decision-making of AI-based systems, and thereby establish trust.
In this talk, I will consider the safety and trustworthiness of AI from a formal methods perspective. I will describe the challenges in achieving trustworthy AI and present our ideas and work for tackling these challenges, with a special focus on our work on correct-by-construction runtime assurance, also known as shielding. Additionally, I will touch upon our recent work on search-based testing strategies for AI and on the analysis of the intention and responsibility of AI-based systems via probabilistic model checking techniques.
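To give a flavor of the shielding idea, the sketch below is a minimal, hypothetical illustration (not code from the talk or from the speaker's implementations): a hand-written shield for a toy five-cell line world in which cell 4 is unsafe. In the actual approach, the per-state sets of safe actions would be synthesized correct-by-construction from a formal model of the environment, e.g. by solving a safety game, rather than enumerated by hand as done here.

```python
# Minimal illustrative sketch of a runtime shield (hypothetical example).
# A shield sits between a learning agent and the environment: it checks
# each proposed action against a set of provably safe actions for the
# current state and overrides unsafe choices with a safe fallback.

import random

# Toy world: states 0..4 on a line; state 4 is unsafe (a "cliff").
ACTIONS = {"left": -1, "right": +1}

def safe_actions(state):
    """Actions that keep the agent out of state 4. In real shielding,
    this set is computed correct-by-construction from a model of the
    environment, not enumerated by hand."""
    return {a for a, d in ACTIONS.items() if state + d != 4}

def shield(state, proposed_action):
    """Pass safe actions through unchanged; substitute a safe fallback
    for any unsafe proposal."""
    allowed = safe_actions(state)
    if proposed_action in allowed:
        return proposed_action
    return next(iter(allowed))

# A random (unshielded) agent proposing actions; the shield filters them.
state = 2
for step in range(10):
    proposed = random.choice(list(ACTIONS))
    action = shield(state, proposed)
    state = max(0, state + ACTIONS[action])
    assert state != 4, "the shield guarantees this never triggers"
    print(f"step {step}: proposed={proposed!r}, executed={action!r}, state={state}")
```

A key design point of shielding, which the sketch preserves, is that the shield interferes as little as possible: the agent keeps learning and acting freely, and the shield only steps in when a proposed action would violate the safety specification.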
Invited