Programme.

Starting time: 13:00 UTC (14:00 BST, 15:00 CEST, 09:00 EDT, 22:00 JST)

All times below are in UTC.


12:45-13:00  Opening and welcome to WAISE 2020, Organisation Committee

Session 1: Machine Learning Uncertainty and Reliability
Chair: Orlando Avila Garcia

13:00-13:15  Revisiting Neuron Coverage and its Application to Test Generation, Matthias Woehrle, Stephanie Abrecht, Maram Akila, Sujan Sai Gannamaneni, Konrad Groh, Christian Heinzemann and Sebastian Houben

13:15-13:30  A Principal Component Analysis approach for embedding local symmetries into Deep Learning algorithms, Pierre-Yves Lagrave

13:30-13:45  A Framework for Building Uncertainty Wrappers for AI/ML-based Data-Driven Components, Michael Kläs and Lisa Jöckel

Session 2: Machine Learning Safeguards
Chair: Rob Alexander

13:45-14:00  Rule-based Safety Evidence for Neural Networks, Tewodros A. Beyene and Amit Sahu

14:00-14:15  Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks, Oliver Willers, Sebastian Sudholt, Shervin Raafatnia and Stephanie Abrecht

14:15-14:30  Positive Trust Balance for Self-Driving Car Deployment, Philip Koopman and Michael Wagner

14:30-14:45  Integration of Formal Safety Models on System Level using the Example of Responsibility Sensitive Safety and CARLA Driving Simulator, Bernd Gassmann, Frederik Pasch, Fabian Oboril and Kay-Ulrich Scholl

14:45-15:00  Comfort break

Chair: Simos Gerasimou

15:00-16:15  SafeDNN: Understanding and Verifying Neural Networks, Corina Pasareanu, NASA Ames and Carnegie Mellon University, United States

16:15-16:30  Comfort break

Session 3: Assurances for Autonomous Systems
Chair: Chih-Hong Cheng

16:30-16:45  A Safety Case Pattern for Systems with Machine Learning Components, Ernest Wozniak, Carmen Carlan, Esra Acar-Celik and Henrik J. Putzer

16:45-17:00  Structuring the Safety Argumentation for Deep Neural Network Based Perception in Automotive Applications, Gesina Schwalbe, Bernhard Knie, Timo Sämann, Timo Dobberphul, Lydia Gauerhof, Shervin Raafatnia and Vittorio Rocco

17:00-17:15  An Assurance Case Pattern for the Interpretability of Machine Learning in Safety-Critical Systems, Francis Rhys Ward and Ibrahim Habli

17:15-17:30  A Structured Argument for Assuring Safety of the Intended Functionality (SOTIF), John Birch, David Blackburn, John Botham, Ibrahim Habli, David Higham, Helen Monkhouse, Gareth Price, Norina Ratiu and Roger Rivett
New Ideas and Emerging Results
Chair: Zakaria Chihani

17:30-17:40  Dependability engineering concepts for autonomous AI-based systems, Georg Macher, Eric Armengaud, Davide Bacciu, Jürgen Dobaj, Omar Veledar and Matthias Seidl

17:40-17:50  Applying Heinrich’s Triangle to Autonomous Vehicles: Analyzing the Long Tail of Human and Artificial Intelligence Failures, Amitai Bin-Nun, Anthony Panasci and Radboud Duintjer Tebbens

17:50-18:00  Solving AI Certification in SAE G-34/EUROCAE WG-114, Mark Roboff

18:00-18:15  Comfort break

Plenary Discussion
Chair: Philip Koopman

18:15-18:45  Plenary Discussion: How Close Are We in Engineering Safe AI Systems?

18:45-19:00  Wrap Up & Best Paper Award