

First International Workshop on Artificial Intelligence Safety Engineering

Västerås, Sweden

September 18, 2018

Programme

Presentations are available via the links below.

  • 8:25-8:30: Welcome

  • 8:30-9:30: Keynote (Chair: Huascar Espinoza)

  • 9:30-10:30: Session 1: Machine Learning Safety and Reliability (Chair: Orlando Avila-García). Debate panel discussants: Jin Zhang, Rob Ashmore

  • 10:30-11:00: Coffee Break and Poster Sessions

  • 11:00-12:20: Session 2: Uncertainty in Automated Driving (Chair: Timo Latvala). Paper: "Uncertainty in Machine Learning Applications - A Practice-Driven Classification of Uncertainty", Michael Kläs and Anna Maria Vollmer. Debate panel discussant: Hari Balaji

  • 12:20-13:20: Lunch and Poster Sessions

  • 13:20-13:50: Invited Talk (Chair: Orlando Avila-García)

  • 13:50-14:50: Session 3: Challenges in AI Safety (Chair: Rob Ashmore). Debate panel discussants: Huascar Espinoza, Alexandre Moreira Nascimento

  • 14:50-15:30: Session 4: Ethically Aligned Design of Autonomous Systems (Chair: Rob Alexander). Debate panel discussants: Mauricio Castillo-Effen, Orlando Avila-García

  • 15:30-16:00: Coffee Break and Poster Sessions

  • 16:00-17:00: Session 5: Human-Inspired Approaches to AI Safety (Chair: Andreas Theodorou). Debate panel discussant: Ilse Verdiesen

  • 17:00-17:40: Session 6: Runtime Risk Assessment in Automated Driving (Chair: Jérémie Guiochet). Debate panel discussant: Timo Latvala

  • 17:40-18:00: Wrap-up and Best Paper Award

Speakers

Prof. Philip Koopman

Carnegie Mellon University / Edge Case Research

Prof. Philip Koopman is a faculty member at the Carnegie Mellon University ECE department, with additional affiliations with the Institute for Software Research and the Robotics Institute. He leads research on safe and secure embedded systems and teaches cost-effective embedded system design techniques.

 

He has over 20 years of experience with autonomous vehicle safety, dating back to the CMU Navlab team and the Automated Highway Systems (AHS) program. His most recent projects include using stress testing and run-time monitoring to ensure safety for a variety of vehicle and robotic applications across the research, industry, and defense sectors. He also has experience with automotive and industrial functional safety, including testifying as an expert in vehicle safety class-action litigation and consulting for NHTSA.

 

He is co-founder of Edge Case Research, which provides tools and services for autonomous vehicle testing and safety validation. His pre-university career includes experience as a US Navy submarine officer, embedded CPU designer at Harris Semiconductor, and embedded system architect at United Technologies. He is a Senior Member of IEEE, a Senior Member of ACM, and a member of SAE. 

 

http://www.ece.cmu.edu/~koopman

Keynote:

Autonomous Vehicle Safety: Technical and Social Issues

More than three decades of ground vehicle autonomy development have led to the current high-profile deployment of autonomous vehicle technology on public roads. Ensuring the safety of these vehicles requires solving a number of challenging technical, social, and political problems. Significant progress can be made on control and planning safety via a doer/checker architecture. Perception validation is more challenging, and thus far most developers have relied primarily on road testing. Even if closed-course testing and simulation are increased, the problem of edge cases not seen in on-road data collection will remain, due to the likelihood of a heavy-tail distribution of surprises. Part of this heavy tail is subtle environmental degradation, which our work has shown can cause failures that reveal potential weak spots in perception systems. The talk will summarize my experiences in these areas and frame the broader hard questions: how safe is safe enough, whether deployment delays cost lives, and how the technology should be regulated.


Prof. François Terrier

Commissariat à l’Energie Atomique (CEA)

François Terrier is head of the software and system engineering department at the CEA LIST Institute. He holds a PhD in artificial intelligence and worked for 10 years on expert systems using three-valued, temporal, and fuzzy logics. Since 1994 he has conducted research on software engineering. He was the CEA representative in the European Network of Excellence on embedded systems and in OMG standardization, and he leads research on model-driven engineering solutions for trustworthy systems and software.

CEA LIST is a major hub for trustworthy digital systems and artificial intelligence, including safety and cybersecurity research. It develops tools to boost digital trust and make communications more secure, software to ensure aircraft operating security with an aeronautics-industry leader, and formal methods to ensure software security with a major security-industry player. In artificial intelligence, CEA LIST works with a major automotive manufacturer and its suppliers to develop embedded intelligence for autonomous vehicles.

The system and software engineering department's activity centers on defining methods and developing tools for trustworthy systems. As its head, François Terrier is in charge of CEA LIST's new research programme on trustworthy artificial intelligence.

Invited Talk:

Challenges in the Qualification of Safety-Critical Machine Learning-based Components

The explosion in Machine Learning (ML) performance is a strong push to consider its use in safety-critical embedded systems, such as autonomous vehicles. The assurance process for safety-critical software and systems in regulated industries such as aerospace, nuclear power, railway, and automotive has long been streamlined and well mastered. These industries use well-defined standards, regulatory frameworks, and processes, as well as formal techniques, to assess and demonstrate the safety of the systems and software they develop. However, the uncertainty and opaqueness of ML-based systems and components make them difficult to validate and verify with most traditional safety engineering methods, raising the question of how such components can be integrated into safety-critical systems. This talk will explore the challenges and barriers to integrating ML-based components in safety-critical systems, and will provide insights into the main concerns reported across various collaborations with industry.

Best Paper Award

The Programme Committee (PC) will designate up to three papers as candidates to the WAISE Best Paper Award.

The best paper will be selected based on the votes of the workshop’s participants.

During the workshop, all participants will receive a sheet on which to vote for the best paper. Voters may not vote for papers they co-author (each voter's name will also be recorded on the sheet to ensure there are no conflicts of interest).

At the workshop's closing, each author of the winning paper will receive a certificate bearing the name of the award, the title of the paper, and the names of its authors.

The WAISE 2018 Best Paper Award was granted to:

Krzysztof Czarnecki and Rick Salay for "Towards a Framework to Manage Perceptual Uncertainty for Safe Automated Driving".

Committees

Organization Committee

  • Huascar Espinoza, CEA LIST, France

  • Orlando Avila-García, Atos, Spain

  • Rob Alexander, University of York, UK

  • Andreas Theodorou, University of Bath, UK

Steering Committee

  • Stuart Russell, UC Berkeley, USA

  • Raja Chatila, ISIR - Sorbonne University, France

  • Roman V. Yampolskiy, University of Louisville, USA

  • Nozha Boujemaa, DATAIA Institute & INRIA, France

  • Mark Nitzberg, Center for Human-Compatible AI, USA

  • Philip Koopman, Carnegie Mellon University, USA

Programme Committee

  • Roman V. Yampolskiy, University of Louisville, USA

  • Raja Chatila, ISIR - Sorbonne University, France

  • Stuart Russell, UC Berkeley, USA

  • Nozha Boujemaa, DATAIA Institute & INRIA, France

  • Mark Nitzberg, Center for Human-Compatible AI, USA

  • Victoria Krakovna, Google DeepMind, UK

  • Chokri Mraidha, CEA LIST, France

  • Heather Roff, Leverhulme Centre for the Future of Intelligence at University of Cambridge, UK

  • Bernhard Kaiser, ANSYS, Germany

  • John Favaro, INTECS, Italy

  • Jonas Nilsson, Zenuity, Sweden

  • Philippa Ryan, Adelard, UK

  • José Hernández-Orallo, Universitat Politècnica de València, Spain

  • Andrew Banks, LDRA, UK

  • Carlos Hernández, TU Delft, Netherlands

  • José M. Faria, Safe Perspective Ltd., UK

  • Philip Koopman, Carnegie Mellon University, USA

  • Florent Kirchner, CEA LIST, France

  • Joanna Bryson, University of Bath, UK

  • Stefan Kugele, Technical University of Munich, Germany

  • Virginia Dignum, TU Delft, Netherlands

  • Timo Latvala, Space Systems Finland, Finland

  • Mehrdad Saadatmand, RISE SICS, Sweden

  • Rick Salay, University of Waterloo, Canada

  • Lavinia Burski, AECOM, UK

  • Jérémie Guiochet, LAAS-CNRS, France

  • Mario Gleirscher, University of York, UK

  • François Terrier, CEA LIST, France

  • Rob Ashmore, Defence Science and Technology Laboratory, UK

  • Erwin Schoitsch, Austrian Institute of Technology, Austria

  • Chris Allsopp, Frazer-Nash Consultancy, UK

  • Mauricio Castillo-Effen, Lockheed Martin, USA
