Publication:

On the Road to Clarity : Exploring Explainable AI for World Models in a Driver Assistance System

Files

There are no files associated with this document.

Date

2024

Authors

Roshdi, Mohamed
Petzold, Julian
Wahby, Mostafa
Ebrahim, Hussein
Berekovic, Mladen
Hamann, Heiko

Publication type
Contribution to conference proceedings
Publication status
Published

Published in

NERI, Ferrante, Guansong PANG, Mengmi ZHANG, eds. 2024 IEEE Conference on Artificial Intelligence : CAI 2024 : 25-27 June 2024, Marina Bay Sands, Singapore, proceedings. Piscataway, NJ: IEEE, 2024, pp. 1032-1039. ISBN 979-8-3503-5410-2. Available at: doi: 10.1109/cai59869.2024.00187

Abstract

In Autonomous Driving (AD), transparency and safety are paramount, as mistakes are costly. However, the neural networks used in AD systems are generally considered black boxes. As a countermeasure, explainable AI (XAI) offers methods such as feature relevance estimation and dimensionality reduction. Coarse-graining techniques can also help reduce dimensionality and uncover interpretable global patterns. One such coarse-graining method is the renormalization group from statistical physics, which has previously been applied to Restricted Boltzmann Machines (RBMs) to interpret unsupervised learning. We refine this technique by building a transparent backbone model for convolutional variational autoencoders (VAEs) that allows mapping latent values to input features and performs comparably to trained black-box VAEs. Moreover, we propose a custom feature-map visualization technique to analyze the internal convolutional layers of the VAE and explain internal causes of poor reconstruction that may lead to dangerous traffic scenarios in AD applications. As a second key contribution, we propose explanation and evaluation techniques for the internal dynamics and feature relevance of prediction networks. We test a long short-term memory (LSTM) network in the computer vision domain to evaluate the predictability, and in future applications potentially the safety, of prediction models. We showcase our methods by analyzing a VAE-LSTM world model that predicts pedestrian perception in an urban traffic situation.
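For orientation, the sketch below illustrates the kind of VAE-LSTM world model the abstract describes: a convolutional VAE compresses camera frames into latent vectors, and an LSTM predicts the next latent state. This is a minimal sketch, assuming PyTorch; all layer sizes, names, and the 64x64 input resolution are illustrative assumptions, not the authors' implementation.

# Minimal VAE-LSTM world model sketch (illustrative assumptions, not the
# paper's implementation): a convolutional VAE encodes frames to latent
# vectors, and an LSTM predicts the next latent state.
import torch
import torch.nn as nn

class ConvVAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 16 * 16)
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(self.fc_dec(z)), mu, logvar

class WorldModel(nn.Module):
    """VAE encodes each frame; LSTM predicts the next latent state."""
    def __init__(self, latent_dim=32, hidden_dim=256):
        super().__init__()
        self.vae = ConvVAE(latent_dim)
        self.lstm = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, frames):  # frames: (batch, time, 3, 64, 64)
        b, t = frames.shape[:2]
        h = self.vae.encoder(frames.reshape(b * t, *frames.shape[2:]))
        z = self.vae.fc_mu(h).reshape(b, t, -1)  # latent means as states
        out, _ = self.lstm(z)
        return self.head(out)                    # predicted next latents

model = WorldModel()
pred = model(torch.randn(2, 10, 3, 64, 64))
print(pred.shape)  # torch.Size([2, 10, 32])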

Subject (DDC)
004 Computer science

Keywords

Visualization, Explainable AI, Computational modeling, Roads, Predictive models, Safety, Long short term memory

Conference

CAI 2024 : IEEE Conference on Artificial Intelligence, June 25 - 27, 2024, Marina Bay Sands, Singapore

Cite

ISO 690
ROSHDI, Mohamed, Julian PETZOLD, Mostafa WAHBY, Hussein EBRAHIM, Mladen BEREKOVIC, Heiko HAMANN, 2024. On the Road to Clarity : Exploring Explainable AI for World Models in a Driver Assistance System. CAI 2024 : IEEE Conference on Artificial Intelligence. Marina Bay Sands, Singapore, June 25 - 27, 2024. In: NERI, Ferrante, Guansong PANG, Mengmi ZHANG, eds. 2024 IEEE Conference on Artificial Intelligence : CAI 2024 : 25-27 June 2024, Marina Bay Sands, Singapore, proceedings. Piscataway, NJ: IEEE, 2024, pp. 1032-1039. ISBN 979-8-3503-5410-2. Available at: doi: 10.1109/cai59869.2024.00187
BibTex
@inproceedings{Roshdi2024-06-25Clari-71403,
  year={2024},
  doi={10.1109/cai59869.2024.00187},
  title={On the Road to Clarity : Exploring Explainable AI for World Models in a Driver Assistance System},
  isbn={979-8-3503-5410-2},
  publisher={IEEE},
  address={Piscataway, NJ},
  booktitle={2024 IEEE Conference on Artificial Intelligence : CAI 2024 : 25-27 June 2024, Marina Bay Sands, Singapore, proceedings},
  pages={1032--1039},
  editor={Neri, Ferrante and Pang, Guansong and Zhang, Mengmi},
  author={Roshdi, Mohamed and Petzold, Julian and Wahby, Mostafa and Ebrahim, Hussein and Berekovic, Mladen and Hamann, Heiko}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/71403">
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:title>On the Road to Clarity : Exploring Explainable AI for World Models in a Driver Assistance System</dcterms:title>
    <dc:creator>Hamann, Heiko</dc:creator>
    <dc:language>eng</dc:language>
    <dc:creator>Wahby, Mostafa</dc:creator>
    <dc:contributor>Wahby, Mostafa</dc:contributor>
    <dc:creator>Petzold, Julian</dc:creator>
    <dc:creator>Ebrahim, Hussein</dc:creator>
    <dc:contributor>Petzold, Julian</dc:contributor>
    <dcterms:abstract>In Autonomous Driving (AD) transparency and safety are paramount, as mistakes are costly. However, neural networks used in AD systems are generally considered black boxes. As a countermeasure, we have methods of explainable AI (XAI), such as feature relevance estimation and dimensionality reduction. Coarse graining techniques can also help reduce dimensionality and find interpretable global patterns. A specific coarse graining method is Renormalization Groups from statistical physics. It has previously been applied to Restricted Boltzmann Machines (RBMs) to interpret unsupervised learning. We refine this technique by building a transparent backbone model for convolutional variational autoencoders (VAE) that allows mapping latent values to input features and has performance comparable to trained black box VAEs. Moreover, we propose a custom feature map visualization technique to analyze the internal convolutional layers in the VAE to explain internal causes of poor reconstruction that may lead to dangerous traffic scenarios in AD applications. In a second key contribution, we propose explanation and evaluation techniques for the internal dynamics and feature relevance of prediction networks. We test a long short-term memory (LSTM) network in the computer vision domain to evaluate the predictability and in future applications potentially safety of prediction models. We showcase our methods by analyzing a VAE-LSTM world model that predicts pedestrian perception in an urban traffic situation.</dcterms:abstract>
    <dc:contributor>Berekovic, Mladen</dc:contributor>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/71403"/>
    <dc:contributor>Roshdi, Mohamed</dc:contributor>
    <dc:creator>Roshdi, Mohamed</dc:creator>
    <dc:contributor>Ebrahim, Hussein</dc:contributor>
    <dcterms:issued>2024-06-25</dcterms:issued>
    <dc:creator>Berekovic, Mladen</dc:creator>
    <dc:contributor>Hamann, Heiko</dc:contributor>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-11-22T10:08:57Z</dc:date>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-11-22T10:08:57Z</dcterms:available>
  </rdf:Description>
</rdf:RDF>

University bibliography
Yes