Publication:

SpecRepair : Counter-Example Guided Safety Repair of Deep Neural Networks


Files

Bauer-Marquart_2-19tqdmlko0rgt3.PDF (441.19 KB, 58 downloads)

Date

2022

Open Access publication
Open Access Green

Publication type
Contribution to conference proceedings
Publication status
Published

Published in

LEGUNSEN, Owolabi, ed., Grigore ROSU, ed. Model Checking Software : 28th International Symposium, SPIN 2022, Virtual Event, May 21, 2022 : Proceedings. Cham: Springer, 2022, pp. 79-96. Lecture Notes in Computer Science. 13255. ISSN 0302-9743. eISSN 1611-3349. ISBN 978-3-031-15076-0. Available under: doi: 10.1007/978-3-031-15077-7_5

Abstract

Deep neural networks (DNNs) are increasingly applied in safety-critical domains, such as self-driving cars, unmanned aircraft, and medical diagnosis. It is of fundamental importance to certify the safety of these DNNs, i.e. that they comply with a formal safety specification. While safety certification tools exactly answer this question, they are of no help in debugging unsafe DNNs, requiring the developer to iteratively verify and modify the DNN until safety is eventually achieved. Hence, a repair technique needs to be developed that can produce a safe DNN automatically. To address this need, we present SpecRepair, a tool that efficiently eliminates counter-examples from a DNN and produces a provably safe DNN without harming its classification accuracy. SpecRepair combines specification-based counter-example search and resumes training of the DNN, penalizing counter-examples and certifying the resulting DNN. We evaluate SpecRepair’s effectiveness on the ACAS Xu benchmark, a DNN-based controller for unmanned aircraft, and two image classification benchmarks. The results show that SpecRepair is more successful in producing safe DNNs than comparable methods, has a shorter runtime, and produces safe DNNs while preserving their classification accuracy.
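
The abstract outlines a counter-example guided repair loop: search for violations of the safety specification, resume training with a penalty on the counter-examples found, and certify the result. The following is a minimal, self-contained Python sketch of such a loop, shown only to illustrate the idea: the one-layer linear model, the box-shaped specification, and all helper names (violation, find_counterexamples, certify, repair_step) are assumptions for this toy example and are not taken from the SpecRepair tool.

import numpy as np

rng = np.random.default_rng(0)

# Toy "network": a one-layer linear model f(x) = w @ x + b on inputs x in [0, 1]^2.
w = np.array([1.0, -2.0])
b = 0.1

# Safety specification: f(x) >= 0 for every x in the input box [0, 1]^2.
def violation(x, w, b):
    """Positive value = amount by which the specification is violated at x."""
    return max(0.0, -(w @ x + b))

def find_counterexamples(w, b, n_samples=2000):
    """Specification-based search: sample the input box and keep violating points."""
    xs = rng.random((n_samples, 2))
    return [x for x in xs if violation(x, w, b) > 0]

def certify(w, b):
    """Exact check for this toy: a linear function on a box is minimal at a vertex."""
    vertices = [np.array([i, j], dtype=float) for i in (0, 1) for j in (0, 1)]
    return all(w @ v + b >= 0 for v in vertices)

def repair_step(w, b, counterexamples, lr=0.05):
    """Resume "training" with a hinge penalty on the found counter-examples."""
    for x in counterexamples:
        if violation(x, w, b) > 0:
            # Gradient descent on the penalty max(0, -(w @ x + b)) w.r.t. (w, b).
            w = w + lr * x
            b = b + lr
    return w, b

# Counter-example guided repair loop: search, penalize, re-certify.
for iteration in range(100):
    if certify(w, b):
        print(f"certified safe after {iteration} repair iteration(s); w={w}, b={b:.3f}")
        break
    w, b = repair_step(w, b, find_counterexamples(w, b))
else:
    print("repair budget exhausted without certification")

In this toy setting certification is exact because a linear function over a box attains its minimum at a vertex; for real DNNs this step requires a dedicated safety certification tool, as described in the abstract.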

Subject area (DDC)
004 Computer science

Keywords

Neural networks, Safety repair, Safety specification

Conference

Model Checking Software : 28th International Symposium, SPIN 2022 (virtual), May 21, 2022

Cite

ISO 690
BAUER-MARQUART, Fabian, David BOETIUS, Stefan LEUE, Christian SCHILLING, 2022. SpecRepair : Counter-Example Guided Safety Repair of Deep Neural Networks. Model Checking Software : 28th International Symposium, SPIN 2022 (virtual), May 21, 2022. In: LEGUNSEN, Owolabi, ed., Grigore ROSU, ed. Model Checking Software : 28th International Symposium, SPIN 2022, Virtual Event, May 21, 2022 : Proceedings. Cham: Springer, 2022, pp. 79-96. Lecture Notes in Computer Science. 13255. ISSN 0302-9743. eISSN 1611-3349. ISBN 978-3-031-15076-0. Available under: doi: 10.1007/978-3-031-15077-7_5
BibTeX
@inproceedings{BauerMarquart2022SpecR-59172,
  year={2022},
  doi={10.1007/978-3-031-15077-7_5},
  title={SpecRepair : Counter-Example Guided Safety Repair of Deep Neural Networks},
  number={13255},
  isbn={978-3-031-15076-0},
  issn={0302-9743},
  publisher={Springer},
  address={Cham},
  series={Lecture Notes in Computer Science},
  booktitle={Model Checking Software : 28th International Symposium, SPIN 2022, Virtual Event, May 21, 2022 : Proceedings},
  pages={79--96},
  editor={Legunsen, Owolabi and Rosu, Grigore},
  author={Bauer-Marquart, Fabian and Boetius, David and Leue, Stefan and Schilling, Christian}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/59172">
    <dc:rights>terms-of-use</dc:rights>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-11-16T12:02:29Z</dcterms:available>
    <dc:language>eng</dc:language>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/59172/1/Bauer-Marquart_2-19tqdmlko0rgt3.PDF"/>
    <dcterms:abstract xml:lang="eng">Deep neural networks (DNNs) are increasingly applied in safety-critical domains, such as self-driving cars, unmanned aircraft, and medical diagnosis. It is of fundamental importance to certify the safety of these DNNs, i.e. that they comply with a formal safety specification. While safety certification tools exactly answer this question, they are of no help in debugging unsafe DNNs, requiring the developer to iteratively verify and modify the DNN until safety is eventually achieved. Hence, a repair technique needs to be developed that can produce a safe DNN automatically. To address this need, we present SpecRepair, a tool that efficiently eliminates counter-examples from a DNN and produces a provably safe DNN without harming its classification accuracy. SpecRepair combines specification-based counter-example search and resumes training of the DNN, penalizing counter-examples and certifying the resulting DNN. We evaluate SpecRepair’s effectiveness on the ACAS Xu benchmark, a DNN-based controller for unmanned aircraft, and two image classification benchmarks. The results show that SpecRepair is more successful in producing safe DNNs than comparable methods, has a shorter runtime, and produces safe DNNs while preserving their classification accuracy.</dcterms:abstract>
    <dcterms:title>SpecRepair : Counter-Example Guided Safety Repair of Deep Neural Networks</dcterms:title>
    <dc:contributor>Schilling, Christian</dc:contributor>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/59172"/>
    <dc:contributor>Leue, Stefan</dc:contributor>
    <dc:creator>Schilling, Christian</dc:creator>
    <dcterms:issued>2022</dcterms:issued>
    <dc:creator>Bauer-Marquart, Fabian</dc:creator>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:contributor>Boetius, David</dc:contributor>
    <dcterms:rights rdf:resource="https://rightsstatements.org/page/InC/1.0/"/>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-11-16T12:02:29Z</dc:date>
    <dc:contributor>Bauer-Marquart, Fabian</dc:contributor>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/59172/1/Bauer-Marquart_2-19tqdmlko0rgt3.PDF"/>
    <dc:creator>Boetius, David</dc:creator>
    <dc:creator>Leue, Stefan</dc:creator>
  </rdf:Description>
</rdf:RDF>

University bibliography
Yes