Publication: A Robust Optimisation Perspective on Counterexample-Guided Repair of Neural Networks
Files: Boetius_2-y8w0lfpv6ens3.pdf
Date: 2023
Authors: Boetius, David; Leue, Stefan; Sutter, Tobias
Publisher: OpenReview
URI (citable link): https://kops.uni-konstanz.de/handle/123456789/70510
Publication type: Conference paper
Published in: Proceedings of the 40th International Conference on Machine Learning : ICML 2023
Abstract
Counterexample-guided repair aims at creating neural networks with mathematical safety guarantees, facilitating the application of neural networks in safety-critical domains. However, whether counterexample-guided repair is guaranteed to terminate remains an open question. We approach this question by showing that counterexample-guided repair can be viewed as a robust optimisation algorithm. While termination guarantees for neural network repair itself remain beyond our reach, we prove termination for more restrained machine learning models and disprove termination in a general setting. We empirically study the practical implications of our theoretical results, demonstrating the suitability of common verifiers and falsifiers for repair despite a disadvantageous theoretical result. Additionally, we use our theoretical insights to devise a novel algorithm for repairing linear regression models based on quadratic programming, surpassing existing approaches.
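To make the loop the abstract refers to concrete, here is a minimal sketch of a generic counterexample-guided repair scheme, instantiated for a toy linear model under a box-input, output-upper-bound specification. This is an illustration only, not the paper's algorithm: the function names, the cvxpy-based quadratic program, and the closed-form falsifier are assumptions chosen for brevity.

    # Illustrative sketch of a counterexample-guided repair loop (not the paper's
    # implementation): falsify, collect the counterexample, re-optimise via a QP.
    import numpy as np
    import cvxpy as cp  # assumed available for solving the quadratic program

    def worst_case_input(w, lower, upper):
        # Exact falsifier for a linear model on a box: w @ x + b is maximised
        # coordinate-wise at the box corner selected by the sign of w.
        return np.where(w >= 0, upper, lower)

    def repair_linear_model(w0, b0, lower, upper, bound, max_iter=100):
        # Repair (w0, b0) so that w @ x + b <= bound for all x in [lower, upper],
        # while deviating as little as possible from the original parameters.
        counterexamples = []
        w, b = np.asarray(w0, dtype=float), float(b0)
        for _ in range(max_iter):
            x_star = worst_case_input(w, lower, upper)
            if w @ x_star + b <= bound:
                return w, b  # specification holds everywhere: repair succeeded
            counterexamples.append(x_star)
            # Re-optimisation step: a quadratic program that enforces the
            # property on all counterexamples collected so far.
            w_var, b_var = cp.Variable(len(w0)), cp.Variable()
            objective = cp.Minimize(cp.sum_squares(w_var - w0) + cp.square(b_var - b0))
            constraints = [w_var @ x + b_var <= bound for x in counterexamples]
            cp.Problem(objective, constraints).solve()
            w, b = w_var.value, float(b_var.value)
        raise RuntimeError("no repair found within the iteration budget")

In this toy setting the falsifier can only ever return finitely many distinct counterexamples (box corners), which hints at the kind of structural argument the paper's termination analysis revolves around; whether and when such termination carries over to other model classes is exactly the question the paper studies.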
Conference: 40th International Conference on Machine Learning (ICML 2023), Honolulu, Hawaii, 23-29 July 2023
Cite
ISO 690
BOETIUS, David, Stefan LEUE, Tobias SUTTER, 2023. A Robust Optimisation Perspective on Counterexample-Guided Repair of Neural Networks. 40th International Conference on Machine Learning : ICML 2023. Honolulu, Hawaii, 23 July 2023 - 29 July 2023. In: Proceedings of the 40th International Conference on Machine Learning : ICML 2023. OpenReview, 2023
BibTeX
@inproceedings{Boetius2023Robus-70510,
  year      = {2023},
  title     = {A Robust Optimisation Perspective on Counterexample-Guided Repair of Neural Networks},
  url       = {https://openreview.net/forum?id=z3hnQh5UJd},
  publisher = {OpenReview},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning : ICML 2023},
  author    = {Boetius, David and Leue, Stefan and Sutter, Tobias}
}
RDF
<rdf:RDF xmlns:dcterms="http://purl.org/dc/terms/"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:bibo="http://purl.org/ontology/bibo/"
         xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/"
         xmlns:void="http://rdfs.org/ns/void#"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema#">
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/70510">
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dc:creator>Sutter, Tobias</dc:creator>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/70510/1/Boetius_2-y8w0lfpv6ens3.pdf"/>
    <dcterms:title>A Robust Optimisation Perspective on Counterexample-Guided Repair of Neural Networks</dcterms:title>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/70510"/>
    <dc:creator>Boetius, David</dc:creator>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:contributor>Leue, Stefan</dc:contributor>
    <dc:language>eng</dc:language>
    <dcterms:issued>2023</dcterms:issued>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/70510/1/Boetius_2-y8w0lfpv6ens3.pdf"/>
    <dc:contributor>Sutter, Tobias</dc:contributor>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-07-31T10:30:18Z</dc:date>
    <dc:creator>Leue, Stefan</dc:creator>
    <dcterms:abstract>Counterexample-guided repair aims at creating neural networks with mathematical safety guarantees, facilitating the application of neural networks in safety-critical domains. However, whether counterexample-guided repair is guaranteed to terminate remains an open question. We approach this question by showing that counterexample-guided repair can be viewed as a robust optimisation algorithm. While termination guarantees for neural network repair itself remain beyond our reach, we prove termination for more restrained machine learning models and disprove termination in a general setting. We empirically study the practical implications of our theoretical results, demonstrating the suitability of common verifiers and falsifiers for repair despite a disadvantageous theoretical result. Additionally, we use our theoretical insights to devise a novel algorithm for repairing linear regression models based on quadratic programming, surpassing existing approaches.</dcterms:abstract>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-07-31T10:30:18Z</dcterms:available>
    <dc:contributor>Boetius, David</dc:contributor>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
  </rdf:Description>
</rdf:RDF>