BARReL : Bottleneck Attention for Adversarial Robustness in Vision-Based Reinforcement Learning

Files
No files are associated with this document.
Date
2022
Authors
Bykovets, Eugene
Metz, Yannick
El-Assady, Mennatallah
Keim, Daniel A.
Buhmann, Joachim M.
URI (citable link)
https://kops.uni-konstanz.de/handle/123456789/58409
Publication type
Preprint
Publication status
Published
Abstract

Robustness to adversarial perturbations has been explored in many areas of computer vision. This robustness is particularly relevant in vision-based reinforcement learning, as the actions of autonomous agents might be safety-critical or impactful in the real world. We investigate the susceptibility of vision-based reinforcement learning agents to gradient-based adversarial attacks and evaluate a potential defense. We observe that Bottleneck Attention Modules (BAM) included in CNN architectures can act as potential tools to increase robustness against adversarial attacks. We show how learned attention maps can be used to recover the activations of a convolutional layer by restricting the spatial activations to salient regions. Across a number of RL environments, BAM-enhanced architectures show increased robustness during inference. Finally, we discuss potential future research directions.
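
To make the mechanism described in the abstract concrete, below is a minimal PyTorch sketch of BAM-style spatial gating (after Park et al., 2018): a small bottleneck branch predicts a one-channel attention map whose sigmoid rescales the spatial activations of a convolutional layer, suppressing non-salient regions. This is an illustrative sketch, not the paper's implementation; the class name SpatialGate, the layer sizes, and the reduction/dilation settings are assumptions, and the full BAM additionally combines a channel branch with the spatial branch and applies a residual 1 + sigmoid gate.

import torch
import torch.nn as nn

class SpatialGate(nn.Module):
    """Bottleneck branch that predicts a per-position attention map.

    Hypothetical, simplified spatial-only variant of BAM for illustration.
    """
    def __init__(self, channels: int, reduction: int = 16, dilation: int = 4):
        super().__init__()
        mid = channels // reduction
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),  # channel bottleneck
            nn.Conv2d(mid, mid, kernel_size=3, padding=dilation, dilation=dilation),
            nn.Conv2d(mid, mid, kernel_size=3, padding=dilation, dilation=dilation),
            nn.Conv2d(mid, 1, kernel_size=1),         # collapse to a 1-channel map
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sigmoid squashes the map to [0, 1]; multiplying by it restricts
        # the spatial activations to salient regions, as the abstract describes.
        attn = torch.sigmoid(self.body(x))  # shape (N, 1, H, W)
        return x * attn                     # broadcast over channels

# Usage: gate the feature map of a conv layer in an Atari-style encoder.
feats = torch.randn(8, 64, 20, 20)          # (batch, channels, H, W)
gated = SpatialGate(64)(feats)
assert gated.shape == feats.shape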

Subject (DDC)
004 Computer science
Cite
ISO 690
BYKOVETS, Eugene, Yannick METZ, Mennatallah EL-ASSADY, Daniel A. KEIM, Joachim M. BUHMANN, 2022. BARReL : Bottleneck Attention for Adversarial Robustness in Vision-Based Reinforcement Learning
BibTex
@unpublished{Bykovets2022BARRe-58409,
  year={2022},
  title={BARReL : Bottleneck Attention for Adversarial Robustness in Vision-Based Reinforcement Learning},
  author={Bykovets, Eugene and Metz, Yannick and El-Assady, Mennatallah and Keim, Daniel A. and Buhmann, Joachim M.}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/58409">
    <dc:contributor>Bykovets, Eugene</dc:contributor>
    <dc:creator>Keim, Daniel A.</dc:creator>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-08-29T14:25:15Z</dc:date>
    <dc:contributor>Buhmann, Joachim M.</dc:contributor>
    <dcterms:rights rdf:resource="https://rightsstatements.org/page/InC/1.0/"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/58409"/>
    <dc:creator>El-Assady, Mennatallah</dc:creator>
    <dc:contributor>Metz, Yannick</dc:contributor>
    <dc:rights>terms-of-use</dc:rights>
    <dc:creator>Buhmann, Joachim M.</dc:creator>
    <dc:contributor>Keim, Daniel A.</dc:contributor>
    <dcterms:issued>2022</dcterms:issued>
    <dc:contributor>El-Assady, Mennatallah</dc:contributor>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:abstract xml:lang="eng">Robustness to adversarial perturbations has been explored in many areas of computer vision. This robustness is particularly relevant in vision-based reinforcement learning, as the actions of autonomous agents might be safety-critical or impactful in the real world. We investigate the susceptibility of vision-based reinforcement learning agents to gradient-based adversarial attacks and evaluate a potential defense. We observe that Bottleneck Attention Modules (BAM) included in CNN architectures can act as potential tools to increase robustness against adversarial attacks. We show how learned attention maps can be used to recover the activations of a convolutional layer by restricting the spatial activations to salient regions. Across a number of RL environments, BAM-enhanced architectures show increased robustness during inference. Finally, we discuss potential future research directions.</dcterms:abstract>
    <dc:creator>Metz, Yannick</dc:creator>
    <dc:creator>Bykovets, Eugene</dc:creator>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-08-29T14:25:15Z</dcterms:available>
    <dcterms:title>BARReL : Bottleneck Attention for Adversarial Robustness in Vision-Based Reinforcement Learning</dcterms:title>
    <dc:language>eng</dc:language>
  </rdf:Description>
</rdf:RDF>
University bibliography
Yes