Adversarial Machine Learning for Protecting Against Online Manipulation
Files
No files are associated with this document.
Date
2022
Authors
Stefano Cresci, Marinella Petrocchi, Angelo Spognardi, Stefano Tognazzi
DOI (citable link)
https://doi.org/10.1109/MIC.2021.3130380
Publication type
Journal article
Publication status
Published
Published in
IEEE Internet Computing. IEEE. 2022, 26(2), pp. 47-52. ISSN 1089-7801. eISSN 1941-0131. Available under: doi: 10.1109/MIC.2021.3130380
Abstract
Adversarial examples are inputs to a machine learning system that cause it to produce an incorrect output. Attacks launched through this type of input can have severe consequences: in image recognition, for example, a stop sign can be misclassified as a speed limit sign. However, adversarial examples also fuel a flurry of research directions across domains and applications. Here, we give an overview of how they can be profitably exploited as powerful tools to build stronger learning models, capable of better withstanding attacks, for two crucial tasks: fake news detection and social bot detection.
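The abstract's image-domain example has a standard concrete form: perturb an input just enough to flip a classifier's prediction. Below is a minimal sketch of the fast gradient sign method in PyTorch; the helper fgsm_example, the model interface, and the epsilon budget are illustrative assumptions, not code from the article (which surveys adversarial techniques for fake news and social bot detection rather than supplying an implementation).

import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    # Hypothetical helper: craft an adversarial example with the fast
    # gradient sign method (FGSM). `model` is any differentiable image
    # classifier, `x` a batch of images in [0, 1], `y` the true labels.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # loss w.r.t. the true labels
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon,
    # so a correctly classified input (e.g., a stop sign) can tip into
    # a wrong class (e.g., a speed limit sign).
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

The defensive reuse the abstract describes typically works by folding such crafted inputs back into training (adversarial training), so the hardened detector learns to withstand the very perturbations an attacker would use.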
Subject (DDC)
570 Life sciences, biology
Cite
ISO 690
CRESCI, Stefano, Marinella PETROCCHI, Angelo SPOGNARDI, Stefano TOGNAZZI, 2022. Adversarial Machine Learning for Protecting Against Online Manipulation. In: IEEE Internet Computing. IEEE. 2022, 26(2), pp. 47-52. ISSN 1089-7801. eISSN 1941-0131. Available under: doi: 10.1109/MIC.2021.3130380

BibTeX
@article{Cresci2022Adver-57836,
  author  = {Cresci, Stefano and Petrocchi, Marinella and Spognardi, Angelo and Tognazzi, Stefano},
  title   = {Adversarial Machine Learning for Protecting Against Online Manipulation},
  journal = {IEEE Internet Computing},
  year    = {2022},
  volume  = {26},
  number  = {2},
  pages   = {47--52},
  issn    = {1089-7801},
  doi     = {10.1109/MIC.2021.3130380}
}
RDF
<rdf:RDF xmlns:dcterms="http://purl.org/dc/terms/"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:bibo="http://purl.org/ontology/bibo/"
         xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/"
         xmlns:void="http://rdfs.org/ns/void#"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema#">
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/57836">
    <dcterms:title>Adversarial Machine Learning for Protecting Against Online Manipulation</dcterms:title>
    <dcterms:issued>2022</dcterms:issued>
    <dc:creator>Cresci, Stefano</dc:creator>
    <dc:creator>Petrocchi, Marinella</dc:creator>
    <dc:creator>Spognardi, Angelo</dc:creator>
    <dc:creator>Tognazzi, Stefano</dc:creator>
    <dc:contributor>Cresci, Stefano</dc:contributor>
    <dc:contributor>Petrocchi, Marinella</dc:contributor>
    <dc:contributor>Spognardi, Angelo</dc:contributor>
    <dc:contributor>Tognazzi, Stefano</dc:contributor>
    <dc:language>eng</dc:language>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-06-23T13:50:39Z</dcterms:available>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-06-23T13:50:39Z</dc:date>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43615"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43615"/>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/57836"/>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dcterms:abstract xml:lang="eng">Adversarial examples are inputs to a machine learning system that result in an incorrect output from that system. Attacks launched through this type of input can cause severe consequences: for example, in the field of image recognition, a stop signal can be misclassified as a speed limit indication. However, adversarial examples also represent the fuel for a flurry of research directions in different domains and applications. Here, we give an overview of how they can be profitably exploited as powerful tools to build stronger learning models, capable of better-withstanding attacks, for two crucial tasks: fake news and social bot detection.</dcterms:abstract>
  </rdf:Description>
</rdf:RDF>
University bibliography
No
Peer-reviewed
Unknown