Dataset: Replication Data for: Improving Computer Vision Interpretability: Transparent Two-level Classification for Complex Scenes
Date of first publication
Authors
Other contributors
Repository of first publication
Dataset version
DOI (link to the data)
Link to the license
Research funding information
Project
Core Facility of the University of Konstanz
Title in another language
Publication status
Abstract
Treating images as data has become increasingly popular in political science. While existing classifiers for images reach high levels of accuracy, it is difficult to systematically assess the visual features on which they base their classification. This paper presents a two-level classification method that addresses this transparency problem. In the first stage, an image segmenter detects the objects present in the image and a feature vector is created from those objects. In the second stage, this feature vector is used as input for standard machine learning classifiers to discriminate between images. We apply this method to a new dataset of more than 140,000 images to detect which ones display political protest. This analysis demonstrates three advantages of this paper's approach. First, identifying objects in images improves transparency by providing human-understandable labels for the objects shown in an image. Second, knowing these objects enables analysis of which objects distinguish protest images from non-protest ones. Third, comparing the importance of objects across countries reveals how protest behavior varies. These insights are not available using conventional computer vision classifiers and provide new opportunities for comparative research.
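To make the two-level idea concrete, the minimal Python sketch below illustrates the pipeline described in the abstract: object labels from a segmenter are turned into a count-based feature vector, and a standard classifier is trained on those features so that feature importances map directly to named objects. The object vocabulary, the toy detections, and the random-forest choice are illustrative assumptions, not the replication code contained in this dataset.

    # Minimal two-stage sketch: object labels -> count features -> standard classifier.
    # The object vocabulary, the toy "segmenter" output, and the random-forest choice
    # are illustrative assumptions, not the authors' replication code.
    from collections import Counter
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Stage 1 (assumed): a segmenter would return the object labels found in each image.
    OBJECT_VOCAB = ["person", "flag", "banner", "police_van", "car", "building"]

    def to_feature_vector(detected_objects):
        """Turn a list of detected object labels into a fixed-length count vector."""
        counts = Counter(detected_objects)
        return np.array([counts.get(label, 0) for label in OBJECT_VOCAB], dtype=float)

    # Toy segmenter outputs for a handful of images (placeholders for real detections).
    images = [
        ["person", "person", "flag", "banner"],        # protest-like scene
        ["person", "banner", "police_van", "person"],  # protest-like scene
        ["car", "building", "person"],                 # everyday street scene
        ["building", "car", "car"],                    # everyday street scene
    ]
    labels = [1, 1, 0, 0]  # 1 = protest, 0 = non-protest

    X = np.vstack([to_feature_vector(objs) for objs in images])
    y = np.array(labels)

    # Stage 2: any standard, interpretable classifier on the object features.
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Because features are named objects, their importances are human-readable.
    for label, importance in zip(OBJECT_VOCAB, clf.feature_importances_):
        print(f"{label:12s} {importance:.3f}")

Any standard classifier (logistic regression, gradient boosting, etc.) could replace the random forest here; the point of the two-level design is that the features are named objects rather than raw pixels, so their importances can be read and compared directly.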
Abstract in another language
Subject area (DDC)
Keywords
Cite
ISO 690
SCHOLZ, Stefan, Nils B. WEIDMANN, Zachary C. STEINERT-THRELKELD, Eda KEREMOGLU, Bastian GOLDLÜCKE, 2024. Replication Data for: Improving Computer Vision Interpretability: Transparent Two-level Classification for Complex Scenes