Replication Data for: Improving Computer Vision Interpretability: Transparent Two-level Classification for Complex Scenes

creativework.versionV1
dc.contributor.authorScholz, Stefan
dc.contributor.authorWeidmann, Nils B.
dc.contributor.authorSteinert-Threlkeld, Zachary C.
dc.contributor.authorKeremoglu, Eda
dc.contributor.authorGoldlücke, Bastian
dc.date.accessioned2025-02-28T11:59:05Z
dc.date.available2025-02-28T11:59:05Z
dc.date.created2024-05-29T21:25:35.000Z
dc.date.issued2024
dc.description.abstractTreating images as data has become increasingly popular in political science. While existing classifiers for images reach high levels of accuracy, it is difficult to systematically assess the visual features on which they base their classification. This paper presents a two-level classification method that addresses this transparency problem. In the first stage, an image segmenter detects the objects present in the image, and a feature vector is created from those objects. In the second stage, this feature vector is used as input for standard machine learning classifiers to discriminate between images. We apply this method to a new dataset of more than 140,000 images to detect which ones display political protest. This analysis demonstrates three advantages of this paper's approach. First, identifying objects in images improves transparency by providing human-understandable labels for the objects shown in an image. Second, knowing these objects enables analysis of which objects distinguish protest images from non-protest ones. Third, comparing the importance of objects across countries reveals how protest behavior varies. These insights are not available using conventional computer vision classifiers and provide new opportunities for comparative research.
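The abstract describes a two-stage pipeline: an image segmenter first reduces each image to the objects it contains, and a standard classifier is then trained on that object-level feature vector. The following minimal Python sketch illustrates the idea only; the object vocabulary, the toy segmenter outputs, and the choice of a random forest are illustrative assumptions and are not taken from this replication package.

# Minimal sketch of the two-level idea (assumed details; not the authors' actual code).
from sklearn.ensemble import RandomForestClassifier

# Assumed object vocabulary; the paper's segmenter uses its own set of classes.
OBJECTS = ["person", "flag", "sign", "police", "fire", "car"]

def to_features(detected):
    # Stage 1 result: reduce one image to a count vector over the object vocabulary.
    return [detected.count(obj) for obj in OBJECTS]

# Toy segmenter outputs with protest labels (1 = protest, 0 = non-protest).
images = [["person", "person", "sign", "flag"],
          ["car", "person"],
          ["person", "police", "fire", "sign"],
          ["car", "car"]]
labels = [1, 0, 1, 0]

# Stage 2: a standard, inspectable classifier on the object features.
X = [to_features(img) for img in images]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Feature importances map back to human-readable object labels,
# which is the transparency benefit the abstract highlights.
for obj, imp in zip(OBJECTS, clf.feature_importances_):
    print(f"{obj}: {imp:.2f}")

Because the features are named objects rather than opaque pixel activations, the same importance reading can be compared across subsets of the data, which is how the abstract motivates the cross-country comparison of protest behavior.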
dc.description.versionpublished
dc.identifier.doi10.7910/dvn/tftef2
dc.identifier.urihttps://kops.uni-konstanz.de/handle/123456789/72502
dc.language.isoeng
dc.subjectSocial Sciences
dc.subjectimage analysis
dc.subjectcomputer vision
dc.subjectexplainable AI
dc.subjecttwo-level classification
dc.subjectprotest analysis
dc.subject.ddc004
dc.titleReplication Data for: Improving Computer Vision Interpretability: Transparent Two-level Classification for Complex Scenes
dspace.entity.typeDataset
kops.citation.iso690SCHOLZ, Stefan, Nils B. WEIDMANN, Zachary C. STEINERT-THRELKELD, Eda KEREMOGLU, Bastian GOLDLÜCKE, 2024. Replication Data for: Improving Computer Vision Interpretability: Transparent Two-level Classification for Complex Scenes
kops.citation.rdf
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/72502">
    <dcterms:issued>2024</dcterms:issued>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/71935"/>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/72502"/>
    <dc:creator>Keremoglu, Eda</dc:creator>
    <dcterms:created rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-05-29T21:25:35.000Z</dcterms:created>
    <dc:contributor>Goldlücke, Bastian</dc:contributor>
    <dc:creator>Scholz, Stefan</dc:creator>
    <dcterms:abstract>Treating images as data has become increasingly popular in political science. While existing classifiers for images reach high levels of accuracy, it is difficult to systematically assess the visual features on which they base their classification. This paper presents a two-level classification method that addresses this transparency problem. In the first stage, an image segmenter detects the objects present in the image, and a feature vector is created from those objects. In the second stage, this feature vector is used as input for standard machine learning classifiers to discriminate between images. We apply this method to a new dataset of more than 140,000 images to detect which ones display political protest. This analysis demonstrates three advantages of this paper's approach. First, identifying objects in images improves transparency by providing human-understandable labels for the objects shown in an image. Second, knowing these objects enables analysis of which objects distinguish protest images from non-protest ones. Third, comparing the importance of objects across countries reveals how protest behavior varies. These insights are not available using conventional computer vision classifiers and provide new opportunities for comparative research.</dcterms:abstract>
    <dc:contributor>Weidmann, Nils B.</dc:contributor>
    <dc:contributor>Steinert-Threlkeld, Zachary C.</dc:contributor>
    <dcterms:title>Replication Data for: Improving Computer Vision Interpretability: Transparent Two-level Classification for Complex Scenes</dcterms:title>
    <dc:creator>Goldlücke, Bastian</dc:creator>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2025-02-28T11:59:05Z</dc:date>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2025-02-28T11:59:05Z</dcterms:available>
    <dc:creator>Weidmann, Nils B.</dc:creator>
    <dc:contributor>Keremoglu, Eda</dc:contributor>
    <dc:language>eng</dc:language>
    <dc:contributor>Scholz, Stefan</dc:contributor>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/71935"/>
    <dc:creator>Steinert-Threlkeld, Zachary C.</dc:creator>
  </rdf:Description>
</rdf:RDF>
kops.datacite.repositoryHarvard Dataverse
kops.flag.knbibliographytrue
relation.isAuthorOfDatasetf57a4611-18be-4b95-8705-4e97c85ec9e8
relation.isAuthorOfDataset0d17e0e1-ceb4-4f29-b742-158f78d0aa95
relation.isAuthorOfDatasetb4246e8a-c152-46b8-925a-04b1ed022e8a
relation.isAuthorOfDatasetc4ecb499-9c85-4481-832e-af061f18cbdc
relation.isAuthorOfDataset.latestForDiscoveryf57a4611-18be-4b95-8705-4e97c85ec9e8
relation.isPublicationOfDataseta99cda33-2332-45b7-993f-157605d1d1ef
relation.isPublicationOfDataset.latestForDiscoverya99cda33-2332-45b7-993f-157605d1d1ef