Publication:

Biased Machines in the Realm of Politics

Files

Roth_2-1pnwchwzwnqav4.pdf (Size: 4.64 MB, Downloads: 130)

Date

2022

Authors
Roth, Simon

Open Access publication
Open Access Green

Publication type
Dissertation
Publication status
Published

Abstract

This dissertation addresses one of the most serious risks associated with automated decision-making: bias. Bias is not a new phenomenon, and human decisions have always been biased, but automated decision-making multiplies the risks in many ways. The main challenges are: How can we detect biases? Who should be held accountable for biased predictions? And how can biases be mitigated or corrected? The three studies in this dissertation help answer these questions by emphasizing the importance of monitoring our own machine learning (ML) pipelines, auditing third-party prediction systems, and exposing the potential for abuse of predictive algorithms when they are given sensitive data.

The first paper (section 2) addresses the question of how to direct ML users to high-performing, robust, and fair models. ML systems have been shown to harm human lives through discrimination, distortion, exploitation, or misjudgment. Although bias is often associated with malicious behavior, this is not always the case. Inductive biases, for example, such as knowledge about parameter ranges or prior distributions, can help stabilize model optimization. Furthermore, decomposing prediction error into statistical bias and variance allows for model selection with minimal future risk. Since "all models are wrong, but some are useful", we should analyze as many biases in ML as feasible before putting faith in our predictions.
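
To make the bias-variance reasoning concrete, here is a minimal sketch, not taken from the dissertation; the synthetic data, noise level, and tree depth are assumptions. It estimates the squared bias and the variance of a regression model by refitting it on repeatedly re-drawn noisy training labels.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def f(x):
    return np.sin(3 * x)  # assumed "true" function generating the data

x_train = rng.uniform(-1, 1, size=(200, 1))
x_test = np.linspace(-1, 1, 100).reshape(-1, 1)

preds = []
for _ in range(200):  # refit on freshly drawn noisy labels
    y_train = f(x_train).ravel() + rng.normal(0, 0.3, size=200)
    model = DecisionTreeRegressor(max_depth=4).fit(x_train, y_train)
    preds.append(model.predict(x_test))
preds = np.asarray(preds)

bias_sq = np.mean((preds.mean(axis=0) - f(x_test).ravel()) ** 2)
variance = np.mean(preds.var(axis=0))
print(f"bias^2 = {bias_sq:.3f}, variance = {variance:.3f}")
# Expected squared error ~ bias^2 + variance + irreducible noise (0.3^2 here).

Repeating this estimate for several model complexities (for example, different tree depths) is one way to select the model with the smallest expected future error.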

The second paper (section 3) addresses the question of how to audit recommender bias on social media. The goal of this experiment is to quantify the causes of algorithmic filter bubbles by analyzing amplification bias in Twitter's recommender system. By simulating human behavior with bots, we show that 'filter bubbles' exist and that they add an additional layer of bias on top of 'echo chambers'. More precisely, the algorithm responded far more strongly to bots that actively engaged with content than to bots that merely followed human accounts. This demonstrates that the Twitter algorithm depends heavily on human interactions to adapt to its users' preferences. This has serious consequences, since users may be unaware of the large personalization bias introduced when they like or share content.
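
The logic of such an audit can be sketched roughly as follows; this is an illustrative outline, not the study's actual pipeline, and the group sizes and the synthetic per-bot 'personalization score' are assumptions. Each bot receives a score measuring how strongly its recommendations align with the content it was seeded with, and the two bot conditions are then compared with a non-parametric test.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)

# Synthetic per-bot personalization scores (values invented), e.g. the share
# of recommended tweets aligned with the content each bot was seeded with.
passive_bots = rng.beta(2, 8, size=30)  # bots that only follow accounts
active_bots = rng.beta(5, 5, size=30)   # bots that also like / retweet content

# One-sided test: do actively engaging bots receive more aligned recommendations?
stat, p = mannwhitneyu(active_bots, passive_bots, alternative="greater")
print(f"median passive = {np.median(passive_bots):.2f}, "
      f"median active = {np.median(active_bots):.2f}, p = {p:.4g}")

A markedly higher score for the engaging bots, as reported in the study, indicates that the recommender amplifies content mainly in response to explicit interactions such as likes and retweets.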

The third paper (section 4) addresses the question of whether online communication is predictive of offline political behavior. Using a unique dataset of thousands of ordinary citizens that combines their Twitter statuses with public US voter registration files, we can predict a person's party affiliation and turnout likelihood with fair accuracy. Our results show that social media communication is sufficiently biased to reveal the attitudes and real-world political behavior of an average person. We demonstrate how political, commercial, or bad-faith actors, and not just researchers, could acquire such sensitive data to build prediction models, for example to influence a customer's retail journey or, perhaps worse, to discourage people from voting at scale.
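
A minimal sketch of this kind of prediction model follows; it is illustrative only, and the toy tweets, party labels, and bag-of-words classifier are assumptions rather than the paper's actual data or features.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented placeholder tweets and party labels standing in for the matched
# Twitter / voter-file data described above.
tweets = [
    "we need to secure the border and cut taxes",
    "healthcare is a human right, expand coverage for everyone",
    "protect the second amendment and support our police",
    "we must act on climate change now",
]
party = ["R", "D", "R", "D"]

# Text classifier: tf-idf features followed by logistic regression.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, party)

# Predicted party label and class probabilities for an unseen tweet.
print(clf.predict(["lower taxes for small businesses"]))
print(clf.predict_proba(["lower taxes for small businesses"]))

Trained on real matched data, the same kind of pipeline could be built by the commercial or bad-faith actors mentioned above, which is exactly the risk the paper highlights.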

Biases can limit the potential of ML for business and society by cultivating distrust and delivering distorted or discriminatory results. However, if our societies can (1) implement effective data privacy regulations, (2) require internal debiasing steps and encourage external, independent auditing, (3) educate the broader public about biases and ways to report them, and (4) invest in training interdisciplinary computational scientists, we may be better prepared for the negative consequences of the next industrial revolution.

Subject (DDC)
320 Politics

Cite

ISO 690: ROTH, Simon, 2022. Biased Machines in the Realm of Politics [Dissertation]. Konstanz: University of Konstanz
BibTeX
@phdthesis{Roth2022Biase-66600,
  year={2022},
  title={Biased Machines in the Realm of Politics},
  author={Roth, Simon},
  address={Konstanz},
  school={Universität Konstanz}
}

Date of doctoral examination
December 1, 2022
University thesis note
Konstanz, Univ., Diss., 2022