Analyzing Biases in Perception of Truth in News Stories and Their Implications for Fact Checking
Abstract
Misinformation on social media has become a critical problem, particularly during a public health pandemic. Most social platforms today rely on users' voluntary reports to determine which news stories to fact-check first. Despite its importance, no prior work has explored the potential biases in such a reporting process. This work proposes a novel methodology to assess how users perceive truth or misinformation in online news stories. By conducting a large-scale survey (N = 15,000), we identify possible biases in news perceptions and explore how partisan leanings influence the news selection algorithm for fact checking. Our survey reveals several perception biases, or inaccuracies in estimating the truth level of stories. The first kind, called the total perception bias (TPB), is the aggregate difference between the ground truth and the perceived truth level. The next two are the false-positive bias (FPB) and the false-negative bias (FNB), which measure users' gullibility toward and cynicism about a given claim. We also propose the ideological mean perception bias (IMPB), which quantifies a news story's ideological disputability. Collectively, these biases indicate that user perceptions are not correlated with the ground truth of news stories; users believe some true stories to be false and vice versa. This calls for fact-checking the news stories that exhibit the largest perception biases first, which current voluntary reporting does not offer. Based on these observations, we propose a new framework that can best leverage users' truth perceptions to remove false stories, correct users' misperceptions, or reduce ideological disagreements. We discuss how this new prioritization scheme can help platforms significantly reduce the impact of fake news on user beliefs.
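The abstract describes the four bias measures only at a high level. The following is a minimal sketch of how such quantities might be computed from survey responses, assuming a 0-1 perceived-truth scale, one fact-checker ground-truth label per story, and self-reported partisan leanings; the function, rating scale, and aggregation choices are illustrative assumptions, not the paper's exact definitions.

# Hypothetical sketch: perception-bias style measures for a single story.
# Assumes each respondent rated the story's truthfulness on a 0-1 scale and
# reported a partisan leaning; the paper's exact formulas may differ.

from statistics import mean

def perception_biases(ground_truth, ratings_by_leaning):
    """ground_truth: 0.0 (false) .. 1.0 (true), from fact checkers.
    ratings_by_leaning: dict mapping a leaning (e.g. 'left', 'right')
    to a list of perceived-truth ratings in [0, 1]."""
    all_ratings = [r for group in ratings_by_leaning.values() for r in group]
    perceived = mean(all_ratings)

    # Total perception bias: aggregate gap between perceived and actual truth.
    tpb = abs(perceived - ground_truth)

    # False-positive bias (gullibility): believing the story more than warranted.
    fpb = max(perceived - ground_truth, 0.0)

    # False-negative bias (cynicism): believing the story less than warranted.
    fnb = max(ground_truth - perceived, 0.0)

    # Ideological mean perception bias: spread between ideological groups' means.
    group_means = [mean(group) for group in ratings_by_leaning.values() if group]
    impb = max(group_means) - min(group_means) if len(group_means) > 1 else 0.0

    return {"TPB": tpb, "FPB": fpb, "FNB": fnb, "IMPB": impb}

# Example: a mostly true story that right-leaning respondents rate as less true.
print(perception_biases(0.8, {"left": [0.9, 0.7, 0.8], "right": [0.4, 0.5, 0.3]}))

Under this reading, the proposed prioritization framework would amount to ranking stories by one of these scores, for example by TPB or FPB to correct misperceptions and remove false stories, or by IMPB to reduce ideological disagreement.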
Cite
ISO 690
BABAEI, Mahmoudreza, Juhi KULSHRESTHA, Abhijnan CHAKRABORTY, Elissa M. REDMILES, Meeyoung CHA, Krishna P. GUMMADI, 2022. Analyzing Biases in Perception of Truth in News Stories and Their Implications for Fact Checking. In: IEEE Transactions on Computational Social Systems. IEEE. 2022, 9(3), pp. 839-850. eISSN 2329-924X. Available under: doi: 10.1109/TCSS.2021.3096038
BibTeX
@article{Babaei2022Analy-56086,
  author  = {Babaei, Mahmoudreza and Kulshrestha, Juhi and Chakraborty, Abhijnan and Redmiles, Elissa M. and Cha, Meeyoung and Gummadi, Krishna P.},
  title   = {Analyzing Biases in Perception of Truth in News Stories and Their Implications for Fact Checking},
  journal = {IEEE Transactions on Computational Social Systems},
  year    = {2022},
  volume  = {9},
  number  = {3},
  pages   = {839--850},
  doi     = {10.1109/TCSS.2021.3096038}
}