KOPS - The Institutional Repository of the University of Konstanz

Analyzing Biases in Perception of Truth in News Stories and Their Implications for Fact Checking


BABAEI, Mahmoudreza, Juhi KULSHRESTHA, Abhijnan CHAKRABORTY, Elissa M. REDMILES, Meeyoung CHA, Krishna P. GUMMADI, 2022. Analyzing Biases in Perception of Truth in News Stories and Their Implications for Fact Checking. In: IEEE Transactions on Computational Social Systems. IEEE. 9(3), pp. 839-850. eISSN 2329-924X. Available under: doi: 10.1109/TCSS.2021.3096038

@article{Babaei2022Analy-56086, title={Analyzing Biases in Perception of Truth in News Stories and Their Implications for Fact Checking}, year={2022}, doi={10.1109/TCSS.2021.3096038}, number={3}, volume={9}, journal={IEEE Transactions on Computational Social Systems}, pages={839--850}, author={Babaei, Mahmoudreza and Kulshrestha, Juhi and Chakraborty, Abhijnan and Redmiles, Elissa M. and Cha, Meeyoung and Gummadi, Krishna P.} }

<rdf:RDF xmlns:dcterms="http://purl.org/dc/terms/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:bibo="http://purl.org/ontology/bibo/" xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:void="http://rdfs.org/ns/void#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > <rdf:Description rdf:about="https://kops.uni-konstanz.de/rdf/resource/123456789/56086"> <foaf:homepage rdf:resource="http://localhost:8080/jspui"/> <dcterms:title>Analyzing Biases in Perception of Truth in News Stories and Their Implications for Fact Checking</dcterms:title> <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/> <dc:creator>Babaei, Mahmoudreza</dc:creator> <dc:contributor>Redmiles, Elissa M.</dc:contributor> <dc:contributor>Babaei, Mahmoudreza</dc:contributor> <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/rdf/resource/123456789/42"/> <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/rdf/resource/123456789/42"/> <dc:creator>Redmiles, Elissa M.</dc:creator> <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/56086"/> <dc:creator>Chakraborty, Abhijnan</dc:creator> <dcterms:abstract xml:lang="eng">Misinformation on social media has become a critical problem, particularly during a public health pandemic. Most social platforms today rely on users' voluntary reports to determine which news stories to fact-check first. Despite the importance, no prior work has explored the potential biases in such a reporting process. This work proposes a novel methodology to assess how users perceive truth or misinformation in online news stories. By conducting a large-scale survey (N = 15,000), we identify the possible biases in news perceptions and explore how partisan leanings influence the news selection algorithm for fact checking. 
Our survey reveals several perception biases, i.e., inaccuracies in estimating the truth level of stories. The first kind, called the total perception bias (TPB), is the aggregate difference between the ground-truth and perceived truth levels. The next two are the false-positive bias (FPB) and false-negative bias (FNB), which measure users' gullibility toward and cynicism about a given claim. We also propose the ideological mean perception bias (IMPB), which quantifies a news story's ideological disputability. Collectively, these biases indicate that user perceptions are not correlated with the ground truth of news stories; users believe some false stories to be true and some true stories to be false. This calls for fact-checking the news stories that exhibit the largest perception biases first, which the current voluntary reporting process does not offer. Based on these observations, we propose a new framework that can best leverage users' truth perceptions to remove false stories, correct misperceptions of users, or decrease ideological disagreements. We discuss how this new prioritizing scheme can help platforms significantly reduce the impact of fake news on user beliefs.</dcterms:abstract> <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-01-10T09:46:06Z</dc:date> <dcterms:issued>2022</dcterms:issued> <dc:language>eng</dc:language> <dc:creator>Cha, Meeyoung</dc:creator> <dc:creator>Kulshrestha, Juhi</dc:creator> <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-01-10T09:46:06Z</dcterms:available> <dc:contributor>Cha, Meeyoung</dc:contributor> <dc:contributor>Kulshrestha, Juhi</dc:contributor> <dc:contributor>Chakraborty, Abhijnan</dc:contributor> <dc:contributor>Gummadi, Krishna P.</dc:contributor> <dc:creator>Gummadi, Krishna P.</dc:creator> </rdf:Description> </rdf:RDF>
