Person: Gipp, Bela

Search Results

Now showing 1 - 10 of 114
Publication

Do the Math : Making Mathematics in Wikipedia Computable

2023, Greiner-Petter, Andre, Schubotz, Moritz, Breitinger, Corinna, Scharpf, Philipp, Aizawa, Akiko, Gipp, Bela

Wikipedia combines the power of AI solutions and human reviewers to safeguard article quality. Quality control objectives include detecting malicious edits, fixing typos, and spotting inconsistent formatting. However, no automated quality control mechanisms currently exist for mathematical formulae. Spell checkers are widely used to highlight textual errors, yet no equivalent tool exists to detect algebraically incorrect formulae. Our paper addresses this shortcoming by making mathematical formulae computable. We present a method that (1) gathers the semantic information from the context surrounding each mathematical formula, (2) provides access to this information in a graph-structured dependency hierarchy, and (3) performs automatic plausibility checks on equations. We evaluate the performance of our approach on 6,337 mathematical expressions contained in 104 Wikipedia articles on the topic of orthogonal polynomials and special functions. Our system, LACAST, verified 358 out of 1,516 equations as error-free. LACAST successfully translated 27% of the mathematical expressions and outperformed existing translation approaches by 16%. Additionally, LACAST achieved an F1 score of .495 for annotating mathematical expressions with relevant textual descriptions, which is a significant step towards advancing the searchability, readability, and accessibility of mathematical formulae in Wikipedia. A prototype of LACAST and the semantically enhanced Wikipedia articles are available at: https://tpami.wmflabs.org.
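
A minimal sketch of the plausibility-check idea from step (3), assuming sympy and a numeric sampling strategy; the function, sample range, and tolerance are illustrative, not LACAST's actual implementation:

import random
import sympy as sp

def plausibility_check(equation, symbols, trials=10, tol=1e-9):
    # Numerically test whether lhs == rhs at random sample points.
    lhs, rhs = equation.lhs, equation.rhs
    for _ in range(trials):
        point = {s: random.uniform(0.1, 2.0) for s in symbols}
        diff = complex(sp.N((lhs - rhs).subs(point)))
        if abs(diff) > tol:
            return False  # counterexample found: equation is implausible
    return True  # no counterexample: equation passes the plausibility check

x = sp.Symbol('x')
eq = sp.Eq(sp.sin(x)**2 + sp.cos(x)**2, 1)  # should verify as error-free
print(plausibility_check(eq, [x]))  # True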

Publication

Mining mathematical documents for question answering via unsupervised formula labeling

2022, Scharpf, Philipp, Schubotz, Moritz, Gipp, Bela

The increasing number of questions on Question Answering (QA) platforms like Math Stack Exchange (MSE) signifies a growing information need to answer math-related questions. However, there is currently very little research on approaches for an open data QA system that retrieves mathematical formulae using their concept names or that queries formula identifier relationships from knowledge graphs. In this paper, we aim to bridge this gap by presenting data mining methods and benchmark results that employ Mathematical Entity Linking (MathEL) and Unsupervised Formula Labeling (UFL) for semantic formula search and mathematical question answering (MathQA) on the arXiv preprint repository, Wikipedia, and Wikidata. The new methods extend our previously introduced system, which is part of the Wikimedia ecosystem of free knowledge. Based on different types of information needs, we evaluate our system in 15 information need modes, assessing over 7,000 query results. Furthermore, we compare its performance to a commercial knowledge base and calculation engine (Wolfram Alpha) and a search engine (Google). The open source system is hosted by Wikimedia at https://mathqa.wmflabs.org. A demo video is available at purl.org/mathqa.
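
One building block of such a system, retrieving a formula from Wikidata by concept name, can be sketched as follows; P2534 ("defining formula") is a real Wikidata property, but the query shape is an assumption, not the paper's code:

import requests

query = '''
SELECT ?item ?formula WHERE {
  ?item rdfs:label "Pythagorean theorem"@en ;
        wdt:P2534 ?formula .
}
'''
r = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "mathqa-sketch/0.1"},
)
for row in r.json()["results"]["bindings"]:
    print(row["formula"]["value"])  # the defining formula (MathML)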

Publication

Automated identification of bias inducing words in news articles using linguistic and context-oriented features

2021-05, Spinde, Timo, Rudnitckaia, Lada, Mitrović, Jelena, Hamborg, Felix, Granitzer, Michael, Gipp, Bela, Donnay, Karsten

Media has a substantial impact on public perception of events, and, accordingly, the way media presents events can potentially alter the beliefs and views of the public. One of the ways in which bias in news articles can be introduced is by altering word choice. Such a form of bias is very challenging to identify automatically due to the high context-dependence and the lack of a large-scale gold-standard data set. In this paper, we present a prototypical yet robust and diverse data set for media bias research. It consists of 1,700 statements representing various media bias instances and contains labels for media bias identification on the word and sentence level. In contrast to existing research, our data incorporate background information on the participants' demographics, political ideology, and their opinion about media in general. Based on our data, we also present a way to detect bias-inducing words in news articles automatically. Our approach is feature-oriented, which provides strong descriptive and explanatory power compared to deep learning techniques. We identify and engineer various linguistic, lexical, and syntactic features that can potentially serve as media bias indicators. To the best of our knowledge, our resource collection is the most complete within the media bias research area. We evaluate all of our features in various combinations and assess their potential importance both for future research and for the task in general. We also evaluate various possible Machine Learning approaches with all of our features. XGBoost, a decision tree implementation, yields the best results. Our approach achieves an F1 score of 0.43, a precision of 0.29, a recall of 0.77, and a ROC AUC of 0.79, which outperforms current feature-based media bias detection methods. We propose future improvements and discuss the perspectives of the feature-based approach, as well as a combination of neural networks and deep learning with our current system.
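
A minimal sketch of the feature-oriented setup described above: hand-crafted word-level features fed to XGBoost. The feature names and toy data are illustrative assumptions, not the paper's feature set:

import numpy as np
from xgboost import XGBClassifier

# Each row: [in_bias_lexicon, sentiment_score, is_adjective, tf_idf]
X = np.array([
    [1, -0.8, 1, 0.42],  # e.g., "disastrous" -> biased
    [0,  0.0, 0, 0.10],  # e.g., "said"       -> neutral
    [1, -0.5, 1, 0.33],  # e.g., "radical"    -> biased
    [0,  0.1, 0, 0.08],  # e.g., "announced"  -> neutral
])
y = np.array([1, 0, 1, 0])

clf = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
clf.fit(X, y)
print(clf.predict(X))  # word-level bias labels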

Publication

Do You Think It's Biased? : How To Ask For The Perception Of Media Bias

2021, Spinde, Timo, Kreuter, Christina, Gaissmaier, Wolfgang, Hamborg, Felix, Gipp, Bela, Giese, Helge

Media coverage possesses a substantial effect on the public perception of events. The way media frames events can significantly alter the beliefs and perceptions of our society. Nevertheless, nearly all media outlets are known to report news in a biased way. While such bias can be introduced by altering the word choice or omitting information, the perception of bias also varies largely depending on a reader's personal background. Therefore, media bias is a very complex construct to identify and analyze. Even though media bias has been the subject of many studies, previous assessment strategies are oversimplified and lack overlap and empirical evaluation. Thus, this study aims to develop a scale that can be used as a reliable standard to evaluate article bias. To name an example: intending to measure bias in a news article, should we ask, "How biased is the article?", or should we instead ask, "How did the article treat the American president?". We conducted a literature search to find 824 relevant questions about text perception in previous research on the topic. In a multi-iterative process, we summarized and condensed these questions semantically to arrive at a complete and representative set of possible question types about bias. The final set consisted of 25 questions with varying answer formats, 17 questions using semantic differentials, and six ratings of feelings. We tested each of the questions on 190 articles with 663 participants overall to identify how well the questions measure an article's perceived bias. Our results show that 21 final items are suitable and reliable for measuring the perception of media bias. We publish the final set of questions at http://bias-question-tree.gipplab.org/.
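
To illustrate how the reliability of such a question set can be quantified, here is a minimal sketch using Cronbach's alpha, a standard reliability measure; the ratings matrix is invented, and the paper does not specify this exact computation:

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, n_items) matrix of ratings.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

ratings = np.array([[4, 5, 4], [2, 2, 3], [5, 4, 5], [1, 2, 1]])
print(round(cronbach_alpha(ratings), 2))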

Publication

XCoref: Cross-document Coreference Resolution in the Wild

2022, Zhukova, Anastasia, Hamborg, Felix, Donnay, Karsten, Gipp, Bela

Datasets and methods for cross-document coreference resolution (CDCR) focus on events or entities with strict coreference relations. However, they lack annotations for, and methods to resolve, coreferential mentions with more abstract or loose relations that may occur when news articles report about controversial and polarized events. Bridging and loose coreference relations trigger associations that may expose news readers to bias by word choice and labeling. For example, coreferential mentions of "direct talks between U.S. President Donald Trump and Kim" such as "an extraordinary meeting following months of heated rhetoric" or "great chance to solve a world problem" form a more positive perception of this event. A step towards raising awareness of bias by word choice and labeling is the reliable resolution of coreferences with high lexical diversity. We propose XCoref, an unsupervised CDCR method that capably resolves not only previously prevalent entities, such as persons, e.g., "Donald Trump," but also abstractly defined concepts, such as groups of persons, "caravan of immigrants," and events and actions, e.g., "marching to the U.S. border." In an extensive evaluation, we compare the proposed XCoref to a state-of-the-art CDCR method and to TCA, a previous method that resolves such complex coreference relations, and find that XCoref outperforms these methods. Outperforming an established CDCR model shows that new CDCR models need to be evaluated on semantically complex mentions with looser coreference relations to indicate their applicability to resolving mentions in the "wild" of political news articles.
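
The core intuition, clustering lexically diverse mentions by semantic similarity, can be sketched as follows; the embedding model and distance threshold are assumptions, not XCoref itself:

from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

mentions = [
    "direct talks between U.S. President Donald Trump and Kim",
    "an extraordinary meeting following months of heated rhetoric",
    "great chance to solve a world problem",
    "caravan of immigrants",
    "marching to the U.S. border",
]
embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(mentions)
labels = AgglomerativeClustering(
    n_clusters=None, distance_threshold=1.0,
    metric="cosine", linkage="average",
).fit_predict(embeddings)
print(labels)  # mentions sharing a label are treated as coreferential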

Publication

Exploiting Transformer-Based Multitask Learning for the Detection of Media Bias in News Articles

2022, Spinde, Timo, Krieger, Jan-David, Ruas, Terry, Mitrović, Jelena, Götz-Hahn, Franz, Aizawa, Akiko, Gipp, Bela

Media has a substantial impact on the public perception of events. A one-sided or polarizing perspective on any topic is usually described as media bias. One of the ways in which bias in news articles can be introduced is by altering word choice. Biased word choices are not always obvious, nor do they exhibit high context-dependency. Hence, detecting bias is often difficult. We propose a Transformer-based deep learning architecture trained via Multi-Task Learning using six bias-related data sets to tackle the media bias detection problem. Our best-performing implementation achieves a macro F1 of 0.776, a performance boost of 3% compared to our baseline, outperforming existing methods. Our results indicate that Multi-Task Learning is a promising alternative for improving existing baseline models in identifying slanted reporting.
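
A minimal sketch of the multi-task architecture: one shared transformer encoder with a classification head per bias-related data set. The base model and head sizes are assumptions, not the paper's exact configuration:

import torch.nn as nn
from transformers import AutoModel

class MultiTaskBiasModel(nn.Module):
    def __init__(self, model_name="distilbert-base-uncased", n_tasks=6):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)  # shared encoder
        hidden = self.encoder.config.hidden_size
        # One binary classification head per task/data set.
        self.heads = nn.ModuleList(nn.Linear(hidden, 2) for _ in range(n_tasks))

    def forward(self, input_ids, attention_mask, task_id):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # first-token pooled representation
        return self.heads[task_id](cls)    # logits for the selected task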

Publication

Media Bias in German News Articles : A Combined Approach

2021-02-02, Spinde, Timo, Hamborg, Felix, Gipp, Bela

Slanted news coverage, also called media bias, can heavily influence how news consumers interpret and react to the news. Models to identify and describe biases have been proposed across various scientific fields, focusing mostly on English media. In this paper, we propose a method for analyzing media bias in German media. We test different natural language processing techniques and combinations thereof. Specifically, we combine an IDF-based component, a specially created bias lexicon, and a linguistic lexicon. We also flexibly extend our lexica using word embeddings. We evaluate the system and methods in a survey (N = 46), comparing the bias words our system detected to human annotations. So far, the best component combination results in an F1 score of 0.31 for words identified as biased by both our system and our study participants. The low performance shows that the analysis of media bias is still a difficult task, but using fewer resources, we achieved the same performance on the same task as recent research on English. We summarize the next steps for improving the resources and the overall results.
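
A minimal sketch of the lexicon-extension step via word embeddings; the seed words and vector file are illustrative assumptions, not the paper's resources:

from gensim.models import KeyedVectors

bias_lexicon = {"skandalös", "radikal", "chaotisch"}
vectors = KeyedVectors.load_word2vec_format("german_vectors.bin", binary=True)

expanded = set(bias_lexicon)
for word in bias_lexicon:
    if word in vectors:
        # Add each seed word's nearest embedding neighbours to the lexicon.
        expanded.update(w for w, _ in vectors.most_similar(word, topn=5))
print(sorted(expanded))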

Publication

A Domain-adaptive Pre-training Approach for Language Bias Detection in News

2022, Krieger, Jan-David, Spinde, Timo, Ruas, Terry, Kulshrestha, Juhi, Gipp, Bela

Media bias is a multi-faceted construct influencing individual behavior and collective decision-making. Slanted news reporting is the result of one-sided and polarized writing, which can occur in various forms. In this work, we focus on an important form of media bias, i.e., bias by word choice. Detecting biased word choices is a challenging task due to its linguistic complexity and the lack of representative gold-standard corpora. We present DA-RoBERTa, a new state-of-the-art transformer-based model adapted to the media bias domain, which identifies sentence-level bias with an F1 score of 0.814. In addition, we also train DA-BERT and DA-BART, two more transformer models adapted to the bias domain. Our proposed domain-adapted models outperform prior bias detection approaches on the same data.
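
A minimal sketch of domain-adaptive pre-training with Hugging Face tooling: continue masked language modeling on in-domain news text before fine-tuning for sentence-level bias classification. The corpus file and hyperparameters are assumptions:

from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

ds = load_dataset("text", data_files={"train": "news_domain_corpus.txt"})
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=128),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="da-roberta", num_train_epochs=1),
    train_dataset=ds["train"],
    data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
)
trainer.train()  # afterwards, fine-tune the adapted encoder for bias detection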

Publication

Collaborative and AI-aided Exam Question Generation using Wikidata in Education

2022, Scharpf, Philipp, Schubotz, Moritz, Spitz, Andreas, Greiner-Petter, Andre, Gipp, Bela

Since the COVID-19 outbreak, the use of digital learning or education platforms has substantially increased. Teachers now digitally distribute homework and provide exercise questions. In both cases, teachers need to continuously develop novel and individual questions. This process can be very time-consuming and should be facilitated and accelerated both through exchange with other teachers and by using Artificial Intelligence (AI) capabilities. To address this need, we propose a multilingual Wikimedia framework that allows for collaborative worldwide teacher knowledge engineering and subsequent AI-aided question generation, testing, and correction. As a proof of concept, we present »PhysWikiQuiz«, a physics question generation and test engine. Our system (hosted by Wikimedia at https://physwikiquiz.wmflabs.org) retrieves physics knowledge from the open community-curated database Wikidata. It can generate questions in different variations and verify answer values and units using a Computer Algebra System (CAS). We evaluate the performance on a public benchmark dataset at each stage of the system workflow. For an average formula with three variables, the system can generate and correct up to 300 questions for individual students, based on a single formula concept name given as input by the teacher.
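
A minimal sketch of the generate-and-verify idea: solve a physics formula for one variable with a CAS and check an answer numerically. The formula and helper names are illustrative, not PhysWikiQuiz code:

import sympy as sp

F, m, a = sp.symbols('F m a', positive=True)
formula = sp.Eq(F, m * a)  # e.g., retrieved for the concept "force"

def expected_answer(solve_for, given_values):
    # Solve the formula for the asked variable and plug in the given values.
    expr = sp.solve(formula, solve_for)[0]
    return expr.subs(given_values)

expected = expected_answer(a, {F: 10, m: 2})
student_answer = 5
print(sp.simplify(expected - student_answer) == 0)  # True: answer is correct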

Publication

ANEA: Automated (Named) Entity Annotation for German Domain-Specific Texts

2021, Zhukova, Anastasia, Hamborg, Felix, Gipp, Bela

Named entity recognition (NER) is an important task that aims to resolve universal categories of named entities, e.g., persons, locations, organizations, and times. Despite its widespread applicability, NER is barely usable in domains where the general categories are suboptimal, such as engineering or medicine. To facilitate NER with domain-specific types, we propose ANEA, an automated (named) entity annotator that assists human annotators in creating domain-specific NER corpora for German text collections when given a set of domain-specific texts. In our evaluation, we find that ANEA automatically identifies terms that best represent the texts' content, identifies groups of coherent terms, and extracts and assigns descriptive labels to these groups, i.e., it annotates text datasets with domain-specific (named) entities.
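
A minimal sketch of the first pipeline step, extracting terms that best represent a domain corpus, here approximated with aggregate TF-IDF scores; the toy German corpus is an assumption, and term grouping and label assignment are omitted:

from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Der Motor treibt die Hydraulikpumpe des Baggers an.",
    "Die Hydraulikpumpe erzeugt Druck für den Hubzylinder.",
]
vec = TfidfVectorizer()
tfidf = vec.fit_transform(docs)
scores = tfidf.sum(axis=0).A1  # aggregate TF-IDF score per term
terms = vec.get_feature_names_out()
top = sorted(zip(terms, scores), key=lambda t: -t[1])[:5]
print(top)  # candidate domain terms to annotate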