Publication:

Artificial intelligence in history education : Linguistic content and complexity analyses of student writings in the CAHisT project (Computational assessment of historical thinking)

Files

There are no files associated with this document.

Date

2022

Authors

Bertram, Christiane
Weiss, Zarah
Zachrich, Lisa
Ziai, Ramon

URI (citable link)
https://kops.uni-konstanz.de/handle/123456789/54965

Open Access publication
Open Access Gold

Publication type
Journal article
Publication status
Published

Published in

Computers and Education : Artificial Intelligence. Elsevier. ISSN 0360-1315. eISSN 1873-782X. Available under: doi: 10.1016/j.caeai.2021.100038

Abstract

The use of standardized test formats in the assessment of historical competencies has recently come under severe criticism, especially in the United States, where standardized tests are particularly common. History researchers have argued that open-ended items are more appropriate for assessment. However, providing large-scale evaluations of open-ended answers is time-consuming and poses challenges regarding the objectivity, validity, and replicability of ratings. To address this issue, we investigated the extent to which computer-based evaluation methods are suitable for evaluating student answers by combining qualitative methods from history education research with quantitative, computer-based linguistic analyses. In two studies, we analyzed data from an intervention study in which 962 students (ninth graders) completed seven open-ended tasks. In Study 1, we investigated the extent to which task complexity could be predicted from the linguistic complexity of the students' answers. In Study 2, we conducted an automatic content assessment by aligning student answers with predefined target answers and compared automatic scores with human ratings that were based on an elaborate evaluation scheme we developed. In our first study, we identified several linguistic features of students' answers that successfully predicted task complexity. In our second study, the correlations between manual and computational task scores were encouraging. However, our human interrater agreement left room for improvement and demonstrated the challenges of reliably applying explicit evaluation criteria to open tasks. Overall, our findings illustrate that the combination of qualitative methods from history education research and quantitative computational-linguistic analyses may support the large-scale evaluation of open tasks.
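
To make the two pipelines concrete, here is a minimal Python sketch under stated assumptions: the record publishes no code, so the feature set, the token-overlap alignment, the target answer, and all scores below are invented stand-ins for the paper's far richer linguistic features and alignment-based content assessment. The sketch extracts coarse complexity features from an answer (the kind of predictors Study 1 feeds into models of task complexity), scores answers by overlap with a predefined target answer (the general shape of Study 2's automatic content assessment), and correlates automatic with human scores.

# Toy sketch of the two pipelines described in the abstract. All data,
# the feature set, and the token-overlap alignment are invented for
# illustration; the paper's actual features, alignment system, and
# 962-student data set are far richer.
import statistics


def complexity_features(answer: str) -> list[float]:
    """Coarse linguistic complexity proxies of the kind used in Study 1."""
    tokens = answer.split()
    sentences = [s for s in answer.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    mean_sentence_length = len(tokens) / max(len(sentences), 1)
    mean_word_length = sum(len(t) for t in tokens) / max(len(tokens), 1)
    type_token_ratio = len({t.lower() for t in tokens}) / max(len(tokens), 1)
    return [mean_sentence_length, mean_word_length, type_token_ratio]


def align_score(answer: str, target: str) -> float:
    """Study-2-style content score: token overlap with a predefined target answer."""
    answer_tokens = set(answer.lower().split())
    target_tokens = set(target.lower().split())
    return len(answer_tokens & target_tokens) / max(len(target_tokens), 1)


def spearman(x: list[float], y: list[float]) -> float:
    """Spearman rank correlation (Pearson on ranks, ignoring ties)."""
    def ranks(values: list[float]) -> list[float]:
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r

    rx, ry = ranks(x), ranks(y)
    mx, my = statistics.mean(rx), statistics.mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)


# Hypothetical target answer and student answers for one open-ended task.
target = "the berlin wall fell in 1989 after mass protests"
answers = [
    "the wall fell in 1989 because of mass protests",
    "people protested and the berlin wall came down",
    "germany",
]
human_scores = [0.9, 0.7, 0.1]  # invented human ratings

automatic_scores = [align_score(a, target) for a in answers]
print("Study 1 features:", [complexity_features(a) for a in answers])
print("Study 2 automatic scores:", [round(s, 2) for s in automatic_scores])
print("Spearman(auto, human):", round(spearman(automatic_scores, human_scores), 2))

The correlation step mirrors the paper's Study 2 comparison: automatic scores are only useful insofar as they rank answers similarly to human raters, which is why the abstract reports both manual-computational correlations and human interrater agreement.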

Subject (DDC)
370 Education

Cite

ISO 690
BERTRAM, Christiane, Zarah WEISS, Lisa ZACHRICH, Ramon ZIAI, 2022. Artificial intelligence in history education : Linguistic content and complexity analyses of student writings in the CAHisT project (Computational assessment of historical thinking). In: Computers and Education : Artificial Intelligence. Elsevier. ISSN 0360-1315. eISSN 1873-782X. Available under: doi: 10.1016/j.caeai.2021.100038
BibTeX
@article{Bertram2022-12Artif-54965,
  year={2022},
  doi={10.1016/j.caeai.2021.100038},
  title={Artificial intelligence in history education : Linguistic content and complexity analyses of student writings in the CAHisT project (Computational assessment of historical thinking)},
  issn={0360-1315},
  journal={Computers and Education : Artificial Intelligence},
  author={Bertram, Christiane and Weiss, Zarah and Zachrich, Lisa and Ziai, Ramon}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/54965">
    <dc:creator>Zachrich, Lisa</dc:creator>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-09-22T14:41:38Z</dc:date>
    <dc:creator>Bertram, Christiane</dc:creator>
    <dc:contributor>Weiss, Zarah</dc:contributor>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/31"/>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-09-22T14:41:38Z</dcterms:available>
    <dcterms:abstract xml:lang="eng">The use of standardized test formats in the assessment of historical competencies has recently come under severe criticism, especially in the United States, where standardized tests are particularly common. History researchers have argued that open-ended items are more appropriate for assessment. However, providing large-scale evaluations of open-ended answers is time-consuming and poses challenges regarding the objectivity, validity, and replicability of ratings. To address this issue, we investigated the extent to which computer-based evaluation methods are suitable for evaluating student answers by combining qualitative methods from history education research with quantitative, computer-based linguistic analyses. In two studies, we analyzed data from an intervention study in which 962 students (ninth graders) completed seven open-ended tasks. In Study 1, we investigated the extent to which task complexity could be predicted from the linguistic complexity of the students' answers. In Study 2, we conducted an automatic content assessment by aligning student answers with predefined target answers and compared automatic scores with human ratings that were based on an elaborate evaluation scheme we developed. In our first study, we identified several linguistic features of students' answers that successfully predicted task complexity. In our second study, the correlations between manual and computational task scores were encouraging. However, our human interrater agreement left room for improvement and demonstrated the challenges of reliably applying explicit evaluation criteria to open tasks. Overall, our findings illustrate that the combination of qualitative methods from history education research and quantitative computational-linguistic analyses may support the large-scale evaluation of open tasks.</dcterms:abstract>
    <dcterms:issued>2022-12</dcterms:issued>
    <dc:creator>Ziai, Ramon</dc:creator>
    <dc:rights>terms-of-use</dc:rights>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/54965"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/31"/>
    <dc:contributor>Zachrich, Lisa</dc:contributor>
    <dcterms:title>Artificial intelligence in history education : Linguistic content and complexity analyses of student writings in the CAHisT project (Computational assessment of historical thinking)</dcterms:title>
    <dc:creator>Weiss, Zarah</dc:creator>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/32"/>
    <dcterms:rights rdf:resource="https://rightsstatements.org/page/InC/1.0/"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/32"/>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dc:contributor>Bertram, Christiane</dc:contributor>
    <dc:contributor>Ziai, Ramon</dc:contributor>
    <dc:language>eng</dc:language>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
  </rdf:Description>
</rdf:RDF>

University bibliography
Yes
Peer-reviewed
Yes
Online First: journal articles that are published online before being assigned to a specific journal issue. Online First articles appear on the journal's homepage in the publisher's version.