Artificial intelligence in history education : Linguistic content and complexity analyses of student writings in the CAHisT project (Computational assessment of historical thinking)

dc.contributor.author: Bertram, Christiane
dc.contributor.author: Weiss, Zarah
dc.contributor.author: Zachrich, Lisa
dc.contributor.author: Ziai, Ramon
dc.date.accessioned: 2021-09-22T14:41:38Z
dc.date.available: 2021-09-22T14:41:38Z
dc.date.issued: 2022-12
dc.description.abstract [eng]: The use of standardized test formats in the assessment of historical competencies has recently come under severe criticism, especially in the United States, where standardized tests are particularly common. History researchers have argued that open-ended items are more appropriate for assessment. However, providing large-scale evaluations of open-ended answers is time-consuming and poses challenges regarding the objectivity, validity, and replicability of ratings. To address this issue, we investigated the extent to which computer-based evaluation methods are suitable for evaluating student answers by combining qualitative methods from history education research with quantitative, computer-based linguistic analyses. In two studies, we analyzed data from an intervention study in which 962 students (ninth graders) completed seven open-ended tasks. In Study 1, we investigated the extent to which task complexity could be predicted from the linguistic complexity of the students' answers. In Study 2, we conducted an automatic content assessment by aligning student answers with predefined target answers and compared automatic scores with human ratings that were based on an elaborate evaluation scheme we developed. In our first study, we identified several linguistic features of students' answers that successfully predicted task complexity. In our second study, the correlations between manual and computational task scores were encouraging. However, our human interrater agreement left room for improvement and demonstrated the challenges of reliably applying explicit evaluation criteria to open tasks. Overall, our findings illustrate that the combination of qualitative methods from history education research and quantitative computational-linguistic analyses may support the large-scale evaluation of open tasks.
dc.description.version: published [eng]
dc.identifier.doi: 10.1016/j.caeai.2021.100038
dc.identifier.pmid: 36441613
dc.identifier.uri: https://kops.uni-konstanz.de/handle/123456789/54965
dc.language.iso: eng
dc.rights: terms-of-use
dc.rights.uri: https://rightsstatements.org/page/InC/1.0/
dc.subject.ddc: 370 [eng]
dc.title: Artificial intelligence in history education : Linguistic content and complexity analyses of student writings in the CAHisT project (Computational assessment of historical thinking) [eng]
dc.type: JOURNAL_ARTICLE [eng]
dspace.entity.type: Publication
kops.citation.bibtex
@article{Bertram2022-12Artif-54965,
  year={2022},
  doi={10.1016/j.caeai.2021.100038},
  title={Artificial intelligence in history education : Linguistic content and complexity analyses of student writings in the CAHisT project (Computational assessment of historical thinking)},
  issn={0360-1315},
  journal={Computers and Education : Artificial Intelligence},
  author={Bertram, Christiane and Weiss, Zarah and Zachrich, Lisa and Ziai, Ramon}
}
kops.citation.iso690 [deu]: BERTRAM, Christiane, Zarah WEISS, Lisa ZACHRICH, Ramon ZIAI, 2022. Artificial intelligence in history education : Linguistic content and complexity analyses of student writings in the CAHisT project (Computational assessment of historical thinking). In: Computers and Education : Artificial Intelligence. Elsevier. ISSN 0360-1315. eISSN 1873-782X. Available under: doi: 10.1016/j.caeai.2021.100038
kops.citation.iso690 [eng]: BERTRAM, Christiane, Zarah WEISS, Lisa ZACHRICH, Ramon ZIAI, 2022. Artificial intelligence in history education : Linguistic content and complexity analyses of student writings in the CAHisT project (Computational assessment of historical thinking). In: Computers and Education : Artificial Intelligence. Elsevier. ISSN 0360-1315. eISSN 1873-782X. Available under: doi: 10.1016/j.caeai.2021.100038
kops.citation.rdf
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/54965">
    <dc:creator>Zachrich, Lisa</dc:creator>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-09-22T14:41:38Z</dc:date>
    <dc:creator>Bertram, Christiane</dc:creator>
    <dc:contributor>Weiss, Zarah</dc:contributor>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/31"/>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-09-22T14:41:38Z</dcterms:available>
    <dcterms:abstract xml:lang="eng">The use of standardized test formats in the assessment of historical competencies has recently come under severe criticism, especially in the United States, where standardized tests are particularly common. History researchers have argued that open-ended items are more appropriate for assessment. However, providing large-scale evaluations of open-ended answers is time-consuming and poses challenges regarding the objectivity, validity, and replicability of ratings. To address this issue, we investigated the extent to which computer-based evaluation methods are suitable for evaluating student answers by combining qualitative methods from history education research with quantitative, computer-based linguistic analyses. In two studies, we analyzed data from an intervention study in which 962 students (ninth graders) completed seven open-ended tasks. In Study 1, we investigated the extent to which task complexity could be predicted from the linguistic complexity of the students' answers. In Study 2, we conducted an automatic content assessment by aligning student answers with predefined target answers and compared automatic scores with human ratings that were based on an elaborate evaluation scheme we developed. In our first study, we identified several linguistic features of students' answers that successfully predicted task complexity. In our second study, the correlations between manual and computational task scores were encouraging. However, our human interrater agreement left room for improvement and demonstrated the challenges of reliably applying explicit evaluation criteria to open tasks. Overall, our findings illustrate that the combination of qualitative methods from history education research and quantitative computational-linguistic analyses may support the large-scale evaluation of open tasks.</dcterms:abstract>
    <dcterms:issued>2022-12</dcterms:issued>
    <dc:creator>Ziai, Ramon</dc:creator>
    <dc:rights>terms-of-use</dc:rights>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/54965"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/31"/>
    <dc:contributor>Zachrich, Lisa</dc:contributor>
    <dcterms:title>Artificial intelligence in history education : Linguistic content and complexity analyses of student writings in the CAHisT project (Computational assessment of historical thinking)</dcterms:title>
    <dc:creator>Weiss, Zarah</dc:creator>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/32"/>
    <dcterms:rights rdf:resource="https://rightsstatements.org/page/InC/1.0/"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/32"/>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dc:contributor>Bertram, Christiane</dc:contributor>
    <dc:contributor>Ziai, Ramon</dc:contributor>
    <dc:language>eng</dc:language>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
  </rdf:Description>
</rdf:RDF>
kops.description.openAccess: openaccessgold
kops.flag.isPeerReviewed: true [eng]
kops.flag.knbibliography: true
kops.sourcefield [deu]: Computers and Education : Artificial Intelligence. Elsevier. ISSN 0360-1315. eISSN 1873-782X. Available under: doi: 10.1016/j.caeai.2021.100038
kops.sourcefield.plain [deu]: Computers and Education : Artificial Intelligence. Elsevier. ISSN 0360-1315. eISSN 1873-782X. Available under: doi: 10.1016/j.caeai.2021.100038
kops.sourcefield.plain [eng]: Computers and Education : Artificial Intelligence. Elsevier. ISSN 0360-1315. eISSN 1873-782X. Available under: doi: 10.1016/j.caeai.2021.100038
relation.isAuthorOfPublication: 7b05e7c4-c57b-40ae-95e4-c35da5379f95
relation.isAuthorOfPublication: 5377b18a-41a2-4249-b049-9ed657cbd90e
relation.isAuthorOfPublication.latestForDiscovery: 7b05e7c4-c57b-40ae-95e4-c35da5379f95
source.identifier.eissn: 1873-782X [eng]
source.identifier.issn: 0360-1315 [eng]
source.periodicalTitle: Computers and Education : Artificial Intelligence [eng]
source.publisher: Elsevier [eng]
temp.internal.recheck: Online First: complete metadata
