Publication:

DeAF: A multimodal deep learning framework for disease prediction

Files

There are no files associated with this document.

Date

2023

Authors

Li, Kangshun
Chen, Can
Cao, Wuteng
Wang, Hui
Han, Shuai
Wang, Renjie
Ye, Zaisheng
Wu, Zhijie
Ding, Deyu
Yuan, Zixu

Publication type
Journal article
Publication status
Published

Published in

Computers in Biology and Medicine. Elsevier. 2023, 156, 106715. ISSN 0010-4825. eISSN 1879-0534. Available under: doi: 10.1016/j.compbiomed.2023.106715

Abstract

Multimodal deep learning models have been applied to disease prediction tasks, but training them is difficult because of conflicts between the sub-models and the fusion modules. To alleviate this issue, we propose a framework that decouples feature alignment and fusion (DeAF), separating multimodal model training into two stages. In the first stage, unsupervised representation learning is conducted, and the modality adaptation (MA) module is used to align the features from the various modalities. In the second stage, the self-attention fusion (SAF) module combines the medical image features and clinical data using supervised learning. Moreover, we apply the DeAF framework to predict the postoperative efficacy of CRS for colorectal cancer and whether patients with mild cognitive impairment (MCI) convert to Alzheimer's disease. The DeAF framework achieves a significant improvement over previous methods. Furthermore, extensive ablation experiments are conducted to demonstrate the rationality and effectiveness of our framework. In conclusion, our framework enhances the interaction between local medical image features and clinical data, and derives more discriminative multimodal features for disease prediction. The framework implementation is available at https://github.com/cchencan/DeAF.
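
A minimal PyTorch sketch of the two-stage idea described in the abstract follows. The module names (MAModule, SAFusion), the feature dimensions, the random tensors standing in for extracted features, and the InfoNCE-style contrastive loss for the unsupervised stage are all illustrative assumptions, not the authors' implementation; their actual code is available at the GitHub repository linked above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MAModule(nn.Module):
    """Hypothetical modality adaptation (MA) module: projects image and
    clinical features into one shared space so they can be aligned."""
    def __init__(self, img_dim=512, clin_dim=32, shared_dim=128):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, shared_dim)
        self.clin_proj = nn.Linear(clin_dim, shared_dim)

    def forward(self, img_feat, clin_feat):
        return self.img_proj(img_feat), self.clin_proj(clin_feat)

class SAFusion(nn.Module):
    """Hypothetical self-attention fusion (SAF) module: treats the two
    aligned feature vectors as a two-token sequence and fuses them."""
    def __init__(self, dim=128, n_classes=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, img_tok, clin_tok):
        seq = torch.stack([img_tok, clin_tok], dim=1)  # (B, 2, dim)
        fused, _ = self.attn(seq, seq, seq)            # attention across modalities
        return self.head(fused.mean(dim=1))            # pooled logits

def alignment_loss(img_tok, clin_tok, temperature=0.1):
    # InfoNCE-style loss pulling matched image/clinical pairs together;
    # one plausible choice for the unsupervised stage, assumed here.
    img = F.normalize(img_tok, dim=-1)
    clin = F.normalize(clin_tok, dim=-1)
    logits = img @ clin.t() / temperature
    targets = torch.arange(img.size(0), device=img.device)
    return F.cross_entropy(logits, targets)

# Stage 1: unsupervised representation alignment (no labels involved).
ma = MAModule()
opt1 = torch.optim.Adam(ma.parameters(), lr=1e-4)
img_feat = torch.randn(16, 512)   # stand-in for CNN features of medical images
clin_feat = torch.randn(16, 32)   # stand-in for tabular clinical data
opt1.zero_grad()
img_tok, clin_tok = ma(img_feat, clin_feat)
alignment_loss(img_tok, clin_tok).backward()
opt1.step()

# Stage 2: supervised fusion on top of the frozen, pre-aligned features.
saf = SAFusion()
opt2 = torch.optim.Adam(saf.parameters(), lr=1e-4)
labels = torch.randint(0, 2, (16,))
opt2.zero_grad()
with torch.no_grad():             # decoupling: MA is not updated in this stage
    img_tok, clin_tok = ma(img_feat, clin_feat)
F.cross_entropy(saf(img_tok, clin_tok), labels).backward()
opt2.step()

Freezing the MA module in the second stage is what makes the training decoupled in this sketch: the fusion module learns against stable, pre-aligned features instead of competing with sub-model updates.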

Subject (DDC)
330 Economics

Cite

ISO 690
LI, Kangshun, Can CHEN, Wuteng CAO, Hui WANG, Shuai HAN, Renjie WANG, Zaisheng YE, Zhijie WU, Deyu DING, Zixu YUAN, 2023. DeAF : A multimodal deep learning framework for disease prediction. In: Computers in Biology and Medicine. Elsevier. 2023, 156, 106715. ISSN 0010-4825. eISSN 1879-0534. Available under: doi: 10.1016/j.compbiomed.2023.106715
BibTeX
@article{Li2023multi-66702,
  year={2023},
  doi={10.1016/j.compbiomed.2023.106715},
  title={DeAF : A multimodal deep learning framework for disease prediction},
  volume={156},
  issn={0010-4825},
  journal={Computers in Biology and Medicine},
  author={Li, Kangshun and Chen, Can and Cao, Wuteng and Wang, Hui and Han, Shuai and Wang, Renjie and Ye, Zaisheng and Wu, Zhijie and Ding, Deyu and Yuan, Zixu},
  note={Article Number: 106715}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/66702">
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/46"/>
    <dc:contributor>Ye, Zaisheng</dc:contributor>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2023-04-21T08:31:37Z</dc:date>
    <dc:contributor>Yuan, Zixu</dc:contributor>
    <dc:contributor>Han, Shuai</dc:contributor>
    <dcterms:abstract>Multimodal deep learning models have been applied for disease prediction tasks, but difficulties exist in training due to the conflict between sub-models and fusion modules. To alleviate this issue, we propose a framework for decoupling feature alignment and fusion (DeAF), which separates the multimodal model training into two stages. In the first stage, unsupervised representation learning is conducted, and the modality adaptation (MA) module is used to align the features from various modalities. In the second stage, the self-attention fusion (SAF) module combines the medical image features and clinical data using supervised learning. Moreover, we apply the DeAF framework to predict the postoperative efficacy of CRS for colorectal cancer and whether the MCI patients change to Alzheimer’s disease. The DeAF framework achieves a significant improvement in comparison to the previous methods. Furthermore, extensive ablation experiments are conducted to demonstrate the rationality and effectiveness of our framework. In conclusion, our framework enhances the interaction between the local medical image features and clinical data, and derive more discriminative multimodal features for disease prediction. The framework implementation is available at https://github.com/cchencan/DeAF.</dcterms:abstract>
    <dc:creator>Wang, Renjie</dc:creator>
    <dc:creator>Wu, Zhijie</dc:creator>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2023-04-21T08:31:37Z</dcterms:available>
    <dcterms:issued>2023</dcterms:issued>
    <dc:contributor>Ding, Deyu</dc:contributor>
    <dc:creator>Cao, Wuteng</dc:creator>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/46"/>
    <dc:contributor>Wang, Renjie</dc:contributor>
    <dc:creator>Wang, Hui</dc:creator>
    <dc:creator>Ye, Zaisheng</dc:creator>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/66702"/>
    <dc:creator>Chen, Can</dc:creator>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dc:creator>Han, Shuai</dc:creator>
    <dc:creator>Ding, Deyu</dc:creator>
    <dc:creator>Li, Kangshun</dc:creator>
    <dc:contributor>Li, Kangshun</dc:contributor>
    <dc:contributor>Cao, Wuteng</dc:contributor>
    <dc:contributor>Wu, Zhijie</dc:contributor>
    <dc:contributor>Chen, Can</dc:contributor>
    <dc:creator>Yuan, Zixu</dc:creator>
    <dc:contributor>Wang, Hui</dc:contributor>
    <dc:language>eng</dc:language>
    <dcterms:title>DeAF : A multimodal deep learning framework for disease prediction</dcterms:title>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
  </rdf:Description>
</rdf:RDF>

Peer-reviewed
Yes