Publication:

Appearance-based modeling for segmentation of hippocampus and amygdala using multi-contrast MR imaging

Files

There are no files associated with this document.

Date

2011

Authors

Hu, Shiyan
Coupé, Pierrick
Pruessner, Jens C.
Collins, D. Louis

Journal ISSN
1053-8119
Electronic ISSN
1095-9572

URI (citable link)
https://kops.uni-konstanz.de/handle/123456789/38403

Publication type
Journal article
Publication status
Published

Published in

NeuroImage. 2011, 58(2), pp. 549-559. ISSN 1053-8119. eISSN 1095-9572. Available under: doi: 10.1016/j.neuroimage.2011.06.054

Abstract

A new automatic model-based segmentation scheme that combines level set shape modeling and active appearance modeling (AAM) is presented. Since different MR image contrasts can yield complementary information, multi-contrast images are incorporated into the active appearance modeling to improve segmentation performance. During active appearance modeling, the weighting of each contrast is optimized to account for the potentially varying contribution of each image, while the model parameters corresponding to the shape and appearance eigen-images are optimized to minimize the difference between the multi-contrast test images and those synthesized from the shape and appearance model. As appearance-based modeling techniques depend on the initial alignment of the training data, we compare (i) linear alignment of the whole brain, (ii) linear alignment of a local volume of interest, and (iii) non-linear alignment of a local volume of interest. The proposed segmentation scheme is used to segment the human hippocampus (HC) and amygdala (AG), which have weak intensity contrast with their background in MRI. The experiments demonstrate that non-linear alignment of the training data yields the best results and that multimodal segmentation using T1-weighted, T2-weighted and proton density-weighted images yields better segmentation results than any single contrast. In a four-fold cross-validation with eighty young normal subjects, the method yields a mean Dice κ of 0.87 with an intraclass correlation coefficient (ICC) of 0.946 for HC, and a mean Dice κ of 0.81 with an ICC of 0.924 for AG, between manual and automatic labels.
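
The abstract refers to two quantities that can be sketched concisely: the per-contrast weighted image difference that the appearance model minimizes, and the Dice κ overlap used to compare manual and automatic labels. The Python sketch below is illustrative only and not code from the paper; the function names, the dict-based interface and the simple sum-of-squares form of the multi-contrast cost are assumptions.

import numpy as np

def dice_kappa(manual_label, automatic_label):
    # Dice overlap between two binary masks; 1.0 means perfect agreement.
    a = np.asarray(manual_label, dtype=bool)
    b = np.asarray(automatic_label, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks are empty; treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def weighted_multicontrast_cost(test_images, synthesized_images, weights):
    # Weighted sum of squared intensity differences over the contrasts
    # (e.g. "t1w", "t2w", "pdw"). Larger weights let a contrast contribute
    # more to the model-fitting cost. Hypothetical form, for illustration only.
    return sum(
        weights[c] * np.sum((np.asarray(test_images[c], dtype=float)
                             - np.asarray(synthesized_images[c], dtype=float)) ** 2)
        for c in test_images
    )

Read this way, the reported mean Dice κ of 0.87 for the hippocampus means that twice the overlap volume between the manual and automatic labels amounts to 87% of their combined volume.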

Subject area (DDC)
150 Psychology

Cite

ISO 690
HU, Shiyan, Pierrick COUPÉ, Jens C. PRUESSNER, D. Louis COLLINS, 2011. Appearance-based modeling for segmentation of hippocampus and amygdala using multi-contrast MR imaging. In: NeuroImage. 2011, 58(2), pp. 549-559. ISSN 1053-8119. eISSN 1095-9572. Available under: doi: 10.1016/j.neuroimage.2011.06.054
BibTeX
@article{Hu2011-09Appea-38403,
  year={2011},
  doi={10.1016/j.neuroimage.2011.06.054},
  title={Appearance-based modeling for segmentation of hippocampus and amygdala using multi-contrast MR imaging},
  number={2},
  volume={58},
  issn={1053-8119},
  journal={NeuroImage},
  pages={549--559},
  author={Hu, Shiyan and Coupé, Pierrick and Pruessner, Jens C. and Collins, D. Louis}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/38403">
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43"/>
    <dc:contributor>Pruessner, Jens C.</dc:contributor>
    <dcterms:title>Appearance-based modeling for segmentation of hippocampus and amygdala using multi-contrast MR imaging</dcterms:title>
    <dc:creator>Pruessner, Jens C.</dc:creator>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2017-04-07T07:57:25Z</dcterms:available>
    <dcterms:abstract xml:lang="eng">A new automatic model-based segmentation scheme that combines level set shape modeling and active appearance modeling (AAM) is presented. Since different MR image contrasts can yield complementary information, multi-contrast images can be incorporated into the active appearance modeling to improve segmentation performance. During active appearance modeling, the weighting of each contrast is optimized to account for the potentially varying contribution of each image while optimizing the model parameters that correspond to the shape and appearance eigen-images in order to minimize the difference between the multi-contrast test images and the ones synthesized from the shape and appearance modeling. As appearance-based modeling techniques are dependent on the initial alignment of training data, we compare (i) linear alignment of whole brain, (ii) linear alignment of a local volume of interest and (iii) non-linear alignment of a local volume of interest. The proposed segmentation scheme can be used to segment human hippocampi (HC) and amygdalae (AG), which have weak intensity contrast with their background in MRI. The experiments demonstrate that non-linear alignment of training data yields the best results and that multimodal segmentation using T1-weighted, T2-weighted and proton density-weighted images yields better segmentation results than any single contrast. In a four-fold cross validation with eighty young normal subjects, the method yields a mean Dice к of 0.87 with intraclass correlation coefficient (ICC) of 0.946 for HC and a mean Dice к of 0.81 with ICC of 0.924 for AG between manual and automatic labels.</dcterms:abstract>
    <dcterms:issued>2011-09</dcterms:issued>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/38403"/>
    <dc:contributor>Hu, Shiyan</dc:contributor>
    <dc:contributor>Coupé, Pierrick</dc:contributor>
    <dc:contributor>Collins, D. Louis</dc:contributor>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43"/>
    <dc:language>eng</dc:language>
    <dc:creator>Coupé, Pierrick</dc:creator>
    <dc:creator>Hu, Shiyan</dc:creator>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2017-04-07T07:57:25Z</dc:date>
    <dc:creator>Collins, D. Louis</dc:creator>
  </rdf:Description>
</rdf:RDF>

University bibliography
No