Appearance-based modeling for segmentation of hippocampus and amygdala using multi-contrast MR imaging

dc.contributor.author: Hu, Shiyan
dc.contributor.author: Coupé, Pierrick
dc.contributor.author: Pruessner, Jens C.
dc.contributor.author: Collins, D. Louis
dc.date.accessioned: 2017-04-07T07:57:25Z
dc.date.available: 2017-04-07T07:57:25Z
dc.date.issued: 2011-09
dc.description.abstract: A new automatic model-based segmentation scheme that combines level set shape modeling and active appearance modeling (AAM) is presented. Since different MR image contrasts can yield complementary information, multi-contrast images can be incorporated into the active appearance modeling to improve segmentation performance. During active appearance modeling, the weighting of each contrast is optimized to account for the potentially varying contribution of each image while optimizing the model parameters that correspond to the shape and appearance eigen-images in order to minimize the difference between the multi-contrast test images and the ones synthesized from the shape and appearance modeling. As appearance-based modeling techniques are dependent on the initial alignment of training data, we compare (i) linear alignment of whole brain, (ii) linear alignment of a local volume of interest and (iii) non-linear alignment of a local volume of interest. The proposed segmentation scheme can be used to segment human hippocampi (HC) and amygdalae (AG), which have weak intensity contrast with their background in MRI. The experiments demonstrate that non-linear alignment of training data yields the best results and that multimodal segmentation using T1-weighted, T2-weighted and proton density-weighted images yields better segmentation results than any single contrast. In a four-fold cross validation with eighty young normal subjects, the method yields a mean Dice κ of 0.87 with intraclass correlation coefficient (ICC) of 0.946 for HC and a mean Dice κ of 0.81 with ICC of 0.924 for AG between manual and automatic labels.
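The Dice κ overlap reported in the abstract is a standard voxel-overlap measure between two binary label volumes, 2·|A ∩ B| / (|A| + |B|). A minimal sketch of computing it with NumPy (the array names and toy volumes are illustrative, not data from the paper):

```python
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks:
    2*|A intersect B| / (|A| + |B|), in [0, 1]."""
    a = auto_mask.astype(bool)
    b = manual_mask.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 3-D example: two overlapping 5x5x5 cubes in a small volume.
auto = np.zeros((10, 10, 10), dtype=np.uint8)
manual = np.zeros((10, 10, 10), dtype=np.uint8)
auto[2:7, 2:7, 2:7] = 1      # 125 voxels
manual[3:8, 3:8, 3:8] = 1    # 125 voxels, 64 of them shared
print(round(dice_coefficient(auto, manual), 3))  # 2*64/250 = 0.512
```

A Dice κ of 0.87, as reported for the hippocampus, thus means the automatic and manual labels share 87% of their combined volume in this normalized sense.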
dc.description.version: published
dc.identifier.doi: 10.1016/j.neuroimage.2011.06.054
dc.identifier.pmid: 21741485
dc.identifier.uri: https://kops.uni-konstanz.de/handle/123456789/38403
dc.language.iso: eng
dc.subject.ddc: 150
dc.title: Appearance-based modeling for segmentation of hippocampus and amygdala using multi-contrast MR imaging
dc.type: JOURNAL_ARTICLE
dspace.entity.type: Publication
kops.citation.bibtex
@article{Hu2011-09Appea-38403,
  year={2011},
  doi={10.1016/j.neuroimage.2011.06.054},
  title={Appearance-based modeling for segmentation of hippocampus and amygdala using multi-contrast MR imaging},
  number={2},
  volume={58},
  issn={1053-8119},
  journal={NeuroImage},
  pages={549--559},
  author={Hu, Shiyan and Coupé, Pierrick and Pruessner, Jens C. and Collins, D. Louis}
}
kops.citation.iso690: HU, Shiyan, Pierrick COUPÉ, Jens C. PRUESSNER, D. Louis COLLINS, 2011. Appearance-based modeling for segmentation of hippocampus and amygdala using multi-contrast MR imaging. In: NeuroImage. 2011, 58(2), pp. 549-559. ISSN 1053-8119. eISSN 1095-9572. Available under: doi: 10.1016/j.neuroimage.2011.06.054
kops.citation.rdf
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/38403">
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43"/>
    <dc:contributor>Pruessner, Jens C.</dc:contributor>
    <dcterms:title>Appearance-based modeling for segmentation of hippocampus and amygdala using multi-contrast MR imaging</dcterms:title>
    <dc:creator>Pruessner, Jens C.</dc:creator>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2017-04-07T07:57:25Z</dcterms:available>
    <dcterms:abstract xml:lang="eng">A new automatic model-based segmentation scheme that combines level set shape modeling and active appearance modeling (AAM) is presented. Since different MR image contrasts can yield complementary information, multi-contrast images can be incorporated into the active appearance modeling to improve segmentation performance. During active appearance modeling, the weighting of each contrast is optimized to account for the potentially varying contribution of each image while optimizing the model parameters that correspond to the shape and appearance eigen-images in order to minimize the difference between the multi-contrast test images and the ones synthesized from the shape and appearance modeling. As appearance-based modeling techniques are dependent on the initial alignment of training data, we compare (i) linear alignment of whole brain, (ii) linear alignment of a local volume of interest and (iii) non-linear alignment of a local volume of interest. The proposed segmentation scheme can be used to segment human hippocampi (HC) and amygdalae (AG), which have weak intensity contrast with their background in MRI. The experiments demonstrate that non-linear alignment of training data yields the best results and that multimodal segmentation using T1-weighted, T2-weighted and proton density-weighted images yields better segmentation results than any single contrast. In a four-fold cross validation with eighty young normal subjects, the method yields a mean Dice κ of 0.87 with intraclass correlation coefficient (ICC) of 0.946 for HC and a mean Dice κ of 0.81 with ICC of 0.924 for AG between manual and automatic labels.</dcterms:abstract>
    <dcterms:issued>2011-09</dcterms:issued>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/38403"/>
    <dc:contributor>Hu, Shiyan</dc:contributor>
    <dc:contributor>Coupé, Pierrick</dc:contributor>
    <dc:contributor>Collins, D. Louis</dc:contributor>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43"/>
    <dc:language>eng</dc:language>
    <dc:creator>Coupé, Pierrick</dc:creator>
    <dc:creator>Hu, Shiyan</dc:creator>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2017-04-07T07:57:25Z</dc:date>
    <dc:creator>Collins, D. Louis</dc:creator>
  </rdf:Description>
</rdf:RDF>
kops.flag.knbibliography: false
kops.sourcefield: NeuroImage. 2011, 58(2), pp. 549-559. ISSN 1053-8119. eISSN 1095-9572. Available under: doi: 10.1016/j.neuroimage.2011.06.054
kops.sourcefield.plain: NeuroImage. 2011, 58(2), pp. 549-559. ISSN 1053-8119. eISSN 1095-9572. Available under: doi: 10.1016/j.neuroimage.2011.06.054
relation.isAuthorOfPublication: 153324a0-c321-4cfb-a112-90179871cd94
relation.isAuthorOfPublication.latestForDiscovery: 153324a0-c321-4cfb-a112-90179871cd94
source.bibliographicInfo.fromPage: 549
source.bibliographicInfo.issue: 2
source.bibliographicInfo.toPage: 559
source.bibliographicInfo.volume: 58
source.identifier.eissn: 1095-9572
source.identifier.issn: 1053-8119
source.periodicalTitle: NeuroImage
