Crowdsourcing Quality of Experience Experiments

dc.contributor.author: Egger-Lampl, Sebastian
dc.contributor.author: Redi, Judith
dc.contributor.author: Hoßfeld, Tobias
dc.contributor.author: Hirth, Matthias
dc.contributor.author: Möller, Sebastian
dc.contributor.author: Naderi, Babak
dc.contributor.author: Keimel, Christian
dc.contributor.author: Saupe, Dietmar
dc.date.accessioned: 2018-02-02T12:11:32Z
dc.date.available: 2018-02-02T12:11:32Z
dc.date.issued: 2017-09-28
dc.description.abstract: Crowdsourcing enables new possibilities for QoE evaluation by moving the evaluation task from the traditional laboratory environment onto the Internet, allowing researchers to easily access a global pool of workers for the evaluation task. This not only makes it possible to include a more diverse population and real-life environments in the evaluation, but also significantly reduces the turnaround time and increases the number of subjects participating in an evaluation campaign, thereby circumventing bottlenecks in traditional laboratory setups. In order to utilise these advantages, this chapter discusses the differences between laboratory-based and crowd-based QoE evaluation.
dc.description.version: published
dc.identifier.doi: 10.1007/978-3-319-66435-4_7
dc.identifier.ppn: 505522144
dc.identifier.uri: https://kops.uni-konstanz.de/handle/123456789/41214
dc.language.iso: eng
dc.rights: terms-of-use
dc.rights.uri: https://rightsstatements.org/page/InC/1.0/
dc.subject.ddc: 004
dc.title: Crowdsourcing Quality of Experience Experiments
dc.type: INPROCEEDINGS
dspace.entity.type: Publication
kops.citation.bibtex
@inproceedings{EggerLampl2017-09-28Crowd-41214,
  year={2017},
  doi={10.1007/978-3-319-66435-4_7},
  title={Crowdsourcing Quality of Experience Experiments},
  volume={10264},
  isbn={978-3-319-66434-7},
  issn={0302-9743},
  publisher={Springer},
  address={Cham},
  series={Lecture Notes in Computer Science},
  booktitle={Evaluation in the Crowd : Crowdsourcing and Human-Centered Experiments : Revised Contributions},
  pages={154--190},
  editor={Archambault, Daniel and Purchase, Helen and Hoßfeld, Tobias},
  author={Egger-Lampl, Sebastian and Redi, Judith and Hoßfeld, Tobias and Hirth, Matthias and Möller, Sebastian and Naderi, Babak and Keimel, Christian and Saupe, Dietmar}
}
kops.citation.iso690: EGGER-LAMPL, Sebastian, Judith REDI, Tobias HOSSFELD, Matthias HIRTH, Sebastian MÖLLER, Babak NADERI, Christian KEIMEL, Dietmar SAUPE, 2017. Crowdsourcing Quality of Experience Experiments. Dagstuhl Seminar 15481. Dagstuhl, Wadern, Nov 22, 2015 - Nov 27, 2015. In: ARCHAMBAULT, Daniel, ed., Helen PURCHASE, ed., Tobias HOSSFELD, ed. Evaluation in the Crowd : Crowdsourcing and Human-Centered Experiments : Revised Contributions. Cham: Springer, 2017, pp. 154-190. Lecture Notes in Computer Science. 10264. ISSN 0302-9743. eISSN 1611-3349. ISBN 978-3-319-66434-7. Available under: doi: 10.1007/978-3-319-66435-4_7
kops.citation.rdf
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/41214">
    <dc:contributor>Saupe, Dietmar</dc:contributor>
    <dc:creator>Hirth, Matthias</dc:creator>
    <dc:creator>Redi, Judith</dc:creator>
    <dc:contributor>Keimel, Christian</dc:contributor>
    <dc:contributor>Naderi, Babak</dc:contributor>
    <dc:creator>Naderi, Babak</dc:creator>
    <dc:creator>Keimel, Christian</dc:creator>
    <dc:creator>Saupe, Dietmar</dc:creator>
    <dc:contributor>Redi, Judith</dc:contributor>
    <dc:creator>Möller, Sebastian</dc:creator>
    <dcterms:title>Crowdsourcing Quality of Experience Experiments</dcterms:title>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2018-02-02T12:11:32Z</dcterms:available>
    <dc:contributor>Egger-Lampl, Sebastian</dc:contributor>
    <dc:contributor>Hoßfeld, Tobias</dc:contributor>
    <dcterms:abstract xml:lang="eng">Crowdsourcing enables new possibilities for QoE evaluation by moving the evaluation task from the traditional laboratory environment onto the Internet, allowing researchers to easily access a global pool of workers for the evaluation task. This not only makes it possible to include a more diverse population and real-life environments in the evaluation, but also significantly reduces the turnaround time and increases the number of subjects participating in an evaluation campaign, thereby circumventing bottlenecks in traditional laboratory setups. In order to utilise these advantages, this chapter discusses the differences between laboratory-based and crowd-based QoE evaluation.</dcterms:abstract>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/41214/1/Egger-Lampl_2-5iaoqamuzfkz9.pdf"/>
    <dc:creator>Egger-Lampl, Sebastian</dc:creator>
    <dc:contributor>Hirth, Matthias</dc:contributor>
    <dcterms:rights rdf:resource="https://rightsstatements.org/page/InC/1.0/"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2018-02-02T12:11:32Z</dc:date>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/41214"/>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/41214/1/Egger-Lampl_2-5iaoqamuzfkz9.pdf"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:language>eng</dc:language>
    <dc:contributor>Möller, Sebastian</dc:contributor>
    <dc:rights>terms-of-use</dc:rights>
    <dcterms:issued>2017-09-28</dcterms:issued>
    <dc:creator>Hoßfeld, Tobias</dc:creator>
  </rdf:Description>
</rdf:RDF>
kops.conferencefield: Dagstuhl Seminar 15481, Nov 22, 2015 - Nov 27, 2015, Dagstuhl, Wadern
kops.date.conferenceEnd: 2015-11-27
kops.date.conferenceStart: 2015-11-22
kops.description.openAccess: openaccessgreen
kops.flag.knbibliography: true
kops.identifier.nbn: urn:nbn:de:bsz:352-2-5iaoqamuzfkz9
kops.location.conference: Dagstuhl, Wadern
kops.sourcefield: ARCHAMBAULT, Daniel, ed., Helen PURCHASE, ed., Tobias HOSSFELD, ed. <i>Evaluation in the Crowd : Crowdsourcing and Human-Centered Experiments : Revised Contributions</i>. Cham: Springer, 2017, pp. 154-190. Lecture Notes in Computer Science. 10264. ISSN 0302-9743. eISSN 1611-3349. ISBN 978-3-319-66434-7. Available under: doi: 10.1007/978-3-319-66435-4_7
kops.sourcefield.plain: ARCHAMBAULT, Daniel, ed., Helen PURCHASE, ed., Tobias HOSSFELD, ed. Evaluation in the Crowd : Crowdsourcing and Human-Centered Experiments : Revised Contributions. Cham: Springer, 2017, pp. 154-190. Lecture Notes in Computer Science. 10264. ISSN 0302-9743. eISSN 1611-3349. ISBN 978-3-319-66434-7. Available under: doi: 10.1007/978-3-319-66435-4_7
kops.title.conference: Dagstuhl Seminar 15481
relation.isAuthorOfPublication: fffb576d-6ec6-4221-8401-77f1d117a9b9
relation.isAuthorOfPublication.latestForDiscovery: fffb576d-6ec6-4221-8401-77f1d117a9b9
source.bibliographicInfo.fromPage: 154
source.bibliographicInfo.seriesNumber: 10264
source.bibliographicInfo.toPage: 190
source.contributor.editor: Archambault, Daniel
source.contributor.editor: Purchase, Helen
source.contributor.editor: Hoßfeld, Tobias
source.identifier.eissn: 1611-3349
source.identifier.isbn: 978-3-319-66434-7
source.identifier.issn: 0302-9743
source.publisher: Springer
source.publisher.location: Cham
source.relation.ispartofseries: Lecture Notes in Computer Science
source.title: Evaluation in the Crowd : Crowdsourcing and Human-Centered Experiments : Revised Contributions

Files

Original bundle

Name: Egger-Lampl_2-5iaoqamuzfkz9.pdf
Size: 329 KB
Format: Adobe Portable Document Format
Description: Egger-Lampl_2-5iaoqamuzfkz9.pdf
Downloads: 1256