Publication:

Sample selection for MCMC-based recommender systems

Files

There are no files associated with this document.

Date

2013

Editors

Contact

Journal ISSN

Electronic ISSN

ISBN

Bibliographic data

Publisher

Series

Edition

ArXiv ID

International patent number

Research funding information

Project

Open Access publication
Core Facility of the University of Konstanz

Embargoed until

Title in another language

Publication type
Contribution to a conference proceedings
Publication status
Published

Published in

Proceedings of the 7th ACM conference on Recommender systems - RecSys '13. New York, New York, USA: ACM Press, 2013, pp. 403-406. ISBN 978-1-4503-2409-0. Available under: doi: 10.1145/2507157.2507224

Abstract

Bayesian inference with Markov Chain Monte Carlo (MCMC) has been shown to provide high prediction quality in recommender systems. The advantage over learning methods such as coordinate descent/alternating least squares (ALS) or (stochastic) gradient descent (SGD) is that MCMC takes uncertainty into account and, moreover, can easily integrate priors to learn regularization values. For factorization models, MCMC inference can be done with efficient Gibbs samplers.
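
As a rough illustration (not code from the paper), the sketch below shows how predictions are typically formed from such a chain for a matrix factorization model: each Gibbs sweep yields one sampled pair of factor matrices, and the final prediction averages the per-sample predictions rather than relying on a single point estimate. All names here are hypothetical.

import numpy as np

# Hypothetical sketch: posterior-mean prediction from an MCMC chain of
# matrix factorization samples. `chain` is a list of (U, V) pairs, where
# U is a (num_users x k) and V an (num_items x k) factor matrix drawn in
# one Gibbs sweep; `mu` is an optional global bias term.
def predict_from_chain(chain, user, item, mu=0.0):
    per_sample = [mu + U[user] @ V[item] for U, V in chain]
    return float(np.mean(per_sample))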


However, MCMC algorithms are not point estimators; they generate a chain of models, and the whole chain is used to calculate predictions. For large-scale models such as factorization methods with millions or billions of model parameters, saving the whole chain of models is very storage-intensive and can even become infeasible in practice. In this paper, we address this problem and show how a small subset of the chain of models can approximate the predictive distribution well. We use the fact that models from the chain are correlated and propose online selection techniques to store only a small subset of the models. We perform an empirical analysis on the large-scale Netflix dataset with several Bayesian factorization models, including matrix factorization and SVD++. We show that the proposed selection techniques approximate the predictions well with only a small subset of model samples.
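
The selection techniques themselves are specific to the paper; purely to illustrate the storage problem they address, the sketch below keeps a bounded subset of the chain online using reservoir sampling (a generic technique, not necessarily one of the selectors proposed here), so that at most `capacity` model samples are ever stored regardless of chain length.

import random

class ReservoirSelector:
    """Hypothetical online selector: keeps at most `capacity` model samples,
    chosen uniformly at random from the chain observed so far."""

    def __init__(self, capacity, seed=42):
        self.capacity = capacity
        self.kept = []    # stored subset of model samples
        self.seen = 0     # number of chain samples offered so far
        self._rng = random.Random(seed)

    def offer(self, model):
        # Call once per Gibbs sweep with the freshly drawn model sample.
        self.seen += 1
        if len(self.kept) < self.capacity:
            self.kept.append(model)
        else:
            j = self._rng.randrange(self.seen)
            if j < self.capacity:
                self.kept[j] = model

Predictions would then be averaged over the retained subset only (e.g. with predict_from_chain above), trading a small approximation error for bounded storage.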

Abstract in another language

Subject area (DDC)
004 Computer science

Keywords

Conference

The 7th ACM Conference on Recommender Systems (RecSys '13), 12 Oct 2013 - 16 Oct 2013, Hong Kong, China
Review

Research project

Organisational units

Journal issue

Related datasets in KOPS

Cite

ISO 690
SILBERMANN, Thierry, Immanuel BAYER, Steffen RENDLE, 2013. Sample selection for MCMC-based recommender systems. The 7th ACM Conference on Recommender Systems. Hong Kong, China, 12 Oct 2013 - 16 Oct 2013. In: Proceedings of the 7th ACM conference on Recommender systems - RecSys '13. New York, New York, USA: ACM Press, 2013, pp. 403-406. ISBN 978-1-4503-2409-0. Available under: doi: 10.1145/2507157.2507224
BibTeX
@inproceedings{Silbermann2013Sampl-26477,
  year={2013},
  doi={10.1145/2507157.2507224},
  title={Sample selection for MCMC-based recommender systems},
  isbn={978-1-4503-2409-0},
  publisher={ACM Press},
  address={New York, New York, USA},
  booktitle={Proceedings of the 7th ACM conference on Recommender systems - RecSys '13},
  pages={403--406},
  author={Silbermann, Thierry and Bayer, Immanuel and Rendle, Steffen}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/26477">
    <dc:contributor>Bayer, Immanuel</dc:contributor>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:issued>2013</dcterms:issued>
    <dcterms:title>Sample selection for MCMC-based recommender systems</dcterms:title>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2014-02-25T09:56:09Z</dc:date>
    <dc:rights>terms-of-use</dc:rights>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dc:creator>Bayer, Immanuel</dc:creator>
    <dcterms:bibliographicCitation>RecSys '13 : proceedings of the 7th ACM conference on Recommender systems ; Hong Kong, China — October 12 - 16, 2013 / Qiang Yang ... (eds.). - New York : ACM, 2013. - S. 403-406. - ISBN 978-1-4503-2409-0</dcterms:bibliographicCitation>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2014-02-25T09:56:09Z</dcterms:available>
    <dcterms:rights rdf:resource="https://rightsstatements.org/page/InC/1.0/"/>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dc:contributor>Silbermann, Thierry</dc:contributor>
    <dcterms:abstract xml:lang="eng">Bayesian Inference with Markov Chain Monte Carlo (MCMC) has been shown to provide high prediction quality in recommender systems. The advantage over learning methods such as coordinate descent/alternating least-squares (ALS) or (stochastic) gradient descent (SGD) is that MCMC takes uncertainty into account and moreover MCMC can easily integrate priors to learn regularization values. For factorization models, MCMC inference can be done with efficient Gibbs samplers.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;However, MCMC algorithms are not point estimators, but they generate a chain of models. The whole chain of models is used to calculate predictions. For large scale models like factorization methods with millions or billions of model parameters, saving the whole chain of models is very storage intensive and can even get infeasible in practice. In this paper, we address this problem and show how a small subset from the chain of models can approximate the predictive distribution well. We use the fact that models from the chain are correlated and propose online selection techniques to store only a small subset of the models. We perform an empirical analysis on the large scale Netflix dataset with several Bayesian factorization models, including matrix factorization and SVD++. We show that the proposed selection techniques approximate the predictions well with only a small subset of model samples.</dcterms:abstract>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:creator>Silbermann, Thierry</dc:creator>
    <dc:contributor>Rendle, Steffen</dc:contributor>
    <dc:language>eng</dc:language>
    <dc:creator>Rendle, Steffen</dc:creator>
    <bibo:uri rdf:resource="http://kops.uni-konstanz.de/handle/123456789/26477"/>
  </rdf:Description>
</rdf:RDF>

Internal note


Contact
URL of the original publication

URL check date

Date of the dissertation examination

Type of funding

Comment on the publication

Alliance licence
Corresponding authors from the University of Konstanz present
International co-authors
University bibliography
Yes
Peer reviewed