Type of Publication:  Contribution to a conference collection 
Author:  Rendle, Steffen 
Year of publication:  2010 
Conference:  2010 IEEE 10th International Conference on Data Mining (ICDM), Dec 13, 2010 - Dec 17, 2010, Sydney, Australia 
Published in:  2010 IEEE International Conference on Data Mining.  IEEE, 2010.  pp. 995-1000.  ISBN 978-1-4244-9131-5 
DOI (citable link):  https://dx.doi.org/10.1109/ICDM.2010.127 
Summary: 
In this paper, we introduce Factorization Machines (FMs), a new model class that combines the advantages of Support Vector Machines (SVMs) with factorization models. Like SVMs, FMs are a general predictor working with any real-valued feature vector. In contrast to SVMs, FMs model all interactions between variables using factorized parameters. Thus they are able to estimate interactions even in problems with huge sparsity (such as recommender systems) where SVMs fail. We show that the model equation of FMs can be computed in linear time, so FMs can be optimized directly. Hence, unlike nonlinear SVMs, a transformation to the dual form is not necessary, and the model parameters can be estimated directly without the need for any support vectors in the solution. We show the relationship to SVMs and the advantages of FMs for parameter estimation in sparse settings. On the other hand, there are many different factorization models, such as matrix factorization, parallel factor analysis, or specialized models like SVD++, PITF, or FPMC. The drawback of these models is that they are not applicable to general prediction tasks but work only with special input data. Furthermore, their model equations and optimization algorithms are derived individually for each task. We show that FMs can mimic these models just by specifying the input data (i.e., the feature vectors). This makes FMs easily applicable even for users without expert knowledge of factorization models.
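The linear-time computation claimed in the abstract rests on rewriting the pairwise interaction term of the degree-2 FM: sum over i&lt;j of &lt;v_i, v_j&gt; x_i x_j equals 0.5 * sum over factors f of [(sum_i v_{i,f} x_i)^2 - sum_i v_{i,f}^2 x_i^2], which costs O(k*n) instead of O(k*n^2). A minimal NumPy sketch of this trick (the function name `fm_predict` and the dense array representation are illustrative, not from the paper):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Degree-2 Factorization Machine prediction in O(k * n) time.

    x  : (n,) real-valued feature vector (typically very sparse)
    w0 : scalar global bias
    w  : (n,) linear weights
    V  : (n, k) factor matrix; <V[i], V[j]> models the (i, j) interaction
    """
    linear = w0 + w @ x
    # Pairwise term rewritten in linear time:
    #   sum_{i<j} <v_i, v_j> x_i x_j
    #     = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ]
    s = V.T @ x                 # (k,) per-factor weighted sums
    s_sq = (V ** 2).T @ (x ** 2)  # (k,) per-factor sums of squares
    pairwise = 0.5 * np.sum(s ** 2 - s_sq)
    return linear + pairwise
```

Because only nonzero entries of `x` contribute to `s` and `s_sq`, a sparse implementation costs O(k * nnz(x)), which is what makes FMs practical on recommender-style data.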

Subject (DDC):  004 Computer Science 
Keywords:  factorization machine, sparse data, tensor factorization, support vector machine 

There are no files associated with this item. 
RENDLE, Steffen, 2010. Factorization Machines. 2010 IEEE 10th International Conference on Data Mining (ICDM). Sydney, Australia, Dec 13, 2010 - Dec 17, 2010. In: 2010 IEEE International Conference on Data Mining. IEEE, pp. 995-1000. ISBN 978-1-4244-9131-5. Available under: doi: 10.1109/ICDM.2010.127
@inproceedings{Rendle201012Facto12699, title={Factorization Machines}, year={2010}, doi={10.1109/ICDM.2010.127}, isbn={978-1-4244-9131-5}, publisher={IEEE}, booktitle={2010 IEEE International Conference on Data Mining}, pages={995--1000}, author={Rendle, Steffen} }
<rdf:RDF xmlns:dcterms="http://purl.org/dc/terms/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:bibo="http://purl.org/ontology/bibo/" xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:void="http://rdfs.org/ns/void#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > <rdf:Description rdf:about="https://kops.uni-konstanz.de/rdf/resource/123456789/12699"> <dc:creator>Rendle, Steffen</dc:creator> <dc:rights>terms-of-use</dc:rights> <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/> <dcterms:issued>2010-12</dcterms:issued> <dc:language>eng</dc:language> <bibo:uri rdf:resource="http://kops.uni-konstanz.de/handle/123456789/12699"/> <dcterms:abstract xml:lang="eng">In this paper, we introduce Factorization Machines (FMs), a new model class that combines the advantages of Support Vector Machines (SVMs) with factorization models. Like SVMs, FMs are a general predictor working with any real-valued feature vector. In contrast to SVMs, FMs model all interactions between variables using factorized parameters. Thus they are able to estimate interactions even in problems with huge sparsity (such as recommender systems) where SVMs fail. We show that the model equation of FMs can be computed in linear time, so FMs can be optimized directly. Hence, unlike nonlinear SVMs, a transformation to the dual form is not necessary, and the model parameters can be estimated directly without the need for any support vectors in the solution. We show the relationship to SVMs and the advantages of FMs for parameter estimation in sparse settings. On the other hand, there are many different factorization models, such as matrix factorization, parallel factor analysis, or specialized models like SVD++, PITF, or FPMC. The drawback of these models is that they are not applicable to general prediction tasks but work only with special input data. Furthermore, their model equations and optimization algorithms are derived individually for each task. We show that FMs can mimic these models just by specifying the input data (i.e., the feature vectors). This makes FMs easily applicable even for users without expert knowledge of factorization models.</dcterms:abstract> <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/rdf/resource/123456789/36"/> <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2011-09-08T06:32:24Z</dcterms:available> <dc:contributor>Rendle, Steffen</dc:contributor> <foaf:homepage rdf:resource="http://localhost:8080/jspui"/> <dcterms:bibliographicCitation>Publ. in: 2010 IEEE 10th International Conference on Data Mining (ICDM 2010) : Sydney, Australia, 13 - 17 December 2010 ; [proceedings] / [IEEE Computer Society]. Ed.: Geoffrey I. Webb ... . Piscataway, NJ : IEEE, 2010, pp. 995-1000</dcterms:bibliographicCitation> <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2011-09-08T06:32:24Z</dc:date> <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/rdf/resource/123456789/36"/> <dcterms:rights rdf:resource="https://rightsstatements.org/page/InC/1.0/"/> <dcterms:title>Factorization Machines</dcterms:title> </rdf:Description> </rdf:RDF>