Experimental quantum speed-up in reinforcement learning agents

Files
No files are associated with this document.
Date
2021
Authors
Saggio, Valeria
Asenbeck, Beate E.
Hamann, Arne
Strömberg, Teodor
Schiansky, Peter
Dunjko, Vedran
Friis, Nicolai
Harris, Nicholas C.
Walther, Philip
et al.
Publication type
Journal article
Publication status
Published
Published in
Nature. Springer Nature. 2021, 591(7849), pp. 229-233. ISSN 0028-0836. eISSN 1476-4687. Available under: doi: 10.1038/s41586-021-03242-7
Abstract

As the field of artificial intelligence advances, the demand for algorithms that can learn quickly and efficiently increases. An important paradigm within artificial intelligence is reinforcement learning [1], where decision-making entities called agents interact with environments and learn by updating their behaviour on the basis of the obtained feedback. The crucial question for practical applications is how fast agents learn [2]. Although various studies have made use of quantum mechanics to speed up the agent’s decision-making process [3,4], a reduction in learning time has not yet been demonstrated. Here we present a reinforcement learning experiment in which the learning process of an agent is sped up by using a quantum communication channel with the environment. We further show that combining this scenario with classical communication enables the evaluation of this improvement and allows optimal control of the learning progress. We implement this learning protocol on a compact and fully tunable integrated nanophotonic processor. The device interfaces with telecommunication-wavelength photons and features a fast active-feedback mechanism, demonstrating the agent’s systematic quantum advantage in a setup that could readily be integrated within future large-scale quantum communication networks.
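The abstract's framing — an agent interacting with an environment and updating its behaviour from reward feedback — is the classical learning loop whose quantum-enhanced variant the paper studies. As a minimal illustrative sketch (not the paper's protocol; all names here are hypothetical), a two-armed bandit agent with an epsilon-greedy policy shows how such feedback-driven updates look in code:

```python
import random

def run_bandit(true_probs, episodes=5000, epsilon=0.1, seed=0):
    """Toy reinforcement-learning loop: estimate per-action reward
    probabilities from binary feedback, mostly exploiting the current
    best estimate and occasionally exploring."""
    rng = random.Random(seed)
    n = len(true_probs)
    values = [0.0] * n   # running estimate of expected reward per action
    counts = [0] * n     # how often each action was tried
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            action = rng.randrange(n)
        else:
            action = max(range(n), key=lambda a: values[a])
        # environment feedback: Bernoulli reward
        reward = 1.0 if rng.random() < true_probs[action] else 0.0
        # incremental-mean update of the action-value estimate
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]
    return values, counts

values, counts = run_bandit([0.3, 0.7])
```

After enough episodes the agent concentrates its choices on the better arm; the number of interactions needed to reach that point is the "learning time" that the quantum communication channel in the paper reduces.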

Subject (DDC)
100 Philosophy
Cite
ISO 690
SAGGIO, Valeria, Beate E. ASENBECK, Arne HAMANN, Teodor STRÖMBERG, Peter SCHIANSKY, Vedran DUNJKO, Nicolai FRIIS, Nicholas C. HARRIS, Hans J. BRIEGEL, Philip WALTHER, 2021. Experimental quantum speed-up in reinforcement learning agents. In: Nature. Springer Nature. 2021, 591(7849), pp. 229-233. ISSN 0028-0836. eISSN 1476-4687. Available under: doi: 10.1038/s41586-021-03242-7
BibTeX
@article{Saggio2021-03Exper-53255,
  year={2021},
  doi={10.1038/s41586-021-03242-7},
  title={Experimental quantum speed-up in reinforcement learning agents},
  number={7849},
  volume={591},
  issn={0028-0836},
  journal={Nature},
  pages={229--233},
  author={Saggio, Valeria and Asenbeck, Beate E. and Hamann, Arne and Strömberg, Teodor and Schiansky, Peter and Dunjko, Vedran and Friis, Nicolai and Harris, Nicholas C. and Briegel, Hans J. and Walther, Philip}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/53255">
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/40"/>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/40"/>
    <dc:contributor>Harris, Nicholas C.</dc:contributor>
    <dc:language>eng</dc:language>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-03-25T08:32:09Z</dc:date>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dc:creator>Saggio, Valeria</dc:creator>
    <dc:contributor>Hamann, Arne</dc:contributor>
    <dc:creator>Hamann, Arne</dc:creator>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-03-25T08:32:09Z</dcterms:available>
    <dc:contributor>Dunjko, Vedran</dc:contributor>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/53255"/>
    <dc:creator>Briegel, Hans J.</dc:creator>
    <dc:contributor>Briegel, Hans J.</dc:contributor>
    <dc:creator>Harris, Nicholas C.</dc:creator>
    <dc:contributor>Saggio, Valeria</dc:contributor>
    <dc:creator>Dunjko, Vedran</dc:creator>
    <dc:creator>Friis, Nicolai</dc:creator>
    <dc:creator>Asenbeck, Beate E.</dc:creator>
    <dc:creator>Schiansky, Peter</dc:creator>
    <dc:contributor>Asenbeck, Beate E.</dc:contributor>
    <dcterms:issued>2021-03</dcterms:issued>
    <dc:contributor>Strömberg, Teodor</dc:contributor>
    <dc:contributor>Schiansky, Peter</dc:contributor>
    <dc:creator>Strömberg, Teodor</dc:creator>
    <dc:creator>Walther, Philip</dc:creator>
    <dc:contributor>Walther, Philip</dc:contributor>
    <dcterms:title>Experimental quantum speed-up in reinforcement learning agents</dcterms:title>
    <dc:contributor>Friis, Nicolai</dc:contributor>
    <dcterms:abstract xml:lang="eng">As the field of artificial intelligence advances, the demand for algorithms that can learn quickly and efficiently increases. An important paradigm within artificial intelligence is reinforcement learning [1], where decision-making entities called agents interact with environments and learn by updating their behaviour on the basis of the obtained feedback. The crucial question for practical applications is how fast agents learn [2]. Although various studies have made use of quantum mechanics to speed up the agent’s decision-making process [3,4], a reduction in learning time has not yet been demonstrated. Here we present a reinforcement learning experiment in which the learning process of an agent is sped up by using a quantum communication channel with the environment. We further show that combining this scenario with classical communication enables the evaluation of this improvement and allows optimal control of the learning progress. We implement this learning protocol on a compact and fully tunable integrated nanophotonic processor. The device interfaces with telecommunication-wavelength photons and features a fast active-feedback mechanism, demonstrating the agent’s systematic quantum advantage in a setup that could readily be integrated within future large-scale quantum communication networks.</dcterms:abstract>
  </rdf:Description>
</rdf:RDF>
University bibliography
Peer reviewed
Yes