Publication:

Learning collective behavior in an experimental system of feedback-controlled microswimmers


Files

Loeffler-2-jvnrqdcpx49e1.pdf (12.3 MB, 184 downloads)

Date

2023

Open Access publication
Open Access Green

Publication type
Dissertation
Publication status
Published

Abstract

Collective behavior in groups is a recurring phenomenon in nature. It is present on vastly different length and time scales and can occur in small groups of a few individuals as well as in colonies of millions of participants. Over the past few decades, research efforts to better understand and model the wide variety of observed collective patterns have intensified. While modern computer simulations have aided this effort, experimental model systems of artificial active matter have at the same time become a compelling middle ground: they enable precise control over interactions between agents while providing the complexity of real environments, thus bridging the gap between nature and simulations. Similarly, different concepts for modeling the decision process of collectively acting agents have been developed. A common approach is to apply simple interaction rules based on so-called social forces, which produce rich collective behavior and can also lead to criticality in the observed dynamics. More recently, efforts have also begun to model the evolutionary process leading to such rules by employing techniques from multi-agent reinforcement learning.

In the experiments presented in this thesis, we apply both social interaction rules and reinforcement learning to an experimental model system of artificial light-activated microswimmers, whose speed and orientation we can individually manipulate through an external feedback control loop. First, a social interaction rule is applied to a group of active particles, which can lead to unordered swarming and rotationally ordered swirls, tunable by a single angular parameter. Continuously varying this parameter, we find a continuous transition between swarms and swirls, with a clear bifurcation point in rotational symmetry, which is spontaneously broken into either direction above the critical angle. Furthermore, we provide a minimal model for this behavior, which describes the observations well through simple symmetry arguments without assuming thermal equilibrium. We further verify the continuous nature of the transition by experimental measurements of a distinct hysteresis loop. At the same time, we are also able to recreate swirling motion by reinforcement learning, based on a reward function inspired by social forces, and discuss similarities and differences between the two approaches. Notably, in the case of reinforcement learning, the symmetry in rotational order is not broken spontaneously during motion but in an early stage of the learning process, making the broken symmetry a crucial part of the behavioral policy. Finally, we discuss a more general reinforcement learning scenario, where the reward for individuals is provided solely by feeding on a virtual food source. Despite the selfish nature of the task, collective behavior emerges in the group, including swirling motion as seen in the previous experiment.
We find that this collectivity is mainly driven by the complex interactions between agents, regarding information transfer as well as physical interactions between microswimmers in the experimental system, and can only partially be replicated by auxiliary simulations. The resulting policy is robust enough to provide stable collective motion even when applied to a previously unseen scenario in which food is completely absent.
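The swarm-to-swirl transition controlled by a single angular parameter can be illustrated with a toy agent-based sketch. This is a minimal illustration only, not the actual feedback-control protocol of the thesis: the specific "steer toward the center of mass, rotated by an angle alpha" rule, the function names, and all parameter values here are assumptions made for the example.

```python
import numpy as np

def step(pos, alpha, speed=1.0, dt=0.1):
    """One update of a toy angular steering rule (illustrative only).

    Each agent heads toward the group's center of mass, with the target
    direction rotated by the angle alpha -- the single tuning parameter.
    """
    com = pos.mean(axis=0)                    # center of mass of the group
    to_com = com - pos                        # vector from each agent to the COM
    ang = np.arctan2(to_com[:, 1], to_com[:, 0]) + alpha
    vel = speed * np.stack([np.cos(ang), np.sin(ang)], axis=1)
    return pos + vel * dt

rng = np.random.default_rng(0)
pos = rng.normal(size=(20, 2))                # 20 agents at random positions
for _ in range(500):
    pos = step(pos, alpha=1.2)                # small alpha: compact swarm;
                                              # larger alpha: swirl-like rotation
```

In this sketch the sign of alpha fixes the rotation direction by hand; in the experiment described above, by contrast, the rotational symmetry is broken spontaneously above the critical angle.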

Our results highlight the importance of model experiments both for social interaction rules and for reinforcement learning. Experiments come with a complexity that is hard to replicate in simulations but may be vital for the emergence of robust mechanisms in collective behavior. Considering and understanding these solutions is not only beneficial for better understanding trade-offs in natural systems, but is equally important when designing future artificial systems of autonomously acting agents, where the existence of a “reality gap” is well known and much discussed in the literature.

Subject area (DDC)
530 Physics

Keywords

active matter, reinforcement learning, active colloidal particles, collective behavior

Cite

ISO 690
LÖFFLER, Robert Clemens, 2023. Learning collective behavior in an experimental system of feedback-controlled microswimmers [Dissertation]. Konstanz: University of Konstanz
BibTex
@phdthesis{Loffler2023-08-22Learn-67644,
  year={2023},
  title={Learning collective behavior in an experimental system of feedback-controlled microswimmers},
  author={Löffler, Robert Clemens},
  address={Konstanz},
  school={Universität Konstanz}
}

Date of doctoral examination

June 30, 2023
Thesis note
Konstanz, Univ., Diss., 2023